Jekyll2022-12-30T09:11:25+00:00https://matheja.me/feed.xmlBen MathejaProduct Owner @porsche
| All things Cloud | Product Management | Agile Methods | posts are my own opinion
Ben Mathejaben@matheja.meHomelab - Part 1: Overview2020-11-07T08:31:00+00:002020-11-07T08:31:00+00:00https://matheja.me/2020/11/07/homelab-part1<p>Due to Covid-19 I took on the task to build something new and exciting.
It was time to rebuild and extend the services I’m hosting in my local network.
As a quick premise: Everything I’m running locally is only exposed to clients within the same network.
<!--more--></p>
<h2 id="status-quo">Status Quo</h2>
<p>Since I needed to run at least the Unifi Controller within my network to manage my Unifi devices, I previously used an old Raspberry Pi 2.
The RPi 2 mounted my Synology NAS via NFS, as I was aware of the file system issues that come with relying only on the internal SD card.
Still, the setup was painfully slow and not extendable.</p>
<h2 id="overview">Overview</h2>
<p>With the use of Containers and Orchestration Tools such as <code class="language-plaintext highlighter-rouge">docker-compose</code> you can bring up entire stacks within seconds and manage them in a painless way.
I procured a used <a href="https://sp.ts.fujitsu.com/dmsp/Publications/public/ds-py-tx120-s3.pdf">Fujitsu TX-120 S3</a> with a single Xeon E3-1220 (3.1 GHz), 8 GB RAM and 300 GB SAS drives for under 200€.
Then I installed Ubuntu 18.04 on it to use it as my primary docker-compose host.</p>
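<p>For illustration, such a stack can be as small as this; the image and port shown are assumptions for the sake of the example, not my exact setup:</p>

```yaml
# Hypothetical minimal docker-compose.yml; image and port are assumptions.
version: "3"
services:
  unifi:
    image: jacobalberty/unifi   # community Unifi controller image
    restart: unless-stopped
    ports:
      - "8443:8443"             # controller web UI
```

<p>A single <code>docker-compose up -d</code> brings the whole stack up in the background, and <code>docker-compose down</code> tears it down again.</p>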
<p>Sidenote: The TX-120 sits in the hallway of our flat and I’m a bit annoyed by the continuous noisy disk writes.
So I’ll opt to switch the SAS drives to 2.5” SSDs in the future.
Still, I’m really amazed by how compact the case is; you’ll find a spot for a server of that size. And as a bonus, it features two usable network interfaces out of the box.</p>
<p>Here is an overview of the current setup.</p>
<pre><code class="language-asci"> +--------------------------------------+
| TX120 S3 (io) |
| docker-compose |
| |
+-----------+ | +-----------+ +-------------+ |
| | | | | | | |
| Quaysi.de +---------------> Traefik +--------> Services | |
| | | | | | | |
+-----------+ | +-----------+ +-------------+ |
| |
+--------------------------------------+
</code></pre>
<p>Everything is accessible and exposed via HTTPS below the quaysi.de domain.
Each service has its own subdomain, e.g. unifi.quaysi.de.</p>
<h3 id="traefik-domain-and-certificates">Traefik, Domain and Certificates</h3>
<p><a href="https://traefik.io/">Traefik</a> is configured as the central edge router handling incoming requests and forwarding them to the services.
If you have ever set up local services, I assume you have run into issues with non-trusted certificates.
With the help of <a href="https://aws.amazon.com/route53/">Amazon Route 53</a>, <a href="https://traefik.io/">Traefik</a> and the LetsEncrypt resolver it’s possible to bypass that.</p>
<ul>
<li>Set public records in a hosted zone within Route 53 for your internal services, e.g. quaysi.de points to 192.168.1.9, 192.168.1.10 and 192.168.1.11</li>
<li>Configure Traefik to use the <a href="https://doc.traefik.io/traefik/user-guides/docker-compose/acme-dns/">DNS-Challenge</a> and provision AWS Credentials</li>
</ul>
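<p>As a rough sketch, the Traefik side of those two steps could look like this in docker-compose; the resolver name <code>myresolver</code>, e-mail address, region and volume paths are assumptions, not my actual configuration:</p>

```yaml
# Hypothetical Traefik service using the Route 53 DNS challenge.
# Resolver name, e-mail, region and paths are assumptions.
traefik:
  image: traefik:v2.2
  command:
    - "--providers.docker=true"
    - "--entrypoints.websecure.address=:443"
    - "--certificatesresolvers.myresolver.acme.dnschallenge=true"
    - "--certificatesresolvers.myresolver.acme.dnschallenge.provider=route53"
    - "--certificatesresolvers.myresolver.acme.email=admin@quaysi.de"
    - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
  environment:
    # Credentials for the Route 53 hosted zone (picked up by the ACME provider)
    - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - AWS_REGION=eu-central-1
  ports:
    - "443:443"
  volumes:
    - ./letsencrypt:/letsencrypt
    - /var/run/docker.sock:/var/run/docker.sock:ro
```

<p>Each service then just references the resolver via a <code>tls.certresolver=myresolver</code> label to receive its certificate.</p>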
<p>The effect: all your internal services receive a trusted LetsEncrypt certificate, and you can simply work around those annoying “untrusted connection” warnings, as everything is provisioned automatically.</p>
<p>This was the first part of the series about my homelab. As always, I’d love to hear your feedback.</p>Ben Mathejaben@matheja.meDue to Covid-19 I took on the task to build something new and exciting. It was time to rebuild and extend the services I’m hosting in my local network. As a quick premise: Everything I’m running locally is only exposed to clients within the same network.Stop Trying to Make Hard Work Easy2020-08-20T13:37:00+00:002020-08-20T13:37:00+00:00https://matheja.me/2020/08/20/stop-trying-to-make-hard-work-easy<p>Today I want to recommend an interesting article by Nir Eyal.
<!--more--></p>
<p><a href="https://superorganizers.substack.com/p/stop-trying-to-make-hard-work-easy">Stop Trying to Make Hard Work Easy by Nir Eyal</a></p>
<p>Nir Eyal highlights that the number one barrier to getting our work done is distraction. Quite interestingly for me, he elaborates on the opposite of distraction, which is not focus but traction.</p>
<p>In his view, the main challenge in remaining productive, i.e. gaining traction, is mastering our triggers. They may be internal (the discomfort we feel when pushed to focus on a given task without allowing ourselves any distraction) as well as external (disturbances like a message on our phone).</p>Ben Mathejaben@matheja.meToday I want to recommend an interesting article by Nir Eyal.Journey to Serverless - Migrated my Todoist Integration to Lambda2020-06-15T09:00:00+00:002020-06-15T09:00:00+00:00https://matheja.me/2020/06/15/transform-todoist-integration-to-serverless<p>I migrated my <a href="https://github.com/BenMatheja/todoist-serverless-lambda">Todoist Webhook Integration</a> from a self-hosted version to <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>.</p>
<p>Here is why!
<!--more--></p>
<h2 id="why">Why</h2>
<p>My old approach had a lot of shortcomings.</p>
<p>The way I developed the app and the way it ran in production were not the same.
There was no real CI/CD process, and the packaging of the app differed considerably from local development. This caused issues whenever changes had to be made.</p>
<p>The simple function consisted of too many moving parts, which introduced unnecessary complexity to the overall system.
I used an <em>apt-get installed</em> Nginx as a reverse proxy, <a href="https://github.com/benoitc/gunicorn">Gunicorn</a> with its own configuration files, and last but not least the Python app itself.</p>
<p>The durability of the app was not convincing. From a user’s perspective, performance got worse the longer the application had been running. This resulted in dropped events and made the service unreliable.
I remember someone talking about self-driving cars who said “if it doesn’t work in all circumstances, there is no use for it”. To be honest, the complexity of the <a href="https://github.com/BenMatheja/todoist-serverless-lambda">Todoist Webhook Integration</a> is trivial compared to the challenges of writing software for self-driving cars, but the argument still holds.</p>
<p>The handling of credentials was far from optimal: in the old app, they were simply inserted into a settings.py file on the machine.</p>
<h2 id="what-i-did">What I did</h2>
<p>I used both <a href="https://github.com/Miserlou/Zappa">Zappa</a> and <a href="https://www.serverless.com/">Serverless</a> to set up the AWS stack and configure the app to run properly on <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>. Serverless seemed more mature to me, which is why I’m still using it.</p>
<p>The integration runs in the free tier with no hassle. I configured <a href="https://developer.todoist.com/sync/v8/#webhooks">Todoist Webhooks</a> to fire whenever an item on my list is marked as <em>completed</em>.</p>
<p>I reduced the moving parts necessary to maintain: it’s just packaging the app, making sure it runs, deploying it, testing it on Lambda, and you’re done.</p>
<p>I fought a bit with Github Actions to set up CI/CD, but it now deploys to AWS whenever something is pushed to master. This is a real relief: whenever I commit changes to the repository, the app gets deployed in the same (working) way.
So you can say I’m pretty happy with the status quo of the app.</p>
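<p>Such a pipeline can be sketched roughly as follows; the workflow file name, secret names and tool versions here are assumptions, not the actual pipeline from the repository:</p>

```yaml
# Hypothetical .github/workflows/deploy.yml; secret names and versions are assumptions.
name: deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: "14"
      # Install the Serverless Framework and deploy with AWS credentials from repo secrets
      - run: npm install -g serverless
      - run: serverless deploy
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```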
<h2 id="future">Future</h2>
<ul>
<li>Let Github Actions run integration tests after deployment on dev (e.g. check whether the endpoint responds as expected, using newman or something else). If successful, stage to prod</li>
</ul>Ben Mathejaben@matheja.meI migrated my Todoist Webhook Integration from a self-hosted version to AWS Lambda. Here is why!Secure your Services with Traefik and Google oAuth2020-04-10T09:19:00+00:002020-04-10T09:19:00+00:00https://matheja.me/2020/04/10/secure-your-services-with-traefik-and-google-oauth<p>Hobby projects tend to grow, and so does the need for proper authentication.</p>
<p>With the use of Containers and Orchestration Tools such as <code class="language-plaintext highlighter-rouge">docker-compose</code> you can bring up entire ELK stacks within seconds.
Still, the setup will feature unprotected installations. Luckily there is a solution for Traefik, my edge router of choice. I want to show the key parts needed to get it working.
<!--more--></p>
<h2 id="setup">Setup</h2>
<p>The setup is quite easy. Imagine you want to have a couple of services running at your domain.</p>
<p>In this example it will be <em>farsity.de</em>.</p>
<p>I’m using Traefik 2.2 in the examples below. Be careful if you come across tutorials written for Traefik v1: they will simply not work, because Traefik changed its internal concepts (bye bye frontends, hello routers).</p>
<p>The layout is quite simple. Below <a href="http://farsity.de">farsity.de</a> we will find:</p>
<ul>
<li>A sample whoami service which I want to protect at https://api.farsity.de</li>
<li>An authentication container which redirects to Google for oAuth at https://auth.farsity.de</li>
</ul>
<p>After applying the configuration, the sample whoami service should no longer be reachable without proper authorization.</p>
<p>You will find the complete examples at <a href="https://github.com/BenMatheja/traefik-sandbox">BenMatheja/traefik-sandbox</a></p>
<h1 id="how-does-it-look-like-from-user-perspective">What does it look like from a user’s perspective?</h1>
<p>A GET at <a href="https://api.farsity.de">https://api.farsity.de</a> will check whether the user is already authenticated.</p>
<p>If not, the request is forwarded to our authentication proxy, which builds a redirect to obtain a token. The user does not notice any of the internal logic: they either access the service directly, or see a Google login first and then access the service.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">></span> GET / HTTP/2
<span class="o">></span> Host: api.farsity.de
<span class="o">></span> User-Agent: curl/7.64.1
<span class="o">></span> Accept: <span class="k">*</span>/<span class="k">*</span>
<span class="o">></span>
<span class="k">*</span> Connection state changed <span class="o">(</span>MAX_CONCURRENT_STREAMS <span class="o">==</span> 250<span class="o">)!</span>
< HTTP/2 307
< content-type: text/html<span class="p">;</span> <span class="nv">charset</span><span class="o">=</span>utf-8
< <span class="nb">date</span>: Fri, 10 Apr 2020 09:04:02 GMT
< location: https://accounts.google.com/o/oauth2/auth?client_id<span class="o">=</span>dtl2sgbj48q8it.apps.googleusercontent.com&redirect_uri<span class="o">=</span>https%3A%2F%2Fapi.farsity.de%2F_oauth&response_type<span class="o">=</span>code&scope<span class="o">=</span>https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&state<span class="o">=</span>24490f27856b1515a121ec763550347e%3Agoogle%3Ahttps%3A%2F%2Fapi.farsity.de%2F
< set-cookie: <span class="nv">_forward_auth_csrf</span><span class="o">=</span>24490fe<span class="p">;</span> <span class="nv">Path</span><span class="o">=</span>/<span class="p">;</span> <span class="nv">Domain</span><span class="o">=</span>api.farsity.de<span class="p">;</span> <span class="nv">Expires</span><span class="o">=</span>Fri, 10 Apr 2020 21:04:02 GMT<span class="p">;</span> HttpOnly<span class="p">;</span> Secure
< content-length: 450
<
<a <span class="nv">href</span><span class="o">=</span><span class="s2">"https://accounts.google.com/o/oauth2/auth?client_id=dtl2sgbj48q8it.apps.googleusercontent.com&redirect_uri=https%3A%2F%2Fapi.farsity.de%2F_oauth&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&state=24490f27856b1515a121ec763550347e%3Agoogle%3Ahttps%3A%2F%2Fapi.farsity.de%2F"</span><span class="o">></span>Temporary Redirect</a>.
</code></pre></div></div>
<h2 id="register-your-application-with-google">Register your application with Google</h2>
<ul>
<li>Create a new Project at <a href="https://console.developers.google.com/apis/credentials?project=my-traefik-oauth-proxy-273717">https://console.developers.google.com/apis/credentials</a></li>
<li>Create a new oAuth 2.0 client ID</li>
<li>Make sure to add all authorised redirect domains (in this case <a href="https://api.farsity.de/_oauth">https://api.farsity.de/_oauth</a>)</li>
</ul>
<h2 id="configuration-of-the-authentication-proxy">Configuration of the authentication proxy</h2>
<p>Below is the configuration for the traefik-forward-auth container.</p>
<p><a href="https://github.com/thomseddon/traefik-forward-auth">thomseddon/traefik-forward-auth</a></p>
<p>The configuration happens via environment variables.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">traefikforward</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">thomseddon/traefik-forward-auth</span>
<span class="na">container_name</span><span class="pi">:</span> <span class="s">traefikforward</span>
<span class="na">environment</span><span class="pi">:</span>
<span class="c1"># These Variables are injected via environment file</span>
<span class="c1">#- PROVIDERS_GOOGLE_CLIENT_ID=${GOOGLE_CLIENT_ID}</span>
      <span class="c1">#- PROVIDERS_GOOGLE_CLIENT_SECRET=${GOOGLE_CLIENT_SECRET}</span>
<span class="c1">#- SECRET=${SECRET}</span>
<span class="c1">#- INSECURE_COOKIE=true # Example assumes no https, do not use in production</span>
<span class="c1">#- WHITELIST=${WHITELIST}</span>
<span class="pi">-</span> <span class="s">DOMAIN=farsity.de</span>
<span class="pi">-</span> <span class="s">AUTH_HOST=auth.farsity.de</span>
<span class="pi">-</span> <span class="s">LOG_LEVEL=debug</span>
<span class="na">env_file</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">./traefik-auth.env</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.enable=true"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.services.traefikforward.loadbalancer.server.port=4181"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.routers.traefikforward.entrypoints=websecure"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.routers.traefikforward.tls.certresolver=myresolver"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.routers.traefikforward.rule=Host(`auth.farsity.de`)"</span>
</code></pre></div></div>
<p>Security-critical values are handled in a separate .env file.</p>
<p>To generate the secret, use <code class="language-plaintext highlighter-rouge">openssl rand -hex 16</code>; it is used to sign the cookie.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PROVIDERS_GOOGLE_CLIENT_ID=123.apps.googleusercontent.com
PROVIDERS_GOOGLE_CLIENT_SECRET=456
SECRET=something-random
WHITELIST=me@farsity.de,you@farsity.de
</code></pre></div></div>
<h2 id="configuration-of-a-to-be-secured-service">Configuration of a to-be-secured service</h2>
<p>Below is the configuration of a sample service which shall be secured by the aforementioned auth proxy.</p>
<p>What has been done here is quite generic, i.e. it will just as well work for Kibana, Grafana or other web-based applications.</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">whoamisecure</span><span class="pi">:</span>
<span class="na">image</span><span class="pi">:</span> <span class="s">containous/whoami</span>
<span class="na">labels</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.enable=true"</span>
<span class="c1"># Route which handles HTTPS Traffic</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.routers.whoamisecure.rule=Host(`api.farsity.de`)"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.routers.whoamisecure.entrypoints=websecure"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.routers.whoamisecure.tls.certresolver=myresolver"</span>
<span class="c1"># Apply Forward Auth to the Service </span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.routers.whoamisecure.middlewares=whoamisecure"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.middlewares.whoamisecure.forwardauth.address=http://traefikforward:4181"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.middlewares.whoamisecure.forwardauth.authResponseHeaders=X-Forwarded-User"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.middlewares.whoamisecure.forwardauth.authResponseHeaders=X-Auth-User,</span><span class="nv"> </span><span class="s">X-Secret"</span>
<span class="pi">-</span> <span class="s2">"</span><span class="s">traefik.http.middlewares.whoamisecure.forwardauth.trustForwardHeader=true"</span>
</code></pre></div></div>
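<p>Since middlewares declared via labels are shared within Traefik’s Docker provider, the same forward-auth middleware can be reused by other containers. A hypothetical Grafana service (the service name and host here are assumptions) would only need the router labels:</p>

```yaml
# Hypothetical second service protected by the same forward-auth middleware.
grafana:
  image: grafana/grafana
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.grafana.rule=Host(`grafana.farsity.de`)"
    - "traefik.http.routers.grafana.entrypoints=websecure"
    - "traefik.http.routers.grafana.tls.certresolver=myresolver"
    # Reference the middleware defined on the whoami container
    - "traefik.http.routers.grafana.middlewares=whoamisecure"
```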
<h2 id="read-on">Read on</h2>
<ul>
<li>An excellent walk-through on how to obtain the Google credentials. But be careful with the configurations, as they are written for Traefik v1.7
(<a href="https://www.smarthomebeginner.com/google-oauth-with-traefik-docker/">https://www.smarthomebeginner.com/google-oauth-with-traefik-docker/</a>).</li>
<li>Also worth reading; it features the same limitations as the article above (<a href="https://sysadmins.co.za/integrating-google-oauth-with-traefik/">https://sysadmins.co.za/integrating-google-oauth-with-traefik/</a>)</li>
</ul>Ben Mathejaben@matheja.meHobby projects tend to grow, and so does the need for proper authentication. With the use of Containers and Orchestration Tools such as docker-compose you can bring up entire ELK stacks within seconds. Still, the setup will feature unprotected installations. Luckily there is a solution for Traefik, my edge router of choice. I want to show the key parts needed to get it working.Getting Started with Minikube on WSL22020-04-08T08:46:00+00:002020-04-08T08:46:00+00:00https://matheja.me/2020/04/08/getting-started-with-minikube-on-wsl2<p>One nice Sunday morning, I wanted to get started with Kubernetes to learn the underlying concepts. I wanted to run it on my local machine to play around a bit, similar to what I’m already doing with docker-compose.
<!--more--></p>
<p>This was my starting point</p>
<ul>
<li>Windows 10 with WSL 2 enabled</li>
<li>Docker Desktop installed on Windows 10, exposed via 2375 without TLS</li>
</ul>
<p>I tried following the guide below
<a href="https://medium.com/@joaoh82/setting-up-kubernetes-on-wsl-to-work-with-minikube-on-windows-10-90dac3c72fa1">Setting up Kubernetes on WSL to work with Minikube on Windows 10</a></p>
<p>At that point, I created the minikube alias to use the Windows installation and was able to run minikube(.exe) start.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ben@ben-desktop ~ which minikube
/home/ben/minikube/minikube
ben@ben-desktop ~ minikube start
😄 minikube v1.8.2 on Microsoft Windows 10 Pro N 10.0.19041 Build 19041
✨ Automatically selected the docker driver
💾 Downloading preloaded images tarball for k8s v1.17.3 ...
> preloaded-images-k8s-v1-v1.17.3-docker-overlay2.tar.lz4: 499.26 MiB / 499
🔥 Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=6100MB (7964MB available) ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
</code></pre></div></div>
<h3 id="cannot-reach-kubernetes-cluster-from-wsl">Cannot reach Kubernetes Cluster from WSL</h3>
<p>Running it from my WSL Ubuntu distribution fails: the Kubernetes cluster cannot be reached.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>minikube kubectl get all
> kubectl.exe.sha256: 65 B / 65 B [----------------------] 100.00% ? p/s 0s
> kubectl.exe: 41.95 MiB / 41.95 MiB [------------] 100.00% 5.48 MiB p/s 8s
Unable to connect to the server: dial tcp 127.0.0.1:32768: connectex: No connection could be made because the target machine actively refused it.
</code></pre></div></div>
<p>I confirmed that the cluster indeed seemed to be faulty by running kubectl on Windows via Terminal.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PS C:\Users\ben> kubectl get all
Unable to connect to the server: dial tcp 127.0.0.1:32768: connectex: No connection could be made because the target mac
hine actively refused it.
PS C:\Users\ben>
</code></pre></div></div>
<p>Recreated the minikube cluster with minikube delete and minikube start from the Ubuntu distribution.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ben@ben-desktop ~ minikube start
😄 minikube v1.8.2 on Microsoft Windows 10 Pro N 10.0.19041 Build 19041
✨ Automatically selected the docker driver
🔥 Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=6100MB (9968MB available) ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
ben@ben-desktop ~ minikube kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22s
</code></pre></div></div>
<p>Problem: kubectl from the Ubuntu distribution is not able to access the minikube deployment.</p>
<p>The connection to localhost fails (127.0.0.1:32768).</p>
<p>Tried to inject the kubectl configuration as in</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pavel@MSI:~$ kubectl --kubeconfig /mnt/c/Users/Pavel/.kube/config cluster-info
</code></pre></div></div>
<p>Not working</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>✘ ben@ben-desktop ~ minikube kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:32771
KubeDNS is running at https://127.0.0.1:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ben@ben-desktop ~ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:32771 was refused - did you specify the right host or port?
</code></pre></div></div>
<h3 id="things-done-changed">Things done changed</h3>
<p>Apparently, since converting my Ubuntu distribution to WSL 2, things done changed.</p>
<p><a href="https://github.com/microsoft/WSL/issues/4321#issuecomment-573351391">WSL 2 docker client cannot reach Docker Desktop via tcp://0.0.0.0:2375 · Issue #4321 · microsoft/WSL</a></p>
<p>Don’t believe it and keep researching. Then I deleted all previous Docker installations in the Ubuntu WSL distribution and ran the following steps with good output.</p>
<p><a href="https://docs.docker.com/docker-for-windows/wsl-tech-preview/">Docker Desktop WSL 2 backend</a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PS C:\Users\ben> wsl -l -v
NAME STATE VERSION
* Ubuntu Running 2
docker-desktop-data Running 2
docker-desktop Running 2
</code></pre></div></div>
<p>Reinstalled</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install docker-ce-cli
</code></pre></div></div>
<p>Docker cannot connect to /var/run/docker.sock. How is this shit supposed to work?</p>
<p>How can the Ubuntu distribution “just access” the same docker engine as the one on powershell without configuration?</p>
<p>Looked on</p>
<p><a href="https://www.hanselman.com/blog/DockerDesktopForWSL2IntegratesWindows10AndLinuxEvenCloser.aspx">Docker Desktop for WSL 2 integrates Windows 10 and Linux even closer</a></p>
<p>Should all work out of the box just fine.</p>
<p>Doubt it, but still try whether a reinstall of Docker Desktop works.</p>
<p><a href="https://github.com/docker/for-win/issues/5268">[WSL2] docker CLI cannot connect to running docker engine · Issue #5268 · docker/for-win</a></p>
<p>Enabled the ‘Virtual Machine Platform’ optional component and made sure WSL was enabled beforehand.</p>
<p><a href="https://docs.microsoft.com/en-us/windows/wsl/wsl2-install">Install WSL 2</a></p>
<h3 id="did-a-reboot---ermahgerd-it-werks">Did a reboot - ERMAHGERD IT WERKS</h3>
<p><em>View from Ubuntu WSL2 Distribution</em></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ben@ben-desktop docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/logstash/logstash-oss 7.6.0 50251e88900f 6 weeks ago 804MB
ea_logstash latest 50251e88900f 6 weeks ago 804MB
ea_kibana latest 3ad6636ee22e 6 weeks ago 646MB
docker.elastic.co/kibana/kibana-oss 7.6.0 3ad6636ee22e 6 weeks ago 646MB
ea_elasticsearch latest 1d8bbe9f233d 6 weeks ago 690MB
docker.elastic.co/elasticsearch/elasticsearch-oss 7.6.0 1d8bbe9f233d 6 weeks ago 690MB
hello-world latest fce289e99eb9 14 months ago 1.84kB
</code></pre></div></div>
<p><em>View from Windows Terminal</em></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PS C:\Users\ben> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/logstash/logstash-oss 7.6.0 50251e88900f 6 weeks ago 804MB
ea_logstash latest 50251e88900f 6 weeks ago 804MB
ea_kibana latest 3ad6636ee22e 6 weeks ago 646MB
docker.elastic.co/kibana/kibana-oss 7.6.0 3ad636ee22e 6 weeks ago 646MB
docker.elastic.co/elasticsearch/elasticsearch-oss 7.6.0 1d8bbe9f233d 6 weeks ago 690MB
ea_elasticsearch latest 1d8bbe9f233d 6 weeks ago 690MB
hello-world latest fce289e99eb9 14 months ago 1.84kB
PS C:\Users\ben>
</code></pre></div></div>
<h1 id="back-to-the-initial-plan">Back to the initial plan</h1>
<p>Do a small tutorial of Kubernetes:</p>
<p><a href="https://kubernetes.io/docs/tutorials/hello-minikube/">Hello Minikube</a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>minikube start
</code></pre></div></div>
<p>fails</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>minikube delete && minikube start
</code></pre></div></div>
<p>works</p>
<p>Mother of god, kubectl is also giving me the cluster-info:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ben@ben-desktop ~ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:32771
KubeDNS is running at https://127.0.0.1:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre></div></div>
<p>Cannot access the service via its IP; I suspect the minikube.exe installation is causing the issue.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ben@ben-desktop ~ minikube service hello-node
|-----------|------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------|-------------|-------------------------|
| default | hello-node | | http://172.17.0.2:30926 |
|-----------|------------|-------------|-------------------------|
🎉 Opening service default/hello-node in default browser...
"\\wsl$\Ubuntu\home\ben"
CMD.EXE wurde mit dem oben angegebenen Pfad als aktuellem Verzeichnis gestartet.
UNC-Pfade werden nicht unterstützt.
Stattdessen wird das Windows-Verzeichnis als aktuelles Verzeichnis gesetzt.
</code></pre></div></div>
<p>Removed the Windows Minikube installation and installed Minikube within the WSL distro using</p>
<p><a href="https://kubernetes.io/de/docs/tasks/tools/install-minikube/">Installation von Minikube</a></p>
<p>Minikube is really the only thing which heals itself when errors occur.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ben@ben-desktop ~ minikube start
😄 minikube v1.8.2 on Ubuntu 18.04
✨ Automatically selected the docker driver
💾 Downloading preloaded images tarball for k8s v1.17.3 ...
> preloaded-images-k8s-v1-v1.17.3-docker-overlay2.tar.lz4: 499.26 MiB / 499
🔥 Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=4700MB (19124MB available) ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🚀 Launching Kubernetes ...
💣 Error starting cluster: running cmd: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config
/var/tmp/minikube/kubeadm.yaml": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
stdout:
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority[certs] Using existing apiserver certificate and key on disk
stderr:
W0322 11:26:27.055133 1180 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0322 11:26:27.055179 1180 validation.go:28] Cannot validate kubelet config - no validator is available
error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: crypto/rsa: verification error
To see the stack trace of this error execute with --v=5 or higher
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
✘ ben@ben-desktop ~ minikube delete
❗ Unable to get the status of the minikube cluster.
🔥 Removing /home/ben/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
ben@ben-desktop ~ minikube start
😄 minikube v1.8.2 on Ubuntu 18.04
✨ Automatically selected the docker driver
🔥 Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=4700MB (19124MB available) ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
</code></pre></div></div>
<p>Trying to expose a service works but the service cannot be accessed (timeout)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ben@ben-desktop ~ minikube service hello-node
|-----------|------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------|-------------|-------------------------|
| default | hello-node | | http://172.17.0.2:32663 |
|-----------|------------|-------------|-------------------------|
🎉 Opening service default/hello-node in default browser...
💣 open url failed: http://172.17.0.2:32663: exec: "xdg-open": executable file not found in $PATH
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
✘ ben@ben-desktop ~ curl http://172.17.0.2:32663/ -v
* Trying 172.17.0.2...
* TCP_NODELAY set
* connect to 172.17.0.2 port 32663 failed: Connection timed out
* Failed to connect to 172.17.0.2 port 32663: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 172.17.0.2 port 32663: Connection timed out
✘ ben@ben-desktop ~
</code></pre></div></div>
<p>Apparently one cannot expose a deployment without minikube tunnel. The tunnel does not seem to be built up correctly, which means I cannot finish the tutorial.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Status:
machine: minikube
pid: 6483
route: 10.96.0.0/12 -> 172.17.0.2
minikube: Running
services: []
errors:
minikube: no errors
router: error adding Route: Error: Nexthop has invalid gateway.
, 2
loadbalancer emulator: no errors
</code></pre></div></div>
<p>I stopped at this point. Even though there are some nice things to do, e.g. setting up deployments via kubectl and seeing how they are applied on our minikube cluster, I cannot expose services, which is a main reason for running Kubernetes in the first place.
I’ll be back :).</p>Ben Mathejaben@matheja.meOne nice Sunday morning, I wanted to get started with Kubernetes to learn the underlying concepts. I wanted to run it on my local machine to play a bit around similar to what i’m already doing with docker-compose.Remote Work - Challenges ahead2020-03-20T11:40:00+00:002020-03-20T11:40:00+00:00https://matheja.me/2020/03/20/remote-work-new-challenges<p>Due to the COVID-19 spread many companies have to transform their working reality towards a fully remote approach.
Wherever possible, companies try to maintain their operations while protecting their workforce.
Before the spread, the company I’m working at allowed a part of their workforce to work remotely for one or two days a week.
The spread completely changed that.
<!--more--></p>
<p>To protect the workforce, everyone capable of working remotely is now obliged to do so.
We, the modern knowledge workers, are not used to this new situation: being fully remote workers in a remote-team setting.</p>
<h4 id="missing-opportunities-for-informal-communication">Missing opportunities for informal communication</h4>
<p>Everyone knows occasional “coffee breaks” or informal chatter which occurs naturally in on-site settings.
Talking to your colleagues is a vital part of maintaining relationships within a team.
But in a distributed-team setting those opportunities diminish: the amount of informal communication decreases if it is not actively stimulated.
If nothing substitutes for informal communication, people will feel lonely at some point.</p>
<h4 id="early-hours-are-eroded">Early hours are eroded</h4>
<p>A fair share of my colleagues visit the office early to get “stuff done”. Within these early hours individuals have a lower chance of being interrupted by calls, chats or meetings, and people actively seize that opportunity.
But if everyone is obliged to work remotely, those who used to commute will now start working right after finishing their morning routine.</p>
<p>What happens to the “early hours” with no interruptions?
They are eroded, as everyone now starts their workday at the same time.</p>
<h4 id="managing-the-continous-buzz-of-notifications">Managing the continuous buzz of notifications</h4>
<p>Work flows differently in remote-team settings. To get things done, we depend heavily on communication and collaboration tools such as Slack, Teams, Jitsi or Skype. Roughly speaking, an individual working in a remote team encounters two work settings.</p>
<p>“Deep work” comprises settings in which the individual has to engage a good amount of their cognitive capabilities to successfully complete a (mostly complex) task, for example implementing an algorithm to solve a given problem.
Individuals in deep work settings are vulnerable to interruptions, as those cause unwanted context switches.
Think of a developer in flow being asked about the progress of task X: the resulting context switch makes it harder to continue the work they originally started.</p>
<p>“Shallow work” covers settings where the individual does not need to engage their complete cognitive capabilities, such as attending meetings. These settings require individuals to manage their attention and be present whenever necessary. Interruptions may occur, but the individual is far less vulnerable to them: they are expected in shallow work settings and can be managed quite easily.</p>
<p>Managing one’s attention becomes key in this unfamiliar situation for remote workers.
The necessary collaboration tools produce a constant buzz of notifications to be consumed.
Every individual produces new pieces of information, tasks and updates, which turn into a never-ending stream of notifications.</p>
<p>The individual has to be aware of managing their attention and apply strategies to keep and maintain focus.</p>
<h3 id="takeaways">Takeaways</h3>
<ul>
<li>Bridge the distance to your colleagues by using rich communication channels. Use video wherever possible in remote settings.</li>
<li>Facilitate informal communication within your teams. Create opportunities to let informal communication happen. You could organize virtual coffee breaks, create offtopic chatrooms or play icebreaker games with your colleagues on occasion.</li>
<li>Be aware that collaboration tools facilitate interruptions. Balance your usage based on the necessary work setting to complete a given task. Being offline in Teams to remain productive is not a crime.</li>
<li>Allow asynchronous work to happen. Use defined sync points within your team, e.g. “the daily”, and avoid asking for status updates where not necessary.</li>
</ul>
<h3 id="worth-to-read">Worth a read</h3>
<ul>
<li><a href="https://doist.com/blog/asynchronous-communication/">Doist - A fully remote company on asynchronous communication</a></li>
</ul>Ben Mathejaben@matheja.meDue to the COVID-19 spread many companies have to transform their working reality towards a fully remote approach. Wherever possible, companies try to maintain their operations while protecting their workforce. Before the spread, the company I’m working at allowed a part of their workforce to work remotely for one or two days a week. The spread completely changed that.Showcase - Build your own Todoist Integration2020-03-14T08:52:00+00:002020-03-14T08:52:00+00:00https://matheja.me/2020/03/14/my-todoist-integration<p>When I started working at my current employer, I noticed an inconvenience every day.
Keeping track of my time of arrival and calculating the latest possible time to leave the office without getting into trouble for being present too long. This “trouble” actually protects employees, which is quite an asset in Germany.
<!--more--></p>
<p>But keeping track of such things seems like unnecessary load for your brain. In the “Getting Things Done” methodology, you should try to put anything that you “have to take care of” into a trusted system.
In theory, your brain then lets go of the task and you will not feel overburdened by the number of open things to do.</p>
<h2 id="what">What</h2>
<p>Being a heavy Todoist user, I started looking for ways my “trusted system” could do the “grunt work” of counting hours and reminding me.
Whenever a task named “Clock in” is completed, Todoist should create another task for that day to clock out at a specific time: the latest possible clock-out that keeps me out of trouble.</p>
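<p>The grunt work itself is a one-liner. A minimal sketch of the computation (the 10-hour cap reflects the German working time limit mentioned above; the function name is mine):</p>

```python
from datetime import datetime, timedelta

# Assumed maximum daily working time (the German 10-hour limit).
MAX_WORKING_HOURS = 10

def latest_clock_out(clock_in: datetime) -> datetime:
    """Return the latest time the 'Clock out' task should be due,
    given when the 'Clock in' task was completed."""
    return clock_in + timedelta(hours=MAX_WORKING_HOURS)

print(latest_clock_out(datetime(2020, 3, 14, 8, 30)))  # 2020-03-14 18:30:00
```

<p>Everything else is plumbing: noticing the completed “Clock in” task and creating the “Clock out” task with that due time.</p>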
<p>I started looking for ways and found a few integration platforms offering a variety of use cases.</p>
<h3 id="zapier">Zapier</h3>
<p><a href="https://zapier.com/apps/todoist/integrations">Zapier</a> featured a lot of integrations out of the box.
But it wasn’t an option, as the free plan featured a 15-minute update interval.</p>
<p>I wasn’t able to find out whether the update interval would be exactly 15 minutes. Either way, waiting up to 15 minutes for the event to be processed by <a href="https://zapier.com/apps/todoist/integrations">Zapier</a> seemed like a showstopper: the “entering the office” event had to be processed in near real time to produce accurate results.</p>
<h3 id="iftt">IFTTT</h3>
<p>I looked at <a href="https://ifttt.com/todoist">IFTTT</a> and it immediately seemed promising.
The platform is built around “if X happened in Todoist, then do Y”.</p>
<p>I looked through the available integrations, but the use case I was longing for was missing. Back then the pricing was different: a premium plan was necessary to run such an integration.
And every time someone asks you to pay for something, you get creative about avoiding those costs.</p>
<h3 id="run-your-own">Run your Own</h3>
<p>I still had a reserved instance running Linux at netcup which was hardly doing any intensive work, so any integration could run on my own machine.
While looking for Todoist integrations, I found out that the app itself already offers <a href="https://developer.todoist.com/sync/v8/#webhooks">Webhooks</a> based on events you can subscribe to, e.g. <code class="language-plaintext highlighter-rouge">item:completed</code>. Shouldn’t be that hard to build it on your own.</p>
<h2 id="how">How</h2>
<h3 id="python-flask-app-running-on-your-own-machine">Python Flask-App running on your own machine</h3>
<p>At that time I was working on a Python <a href="https://github.com/pallets/flask">Flask</a> backend to receive performance events via a REST API. If you have never tried <a href="https://github.com/pallets/flask">Flask</a>, give it a try. It is amazing how concisely and easily you can create APIs.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">@</span><span class="n">app</span><span class="p">.</span><span class="n">route</span><span class="p">(</span><span class="s">'/todoist/events/v1/items'</span><span class="p">,</span> <span class="n">methods</span><span class="o">=</span><span class="p">[</span><span class="s">'POST'</span><span class="p">])</span>
<span class="k">def</span> <span class="nf">handle_event</span><span class="p">():</span>
<span class="n">begin_time</span> <span class="o">=</span> <span class="n">datetime</span><span class="p">.</span><span class="n">datetime</span><span class="p">.</span><span class="n">now</span><span class="p">()</span>
<span class="n">event_id</span> <span class="o">=</span> <span class="n">request</span><span class="p">.</span><span class="n">headers</span><span class="p">.</span><span class="n">get</span><span class="p">(</span><span class="s">'X-Todoist-Delivery-ID'</span><span class="p">)</span>
<span class="c1"># Check if user-agent matches to todoist webhooks
</span> <span class="k">if</span> <span class="n">request</span><span class="p">.</span><span class="n">headers</span><span class="p">.</span><span class="n">get</span><span class="p">(</span><span class="s">'USER-AGENT'</span><span class="p">)</span> <span class="o">==</span> <span class="s">'Todoist-Webhooks'</span><span class="p">:</span>
</code></pre></div></div>
<p>I started building my own integration <a href="https://github.com/BenMatheja/todoist-flask">todoist-flask</a>. A small service, running on my own machine.</p>
<p><img src="/assets/todoist-flask-overview.jpg" alt="Todoist Flask Overview" title="Todoist Flask Overview]" /></p>
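<p>One thing the excerpt above only hints at is validating that a request really comes from Todoist. Checking the user agent alone is easy to spoof; as far as I recall, Todoist also signs the raw payload with your app’s client secret and sends the base64-encoded HMAC-SHA256 in an <code class="language-plaintext highlighter-rouge">X-Todoist-Hmac-SHA256</code> header. A framework-independent sketch of that check (verify the details against the current webhook docs before relying on it):</p>

```python
import base64
import hashlib
import hmac

def valid_signature(payload: bytes, client_secret: str, header_value: str) -> bool:
    """Compare the webhook's signature header against an HMAC-SHA256 of the
    raw request body, computed with the app's client secret."""
    digest = hmac.new(client_secret.encode(), payload, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    # Constant-time comparison to avoid leaking the expected value.
    return hmac.compare_digest(expected, header_value)

body = b'{"event_name": "item:completed"}'
good = base64.b64encode(hmac.new(b"secret", body, hashlib.sha256).digest()).decode()
print(valid_signature(body, "secret", good))     # True
print(valid_signature(body, "secret", "bogus"))  # False
```

<p>In Flask this check runs first thing in the handler, using the raw request body before any JSON parsing.</p>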
<h4 id="learning-find-a-smart-way-to-contain-your-application">Learning: Find a smart way to contain your application</h4>
<p>In retrospect, I probably spent two-thirds of the time not on business logic.
Rather, I spent a lot of time building a poor man’s CI/CD, i.e. a way to deploy a version of the app to my instance.</p>
<p>Even more of a hassle was keeping the app running while being able to see what is happening right now. I developed the app on my machine and ran it with the built-in Python HTTP server.
That worked like a charm, and I could even verify that the webhook integration was working using <a href="https://ngrok.com/">ngrok</a>.
The service creates a tunnel to proxy requests from the web to your local machine.</p>
<p>Bringing the app to its target environment was the hardest part.
I planned to use <a href="https://gunicorn.org/">Gunicorn</a> instead of the built-in Python HTTP server. <a href="https://gunicorn.org/">Gunicorn</a> expects a different entry point into the app, and once it was running the app, the log output changed: I completely lost visibility into how calls were handled within the application. Some research and configuration afterwards fixed that issue.</p>
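<p>The fix will depend on your setup, but routing Gunicorn’s access and error logs to stdout is one way to get visibility back. A sketch of such an invocation (<code class="language-plaintext highlighter-rouge">app:app</code> is a placeholder for your actual module and application object):</p>

```shell
# Serve the Flask app with Gunicorn, writing access and error logs
# to stdout ("-") instead of files; "app:app" is module:callable.
gunicorn --bind 0.0.0.0:8000 \
         --access-logfile - \
         --error-logfile - \
         app:app
```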
<p>I don’t want to say it’s not possible to have it running smoothly with the aforementioned setup; my (limited) experience with Python in general and <a href="https://gunicorn.org/">Gunicorn</a> in particular made it a real hassle.
The learning here: find a way to package your application so that the same platform and concepts apply already during local development. Nowadays I would probably go for a Docker container, package everything and then extensively make sure that the container works as expected.</p>
<h2 id="next">Next</h2>
<p>The service has been running for a couple of months and there are pain points to address.</p>
<ul>
<li>Todoist deprecated the version of the Sync API I was using for my integration. My poor man’s CI/CD made the upcoming changes a real nightmare to deploy, test and run.</li>
<li>The service was running around the clock for exactly one occurrence per workday: arriving at the office early. That was the only moment the service really needed to be running.</li>
<li>After running for some time, the service seemed to become unresponsive. Clock-in events were handled (if at all) minutes later, resulting in unreliable clock-out tasks.</li>
</ul>
<p><img src="/assets/serverless-all-the-things.jpg" alt="SERVERLESS All the Things" title="Serverless" /></p>
<p>The integration is the perfect use case for a serverless workload.</p>
<ul>
<li>The function to create the clock-out task is really small</li>
<li>The footprint of the Python app with regards to startup time and memory consumption is small (no heavy JVM / Spring Boot startup)</li>
<li>The function is only used occasionally.</li>
</ul>Ben Mathejaben@matheja.meWhen I started working at my current employer, I noticed an inconvenience every day. Keeping track of the time of arrival and calculating the latest possible time to leave the office without getting into trouble for being present too long. This “trouble” actually protects employees, which is quite an asset in Germany.How I’m using Todoist2019-01-03T00:00:00+00:002019-01-03T00:00:00+00:00https://matheja.me/journal/2019/01/03/todoist-markdown<p>My journey with <a href="https://todoist.com">Todoist</a> began in 2014 during my placement at <a href="https://zweitag.de">Zweitag</a>.
During a meeting with my former manager Julian, I caught a short glance at his screen and spotted the infamous red icon. Knowing that Julian was really passionate about efficient ways of working, I had to research this unknown app.
<!--more--></p>
<p>Later, the underlying concept, the <a href="https://gettingthingsdone.com/">Getting Things Done methodology</a>, tremendously helped me to stay focused while completing my Master’s studies. If you haven’t already read the <a href="https://gettingthingsdone.com/getting-things-done-the-art-of-stress-free-productivity/">book</a>, I do recommend it. Don’t follow it to the letter, but get inspired by the concepts discussed.</p>
<p>I’ve compiled some “hacks” on this page which make Todoist my productivity tool of choice.</p>
<h2 id="use-shared-projects">Use Shared Projects</h2>
<p>I’m using a shared project as a shopping list with my spouse. It’s dead simple and works with her free plan.</p>
<h2 id="use-not-ending-tasks-with-bold-typeface">Use non-completable tasks with a bold typeface</h2>
<p>To get a better overview within projects, you can define tasks which cannot be completed, serving as a kind of section header.</p>
<p>Both of the following conventions produce a non-completable task with a bold typeface:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>* **Gelbe Säcke** 🗑
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>* !!Work!! 👔
</code></pre></div></div>
<h2 id="using-unicode-emojis">Using Unicode Emojis</h2>
<p>Todoist supports <a href="https://unicode.org/emoji/charts/full-emoji-list.html">Unicode emojis</a> within tasks, projects and labels.</p>
<h2 id="use-webhook-integration-for-custom-workflows">Use Webhook Integration for Custom Workflows</h2>
<p>I’m using <a href="https://developer.todoist.com/sync/v7/#webhooks">Todoist Webhooks</a> to feed completed tasks into AWS Lambda, which inspects the completed task and triggers custom actions such as creating a new task.
The integration is pretty similar to what <a href="https://zapier.com">Zapier</a> offers in its premium plan, except it costs me 0€ as it runs within the AWS free tier.</p>
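<p>For completeness, a sketch of what the Lambda side of such an integration can look like. The field names (<code class="language-plaintext highlighter-rouge">event_name</code>, <code class="language-plaintext highlighter-rouge">event_data</code>) follow my reading of the Sync API webhook payload, the event shape assumes an API Gateway proxy integration, and the actual Todoist API call is stubbed out:</p>

```python
import json
from datetime import datetime, timedelta

MAX_WORKING_HOURS = 10  # assumed German daily working-time limit

def create_task(content, due):
    # Stub: the real function performs one call to the Todoist API here.
    print(f"would create task {content!r} due {due:%Y-%m-%d %H:%M}")

def handler(event, context):
    """Lambda entry point: react only to a completed 'Clock in' task."""
    payload = json.loads(event["body"])
    if payload.get("event_name") != "item:completed":
        return {"statusCode": 200, "body": "ignored"}
    if payload.get("event_data", {}).get("content") != "Clock in":
        return {"statusCode": 200, "body": "ignored"}
    due = datetime.utcnow() + timedelta(hours=MAX_WORKING_HOURS)
    create_task("Clock out", due)
    return {"statusCode": 200, "body": "clock-out task created"}
```

<p>Returning 200 even for ignored events keeps Todoist from retrying deliveries that simply don’t match the workflow.</p>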
<p>I’m currently using it to remind me before reaching the 10-hour working time limit in Germany.</p>
<p>The function is based <a href="https://github.com/BenMatheja/todoist-flask">on a previous project</a> which did the same as a standalone Python <a href="http://flask.pocoo.org/">Flask</a> application.</p>
<h2 id="use-inbox-for-ideas">Use Inbox for ideas</h2>
<ul>
<li><a href="https://www.youtube.com/watch?v=CKjIJYCfBJA&feature=youtu.be">Use your Inbox to store Ideas</a></li>
</ul>
<h2 id="see-also">See Also</h2>
<ul>
<li><a href="https://get.todoist.help/hc/en-us/articles/205195102-Text-Formatting-">Todoist Help on Text Formatting</a></li>
<li><a href="https://hairofthedogblog.com/2018/07/using-todoist-photography-workflow/">Usage of Templates in Todoist</a></li>
<li><a href="https://blog.todoist.com/user-stories/systemist-personal-workflow/">Essentials of a Productivity System</a></li>
<li><a href="http://www.43folders.com/izero">Related Concept: Inbox Zero</a></li>
</ul>Ben MathejaMy journey with Todoist began in 2014 during my placement at Zweitag. During a meeting with my former manager Julian, I did a short glance on his screen spotting the infamous red icon. Knowing that Julian was really passionate on efficient working approaches I had to do research of the unknown app.