Blog of Garrit Franke, a generalist DevOps Engineer

About Container Interfaces

2025-03-25 08:00:00

There are a couple of interfaces that container orchestration systems (like Kubernetes) implement to expose certain behavior to their container workloads. I will only be talking about Kubernetes in this post, since it's the orchestrator I'm most comfortable with, but some of these interfaces are implemented by other orchestrators (like HashiCorp Nomad) as well, which makes them cross-platform.

Container Storage Interface (CSI)

Storage behavior used to be built directly into Kubernetes. The Container Storage Interface (CSI) defines a unified interface for managing storage volumes, regardless of the orchestrator (as long as it implements the CSI). This makes it much easier for third-party storage providers to expose storage to Kubernetes: if a storage provider implements this interface, orchestrators can use it to provision volumes for containers.

A full list of CSI drivers can be found here.
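To make the shape of the interface more concrete, here's a rough Python sketch of the volume lifecycle a CSI driver implements. The real CSI is a set of gRPC services (Identity, Controller, Node) defined in protobuf; the RPC names below (CreateVolume, NodeStageVolume, NodePublishVolume) are real CSI calls, but the signatures and the in-memory "driver" are simplified stand-ins, not actual driver code.

```python
from typing import Protocol


class CSIDriver(Protocol):
    """Simplified sketch of a CSI driver. The real interface is gRPC;
    only the RPC names here mirror the actual spec."""

    def CreateVolume(self, name: str, capacity_bytes: int) -> str: ...
    def NodeStageVolume(self, volume_id: str, staging_path: str) -> None: ...
    def NodePublishVolume(self, volume_id: str, target_path: str) -> None: ...


class FakeDriver:
    """In-memory stand-in that just records the provisioning steps."""

    def __init__(self) -> None:
        self.volumes: dict[str, dict] = {}

    def CreateVolume(self, name: str, capacity_bytes: int) -> str:
        volume_id = f"vol-{name}"
        self.volumes[volume_id] = {"capacity": capacity_bytes, "state": "created"}
        return volume_id

    def NodeStageVolume(self, volume_id: str, staging_path: str) -> None:
        # A real driver would format/mount the device on the node here.
        self.volumes[volume_id]["state"] = "staged"

    def NodePublishVolume(self, volume_id: str, target_path: str) -> None:
        # A real driver would bind-mount the staged volume into the pod.
        self.volumes[volume_id]["state"] = "published"


# The orchestrator drives the same sequence against any conforming driver:
driver: CSIDriver = FakeDriver()
vol = driver.CreateVolume("data", 10 * 1024**3)            # provision
driver.NodeStageVolume(vol, "/var/lib/kubelet/stage")       # mount on the node
driver.NodePublishVolume(vol, "/var/lib/kubelet/pods/x/volume")  # mount into the pod
```

Because the orchestrator only speaks this contract, swapping one storage backend for another doesn't require any changes to Kubernetes itself.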

Container Runtime Interface (CRI)

The Container Runtime Interface (CRI) is an API that allows Kubernetes to use different container runtimes without needing to recompile the entire Kubernetes codebase. Before CRI, Kubernetes had a direct integration with Docker, making it difficult to use alternative container runtimes.

CRI defines the API between Kubernetes components (specifically kubelet) and container runtimes. This abstraction allows Kubernetes to support multiple container runtimes simultaneously, giving users the flexibility to choose the runtime that best fits their needs. Some popular container runtimes that implement the CRI include:

  • containerd - The industry-standard container runtime that powers Docker and is maintained by the CNCF
  • CRI-O - A lightweight runtime specifically designed for Kubernetes
  • Kata Containers - A secure container runtime that uses hardware virtualization for stronger isolation

With CRI, switching between different runtimes becomes more straightforward, allowing operators to optimize for security, performance, or compatibility based on their specific requirements.
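Here's a rough Python sketch of the runtime side of that contract. The real CRI is a gRPC API defined in protobuf; the RPC names below (RunPodSandbox, CreateContainer, StartContainer) are real CRI calls, but the signatures and the in-memory "runtime" are simplified stand-ins, not actual kubelet code.

```python
from typing import Protocol


class RuntimeService(Protocol):
    """Sketch of the CRI RuntimeService contract. Method names mirror
    actual CRI RPCs; everything else is simplified for illustration."""

    def RunPodSandbox(self, config: dict) -> str: ...
    def CreateContainer(self, sandbox_id: str, config: dict) -> str: ...
    def StartContainer(self, container_id: str) -> None: ...


class FakeRuntime:
    """Toy in-memory runtime showing the call order the kubelet follows."""

    def __init__(self) -> None:
        self.state: dict[str, str] = {}

    def RunPodSandbox(self, config: dict) -> str:
        # A real runtime would set up the pod's shared namespaces here.
        sandbox_id = f"sandbox-{config['name']}"
        self.state[sandbox_id] = "ready"
        return sandbox_id

    def CreateContainer(self, sandbox_id: str, config: dict) -> str:
        container_id = f"{sandbox_id}/{config['name']}"
        self.state[container_id] = "created"
        return container_id

    def StartContainer(self, container_id: str) -> None:
        self.state[container_id] = "running"


# The kubelet drives the same sequence for every pod it schedules:
runtime: RuntimeService = FakeRuntime()
sandbox = runtime.RunPodSandbox({"name": "web"})
container = runtime.CreateContainer(sandbox, {"name": "nginx"})
runtime.StartContainer(container)
```

Any runtime that answers these calls (containerd, CRI-O, Kata Containers) can slot in under the kubelet without Kubernetes itself changing.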

Container Network Interface (CNI)

The Container Network Interface (CNI) defines a standard for configuring network interfaces for Linux containers. Similar to CSI and CRI, the CNI was created to decouple Kubernetes from specific networking implementations, allowing for a pluggable architecture.

CNI plugins are responsible for allocating IP addresses to pods and ensuring proper network connectivity between pods, nodes, and external networks. They implement features like network policies, load balancing, and network security. Some popular CNI plugins include:

  • Calico - Known for its performance, flexibility, and strong network policy support
  • Cilium - Uses eBPF for high-performance networking and security
  • Flannel - Simple overlay network focused on ease of use
  • AWS VPC CNI - Integrates pods directly with Amazon VPC networking

Each CNI plugin has its strengths and is suitable for different use cases. For example, Calico excels at enforcing network policies, Cilium is optimized for performance and observability, while Flannel is valued for its simplicity.
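The plugin mechanism underneath is surprisingly simple: a CNI plugin is just an executable that the runtime invokes with the operation in the CNI_COMMAND environment variable (ADD, DEL, CHECK, VERSION) and the network configuration as JSON on stdin, and that prints a JSON result to stdout. Here's a toy Python sketch of the ADD operation; the invocation protocol is the real CNI mechanism, but the plugin logic and the hard-coded address below are illustrative stand-ins for what a real plugin and its IPAM would do.

```python
import json


def cni_add(netconf: dict) -> dict:
    """Handle CNI_COMMAND=ADD: wire the container up and report its IP.

    A real plugin would create a veth pair, move one end into the
    container's network namespace, and ask an IPAM plugin for an
    address. The values below are hard-coded for illustration.
    """
    return {
        "cniVersion": netconf.get("cniVersion", "1.0.0"),
        "interfaces": [{"name": "eth0"}],
        "ips": [{"address": "10.244.0.10/24", "gateway": "10.244.0.1"}],
    }


# In a real plugin, the netconf would arrive as JSON on stdin and the
# result would be printed to stdout for the container runtime to parse:
result = cni_add({"cniVersion": "1.0.0", "name": "mynet", "type": "toy-plugin"})
print(json.dumps(result, indent=2))
```

Because the contract is just "executable, env var, JSON in, JSON out", projects like Calico and Cilium can implement wildly different networking under the hood while staying interchangeable from the orchestrator's point of view.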

Wrapping Up

One thing I've always admired about Kubernetes is its pluggable architecture. These standardized interfaces (CSI, CRI, and CNI) showcase how well-designed the system really is. Instead of building everything into the core, the Kubernetes team made the smart decision to create extension points that allow the community to innovate without touching the core codebase.

The great news? You don't have to swap out all these components or even understand them deeply to use Kubernetes effectively. While the array of options might seem daunting at first glance, most Kubernetes distributions (like EKS, GKE, AKS, or Rancher) come with sane defaults that work well out of the box. They've already made sensible choices about which storage, runtime, and networking components to include.

This pluggability is what makes Kubernetes so powerful for those who need it. Need a specific storage solution? Plug in a CSI driver. Want a more secure container runtime? Swap in a different CRI implementation. But for everyone else, the defaults will serve you just fine.

The beauty of this approach is that it gives you room to grow. Start with the defaults, and when you have specific requirements, the extension points are there waiting for you. That's the real magic of Kubernetes – it works great out of the box but doesn't limit your options as your needs evolve.

A trick to manage frequently used prompts in Claude/ChatGPT

2025-02-27 08:00:00

So far, whenever I wanted to recycle a prompt from another context in Claude (though this also applies to ChatGPT and some other LLMs), I would go back through my conversation history, copy the prompt, paste it into a new chat and adjust the context. But I recently discovered that Claude Projects can be misused as "prompt templates", which makes it way easier to handle repetitive tasks.

In Projects, you can set a system prompt that will be applied to all conversations in the project. I guess it's supposed to hold relevant information about whatever you want to work on, but I like to think of a project as a prompt template rather than an actual project. For example, here's a project prompt that I use to brainstorm project ideas:

Ask me one question at a time so we can develop a thorough, step-by-step spec for this idea. Each question should build on my previous answers, and our end goal is to have a detailed specification I can hand off to a developer. Let’s do this iteratively and dig into every relevant detail. Remember, only one question at a time.

(Stolen from this great blog post)

While you could copy and paste this from a text file into every new conversation, Claude's projects make it super easy to save this as a template.

I guess this is an obvious feature for some people, but to me, it was a huge help once I found this out.

Got any other neat tricks for working with LLMs? I'd love to hear them!

2024 in Review

2024-12-29 08:00:00

It's super exciting looking back at 2024. So much has happened! I haven't been very active on this blog lately, mainly because I've been super busy in other areas of life.

The biggest change for me this year is one that I can cross off my impossible list: We bought a house! I've been wanting to write a post about this for a few weeks, and now that things are settling in, I might have the time to do that. In short: we moved from Brunswick (Germany) back to a more rural town just outside Brunswick. It's closer to my girlfriend's workplace, closer to my family, and overall way cheaper than properties in Brunswick. We're super happy with the house and have yet to find any major annoyances. The neighbors are nice, and the community is incredibly wholesome.

As for other personal projects, there hasn’t been as much traction as there used to be. My interests have partially shifted to hobbies that don’t involve a computer, which I’m grateful for. I still love building software projects – especially now that it’s become so easy to get started with a project using an AI brainstorming companion – I just do it less often. I picked up running as a hobby (more on that later), gather with friends to play Dungeons & Dragons and work on stuff in my new basement.

Goal Results

These are the goals I set myself for 2024:

Build a side project that generated at least 100€ of value

That’s a bust. As mentioned, I haven’t really spent a lot of time building side projects. And the ones I did build, I consider tools rather than an income stream.

Get a scuba diving license

I’m a certified PADI Open Water Diver! You can see some impressions here and here.

Do a 10 km run (I really need to start moving again)

I ran three 10k runs this year. The first in May, then in June and finally, in September, I improved my 2018 personal best by a few minutes!

Sell my car

Check. More info here.

Write at least 50 blog posts

I didn’t reach this goal, but I’m also not disappointed by it. The frequency of my posts varies, and this year it was just lower than before.

2025 Goals

Here are some new goals for this year:

  • Build a side project that generated at least 100€ of value (I’m keeping this goal until I reach it!)
  • Run a half marathon (and optionally beat my PB)
  • Learn woodworking and build something that takes at least 3 days to complete

Prediction Results

The results of last year's predictions:

Bitcoin will at last reach a value of 100,000$, following the halving likely happening in 2024

Bitcoin did reach 100,000$, though not following the halving event. Still, I’d call this a pass!

SpaceX will use a Starship prototype to launch Starlink satellites into orbit

Nope, they didn’t. But they made incredible leaps.

Tesla cars are still not fully self-driving (reference)

They claim they will launch it in Europe early next year. It's currently in closed testing. Still, I wouldn't give it a pass.

Google will replace their main search with an AI chat bot (see Bard)

Not sure if I’d call Gemini a full replacement, but it does have a wrapper around their traditional search engine. The product is shit compared to Anthropic and OpenAI, but I would call this a pass, since you can use Gemini instead of Google Search.

2025 Predictions

  • Apple will not ship a successor to the Apple Vision Pro
  • At least one country will mandate AI watermarking for all AI-generated content used in commercial applications
  • Russia and Ukraine will come to a consensus and end the war. Ukraine will have lost territory

And that's a wrap. I hope you all had a wonderful Christmas (if celebrated), and are as excited as I am for the new year!

Installing MSSQL Client Drivers for a PHP Application

2024-09-24 08:00:00

I just had the pleasure (cough) to connect an MSSQL database to a Laravel application at work. Because the process was super tedious, I wanted to quickly jot this down so I will never have to go through this again.

Our setup

We're building a Laravel application with DDEV. DDEV essentially moves all development tools into Docker containers and adds some nice features like local database management.

The process

Laravel comes with the boilerplate to use MSSQL out of the box. In your app, just set the database config to use sqlsrv:

```php
'connections' => [
    'sqlsrv' => [
        'driver' => 'sqlsrv',
        'url' => env('DB_URL'),
        'host' => env('DB_HOST', '127.0.0.1'),
        'port' => env('DB_PORT', '1433'),
        'database' => env('DB_DATABASE', 'laravel'),
        'username' => env('DB_USERNAME', 'root'),
        'password' => env('DB_PASSWORD', ''),
        'unix_socket' => env('DB_SOCKET', ''),
        'charset' => env('DB_CHARSET', 'utf8'),
        'prefix' => '',
        'prefix_indexes' => true,
        // 'encrypt' => env('DB_ENCRYPT', 'yes'),
        // 'trust_server_certificate' => env('DB_TRUST_SERVER_CERTIFICATE', 'false'),
    ],
],
```

You will see errors when starting your app, because you need to install the corresponding drivers first. Instead of adding them through Composer (a widely adopted package manager for PHP), you have to install the ODBC drivers through the system package manager, because Microsoft doesn't maintain a PHP package. Furthermore, you also have to add Microsoft's package repository, because Microsoft doesn't even maintain packages in the major Linux distributions' repositories. In our setup with DDEV, this has to be done by amending the Dockerfile used for the application container. Create a file at .ddev/web-build/Dockerfile and add the following contents:

```dockerfile
# https://ddev.readthedocs.io/en/stable/users/extend/customizing-images/#adding-extra-dockerfiles-for-webimage-and-dbimage
# https://stackoverflow.com/questions/58086933/how-to-install-the-sql-server-php-drivers-in-ddev-local#new-answer

ARG BASE_IMAGE
FROM $BASE_IMAGE

RUN npm install --global forever
RUN echo "Built on $(date)" > /build-date.txt

# Older apt-key based setup (Debian 11 instructions, superseded below):
# RUN curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# RUN curl https://packages.microsoft.com/config/debian/11/prod.list > /etc/apt/sources.list.d/mssql-release.list

RUN curl -fsSL https://packages.microsoft.com/keys/microsoft.asc | sudo gpg --dearmor -o /usr/share/keyrings/microsoft-prod.gpg
RUN curl https://packages.microsoft.com/config/debian/12/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list

RUN apt-get update
RUN apt-get --allow-downgrades -y install libssl-dev
RUN apt-get -y update && yes | ACCEPT_EULA=Y apt-get -y install php8.3-dev php-pear unixodbc-dev htop
RUN ACCEPT_EULA=Y apt-get -y install msodbcsql18 mssql-tools18
RUN sudo pecl channel-update pecl.php.net
RUN sudo pecl install sqlsrv
RUN sudo pecl install pdo_sqlsrv

RUN sudo printf "; priority=20\nextension=sqlsrv.so\n" > /etc/php/8.3/mods-available/sqlsrv.ini
RUN sudo printf "; priority=30\nextension=pdo_sqlsrv.so\n" > /etc/php/8.3/mods-available/pdo_sqlsrv.ini
RUN sudo phpenmod -v 8.3 -s cli sqlsrv pdo_sqlsrv
RUN sudo phpenmod -v 8.3 -s fpm sqlsrv pdo_sqlsrv
RUN sudo phpenmod -v 8.3 -s apache2 sqlsrv pdo_sqlsrv

RUN echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bash_profile
```

If you're reading this in the future and Microsoft has released a new version of the ODBC drivers, you may have to follow the new installation instructions in their documentation. It took me a while to realize that I couldn't install version 17 of the driver because I was using the installation instructions for version 18. The two are apparently incompatible with each other.

I hope that you'll never have to touch the shithole that is MSSQL, but if you do, I hope that this guide will be of value to you.

Mental AI Fog and how to cure it

2024-09-01 08:00:00

The term "AI Slop" is currently on the rise. It describes all the AI-generated images and texts we see on the internet. I'd like to propose a term that describes basically the reverse of AI Slop: Mental AI Fog.

Where AI Slop is about consuming too much AI-generated content (which also plays a part here), AI Fog describes the inability to produce content without the help of AI. We're surrounded by so many flowery, polished articles and resources that we think it's not worth the effort to write a text ourselves. This is comparable to how computer keyboards, spellchecking and autocorrection have rendered my generation and the ones to come incapable of comfortably writing longform text.

I'm currently strongly suffering from AI fog. I'm so used to letting some LLM flesh out a quick and dirty prompt nowadays that it's hard for me to write this text, get the point across and not feel insecure about my style of writing. This site is supposed to be a way to persist my thoughts whenever I want to, but are they still my thoughts if they have been proofread and corrected by a computer?

As a result, all these thoughts are piling up in my head. Where I previously braindumped thoughts on a piece of paper, I now only come up with a couple of words and let the AI elaborate. I'm consuming what are supposed to be my own thoughts, which perpetuates the cycle.

This needs to stop. I need to get back to creating things myself. I've decided to abandon the use of LLMs for most content on this site. And where AI has been used, it will be clearly mentioned. I'm contemplating adding some sort of "Backed by AI" label to certain pages to make it harder for myself to fall back on a helping hand. I will likely still be using LLMs, but making their use obvious will force me to mindfully choose where they are used.

Is this something you can relate to? Is AI fog even a fitting term for this? I don't know. And if it isn't, that's okay because I came up with it myself.

Sentiment analysis using ML models

2024-08-31 08:00:00

I just rewrote parts of my Positive Hacker News RSS Feed project to use an ML model to filter out negative news from the Hacker News timeline. This approach is far more reliable than the previous one, which used a rule-based sentiment analyzer through NLTK.

I'm using the model cardiffnlp/twitter-roberta-base-sentiment-latest, which was trained on a huge amount of tweets. It's really tiny (~500 MB) and easily runs inside the existing GitHub Actions workflows. You can try out the model yourself on the HuggingFace model card.
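The filtering step can be sketched in a few lines of Python. To be clear about what's real here: the model name is the actual one from HuggingFace, but the function name, the threshold, and the decision rule below are illustrative choices of mine, not the project's actual code. The commented-out usage shows how the `transformers` pipeline would feed the filter.

```python
def keep_story(label: str, score: float, threshold: float = 0.5) -> bool:
    """Keep a story unless the model is reasonably confident it's negative.

    The label/score pair is what a HuggingFace sentiment pipeline returns;
    the 0.5 threshold is an illustrative assumption, not the project's value.
    """
    return not (label == "negative" and score >= threshold)


# Usage with the actual model (requires `pip install transformers torch`
# and downloads the ~500 MB model weights on first run):
#
#   from transformers import pipeline
#   classifier = pipeline(
#       "sentiment-analysis",
#       model="cardiffnlp/twitter-roberta-base-sentiment-latest",
#   )
#   result = classifier("Show HN: I built a tiny RSS reader")[0]
#   if keep_story(result["label"], result["score"]):
#       ...  # include the story in the feed
```

Keeping the threshold logic separate from the model call makes it cheap to tune how aggressively the feed filters without re-running inference.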

![grafik](https://github.com/user-attachments/assets/06f42df6-624a-4108-ada8-d0d37a53e693)

If you want to subscribe to more positive tech news, simply replace the Hacker News feed in your RSS reader with this one (or add it if you haven't already): https://garritfra.github.io/positive_hackernews/feed.xml