Blog of Christian Heilmann

A Principal Program Manager living and working in Berlin, Germany. Author of The Developer Advocacy Handbook.

The rise of Model Fatigue – or is it just me?

2025-04-16 23:33:49

A shot of the video for the Kraftwerk song Das Model with the band standing behind synthesizers in front of a film of a 1950s model show.

As someone curating a newsletter and dabbling in AI, I feel both overwhelmed and bored by news about yet another AI model being released by Company XYZ that will be a “game changer” and “leave the others in the dust”. It is hard to know what I should be excited about. The size of the model? Who owns it and what it costs to use? Its terms and conditions? What it is good for? Whether I can even use it, living in Europe?

If I check Cursor’s list of possible models I have no idea what each of them means, and it feels weird to see minor versions of each…

It doesn’t help that the names of the models and their descriptions on Huggingface don’t make much sense to me or anyone who isn’t deeply involved in Machine Learning. And it doesn’t help either that news outlets and company marketing blogs keep covering us in hyperbolic headlines about them instead of selling them through case studies.

This is nothing new. We had the same with AJAX libraries, frameworks and CSS libraries before. But if we consider the amount of energy and computing power that goes into training and weighting models, this seems a lot more wasteful. What we need is less news about models and more information about what each of them is good for. Right now, it feels much more like a size competition than a competition over which is more applicable. It also doesn’t help that the few benchmarks we have continue to be rigged and skewed. This is something we already had during the browser wars, so thank you, but no.

I’m much more excited about reporting on and learning from case studies of people who used different models and found one or the other more appropriate. So, if you have those, please don’t hold back on posting them.

Getting ready for WeAreDevelopers WebDev & AI Day – 27/03/2025

2025-03-25 01:51:09

WeAreDevelopers WebDev & AI Day

On Thursday this week I will be in Vienna to moderate the WeAreDevelopers WebDev & AI Day and I am chuffed to bits that I managed to get such an amazing line-up together!

The event is online and you can follow it on your computer. It starts at 16:00 and ends at 20:30. Tickets are 79 Euro, and I cannot stop you from showing the stream on a monitor and inviting others along…

The show starts with Laurie Voss, who I worked with at Yahoo and who now posts insightful AI content almost weekly. He’ll talk about “AI and the future of software development”:

AI represents a massive change in the way software works, a leap forward. It also presents an opportunity for a massive change in the way software is written. AI-assisted coding may be an entirely new level of abstraction that sits on top of traditional programming languages and unlocks a huge new generation of software developers. In this talk, we’ll explore what this change looks like, how it came to be, and why it’s nothing to be scared of. And remember: there’s no such thing as “the fundamentals”.

Next is Hannah Foxwell with “Platform Engineering and DevEx for Your On-Prem LLMs”

I then lead a panel discussion on the topic of “Developing in an AI world – are we all demoted to reviewers?”

As developers, the new AI world seems great, but also strange. Originally we saw ourselves as creators, writers of code and owners of the functionality. In a world where more and more code is generated and users build throwaway applications it can feel like we’re losing control. In this panel we will discuss what that means in terms of ownership, security, quality and maybe find that not all is eaten as hot as it is cooked.

The all-star cast of this one is Thomas Steiner (Google), Laurie Voss (Llamaindex), Rizèl Scarlett (Block) and Rey Bango (my former boss at Microsoft, colleague at Mozilla and also of Ajaxian fame). This will rock!

We then continue with Thomas Steiner covering “AI right in the browser with Chrome’s built-in AI APIs”

In this talk, Thomas Steiner from Google Chrome’s AI team dives into the various built-in AI APIs that the team is currently exploring. First, there’s the exploratory Prompt API that allows for free-form interactions with the Gemini Nano model. Second, there are the different task APIs that are fine-tuned to work well for a particular task like translations, language detection, summarization, and writing or rewriting. After introducing the APIs, the final part of the talk focuses on demos, applications, and use cases unlocked through these APIs.

Then there’s another highlight with Kris Rasmussen, the Chief Technology Officer of Figma, who will chat with me about “Honing craft and quality in an AI-powered world”:

Real-time, browser-based environments helped shift the product-building paradigm toward more open and collaborative ways of working. As barriers between design and development continue to come down, and AI-powered tools make it easier for anyone to build, the question today is how will product development continue to improve and evolve? How might functions and roles converge? And how might developers focus on quality and craftsmanship in an increasingly AI-powered world?

And the day ends with me spent and Simon Maple of Tessl talking about “Navigating AI Native Development: The Future of Software and the Power of Prompting”

AI is reshaping the way we build software, shifting from code-centric to spec-centric development—where developers define what they want, and AI determines how to achieve it. But how do we get there? In this session, we’ll explore what AI Native Development means for the open ecosystem, why it’s worth pursuing, and how we can apply lessons from cloud-native and DevOps transformations to make it a reality. We’ll also look at the practical side, understanding how the craft of prompt engineering can help us refine structured specifications to improve results. By experimenting with different prompting techniques and validation methods, you’ll gain actionable strategies for guiding AI tools more effectively.

All of this for a measly 79 Euro, so why not go and get a ticket?

Nobody should be a “content creator”

2025-03-12 14:01:31

A happy Labrador sitting seemingly smugly in front of a picture that was done by putting its paws in paint with the caption 'When you make a nice painting and your parents hang it up and you feel nice'

As part of my job, I have to keep up with the social media space and I’m worried, bored and annoyed in equal measure. There is not much that is social about it any longer. Instead, it has become a race to the bottom of lowest-common-denominator content. And interaction bait. Or rage bait. Or just obvious spam disguised in seemingly sophisticated sound bites generated by AI. I never thought I would miss listicles and “50 things you didn’t know about X – number 16 will surprise you” posts, but these at least were obvious.

Social media platforms don’t care about quality content – they care about interactions

The problem is that social media has become a game of numbers, much like SEO used to be. Posting because you learned something, found something interesting or created something has given way to feeding the machine for more clicks and interactions. The reason is monetisation. The rules of the platforms discourage creativity and authenticity and prefer short-lived bursts of emotion. Keep the noise flowing; signals are not in fashion any longer.

When I had some highly successful posts on Facebook, I automatically got “promoted” to “digital creator” which is such a generic term, it makes me want to stop altogether.

I don’t want to be a “digital creator”. I also don’t want to be a “content creator”. Both sound to me like you should create just anything to fill up the platform. It’s stacking digital shelves with empty boxes, not creation. And it’s about deception. Take the following examples, which should be punished and removed by Facebook, but which, despite my reporting them over and over again, keep showing up.

Posts that show content from others and hide things like “the history of BMW” in the “see more” section:

Facebook post with a dog holding a stick and the description that he loves sticks and when you expand the see more you get the whole history of BMW

Posts that show one type of content and have tons of hashtags for highly engaged topics utterly unrelated to the content.

Post on Facebook with a cute dogs and tons of hashtags like JenniferLopez and TaylorSwift

I understand the latter, as people navigate social media by hashtags, not by search. But it also saddens me, as hashtags were a community invention – thanks to Chris Messina – and are now just a means of deception. I have a hard time even understanding why someone would put the history of BMW in a dog picture post. Do people search for something like that on Facebook? As far as I understand, Facebook doesn’t show up in web search results. Unlike Pinterest in image searches, which is another annoyance…

On the web, both of those would have been punished by Google and Bing for obvious spamming. On social media, these are rampant. And you know what? They are digital content and their creators (most likely a script) are “digital creators”. We should be more than that.

You can create all of that with AI!

As a maintainer of a blog, editor of an online magazine, curator of a newsletter read by 200k subscribers and host of a weekly show on YouTube, I get about 20 requests a week asking if I want to look at a revolutionary AI product that can automatically create digital content for me.

You can create blog posts, images, videos and podcasts using “extremely life-like voices” or even clone your own voice, image or video presence. I don’t want them. I don’t think any creator should use those. We cheapen the end results for the sake of an ongoing cadence and we cheat ourselves out of the joy of creating something.

Our goal should be to post something we enjoy doing. Or something that brightens the day of others or gives them something to think about. Our contributions should make social media a happy little cloud full of blotches and weirdness. Not a well-oiled, perfect creation machine of cookie-cutter, highly ephemeral “digital content”.

I am a writer when I write articles, posts and books. I’m an artist when I paint. I’m a designer when I manipulate photos and create logos. I’m a composer when I make music. I’m sort of an actor when I create short videos. And I’m a teacher and educator when I give talks. Because I care. I don’t want to compete in a race to keep people occupied and I don’t think anyone should.

The question is why you take part in social media.

  • If your goal is to make money, good luck trying to compete with the deluge of AI slop.
  • If your goal is to get reach as an influencer, this is also getting trickier, as a lot of people want a slice of that pie. We are in a post-Mr. Beast world and quite some ground has been scorched.
  • If your goal is to use it as a source of passive income, there are still some bits to gain, but you’d also have to keep abiding by the rules of the platforms.

Me, I’m happy with the reach I gained. I’m very happy about all the connections I found and people I got to know from social media over the years. But I can’t be bothered with platforms that allow obvious spam and highly manipulative content and have policies that value vapid interaction over real discourse and original content.

Good thing I have this blog. Here is where I make the rules. Maybe this will get a lot of readers, maybe it won’t. I don’t make any money with it either way. It’s out of my head now, and that is what counts. Do wonderful things that make you happy, folks. Chasing the numbers will not give you any fulfilment. Quite the opposite.

Building a “shoutout” component in plain HTML/CSS/JavaScript

2025-02-26 18:15:05

Every Wednesday I host WeAreDevelopers Live on YouTube. Afterwards, we cut out short videos to post on social media. What we needed was an obvious “shoutout moment” in the recording to indicate an interesting quote or the point where we moved on to the next topic – much like a clapper in classic movies to sync audio and video. To this end, I wrote some functionality to show a “cool” and “next” overlay that makes it easier in post-production to find the interesting spots. Here’s what that looks like:

The shoutout component in action

You can try the functionality out yourself by checking this codepen, focusing on the browser part and pressing either the `[` or `]` key.
You can also watch the following video for a step-by-step walkthrough of this article:

And there is a detailed explanation of the code on the WeAreDevelopers Magazine.

I just love how easy these things are nowadays in HTML, CSS and plain JavaScript.
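
To give a rough idea of how little is needed, here is a minimal sketch of such an overlay. This is not the code from the codepen: the element id, the “Cool!” and “Next!” wording and the one-second timeout are my own assumptions.

```html
<!-- Minimal sketch of a keyboard-triggered shoutout overlay.
     The id, wording and timeout are assumptions, not the codepen's code. -->
<div id="shoutout" hidden></div>

<style>
  #shoutout {
    position: fixed;
    inset: 0;
    display: grid;
    place-items: center;
    font: bold 15vw sans-serif;
    color: #fff;
    background: rgba(0, 0, 0, 0.7);
  }
  /* Keep the hidden attribute working despite the display: grid above. */
  #shoutout[hidden] {
    display: none;
  }
</style>

<script>
  const overlay = document.querySelector('#shoutout');
  // Map the two trigger keys to the text that flashes on screen.
  const messages = { '[': 'Cool!', ']': 'Next!' };

  document.addEventListener('keydown', (event) => {
    const message = messages[event.key];
    if (!message) { return; }
    overlay.textContent = message;
    overlay.hidden = false;
    // Hide the overlay again after a second so the recording carries on.
    setTimeout(() => { overlay.hidden = true; }, 1000);
  });
</script>
```

Pressing `[` or `]` flashes the respective word over the page for a moment, which is easy to spot when scrubbing through the recording later.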

How to trim a video in MacOS using Finder!

2025-02-20 20:41:04

Context menus are a treasure trove of features you miss otherwise. Did you know for example that you can trim videos in MacOS Finder?

All you need to do is open the context menu on any video file, go to “Quick Actions” and select “Trim”:

MacOS finder with a selected video file and the context menu open and the Quick Actions selected and Trim available.

You then get a trimming interface that allows you to drag the start and end points and move the selection around.

A video with the trimming interface visible

You can then choose to save the video as a copy or replace the original one. If the video is an MP4, there is no re-encoding and the save happens immediately.

Interface to select to save the video as a copy or replace the current one.

If you are already previewing the video by pressing space, you can also get to the trim interface using the Trim button.

Trim button in the preview