Christian Heilmann

A Principal Program Manager living and working in Berlin, Germany. Author of The Developer Advocacy Handbook.

Blog of Christian Heilmann

I fell for a phishing attack and lost access to my X account. Here are five mistakes I made that you need to avoid!

2026-02-22 19:57:36

After 20 years of using Twitter, I just lost access to my X account because I fell for a phishing attack. As someone who helps a lot of people with their security issues, this is embarrassing, but I want to turn it into a learning experience, so I will share my mistakes with you so you can avoid them.

First Mistake: doing security-related tasks in a rush.

It was the end of the week. I had sent out a few last social media updates and told my partner that I would soon be home so we could drive to our weekend place, so I felt obliged to wrap things up quickly. I wanted to pick up my company iPhone, but it had run out of juice. After rebooting, it asked for a SIM PIN, which of course I don’t know by heart. It also asked for my iCloud password, which I had forgotten as I had just updated my computers to the latest macOS. So I was in the middle of the reset-your-password dance, with verification across different devices, when the phishing mail came in.

Things I should have done instead:

  • I should have just ignored the mail and finished my other tasks first, so I could give it my full attention instead of being in a rush.

Second Mistake: falling for a pretty good phishing mail.

The mail looked like this:

The phishing email

The content was the following:

Case Update: Status Notification

Hi Chris Heilmann codepo8.bsky.social,
We are notifying you regarding a recent flag on your profile content. A preliminary check suggests it might not be aligned with our community standards.

The notification indicates that the post may contain intellectual property or restricted media.

Chris Heilmann codepo8.bsky.social
@codepo8
Getting my life on track…

Tweet Media
A support case has been opened for this matter. If you believe this flag is incorrect, you may request a re-evaluation. Deleting the post does not automatically close this case — if the issue persists, your account status may be affected.

Submit appeal

Notification sent to @codepo8

If this is not your account, you can unsubscribe or manage email preferences.

X Corp. 6428 Market Street, Suite 900 San Francisco, CA 94441

I had opened the mail on my second monitor while I was still wrapping up work on the main one, and my immediate thought was that this was silly: why would they flag this tweet as a copyright issue? I even took a screenshot because I wanted to complain about this nonsense on other social media platforms. But I also wanted the issue resolved quickly, so I clicked the “Submit appeal” button.

What I should have done instead:

  • Verify that the mail is really legit by checking the sender address and the URL behind the link.

Third Mistake: not checking the URL of the link or the sender of the mail.

In my rush, glancing at the second monitor, I only saw the “Submit appeal” button and clicked it. I didn’t check the URL of the link, which was not an X domain but `https://cdn.ampproject.org/c/s/velitoya.com/codepo8`. I also didn’t check the sender of the mail, which was not an X email address but `X Notices `. It is interesting to see that they add a secondary layer of obfuscation by routing the link through the AMP cache.
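Because an AMP cache URL simply embeds the real destination after the `/c/s/` path segment, a quick look at the link would have revealed where it really went. A small sketch using the exact URL from the mail:

```javascript
// The AMP cache URL wraps the real destination: everything after
// "/c/s/" in the path is the actual host the link resolves to.
const link = "https://cdn.ampproject.org/c/s/velitoya.com/codepo8";
const realHost = new URL(link).pathname
  .replace(/^\/c\/s\//, "")
  .split("/")[0];
// realHost is "velitoya.com" – clearly not an X domain
```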

The interface it showed me looked pretty legit and redirected to `https://noticedirect-x.com/copyright/codepo8`.

copyright notice screen

The text on the page was the following:

Copyright Violation Notice
Your content contains copyrighted material. This violates X’s Community Guidelines.

Notice Date: February 22, 2026
Violation Type: Copyright Infringement
Status: Objection in Progress
Violated Content:

Chris Heilmann codepo8.bsky.social
@codepo8
1:18 PM · Feb 20, 2026
Getting my life on track…

Tweet Image
Continue

I thought this looked pretty legit, so I clicked the “Continue” button and it took me to a page that asked for my X login details.

The verification modal

Verification
You need to verify that you are the account holder to proceed with your appeal.
Chris Heilmann
codepo8.bsky.social
@codepo8
Password
Login

Notice that it showed my account with the correct image and all details. Well done, you bastards.

Fourth Mistake: not realising the form was fake despite autofill staying silent.

I store my passwords in my browser and use the autofill function to log in to sites. When I got to the login page, it should have triggered autofill, but it didn’t. I should have realised that this was a sign that it was not a legit login page. Instead, I thought maybe the phishing site was just badly made and didn’t trigger autofill, so I entered my login details manually.

What I should have done instead:

  • I should have realised that the lack of autofill is a sign that this form is not hosted on the correct domain.
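Browsers and password managers bind saved credentials to the exact origin they were saved on, which is why a phishing domain never triggers autofill. Conceptually (the stored origin here is an illustration):

```javascript
// Password managers only offer a saved login when the current page's
// origin matches the origin the credentials were stored for.
const savedOrigin = "https://x.com"; // where the password was originally saved
const currentOrigin = new URL("https://noticedirect-x.com/copyright/codepo8").origin;

const shouldAutofill = currentOrigin === savedOrigin;
// shouldAutofill is false – the missing autofill prompt *is* the warning sign
```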

Fifth Mistake: allowing myself to be kept busy while the phishing attack is happening.

After I entered my login details, I was taken to a page that said my appeal was being processed and – get this – asked me to verify my identity further with a scan of my passport, my credit card and other personal details. This is where I bailed and knew that I had been phished. Lucky for me, as a different person might have been tempted to enter these details, losing even more control over their account and personal information.

In the meantime, I got a notification on my phone that there was a login attempt from an unrecognized device:

Login attempt email

New login
Location*İstanbul, Türkiye
DeviceChromeDesktop on Windows

*Location is approximate based on the login’s IP address.

I immediately went to my X account and tried to change my password, but the damage was already done. Instead of being able to change it, I got a message that my account was locked and that I needed to use my authentication app to unlock it. I don’t have an authentication app set up for my X account, so I was locked out and had to go through the account recovery process.

Meanwhile, a second email came in from X stating that the email address of my account had been changed to `[email protected]`.

Your email address has been changed
The email address on your account codepo8 has been changed to ashleyhaviliigmail.com. Based on this change, please be aware that additional changes to your account may be restricted temporarily.
If you did not make this change, please secure your account.

Every attempt to access the account ended in a message that the account is locked and that I need to use my authentication app to unlock it. I filed a complaint with X support.

What I should have done instead:

  • I should have immediately tried to change my password and enable two-factor authentication on my account.
  • I should have used an authentication app to secure my account instead of SMS-based two-factor authentication, which is less secure and which X doesn’t even support any longer.

Where I am now…

So, this is where I am now. I have lost access to my X account and am going through the account recovery process. I have also contacted X support to try to get the account back. The first attempt was not successful, and I now have to wait seven days before I can try again. Seven days in which the attackers have full control over my account and can do whatever they want with it. I understand that I made a stupid mistake entering my login details on a phishing site, but I hope that X support will be able to recover an account that has had the same email address since 2006 and a backup phone number I still have access to. After all, I have been a Twitter user for 20 years.

The biggest issue is that I have a lot of followers on X and I use it for my work, so losing access to my account is a big deal for me. I also have a lot of personal memories in the account, so I really hope I will be able to get it back. I have used X as “write only” for a long time – I post there, but I don’t really interact with other users – which is also why I didn’t care enough to keep all the security measures up to date. I didn’t have an authentication app set up, which is another mistake. I thought that SMS-based two-factor authentication would be enough, but I was wrong.

I will keep you updated on the situation and I hope that my experience can help others to avoid falling for phishing attacks and to secure their accounts better.

Conclusion

Here are the things to avoid:

1. Don’t do security-related tasks in a rush.
2. Don’t trust emails that look legit but press you with urgency.
3. Always check the URL of the link and the sender of the mail.
4. Treat a login form that doesn’t trigger autofill as a red flag.
5. Don’t let yourself be kept busy while the phishing attack is happening.

Stay safe out there and always double-check before you click on any links or enter your login details on any site. And if you do fall for a phishing attack, don’t panic, but immediately try to secure your account and contact support.

I fell for a phishing mail and lost access to Twitter/X

2026-02-21 04:08:24

If you are following me there, nothing that gets posted now is from me. DO NOT CLICK ANY links.

A phishing email claiming one of my posts was copyright infringement

I’ll keep you posted when and if I get access again and will talk about it on the live show this coming Wednesday.

This will be a good opportunity to re-assess my social media presence in general…

WebMCP – a much needed way to make agents play with rather than against the web

2026-02-17 01:06:05

WebMCP is an exciting W3C proposal that just landed in Chrome Canary for you to try out. The idea is that you can use some HTML attributes on a form or register JavaScript tool methods to give agents direct access to content. This gives us as content providers and web developers an active way to point agents to what they came for, rather than dealing with tons of traffic from scripts that haphazardly and clumsily try to emulate a real visitor.

Agents vs. the web

The current relationship of agentic AI and the web is predatory, wasteful and fraught with error. Agents scrape web sites, take screenshots and scan those, or keep trying to fill out form fields and click on buttons to get to content that was meant for real, human visitors. Under the hood, agents use the browser automation we created for testing, both of browsers and of web apps. But instead of going through a defined test suite with knowledge of the structure of the web app, agents brute-force their way in. This is exactly what we’ve been hardening the web against because of malware, spammers and content thieves; companies like Cloudflare make a good living providing the tools for that. Publishing on the web is full of hazards. Just publish a free-form field that stores entries in a database, and boy, will you have to deal with a mess within seconds. Spam and malware bots are quite ready to find any vulnerability to post their content to your site, and XSS protection is the biggest game of whack-a-mole I hate having to play.

Agents vs. user wallets

For the users of agents this means that they burn through tokens much more quickly, as the agent grabs web content that is bloated, slow to parse and often gated behind several authentication steps. WebMCP can improve this, as it allows content providers to show agents where the content to index is and what to put into form fields to reach the content they came for. Or – even better – it gives agents programmatic access to trigger functionality and get content, instead of filling out a form and performing a site-wide search that needs filtering afterwards.
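To make this concrete, here is a sketch of what registering such a tool could look like. The exact API surface of WebMCP is still being specified, so `registerTool` and every name below are stand-ins rather than the final API; the in-memory registry makes the sketch self-contained:

```javascript
// Hypothetical sketch: a blog exposes its own search as an agent tool
// instead of letting agents scrape and filter rendered pages.
// `registerTool` is a stand-in for whatever the final WebMCP API exposes.
const tools = new Map();
const registerTool = (tool) => tools.set(tool.name, tool);

registerTool({
  name: "searchPosts",
  description: "Search blog posts by keyword; returns titles and URLs",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
  async execute({ query }) {
    // In a real page this would call the site's own search endpoint.
    const posts = [
      { title: "WebMCP explained", url: "/webmcp" },
      { title: "Cistercian numerals", url: "/cistercian" },
    ];
    return posts.filter((p) =>
      p.title.toLowerCase().includes(query.toLowerCase())
    );
  },
});
```

An agent could then call `tools.get("searchPosts").execute({ query: "webmcp" })` and receive structured results, with no scraping or form-filling needed.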

Agents are now first-class citizens

In essence, this standard and its implementation in Chrome means that agents have become first-class citizens of the world wide web. A future we as publishers have to deal with. The good thing is that the web is pretty much ready for this, as we’ve done it before for search engine bots, syndication services and many other automated travelers of the information superhighway.

The web was designed to be machine readable!

The thing that annoys me about this is that we are re-inventing the wheel over and over again. When the web came around, it was an incredibly simple and beautiful publishing mechanism. HTML described the content, and all you needed to do was to put a file on a server:
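A page like that can be as minimal as this sketch – semantic elements plus descriptive metadata, with placeholder content throughout:

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>My page – what it is about</title>
<meta name="description" content="A short description of what this page offers">
<meta name="keywords" content="a, few, topical, keywords">
</head>
<body>
<header>
<h1>My page</h1>
</header>
<main>
<p>The content people (and machines) came for.</p>
</main>
<footer>
<p>Author and contact information.</p>
</footer>
</body>
</html>
```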


















The header, main and footer elements not only help assistive technologies understand the structure of the page, they also help search engines and agents find the content they are looking for. A clear description and keywords help search engines understand what the page is about and index it correctly. A good title makes it easy for users to see what the page is about. Together, the meta tags and the structure of the page make it easy for agents to find the content they need and to understand how to interact with it.

The web was designed to be machine readable and to allow for easy indexing and syndication. We had meta tags for search engines, we had sitemaps, we had RSS feeds and APIs. We had all the tools we needed to make our content discoverable and accessible to machines. But instead of using those tools, we have been building more and more complex web pages that are designed for human consumption and then trying to scrape them with agents. This is not only inefficient but also disrespectful to the web and its creators.

Semantic HTML would be a great thing for agents!

But the agent creators aren’t to blame for this, they are just trying to get the content they need to provide their users with the best experience possible. The problem is that we as content providers have given up on semantic HTML and machine readability in favour of flashy designs and complex interactions that are meant to impress human visitors but are a nightmare for agents to parse and understand. And are often a nightmare for human visitors as well, but that’s a different topic. I’ve been advocating for semantic HTML for decades, as I just love that it means that my content comes with a description and interaction for free. But for decades now we have been fighting a new breed of developers that see the web as a compilation target for their JavaScript and not as a publishing platform. Why bother with semantic HTML when you can just throw a div on the page and style it to look like anything you want? Why bother with meta tags when you can just stuff your content with keywords and hope for the best?

Meta content like description, keywords and author is still there, but it is often ignored or misused. The same goes for sitemaps and RSS feeds. We have been so focused on making our content look good for humans (and act and look like native apps) that we have neglected its machine readability. And we have been focusing hard on making our content look good for search engines, which is a different kind of machine than agents. The meta description, title and keywords had a short lifespan of usefulness, as search engines quickly learned to ignore them and rely on the actual content of the page, because the meta content was often misleading or stuffed with keywords. Instead of using these built-in mechanisms of the web, we added tons of extra information to the HTML head for Twitter, Facebook and many other services, some of which are dead by now and just add to the overall landfill of forgotten bespoke HTML solutions. Maybe this is a good time to read up on meta tags and on alternative display modes of your content connected via LINK elements, beyond 34 CSS files and 20 fonts.
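For example, pointing machines at alternative representations of your content takes one LINK element per format (the hrefs here are placeholders):

```html
<!-- Machine-readable alternatives to the rendered HTML page -->
<link rel="alternate" type="application/rss+xml" title="Posts (RSS)" href="/feed.xml">
<link rel="alternate" type="application/feed+json" title="Posts (JSON Feed)" href="/feed.json">
```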

Will WebMCP get adoption or will we take another loop around the conversion tree?

The question is: will we use this opportunity to make the web better for everyone, or will we continue to build bloated and inefficient web pages that are designed for human consumption or – worse – optimised for developer convenience? Will providers of agent services embrace this standard, or discard it as a nice-to-have and keep brute-forcing their way through the web? Or find other ways to make the web cheaper for agents to read? Cloudflare just introduced Markdown for Agents – a service that turns your already rendered HTML with thousands of DIVs and unreadable class names into structured Markdown. Markdown, a non-standardised format that just caused a scary security issue in Windows Notepad.

Alternative content has been a staple for Web2.0

We have had the tools for quite a while; many content providers offer feeds and APIs you and your agent can play with. Did you know, for example, that WordPress has a built-in REST API that gives you access to all the content of a WordPress site? You can use that to get the content you need without having to scrape the web page. Terence Eden wrote a great article about how to use the WordPress REST API to get content, with the lovely title Stop crawling my HTML you dickheads – use the API!.
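As a sketch, fetching the latest posts from a WordPress site takes only a few lines. The site URL below is a placeholder; `/wp-json/wp/v2/` is the standard route, though some installs disable or remap it:

```javascript
// Build the standard WordPress REST API URL for the latest posts.
// `_fields` trims the response down to just what we need.
const postsEndpoint = (site, perPage = 5) =>
  `${site.replace(/\/$/, "")}/wp-json/wp/v2/posts` +
  `?per_page=${perPage}&_fields=title,link,excerpt`;

// Fetch the posts as structured JSON – no HTML scraping involved.
async function latestPosts(site) {
  const response = await fetch(postsEndpoint(site));
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json(); // [{ title: { rendered }, link, excerpt: { rendered } }, …]
}
```

Calling `latestPosts("https://example-wordpress-site.com")` returns an array of post objects ready for indexing.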

Findability has always been the issue with this. Remember the incredibly simple and powerful idea of Microformats? They were a way to add semantic meaning to your content using a few CSS classes, making it more machine readable and accessible without changing how it looked for human visitors. But they never really took off, because they were not widely adopted, not supported by search engines, and never surfaced to end users in browsers. A great idea, ahead of its time, that never really caught on.

I am on team WebMCP, are you?

With WebMCP, we have the opportunity to go back to the roots of the web and make our content truly machine readable. We can use the new attributes and methods to point agents to the content we want them to index and to give them the information they need to understand it. This is a chance to make the web a better place for both humans and machines, and to create a more symbiotic relationship between the two: agents can easily find the content they need, and publishers get more control over how their content is accessed and used. This is excellent news for the future of the web and of AI, and I can’t wait to see how it evolves.

When being Hitler’s guard was a literal drag…

2026-02-02 15:51:34

Quick segue here, but this story is too good. In 1942, Die Grosse Liebe came out, Goebbels’s magnum opus other than Triumph of the Will. The Nazi propaganda minister was really into this movie and wanted it to be a huge success, swaying the emotions of the German people back to believing in winning the all-out war. Movies back then had a double release: they came to the cinemas, and the songs in them came out on record at the same time. Songs were specifically made to be positive, easy to get into, and some epic. This movie features the songs “Davon geht die Welt nicht unter” (the world doesn’t collapse because of that), which was the lighter bit, and “Ich weiß, es wird einmal ein Wunder gescheh’n” (I know, one day a wonder will happen), the epic one. The latter had a bombastic scene in the movie, where the then superstar Zarah Leander sings it in front of a wall of splendid ballet dancers:

The issue here was that Miss Leander was curvy and the dancers, in comparison, incredibly lithe and petite. This didn’t give the scene enough gravitas and took some of the limelight away from her. The solution Goebbels offered on the spot at the shoot was to replace the dancers with members of Hitler’s personal guard. So what you see in the scene is burly men in drag. Well, you would, but the editors made very sure that there are no closeups of the chorus, only of Zarah Leander.

The chorus of ballet dancers to the song in the movie clearly being men in drag

It is a shame that there aren’t more behind-the-scenes shots of that, as the pissed-off facial expressions of some of them are excellent.

Zoomed in closeups of the men in dresses looking not happy at all.

My favourite is the last one that looks eerily like Eric Idle in his Monty Python days in drag.

You can watch the full movie on archive.org with English subtitles. It is a piece of propaganda trash, but also very well made.

Monky Business: Creating a Cistercian Numerals Generator

2026-01-13 23:20:54

In the 13th century Cistercian monks came up with a way to show the numbers from 1 to 9999 as a single character.

The cistercian numerals showing numbers 1 - 9 and 10x multiples of those as different characters

The way it works is to add the lines of different characters to each other until the number is reached. So, if you want to show 161, you take the 1, the 60 and the 100 and add them together:

Showing the correct numeral for 161 by showing the ones for 1, 60 and 100 and adding them to the same image

Same with 1312 as 1000 + 300 + 10 + 2:

Showing the correct numeral for 1312 by showing the ones for 2, 10, 300 and 1000 and adding them to the same image

Which is pretty much incredible, so I thought it would be fun to create a generator for those characters. And here it is:

Screen recording of the generator in action

And while we’re at it, why not have a Cistercian Clock?

How to use the generator

Open it in your browser and enter the numbers you want to generate. You can also get the source code, download it and use it offline. You can generate numerals as PNG or as SVG, click them to download the images and click the X buttons to remove them.

How to use the code in your own products

The generator is based on a script I wrote to generate the numerals, all available on the GitHub Repo. There are two flavours, a simple Node based one that returns SVG strings and a more advanced one that allows for in-browser PNG and SVG generation and customisation.

toCistercian.js – Node or browser number-to-Cistercian-numeral converter in SVG

You can use this on the command line using:


node toCistercian.js {number}

For example `node toCistercian.js 161` results in the following SVG:



<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 60 90">
  <title>Cistercian numeral for 161</title>
  <line x1="30" y1="10" x2="30" y2="80"/>
  <line x1="30" y1="10" x2="50" y2="10"/>
  <line x1="10" y1="10" x2="10" y2="30"/>
  <line x1="50" y1="80" x2="30" y2="80"/>
</svg>







You can also use this in a browser as shown in the simple example:





Cistercian.js – convert to svg/png/canvas with customisation

The generator uses the more detailed cistercian.js version, which allows you to generate numerals in various versions and formats.

Usage is in JavaScript and a browser environment.


const converter = new Cistercian();
converter.rendernumber(1312);

This would add an `output` element to the body and render the numeral with a text representation and a button to remove it again.
You can configure it to change the look and feel and what gets rendered by calling the `configure` method. See the advanced example for that.

If you want, for example, to render the numeral inside the element with the ID `mycanvas` as SVG with a `width` of `400`, lines 10 pixels thick and in the colour `peachpuff` and without any text display or button to delete, you can do the following:




const myConverter = new Cistercian();
myConverter.configure({
  renderer: 'svg',
  canvas: { width: 400 },
  stroke: { colour: 'peachpuff', width: 10 },
  addtext: false,
  addinteraction: false,
  outputcontainer: document.getElementById('mycanvas')
});
myConverter.rendernumber(1312);

How I built the thing

As with many things I code for fun, this started offline, with me thinking how to approach this issue. In essence, all I had was an image of the numerals. When I got home, I thought I should give this to Copilot to vibe code like all the cool kids do. I asked it to take this image of numerals and create SVG versions for each of them (so I could link to them). The result was fast, immediate, confident and utter garbage.

Generated SVG for 1-9 of the numerals, all wrong

So I went back to analysing the numerals and instead of creating them as SVGs, I created them as a dataset. In essence, these are characters on a 3 by 5 grid. I numbered the points and wrote them down as coordinates:

my glyph cheatsheet


this.points = [
[10,10],[30,10],[50,10],
[10,30],[30,30],[50,30],
[10,50],[30,50],[50,50],
[10,60],[30,60],[50,60],
[10,80],[30,80],[50,80]
];

The middle line is never used in the real numerals, but hey, why not?

Then I looked at the numerals and noted down which points are connected for each of them. 1 and 13 are always there as this is a vertical line in the middle. This gave me the dataset to use with Canvas or generate SVG from. Here are the indices of the points array that describe all the glyphs:


this.glyphs = {
0: [[1,13]],
1: [[1,2]], 10: [[0,1]], 100: [[14,13]], 1000: [[12,13]],
2: [[4,5]], 20: [[3,4]], 200: [[10,11]], 2000: [[9,10]],
3: [[1,5]], 30: [[1,3]], 300: [[13,11]], 3000: [[13,9]],
4: [[4,2]], 40: [[4,0]], 400: [[10,14]], 4000: [[10,12]],
5: [[1,2],[2,4]], 50: [[0,1],[0,4]], 500: [[13,14],[14,10]], 5000: [[13,12],[12,10]],
6: [[2,5]], 60: [[0,3]], 600: [[14,11]], 6000: [[12,9]],
7: [[1,2],[2,5]], 70: [[0,1],[0,3]], 700: [[13,14],[14,11]], 7000: [[13,12],[12,9]],
8: [[4,5],[5,2]], 80: [[4,3],[3,0]], 800: [[10,11],[11,14]], 8000: [[12,9],[9,10]],
9: [[1,2],[2,5],[5,4]], 90: [[0,1],[0,3],[3,4]], 900: [[13,14],[14,11],[11,10]], 9000: [[13,12],[12,9],[9,10]]
};

The rest was just comparing and looping over this array.
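Put together, drawing one chunk is just mapping its point-index pairs onto the coordinate grid. A trimmed-down sketch using the data above (standalone consts instead of `this.`, and only a few glyphs for brevity):

```javascript
// The 3-by-5 grid of x/y coordinates from the post.
const points = [
  [10,10],[30,10],[50,10],
  [10,30],[30,30],[50,30],
  [10,50],[30,50],[50,50],
  [10,60],[30,60],[50,60],
  [10,80],[30,80],[50,80]
];
// A subset of the glyph table: pairs of indices into `points`.
const glyphs = { 0: [[1,13]], 1: [[1,2]], 60: [[0,3]], 100: [[14,13]] };

// Each segment becomes [[x1,y1],[x2,y2]] – ready for canvas or SVG lines.
const segments = (key) =>
  glyphs[key].map(([from, to]) => [points[from], points[to]]);
// segments(60) → [[[10,10],[10,30]]]
```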

The logic of adding up to the final numeral was not too taxing either. When the number isn’t defined in the glyphs array, I turn it into a string and loop over it from the end to the start. Each digit then gets zeroes appended to allow for the lookup:


let chunks = number.toString().split('').reverse();
chunks.forEach((chunk, index) => {
  let value = chunk + '0'.repeat(index);
  // … look up the glyph for `value` and add its lines to the numeral
});

So, for 1312, the reversed string becomes 2131, and on each loop iteration I get the lookup keys:

  • 2
  • 10
  • 300
  • 1000
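The whole decomposition step can be sketched as one small function (the zero-digit filter is my addition for clarity; the original simply finds no glyph for those keys):

```javascript
// Split a number into its 1s/10s/100s/1000s lookup keys.
// Reversing the digits means the index tells us how many zeroes to append.
function chunkNumber(number) {
  return number
    .toString()
    .split("")
    .reverse()
    .map((digit, index) => digit + "0".repeat(index))
    .filter((key) => key[0] !== "0"); // zero digits add no strokes
}
// chunkNumber(1312) → ["2", "10", "300", "1000"]
```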

Feel free to check the source of the script for some more fun bits. And yes, I did use Copilot to help with some of the cruft code I didn’t feel like writing by hand, especially turning functions into methods and such.

I had fun, I hope you find it interesting, too.