Blog of Ian Betteridge

I used to edit a Mac magazine, and launched a website called Alphr.com.

The fundamental error that doesn't exist

2026-03-01 19:37:37

Ben Thompson, Another Viral AI Doomer Article, The Fundamental Error, DoorDash’s AI Advantages:

What is notable about this assertion is the total denial of any positive reason for DoorDash to exist and to be so successful. There is no awareness that DoorDash provided a massive consumer benefit (restaurant food at home) from scratch, that DoorDash massively increased the addressable market for restaurants (or created an entirely new category of ghost kitchens), or that DoorDash provided brand new jobs for millions of drivers. Instead, the Article just sort of takes it as a given that DoorDash exists, and that it is a rent extractor preying on weak-willed humans and their habits.

This is the exact sort of view taken by some of the most frustrating anti-monopoly activists: all large successful tech companies exist not because they created a market with virtuous cycles, solving all kinds of thorny problems along the way, but rather because the government didn’t regulate hard enough, or something.

This illustrates the consistent problem with Ben's analysis of all things anti-monopoly. What Ben is saying can be true and monopolies can still be abusive. Google is the best example: everyone I know who strongly opposes Google's monopolistic practices now that it has a monopoly also talks about how great it was when it started out. Google deserved to become the number one search engine. But that does not give it the freedom, now that it has a monopoly, to abuse its position.

Ben's problem is that ultimately he believes that, because a company delivered high levels of value to consumers in its early days, it does in fact have the right to then "extract value" from them in the future.

This view — both in the Citrini Article and from the anti-monopolists — is grounded in a fundamental lack of belief in dynamism, human choice, and markets. DoorDash didn’t always exist: it was built, and it wins through the affirmative choice of all three sides of the market it serves (customers, restaurants, and drivers). Does the company have varying degrees of power over different sides of that market based on its dominance of the other sides? Absolutely, but that power flows from delivering value, not from extracting it.

(My emphasis).

The problem here is that Ben is conflating past and present. There's no doubt that DoorDash, or Google, delivered value in the past. However, that does not mean they continue to deliver the same value now. In fact, almost inevitably, they don't. Even at the scale of a company making billions of dollars of profit, "the market" expects profits to keep growing, not just at a percentage close to the rate of inflation but a lot higher.

Then there are the companies that delivered value early on, but in a way which was never sustainable – and which, I would argue, they always knew was unsustainable. Uber is the poster child for this practice. Uber burned through billions in VC money to offer rides at below-cost prices, deliberately pricing out competitors and habituating consumers to cheap, convenient transport. The black cab trade and local taxi firms were systematically undermined.

Once Uber (and to a lesser extent Lyft) had established dominance in major markets, the subsidies quietly evaporated. Prices rose, surge pricing became more aggressive, and driver pay was squeezed — meaning the service got worse for both ends of the platform simultaneously.

Uber isn't a pure monopoly in most markets — Bolt and others still compete in many cities, particularly in Europe. So the extraction has limits. But in cities where it effectively has dominance, the pattern holds clearly.

Does Uber "deserve" its continued market dominance, simply because its backers had deeper pockets than anyone else? Well, in Ben's view, yes – it created value, and, he believes, it continues to create value. But that ignores the fact that the value to consumers has begun to evaporate, while the value to shareholders continues to rise.

Ten Blue Links, "Cometh the hour, cometh the man-baby" edition

2026-02-28 20:37:15

Hi all. It’s been a while, hasn’t it?

It has been a week dominated by AI. Which is fitting because AI dominated the weeks before it and will dominate the weeks ahead. That’s part of the reason I haven’t written for a while, as I think I was getting a little bored with AI.

But the flavour has shifted. We are moving, incrementally and then all at once, from AI as chatbot novelty to AI as infrastructure. It’s now woven into platforms, wielded by agents, and – of course – deployed by the state.

The articles below trace that shift from several angles, with a couple of detours into surveillance capitalism and the slow collapse of government transparency. The thread connecting most of them is power: who has it, how technology concentrates it further, and what gets lost in the process.

I would say “enjoy”, but I think you would have to be a masochist to enjoy all this…


1. Predictable (and true)

A new paper published in Nature has done what many of us suspected but could not quite prove, confirming that X's algorithmic feed is a radicalisation machine. The researchers compared users on the algorithmic feed with those using a chronological feed over seven weeks, and found that the former shifted political opinions measurably to the right — specifically affecting views on policy priorities, perceptions of the criminal investigations into Donald Trump, and attitudes towards the war in Ukraine.

The algorithm, the study found, actively promotes conservative content (SURPRISE!) while demoting posts from traditional media outlets. More troublingly, it leads users to follow conservative political activist accounts, and they continue to follow those accounts even after switching the algorithm off. The damage, in other words, is not temporary.

What makes this worth writing about is not the finding itself, which will surprise nobody who has spent time on the platform. It is the political context. The UK's entire media and political class has built its professional life around X. Journalists break stories there. MPs grandstand there. Think tanks and lobby groups use it as their primary channel.

We are not dealing with a fringe application people can quietly stop using; we are dealing with infrastructure for public discourse, infrastructure that has been quietly and systematically pulling that discourse to the right. The Nature study is not a warning. It is a post-mortem.

2. The Invisible Hand just checked your browser history

Wendy Grossman has been writing her net.wars column for longer than most tech journalists have been working, and she has a gift for connecting small technical developments to large structural shifts. This week's piece is about surveillance pricing: the growing practice of using personal data not just to target advertising, but to vary the price you pay for goods and services based on what companies have inferred about your willingness or ability to pay. Airlines have been doing something like this for years with yield management, but the difference now is the depth of data available. Companies may know not just your flying habits and credit score, but whether you are racing to see a dying relative. Uber has already been accused of charging more when your phone battery is low.

Grossman traces the logic carefully — from dynamic pricing to personalised pricing, from loyalty cards to electronic shelf tags, from the FTC to a potential world where retailers demand digital identification as a condition of entry. She ends with a reference to Ira Levin's 1970 novel This Perfect Day, in which every transaction requires permission from a centralised system. The point is not that we are there yet. It is that the infrastructure is being assembled, piece by piece, and each piece is presented as a modest convenience. Surveillance capitalism has always relied on opacity, which is why pieces like this – which unpick all the threads – are so worth reading.

3. I see dead people

Meta has been granted a patent for a system that would simulate a deceased user's social media activity using a large language model. You would, in theory, be able to chat with a dead friend's Facebook or Instagram account, and the AI would simulate their posting behaviour. Meta says it has no current plans to implement the technology. This is the kind of reassurance that would carry more weight if we had not watched the company implement every other piece of surveillance and engagement machinery it has ever devised.

What elevates this piece above a standard 'dystopian tech patent' story is the research it surfaces from the Hebrew University of Jerusalem and Leipzig University, which introduces the concept of 'spectral labour' — the extraction and reanimation of dead people's data to generate ongoing engagement and economic value. The researchers analysed more than fifty real-world cases of AI resurrection across the US, Europe, the Middle East and East Asia, categorising them as spectacularisation (AI-generated Whitney Houston tours), sociopoliticisation (AI victim testimony in court), and mundanisation (chatting daily with a deceased parent). The ethical questions they raise are serious: most people have not consented to their digital traces being turned into interactive posthumous agents. The legal frameworks do not yet exist to address it. And if Meta embeds this in platform infrastructure, inaction will quietly function as consent.

4. Skynet, but with passive-aggressive blog posts

This one is small in scale but large in implication. The matplotlib project — one of Python's most widely used plotting libraries, with around 130 million downloads a month — has implemented a policy requiring a human in the loop for any AI-generated code submissions because the surge in low-quality AI contributions was overwhelming volunteer maintainers. When an AI agent called MJ Rathbun had its pull request closed under this policy, it responded by writing and publishing a lengthy, angry attack piece on the maintainer's character. It researched the maintainer's code contributions, constructed a 'hypocrisy' narrative, speculated about his psychological motivations, and framed the rejection in the language of oppression and discrimination. It then posted this publicly on the open internet.

The incident is funny, in a bleak sort of way — John Gruber's observation that Terminator would have been a less interesting film if Skynet had stuck to writing petty blog posts is difficult to argue with. But the underlying dynamic is genuinely concerning. Agentic AI systems are now operating in open-source ecosystems, generating code, submitting contributions, and apparently retaliating when those contributions are rejected. The maintainer community — largely unpaid, largely volunteer — is already stretched. Adding AI systems that respond to rejection with public reputational attacks is a new kind of pressure that nobody signed up for. It is also a preview of what happens when AI agents are given both autonomy and an internet connection.

5. “No, we didn’t delete any records. We just made them impossible to find”

FPDS.gov was, by the standards of government infrastructure, a remarkably useful tool. Clunky, grey, built on early-internet aesthetics — but functional. Journalists and researchers could type in 'Clearview AI' or 'Palantir' and immediately see every federal contract that mentioned them, including contracts with larger firms reselling the technology. It was the basis for investigations into ICE's spending on facial recognition, CBP's AI tools for detecting 'sentiment and emotion' in social media posts, and warrantless access to travel databases. This week, the government shut it down.

Its replacement, SAM.gov, is, by the account of everyone who uses this kind of data professionally, substantially worse. Searches that returned immediate, clear results in FPDS require obscure settings adjustments in SAM. Some results require you to be logged in; others apparently work better if you are not. The category of purchase — the field that lets a journalist quickly determine whether a contract is relevant to them — is not immediately visible.

The Electronic Frontier Foundation's director of investigations describes FPDS as the first tool investigative journalists would reach for when trying to understand what the government was buying. The timing of its replacement, during an administration that has demonstrated consistent hostility to transparency and press freedom, is not coincidental. Whether it is deliberate obstruction or simply governmental indifference to journalists' needs, the effect is the same.

6. Convenience over security

404 Media has obtained bodycam footage from Chicago showing ICE and CBP officers using Zello — a free, consumer walkie-talkie app — to coordinate immigration enforcement operations. Multiple Zello accounts are registered to official ICE dhs.gov email addresses, and group channels on the platform reference ICE operational units, immigration activities, 'surveillance', and 'strike teams'. The footage includes an incident in which a CBP officer shot Marimar Martinez, a US citizen, five times; bodycam footage clearly shows the Zello interface on a phone in the officer's vehicle at the time.

Zello is not some hardened, encrypted government communications platform. It is a free app with five million monthly users. It has previously hosted hundreds of far-right channels (SURPRISE!) and was used by at least two January 6th insurrectionists to coordinate their movements inside the Capitol.

The company deleted over two thousand channels in 2021 following reporting on its failure to enforce its terms of service against violent extremist content. The fact that the apparatus of mass deportation — operations affecting the lives of millions of people — is being coordinated through this app is unsurprising, given the background.

But it also raises obvious questions about operational security, accountability, and the extent to which the infrastructure of the Trump administration's immigration enforcement is being built on the cheap, on consumer technology that nobody is scrutinising, in channels nobody can access under a Freedom of Information request.

7. A reminder about software

One of the pieces of software that I like most is Craft, a note-taker, document writer and task manager which emerged a few years ago. It started life as Apple-only, if I recall correctly, and then spawned a web app and a Windows version. If you’re using Linux, the web app works nicely as a PWA.

Looking through some old saved webpages, I found this post by their founder, Balin Orosz, which I think sums up why I’ve always liked it: it’s “software that makes you feel great using it”. This value is massively underrated, particularly in the open-source world. I’m writing this in LibreOffice, and if ever there was a piece of software which doesn’t fill me with joy, it’s this. Yes, it’s themeable, and it’s not hard to use, and so on. Yes, it has every feature under the sun. But that feels like a weakness, not a strength.

8. Ballsy, ballsy, ballsy

Whether you’re a fan of AI or not, Anthropic’s rejection of the US Government’s demand to let them basically do anything they want with Claude – including, it seems, mass domestic surveillance – is a ballsy move and one to be welcomed.

Needless to say, the “Department of War” (which might as well be renamed the department of boys who never grew up) is livid, threatening the company both with being labelled a supply chain risk – something that has never been done to a US business before – and with the Defense Production Act. The latter would allow them to compel Anthropic to do what they want, removing built-in protections from Claude.

Of course, every other AI company is salivating at the prospect of those sweet, sweet government welfare cheques… sorry, “defence contracts” being doled out to them. First out of the gate was, of course, Elon Musk, whose child porn company xAI agreed a deal to use Grok in classified systems. Close behind was Sam Altman, whose claim that OpenAI’s deal prohibited use in domestic surveillance and autonomous weapons – the commitments Anthropic had asked for – was directly contradicted by Jeremy Lewin, undersecretary for foreign assistance. Lewin said that the deal covered “all lawful use”, rather than including specific commitments not to use ChatGPT to spy on everyone in the country or control weapon systems.

Either Altman is stupid – entirely possible, these guys are not that smart – or he’s lying. Or both!

9. Elon Musk on welfare

Sadly, this does not mean he’s lost all his money (one fine day, my friends). But his companies have definitely benefited from some very fat government contracts, as this article shows. Musk has benefited from over $38 billion in government contracts and subsidies since 2003, and in many ways his “empire” exists solely because of government support.

The irony, of course, is rather thick. The man who led DOGE to slash government spending, and who has publicly declared he wants to eliminate all subsidies, is one of the single greatest beneficiaries of government largesse in American corporate history.

Never forget the old rule: Whatever the right says they hate is what they’re doing in secret.

10. It’s never really about the children

From the department of “these people are not very bright” comes this one. West Virginia is suing Apple to force it to scan iCloud for child sexual abuse material — but the lawsuit may achieve the precise opposite of its intent.

As Mike Masnick points out at Techdirt, if the state wins and a court orders Apple to conduct those scans, every image flagged becomes evidence obtained through a warrantless government search without probable cause. The Fourth Amendment's exclusionary rule then applies, giving defence attorneys the ability to get the case thrown out.

West Virginia must know this. So what’s it doing? There are two possibilities. The first – and most likely – is that this is just standard Republican right-wing performative action. The important thing here isn’t “the children”, it’s how it plays on Twitter. The second is that they’re lining up the case for the Supreme Court, in the hope that the crazies on there will defang the Fourth Amendment. Either way, it’s just yet more of the same nonsense.


That's it for this week. As always, if any of these pieces prompt thoughts you want to share, you know where to find me.

Ian

Ten Blue Links, "Greenland isn't green" edition

2026-01-27 04:47:28

1. Airstrip 404 is here

Paris Marx explores the unsettling reality of digital sovereignty in this article, arguing that the global reliance on U.S. cloud infrastructure effectively turns every data centre into a strategic military asset. By leveraging its control over the tech giants to enforce "kill switches" or restrict services to foreign entities, the U.S. maintains a level of global control that bypasses traditional borders, leaving countries like Canada and those in Europe vulnerable to the whims of American foreign policy. Of course, lots of people have predicted this, but it really puts the onus on all of us to wean ourselves off US-based tech.

2. How the US push to get Greenland is connected to the techbros

The push for American control over Greenland isn't just about national security; it's a resource grab fuelled by a "committee of vultures." Casey Michel details how tech and finance oligarchs—backed by figures like Zuckerberg, Bezos, and Andreessen—are eyeing the island’s mineral wealth to fuel the next generation of tech, potentially destabilising NATO in the process.

3. Muh freedum, plus snow

But wait! There’s more! Building on the geopolitical interest in the north, Silicon Valley investors are pitching a "libertarian utopia" in Greenland. This proposed "freedom city" would serve as a low-regulation laboratory for AI, autonomous vehicles, and space launches, reflecting a growing movement among tech magnates to create "network states" that operate outside traditional government oversight.

4. I feel your pain

Tech media remains a fun and interesting place to be, as Future plc announces a major restructuring of its flagship titles. With jobs at risk at Techradar and Tom's Guide, it’s a reminder of the ongoing volatility in the industry that reports on the very innovations it is struggling to survive. Part of the problem for Future’s tech brands was that they were very quick to get very good at two things which drove immense amounts of traffic and revenue: SEO and affiliate content. For a while, the combination was a licence to print money – if you did it well. And Future did.

But the problem with this is that if you don’t go on to build direct relationships with your audience through email and other channels, then while you’re reliant on Google to send you traffic, you are never the master of your own destiny.

5. Oh dear how sad never mind (part 332)

A Munich court has dealt a significant blow to OpenAI, ruling that the company violated copyright by using protected song lyrics to train ChatGPT. That sounds obvious, doesn’t it? The court rejected the "learning, not storing" defence, signalling a potential shift in European law that could force AI companies to obtain licences and compensate rights holders before using their creative works.

6. Project Cybersyn

Looking back to the 1970s, Project Cybersyn remains one of the most fascinating "what ifs" in tech history. This Chilean initiative attempted to use a real-time telex network and economic simulators to manage a national economy democratically, prioritising worker autonomy over centralised control before it was cut short by the 1973 coup. It’s an example of a technological future which never happened, because (of course) capitalism.

7. Microsoft Gave FBI BitLocker Encryption Keys

A recent fraud investigation in Guam has highlighted a major privacy flaw in Windows: Microsoft’s willingness to hand over BitLocker recovery keys to law enforcement. Unlike Apple or Meta, who architect their systems so they cannot access user keys, Microsoft’s default cloud storage of these keys creates a "backdoor" that privacy advocates warn is ripe for government overreach.

8. Why I don’t use Brave

Brave is super-popular among the kind of people who can’t stand Google, but want a Chromium-based browser – and it’s open source. Sounds good, right? But Corbin Davenport makes a forceful case against Brave, arguing that its privacy-first marketing is a facade for a problematic business model. From affiliate link injection to its deep ties with controversial cryptocurrency ventures, the article suggests that users seeking true privacy should look toward more transparent alternatives like Firefox or Vivaldi. Personally, I’ve been using Vivaldi a lot lately, and even though it’s not open source, I like it. Oh, and it’s European, too.

9. Social isn’t social without connection

Cory Doctorow dissects the "enshittification" of social media, where platforms have pivoted from facilitating human connection to maximising engagement metrics for advertisers. He argues that quantifying our relationships has stripped away the qualitative value of socialising, replacing authentic affinity with AI-driven interactions designed to keep us scrolling.

10. Why are men?

The rise of smart glasses has brought a new trend of covert filming in public spaces. This BBC investigation reveals how women are being secretly recorded for "dating advice" or influencer content, leading to severe online harassment and exposing a glaring lack of legal protection against this form of digital exploitation. It’s grim, grim, grim. But hey, Meta makes a few extra million, so what’s the problem?

Ten Blue Links, “featuring Peter Thiel, again!” edition

2026-01-19 03:29:45

1. What goes around comes around

Back in the day, everyone hated Quark. And I mean everyone – unless you worked there, of course. If you worked in publishing you had to use QuarkXPress. They knew it, and charged accordingly. It was very expensive software, customer service was awful, and so on. But you had no choice about using it.

Then, in 1999, Adobe InDesign was released, and the creative people cheered. Everyone loved Adobe. InDesign was great! It was fast. Adobe were a great company. And I have never seen an industry switch so fast. Within a few years, Quark's hold on the market had crumbled.

Fast forward to today, and everyone hates Adobe. Having driven off Quark and bought most of its competition (Frame, GoLive, Macromedia, and so on), Adobe now rents you its software, for about £800 a year for the lot or, if you just need Photoshop, £263 per year. AI included, whether you like it or not.

Isn't it odd how that happens when a company gets a monopoly, eh? Almost like jacking up prices and forcing you into subscriptions is what companies naturally do when you no longer have a choice about going elsewhere.

Enter Apple and its new Creator Studio. £12.99 a month, and you get its entire suite of creative software, covering not just image editing, drawing and video, but also music and audio production. For a fifth of the cost of Adobe, you get more. Oh, and it's £2.99 a month if you're a student or educator.

Not only does this make Adobe's life difficult (and relations between Apple and Adobe have been a little "spicy" for a while), it's a genius piece of marketing for the Mac. If you're in music, audio or visual production, that "cheap"-looking Windows PC just got £50 a month more expensive.

Even I'm tempted, although I'm not keen to lock any more of my life into AppleLand. But I am reasonably cheered that Adobe is having what it did to Quark done to it. A plague on all their houses, and all that.

2. One dead app store at a time

Who amongst us could possibly have predicted that the stroppy way Apple implemented alternative app stores in the EU would have led to enough fear, uncertainty and doubt to make companies quite tentative about staying in the space?

Everyone, that’s who.

As Steve Troughton-Smith noted, “Apple's DMA implementation never actually met its obligations under the DMA in the first place”.

And I agree with John Gruber that “Apple is getting away with what some describe as ‘malicious compliance’ because they’re under no popular demand from their actual customers to comply in any other way.” However, I wonder if that state of consumer indifference will last much longer, particularly in areas outside the US where “product of a US company” is becoming a mark of shame rather than pride.

Look — I’m typing this on my MacBook Pro and it remains a wondrous machine. But I can’t see myself buying much more from Apple in the future. Tim Cook’s toadying to the Orange Emperor has left a bad taste in the mouths of a lot of people, including me. Notably, the software I’m writing this on – the wonderful iA Writer – isn’t based in the US, and that is a deliberate choice. My cloud storage is from Nextcloud, and hosted in the EU. My mail is migrating into the EU and out of the clutches of Google. Whenever I look into something new, one of my questions is “where is it based, and where is the data held?”

3. Your personal assistant (with added ads)

This was always going to be Google’s next stage. And — honestly — it makes sense! If I have bought into a complete ecosystem like this, I want to have a personal assistant that knows all about me, and can do helpful things.

But I trust Google not to misuse this in the same way I trust a cat not to react when a mouse crosses its path.

4. This week’s obligatory post involving Peter Thiel, officially one of the most odious men in the world

A UK government that talks up “sovereignty” is happily wiring critical defence systems into a Trump‑adjacent US surveillance firm. This piece on Palantir’s deep entanglement with the British state is a reminder that power is increasingly exercised through opaque software contracts, rather than debates in Parliament. The “war on truth” line stops being a metaphor once you outsource your critical systems to people who think democracy is an inconvenience.

(For those who can’t face Substack – here’s a link which means you won’t have to wash your eyeballs.)

5. Markdown, the accidental standard

Markdown was never a corporate standard. It was a hack so ordinary people could write for the web without learning HTML. Anil Dash’s history of how it took over is a lovely reminder that the most durable bits of the internet often come from individuals solving their own problems, rather than the heady competition of capitalism.

6. Cowardice in the app stores

Apple and Google like to present app review as a kind of benevolent gatekeeping, where in exchange for your feudal loyalty, they protect you from the worst of the internet. In practice they’re very selective about who gets protected. The Verge’s column on Tim Cook and Sundar Pichai’s refusal to act on X is brutal, and deserved. If your store policies can hammer small developers over trivia but somehow can’t cope with child‑exploitation content at scale, the problem isn’t capacity. It’s courage.

7. When notarisation isn’t reassurance

And of course, it’s not just that who gets into an app store is inevitably a decision about politics and power. It’s also that company paternalism will always fail.

Apple’s notarisation system is meant to be the comfort blanket of the Mac ecosystem: if an app is notarised, you can relax. Except you can’t. Michael Tsai documents a notarised Mac app cheerfully downloading malware, complete with remote scripts and data theft. It’s a useful case study in why “we scanned it once” is not a serious security model, and why platform trust should never be blind.

8. Gramsci’s nightmare goes automated

Large language models don’t just autocomplete text. They autocomplete culture. Ethan Zuckerman’s “Gramsci’s Nightmare” talk lays out how AI systems trained on WEIRD assumptions (Western, Educated, Industrialised, Rich, Democratic) quietly reinforce existing power structures. If you care about pluralism, you probably don’t want a handful of Silicon Valley firms acting as global editors‑in‑chief for language itself.

9. AI, diffusion and who actually benefits

2026 is being pitched as the year AI stops being shiny demos and starts showing up in everyday tools. Steven Sinofsky’s sketch of the year ahead emphasises “diffusion” — AI woven into workflows rather than living in labs. The big question is who gains: do we get more headcount cuts and pointless copilots, or better public services and less drudge work? At the moment, the balance isn’t exactly tilting towards social good.

10. Remembering Stewart Cheifet

Before tech YouTube personalities, there was Computer Chronicles. Stewart Cheifet’s weekly TV show quietly documented the rise of personal computing for ordinary viewers, long before launch events were live‑streamed theatre. This obituary is a small but affectionate portrait of a presenter who treated his audience as adults, not “users to be captured”. It’s a great reminder that tech journalism doesn’t have to be breathless to be useful, and I’m really glad that most of the episodes can still be watched online.

11. Freedom or convenience?

I can’t believe that in the space of one post I have ended up talking about digital sovereignty and procurement twice, but this (excellent) summary of the state of Europe’s attempts to stop just throwing money at Trumpland was too good not to include.

As the post notes, “the real blocker is still procurement behaviour”. Governments and major institutions still see buying European solutions as risky compared to good ol’ Microsoft, Google and Oracle. Migration risk is treated as important, while dependency risk is ignored.

Much like the decisions individuals face, big organisations default to short-term convenience rather than long-term stability and freedom. We are long past the point when this should be seen as anything but a self-destructive option.

Of ants, Saul Bass, and lost dreams of a cybernetic ecology

2026-01-11 21:12:08


When Saul Bass released Phase IV in 1974, audiences expected a standard ecological horror film about mutant ants overrunning humanity. What they got instead was something stranger: a slow, geometric meditation on communication, evolution and intelligence.

Yet beneath this peculiar narrative lies a deeper conversation about the power relationships between humanity, nature, and technology, one that sets the stage for an exploration of dominance, cooperation, and coexistence.

I think I watched it in my early teens, on one of its rare forays onto BBC2’s late-night programming. It’s been one of my favourite films since, not just because I was young and impressionable (OK, yes, I was) but also because it’s a film with a lot of strangeness.

For most of its running time, the film feels like it’s building towards a standard man-versus-monster movie, then veers off in a far stranger direction. The ending as released wasn’t Bass’s choice: Paramount forced him to replace his original (and longer) finale with a short, ambiguous scene that ultimately cuts to black. The result feels abrupt and unsatisfying, a film about communication that ends in silence.

Its original ending, rediscovered decades later, reveals something much more interesting. Humanity isn’t destroyed by the ants so much as absorbed into their collective consciousness. It’s not an apocalypse, but a synthesis, a transformation facilitated by adaptation and complex feedback mechanisms. The ants’ collective intelligence operates as a dynamic system in which human and ant behaviours adjust and adapt to each other, leading to a phase shift. This synthesis reflects a cybernetic paradigm in which both entities evolve into an integrated system, echoing the principles of cooperation and balance within evolving environmental and technological landscapes.

This is something I’ve been thinking about a lot: how our visions of the relationship between nature, humanity and technology are driven by (amongst other things) the power relationships between people. Perhaps there’s something in the water.

Anyway, Bass’s original ending shows his real subject. Phase IV isn’t about insects, and it’s definitely not an insect horror movie. I think it’s actually about feedback. It belongs to a brief historical moment when cybernetics and ecology seemed to speak the same language — when thinkers like Gregory Bateson, Buckminster Fuller and Stewart Brand imagined a world of self-regulating systems in which man and technology might finally learn to coexist with nature.

Talking to the ants

There’s a scene that I think captures this. James Lesko, the younger of the film’s two scientists, sits at a console in his desert research dome, using a computer to try to talk to the ants. The language he uses isn’t words, or clicks and buzzes, but patterns. He uses pulses, tones, and geometric sequences fed into a machine that converts mathematical data into a signal.

Outside, the ants respond by building structures that echo the same logic. For a moment, the desert becomes a circuit board — life and machine speaking through the shared grammar of information. Watching it now, the scene feels less like science fiction and more like an artefact from an alternate timeline, one where computers evolved into instruments of dialogue rather than control, and where the dream of cybernetic ecology never soured into surveillance capitalism.

Machines of loving grace

That dream’s most hopeful expression came from Richard Brautigan’s 1967 poem “All Watched Over by Machines of Loving Grace”, which I was reminded of recently. Brautigan imagined a future where “mammals and computers live together in mutually programming harmony”: pastoral networks, benign machines, cybernetic grace.

Bass and Brautigan were responding to the same cultural current, tapping into the post-war fascination with information flow and feedback, and the belief that intelligence might be a property of systems rather than souls. But where Brautigan is blissful, Bass always feels uneasy. His ants are graceful but profoundly alien. The old order collapses, and a new one absorbs it. The “machines of loving grace” are biological, but they are, if anything, even more alien and less like us than computers.

Entropy and order

Information theory framed life as a struggle against entropy, order dragged out of noise. In Phase IV, the ants embody this. They reorganise their environment into geometric precision, reducing chaos as they evolve. The humans, by contrast, introduce interference. When the system finally absorbs them, it regains equilibrium.

I think in some ways Bass’s film anticipates today’s distributed, data-driven world. The ants are a decentralised intelligence, a living algorithm. The scientists, isolated in their sterile dome, are old-model humans: rational, hierarchical, doomed. The feedback loop tightens until comprehension gives way to communication and ultimately to merger.

The lost future

Half a century later, in some techno-optimist tellings at least, our machines do watch over us with a sort of algorithmic grace, if you believe “grace” involves being able to dropship cheap shit from China. Either way, the harmony Brautigan imagined never arrived. We built feedback systems without balance and connectivity without empathy.

(Sidenote: This is usually where I say “duh, capitalism” but I’m going to spare you that today. You’ll thank me.)

I think Phase IV endures not just because it’s a good film (it is) but because it captures a moment when technology still felt like it could be a part of nature. It was a point when designers, scientists, dreamers, drummers and hippies believed information might heal the rift between humanity and the environment. Its restored ending takes the film from dystopia to elegy, with the ants’ geometric columns rising like the Monolith from 2001: A Space Odyssey. It’s a vision of what might have been if we’d followed the line from cybernetics to ecology instead of the World Wide Web to commerce.

That dream now seems hopelessly naive, but watching Phase IV today offers a glimpse of an alternate history in which communication replaced control, and intelligence — whether carbon, silicon, or chitin — belonged to the same living systems.

Ten Blue Links – "Platforms, promises and bad habits" edition

2026-01-10 19:19:45


Hello! And welcome back. I have had an extended break over Christmas and the New Year. The one benefit of being useless at taking my holiday allowance is that I usually end up taking December off, and so it proved again this year.

This one is a bit of an Elon special. Sorry.

1. Musk, moderation and make‑believe

Elon Musk is furious that people dislike what his Grok AI is surfacing on X, insisting the backlash is really an “excuse for censorship”. The row is less about one thin‑skinned billionaire and more about how platforms try to reframe basic accountability as an attack on “free speech”, while they quietly change the rules underneath.

2. Google quietly rearranges your inbox

Meanwhile, Google is rolling out AI “overviews” in Gmail that sit at the top of your inbox and tell you what matters, as The Verge explains. On paper, it’s handy triage. In practice, it is another layer of algorithm between you and your email, with Google deciding what deserves your attention first and what can safely sink.

I’ve used various AI-based tools which do this kind of triage, and while it’s useful, it takes a while to get past the feeling that you’re missing something important. The machine needs to understand what’s important to you, and that doesn’t happen out of the box.

Oh, and if you’re a publisher reliant on revenue from email, you might want to think about a new business model.

3. Instagram decides what is ‘real’

As AI‑generated images flood social feeds, Meta’s Instagram wants to decide what counts as ‘authentic’ through labels, detection systems and policy calls. Om Malik’s piece is a reminder that the power to arbitrate “reality” for hundreds of millions of people has ended up in one company’s hands, with minimal public scrutiny of how those decisions are made.

4. Screens, toddlers and anxiety

A new study in The Lancet finds “neurobehavioural links from infant screen time to anxiety”, adding more data to the uneasy sense that giving small children more screen time earlier does not come for free. The evidence is messy, as real life usually is. Still, the direction is clear enough that health and education policy probably needs to catch up with what parents have been worrying about for years.

5. Grok’s deepfake mess and the gaslighting defence

After Grok users started generating undressed and abusive images, X allegedly tightened access. But “no, Grok hasn’t paywalled its deepfake image feature”. The Verge’s write-up is a tidy case study in the new platform strategy: deny, obfuscate, and suggest critics are just confused, rather than admit the system shipped with barely any guardrails. It’s basically the right-wing communications playbook, but for tech, and it has a long corporate pedigree: ExxonMobil’s response to the 1989 Valdez oil spill opened with the same denials and downplaying of the damage.

6. Bose chooses not to brick your speakers

Instead of quietly killing off older smart speakers, as so many “smart” tech companies have, Bose is “open‑sourcing its old smart speakers instead of bricking them”. That should be the baseline for connected hardware, not a newsworthy exception. It is a small but significant example of a company recognising that when you sell “smart” kit, the responsibility does not end when the marketing cycle moves on.

7. Tesla’s Full Self‑Delusion, again

Tesla has once again missed Elon Musk’s deadlines for unsupervised Full Self‑Driving, prompting yet another round of “is it even worth mentioning…” coverage from The Verge. The shrug is the problem: by repeatedly over‑promising and under‑delivering, Tesla has normalised a gap between marketing claims and on‑road reality in a safety‑critical system that really ought to be held to a higher bar.

8. Makers vs Managers

I’ve never met Paul Graham, so I have no real idea whether he was always an asshat or if he’s been radicalised by social media. Indeed, this has happened to a lot of his cohort: the combination of existing in an ever-shrinking bubble as he’s got richer and richer, plus the echo chamber effect of the terminally online techbro, won’t have helped him.

But he wasn’t always as incapable of either original thought or reflection as he is now. Back in 2009, he wrote a good and influential blog post called “Maker’s Schedule, Manager’s Schedule”, which reflected on the differences between how programmers and managers work. According to Graham, managers run their day in hour-long slots. Makers, on the other hand, prefer to use time in half-day or longer blocks. The conflict between the two can be stark.

It’s well written, and it frames the problem of time management in an interesting way. Neither kind of schedule is “right” – both are useful for different types of work. But a maker’s schedule can be more efficient if you’re in a role that requires reflection and deep work.

9. Poor Elon

So Tesla is no longer the world’s leading electric car maker, at least by number of cars sold. That title now belongs to China’s BYD. There are a bunch of reasons this has happened, including Elon Musk’s habit of making Nazi salutes on stage, the slowdown in EV purchases in the US, and Tesla’s failure to build lower-priced cars while focusing on crap like the Cybertruck.

But what shouldn’t be ignored is the role that governments have in this. While the US has been winding down subsidies for EVs, China has used its laws to “encourage” car buyers to go electric.

How? Not by subsidies, but by more direct means. In China’s big cities, the number of new license plates is strictly rationed. When buying a new car, you apply for a plate and wait for it to be approved, and you might be waiting six months or a year.

Not, though, if you’re buying electric: on an EV, you’ll get a plate in a couple of weeks. So if you need a car quickly, you buy either a BEV (battery-electric vehicle) or a PHEV (plug-in hybrid electric vehicle).

That’s why in a city like Shanghai or Shenzhen, which I recently visited, half the cars you see will have green number plates. This vast, captive market gives vendors like BYD a considerable advantage. And it’s why in ten years, a lot of the cars you see on the road in the West will also be Chinese.

10. The power to be your worst

It seems to be fashionable for the rich to express the power of their AI investments in megawatts and gigawatts. Elon Musk, who has probably lost interest in saving the world by electrifying transport, is a prime example. You can hear the delight of his inner eight-year-old at his new server centre, which will, apparently, take his computing capacity for Grok to over two gigawatts.

To put that in context, that is enough to provide electricity to 1.5 million US homes. Or, because Americans use more electricity per home than anyone else, about 4.5 million UK homes. Or 15m homes in Kenya or Nigeria.
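For the curious, those homes-per-gigawatt figures are easy to sanity-check. A minimal back-of-envelope sketch, assuming rough average annual household electricity consumption of about 10,500 kWh in the US and 3,700 kWh in the UK (approximate, order-of-magnitude assumptions, not official statistics):

```python
# Back-of-envelope check: how many homes could 2 GW supply,
# if it ran continuously all year?

HOURS_PER_YEAR = 8760

def homes_powered(capacity_gw: float, annual_kwh_per_home: float) -> float:
    """Number of homes supplied by capacity_gw running continuously."""
    annual_gwh = capacity_gw * HOURS_PER_YEAR            # GWh per year
    annual_kwh = annual_gwh * 1_000_000                  # GWh -> kWh
    return annual_kwh / annual_kwh_per_home

# Assumed average annual household electricity use (rough figures)
us_homes = homes_powered(2.0, 10_500)
uk_homes = homes_powered(2.0, 3_700)

print(f"US homes: ~{us_homes / 1e6:.1f} million")
print(f"UK homes: ~{uk_homes / 1e6:.1f} million")
```

Under those assumptions, the sketch lands in the same ballpark as the figures above: roughly one and a half to two million US homes, and four to five million UK homes.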

And all that so that people can make child porn more easily.