Blog of Ian Betteridge

Ian Betteridge: I used to edit a Mac magazine, launched a website called Alphr.com

Of ants, Saul Bass, and lost dreams of a cybernetic ecology

2026-01-11 21:12:08

When Saul Bass released Phase IV in 1974, audiences expected a standard ecological horror film about mutant ants overrunning humanity. What they got instead was something stranger: a slow, geometric meditation on communication, evolution and intelligence.

Yet beneath this peculiar narrative lies a deeper conversation about the power relationships between humanity, nature, and technology, one that sets the stage for an exploration of dominance, cooperation, and coexistence.

I think I watched it in my early teens, on one of its rare forays onto BBC2’s late-night programming. It’s been one of my favourite films since, not just because I was young and impressionable (OK, yes, I was) but also because it’s a film with a lot of strangeness.

For most of its running time, the film feels like it’s building towards a standard man vs monster movie, but then veers off in a bizarre direction. The ending as released wasn’t Bass’s choice. Paramount forced him to replace his original (and longer) finale with a short, ambiguous scene that ultimately cuts to black. The result feels abrupt and unsatisfying: a film about communication that ended in silence.

Its original ending, rediscovered decades later, reveals something much more interesting. Humanity isn’t destroyed by the ants so much as absorbed into their collective consciousness. It’s not an apocalypse but a synthesis, a transformation driven by adaptation and feedback. The ants’ collective intelligence operates as a dynamic system in which human and ant behaviours adjust to each other until the whole thing tips into a phase shift: two entities evolving into a single, integrated, thoroughly cybernetic whole.

This is something I’ve been thinking about a lot: how our visions of the relationship between nature, humanity and technology are driven by (amongst other things) the power relationships between people. Perhaps there’s something in the water.

Anyway, Bass’s original ending shows his real subject. Phase IV isn’t about insects, and it’s definitely not an insect horror movie. I think it’s actually about feedback. It belongs to a brief historical moment when cybernetics and ecology seemed to speak the same language — when thinkers like Gregory Bateson, Buckminster Fuller and Stewart Brand imagined a world of self-regulating systems in which man and technology might finally learn to coexist with nature.

Talking to the ants

There’s a scene that I think captures this. James Lesko, the younger of the film’s two scientists, sits at a console in his desert research dome, using a computer to try to talk to the ants. The language he uses isn’t words, or clicks and buzzes, but patterns. He uses pulses, tones, and geometric sequences fed into a machine that converts mathematical data into a signal.

Outside, the ants respond by building structures that echo the same logic. For a moment, the desert becomes a circuit board — life and machine speaking through the shared grammar of information. Watching it now, the scene feels less like science fiction and more like an artefact from an alternate timeline, one where computers evolved into instruments of dialogue rather than control, and where the dream of cybernetic ecology never soured into surveillance capitalism.

Machines of loving grace

That dream’s most hopeful expression came from Richard Brautigan’s 1967 poem “All Watched Over by Machines of Loving Grace”, which I was reminded of recently. Brautigan imagined a future where “mammals and computers live together in mutually programming harmony”. It’s a vision of pastoral networks, benign machines, cybernetic grace.

Bass and Brautigan were responding to the same cultural current, tapping into the post-war fascination with information flow and feedback, and the belief that intelligence might be a property of systems rather than souls. But where Brautigan is blissful, Bass always feels uneasy. His ants are graceful but profoundly alien. The old order collapses, and a new one absorbs it. The “machines of loving grace” are biological, but they are, if anything, even more alien and less like us than computers.

Entropy and order

Information theory framed life as a struggle against entropy, order dragged out of noise. In Phase IV, the ants embody this. They reorganise their environment into geometric precision, reducing chaos as they evolve. The humans, by contrast, introduce interference. When the system finally absorbs them, it regains equilibrium.

I think in some ways Bass’s film anticipates today’s distributed, data-driven world. The ants are a decentralised intelligence, a living algorithm. The scientists, isolated in their sterile dome, are old-model humans: rational, hierarchical, doomed. The feedback loop tightens until comprehension gives way to communication and ultimately to merger.

The lost future

Half a century later, at least in some techno-optimist views, our machines watch over us with a sort of algorithmic grace. That is, if you believe “grace” involves being able to dropship cheap shit from China. Either way, the harmony Brautigan imagined never arrived. We built feedback systems without balance and connectivity without empathy.

(Sidenote: This is usually where I say “duh, capitalism” but I’m going to spare you that today. You’ll thank me.)

I think Phase IV endures not just because it’s a good film (it is) but because it captures a moment when technology still felt like it could be a part of nature. It was a point when designers, scientists, dreamers, drummers and hippies believed information might heal the rift between humanity and the environment. Its restored ending takes the film from dystopia to elegy, with the ants’ geometric columns rising like the Monolith from 2001: A Space Odyssey. It’s a vision of what might have been if we’d followed the line from cybernetics to ecology instead of the World Wide Web to commerce.

That dream now seems hopelessly naive, but watching Phase IV today is a glimpse of an alternate history in which communication replaced control and intelligence — whether carbon, silicon, or chitin — belonged to the same living systems.

Ten Blue Links – "Platforms, promises and bad habits" edition

2026-01-10 19:19:45

Hello! And welcome back. I have had an extended break over Christmas and the New Year. The one benefit of being useless at taking my holiday allowance is that I usually end up taking December off, and so it proved again this year.

This one is a bit of an Elon special. Sorry.

1. Musk, moderation and make‑believe

Elon Musk is furious that people dislike what his Grok AI is surfacing on X, insisting the backlash is really an “excuse for censorship”. The row is less about one thin‑skinned billionaire and more about how platforms try to reframe basic accountability as an attack on “free speech”, while they quietly change the rules underneath.

2. Google quietly rearranges your inbox

Meanwhile, Google is rolling out AI “overviews” in Gmail that sit at the top of your inbox and tell you what matters, as The Verge explains. On paper, it is a handy triage. In practice, it is another layer of algorithm between you and your email, with Google deciding what deserves your attention first and what can safely sink.

I’ve used various AI-based tools which do this kind of triage, and while it’s useful, it takes a while to get past the feeling that you’re missing something important. The machine needs to understand what’s important to you, and that doesn’t happen out of the box.

Oh, and if you’re a publisher reliant on revenue from email, you might want to think about a new business model.

3. Instagram decides what is ‘real.’

As AI‑generated images flood social feeds, Meta’s Instagram wants to decide what counts as ‘authentic’ through labels, detection systems and policy calls. Om Malik’s piece is a reminder that the power to arbitrate “reality” for hundreds of millions of people has ended up in one company’s hands, with minimal public scrutiny of how those decisions are made.

4. Screens, toddlers and anxiety

A new study in The Lancet finds “neurobehavioural links from infant screen time to anxiety”, adding more data to the uneasy sense that giving small children more screen time earlier does not come for free. The evidence is messy, as real life usually is. Still, the direction is clear enough that health and education policy probably needs to catch up with what parents have been worrying about for years.

5. Grok’s deepfake mess and the gaslighting defence

After Grok users started generating undressed and abusive images, X allegedly tightened access. But “no, Grok hasn’t paywalled its deepfake image feature”. The Verge’s write-up is a tidy case study in the new platform strategy: deny, obfuscate, and suggest critics are just confused, rather than admit the system shipped with barely any guardrails. It’s basically the right-wing communications playbook, but for tech. It also recalls Exxon’s PR response to the 1989 Valdez oil spill, where denial and downplaying the damage came first: corporate avoidance is an old pattern.

6. Bose chooses not to brick your speakers

Instead of quietly killing off older smart speakers, as so many “smart” tech companies have, Bose is “open‑sourcing its old smart speakers instead of bricking them”. That should be the baseline for connected hardware, not a newsworthy exception. It is a small but significant example of a company recognising that when you sell “smart” kit, the responsibility does not end when the marketing cycle moves on.

7. Tesla’s Full Self‑Delusion, again

Tesla has once again missed Elon Musk’s deadlines for unsupervised Full Self‑Driving, prompting yet another round of “is it even worth mentioning…” coverage from The Verge. The shrug is the problem: by repeatedly over‑promising and under‑delivering, Tesla has normalised a gap between marketing claims and on‑road reality in a safety‑critical system that really ought to be held to a higher bar.

8. Makers vs Managers

I’ve never met Paul Graham, so I have no real idea whether he was always an asshat or if he’s been radicalised by social media. Indeed, this has happened to a lot of his cohort: the combination of existing in an ever-shrinking bubble as he’s got richer and richer, plus the echo chamber effect of the terminally online techbro, won’t have helped him.

But he wasn’t always as incapable of either original thought or reflection as he is now. Back in 2009, he wrote a good and influential blog post called “Maker’s Schedule, Manager’s Schedule”, which reflected on the differences between how programmers and managers work. According to Graham, managers have a schedule based on the hour. Makers, on the other hand, prefer to use time in half-day or longer blocks. The conflict between the two can be stark.

It’s worth reading, because I think it frames the problem of time management in an interesting way. Neither kind of schedule is “right” – both are useful for different types of work. But a maker’s schedule can be more efficient if you’re in a role that requires reflection and deep work.

9. Poor Elon

So Tesla is no longer the world’s leading electric car maker, at least by number of cars sold. That title now belongs to China’s BYD. There are a bunch of reasons this has happened, including Elon Musk’s habit of making Nazi salutes on stage, the slowdown in EV purchases in the US, and Tesla’s failure to build lower-priced cars while focusing on crap like the Cybertruck.

But what shouldn’t be ignored is the role that governments have in this. While the US has been winding down subsidies for EVs, China has used its laws to “encourage” car buyers to go electric.

How? Not by subsidies, but by more direct means. In China, the number of license plates is finite. When buying a new car, you apply for a license plate and wait for it to be approved. You might be waiting six months or a year, but you have to wait.

Not, though, if you’re buying electric. On EVs, you’ll get a plate in a couple of weeks. So, if you need a car quickly, you can either buy a BEV (battery-electric vehicle) or a PHEV (plug-in hybrid electric vehicle).

That’s why in a city like Shanghai or Shenzhen, which I recently visited, half the cars you see will have green number plates. This vast, captive market gives vendors like BYD a considerable advantage. And it’s why in ten years, a lot of the cars you see on the road in the West will also be Chinese.

10. The power to be your worst

It seems to be fashionable for the rich to express the power of their AI investments in megawatts and gigawatts. Elon Musk, who has probably lost interest in saving the world by electrifying transport, is a prime example. You can hear the delight in his eight-year-old boy’s brain at his new server centre, which will, apparently, take his computing capacity for Grok to over two gigawatts.

To put that in context, that is enough to provide electricity to 1.5 million US homes. Or, because Americans use more electricity per home than anyone else, about 4.5 million UK homes. Or 15m homes in Kenya or Nigeria.
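For the curious, the conversion from generating capacity to “homes powered” is simple arithmetic: divide the capacity by the average draw of a single home. A minimal Python sketch, using assumed average annual consumption figures (roughly 10,500 kWh for a US household and 3,700 kWh for a UK one), lands in the same ballpark as the numbers above:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def homes_powered(capacity_watts, annual_kwh_per_home):
    """Rough count of homes a generating capacity could supply,
    assuming a constant average draw (ignores peaks and losses)."""
    avg_watts_per_home = annual_kwh_per_home * 1000 / HOURS_PER_YEAR
    return capacity_watts / avg_watts_per_home

grok_capacity_w = 2e9  # two gigawatts, per the post

# Assumed average annual household consumption (illustrative figures):
print(round(homes_powered(grok_capacity_w, 10_500)))  # US homes, roughly 1.7 million
print(round(homes_powered(grok_capacity_w, 3_700)))   # UK homes, roughly 4.7 million
```

The exact totals shift with whatever consumption figure you plug in, which is why estimates like “1.5 million US homes” vary from article to article.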

And all that so that people can make child porn more easily.

Ten Blue Links “Orion rising, EV prices falling and ads arriving” edition

2025-11-30 02:57:30

1. Lawyers, assemble!

Authors v OpenAI part 273. A judge has ordered disclosure of internal communications about dataset deletions. This sounds incredibly dull, but it matters in determining whether there was willfulness in OpenAI’s blatant theft, and that, in turn, determines how much in damages they are potentially on the hook for. If the entire AI industry falls into a pit of doom because it didn’t think it was worth spending a few hundred million on licensing, while spending hundreds of billions on compute, I for one will laugh my socks off. The Hollywood Reporter has the best quick read.

2. The job number you won’t like

MIT reckons AI could already replace 11.7% of US wages. That is not 11.7% of jobs, but it is still a significant number. The policy story is local, not national. Which towns get hit, which sectors hollow out, and who pays to reskill? Start with CNBC, then ask politicians for the postcode‑level cut, because that’s where all the inequalities will lie.

3. Timeless Tekserve

A wonderful celebration of David Lerner, co‑founder of Tekserve. It is a great reminder that repair, care and community support beat “move fast” every day of the week. The New York Times captures why that shop mattered to so many New Yorkers and to the broader Mac community.

4. China’s EV charm offensive (with ring lights)

Car YouTubers are flying out, filming, and flipping their opinions as Chinese EVs step up on design and price. Attention markets meet industrial and propaganda strategy in this neat piece from The Verge. And it’s more than just perception: China has got really good at EVs in a remarkably short period of time.

5. Trade policy with the subtlety of a brick

A senior EU figure calls US negotiating tactics “blackmail” over metals tariffs versus digital rule‑making. Even M. Le Président himself says Brussels is “afraid” to tackle Big Tech. Hyperbole aside, this is a tidy snapshot of how tech regulation and geoeconomics get horse-traded, and how even big blocs can’t protect us when we’re so reliant on one country’s technology giants.

6. A browser that respects you

Kagi’s Orion 1.0 is out. WebKit under the hood, privacy first by design, and a set of thoughtful touches that make Chrome feel bloated (because Chrome is bloated). Even if you don’t switch, it is good to be reminded that defaults are choices. Details on Kagi’s blog, and one of the most important details: the Linux version is coming next. Eat that, Windows suckers!

7. Oh, you thought it was all private, did you?

OpenAI is preparing to roll out ads in ChatGPT. I’m willing to bet that they won’t put ads into paid accounts. Or at least, they won’t at first. The general law of enshittification means that even if you’re paying a hefty fee to them every month, your data will be used to target you with ads in a year or two.

8. Meet the new boss

It turns out that some of the biggest users of AI are executives. Yes, the people who think that the jobs of almost every entry-level employee can be replaced by AI are, in fact, demonstrating how much they can be replaced by AI.

The fact is that an awful lot of what executives do is exactly the kind of thing which LLMs can do just as well. The part they can’t replace is the genuine art of leadership: getting people to all go in the same direction at the same time, with a reasonable degree of happiness and confidence. These “soft” leadership skills are usually the ones Silicon Valley execs don’t have and don’t value, so I will be very pleased when they make themselves basically redundant.

9. Apple is making something unimaginable

I’m not one of those who believe that William Gibson’s Neuromancer is basically unfilmable. But it’s certainly one of those books which, if done wrong, will be an absolute mess.

So I’m actually quite happy that Apple is taking a crack at it. The company has proven it can do science fiction, even if Foundation remains a dull plod (because the books are a dull plod). One thing that already stands out is the casting – I can’t imagine a better Armitage than Mark Strong, who can shift from professionalism to ultraviolence and extreme threat in a heartbeat.

10. And finally, something wonderful

Maruyama Ōkyo popularised the shasei technique, painting directly from nature to convey the inner lives of animals, such as puppies. This approach reflects Buddhist beliefs that all beings are animated by spirits, allowing painters to express emotions and subjective experiences. If you’re a teacher, you should download this lesson plan about his wonderful work. Or just enjoy the elegance of the art.

Ten Blue Links, “Pop goes the bubble” edition

2025-11-23 05:19:39

1. Being a Luddite is cool, actually

For years, calling someone a “Luddite” was the ultimate insult in Silicon Valley—a shorthand for being backwards, anti-progress, and probably afraid of your own toaster. But as Brian Merchant points out in this excellent piece in the New Yorker, we’ve got the Luddites all wrong. They weren’t anti-technology – they were anti-poverty. They destroyed looms not out of hatred for machines, but because they opposed how those machines were used to suppress wages and devastate their communities.

As we watch the ‘AI revolution’ unfold, the deployment of technology to replace workers or worsen conditions mirrors the Luddites’ concerns. Merchant’s description of people smashing Ring cameras and printers highlights justified anger over who benefits from these innovations. Asking ‘who does this technology actually serve?’ remains the essential question.

2. The ladder is being pulled up

If you want to know why Gen Z is anxious, look at the entry-level job market. As this piece details, the traditional first rungs of the career ladder are being sawed off by AI. Jobs that used to be the training ground for new graduates—copywriting, basic coding, data analysis—are precisely the ones that LLMs are “good enough” at doing for pennies.

Executives, naturally, are delighted to cut headcount. But there’s a massive systemic risk here that nobody seems to be planning for. If you don’t hire juniors, where do your seniors come from in five years? We’re creating a hollowed-out workforce structure where you either enter as an expert or you don’t enter at all. The “reskilling” narrative is a convenient fiction because you can’t reskill for a job that doesn’t exist. This approach risks a youth unemployment crisis that could make 2008 look mild by comparison.

3. Your asteroid mining startup is a cry for help

There is a pervasive immaturity in tech culture, a refusal to engage with the messy reality of the world as it is. TechCentral puts it brutally: the obsession with sci-fi futures—asteroid mining, Mars colonies, AGI—is a form of escapism. It’s fundamentally unserious and distracts from addressing urgent societal issues.

This wouldn’t matter if these were just the daydreams of nerds in a basement. But these are the people controlling the capital and the infrastructure of our digital lives. When you justify burning the planet’s resources today for a hypothetical techno-utopia tomorrow, you’re not a visionary; you’re a vandal. We need fewer spaceships and more maintenance of the things that actually keep society running. But I guess fixing the trains or paying for social care doesn’t get you a TED talk.

4. Even the AI guys are worried about the AI guys

Dario Amodei runs Anthropic, one of the leading AI businesses. You’d think he’d be the first to tell you everything is fine. Instead, he spends a lot of time warning us that his own industry is potentially building something catastrophic.

It’s a strange cognitive dissonance: “We must build this powerful thing before the bad guys do, even though building it might kill everyone.” Amodei talks a good game about safety and responsibility, and compared to the accelerationists at OpenAI and the weird thieves at Perplexity, he sounds like the adult in the room. But he’s still in the room, pouring gasoline on the fire, just doing it with a slightly more worried expression. If the people building the tech are terrified of it, maybe we should listen to their fears rather than their sales pitch.

5. You can’t fix the web with a memoir

Tim Berners-Lee gave us the web. Now he wants to save it. In a new memoir, he laments the commercialisation and centralisation that have turned his open garden into a series of walled prisons owned by five companies.

It’s hard not to feel sympathy for TBL. He built a tool for connection, and it was weaponised for surveillance capitalism. But his lament over the ‘missed opportunity’ of micropayments points to the wrong fix. The web’s problems are ones of political economy, and they demand systemic remedies like vigorous antitrust enforcement, not just new protocols.

6. Party like it’s 1999 (until the hangover hits)

Does the current AI boom feel familiar? It should. As Crazystupidtech points out, the vibes are distinctly late-90s. We have the same astronomical valuations for companies with zero revenue, the same “this time it’s different” rhetoric, and the same FOMO driving otherwise rational investors to throw billions at anything with “.ai” in the domain name.

The spending is projected to hit $1.5 trillion. The revenue, though, is nowhere near that. When the correction comes—and it will—it’s going to be ugly. The difference is that when the dot-com bubble burst, we got cheap fibre and Amazon. When the AI bubble bursts, we might just be left with a lot of useless GPUs and a melted ice cap.

7. Free your ears from the ecosystem

Apple’s “walled garden” is nowhere more evident than in how AirPods work—or don’t work—with non-Apple devices. Enter LibrePods, an open-source project to unlock the full functionality of your expensive earbuds on Android.

This is the kind of hacking (in the original, good sense) that makes technology fun again. It’s a reminder that we bought the hardware and can use it however we want. It’s a small victory against the ecosystem lock-in that tries to turn us from owners into renters of our own devices.

8. Grok confirms Musk is the Messiah, surprisingly

Elon Musk’s AI, Grok, recently started outputting paeans to its creator, declaring him superior to Einstein. As ReadTPA notes, this wasn’t a bug; it’s a feature of how these systems mirror the biases of their creators and their training data.

Musk blamed “adversarial prompting,” which is tech-speak for “people asked it questions I didn’t like.” But it reveals the danger of these “truth-seeking” AIs. They don’t seek truth; they aim to please their prompter or their owner. When the owner is a billionaire with a messiah complex, you get a digital sycophant. It’s funny, until you remember people are using this for news.

9. The AI training data is going to be… interesting

Speaking of training data, the Guardian reports that hundreds of websites are now unwittingly (or wittingly) linking to a massive pro-Kremlin disinformation network. This content is flooding the web, and inevitably, it’s flooding into the datasets used to train the next generation of LLMs.

We talk about “hallucinations” in AI, but what happens when the model isn’t hallucinating, but accurately reporting the lies it was fed? We are polluting the information ecosystem at an industrial scale, and then building machines to summarise that pollution for us—garbage in, authoritarian propaganda out.

10. The fate of Google’s ad empire hangs in the balance (but don’t hold your breath)

The closing arguments are underway in the US government’s attempt to break up Google’s ad tech monopoly, and now Judge Leonie Brinkema has gone away to think it over. The New York Times reports that her decision won’t land until next year, but she’s already fretting about whether a breakup would take too long compared to a slap on the wrist.

This is the classic regulator’s dilemma. Do you try to structurally fix a broken market and accept it will take years of appeals, or do you settle for a “behavioural remedy” that the company will immediately lawyer its way around? Google, naturally, is arguing for the latter. They want to keep their money-printing machine intact, where they represent the buyer and the seller and run the auction house.

If Brinkema bottles it and opts for behavioural tweaks, she’ll be repeating the mistakes of the past twenty years. You cannot regulate a monopoly that owns the entire stack by asking it to play nice. You have to take the toys away. Yes, a breakup is messy and slow. But the alternative is a permanent tax on the entire internet paid directly to Mountain View.

Ten Blue Links “Don’t Be Evil (some conditions apply)” edition

2025-11-16 22:02:20

1. This is fine dot gif of the week, part the first

Cookie banners are theatre. Consent is supposed to be the point. The European Commission is now workshopping a new ending: “you consented, spiritually”. The plan hands a blank cheque to AI training with vanishing upside for Europe. Of course this benefits big American companies, but it's hard to see how it gives anything to anyone in Europe.

2. I want to shoot Bluesky just to watch it die

I love writers that can make me read a line, stop, and read it again. Sarah Kendzior is that kind of writer (just go and read this if you don’t believe me). Sarah recently got banned from Bluesky for – and I am not making this up – quote-tweeting an article about Johnny Cash and making a wordplay on “Folsom Prison Blues”. The moderators thought that the line “I want to shoot the author of this article just to watch him die” was an actual threat, rather than just pasticheing a line from the song.

Remember that if one company owns the rules, one moderator controls your voice. Tim Bray has a good overview.

3. Oh come on, you have always wanted this

Apple made a weird looking pocket for your iPhone. It’s nearly sold out, which proves two things: capitalism is undefeated, and shame is on back‑order.

4. AI data centre jobs are as mythical as AGI

Remember how every time the government talks about AI and data centres it mentions the thousands of jobs which will be created? Turns out that isn’t true. The build phase hires – and then fires – lots of workers. The run phase hums quietly with few technicians, a lot of cooling, and the kind of power use that could run thousands of houses. And those temporary jobs build something that leads to the elimination of far, far more jobs.

Interestingly, in its enthusiasm for all things AI, the UK government seems to have forgotten that its own report from 2023 predicted that between 10% and 30% of jobs could be automated and simply vanish. The effects will also disproportionately hit London and the South East. Imagine 5-10% unemployment becoming the norm, rather than the exception, and without the kind of social safety net we had in the Thatcher era of mass unemployment. It's going to be a wild few years.

5. Just when you thought tech bros couldn’t get any worse

They managed it. After all, nothing says “closure” like an ongoing subscription to see your loved ones again, a seance with in-app purchases. Who, I ask, could possibly think that AI-driven avatars of your dead family would be a good idea? People who read too much science fiction that the author intended to be a dystopia and thought, “hey, that sounds cool”. That’s who.

6. It might be all our fault, but we got some things done

Via Pixel Envy comes this great look at the history of Last.fm, one of the best and most long-lasting Web 2.0 projects. It didn’t need anything agentic, just clever tech and some actual, human friends. Remember Web 2.0? The excitement! The optimism! We had interoperability once, we just called it “links”.

7. Don’t be evil, but do collaborate with the government to deny peoples’ constitutional rights

It's been a long time since Google erased all trace of its “informal motto” from its code of conduct, presumably on the grounds that as a publicly-traded company being evil might actually be the best way to make a profit. And the company has travelled a long way down since then. But I think that instantly hosting an app designed to unconstitutionally target people for deportation, days after removing legal apps which allow people to report sightings of ICE agents, really does take the No-Prize for collaboration and hypocrisy.

8. I for one welcome our new robo-canine overlords

What could possibly go wrong?

9. Guys, just read some better books, OK?

Another great example of how Silicon Valley is obsessed with science fiction is the influence of Iron Man’s Jarvis on the vision of artificial intelligence they have. I just wish that they would look to different kinds of visions of the future which don’t involve giant robots, egotistical men and the doom of humanity. If only they had read Le Guin instead.

10. The longevity of the MacBook Pro M1

There is a lot about the transition to ARM which has been great for Mac users. The M-series sips power and delivers the kind of performance that means even early versions are still performing really well today.

But.

This longevity poses something of a challenge for Apple, which would love you to upgrade every few years. In the Intel era Mac users tended to fall into two camps: power users who needed the best performance, who would upgrade every couple of years; and the rest of us who didn’t need that kind of performance, but would find that battery life dropped down to 2-3 hours after maybe five years, so would upgrade then.

Neither of these scenarios is the same in the M-series era. When your starting point for performance and battery life is as high as ARM delivers for Macs, you’re much less likely to need to upgrade. Except Apple only gives five years of upgrades for macOS, with another three years of security updates. That’s eight years – not bad, but probably not what the M-series could support.

This is why projects like Linux for M-series Macs are so important. Why consign a perfectly good computer to e-waste just because its maker no longer wants to write software for it?

If Apple was… well, not Apple, it would sponsor and support something like Asahi Linux as a way to extend the working lives of its products. A few million dollars – chump change for a $4 trillion company – alongside some technical support would make a huge difference to an open source project. It would make a small difference to the sales of Macs while adding a sheen of something different to how the company is perceived.

So why not? Well, remember “don’t be evil”? We are no longer in the era of Apple, or any other big tech company, really needing to care. And that era of “caring capitalism” isn’t going to come back.

Ten Blue Links, “forever blowing bubbles” edition

2025-11-10 00:53:31

Ten Blue Links, “forever blowing bubbles” edition

This week’s topic is AI. In some ways, every week’s topic is AI at the moment, but this one especially so.

1. DeepSeek’s big debut comes with a cold shower about jobs

In their first major public outing since going global, a senior DeepSeek researcher warned that AI’s short‑term upsides could give way to serious employment shocks within 5–10 years, as reported by Reuters. That’s a rare moment of candour in an industry that has largely made out that this is all for the benefit of mankind.

China is, of course, positioning DeepSeek as proof it can innovate despite (or perhaps because of) US sanctions, and the company keeps shipping — including an upgraded model tuned for domestic chips. The subtext is that AI scale is coming either way. The only real question is whether policy and industry will manage the human transition or pretend it’ll sort itself out.

2. Meta’s fraud economy problem

Internal docs suggest Meta could book around 10% of 2024 revenue — roughly $16bn — from ads linked to scams and banned goods, according to Platformer. Pair that with the finding that a third of successful US scams touch Meta’s platforms, plus user reports being allegedly ignored 96% of the time, and you have a portrait of the company’s incentives gone feral. When fines are rounding errors and high‑risk ads are lucrative, why should Meta even bother trying to fix this?

3. Are AI bubbles good for society?

You can guess what I’m going to say. The romantic story is that bubbles leave behind useful infrastructure. The less romantic truth, as the FT notes, is that they also waste capital, invite fraud, and distort priorities. The AI boom is a typical bubble, with huge build‑out, overheated expectations and crowd psychology. Useful to remember when every cost is waved away with “progress.”

4. Big Tech quietly dims the diversity lights

Google, Microsoft, and Meta have stopped releasing workforce diversity statistics, citing shifting politics and priorities—a reversal covered by Wired. Apple, Amazon and Nvidia still publish. Transparency isn’t a panacea, but turning off the lights makes it harder to see whether representation improves or slides backwards. The message is clearly “this isn’t a focus anymore.” If it ever really was.

5. A self‑driving tragedy that says the quiet part out loud

After a Waymo car killed a beloved neighbourhood cat in San Francisco, the backlash wasn’t just about one incident. As Bay Area Current recounts, it tapped into a deeper resentment about tech occupying public space without owning the consequences. Corporate condolences don’t cut it when accountability feels optional. If autonomy wants public trust, it needs humility — and skin in the game when things go wrong.

6. Philanthro‑optimism meets politics

Bill Gates’s recent pivot from “cut carbon now” to the fuzzier ideal of “human flourishing” has been rightly read as a retreat from climate politics. The critique — laid out in Dave Karpf’s newsletter — is that technology can’t substitute for legitimacy, coalition‑building, and the grind of governance, especially under an administration openly hostile to climate action. If the plan relies on benevolent billionaires, it’s not a plan.

7. Amazon vs. Perplexity is a preview of the agentic web

Amazon sent a cease‑and‑desist to Perplexity over its Comet shopping agent operating on Amazon.com, alleging ToS violations and potential fraud, per Platformer. Perplexity says it’s enabling user intent rather than impersonating it. Beyond the legal wrangling is a bigger question: when AI agents do the browsing and buying, who holds power — platforms, publishers, or the agent itself (somehow)?

8. “Only biology can be conscious,” says Microsoft’s AI chief

Mustafa Suleyman wants developers to stop flirting with machine consciousness and focus on useful systems that don’t pretend to feel pain, as he told CNBC. Treating models as tools rather than quasi‑people might spare us a lot of anthropomorphic nonsense and some bad policy.

9. I don’t know, just why would kids not be working?

The number of young people not in education, employment or training is rising, and the government is bamboozled as to why. But the answer is pretty obvious: the very AI which the government has been championing is hitting the entry-level job market hard. And employers are finally admitting what anyone with a brain would already know: they are using AI to cut headcount.

It’s going to get worse. Earlier this year, Dario Amodei, CEO of Anthropic, predicted that AI could eliminate half of all entry-level jobs, and that this would disproportionately affect what we used to call white-collar work. For decades, a university education has been pitched as the gateway to one of these higher-paying jobs. Now that that’s gone, young people have less incentive to stay in education. Who wants to be saddled with £50,000 of debt when you’re going to end up unemployed anyway?

As demand for degrees falls, this will lead to further pressure on our already near-bankrupt universities. And… you can see where all this goes.

10. That rumble you can hear is the sound of the impending a(i)pocalypse. Maybe.

Two hundred billion dollars. That’s how much debt has been loaded onto the markets to fund the relentless expansion of AI capabilities that tech companies are currently indulging in. If that sounds scary, it’s understandable.

To put that into context, OpenAI’s Sam Altman has publicly stated the company is on its way to $100bn a year in revenue. And if those predictions turn out to be true, then that $200bn looks like a bargain.

Some people, though, have predicted we are in for a 2008-style crash when – not if – the AI bubble implodes. But there are some profound differences between now and 2008. The 2008 crisis was driven by loose underwriting, subprime defaults, and complex securitisations (MBS/CDOs) that transmitted losses through the global banking system. There’s no evidence that this is happening now.

A burst AI bubble would more likely manifest as equity drawdowns, capex cuts, and sector-specific spread widening rather than a cascading credit crisis via complex securitisations.

For me, the question marks over AI aren’t about potential financial risks, but about societal and cultural risks. Mass unemployment amongst the young rarely leads to a more stable society and will magnify the division between the young and the old, who have property and pensions to fall back on. Meanwhile, a class of billionaires will take the message from AI that they no longer need the rest of us. It’s going to be a difficult decade.