2025-11-10 00:53:31
This week’s topic is AI. In some ways at the moment, every week's topic is AI, but this one especially so.
In their first major public outing since going global, a senior DeepSeek researcher warned that AI’s short‑term upsides could give way to serious employment shocks within 5–10 years, as reported by Reuters. That’s a rare moment of candour in an industry that has basically made out this is all for the benefit of mankind.
China is, of course, positioning DeepSeek as proof it can innovate despite (or perhaps because of) US sanctions, and the company keeps shipping — including an upgraded model tuned for domestic chips. The subtext is that AI scale is coming either way. The only real question is whether policy and industry will manage the human transition or pretend it’ll sort itself out.
Internal docs suggest Meta could book around 10% of 2024 revenue — roughly $16bn — from ads linked to scams and banned goods, according to Platformer. Pair that with the finding that a third of successful US scams touch Meta’s platforms, plus user reports being allegedly ignored 96% of the time, and you have a portrait of the company’s incentives gone feral. When fines are rounding errors and high‑risk ads are lucrative, why should Meta even bother trying to fix this?
You can guess what I’m going to say. The romantic story is that bubbles leave behind useful infrastructure. The less romantic truth, as the FT notes: they also waste capital, invite fraud, and distorte priorities. The AI boom is a typical bubble, with huge build‑out, overheated expectations and crowd psychology. Useful to remember when every cost is waved away with “progress.”
Google, Microsoft, and Meta have stopped releasing workforce diversity statistics, citing shifting politics and priorities—a reversal covered by Wired. Apple, Amazon and Nvidia still publish. Transparency isn’t a panacea, but turning off the lights makes it harder to see whether representation improves or slides backwards. The message is clearly “this isn’t a focus anymore.” If it ever really was.
After a Waymo car killed a beloved neighbourhood cat in San Francisco, the backlash wasn’t just about one incident. As Bay Area Current recounts, it tapped into a deeper resentment about tech occupying public space without owning the consequences. Corporate condolences don’t cut it when accountability feels optional. If autonomy wants public trust, it needs humility — and skin in the game when things go wrong.
Bill Gates’s recent pivot from “cut carbon now” to the fuzzier ideal of “human flourishing” has been rightly read as a retreat from climate politics. The critique — laid out in Dave Karpf’s newsletter — is that technology can’t substitute for legitimacy, coalition‑building, and the grind of governance, especially under an administration openly hostile to climate action. If the plan relies on benevolent billionaires, it’s not a plan.
Amazon sent a cease‑and‑desist to Perplexity over its Comet shopping agent operating on Amazon.com, alleging ToS violations and potential fraud, per Platformer. Perplexity says it’s enabling user intent rather than impersonating it. Beyond the legal wrangling is a bigger question: when AI agents do the browsing and buying, who holds power — platforms, publishers, or the agent itself (somehow)?
Mustafa Suleyman wants developers to stop flirting with machine consciousness and focus on useful systems that don’t pretend to feel pain, as he told CNBC. Treating models as tools rather than quasi‑people might spare us a lot of anthropomorphic nonsense and some bad policy.
The number of young people not in education, employment or training is rising, and the government are bamboozled as to why. But the answer is pretty obvious: the very AI which the government has been championing is hitting the entry-level job market hard. And employers are finally admitting what anyone with a brain would know: they are using AI to cut headcount.
It’s going to get worse. Earlier this year, Dario Amodei, CEO of Anthropic, predicted that AI could take away half of all entry-level jobs, and that this would disproportionately affect what we used to call white-collar jobs. For decades, a university education has been pitched as the gateway to one of these higher-paying jobs. Now that that's gone, young people have less incentive to stay in education. Who wants to be saddled with £50,000 of debt when you’re going to end up unemployed anyway?
As demand for degrees falls, this will lead to further pressure on our already near-bankrupt universities. And… you can see where all this goes.
Two hundred billion dollars. That’s how much debt has been loaded onto the markets to fund the relentless expansion of AI capabilities that tech companies are currently indulging in. If that sounds scary, it’s understandable.
Now to put that into context, OpenAI’s Sam Altman has publicly stated the company is on its way to $100bn a year in revenue. And if those predictions turned out to be true, then that $200bn looks like a bargain.
Some people, though, have predicted we are in for a 2008-style crash when – not if – the AI bubble implodes. But there are some profound differences between now and 2008. The 2008 crisis was driven by loose underwriting, subprime defaults, and complex securitisations (MBS/CDOs) that transmitted losses through the global banking system. There’s no evidence that this is happening now.
A burst AI bubble would more likely manifest as equity drawdowns, capex cuts, and sector-specific spread widening rather than a cascading credit crisis via complex securitisations.
For me, the question marks over AI aren’t about potential financial risks, but about societal and cultural risks. Mass unemployment amongst the young rarely leads to a more stable society and will magnify the division between the young and the old, who have property and pensions to fall back on. Meanwhile, a class of billionaires will take the message from AI that they no longer need the rest of us. It’s going to be a difficult decade.
2025-11-04 01:55:31
If 21st-century capitalism has one central tenet, it's that innovation rules and disruption drives progress. And generative AI is one of the most disruptive innovations of our lifetime. Which is why, as Morra Aarons-Mele writes in HBR, it's making us all so bloody anxious.
The anxiety is entirely rational. Our nervous system is designed to react to sudden change and perceived threats, and AI is delivering stressors at both scale and speed. Aarons-Mele identifies three drivers: lack of control over the speed of change, loss of meaning in work, and uncomfortable emotions we'd rather avoid. That last one is particularly acute when your boss casually refers to ChatGPT as the "Chief Marketing Officer" in front of the actual CMO.
The article's strength is in treating AI anxiety not as something to be suppressed, but as information to be understood. What values are being threatened? Is it fairness? Trust? Craft? Agency? Understanding your emotional response, Aarons-Mele argues, is the key to acting strategically rather than reactively. As Brené Brown puts it: "If you're not feeling unsettled, you're probably not paying attention."
Elon Musk's latest wheeze is Grokipedia, an AI-powered "encyclopedia" that's supposed to be a superior alternative to Wikipedia. Spoiler: it isn't. Instead of addressing Wikipedia's perceived biases, it simply reinforces Musk's own worldview, whilst lacking the reliability that makes Wikipedia actually useful. Read more at the FT.
The fundamental problem, as Jemima Kelly points out, is that Grokipedia illustrates the ongoing challenges of maintaining truth and accuracy in AI-generated knowledge without proper human oversight. Wikipedia works not despite its human editors, but because of them. It's a system built on transparency, sourcing, and endless arguments about whether something is notable enough to include.
What Musk has created is the opposite: an opaque black box that produces plausible-sounding text with all the hallmarks of AI-generated slop. It's telling that someone with Musk's resources and apparent obsession with "free speech" can't grasp that knowledge curation requires exactly the kind of human judgment and community governance he seems to despise. Another win for the "move fast and break things" crowd.
OpenAI is ending October with a new for-profit structure, a new deal with Microsoft, and an entirely new level of pressure to achieve artificial general intelligence. The Verge has the details.
The stakes are enormous. The 2019 partnership between OpenAI and Microsoft included an "AGI clause" that said Microsoft's rights to use OpenAI's technology would end once OpenAI achieved AGI. But now everything has changed. Microsoft can independently pursue AGI alone or with third parties, and crucially, can use OpenAI's IP to do it. An independent panel will now verify any AGI declaration, rather than OpenAI making unilateral decisions.
This restructuring puts billions of dollars on the line and kicks off an arms race in earnest. Microsoft could work with Anthropic or other OpenAI competitors to reach AGI first. Meanwhile, Sam Altman keeps shifting the goalposts, recently saying AGI is "hugely overloaded" as a term. What's clear is that the race to define and achieve AGI is now as much about corporate competition as it is about technological capability.
The German state of Schleswig-Holstein has completed its migration from Microsoft Exchange and Outlook to open source alternatives, moving over 40,000 mailboxes with more than 100 million emails to Open-Xchange and Thunderbird. Heise reports on what Minister Dirk Schrödter calls a milestone for digital sovereignty.
This isn't just about email. The state is systematically replacing its entire Microsoft stack: LibreOffice is replacing Office, Nextcloud is replacing SharePoint, and eventually Linux will replace Windows across all state computers. It's a genuinely ambitious project, and one that hasn't been without problems — Schrödter recently had to admit to errors and downtime during the migration.
But here's the thing: they're doing it anyway. The northern German state is betting that independence from large tech companies is worth the pain of transition. And unlike most digital sovereignty projects, this one is actually happening, with real deadlines and real consequences. Other European governments are watching closely.
Cory Doctorow has written a brilliant piece arguing that Europe's digital sovereignty efforts will fail without repealing Article 6 of the EU Copyright Directive. His point? You can build all the European alternatives you want, but if it's illegal to create the tools to migrate away from US platforms, you've achieved nothing.
The anti-circumvention laws the EU adopted under US pressure create a legal barrier — a "Berlin Wall," as Cory calls it — that prevents European companies from building migration tools. The Digital Markets Act requires interoperability, but it relies on gatekeepers' goodwill. We've seen how that works with Apple's malicious compliance.
Cory's insight is that this isn't just about consumer rights or repair anymore. It's about whether Europe can ever break free from US tech dependency. Building a European cloud stack is pointless if your citizens and businesses can't legally extract their data from Microsoft, Google, and AWS. The EuroStack Initiative is taking note.
Francis Fukuyama has written a thoughtful piece puncturing the AI hype bubble. His argument is simple: even if we achieve superintelligent AI, it won't deliver the explosive economic growth that Silicon Valley promises because intelligence isn't the binding constraint on growth.
The real constraints are material and political. We're already running into planetary limits. China, America, and Europe, at 10 per cent annual growth, would rapidly exhaust agricultural land, water, energy, and everything else. At the micro level, translating smart ideas into physical products requires iterative testing in the real world, which no amount of intelligence can simulate.
But the killer point is about implementation. AGI might know how to provide clean water to a struggling city in the developing world. But the problem isn't knowledge — it's the political realities of vested interests, corruption, and armed water mafias. Intelligence doesn't overcome those obstacles. It's a useful corrective to the cathedral of genius worship that Silicon Valley has become.
Claer Barrett at the FT has been experimenting with AI shopping assistants, and unlike most AI hype, this actually sounds useful. She describes using an AI app to find a white bookcase, specifying exact requirements, including that she hates DIY and wants it fully assembled.
Within seconds, she had a comparison table of UK retailers, prices, delivery times, and even a summary of customer service reviews. Adobe reckons more than half of US shoppers will be using some form of GenAI by the end of this year. The next phase is "agentic commerce" — AI not just recommending products but actually completing transactions.
Shopify has partnered with OpenAI to enable merchants to sell directly through ChatGPT conversations. As these apps collect data about our lives, they'll start anticipating our needs and suggesting purchases. Retailers are terrified about what happens to brand loyalty when AI intermediates every purchase. But if an AI shopping agent could check my bank balance and arrange delivery on a day I'm working from home? I might be persuaded.
Canva has scrapped the three separate Affinity apps and combined them into a single free Mac app. On the surface, this is good news — no subscriptions for core functionality, and the apps aren't being asset-stripped or rebranded.
But there are caveats. The three apps (Publisher, Photo, and Designer) are now just modes within a single app, accessed via Vector, Pixel, and Layout buttons. Users will need to heavily customise toolbars to recreate their workflows. The iPad version won't arrive until 2026. Existing Affinity documents need to be updated to open in the new app, so they won't work in the old software anymore.
Still, compared to Adobe's relentless subscription price hikes, a free one-off download is a relief. Canva clearly sees this as its route to the professional market, and presumably expects to monetise through its paid AI tiers. For users who loved the original Affinity trio, it's bittersweet — the apps aren't dead, but they're not quite the same either.
Microsoft CEO Satya Nadella has revealed something that should worry us all: the company has GPUs sitting in inventory that it can't plug in because there isn't enough electricity. The problem isn't a shortage of chips, but a shortage of power to run them.
This isn't some theoretical future concern. AI data centres are already causing consumer energy bills to skyrocket in the US. OpenAI is calling on the federal government to build 100 gigawatts of power generation capacity annually, framing it as a strategic asset in the race against China. Meanwhile, Beijing is apparently miles ahead in electricity supply thanks to massive investments in hydropower and nuclear power.
This demand for power cannot be endless. We're already butting up against planetary limits, and if we're planning to meet AI's insatiable appetite for electricity by burning more fossil fuels, we're trading fast computers for an unliveable climate. The AI industry is effectively asking us to choose between their profit margins and a habitable planet. And right now, it looks like they're winning that argument.
Silicon Valley's joyless digital monks, subsisting on meal replacement drinks to maximise productivity, have been accidentally dosing themselves with toxic lead and cadmium. Consumer Reports found that Huel's Black Edition powder contains 6.3 micrograms of lead per serving — 13 times the daily recommended limit — plus double the safe daily serving of cadmium.
Long-term lead exposure causes kidney dysfunction, hypertension, nervous system damage, and decreased cognitive performance. Cadmium causes cumulative nervous system damage. If you've ever wondered why some tech bros have such bizarre views and erratic behaviour, chronic heavy metal poisoning that impairs cognitive function might be part of the explanation.
They've been optimising themselves into toxicity, reducing the "messy realities of existing in a body" to industrial slurry. The same naive confidence that code can solve everything has led them to replace food with brain toxins. It's the perfect metaphor for Silicon Valley hubris.
2025-10-27 00:23:32
Hey, I actually wrote some things this week! The first was about politics, riffing off Amazon's plan to dump half a million workers thanks to "automation", and the second was a bit of a meditation on the division between tech, humans and nature. The themes in these pieces are ones that I have been mulling on for a while. I have another one brewing which masquerades as something that's about the movie Phase IV (which you should watch) but is actually on the connection between 1960s cybernetics and the online world we have constructed now. It still needs some work, but I should post it this week.
One thing I'm not sure about is whether I should send these out via email or just post them online (if you follow me on The Socials or Old School RSS you will, of course, see them). What do you think? Let me know in the comments or via email.
You might have noticed half the internet falling down on its backside earlier in the week, thanks to an outage at Amazon Web Services which most of your digital life that's not made by Google or Microsoft uses. It was, of course, DNS (it's always DNS). As Wendy points out, this has added to the general call for more digital sovereignty -- if you imagine that the government's Digital ID idea (insert rolling eyes emoji here) might rely on services like this, it's probably a good idea that the UK owns its own tech stack.
I'm not an infrastructure expert, but I should note that I barely noticed the outage. I've been gradually moving away from services which are based in the US, including Amazon, mostly for reasons of privacy and lack of trust in the Trump administration. Proton (which I use for most of my email) runs its own servers, in Switzerland. Draw your own conclusions.
In Europe, when a company sells you a laptop, phone, or anything in a few other electronic device categories, it has to give you the option of not including a power adapter. The aim is to cut e-waste and gently encourage common standards for charging, mostly around USB-C. It's part of the EU’s wider “right to repair” and sustainable electronics strategy, aimed at making devices longer-lasting, interoperable, and less wasteful to produce and sell.
How big a problem is e-waste from chargers? The European Commission (EC) estimates that unused chargers from small devices account for about 11,000 tonnes per year, and, if you include larger devices like laptops, that rises to around 35,000 tonnes per year. This is a small fraction of the amount of e-waste per year, but it's not trivial, especially given that it's really simple to avoid.
But here's John Gruber, who has never seen an EC directive that he didn't dislike on sight if it affects Apple:
Anyway, the reason this regulation is subject to ridicule was never that European MacBook buyers were, effectively, paying for a charger that was no longer included. It’s that this is a silly law, and likely causes more harm than good. If Apple thought it was a good idea to no longer include power adapters in the box with MacBooks, they’d just stop including chargers in the box, worldwide. That’s what Apple started doing with iPhones with the iPhone 12 lineup five years ago. That wasn’t because of a law. It was because Apple thought it was a good idea.
Remind me again why Apple uses USB-C on iPhones? Apple fought tooth and nail against this simple step which both reduced e-waste and actually ended up with a better experience for users. Oh, and remember when people were saying that the thickness of USB-C compared to Lightning would mean it would be hard to make thinner phones? Hello, the iPhone Air called!
Which is odd, because Apple claimed at the time that using USB-C would put the brakes on innovation:
“We remain concerned that strict regulation mandating just one type of connector stifles innovation rather than encouraging it, which in turn will harm consumers in Europe and around the world.”
Of course the obvious riposte to this is "yeah, but what could they do if they didn't have to accomodate USB-C?" And the answer is "not much, unless they develop new battery technologies."
I just bought a new Framework laptop. Like most vendors, you get the option not to include a power adapter, or to add one at extra cost. I didn't bother, and when I use it at home it gets plugged into chargers from Anker, Asus -- and of course, Apple. This is another example where European customers have already moved on, while the US lags behind.
Yes, you read that right.
This is actually a return to classic pre-Twitter Musk, the guy who believes that it's going to end world poverty, take humanity to Mars, and enable everyone to get free medical treatment for everything. Of course all those things would also be possible if we just redistributed the world's resources from people like Musk to everyone else, but that particular solution to the world's ills never seems to cross his mind.
Porsche's new CEO, Michael Leiters, is shifting focus back to petrol engines, citing that electric vehicles lack emotional appeal and depreciate faster. Despite significant investments in EVs, demand for Porsche versions has fallen short, leading to the shelving of a new electric SUV and job cuts. The company faces challenges in key markets like China and the US, with sales dropping and new tariffs impacting imports.
Of course one of the reasons that China isn't buying Porsche's is that they are buying cheaper and just as luxurious cars from the likes of BYD. The Chinese have invested in EVs to such a degree that it's going to be hard to compete in the next couple of decades, but if Leiters thinks the answer is to focus on more primitive technology then he's a fool.
Signal's engineering team has made significant advancements in post-quantum encryption, enhancing the Signal Protocol to ensure robust security against future threats based on quantum computing. The update introduces a third ratchet, the Sparse Post Quantum Ratchet (SPQR), which combines traditional and quantum-safe key generation methods, allowing for secure messaging even in asynchronous environments. No, I'm not totally sure what it means either. But it sounds good.
The point of course is to keep your messages away from the prying eyes of spooks and criminals not just now, but in the foreseeable future too. Signal isn't the only privacy-focused company doing this: Tuta, the German email company known for its relentless focus on keeping things private, launched their post-quantum system TutaCrypt system last year.
The biggest blocker with systems like Tuta and Signal, though, isn't the technology. It's persuading other people to use them. Oh, if you want to send me encrypted email, you can reach my Tuta account at [email protected], and you can always send me messages to Signal (evilmole.100). If you want to send to my marginally (and I really do mean marginally) less secure Proton account too.
The claims are splashy, but the useful takeaway is simpler: as models become more capable, ambiguous shutdown instructions and reward loops can generate stubborn behavior that looks like self‑preservation. The right response isn’t panic. It’s tighter evals, clearer operational constraints, and better interpretability. If we can’t reliably turn systems off, we don’t have control -- and that’s a governance and ownership problem before it’s a capabilities milestone.
Europe often talks a big game about building a homegrown stack, then quietly keeps the locks on the doors. Anti‑circumvention rules make migration tools legally hazardous, which is exactly how incumbents like it.
If policymakers want competition and real sovereignty, they need to legalise exit: portability, interoperability, and the permission to build the crowbars that move users and data across walled gardens.
Badging cloud contracts as national strategy doesn’t make them independent. Handing the keys to a single supplier — especially in jurisdictions allergic to scrutiny — creates a new kind of lock‑in with political window dressing. If countries want actual sovereignty, they need open standards, exit ramps, and a procurement model that prefers composable pieces over turnkey dependence.
The end of free Windows 10 support could lead to 400 million computers becoming obsolete, contributing to significant e-waste and security risks. Many users, including businesses and schools, will end up purchasing new devices due to the lack of updates.
Activists are urging Microsoft to extend support, citing successful campaigns that influenced other companies, like Google's extension of Chromebook updates to ten years to prevent waste.
Or you could install Linux, of course.
Is Sam Altman a psycho-CEO? You betcha!
2025-10-23 03:26:59
We like to imagine that there’s a clean border between technology and nature. Out there are trees, rivers, fungi, and weather; in here are laptops, data centres, and motorways. One is wild and ancient, the other human and new. It’s a neat, precise division — and completely artificial.
Humans are natural beings, products of the same long evolutionary processes that gave rise to coral reefs and mycelium networks. Everything we make emerges from those same impulses: tool use, cooperation, pattern recognition, the desire to shape our surroundings. The first chipped stone, the first loom, the first line of code — all are extensions of our biology. In that sense, the iPhone and the termite mound share a lineage.
The idea that technology stands apart from nature is a cultural construction, not a scientific truth. It’s a story that took shape alongside industrialisation and empire, reflecting a way of seeing the world as material to be organised and exploited. Over time, that mindset hardened into common sense, shaping both our technologies and our politics. The division isn’t neutral; it emerged from power relations — from the ways societies built on extraction learned to understand themselves.
In fiction, solarpunk offers a way to dissolve that border. It imagines technologies that behave like ecosystems: adaptive, regenerative, interdependent. Solar panels become leaves, networks mimic root systems, architecture turns into habitat. In that world, technology doesn’t dominate but participates. And while solarpunk remains a genre of imagination, the fictions we tell have power: we see the world through the lenses our stories create, and sometimes those stories can help us remember different ways of living.
The task, then, isn’t to make technology more natural — it already is — but to make our relationship with it conscious again. To recognise that every machine is part of a larger organism, every algorithm an ecological act. When we design and deploy technology as though it were alive, connected, and accountable, we remember what was always true: that we have never stood outside nature, only forgotten that we belong to it.
Perhaps the next question is evolution itself: if technology is part of nature, what forces are shaping which forms survive?
2025-10-22 05:44:44
Every industrial revolution has created winners and losers, but this time the scale of displacement — and the indifference of those in power — feels different.This time, the number of people the next one might sweep aside could be vast, and no one in power seems prepared for it.
Amazon’s plan to automate hundreds of thousands of warehouse roles isn’t just a corporate efficiency play; it’s a preview of what happens when the middle of the labour market disappears. The jobs that once formed the backbone of stable, suburban lives — skilled enough to pay a mortgage, routine enough to be widely available — are being eaten up. AI and robotics take care of the predictable tasks, while what remains drifts towards either high-skill technical work or low-wage service labour.
That hollowing-out is already shaping politics. The old centrist coalition — white-collar managers, skilled trades, clerical workers — is dissolving. In its place, we see two groups pulling further apart: the hyper-connected, globally mobile professionals who build and profit from automation, and the increasingly precarious majority who feel automation happening to them.
When both sides vote, they do so for very different futures — but the cruel irony is that the “different future” many in the precarious majority are voting for often serves the interests of the hyper-rich. The rage against elites is quietly redirected by those very elites, repackaged into culture wars and nationalist fantasies that leave wealth and power untouched. While ordinary voters demand someone to burn the system down, figures like Musk and his peers are busy redesigning it in their own image.
The populist movements of the last decade weren’t an aberration; they were the early symptoms of this transformation. The Brexit vote, Trump’s rise, France’s gilets jaunes — all drew energy from the sense that effort no longer leads to reward, that institutions have broken their side of the deal. If Amazon can replace half a million jobs with robots while politicians call it “innovation”, why trust anyone promising that work still pays?
In theory, automation could free people from drudgery and rebalance economies. In practice, under capitalism, it won’t. The benefits of technology flow to capital, not labour — and robots, algorithms and data centres are capital. The system is doing exactly what it’s built to do: concentrate ownership, reduce labour costs (and labour agency), and expand returns for those who already hold power.
Western politics keeps pretending there’s a technical fix for what is, at heart, a structural problem. You can’t regulate your way out of a system designed to extract value from workers and hand it to shareholders. Apprenticeship schemes and AI ethics panels won’t rebuild the social contract, because capitalism no longer needs one.
Without an alternative — not a reform, but a replacement — the missing middle will keep voting for anyone who promises to tear it all down.
And one day, they’ll get their wish.
2025-10-19 20:41:25
Ahh, the cloud. Or, as we used to call it, The Cloud.
The cloud was the future! Apps were all going to be in the cloud! Clients would be thin! Everyone would use a Chromebook! Everything which could possibly have a connection to the internet in your house would be better, because of its connection to the internet!
I have to say that I fell for this one hook, line, and WiFi. Which is why the loss of basically every feature of the Bose SoundTouch “smart” speakers feels like they personally hate me. They’re great speakers! You could do cool things with them! And now, they basically don’t work!
Fool me once, shame on you. Fool me twice, I probably bought “smart” products.
You were warned. App stores, which have always been claimed to be “good for users”, are absolute crack for oppressive and abusive regimes. Now normally when we talk about stuff like this it’s to highlight what China, Russia or some other country is doing. But this time, we’re going to be talking about the USA.
There is absolutely nothing illegal under US law about ICEblock, an app which allows people to report the location of ICE agents. But in a repressive regime like the US, “the law” is whatever the President says it is – and Trump’s minion Pam Bondi told Apple to remove the app from its store, a request that Tim Cook was only too quick to say “yes ma’am” to.
And ICEblock isn’t the only app – Apple has done a general sweep of apps connected with tracking ICE actions, under the guise of “protecting” law enforcement. Although it’s also banned apps which simply preserve evidence of ICE brutality, which don’t identify or locate anyone.
What next? LGBTQ+ apps? Anything connected with Islam? Spanish-language apps? It almost doesn’t matter, because the problem is systemic: any system which centralises control over software distribution with a single company is open to being abused by governments.
Cory Doctorow has a term for the trade-off that we make with locked-down devices: feudal security. It’s trusting the local lord to protect you, which sometimes works but inevitably ends with the serfs getting the sharp end of the sword. Seems like Americans are now learning this the hard way.
Oh, you thought that the UK government would just give up on breaking Apple’s encryption? Think again! Now it “just” wants the ability to tap into encrypted files on iCloud (including passwords and photos) for UK people only. Which, of course, just means that Apple will remove the ability for UK users to get the encryption at all, because there is no practical way it can give it to just a single government for use only on their citizens.
Completely by coincidence, I’m moving all my files to local storage, and all the important ones to Proton Drive.
I was in Shanghai recently and the first thing you notice is the number of cars which have green license plates. Blue plates = traditional engines. Green plates = new electric vehicles (plug-in hybrids and battery EVs). And getting on for half of the cars I saw had green plates.
What should be scary for the US is the speed at which China has built a n industry which supplies the majority of the world’s batteries, EVs and more. So much so, in fact, that a group of Western venture capitalists came back from a visit there espousing the view that there’s simply no point in investing in a wide range of new companies in those product areas.
While China is building factories which run dark because they require no people at all, and have built a huge lead in robotics, Trump is trying to “make America great” by having more of the kind of jobs that really no longer exist. For better or worse, this is going to be the Chinese century, and it’s probably too late to change that.
The problem with political corruption is that it taints every decision the government makes. Small or large, important or trivial, once people believe the only way to get things done is through bribery, the game is over.
Which leads me nicely on to the news that the US Labor Board has abandoned an investigation into Apple. The fact that Tim Cook has been cozying up to Donald Trump will undoubtedly lead to speculation that the decision is down to favours being done, rather than the merits of the investigation.
Comet, the agentic browser from Perplexity, is now available for anyone to download. I’ve been testing it for a while and it’s OK, although I found the agentic features hit or miss: when it works, it’s wondrous, but when it doesn’t, it’s exasperating.
I am not a psychiatrist (I can barely spell the word) but even I can spot when someone has something deeply wrong with their mind.
If you want to see evidence that, once again, the lunatics have taken over the asylum then look no further than this interview with wanker-in-chief Marc Andreessen, where he talks about “The Elon Method” of management. In, of course, an approving way.
So what is the Elon Method? There’s the obvious stuff: interventionist CEOs who get in the face of anyone in the company and a cult of personality around the leader (he literally uses that phrase, which given its history is an astounding statement).
But there is also this: an aggressive legal stance as deterrence. The legal team’s primary function is filing suits against adversaries.
But let’s talk anybody who goes up against us, we are going to terrorise, we are going to declare war. And then of course as a consequence of declaring war, we’re not always going to win all the wars, but we’re going to establish massive deterrence. And so nobody will screw around with us.
That’s bad enough if you think he’s talking about other businesses. But he’s not, is he?
Cambridge University Library has launched a project called Future Nostalgia to rescue data from old floppy disks. They have invited the public to bring in floppy disks, although they probably don’t want your copy of 1987’s MacPlaymate.
Steven Levy (one of the greatest tech writers all time, and no, I will not be commenting further) should know better than most to set this Betteridge's Law- baiting trap. I guess the only caveat is that AI is already shit, because essentially everyone with smart money knows it's a bubble and it just waiting for the right time to get out.
There is a certain kind of tech person who wishes that politics would just go away. But I didn't expect that to include the executives at Framework, who have managed to get themselves in a complete pickle.
The short version: they company decided to donate money to two open source products which they like a lot, Omarchy and Hyprland. The issue is that the key people in both those projects -- DHH and Vaxry -- are, respectively, a right-wing douchebag who conflates white people with "native Brits" and a toxic young man who, and I quote, believes "there could be arguments to sway my opinion towards genocide".
This donation has caused a big controversy with Framework owners and supporters, who feel that donating to people best-described as "Nazi-curious" isn't really in tune with what they see as the (political) values of Framework. The right to repair and upgrade your own computer is a fundamental value that's often part of the modern left-wing mindset.
But not, it should be said, exclusively: right-to-repairers come from all kinds of places politically, from people like myself and my friend Cory to right-wing prepper crazies who have spare parts in their nuclear bunker.
I have no idea where, on the political spectrum, the people at Framework sit. What I do know, though, is that there is a big swathe of tech boys (and they almost always are boys) who don't see the world in political terms, don't want to think about politics, and just want to do cool things with computers. My gut feeling is that Framework falls into this category, and, having been called on it, did the thing which tech boys are most comfortable doing: avoided the issue like the plague.