2025-04-22 08:22:45
Thesis: The missing element in forecasting the future of AI is to understand that AI needs culture just as humans need culture.
One of the most significant scientific insights into our own humanity was the relatively recent realization that we are the product of more than just the evolution of genes. While we are genetic descendants of ape-like creatures in the past, we modern humans are also molded each generation by a shared learning that is passed along by a different mechanism outside of biology. Commonly called “culture”, this human-created environment forms much of what we consider best about our species. Culture is so prevalent in our lives, especially our modern urban lives, that it is invisible and hard to recognize. But without human culture to support us, we humans would be unrecognizable.
A solo, naked human trying to survive in the prehistoric wilderness, without the benefit of the skills and knowledge gained by other humans, would rarely be able to learn fast enough to stay alive. Very few humans by themselves would be able to discover the secrets of making fire, or the benefits of cooking food, or the medicines found in plants, or all the behaviors of the animals they hunt, let alone the additional education needed to plant crops, knap stone points, sew, and fish.
Humanity is chiefly a social endeavor. Because we invented language – the most social thing ever – we have been able not only to coordinate and collaborate in the present, but also to pass knowledge and know-how along from generation to generation. This cultural transmission is often pictured as running parallel to the natural-selection evolution of our bodies. In the long biological evolution happening in our cells, learning is transmitted through our genes: anything good we learn as a species is conveyed through inheritable DNA. And that is where learning ends for most natural creatures.
But we humans launched an extended evolution that transmits good things outside of the code of DNA, embedded in the culture conveyed by families, clans, and human society as a whole. From the very beginning this culture has contained laws, norms, morals, best practices, personal education, world views, knowledge of the world, learnable survival skills, altruism, and a pool of hard-won knowledge about reality. While individual societies have died out, human culture as a whole has continued to expand, deepen, and prosper, so that every generation benefits from this accumulation.
Our newest invention – artificial intelligence – is usually viewed in genetic terms. The binary code of AI is copied, deployed, and improved upon. New models are bred from the code of former leading models – inheriting their abilities – and then distributed to users. One of the first significant uses for this AI is in facilitating the art of coding, and in particular helping programmers write new and better AIs. So this DNA-like code experiences compounding improvement as it spreads into human society. We can trace the traits and abilities of AI by following its inheritance in code.
However, this genetic version of AI has so far been limited in its influence on humans. While the frontier of AI research runs fast, its adoption and diffusion run slow. Despite some unexpected abilities, AI has not yet penetrated very deeply into society. By 2025 it has disrupted our collective attention, but it has not disrupted our economy, our jobs, or our daily lives (with very few exceptions).
I propose that AI will not disrupt human daily life until it also migrates from a genetic-ish, code-based substrate to a widespread, heterodox, culture-like platform. AI needs to have its own culture in order to evolve faster, just as humans did. It cannot remain just a thread of improving software/hardware functions; it must become an embedded ecosystem of entities that adapt, learn, and improve outside of the code stack. This AI epizone will enable its cultural evolution, just as human society did for humans.
Civilization began as songs, stories, and ballads around a campfire, while institutions like grandparents and shamans conveyed very important qualities not carried in our genes. Later, religions and schools carried more. Then we invented writing, reading, texts, and pictures to substitute for memory and reflexes. When we invented books, libraries, courts, calendars, and math, we moved a huge amount of our inheritance to this collaborative, distributed platform of culture that was not owned by anyone.
AI civilization requires a similar epizone running outside the tech stack. It begins with humans using AI every day, and an emerging skill set of AI collaboration taught by the AI whisperers. There will be alignment protocols, and schools for shaping the moralities of AIs. There will be shamans and doctors to monitor and nurture the mental health of the AIs. There need to be corporate best practices for internal AIs, and review committees overseeing their roles. New institutions for reviewing, hiring, and recommending various species of AI. Associations of AIs that work best together. Whole departments will be needed to train AIs for certain roles and applications, since some kinds of training take time (and cannot just be downloaded). The AIs themselves will evolve AI-only interlinguals, which will need mechanisms to preserve and archive them. There will be ecosystems of AIs co-dependent on each other. AIs that police other AIs. The AIs will need libraries of content and intermediate weights, latent spaces, and petabytes of data that must be remembered rather than re-invented. And there are the human agents who have to manage the purchase and maintenance of this AI epizone, at local, national, and global levels. This is a civilization of AIs.
A solo, naked AI won’t do much on its own. AIs need a wide epizone to truly have consequence. They need to be surrounded by, and embedded in, an AI culture, just as humans need culture to thrive.
Stewart Brand devised a beautiful analogy for understanding civilizational traits. He explains that the functions of the world can be ranked by their pace layers, each depending on the layers below it. Running fastest is the fashion layer, which fluctuates daily. Not far behind in speed is the tech layer, which includes the tech of AI; it changes by the week. Below that (and dependent on it) is the infrastructure layer, which moves slower, and even slower below that is culture, which crawls in comparison. (At the lowest, slowest layer is nature, glacial in its pace.) All these layers work at the same time, and upon each other, and many complex things span multiple layers. Artificial intelligence also works at several layers. Its code base improves at internet speed, but its absorption and deployment run at the cultural layer. In order for AI to be truly implemented, it must be captured by human culture. That will take time, perhaps decades, because that is the pace of culture. No matter how quick the tech runs, the AI culture will run slower.
That is good news in many respects, because part of what the AI epizone does is incorporate and integrate the inheritable improvements in the tech stack and put them into the slower domain of AI culture. That gives us time to adapt to the coming complex changes. But to prepare for the full consequences of these AIs, we must give our attention to the emerging epizone of AIs outside the code stack.
2025-03-15 01:23:23
The other day I was slicing a big loaf of dark Italian bread from a bakery; it is a pleasure to carve thick hunks of hearty bread to ready them for the toaster. While I was happily slicing the loaf, the all-American phrase “the best thing since sliced bread” popped into my head. So I started wondering: what was the problem that pre-sliced bread solved? Why was sliced bread so great?
Shouldn’t the phrase be “the best thing since penicillin”, or something like that?
What is so great about this thing we now take for granted? My thoughts cascaded down a sequence of notions about sliced bread. It is one of those ubiquitous things we don’t think about.
Turns out I am not the first to wonder about this. The phrase’s origins lie – no surprise – in the marketing of the first commercial sliced bread in the 1930s. It was touted in ads as the best new innovation in baking. The innovation was not slices per se, but uniform slices. During WWII in the US, sliced bread was briefly banned in 1943 to conserve the extra paper used to wrap sliced loaves for the war effort, but the ban was rescinded after 2 months because so many people complained of missing the convenience of sliced bread – at a time when bread was far more central to our diets. With the introduction of mass-manufactured white breads like Wonder Bread, the phrase became part of its marketing hype.
I think the right answer is 4 – it’s a marketing ploy for an invention that turns a luxury into a necessity. I can’t imagine any serious list of our best inventions that would include sliced bread, although it is handy, and it is not going away.
That leads me to wonder: what invention today, the object of our full infatuation, will be the sliced bread of the future?
Instagram? Drones? Tide Pods? Ozempic?
This is the best thing since Ozempic!
2025-03-14 00:58:30
Imagine 50 years from now a Public Intelligence: a distributed, open-source, non-commercial artificial intelligence, operated like the internet and available to the whole world. This public AI would be a federated system, not owned by any one entity, but powered by millions of participants to create an aggregate intelligence beyond what one host could offer. Public intelligence could be thought of as an inter-intelligence, an AI composed of other AIs, in the way that the internet is a network of networks. This AI of AIs would be open and permissionless: any AI could join, and its joining would add to the global intelligence. It would be transnational, and wholly global – an AI commons. Like the internet, it would run on protocols that enable interoperability and standards. Public intelligence would be paid for by usage locally, just as you pay for your internet access, storage, or hosting. Local generators of intelligence, or contributors of data, could operate for profit, but in order to get the maximum public intelligence, they would need to share their work in this public non-commercial system.
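To make the federated, permissionless idea concrete, here is a minimal sketch in Python of what joining and querying such a commons might look like. Everything named here – AINode, Commons, the lambda stand-ins for local models – is a hypothetical illustration, not an existing protocol or API.

```python
# A minimal sketch of the "AI of AIs" federation idea. All names are
# invented for illustration; no such protocol exists today.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AINode:
    """One member intelligence: anyone may run one and join."""
    name: str
    answer: Callable[[str], str]  # the node's local model, behind a standard interface

@dataclass
class Commons:
    """The public intelligence: a registry and aggregator, not an owner."""
    nodes: List[AINode] = field(default_factory=list)

    def join(self, node: AINode) -> None:
        # Permissionless: joining requires no approval, only speaking the protocol.
        self.nodes.append(node)

    def ask(self, query: str) -> List[str]:
        # Every member answers; each new node adds to the aggregate intelligence.
        return [node.answer(query) for node in self.nodes]

commons = Commons()
commons.join(AINode("weather-ai", lambda q: f"forecast for: {q}"))
commons.join(AINode("local-lore", lambda q: f"local knowledge of: {q}"))
print(commons.ask("rainfall in my valley"))
```

The design choice that matters is that join() has no gatekeeper, mirroring the internet’s permissionless peering: the commons grows by addition, not by approval.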
For an ordinary citizen, the AI commons of public intelligence would be an always-on resource that delivers as much intelligence as they require, or are willing to pay for. Minimum amounts would be almost free. Maximum amounts would be gated and priced accordingly. AI of many varieties would be available from your own personal devices, whether a phone, glasses, a vehicle, or a bot in your house. Fantastic professional intelligence could also be bought from specialty AI providers, like Anthropic and DeepSeek. But public intelligence would offer all these plus planetary-scale knowledge and a super intelligence that works at huge scales.
Algorithms within the public intelligence would route hard questions one way and easy questions another, so most citizens would deal with the public intelligence through a single interface. While public intelligence would be composed of thousands of varieties of AI, each of them comprising an ecosystem of cognitions, to the user they would appear as a single entity, a public intelligence. A good metaphor for the technical face of this aggregated AI commons is a rainforest, crowded with thousands of species, all co-dependent on each other, some species consuming what the others produce, all of them essential for the productivity of the forest.
Public intelligence is a rainforest of thousands of species of AI, and in summation it becomes – like our forests and oceans – a public commons, a public utility at a global scale.
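Returning to the routing idea above, here is a toy sketch of one public interface hiding many member AIs. The difficulty heuristic, the tiers, and all names are invented for illustration; a real router would use a learned estimator, not query length.

```python
# A toy version of the routing idea: one public door, many member AIs
# behind it. Everything here is invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class MemberAI:
    name: str
    capability: float  # crude 0..1 score of how hard a query this AI can handle

    def answer(self, query: str) -> str:
        return f"[{self.name}] {query}"

class PublicInterface:
    """One public-facing interface that hides the rainforest of member AIs."""

    def __init__(self, members: List[MemberAI]):
        # Sort members so the cheapest adequate one is tried first.
        self.members = sorted(members, key=lambda m: m.capability)

    @staticmethod
    def difficulty(query: str) -> float:
        # Stand-in heuristic: longer questions count as harder.
        return min(len(query) / 100, 1.0)

    def ask(self, query: str) -> str:
        d = self.difficulty(query)
        for m in self.members:  # easy queries stop at small, cheap members
            if m.capability >= d:
                return m.answer(query)
        return self.members[-1].answer(query)  # hardest go to the largest member

pi = PublicInterface([MemberAI("small-local", 0.3), MemberAI("frontier", 1.0)])
print(pi.ask("What time is it?"))  # short, easy: routed to small-local
print(pi.ask("Compare water policy tradeoffs across " + "many regions " * 6))  # long: routed to frontier
```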
At the moment, the training material for the artificial intelligences we have is haphazard, opaque, and partial. So far, as of 2025, LLMs have been trained on a very small and very peculiar set of writings, far from either the best or the entirety of what we know. For archaic legal reasons, much of the best training material has not been used. Ideally, the public intelligence would be trained on ALL the books, journals, and documents of the world, in all languages, in order to create for the public good the best AIs we can make for all.
As the public intelligence grows, it will continue to benefit from having access to new information and new knowledge, including very specific, and local information. This is one way its federated nature works. If I can share with the public intelligence what I learn that is truly new, the public intelligence gains from my participation, and in aggregate gains from billions of other users as they contribute.
A chief characteristic of public intelligence is that it is global – or perhaps I should say planetary. It is not only accessible by the public globally; it is trained on a globally diverse set of materials in all languages, and it is planetary in its dimensions. For instance, this AI commons integrates environmental sensing data – such as weather, water, air traffic – from around the world, and from the cloak of satellites circling the planet. Billions of moisture sensors in farmland, tide gauges in wetlands, air quality sensors in cities, rain gauges in backyards, and trillions of other environmental sensors feed rivers of data into the public intelligence, creating a sort of planetary cognition grid.
Public intelligence would encompass big thoughts about what is happening planet-wide, as well as millions of smaller thoughts about what is happening in niche areas, fed by specific information and data – such as DNA sampling of sewage water to monitor the health of cities.
There is no public intelligence right now. OpenAI is not a public intelligence; there is very little open about it beyond its name. Other models in 2025 that are classified as open source, such as Meta’s and DeepSeek’s, lean in the right direction, but are open only to very narrow degrees. There have been several initiatives to create a public intelligence, such as EleutherAI and LAION, but there is no real progress or articulated vision to date.
The NSF (in the US) is presently funding an initiative to coordinate international collaboration on networked AI. This NSF AI Institute for Future Edge Networks and Distributed Intelligence is primarily concerned with trying to solve hard technical problems such as 6G and 7G wireless distributed communication.
Diagram from NSF AI Institute for Future Edge Networks and Distributed Intelligence
Among these collaborators is a program at Carnegie Mellon University focused on distributed AI. They call this system AI Fusion, and say “AI will evolve from today’s highly structured, controlled, and centralized architecture to a more flexible, adaptive, and distributed network of devices.” The program imagines this fusion as an emerging platform that enables distributed artificial intelligence to run on many devices, in order to be more scalable, more flexible, and more active in redirecting itself when needed, or even finding the data it needs instead of waiting to be given it. But in none of these research agendas is the mandate of a public resource, open source, or an intelligence commons more than a marginal concern.
Sketch from AI Fusion
A sequence of steps will be needed to make a public intelligence. The first step is to recognize the forces pushing AI in the opposite direction.
There is a very natural tendency for AI to become centralized by a near monopoly, and probably a corporate monopoly. Intelligence is a networked good. The more it is used, the more it can learn. The more it learns, the smarter it gets. The smarter it gets, the more it is used. Ad infinitum. A really good AI can swell very fast as it is used and improves. All these dynamics push AI toward centralization and winner-take-all. The alternative to public intelligence is a corporate or a national intelligence. If we don’t empower public intelligence, then we have no choice but to empower non-public intelligences.
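That feedback loop is easy to see in a toy simulation. In the sketch below, all numbers are invented for illustration: usage share follows quality, quality grows with usage, and a rival that starts just 10% better ends up with nearly all the usage.

```python
# A toy model of the usage -> learning -> quality -> usage loop described
# above. All numbers are invented for illustration.

def simulate(steps: int = 200, learn_rate: float = 0.2):
    quality = [1.0, 1.1]                      # two rival AIs; B starts 10% better
    for _ in range(steps):
        total = sum(quality)
        share = [q / total for q in quality]  # usage share follows quality
        quality = [q * (1 + learn_rate * s)   # learning follows usage
                   for q, s in zip(quality, share)]
    total = sum(quality)
    return [round(q / total, 3) for q in quality]

print(simulate())  # a small early edge compounds into near-total dominance
```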
The aim of public intelligence is to make AI a global commons, a public good for the maximum number of people. Political will to make this happen is crucial, but equally essential are the technical means – brilliant innovations that we don’t have yet, and that are not obvious. To urge those innovations along, it helps to have an image to inspire us.
The image is this: A Public Intelligence owned by everyone, composed of billions of local AIs, needing no permission to join and use, powered and paid for by users, trained on all the books and texts of humankind, operating at the scale of the planet, and maintained by common agreement.
2025-03-05 02:12:01
It is odd that science fiction did not predict the internet. There are no vintage science fiction movies about the world wide web, nor movies that showed the online web as part of the future. We expected picture phones and online encyclopedias, but not the internet. As a society we missed it. Given how pervasive the internet later became, this omission is strange.
On the other hand, there have been hundreds of science fiction stories and movies predicting artificial intelligence. And in nearly every single one of them, AI is a disaster. They are all cautionary tales. Either the robots take over, or they cause the end of the world, or their super intelligence overwhelms our humanity, and we are toast.
This ubiquitous dystopia of our future with AI is one reason why there is general angst among the public about this new technology. The angst was there even before the tech arrived. The public is slightly fearful and wary of AI not because of any experience with it, but because this is the only picture of it they have ever seen. Call up an image of a smart robot and you get the Terminator or its ilk. There are no examples of super AI working out for good. We literally can’t imagine it.
Another factor in this contrast between predicting AI and not predicting the internet is that some technologies are just easier to imagine. In 1963 the legendary science fiction author Arthur C. Clarke created a chart listing existing technologies that had not been anticipated widely, in comparison to other technologies that had a long career in our imaginations.
Clarke called these the Expected and the Unexpected, published in his book Profiles of the Future in 1963.
Clarke does not attempt to explain why some inventions are expected while others are not, other than to note that many of the expected inventions have been anticipated since ancient times. In fact their reality – immortality, invisibility, levitation – would have been called magic in the past.
Artificial beings – robots, AI – are in the Expected category. They have been so long anticipated that no other technology or invention has been as widely or thoroughly anticipated before it arrived. What invention might even be second to AI in terms of anticipation? Flying machines may have been desired for longer, but relatively little thought was put into imagining what their consequences might be. Whereas from the start of the machine age, humans have not only expected intelligent machines, but have expected significant social ramifications from them as well. We’ve spent a full century contemplating what robots and AI would do when they arrived. And, sorry to say, most of our predictions are worrisome.
So as AI finally begins to hatch, it is not being embraced as fully as, say, the internet was. There are attempts to regulate it before it is operational, in the hopes of reducing its expected harms. This premature regulation is unlikely to work because we simply don’t know what harms AI and robots will really do, even though we can imagine quite a lot of them.
This lopsided worry, derived from AI being Over-Expected, may be a one-time thing unique to AI, or it may become a regular pattern for future tech, where we spend centuries brewing, stewing, scheming, and rehearsing for an invention long before it arrives. That would be good if we rehearsed for the benefits as well as the harms. We’ve spent a century trying to imagine what might go wrong with AI. Let’s spend the next decade imagining what might go right with AI.
Even better, what are we not expecting that is almost upon us? Let’s reconsider the unexpecteds.
2025-03-03 04:37:02
We aren’t the only species on this planet that has domesticated another species. There is one kind of ancient ant that herds and cares for aphids in order to milk them of honeydew sugar. But we are the only species to have domesticated more than one species. Over time humans have domesticated dogs, cats, cows, horses, chickens, ducks, sheep, goats, camels, pigs, guinea pigs, and rabbits, among many others. We have modified their genes with selective breeding so that their behavior aligns with ours. For example, we tweaked the genetic makeup of a wild dog so that it wants to guard our sheep. And we designed wild cattle to allow us to milk them in exchange for food. In each case of domestication we alter genetics by clever breeding over time, using our minds to detect and select traits. In a very real sense, the tame dog and the milk cow were invented by humans, and were among the earliest human inventions. Along each step of the process our ancestors imagined a better version of what they had, and then made that better version happen. Domestication is, for the most part, an act of imagination.
One of the chief characteristics of domesticated animals is their reduced aggression compared to wild types. Tame dogs, cats, cattle, and goats are much more tolerant of others and more social than their feral versions. This acquired tameness is why we can work closely with them. In addition, domestication brings morphological changes to the skulls of adults – they more closely resemble the young, with larger, wider eyes, smaller teeth, flatter, rounder faces, and more slender bones. Tame dogs look like wolf puppies, and domesticated cats look more like lion kittens.
This retention of juvenile traits into adulthood is called neoteny and is considered a hallmark of domestication. The reduction of certain types of aggression is also a form of neoteny. The behavior of domesticated animals is similar to that of juvenile animals: more trusting of strangers, less hostile aggression over threats, less violent in-group fighting.
In the 1950s, the Russian geneticist Dmitry Belyaev started breeding wild silver foxes in captivity, selecting the friendliest of each generation to breed into the next. Each generation of less aggressive foxes displayed more puppy-like features: rounder, flatter heads, wider eyes, floppy ears. Within 20 generations he had bred domesticated foxes.
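Belyaev’s method is, in effect, a simple algorithm: score each animal for tameness, keep only the friendliest fraction, breed, and repeat. A toy sketch of that truncation-selection loop, with invented numbers (real heritability and selection pressure would differ), shows how steadily the mean trait shifts over 20 generations:

```python
# A toy model of Belyaev's procedure. All numbers are invented; this is a
# crude stand-in for real genetics, not a faithful simulation.
import random

def breed_generations(pop_size=500, keep_fraction=0.2, generations=20,
                      noise=0.5, seed=1):
    random.seed(seed)
    tameness = [random.gauss(0.0, 1.0) for _ in range(pop_size)]  # wild baseline
    for _ in range(generations):
        # Truncation selection: only the friendliest 20% get to breed.
        parents = sorted(tameness)[-int(pop_size * keep_fraction):]
        # Each offspring resembles a random parent, plus noise.
        tameness = [random.choice(parents) + random.gauss(0.0, noise)
                    for _ in range(pop_size)]
    return sum(tameness) / pop_size

print(round(breed_generations(), 2))  # mean tameness climbs far above the wild baseline
```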
Later analysis of their genomes in 2018 showed the presence of a set of genes shared with other domesticated animals, suggesting that there are “domestication” genes. Some scientists propose that dozens of interacting genes form a “domestication syndrome” that will alter features and behaviors in a consistent direction across many species at once.
Although wolves were domesticated into dogs in several regions of the world around 15 to 40 thousand years ago, they were not the first animals to be domesticated. We were. Homo sapiens may have been the first species to select for these genes. When anthropologists compare the morphological features of modern humans to our immediate relatives like the Neanderthals and Denisovans, humans display neoteny. Humans resemble juvenile Neanderthals, with rounder, flatter faces, shorter jaws with smaller teeth, and more slender bones. And in fact the differences between a modern human skull and a Neanderthal skull parallel those between a dog and its wild wolf ancestor. [See figure below; Source.]
The gene BAZ1B influences a network of developmental genes, a network also found altered in the domesticated silver foxes. In a rare human genetic disorder, the gene BAZ1B is duplicated, resulting in a person with a longer jaw and longer teeth, and social awkwardness. In another rare genetic disorder, called Williams-Beuren syndrome, the same BAZ1B gene is not doubled but missing. This deletion results in “elfin” features, a rounder face, a short chin, and extreme friendliness and trust of strangers – a type of extreme neoteny. The network of developmental genes controlled by BAZ1B is common in all modern humans but absent in Neanderthals, suggesting that our own juvenile-like domestication has been genetically selected.
What’s distinctive about humans is that Homo sapiens domesticated themselves. We are self-domesticated apes. The anthropologist Brian Hare characterizes recent human evolution (the Late Pleistocene) as “survival of the friendliest”, arguing that in our self-domestication we favored prosociality – the tendency to be friendly, cooperative, and empathetic. We chose the most cooperative, least aggressive, least bullying types, and that trust in others resulted in greater prosperity, which in turn spread neoteny genes, and other domestication traits, through our populations.
Domesticated species often show increased playfulness, extended juvenile behavior, and even enhanced social learning abilities. Humans have extended their childhood far longer than almost any other animal. This extended childhood enabled an extended time to learn beyond inherent instincts, but it also demanded greater parental resources and more nuanced social bonds.
We are the first animals we domesticated. Not dogs. We first domesticated ourselves, and then we were able to domesticate dogs. Our domestication is not just about neoteny and reduced aggression and increased sociability. We also altered other genes and traits.
For at least a million years hominins have been using fire. Many animals and all apes have the manual dexterity to start a fire, but only hominins have the cognitive focus needed to ignite a fire from scratch and keep it going. Fires serve many purposes, including heat, light, protection from predators, hardening sharp points, and controlled burns for flushing out prey. But fire’s chief consequence was its ability to cook food. Cooking significantly reduced the time humans needed to forage, chew, and digest, freeing up time for other social activities. Cooking acted as a second stomach for humans, pre-digesting hard-to-digest ingredients and releasing more nutrients that could nourish a growing brain. Over many generations of cooking-fed humans, this invention altered our jaws and teeth, reduced our gut, and enlarged our brains. Our invention changed our genes.
Once we began to domesticate ungulates like cows and sheep, we began to consume their milk in many forms. This milk was especially important in raising children to healthy adults. But fairly quickly (on biological time scales, 8,000 years) in areas with domesticated ungulates, adults acquired the genetic ability to digest lactose. Again our invention altered our genes, enlarging our options. We changed ourselves in an elemental, foundational way.
In my 2010 book, What Technology Wants, I made this argument, which I believe is the first time anyone suggested that humans domesticated themselves:
We are not the same folks who marched out of Africa. Our genes have coevolved with our inventions. In the past 10,000 years alone, in fact, our genes have evolved 100 times faster than the average rate for the previous 6 million years. This should not be a surprise. As we domesticated the dog (in all its breeds) from wolves and bred cows and corn and more from their unrecognizable ancestors, we, too, have been domesticated. We have domesticated ourselves. Our teeth continue to shrink (because of cooking, our external stomach), our muscles thin out, our hair disappears. Technology has domesticated us. As fast as we remake our tools, we remake ourselves. We are coevolving with our technology, and so we have become deeply dependent on it. If all technology—every last knife and spear—were to be removed from this planet, our species would not last more than a few months. We are now symbiotic with technology….We have domesticated our humanity as much as we have domesticated our horses. Our human nature itself is a malleable crop that we planted 50,000 years ago and continue to garden even today.
Our self-domestication is just the start of our humanity. We are self-domesticated apes, but more important, we are apes that have invented ourselves. Just as the control of fire came about because of our mindful intentions, so did the cow and corn arise from our minds. Those are inventions as clear as the plow and the knife. And just as domesticated animals were inventions, as we self-domesticated, we self-invented ourselves, too. We are self-invented humans.
We invented our humanity. We invented cooking, we invented human language, we invented our sense of fairness, duty, and responsibility. All these came intentionally, out of our imaginations of what could be. To the fullest extent possible, all the traits that we call “human” in contrast to either “animal” or “nature” are traits that we created for ourselves. We self-selected our character, and crafted this being called human. In a real sense we collectively chose to be human.
We invented ourselves. I contend this is our greatest invention. Neither fire, the wheel, steam power, antibiotics, nor AI is the greatest invention of humankind. Our greatest invention is our humanity.
And we are not done inventing ourselves yet.