Kevin Kelly

Senior Maverick at Wired, author of the bestselling book The Inevitable. Also Cool Tool maven, Recomendo chief, Asia-fan, and True Film buff.

RSS preview of the blog of Kevin Kelly

Essentials for Independent Travel in China

2025-10-30 02:40:06

You should visit China. It’s vast, very diverse, safe, easy to get around, inexpensive, interesting, not what you expect, and increasingly important in the world. Go see for yourself. 

However, unlike the rest of the world, China uses its own parallel set of apps that you will need in order to operate there. Here are what I consider the essential mobile apps for independent travel in China. A good rule of thumb is to download your apps before you leave, because most are blocked behind the Great Firewall once you arrive. Make sure you are downloading the “international” version of the app (if it has one) so that it uses English.

(China has a new visa-free 10-day option that makes it easier than ever to visit. For details of this transit visa see this blog thread.)

Airalo

Modern China lives on the phone. You need a good, solid local connection for your apps. Some US-based mobile plans, like T-Mobile or Google Fi, will automatically give you cell coverage in China, while others, like Verizon, will charge you a premium of $12/day. For everyone else, or to avoid the surcharge, you’ll need a local SIM. You can get a SIM card at the landing airport, but recent phone models support eSIMs, which you load like an app instead of inserting a card. You pay for different data plans.

There are a bunch of eSIM startups, like Saily, Holafly, and Airalo. You can also purchase an eSIM from Trip (see below). Loading an eSIM is currently far more cumbersome than it should be for all of them, so I recommend you install your eSIM before you arrive. One benefit of some of the eSIMs is that they route their traffic through Hong Kong or Singapore, so they act as a built-in VPN, which means you can skip a separate VPN app on your phone (see below). I’ve been using Airalo as an eSIM in Asian and European countries, and it works fine in China. Importantly, Airalo has this VPN-like routing built in, so in China I could do my email, AI chat, and news sites on my phone effortlessly.

ExpressVPN

The Great Firewall blocks Gmail, ChatGPT, social media, YouTube, and major news sites, among others. To get around this frustrating block on your phone, use a good eSIM. To get around it on your laptop, you’ll need a VPN. VPNs are in a cat-and-mouse race to keep the channel open, and they don’t always work. I’ve had success in China using a paid version of ExpressVPN ($8 per month). Other travelers and many expats living in China prefer AstrillVPN, which is more expensive at $15 per month. Whichever you choose, update it to the latest version before you leave; these apps are updated frequently. It’s a good idea to install a second VPN as a backup in case your main choice stops working, which is not uncommon.

Alipay

China is cashless; no one uses cash anymore except desperate tourists, and most stores do not accept credit cards. There are two widespread digital cash systems, WeChat and Alipay. I recommend Alipay because it now makes it very easy for foreigners to set up an account linked to a credit card. The Alipay app generates a QR code the seller scans, or you can scan their QR code. Alipay is a super-app with many sub-apps inside it, such as rideshares, bikeshares, tickets, and translation. It is an essential must-have.

Didi

There is no Uber or Lyft, but there is Didi, which works the same way. You can install the Didi app, but even better, Didi is available within the Alipay app, so you don’t need to set up anything new. Look for Didi inside Alipay. Very convenient. I prefer Didi to a taxi because it eliminates the challenge of communicating my destination. The common protocol is to recite the last four digits of your phone number when you enter the car to confirm your ride.

WeChat

Use WeChat to text, call, and chat with Chinese people at every level of society. It’s the one app everyone in China has. Everyone will ask for your WeChat account when you part or before you meet. It is by far the preferred way to communicate. You scan each other’s QR codes to exchange contacts. Installing WeChat outside of China is cumbersome, but a good idea.

Amap

Amap is the best map app in China. (Google and Apple Maps are neither as current nor as detailed as Amap.) Besides navigation, Amap is also useful for searching in English for nearby restaurants or sights; it works like a Yelp-lite. It also gives you detailed instructions and connections for public transit and subways in a city, which is a real lifesaver. (Use Alipay to purchase your tickets.) You need to load the app on your phone outside of China, or with a VPN, in order to get the English version.

Apple Translate

You can use the Google Translate app (with a VPN), but I usually wind up using the built-in Apple Translate app on my iPhone. (Alipay has a translation app, too.) You’ll need at least one of them. Point your camera at text on a sign or menu for translation, or use “conversation” mode to translate speech as text, or even a spoken conversation.

Trip

For booking flights and high-speed trains in China, use Trip. They are reliable and cover pretty much all the options. Trains go almost anywhere, but domestic flights are cheap and plentiful; you can book both with Trip. Trip is also the easiest way to book hotels in China. Just out of habit I still use Booking.com, which also works for hotels (not trains or planes). Trip has the best selection in less touristy places, while Booking.com will work for most cities.

Hello Ride

Not essential, but bike shares are available in most Chinese cities. Weather permitting, bikes are a great way to sightsee or get around. Good news: you can use your Alipay app to unlock several colors of them (each color is a different company). The Hello Ride sub-app inside Alipay will unlock the blue ones. (You do not need to download a separate app.) They are cheap, about $1 per hour, and you can rent them one-way.

Weekly Links, 10/03/2025

2025-10-09 06:21:21

Paying AIs to Read My Books

2025-10-08 01:09:36

Some authors have it backwards. They believe that AI companies should pay them for training AIs on their books. But I predict in a very short while, authors will be paying AI companies to ensure that their books are included in the education and training of AIs. The authors (and their publishers) will pay in order to have influence on the answers and services the AIs provide. If your work is not known and appreciated by the AIs, it will be essentially unknown.

Recently, the AI firm Anthropic agreed to pay book authors a collective $1.5 billion as a penalty for making an illegal copy of their books. Anthropic had been sued by some authors for using a shadow library of 500,000 books that contained digital versions of their books, all collected by renegade librarians with the dream of making all books available to all people. Anthropic had downloaded a copy of this outlaw library in anticipation of using it to train their LLMs, but according to court documents, they did not end up using those books for training the AI models they released. Even if Anthropic did not use this particular library, they used something similar, and so have all the other commercial frontier LLMs.

However the judge penalized them for making an unauthorized copy of the copyrighted books, whether or not they used them, and the authors of all the copied books were awarded $3,000 per book in the library. 

The court administrators in this case, called Bartz et al v. Anthropic, have released a searchable list of the affected books on a dedicated website. Anyone can search the database to see if a particular book or author is included in this pirate library, and of course, whether they are due compensation. My experience with class action suits like this is that very rarely does award money ever reach people on the street; most of the fees are consumed by the lawyers on all sides. I notice that in this case, only half of the amount paid per book is destined to actually go to the author. The other 50% goes to the publishers. Maybe. And if it is a textbook, good luck getting anything.

I am an author, so I checked the Anthropic case list. I found four of my five books published in New York included in this library. I feel honored to be included in a group of books that can train AIs that I now use every day. I feel flattered that my ideas might be able to reach millions of people through the chain of thought of LLMs. I can imagine some authors feeling disappointed that their work was not included in this library.

However, Anthropic claims it did not use this particular library for training their AIs. They may have used other libraries and those libraries may or may not have been “legal” in the sense of having been paid for. The legality of using digitized books for anything is still in dispute. For example, Google digitizes books for search purposes, but only shows small snippets of the book as the result. Can they use the same digital copy they have already made for training AI purposes? The verdict in the Bartz v. Anthropic case was that, yes, using a copy of a book for training AI is fair use, if it was obtained in a fair way. Anthropic was penalized not for training AI on books, but for having in its possession a copy of the books it had not paid for. 

This is just the first test case of what promises to be many more tests in the future as it is clear that copyright law is not adequate to cover this new use of text. Protecting copies of text – which is what copyright provisions do – is not really pertinent to learning and training. AIs don’t need to keep a copy; they just have to read it once. Copies are immaterial. We probably need other types of rights and licenses for intellectual property, such as a Right of Reference, or something like that. But the rights issue is only a distraction from the main event, which is the rise of a new audience: the AIs.

Slowly, we’ll accumulate some best practices in regards to what is used to train and school AIs. The curation of the material used to educate the AI agents giving us answers will become a major factor in deciding whether we use and rely on them. There will be a minority of customers who want the AIs to be trained with material that aligns with their political bent. Devout conservatives might want a conservatively trained AI; it will give answers to controversial questions in the manner they like. Devout liberals will want one trained with a liberal education. The majority of people won’t care; they just want the “best” answer or the most reliable service. We do know that AIs reflect what they were trained on, and that they can be “fine tuned” with human intervention to produce answers and services that please their users. There is a lot of research in reinforcing their behavior and steering their thinking.

Half a million books sounds like a lot of books to learn from, but there are millions and millions of books in the world that the AIs have not read, because their copyright status is unclear or inconvenient, or they are written in lesser-used languages. AI training is nowhere near done. Shaping this corpus of possible influences will become a science and an art in itself. Someday AIs will have read all that humans have written. Having only 500,000 books forming your knowledge base will soon be seen as quaint, but it also suggests how impactful it can be to be included in that small selection, and that makes inclusion a prime reason why authors will want AIs to be trained on their works now.

The young and the earliest adopters of AI have it set to always-on mode; more and more of their intangible life goes through the AI, and no further. As the AI models become more and more reliable, the young are accepting the conclusions of the AI. I find something similar in my own life. I long ago stopped questioning a calculator, then stopped questioning Google, and now find that most answers from current AIs are pretty reliable. The AIs are becoming the arbiters of truth.

AI agents are used not just to give answers but to find things, to understand things, to suggest things. If the AIs do not know about it, it is equivalent to it not existing. It will become very hard for authors who opt out of AI training to make a dent. There are authors and creators today who do not have any digital presence at all; you cannot find them online; their work is not listed anywhere. They are rare and a minority. As Tim O’Reilly likes to say, the challenge today for most creators is not piracy (illegal copies) but obscurity. I will add, the challenge for creators in the future will not be imitation (AI copy) but obscurity.

If AIs become the arbiters of truth, and if what they trained on matters, then I want my ideas and creative work to be paramount in what they see. I would very much like my books to be the textbooks for AI. What author would not? I want my influence to extend to the billions of people coming to the AIs every day, and I might even be willing to pay for that, or at least do what I can to facilitate the ingestion of my work into the AI minds.

Another way to think of this is that in this emerging landscape, the audience for books – especially non-fiction books – has shifted away from people towards AIs. If you are writing a book today, you want to keep in mind that you are primarily writing it for AIs. They are the ones who are going to read it the most carefully. They are going to read every page word by word, and all the footnotes, and all the endnotes, and the bibliography, and the afterword. They will also read all your books and listen to all your podcasts. You are unlikely to have any human reader read it as thoroughly as the AIs will. After absorbing it, the AIs will do that magical thing of incorporating your text into all the other text they have read, of situating it, of placing it among all the other knowledge of the world – in a way no human reader can do.

Part of the success of being incorporated by AIs is how well the material is presented for them. If a book can be more easily parsed by an AI, its influence will be greater. Therefore many books will be written and formatted with an eye on their main audience. Writing for AIs will become a skill like any other, and something you can get better at. Authors could actively seek to optimize their work for AI ingestion, perhaps even collaborating with AI companies to ensure their content is properly understood, and integrated. The concept of “AI-friendly” writing, with clear structures, explicit arguments, and well-defined concepts, will gain prominence, and of course will be assisted by AI.

Every book, song, play, movie we create is added to our culture. Libraries are special among human inventions. They tend to get better the older they get. They accumulate wisdom and knowledge. The internet is similar in this way, in that it keeps accumulating material and has never crashed, or had to restart, since it began. AIs are very likely similar to these exotropic systems, accumulating endlessly without interruption. We don’t know for sure, but they are liable to keep growing for decades if not longer. At the moment their growth seems open ended. What they learn today, they will probably continue to know, and their impact today will have compounding influence in the decades to come. Influencing AIs is among the highest leverage activities available to any human being today, and the earlier you start, the more potent.

The value of an author’s work will not just be in how well it sells among humans, but how deep it has been included within the foundational knowledge of these intelligent memory-based systems. That potency will be what is boasted about. That will be an author’s legacy.

The Periodic Table of Cognition

2025-09-24 06:50:22

I’ve been studying the early history of electricity’s discovery as a map for our current discovery of artificial intelligence. The smartest people alive back then, including Isaac Newton, who may have been the smartest person who ever lived, had confident theories about electricity’s nature that were profoundly wrong. In fact, despite the essential role of electrical charges in the universe, everyone who worked on this fundamental force was profoundly wrong for a long time. All the pioneers of electricity — such as Franklin, Wheatstone, Faraday, and Maxwell — had a few correct ideas of their own (not shared by all) mixed in with notions that mostly turned out to be flat out misguided. Most of the discoveries about what electricity could do happened without the knowledge of how they worked. That ignorance, of course, drastically slowed down the advances in electrical inventions.

In a similar way, the smartest people today, especially all the geniuses creating artificial intelligence, have theories about what intelligence is, and I believe all of them (me too) will be profoundly wrong. We don’t know what artificial intelligence is in large part because we don’t know what our own intelligence is. And this ignorance will later be seen as an impediment to the rate of progress in AI.

A major part of our ignorance stems from our confusion about the general category of either electricity or intelligence. We tend to view both electricity and intelligence as coherent elemental forces along a single dimension: you either have more of it or less. But in fact, electricity turned out to be so complicated, so complex, so full of counterintuitive effects that even today it is still hard to grasp how it works. It has particles and waves, and fields and flows, composed of things that are not really there. Our employment of electricity exceeds our understanding of it. Understanding electricity was essential to understanding matter. It wasn’t until we learned to control electricity that we were able to split water — which had been considered an element — into its actual elements; that enlightened us that water was not a foundational element, but a derivative compound made up of sub elements.

It is very probable we will discover that intelligence is likewise not a foundational singular element, but a derivative compound composed of multiple cognitive elements, combined in a complex system unique to each species of mind. The result that we call intelligence emerges from many different cognitive primitives such as long-term memory, spatial awareness, logical deduction, advance planning, pattern perception, and so on. There may be dozens of them, or hundreds. We currently don’t have any idea of what these elements are. We lack a periodic table of cognition. 

The cognitive elements will more resemble the heavier elements in being unstable and dynamic. Or a better analogy would be to the constituents of a biological cell. The primitives of cognition are flow states that appear in a thought cycle. They are like molecules in a cell, which are in constant flux, shifting from one shape to another; their molecular identity is related to their actions and interactions with other molecules. Thinking is a collective action that happens in time (like temperature in matter), and every mode can only be seen in relation to the modes before and after it. It is a network phenomenon, which makes its borders difficult to identify. Each element of intelligence is thus embedded in a thought cycle, requires the other elements as part of its identity, and is described in the context of the cognitive modes adjacent to it.

I asked ChatGPT 5 Pro to help me generate a periodic table of cognition given what we collectively know so far. It suggests 49 elements, arranged in a table so that related concepts are adjacent. The columns are families, or general categories of cognition such as “Perception”, “Reasoning”, and “Learning”, so all the types of perception or reasoning are stacked in one column. The rows are sorted by stages in a cycle of thought. The earlier stages (such as “sensing”) are at the top, while later stages in the cycle (such as “reflect & align”) are at the bottom. So, for example, in the family of “Safety”, the AIs will tend to do the estimation of uncertainty first, later do verification, and only get to a theory of mind at the end.

The chart is colored according to how much progress we’ve made on each element. Red indicates we can synthesize that element in a robust way. Orange means we can kind of make it work with the right scaffolding. Yellow reflects promising research without operational generality yet.
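The layout described above can be sketched as a simple data structure: a grid keyed by family (column) and stage of the thought cycle (row), with a maturity color per cell. This is my own illustration only; the element names below are hypothetical placeholders, not the actual entries in the ChatGPT-generated chart.

```python
# Sketch of a "periodic table of cognition" as a data structure.
# Families are columns, stages of the thought cycle are rows, and each
# cell carries a maturity color: red = robustly synthesizable,
# orange = works with scaffolding, yellow = promising research.
# Element names are hypothetical placeholders for illustration.

TABLE = {
    ("Perception", "sensing"): ("pattern perception", "red"),
    ("Reasoning", "deliberate"): ("logical deduction", "orange"),
    ("Learning", "reflect & align"): ("long-term memory", "orange"),
    ("Safety", "sensing"): ("uncertainty estimation", "red"),
    ("Safety", "reflect & align"): ("theory of mind", "yellow"),
}

def elements_in_family(table, family):
    """Return the elements stacked in one column (family), top to bottom."""
    return [elem for (fam, _), (elem, _) in table.items() if fam == family]

def maturity(table, family, stage):
    """Look up how far along we are at synthesizing one cell."""
    return table[(family, stage)][1]
```

For instance, `elements_in_family(TABLE, "Safety")` lists the Safety column in cycle order, and `maturity(TABLE, "Safety", "reflect & align")` returns `"yellow"`, reflecting that theory of mind is still at the research stage.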

I suspect many of these elements are not as distinct as shown here (taxonomically I am more of a lumper than a splitter), and I would expect this collection omits many types we are soon to discover, but as a start, this prototype chart serves its purpose: it reveals the complexity of intelligence. It is clear intelligence is compounded along multiple dimensions. We will engineer different AIs to have different combinations of different elements in different strengths. This will produce thousands of types of possible minds. We can see that even today different animals have their own combination of cognitive primitives, arranged in a pattern unique to their species’ needs. In some animals some of the elements — say long-term memory — may exceed our own in strength; of course they lack some elements we have.

With the help of AI, we are discovering what these elements of cognition are. Each advance illuminates a bit of how minds work and what is needed to achieve results. If the discovery of electricity and atoms has anything to teach us now, it is that we are probably very far from having discovered the complete set of cognitive elements. Instead we are at the stage of believing in ethers, instantaneous action, and phlogiston – a few of the incorrect theories of electricity the brightest scientists believed.

Almost no thinker, researcher, experimenter, or scientist at that time could see the true nature of electricity, electromagnetism, radiation, and subatomic particles, because the whole picture was hugely unintuitive. Waves, force fields, and the particles of atoms did not make sense (and still do not make common sense). It required sophisticated mathematics to truly comprehend, and even after Maxwell described it mathematically, he found it hard to visualize.

I expect the same from intelligence. Even after we identify its ingredients, the emergent properties they generate are likely to be obscure and hard to believe, hard to visualize. Intelligence is unlikely to make common sense. 

A century ago, our use of electricity ran ahead of our understanding of it. We made motors from magnets and coiled wire without understanding why they worked. Theory lagged behind practice. As with electricity, our employment of intelligence exceeds our understanding of it. We are using LLMs to answer questions or to code software without having a theory of intelligence. A real theory of intelligence is so lacking that we don’t know how our own minds work, let alone the synthetic ones we can now create.

The theory of the atomic world needed the knowledge of the periodic table of elements. You had to know all (or at least most) of the parts to make falsifiable predictions of what would happen. The theory of intelligence requires knowledge of all the elemental parts, which we have only slowly begun to identify, before we can predict what might happen next.

The Trust Quotient (TQ)

2025-09-03 04:15:23

Wherever there is autonomy, trust must follow. If we raise children to go off on their own, they need to be autonomous and we need to trust them. (Parenting is a school for learning how to trust.) If we make a system of autonomous agents, we need lots of trust between agents. If I delegate decisions to an AI, I then have to trust it, and if that AI relies on other AIs, it must trust them. Therefore we will need to develop a very robust trust system that can detect, verify, and generate trust between humans and machines, and more importantly between machines and machines.

Applicable research in trust follows two directions: understanding better how humans trust each other, and applying some of those principles in an abstract way into mechanical systems. Technologists have already created primitive trust systems to manage the security of data clouds and communications. For instance, should this device be allowed to connect? Can it be trusted to do what it claims it can do? How do we verify its identity, and its behavior? And so on.

So far these systems are not dealing with adaptive agents, whose behaviors and IDs and abilities are far more fluid, opaque, shifting, and also more consequential. That makes trusting them more difficult and more important.

Today when I am shopping for an AI, accuracy is the primary quality I am looking for. Will it give me correct answers? How much does it hallucinate? These qualities are proxies for trust. Can I trust the AI to give me an answer that is reliable? As AIs start to do more, to go out into the world to act, to make decisions for us, their trustworthiness becomes crucial.

Trust is a broad word that will be unbundled as it seeps into the AI ecosystem. Part security, part reliability, part responsibility, and part accountability, these strands will become more precise as we synthesize and measure them. Trust will be something we’ll be talking a lot more about in the coming decade.

As the abilities and skills of AI begin to differentiate – some are better for certain tasks than others – reviews of them will begin to include their trustworthiness. Just as other manufactured products have specs that are advertised – such as fuel efficiency, or gigabytes of storage, pixel counts, or uptime, or cure rates – so the vendors of AIs will come to advertise the trust quotient of their agents. How reliably reliable are they? Even if this quality is not advertised it needs to be measured internally, so that the company can keep improving it.

When we depend on our AI agent to book vacation tickets, or renew our drug prescriptions, or to get our car repaired, we will be placing a lot of trust in them. It is not hard to imagine occasions where an AI agent can be involved in a life or death decision. There may even be legal liability consequences for how much we can expect to trust AI agents. Who is responsible if the agent screws up?

Right now, AIs own no responsibilities. If they get things wrong, they don’t guarantee to fix it. They take no responsibility for the trouble they may cause with their errors. In fact, this is currently the key difference between human employees and AI workers. The buck stops with the humans. They take responsibility for their work; you hire humans because you trust them to get the job done right. If it isn’t done right, they redo it, and they learn how to not make that mistake again. Not so with current AIs. This makes them hard to trust.

AI agents will form a network, a system of interacting AIs, and that system can assign a risk factor for each task. Some tasks, like purchasing airline tickets or assigning prescription drugs, would have risk scores reflecting potential negative outcomes versus positive convenience. Each AI agent itself would have a dynamic risk score depending on its permissions. Agents would also accumulate trust scores based on their past performance. Trust is very asymmetrical; it can take many interactions over a long time to accumulate, but it can be lost instantly, with a single mistake. The trust scores would be constantly changing, and tracked by the system.
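This asymmetry can be made concrete with a toy update rule (my own illustration, not an actual scoring system): trust creeps up slowly with each successful interaction, approaching but never exceeding a ceiling, while a single failure wipes out a large fraction of whatever has accumulated.

```python
def update_trust(score, success, gain=0.02, penalty=0.5):
    """Asymmetric trust update: slow accumulation, instant loss.

    score:   current trust in [0, 1]
    success: whether the agent completed the task correctly
    gain:    small fraction of remaining headroom earned per success
    penalty: large fraction of accumulated trust lost per failure
    (all parameter values are arbitrary, for illustration only)
    """
    if success:
        return score + gain * (1.0 - score)   # creep toward 1.0
    return score * (1.0 - penalty)            # collapse at once

score = 0.5
for _ in range(100):                    # a long run of good behavior...
    score = update_trust(score, True)
high = score                            # now above 0.9
score = update_trust(score, False)      # ...largely undone by one mistake
```

After one hundred successes the score climbs above 0.9; the single failure that follows cuts it in half, so an agent must be reliably reliable, not just occasionally so.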

Most AI work will be done invisibly, as agent to agent exchanges. Most of the output generated by an average AI agent will only be seen and consumed by another AI agent, one of trillions. Very little of the total AI work will ever be seen or noticed by humans. The number of AI agents that humans interact with will be very few, although they will loom in importance to us. While the AIs we engage with will be rare statistically, they will matter to us greatly, and their trust will be paramount.

In order to win that trust from us, an outward facing AI agent needs to connect with AI agents it can also trust, so a large part of its capabilities will be the skill of selecting and exploiting the most trustworthy AIs it can find. We can expect whole new scams, including fooling AI agents into trusting hollow agents, faking certificates of trust, counterfeiting IDs, spoofing tasks. Just as in the internet security world, an AI agent is only as trustworthy as its weakest sub-agent. And since sub-tasks can be assigned for many levels down, managing quality will be a prime effort for AIs.

Assigning correct blame for errors and rectifying mistakes also becomes a hugely marketable skill for AIs. All systems – including the best humans – make mistakes; no system can be mistake-proof. So a large part of high trust is accountability in mending one’s errors. The highest-trusted agents will be those capable (and trusted!) to fix the mistakes they make, with sufficient smart power to make amends and get it right.

Ultimately the degree of trust we give to our prime AI agent — the one we interact with all day every day — will be a score that is boasted about, contested, shared, and advertised widely. In other domains, like a car or a phone, we take reliability for granted. But because AI is so much more complex and personal than other products and services in our lives today, the trustworthiness of AI agents will be crucial and an ongoing concern. Its trust quotient (TQ) may be more important than its intelligence quotient (IQ). Picking and retaining agents with high TQ will be very much like hiring and keeping key human employees.

However, we tend to avoid assigning numerical scores to humans. The AI agent system, on the other hand, will have all kinds of metrics we will use to decide which ones we want to help run our lives. The highest-scoring AIs will likely be the most expensive ones as well. There will be whispers of ones with nearly perfect scores that you can’t afford. However, AI is a system that improves with increasing returns, which means the more it is used, the better it gets, so the best AIs will be among the most popular AIs. Billionaires use the same Google we use, and are likely to use the same AIs as us, though they might have intensely personalized interfaces for them. These, too, will need to have the highest trust quotients.

Every company, and probably every person, will have an AI agent that represents them inside the AI system to other AI agents. Making sure your personal rep agent has a high trust score will be part of your responsibility, because some AI agents won’t engage with agents that have low TQs. It is a little bit like a credit score for AI agents; you will want a high TQ for yours. This is not the same thing as having a personal social score (like the Chinese are reputed to have). This is not your score, but the TQ score of your agent, which represents you to other agents. You could have a robust social-score reputation, but your agent could be lousy. And vice versa.

In the coming decades of the AI era, TQ will be seen as more important than IQ.

Emotional Agents

2025-08-24 05:57:26

Many people have found the intelligence of AIs to be shocking. This will seem quaint compared to a far bigger shock coming: highly emotional AIs. The arrival of synthetic emotions will unleash disruption, outrage, disturbance, confusion, and cultural shock in human society that will dwarf the fuss over synthetic intelligence. In the coming years the story headlines will shift from “everyone will lose their job” (they won’t) to “AI partners are the end of civilization as we know it.”

We can rationally process the fact that a computer could legitimately be rational. We may not like it, but we could accept the fact that a computer could be smart, in part because we have come to see our own brains as a type of computer. It is hard to believe they could be as smart as we are, but once they are, it kind of makes sense.

Accepting machine-made creativity is harder. Creativity seems very human, and it is in some ways perceived as the opposite of rationality, and so it does not appear to belong to machines, as rationality does.

Emotions are interesting because emotions clearly are not only found in humans, but in many, many animals. Any pet owner could list the ways in which their pets perceive and display emotions. Part of the love of animals is being able to resonate with them emotionally. They respond to our emotions as we respond to theirs. There are genuine, deep emotional bonds between human and animal.

Those same kinds of emotional bonds are coming to machines. We see glimmers of it already. Nearly every week a stranger sends me logs of their chats with an AI demonstrating how deep and intuitive they are, how well they understand each other, and how connected they are in spirit. And we get reports of teenagers getting deeply wrapped up with AI “friends.” This is all before any serious work has been done to deliberately embed emotions into the AIs.

Why will we program emotions into AIs? For a number of reasons:

First, emotions are a great interface for a machine. They make interacting with one much more natural and comfortable. Emotions are easy for humans. We don’t have to be taught how to act; we all intuitively understand signals such as praise, enthusiasm, doubt, persuasion, surprise, perplexity – which a machine may want to use. Humans use subtle emotional charges to convey non-verbal information, importance, and instruction, and AIs will use similar emotional notes in their instruction and communications.

Second, the market will favor emotional agents, because humans do. AIs and robots will continue to diversify, even as their basic abilities converge, and so their personalities and emotional character will become more important in choosing which one to use. If they are all equally smart, the one that is friendlier, or nicer, or a better companion, will get the job.

Third, a lot of what we hope artificial agents will do, whether they are software AIs or hard robots, will require more than rational calculations. It will not be enough that an AI can code all night long. We are currently overrating intelligence. To be truly creative and capable of innovations, to be wise enough to offer good advice, will require more than IQ. The bots will need sophisticated emotional dynamics deeply embedded in their software.

Is that even possible? Yes.

There are research programs (such as those at MIT) going back decades figuring out how to distill emotions into attributes that can be ported over to machines. Some of this knowledge pertains to ways of visually displaying emotions in hardware, just as we do with our own faces. Other researchers have extracted ways we convey emotion with our voice, and even in the words of a text. Recently we’ve witnessed AI makers tweaking how complimentary and “nice” their agents are, because some users didn’t like a new personality, while others simply did not like the fact that the personality changed. While we can definitely program in personality and emotions, we don’t yet know which ones work best for a particular task.

Machines displaying emotions is only half of the work. The other half is detection and comprehension of human emotions by machines. Relationships are two-way, and in order to truly be an emotional agent, it must get good at picking up your emotions. There has been a lot of research in that field, primarily in facial recognition – not just your identity, but how you are feeling. There are commercially released apps that can watch a user at their keyboard and detect whether they are depressed, or undergoing emotional stress. The extrapolation of that will be smart glasses that not only look out, but at the same time look back at your face to parse your emotions. Are you confused, or delighted? Surprised, or grateful? Determined, or relaxed? Already, Apple’s Vision Pro has backward-facing cameras in its goggles that track your eyes and microexpressions such as blinks and eyebrow raises. Current text LLMs make no attempt to detect your emotional state, except what can be gleaned from the letters in your prompt, but it is not technically a huge jump to do that.
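Even without cameras, a crude version of this kind of detection is possible from prompt text alone. Here is a toy sketch in Python; the word lists and emotion labels are invented for illustration only, and real systems would use trained affect models rather than a hand-made lexicon:

```python
# Toy sketch: guessing a user's emotional state from prompt text alone.
# The lexicon below is entirely made up for illustration; it is not from
# any real affect dataset or commercial product.

EMOTION_LEXICON = {
    "frustrated": {"stuck", "again", "broken", "why", "ugh"},
    "delighted": {"love", "great", "thanks", "awesome", "perfect"},
    "anxious": {"worried", "deadline", "urgent", "afraid", "help"},
}

def guess_emotion(prompt: str) -> str:
    """Return the emotion whose cue words appear most often in the prompt."""
    words = set(prompt.lower().split())
    scores = {emotion: len(words & cues) for emotion, cues in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(guess_emotion("ugh why is this broken again"))  # frustrated
print(guess_emotion("Summarize this contract"))       # neutral
```

The point of the sketch is only that the signal is already sitting in the text; an LLM that attends to these cues, rather than ignoring them, is a small technical step.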

In the coming years there will be lots of emotional experiments. Some AIs will be curt and logical; some will be talkative and extroverted. Some AIs will whisper, and only talk when you are ready to listen. Some people will prefer loud, funny, witty AIs that know how to make them laugh. And many commercial AIs will be designed to be your best friend.

We might find that admirable for an adult, but scary for a child. Indeed, there are tons of issues to be wary of when it comes to AIs and kids, not just emotions. But emotional bonds will be a key consideration in children’s AIs. Very young human children already can bond with, and become very close to inert dolls and teddy bears. Imagine if a teddy bear talked back, played with infinite patience, and mirrored their emotions. As the child grows it may not ever want to surrender the teddy. Therefore the quality of emotions in machines will likely become one of those areas where we have very different regimes, one for adults and one for children. Different rules, different expectations, different laws, different business models, etc.

But even adults will become very attached to emotional agents, very much like the movie Her. At first society will brand those humans who get swept up in AI love as delusional or mentally unstable. But just as most of the people who have deep love for a dog or cat are not broken, but well adjusted and very empathetic beings, so most of the humans that will have close relationships with AIs and bots will likewise see these bonds as wholesome and broadening.

The common fear about cozy relationships with machines is that they may be so nice, so smart, so patient, so available, so much more helpful than other humans around, that people will withdraw from human relationships altogether. That could happen. It is not hard to imagine well-intentioned people only consuming the “yummy easy friendships” that AIs offer, just as they are tempted to consume only the yummy easy calories of processed foods. The best remedy for this temptation is similar to that for fast food: education and better choices. Part of growing up in this new world will be learning to discern the difference between pretty, perfect relationships and messy, difficult, imperfect human ones, and the value the latter give. To be your best, whatever your definition, requires that you spend time with humans!

Rather than ban AI relationships (or fast food), you moderate them, and keep them in perspective. Because in fact, the “perfect” behavior of an AI friend, mentor, coach, or partner can be a great role model. If you surround yourself with AIs that have been trained and tweaked to be the best that humans can make, this is a fabulous way to improve yourself. The average human has very shallow ethics and contradictory principles, and is easily swayed by their own base desires and circumstances. In theory, we should be able to program AIs to have better ethics and principles than the average human. In the same way, we can engineer AIs to be a better friend than the average human. Having these educated AIs around can help us to improve ourselves, and to become better humans. And the people who develop deep relationships with them have a chance to be the most well-adjusted and empathetic people of all.

The argument that the AIs’ emotions are not real because “the bots can’t feel anything” will simply be ignored, just like the criticism that artificial intelligence is not real intelligence because the machines don’t understand. It doesn’t matter. We don’t understand what “feeling” really means, and we don’t even understand what “understand” means. These are terms and notions that are habitual but no longer useful. AIs do real things we used to call intelligence, and they will start doing real things we used to call emotions. Most importantly, the relationships humans will have with AIs, bots, and robots will be as real and as meaningful as any other human connection. They will be real relationships.

But the emotions that AIs/bots have, though real, are likely to be different. Real, but askew. AIs can be funny, but their sense of humor is slightly off, slightly different. They will laugh at things we don’t. And the way they are funny will gradually shift our own humor, in the same way that the way they play chess and go has now changed how we play those games. AIs are smart, but in an unhuman way. Their emotionality will be similarly alien, since AIs are essentially artificial aliens. In fact, we will learn more about what emotions fundamentally are from observing them than we have learned from studying ourselves.

Emotions in machines will not arrive overnight. The emotions will gradually accumulate, so we have time to steer them. They begin with politeness, civility, niceness. They praise and flatter us, easily, maybe too easily. The central concern is not whether our connection with machines will be close and intimate (they will), nor whether these relationships are real (they are), nor whether they will preclude human relationships (they won’t), but rather who does your emotional agent work for? Who owns it? What is it being optimized for? Can you trust it to not manipulate you? These are the questions that will dominate the next decade.

Clearly the most sensitive data about us would be information stemming from our emotions. What are we afraid of? What exactly makes us happy? What do we find disgusting? What arouses us? After spending all day for years interacting with our always-on agent, said agent would have a full profile of us. Even if we never explicitly disclosed our deepest fears, our most cherished desires, and our most vulnerable moments, it would know all this just from the emotional valence of our communications, questions, and reactions. It would know us better than we know ourselves. This will be a common refrain in the coming decades, repeated in both exhilaration and terror: “My AI agent knows me better than I know myself.”

In many cases this will be true. In the best-case scenario we use this tool to know ourselves better. In the worst case, this asymmetry in knowledge will be used to manipulate us, and to amplify our worst selves. I see no evidence that we will cease including AIs in our lives, hourly, if not by the minute. (There will be exceptions, like the Amish, who drop out, but they will be a tiny minority.) Most of us, for most of the time, will have an intimate relationship with an AI agent/bot/robot that is always on, ready to help us in any way it can, and that relationship will become as real and as meaningful as any other human connection. We will willingly share the most intimate hours of our lives with it. On average we will lend it our most personal data as long as the benefits of doing so keep coming. (The gating question in data privacy is not really who has my data, but how much benefit do I get? People will share any kind of data if the benefits are great enough.)

Twenty-five years from now, if the people whose constant companion is an always-on AI agent are total jerks, misanthropic bros, and losers, this will be the end of the story for emotional AIs. On the other hand, if people with a close relationship with an AI agent are more empathetic than average, more productive, distinctly unique, well adjusted, with a richer inner life, then this will be the beginning of the story.

We can steer the story to the beginning we want by rewarding those inventions that move us in that direction. The question is not whether AI will be emotional, but how we will use that emotionality.