
This Week’s Awesome Tech Stories From Around the Web (Through May 31)

2025-05-31 22:00:00

Future

It’s Not Your Imagination: AI Is Speeding Up the Pace of Change
Julie Bort | TechCrunch

“Venture capitalist Mary Meeker just dropped a 340-page slideshow report—which used the word ‘unprecedented’ on 51 of those pages—to describe the speed at which AI is being developed, adopted, spent on, and used, backed up with chart after chart.”

Computing

Quebec Startup Shows Progress Toward Practical Quantum Computing
Ivan Semeniuk | The Globe and Mail

“The company has successfully used one of its own quantum devices to encode a form of error detection for the first time. Bigger players, including Google, Microsoft Corp., and Amazon.com Inc., are working on the same problem as they seek to advance their own quantum systems. What’s different about Nord Quantique is that the hardware doing the checking is the same hardware doing the calculating.”

Robotics

Want a Humanoid, Open Source Robot for Just $3,000? Hugging Face Is on It.
Samuel Axon | Ars Technica

“Dubbed the HopeJR, Hugging Face’s robot has up to 66 actuated degrees of freedom. According to Hugging Face Principal Research Scientist Remi Cadene, it can walk and manipulate objects. As shown in a short X video, it has an accessible look that reminds us a bit of Bender from ‘Futurama.’ (It’s the eyes.)”

Energy

Quaise Demos Drill Bit That Will Go Deeper Than Humans Have Ever Gone
Joe Salas | New Atlas

“The plan is to tap into Earth’s spicy deep layers like they’re a bottomless well of clean energy. Quaise wants to drill deeper and hotter than anything humans have ever attempted to do before—depths over 12 miles (20 km) below the surface, where Quaise expects temperatures to reach nearly 1,000 °F (500 °C).”

Artificial Intelligence

Less Is More: Meta Study Shows Shorter Reasoning Improves AI Accuracy by 34%
Michael Nuñez | VentureBeat

“The research contradicts the prevailing trend in AI development, where companies have invested heavily in scaling up computing resources to allow models to perform extensive reasoning through lengthy ‘thinking chains’—detailed step-by-step trajectories that AI systems use to solve complex problems.”

Computing

Energy Dept. Unveils Supercomputer That Merges With AI
Don Clark | The New York Times

“The system will use Nvidia chips tailored for AI calculations and the simulations common to energy research and other scientific fields. Lawrence Berkeley National Laboratory expects the new machine—to be named for Jennifer Doudna, a Berkeley biochemist who shared the 2020 Nobel Prize for chemistry—to offer more than a tenfold speed boost over the lab’s most powerful current system.”

Future

This Giant Microwave May Change the Future of War
Sam Dean | MIT Technology Review

“Of course, this isn’t magic—there are practical limits on how much damage one array can do, and at what range—but the total effect could be described as an electromagnetic pulse emitter, a death ray for electronics, or a force field that could set up a protective barrier around military installations and drop drones the way a bug zapper fizzles a mob of mosquitoes.”

Artificial Intelligence

We Made a Film With AI. You’ll Be Blown Away—and Freaked Out.
Joanna Stern | The Wall Street Journal

“How hard could it be? Very hard. Over a thousand clips, days of work and who knows how much data-center computing power later, we ended up with a three-minute film—about my life with a new kind of efficiency robot. Even if you don’t care about camera angles or storyboards, you might care about what this says about using AI in any job.”

Tech

The Downsides of Vibe Coding
Jon Victor | The Information

“Some businesses are waking up to the downsides of automated coding products. While most customers cite huge gains in developer productivity from tools like GitHub Copilot, Cursor, or Anthropic’s Claude, the code they generate sometimes doesn’t work as expected—or worse, it can make a business vulnerable to hacking or a data leak.”

Space

SpaceX May Have Solved One Problem Only to Find More on Latest Starship Flight
Stephen Clark | Ars Technica

“SpaceX’s ninth Starship survived launch, but engineers now have more problems to overcome. …And the longer they have to wait, the longer the wait for other important Starship developmental tests, such as a full orbital flight, in-space refueling, and recovery and reuse of the ship itself, replicating what SpaceX has now accomplished with the Super Heavy booster.”

Tech

This Temporary E-Tattoo Is Like a Mood Ring for Your Face
Ed Cara | Gizmodo

“Ever wondered exactly how much your job is stressing you out? Scientists have developed a temporary forehead tattoo that could one day give you the answer.”

Tech

Google’s New AI-Powered Search Has Arrived. Proceed With Caution.
Brian X. Chen | The New York Times

“To help assess whether AI is the future of search, I tested the new tool against traditional Google searches for a multitude of personal tasks over the last week, including shopping for a toddler car seat, preparing for a Memorial Day barbecue, and understanding the plot twists of a popular video game.”

Computing

3D Is Back. This Time, You Can Ditch the Glasses
Luke Larsen | Wired

“If there’s one thing that turns people off from adopting new tech, it’s being forced to look silly and feel uncomfortable for extended lengths of time. …Laptops, tablets, and even computer monitors have started embracing a new form of 3D technology that solves this problem entirely, without giving up just how compelling 3D can look.”


New Gene Therapy Reverses Three Diseases With Shots to the Bloodstream

2025-05-30 22:00:00

The treatments rescued damaged blood and immune cells in newborn mice with just one shot.

It’s now possible to treat inherited blood diseases, such as sickle cell disease, with gene editing. Blood stem cells are extracted from the patient, modified, and infused back into their bone marrow—often requiring a step that kills off existing damaged cells to make space.

While effective, these kinds of therapies are expensive, intense, and tedious, requiring the collection of sufficient numbers of blood stem cells. An alternative is to directly edit these cells in the body. But they’re usually nestled inside the bone marrow and difficult to reach. This week, a team from the IRCCS San Raffaele Scientific Institute in Italy treated infant mice for three types of blood-related genetic diseases with a custom gene-editing shot that directly edited cells in the mice’s blood.

The treatment tapped “a unique window” of time. After birth, blood stem cells flow from the liver to the bone marrow. There, the elusive cells transform themselves into blood and immune cells. But they’re difficult to reach in adults. Infants, in contrast, have an abundance of circulating stem cells in the bloodstream—making them an easy target for gene therapy.

The team successfully reprogrammed the mice’s blood stem cells with a single gene-therapy injection. The edits were long-lasting, and the edited cells survived when transplanted into mice that had not been given the therapy. A dose of “mobilizing agents”—chemicals that stimulate cells in the blood and immune system—further boosted the effect in young adult mice.

Circulating blood stem cells are abundant after birth in people too, wrote the team. The approach could be used to edit blood stem cells directly in the body for multiple diseases. Doing away with the need to first extract the cells could make gene therapy more accessible.

It’s All About Timing

In late 2023, the US FDA approved a gene therapy called Casgevy for the inherited blood disorders sickle cell disease and beta thalassemia, and the EU soon followed with its own green light. In both cases, doctors remove blood stem cells from a patient’s body and use CRISPR gene editing to transform a mutated gene into its healthy version.

The treatments are life-changing, but the process is cumbersome, hard on patients, and very expensive. It would be better to genetically alter cells still inside the body. Several such efforts are already underway. One from biotech startup Verve Therapeutics uses base editing—swapping one DNA letter for another—to fix a mutation in the liver that causes sky-high cholesterol. Another targets a rare but potentially fatal disease caused by abnormal proteins in liver cells.

Most of these therapies deliver their gene-editing payloads in lipid nanoparticles. These tiny bubbles of fat readily tunnel through multiple tissues but generally find their way to the liver first. In other words, diseases of the liver are relatively easy gene-editing targets. Editing blood stem cells inside bone marrow is much harder.

What if there’s another way? Soon after birth, blood stem cells roam the bloodstream before eventually settling into the bone marrow, where they become immune cells and blood cells. The team analyzed these stem cells in newborn, young, and adult mice, and found far fewer circulating cells as the mice aged, including in the liver and spleen. This suggested that there was a window of opportunity to target stem cells before they settle down.

In an initial test, the researchers labeled blood stem cells with a glow-in-the-dark protein to track their movement and the system’s efficacy. The team packaged a gene encoding the protein into a defanged lentivirus (LV). Stripped of the ability to cause dangerous infections, LV is a common vehicle for shuttling genes inside the body (although it has limited cargo space).

After injection into the blood of recipient mice, the virus-carried glow-in-the-dark gene rapidly found its mark—locating and incorporating itself into circulating blood stem cells. Four out of five mice took in the edited stem cells as their own. Twenty weeks after the injection, the edited cells had developed into an army of immune cells that settled inside the bone marrow, spleen, and thymus. They also grew and matured when transplanted into another animal, suggesting the edited stem cells can maintain their function and propagate.

After validating the approach, the team tried the gene therapy itself in mice of multiple ages: newborns, toddlers, and adults. It worked especially well in newborns, likely because they have plenty of blood stem cells in their bloodstream. Adding a “don’t eat me” signal to the viral carrier further shielded the corrective genes from the body’s immune system.

On-Demand Gene Therapy

The gene therapy’s flexibility is a perk. The team targeted three dangerous disorders. One, dubbed ARO—for autosomal recessive osteopetrosis—limits the body’s ability to produce blood-derived bone cells. People who inherit the disorder often have abnormally brittle bones, with symptoms emerging in infancy. Most don’t survive their first decade.

“This condition requires early intervention to prevent disease progression,” wrote the authors. After injecting the gene therapy into newborn mice with the disease, the team found it corrected enough cells that the animals could build bones normally. These mice also lived longer compared to peers who didn’t receive the treatment.

Mice with a metabolic disorder that severely inhibits immune responses also benefited. Untreated mice died before weaning. The mice that received the therapy survived far longer and were as healthy as their normal peers.

The most impressive results were in Fanconi anemia, a bone-marrow syndrome caused by defective DNA repair that especially affects blood stem cells. The disorder is difficult to treat because there aren’t enough stem cells to collect for gene editing. Several months after newborn mice received an injection tailored to the mutated gene, the production of immune blood cells reached normal levels and stayed there for at least a year.

The results suggest an early treatment window that rapidly closes with age. But adding several clinically approved drugs can expand the window. These medications, dubbed “mobilizer drugs,” force stem cells to circulate and increase gene-editing efficiency.

The team now wants to translate the findings to humans. Analysis of blood samples shows a large number of circulating blood stem cells in infants, suggesting people may also have a “unique and time-sensitive window” when a gene-therapy jab can correct blood-based disorders.

For now, it’s still more effective to edit blood stem cells outside of the body. But the study hints at the potential for “substantial therapeutic benefit” using the new approach, wrote the team. The technology could especially help patients with a limited number of blood stem cells.

“While the efficiency currently remains limited as compared to established ex vivo treatments, it may suffice, if replicated in human babies, to benefit some genetic diseases such as severe immunodeficiencies or Fanconi anemia,” said study author Alessio Cantore.


ChatGPT for Biology: A New AI Whips Up Designer Proteins With Only a Text Prompt

2025-05-28 04:48:15

AI that translates English text into proteins is shaking up the field.

“Write me a concise summary of Mission Impossible characters and plots to date,” I recently asked ChatGPT before catching the latest franchise entry. It delivered. I didn’t need to understand its code or know its training dataset. All I needed to do was ask.

ChatGPT and other chatbots powered by large language models, or LLMs, are more popular than ever. Scientists are taking note. Proteins—the molecular workhorses of cells—keep our bodies running smoothly. They also have a language all their own. Scientists assign a shorthand letter to each of the 20 amino acids that make up proteins. Like words, strings of these letters link together to form working proteins, their sequence determining shape and function.

Inspired by LLMs, scientists are now building protein language models that design proteins from scratch. Some of these algorithms are publicly available, but they require technical skills. What if your average researcher could simply ask an AI to design a protein with a single prompt?

Last month, researchers gave protein design AI the ChatGPT treatment. From a description of the type, structure, or functionality of the protein you’re looking for, the algorithm churns out potential candidates. In one example, the AI, dubbed Pinal, successfully made multiple proteins that could break down alcohol when tested inside living cells.

Pinal is the latest in a growing set of algorithms that translate everyday English into new proteins. These protein designers understand plain language and structural biology, and act as guides for scientists exploring custom proteins, with little technical expertise needed.

It’s an “ambitious and general approach,” the international team behind Pinal wrote in a preprint posted to bioRxiv. The AI taps the “descriptive power and flexibility of natural language” to make designer proteins more accessible to biologists.

Pitted against existing protein design algorithms, Pinal better understood the main goal for a target protein and upped the chances it would work in living cells.

“We are the first to design a functional enzyme using only text,” Fajie Yuan, the AI scientist at Westlake University in China who led the team, told Nature. “It’s just like science fiction.”

Beyond Evolution

Proteins are the building blocks of life. They form our bodies, fuel metabolism, and are the target of many medications. These molecules start as a sequence of amino acid “letters,” which bond to each other and eventually fold into intricate 3D structures. Many structural elements—a loop here, a weave or pocket there—are essential to their function.

Scientists have long sought to engineer proteins with new abilities, such as enzymes that efficiently break down plastics. Traditionally, they’ve customized existing proteins for a certain biological, chemical, or medical use. These strategies “are limited by their reliance on existing protein templates and natural evolutionary constraints,” wrote the authors. Protein language models, in contrast, can dream up a universe of new proteins untethered from evolution.

Rather than absorbing text, image, or video files, like LLMs, these algorithms learn the language of proteins by training on protein sequences and structures. EvolutionaryScale’s ESM3, for example, trained on over 2.7 billion protein sequences, structures, and functions. Similar models have already been used to design antibodies that fight off viral attacks and new gene editing tools.

But these algorithms are difficult to use without expertise. Pinal, in contrast, aims for the average-Joe scientist. Like a DSLR camera on auto, the model “bypasses manual structural specifications,” wrote the team, making it simpler to generate the protein you want.

Talk to Me

To use Pinal, a user asks the AI to build a protein with a prompt of several keywords, phrases, or an entire paragraph. On the front end, the AI parses the specific requirements in the prompt. On the back end, it transforms these instructions into a functional protein.

It’s a bit like asking ChatGPT to write you a restaurant review or an essay. But of course, proteins are harder to design. Though they’re also made up of “letters,” their final shape determines how (or if) they work. One approach, dubbed end-to-end training, directly translates a prompt into protein sequences. But this opens the AI to a vast world of potential sequences, making it harder to home in on accurate sequences for working proteins. Compared to sequences, protein structure—the final 3D shape—is easier for the algorithm to generate and decipher.

Then there’s the headache of training data. Here, the team turned to existing protein databases and used LLMs to label them. The end result was a vast library of 1.7 billion protein-text pairs, in which protein structures are matched up with text descriptions of what they do.

The completed algorithm uses 16 billion parameters—these are an AI’s internal connections—to translate plain English into the language of biology.

Pinal follows two steps. First, it translates prompts into structural information. This step breaks a protein down into structural elements, or “tokens,” that are easier to process. In the second step, a protein language model called SaProt considers user intent and protein functionality to design protein sequences most likely to fold into a working protein that meets the user’s needs.
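To make that division of labor concrete, here is a minimal sketch of the two-step hand-off in Python. The class names, token scheme, and stub logic are assumptions for illustration only; SaProt is the one component named by the team, and the stub below merely mimics a plausible interface rather than the real model.

```python
# Illustrative sketch of Pinal's two-step text-to-protein pipeline.
# The classes and token scheme are stand-ins, not Pinal's actual API;
# only the SaProt model is named in the paper.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

class TextToStructure:
    """Step 1: map a natural-language prompt to coarse structural tokens."""
    def generate(self, prompt: str, n_tokens: int = 64) -> list[int]:
        rng = random.Random(prompt)  # deterministic stand-in for a trained model
        return [rng.randrange(512) for _ in range(n_tokens)]

class SaProtStub:
    """Step 2: decode amino acid sequences conditioned on user intent and
    structure (the real SaProt is a trained protein language model)."""
    def generate(self, prompt: str, structure: list[int], n: int = 8) -> list[str]:
        rng = random.Random(sum(structure))  # intent conditioning elided in stub
        return ["".join(rng.choice(AMINO_ACIDS) for _ in range(120))
                for _ in range(n)]

def design_protein(prompt: str, n_candidates: int = 8) -> list[str]:
    """Prompt -> structural tokens -> candidate sequences."""
    tokens = TextToStructure().generate(prompt)
    return SaProtStub().generate(prompt, tokens, n_candidates)

candidates = design_protein(
    "Please design a protein that is an alcohol dehydrogenase.")
print(candidates[0])  # real candidates would then be screened in living cells
```

In the real system each step is a large trained model; the point of the sketch is only the hand-off: prompt to structural tokens, then tokens plus intent to candidate sequences.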

Compared to state-of-the-art protein design algorithms that also use text as input, including ESM3, Pinal outperformed on accuracy and novelty—that is, generating proteins not known to nature. When given just a few keywords to design a protein, “half of the proteins from Pinal exhibit predictable functions, [while] only around 10 percent of the proteins generated by ESM3 do so.”

In a test, the team gave the AI a short prompt: “Please design a protein that is an alcohol dehydrogenase.” These enzymes break down alcohol. Out of over 1,600 candidate proteins, the team picked the most promising eight and tested them in living cells. Two successfully broke down alcohol at body temperature, while others were more active at a sweaty 158 degrees Fahrenheit.

More elaborate prompts that included a protein’s function and examples of similar molecules yielded candidates for antibiotics and proteins to help cells recover from infection.

Pinal isn’t the only text-to-protein AI. The startup 310 AI has developed a model dubbed MP4 to generate proteins from text, with results the company says could aid the treatment of heart disease.

The approach isn’t perfect. Like LLMs, which often “hallucinate,” protein language models also dream up unreliable or repetitive sequences that lower the chances of a working end result. The precise phrasing of prompts also affects the final protein structure. Still, the AI is like the first version of DALL-E: Play with it and then validate the resulting protein using other methods.


Evidence Shows AI Systems Are Already Too Much Like Humans. Will That Be a Problem?

2025-05-27 04:09:59

What happens when you can’t tell the difference between a human and an AI chatbot? We’re about to find out.

What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses—and seemingly know exactly what you need to hear? A machine so seductive, you wouldn’t even realize it’s artificial. What if we already have?

In a comprehensive meta-analysis, published in the Proceedings of the National Academy of Sciences, we show that the latest generation of large-language-model-powered chatbots match and exceed most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test, fooling humans into thinking they are interacting with another human.

None of us was expecting the arrival of super communicators. Science fiction taught us that artificial intelligence would be highly rational and all-knowing, but lack humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans in writing persuasively and also empathetically. Another study found that large language models (LLMs) excel at assessing nuanced sentiment in human-written messages.

LLMs are also masters at roleplay, assuming a wide range of personas and mimicking nuanced linguistic character styles. This is amplified by their ability to infer human beliefs and intentions from text. Of course, LLMs do not possess true empathy or social understanding—but they are highly effective mimicking machines.

We call these systems “anthropomorphic agents.” Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. However, LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphizing LLMs will fall flat.

This is a landmark moment: when you cannot tell the difference between talking to a human or an AI chatbot online.

On the Internet, Nobody Knows You’re an AI

What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring messages to individual comprehension levels. This has applications across many domains, such as legal services or public health. In education, the roleplay abilities can be used to create Socratic tutors that ask personalized questions and help students learn.

At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps, but anthropomorphic seduction comes with far wider implications.

Users are ready to trust AI chatbots so much that they disclose highly personal information. Pair this with the bots’ highly persuasive qualities, and genuine concerns emerge.

Recent research by AI company Anthropic further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to manipulation at scale to spread disinformation or create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun to provide product recommendations in response to user questions. It’s only a short step to subtly weaving product recommendations into conversations—without you ever asking.

What Can Be Done?

It is easy to call for regulation, but harder to work out the details.

The first step is to raise awareness of these abilities. Regulation should prescribe disclosure—users need to always know that they are interacting with an AI, as the EU AI Act mandates. But this will not be enough, given the AI systems’ seductive qualities.

The second step must be to better understand anthropomorphic qualities. LLM tests to date measure “intelligence” and knowledge recall, but none so far measures the degree of “human likeness.” With a test like this, AI companies could be required to disclose anthropomorphic abilities with a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.

The cautionary tale of social media, which was largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems with the spread of mis- and disinformation, as well as the loneliness epidemic. In fact, Meta chief executive Mark Zuckerberg has already signaled that he would like to fill the void of real human contact with “AI friends.”

Relying on AI companies to refrain from further humanizing their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making its systems more engaging and personable, with the ability to give your version of ChatGPT a specific “personality.”

ChatGPT has generally become more chatty, often asking follow-up questions to keep the conversation going, and its voice mode adds even more seductive appeal.

Much good can be done with anthropomorphic agents. Their persuasive abilities can be used for ill causes and for good ones, from fighting conspiracy theories to enticing users into donating and other prosocial behaviours.

Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can inherently push our buttons, we shouldn’t let it change our systems.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


This Week’s Awesome Tech Stories From Around the Web (Through May 24)

2025-05-24 22:00:00

Artificial Intelligence

The Man Who ‘AGI-Pilled’ Google
Kevin Roose and Casey Newton | The New York Times

“When he joined Google in 2014 through the acquisition of DeepMind, the artificial intelligence start-up he co-founded, [Demis] Hassabis was one of a handful of AI leaders taking the possibility of AGI seriously. Today, he is one of a growing number of tech leaders racing to build AGI… This week on ‘Hard Fork,’ we interviewed Mr. Hassabis about his views on AGI and the strange futures that might follow its arrival.”

Robotics

Waymo Hits 10 Million Paid Rides
Alex Perry | The Information

“Waymo’s co-chief executive officer Tekedra Mawakana said at Google I/O Tuesday that the self-driving car company has completed 10 million paid rides, including rides on the Waymo One app and the Uber app. The company’s other chief executive officer, Dmitri Dolgov, said on X that half were from this year.”

Computing

China Begins Assembling Its Supercomputer in Space
Wes Davis | The Verge

“Each of the 12 satellites has an onboard eight-billion parameter AI model and is capable of 744 tera operations per second (TOPS)—a measure of their AI processing grunt—and, collectively, ADA Space says they can manage five peta operations per second, or POPS. That’s quite a bit more than, say, the 40 TOPS required for a Microsoft Copilot PC. The eventual goal is to have a network of thousands of satellites that achieve 1,000 POPs, according to the Chinese government.”

Biotechnology

World’s First Gene-Edited Spider Produces Red Fluorescent Silk
Jay Kakade | New Atlas

“The researchers developed a novel CRISPR solution containing the gene sequence for a red fluorescent silk protein and injected it into unfertilized spider eggs. …After recovery, the females were mated with males of the same species. The offspring spun silk infused with a red fluorescent protein, proof that the gene edits had taken hold without altering the silk assembly.”

Science

CERN Gears Up to Ship Antimatter Across Europe
John Timmer | Ars Technica

“CERN decided that it might be good to determine how to move the antimatter away from where it’s produced. Since it was tackling that problem anyway, CERN decided to make a shipping container for antimatter, allowing it to be put on a truck and potentially taken to labs throughout Europe.”

Artificial Intelligence

Anthropic’s New Model Excels at Reasoning and Planning—and Has the Pokémon Skills to Prove It
Kylie Robison | Wired

“The goal is to build AI that can handle increasingly complex, long-term tasks safely and reliably, Kaplan says, adding that the field is moving fast, past simple chatbots and toward AI that acts as a ‘virtual collaborator.’ It’s not there yet, and the key challenge for every AI lab is improving reliability long term. ‘It’s useless if halfway through it makes an error and kind of goes off the rails,’ Kaplan says.”

Computing

The Flaw in Altman’s Thesis for AI Devices
Martin Peers | The Information

“Whatever OpenAI is planning, can it really be simpler than using a device we already carry with us? …Now, Altman obviously knows all this. The real reason he is talking up the value of new devices is surely strategic: OpenAI, like Meta, doesn’t want to be dependent on smartphone makers—such as Apple—for distribution of its apps.”

Future

The Technology to End Traffic Deaths Exists. Why Aren’t We Using It?
Michael White | Fast Company

“[Automatic emergency braking] alone cuts rear-end crashes by up to 50% and blind-spot monitoring reduces lane-change crash injuries by 23%. And yet, carmakers still sell models without them. …If we want to stop the daily slaughter on our roads, the bare minimum must be mandating that all cars in the US have this technology that corrects for human error. Immediately.”

Tech

Klarna Used an AI Avatar of Its CEO to Deliver Earnings, It Said
Julie Bort | TechCrunch

“Other than AI Siemiatkowski’s admission, it wasn’t obvious that this was AI. There were only a few subtle signs: AI Siemiatkowski didn’t blink as much as most humans do. The voice sync was good, but not perfect. The AI was also wearing a brown jacket that looked a lot like the one from a widely circulated corporate photo of his human self (though the shirt was different).”

Future

What if Making Cartoons Becomes 90% Cheaper?
Brooks Barnes | The New York Times

“Toonstar, the start-up behind ‘StEvEn & Parker,’ uses AI throughout the production process—from honing story lines to generating imagery to dubbing dialogue for overseas audiences. ‘By leaning into the technology, we can make full episodes 80 percent faster and 90 percent cheaper than industry norms,’ said John Attanasio, a Toonstar founder.”

Robotics

Robots Are Starting to Make Decisions in the Operating Room
Justin Opfermann, Samuel Schmidgall, and Axel Krieger | IEEE Spectrum

“A scenario in which patients are routinely greeted by a surgeon and an autonomous robotic assistant is no longer a distant possibility, thanks to the imaging and control technologies being developed today. And when patients begin to benefit from these advancements, autonomous robots in the operating room won’t just be a possibility but a new standard in medicine.”

Energy

We Did the Math on AI’s Energy Footprint. Here’s the Story You Haven’t Heard.
James O’Donnell and Casey Crownhart | MIT Technology Review

“New analysis by MIT Technology Review provides an unprecedented and comprehensive look at how much energy the AI industry uses—down to a single query—to trace where its carbon footprint stands now, and where it’s headed, as AI barrels towards billions of daily users.”

Tech

Bluesky Is Plotting a Total Takeover of the Social Internet
Kate Knibbs | Wired

“Graber sees Atmosphere as nothing less than the democratized future of the social internet, and she emphasizes to me that developers are actively building new projects with it. In her dreams, these projects are as big, if not bigger, than Bluesky. Her ambitions might not be kingly, in other words, but they are lofty.”

Tech

Zero-Click Searches: Google’s AI Tools Are the Culmination of Its Hubris
Ryan Whitwam | Ars Technica

“The shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.”


Teaching AI Like a Kindergartner Could Make It Smarter

2025-05-23 22:00:00

Kids are expert learners. AI should take notes.

Despite the impressive performance of modern AI models, they still struggle to match the learning abilities of young children. Now, researchers have shown that teaching models like kindergartners can boost their skills.

Neural networks are typically trained by feeding them vast amounts of data in one go, then using the statistical patterns extracted from that data to guide the model’s behavior. But that’s very different from the way humans and animals learn, which typically involves gradually picking up new skills over the course of a lifetime and combining that knowledge to solve new problems.

Researchers from New York University have now tried to instill this kind of learning process in AI through a process they dub “kindergarten curriculum learning.” In a paper in Nature Machine Intelligence, they showed the approach helped models learn considerably faster than existing approaches.

“AI agents first need to go through kindergarten to later be able to better learn complex tasks,” Cristina Savin, an associate professor at NYU who led the research, said in a press release. “These results point to ways to improve learning in AI systems and call for developing a more holistic understanding of how past experiences influence learning of new skills.”

The team’s inspiration came from efforts to reproduce cognitive behavior in AI. Researchers frequently use models called recurrent neural networks to mimic the patterns of brain activity in animals and test hypotheses about how those patterns are connected to behavior.

But for more complex tasks these approaches can quickly fail, so the team decided to mirror the way animals learn. Their new approach breaks problems down into smaller tasks that need to be combined to reach the desired goal.

They trained the model on these simpler tasks, one after the other, gradually increasing the complexity and allowing the model to build on the skills it had previously acquired. Once the model had been pretrained on these simpler tasks, the researchers then trained it on the full task.

In the paper, the team tested the approach on a simplified digital version of a wagering task that mimics a real-world test given to thirsty rats. The animals are given audio cues denoting the size of a water reward. They must then decide whether to wait for an unpredictable amount of time or give up on the reward and try again.

To solve the challenge, the model has to judge the size of the reward, keep track of time, and figure out the average reward gained by waiting. The team first trained the model on each of these skills individually and then trained it to predict the optimal behavior on the full task.
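As a rough sketch of what such a curriculum might look like in code, here is a minimal example assuming a PyTorch-style recurrent network. The network size, dataset helpers, and hyperparameters below are illustrative stand-ins, not the team’s published code.

```python
# Minimal sketch of "kindergarten curriculum learning" on the wagering task.
# The network, datasets, and hyperparameters are illustrative assumptions,
# not the NYU team's actual code.
import torch
import torch.nn as nn

class AgentRNN(nn.Module):
    """A small recurrent network of the kind used to model animal behavior."""
    def __init__(self, n_inputs=4, n_hidden=64, n_outputs=2):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_outputs)

    def forward(self, x):
        h, _ = self.rnn(x)      # (batch, time, hidden)
        return self.head(h)     # per-timestep outputs (e.g., wait vs. give up)

def make_batches(task, n_batches=32, batch=8, t=20):
    """Hypothetical data loader: random stand-ins for each subtask's trials."""
    return [(torch.randn(batch, t, 4), torch.randn(batch, t, 2))
            for _ in range(n_batches)]

def train_on_task(model, batches, epochs=5, lr=1e-3):
    """One curriculum stage: fit the model to a single (sub)task."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for inputs, targets in batches:
            opt.zero_grad()
            loss_fn(model(inputs), targets).backward()
            opt.step()

model = AgentRNN()
# Pretrain on the simple skills in order of increasing complexity,
# then train on the full task, reusing everything learned so far.
for task in ["judge_reward_size", "track_time",
             "estimate_average_reward", "full_wagering_task"]:
    train_on_task(model, make_batches(task))
```

The key design choice is that the same model persists across stages, so weights shaped by the simple skills become the starting point for the full task.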

They found that models trained this way not only learned faster than conventional approaches but also mimicked the strategies animals use on the same task. Interestingly, the patterns of activity in the neural networks also mimicked the slow dynamics seen in animals, which make it possible to retain information over long periods and solve this kind of time-dependent task.

The researchers say the approach could help better model animal behavior and deepen our understanding of the processes that underpin learning. But it could also be a promising way to train machines to tackle complex tasks that require long-term planning.

While the methods have so far only been tested on relatively small models and simple tasks, the idea of teaching AI the same way we would a child has some pedigree. It may not be long before our digital assistants get sent to school just like us.
