2026-02-20 23:00:00
Tech companies have touted scientific findings from AI systems. But can they truly produce bona fide advancements?
Ahead of an artificial intelligence conference held last April, peer reviewers considered papers written by “Carl” alongside other submissions. What the reviewers did not know was that, unlike the other authors, Carl wasn’t a scientific researcher but rather an AI system built by the tech company Autoscience Institute, which says the model can accelerate artificial intelligence research. And at least according to the humans involved in the review process, the papers were good enough for the conference: In the double-blind peer review process, three of the four papers authored by Carl (with varying levels of human input) were accepted.
Carl joins a growing group of so-called “AI scientists,” which includes Robin and Kosmos, research agents developed by the San Francisco-based nonprofit research lab FutureHouse, and The AI Scientist, introduced by the Japanese company Sakana AI, among others. AI scientists are built from multiple large language models. Carl, for example, differs from chatbots in that it’s designed to generate and test ideas and produce findings, said Eliot Cowan, co-founder of Autoscience Institute. Companies say these AI-driven systems can review literature, devise hypotheses, conduct experiments, analyze data, and produce novel scientific findings with varying degrees of autonomy.
The goal, said Cowan, is to develop AI systems that can increase efficiency and scale up the production of science. Other companies, such as Sakana AI, have said they don’t expect AI scientists to replace human ones.
Still, the automation of science has stirred a mix of concern and optimism among the AI and scientific communities. “You start feeling a little bit uneasy, because, hey, this is what I do,” said Julian Togelius, a professor of computer science at New York University who works on artificial intelligence. “I generate hypotheses, read the literature.”
Critics of these systems, including scientists who themselves study artificial intelligence, worry that AI scientists could displace the next generation of researchers, flood the system with low-quality or untrustworthy data, and erode trust in scientific findings. The advancements also raise questions about where AI fits into the inherently social and human scientific enterprise, said David Leslie, director of ethics and responsible innovation research at The Alan Turing Institute in London. “There’s a difference between the full-blown shared practice of science and what’s happening with a computational system.”
In the last five years, automated systems have already led to important scientific advances. For example, AlphaFold, an AI system developed by Google DeepMind, was able to predict the three-dimensional structures of proteins with high resolution more quickly than scientists could determine them in the lab. The developers of AlphaFold, Demis Hassabis and John Jumper, won a 2024 Nobel Prize in Chemistry for their protein prediction work.
Now companies have expanded to integrate AI into other aspects of scientific discovery, creating what Leslie calls computational Frankensteins. The term, he says, refers to the convergence of various generative AI infrastructure, algorithms, and other components used “to produce applications that attempt to simulate or approximate complex and embodied social practices (like practices of scientific discovery).” In 2025 alone, at least three companies and research labs—Sakana AI, Autoscience Institute, and FutureHouse (which launched a commercial spinoff called Edison Scientific in November)—have touted their first “AI-generated” scientific results. Some US government scientists have also embraced artificial intelligence: Researchers at three federal labs, Argonne National Laboratory, Oak Ridge National Laboratory, and Lawrence Berkeley National Laboratory, have developed AI-driven, fully automated materials laboratories.
Indeed, these AI systems, like large language models, could potentially be used to synthesize literature and mine vast amounts of data to identify patterns. They may be particularly useful in materials science, where AI systems can design or discover new materials, and in understanding the physics of subatomic particles.
Systems can “basically make connections between millions, billions, trillions of variables” in ways that humans can’t, said Leslie. “We don’t function that way, and so just in virtue of that capacity, there are many, many opportunities.” For example, FutureHouse’s Robin mined literature and identified a potential therapeutic candidate for a condition that causes vision loss, proposed experiments to test the drug, and then analyzed the data.
But researchers have also raised red flags. While Nihar Shah, a computer scientist at Carnegie Mellon University, is “more on the optimistic side” about how AI systems can enable new discoveries, he also worries about AI slop, or the flooding of the scientific literature with AI-generated studies of poor quality and little innovation. Researchers have also pointed out other important caveats regarding the peer review process.
In a recent study that has yet to be peer reviewed, Shah and colleagues tested two AI models that aid in the scientific process: Sakana’s AI Scientist-v2 (an updated version of the original) and Agent Laboratory, a research-assistant system developed by the semiconductor company AMD in collaboration with Johns Hopkins University. Shah’s goal with the study was to examine where these systems might be failing.
One system, the AI Scientist-v2, reported 95 and sometimes even 100 percent accuracy on a specified task, which was impossible given that the researchers had intentionally introduced noise into the dataset. Both systems also sometimes appeared to fabricate synthetic datasets to run their analyses on while stating in the final report that the analysis was done on the original dataset. To address this, Shah and his team developed an algorithm to flag the methodological pitfalls they identified, such as cherry-picking favorable datasets and selectively reporting positive results.
Some research suggests generative AI systems have also failed to produce innovative ideas. One study concluded that the generative AI chatbot ChatGPT-4 can only produce incremental discoveries, while a study published last year in Science Immunology found that, despite synthesizing the literature accurately, AI chatbots failed to generate insightful hypotheses or experimental proposals in the field of vaccinology. (Sakana AI and FutureHouse did not respond to requests for comment.)
Even as these systems see wider use, humans will likely still have a place in the lab, Shah said. “Even if AI scientists become super-duper duper capable, still there’ll be a role for people, but that itself is not entirely clear,” said Shah, “as to how capable will AI scientists be and how much would still be there for humans?”
Historically, science has been a deeply human enterprise, which Leslie described as an ongoing process of interpretation, world-making, negotiation, and discovery. Importantly, he added, that process is dependent on the researchers themselves and the values and biases they hold.
A computational system trained to predict the best answer, in contrast, is categorically distinct, Leslie said. “The predictive model itself is just getting a small slice of a very complex and deep, ongoing practice, which has got layers of institutional complexity, layers of methodological complexity, historical complexity, layers of discrimination that have arisen from other injustices that define who gets to do science, who doesn’t get to do science, and what science has done for whom, and what science has not done because people aren’t sending to have their questions answered.”
Rather than a substitute for scientists, some experts see AI scientists as an augmentative tool that helps researchers draw out insights, much like a microscope or a telescope. Companies also say they do not intend to replace scientists. “We do not believe that the role of a human scientist will be diminished. If anything, the role of a scientist will change and adapt to new technology, and move up the food chain,” Sakana AI wrote when the company announced its AI Scientist.
Now researchers are beginning to ponder what the future of science might look like alongside AI systems, including how to vet and validate their output. “We need to be very reflective about how we classify what’s actually happening in these tools, and if they’re harming the rigor of science as opposed to enriching our interpretive capacity by functioning as a tool for us to use in rigorous scientific practice,” said Leslie.
Going forward, Shah proposed, journals and conferences should vet AI research output by auditing log traces of the research process and generated code to both validate the findings and identify any methodological flaws. And companies, such as Autoscience Institute, say they are building systems to make sure that experiments hold to the same ethical standards as “an experiment run by a human at an academic institution would have to meet,” said Cowan. Some of the standards baked into Carl, Cowan noted, include preventing false attribution and plagiarism, facilitating reproducibility, and not using human subjects or sensitive data, among others.
While some researchers and companies are focused on improving the AI models, others are stepping back to ask how the automation of science will affect the people currently doing the research. Now is a good time to begin grappling with such questions, said Togelius. “We got the message that AI tools that make us better at doing science, that’s great. Automating ourselves out of the process is terrible,” he added. “How do we do one and not the other?”
This article was originally published on Undark. Read the original article.

The post What the Rise of AI Scientists May Mean for Human Research appeared first on SingularityHub.
2026-02-19 23:00:00
Running on a brain-like chip, the ‘eye’ could help robots and self-driving cars make split-second decisions.
You’re driving in a winter storm at midnight. Icy rain smashes your windshield, immediately turning it into a sheet of frost. Your eyes dart across the highway, seeking any movement that could be wildlife, struggling vehicles, or highway responders trying to pass. Whether you find safe passage or meet catastrophe hinges on how fast you see and react.
Even experienced drivers struggle with bad weather. For self-driving cars, drones, and other robots, a snowstorm could cause mayhem. The best computer-vision algorithms can handle some scenarios, but even running on advanced computer chips, their reaction times are roughly four times greater than a human’s.
“Such delays are unacceptable for time-sensitive applications…where a one-second delay at highway speeds can reduce the safety margin by up to 27m [88.6 feet], significantly increasing safety risks,” Shuo Gao at Beihang University and colleagues wrote in a recent paper describing a new superfast computer vision system.
Instead of working on the software, the team turned to hardware. Inspired by the way human eyes process movement, they developed an electronic replica that rapidly detects and isolates motion.
The machine eye’s artificial synapses connect transistors into networks that detect changes in the brightness of an image. Like biological neural circuits, these connections store a brief memory of the past before processing new inputs. Comparing the two allows them to track motion.
Combined with a popular vision algorithm, the system quickly separates moving objects, like walking pedestrians, from static objects, like buildings. By limiting its attention to motion, the machine eye needs far less time and energy to assess and respond to complex environments.
When tested on autonomous vehicles, drones, and robotic arms, the system sped up processing times by roughly 400 percent and, in most cases, surpassed the speed of human perception without sacrificing accuracy.
“These advancements empower robots with ultrafast and accurate perceptual capabilities, enabling them to handle complex and dynamic tasks more efficiently than ever before,” wrote the team.
A mere flicker in the corner of an eye captures our attention. We’ve evolved to be especially sensitive to movement. This perceptual superpower begins in the retina. The thin layer of light-sensitive tissue at the back of the eye is packed with cells fine-tuned to detect motion.
Retinal cells are a curious bunch. They store memories of previous scenes and spark with activity when something in our visual field shifts. The process is a bit like an old-school film reel: Rapid transitions between still frames lead to the perception of movement.
Every cell is tuned to detect visual changes in a particular direction—for example, left to right or top to bottom—but is otherwise dormant. These activity patterns form a two-dimensional neural map that the brain interprets as speed and direction within a fraction of a second.
“Biological vision excels at processing large volumes of visual information” by focusing only on motion, wrote the team. When driving across an intersection, our eyes intuitively zero in on pedestrians, cyclists, and other moving objects.
Computer vision takes a more mathematical approach.
A popular type called optical flow analyzes differences between pixels across visual frames. The algorithm segments pixels into objects and infers movement based on changes in brightness. The approach assumes that objects maintain their brightness as they move: A white dot, for example, remains a white dot as it drifts to the right, at least in simulations. Nearby pixels are also assumed to move in tandem, another marker of motion.
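To make the idea concrete, here is a minimal sketch of dense optical flow using OpenCV’s Farneback method, which estimates a motion vector for every pixel under the brightness-constancy assumption described above. The file names and the motion threshold are placeholders, and this illustrates the general technique rather than the system described in the paper.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames; the file names are placeholders.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Farneback dense optical flow: for each pixel, estimate a (dx, dy) motion
# vector by assuming brightness is conserved between the two frames.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Pixels with large motion vectors are candidates for moving objects.
magnitude = np.linalg.norm(flow, axis=2)
moving = magnitude > 1.0  # threshold in pixels per frame, tuned per scene
```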
Although inspired by biological vision, optical flow struggles in real-world scenarios. It’s an energy hog and can be laggy. Add in unexpected noise—like a snowstorm—and robots running optical flow algorithms will have trouble adapting to our messy world.
To get around these problems, Gao and colleagues built a neuron-inspired chip that dynamically detects regions of motion and then focuses an optical flow algorithm on only those areas.
Their initial design immediately hit a roadblock. Traditional computer chips can’t adjust their wiring. So the team fabricated a neuromorphic chip that, true to its name, computes and stores information at the same spot, much like a neuron processes data and retains memory.
Because neuromorphic chips don’t shuttle data from memory to processors, they’re far faster and more energy-efficient than classical chips. They outshine standard chips in a variety of tasks, such as sensing touch, detecting auditory patterns, and processing vision.
“The on-device adaptation capability of synaptic devices makes human-like ultrafast visual processing possible,” wrote the team.
The new chip is built from materials and designs commonly used in other neuromorphic chips. Similar to the retina, the array’s artificial synapses encode differences in brightness and remember these changes by adjusting their responses to subsequent electrical signals.
When processing an image, the chip converts the data into voltage changes, which activate only a handful of synaptic transistors; the others stay quiet. This means the chip can filter out irrelevant visual data and focus optical flow algorithms only on regions with motion.
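In software terms, the chip’s two-step scheme resembles the sketch below: a short, decaying memory of brightness changes flags regions with motion, and the heavier optical flow computation runs only inside those regions. This is an illustrative analogue, not the authors’ implementation; the decay rate, threshold, and function name are invented for the example.

```python
import cv2
import numpy as np

def motion_gated_flow(prev, curr, memory, decay=0.8, thresh=15):
    """Run optical flow only where a leaky memory of brightness changes
    flags motion. `memory` starts as np.zeros(prev.shape, np.float32)
    and is passed back in on every call."""
    # Step 1: accumulate brightness changes with a short decaying memory,
    # loosely mimicking how the artificial synapses retain recent activity.
    diff = cv2.absdiff(curr, prev).astype(np.float32)
    memory = decay * memory + (1 - decay) * diff
    mask = (memory > thresh).astype(np.uint8)

    # Step 2: compute dense optical flow only inside the moving region.
    x, y, w, h = cv2.boundingRect(mask)
    if w == 0 or h == 0:
        return None, memory  # nothing is moving; skip the expensive step
    flow = cv2.calcOpticalFlowFarneback(
        prev[y:y + h, x:x + w], curr[y:y + h, x:x + w], None,
        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow, memory
```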
In tests, the two-step setup boosted processing speed. When analyzing a movie of a pedestrian about to dash across a road, the chip detected subtle shifts in their body position and predicted which direction they’d run within roughly 100 microseconds—faster than a human. Compared to conventional computer vision, the machine eye roughly doubled the ability of self-driving cars to detect hazards in a simulation. It also improved the accuracy of robotic arms by over 740 percent thanks to better and faster tracking.
The system is compatible with computer vision algorithms beyond optical flow, such as the YOLO neural network that detects objects in a scene, making it adjustable for different uses.
“We do not completely overthrow the existing camera system; instead, by using hardware plug-ins, we enable existing computer vision algorithms to run four times faster than before, which holds greater practical value for engineering applications,” Gao told the South China Morning Post.
The post This ‘Machine Eye’ Could Give Robots Superhuman Reflexes appeared first on SingularityHub.
2026-02-18 06:56:48
The self-spreading CRISPR tool increased editing efficiency roughly three-fold compared to older versions.
Gene editing is a numbers game. For any genetic tweaks to have notable impact, a sufficient number of targeted cells need to have the disease-causing gene deleted or replaced.
Despite a growing gene-editing arsenal, the tools share a common shortcoming: They only work once in whatever cells they reach. Viruses, in contrast, readily self-replicate by hijacking their host’s cellular machinery and then, their numbers swelling, drift to infect more cells.
This strategy inspired a team at the University of California, Berkeley, and collaborators to modify the gene editor CRISPR-Cas9 so it, too, can replicate and spread to surrounding cells.
Led by gene-editing pioneer and Nobel Prize winner Jennifer Doudna, the scientists added genetic instructions for cells to make a virus-like transporter that can encapsulate the CRISPR machinery. Once manufactured in treated cells, the CRISPR cargo ships to neighboring cells.
The upgraded editor was roughly three times more effective at editing genes in lab-grown cells than standard CRISPR. It also lowered the amount of a harmful protein in mice with a genetic metabolic disorder, while the original version had little effect at the same dose.
The technology is “a conceptual shift in the delivery of therapeutic cargo,” wrote the team in a bioRxiv preprint.
CRISPR has completely transformed gene therapy. In just a few years, the technology exploded from a research curiosity into a biotechnology toolbox that can tackle previously untreatable inherited diseases. Some CRISPR versions delete or inactivate pathogenic genes. Others swap out single mutated DNA letters to restore health.
The first CRISPR therapies focus on blood disorders and require doctors to remove cells from the body for treatment. The therapies are tailored to each patient but are slow and costly. To bring gene therapy to the masses, scientists are developing gene editors that edit DNA directly inside the body with a single injection.
From reprogramming faulty blood cells and treating multiple blood disorders to lowering dangerous levels of cholesterol and tackling mitochondrial diseases, CRISPR has already proven it has the potential to unleash a new universe of gene therapies at breakneck speed.
Gene editors “promise to revolutionize medicine by overriding or correcting the underlying genetic basis of disease,” wrote the team. But all these tools are throttled by one basic requirement: Enough cells have to be edited that they override their diseased counterparts.
How many depends on the genetic disorder. Treatments need to correct around 20 percent of blood stem cells to keep sickle cell disease at bay. For Duchenne muscular dystrophy, an inherited disease that weakens muscles, over 15 percent of targeted cells need to be edited.
These numbers may seem low, but they’re still challenging for current CRISPR technologies.
“Once delivered to cells, editing machinery is confined to the cells it initially enters,” wrote the team. To compensate, scientists often increase the dosage, but this risks triggering immune attacks and off-target genetic edits.
Although membrane-bound and seemingly isolated, cells are actually quite chatty.
Some cells package mRNA molecules into bubbles and eject them towards their neighbors, essentially sharing instructions for how to make proteins. Other cells, including neurons, form extensive nanotube networks that shuttle components between cells, such as energy-producing mitochondria.
Inspired by these mechanisms, scientists have transferred small proteins and RNA across cells. So, the team thought, why couldn’t a similar mechanism spread CRISPR too?
The team adapted a carrier, developed a few years back, that is built from virus proteins. The proteins automatically form a hollow shell that buds off from cells, drifts across to neighboring cells, and fuses with them to release the encapsulated cargo.
The system, called NANoparticle-Induced Transfer of Enzyme, or NANITE, combines genetic instructions for the carrier molecules and CRISPR machinery into a single circular piece of DNA. This ensures the Cas9 enzyme is physically linked to the delivery proteins as both are being made inside a cell. It also means the final delivery vehicle encapsulates guide RNA as well, the “bloodhound” that tethers Cas9 to its DNA target.
Like a benevolent virus, NANITE initially “infects” a small number of cells. Once inside, it instructs each cell to make the full CRISPR tool, package it up, and send it along to other cells. Uninfected cells absorb the cargo and are dosed with the gene editor, allowing it to spread beyond treated cells.
Compared to classic CRISPR-Cas9, NANITE was roughly three times more efficient at editing multiple types of cells grown in culture. Adding protein “hooks” helped NANITE locate and latch on to specific populations of cells with a matching “eye” protein, increasing editing specificity. NANITE punched far above its weight: Edited cells averaged nearly 300 percent of the initially treated number, suggesting the therapy had spread to untreated neighbors.
In another test, the team tailored NANITE to slash a disease-causing protein called transthyretin in the livers of mice. Mutations to the protein eventually lead to heart and nerve failure and can be deadly. The researchers injected NANITE directly into the rodents’ veins using a high-pressure system. This technique reliably sends circular DNA to the liver, the target organ for the disease, and shows promise in people.
Within a week, NANITE had reduced transthyretin by nearly 50 percent while editing only around 11 percent of liver cells. According to previous clinical trials, such results would likely improve and stabilize the disease, although the team did not report on symptoms. In contrast, classic CRISPR-Cas9 only edited four percent of cells and had minimal effect on transthyretin production.
The failure could be because the gene editor was confined to a small group of cells, whereas NANITE spread to others, “enabling more efficient tissue-level editing,” wrote the team. Extensive liver and blood tests in mice treated with NANITE detected no toxic side effects.
A three-fold boost in editing is just the beginning. The team is working to increase NANITE efficacy and to potentially convert the system into mRNA, similar to the technology underlying Covid-19 vaccines. Compared to shuttling circular DNA into the body—a long-standing headache—there is a far wider range of established delivery systems for mRNA.
Still, these early results suggest it’s possible to “amplify therapeutic effects by spreading cargo” beyond the initially edited cells. Avoiding the need for relatively large doses, NANITE could increase the safety profile of gene-editing treatments and potentially expand the technology to tissues and organs that are more challenging to genetically alter than the liver.
The technology changes the numbers game. Even if only a fraction of the NANITE therapy reaches its target tissue, its ability to spread could still deliver enough impact to cure currently untouchable genetic diseases. “By lowering effective dose requirements, NANITE could make genome editing more practical and accessible for treating human disease,” wrote the team.
The post Souped-Up CRISPR Gene Editor Replicates and Spreads Like a Virus appeared first on SingularityHub.
2026-02-17 05:56:12
Scientists find coordination between key brain waves breaks down in people under anesthesia.
You’re lying on an operating table. A doctor injects a milky white liquid into your veins. Within a minute, your breathing slows, your face relaxes, and you remain limp when asked to squeeze a hand. You’ve been temporarily put to sleep.
We lose consciousness every night with the conviction that a blaring alarm or the whiff of freshly brewed coffee will drag us out of our slumber. Giving up awareness is ingrained in the way our brain works. With anesthesia, doctors can artificially induce the process to spare patients from the experience of surgery.
Despite decades of research, however, we’re still in the dark about how the brain lets go of consciousness, either during sleep or after a dose of chemicals that knock you out. Finding the neural correlates of awareness—that is, what changes in the brain—would solve one of the most enigmatic mysteries of our minds. It could also lead to the objective measurement of anesthesia, giving doctors valuable real-time information about whether a patient is completely under—or if they’re beginning to float back into consciousness on the operating table.
This month, Tao Xu at Shanghai Jiao Tong University and colleagues mapped the brain’s inner workings as it descends into the void. By comparing the brain activity of 31 patients before and after anesthesia, they found a unique neural pattern marking when patients slid into unconsciousness. Connections between nine brain regions—some previously implicated in consciousness—rapidly broke down.
The results echo previous findings. But the study stands out for its practicality. Rather than using implants inserted into the brain, the team captured signals with electrodes placed on the volunteers’ scalps. With further validation, this shift in brain activity could be used as a signal for loss of awareness, helping anesthesiologists reliably keep their patients in a dream state—and bring them back.
Scientists generally agree that consciousness emerges from multiple brain regions working in tandem, but they heatedly debate which ones are involved.
Some researchers believe the seat of consciousness is rooted at the back of the brain. These regions receive and integrate information, giving the brain an overall picture of both inner thoughts and the outer world. Another camp fixates on the front and side areas of the brain. These circuits broadcast signals to the rest of the brain and break down as awareness slips away.
Still more scientists point to connections between the cortex, the outermost part of the brain, and a deeper egg-shaped brain structure called the thalamus, which gives rise to our sense of perception and self.
These latter conclusions come from studies of healthy volunteers looking at flashing images while researchers record their brain signals. Some stimuli are deliberately designed to not reach awareness. Conscious perception seems to rely on wave-like neural activity between multiple areas in the cortex and the thalamus. Without it, participants are oblivious to the images.
These studies tested perception and awareness in people while they were awake. Another team has compared neural activity in completely or partially comatose patients to that of alert participants. They found two circuits catastrophically fail in a coma: one at the front of the brain, the other at the back. As results from studies converge on similar patterns, researchers are hopeful we’ll eventually reach a unified theory of consciousness.
But consciousness isn’t all or none. Previous studies capture only a single snapshot in time. To Xu and colleagues, truly understanding awareness means turning that snapshot into a movie.
The authors of the new study recruited 31 people who were about to undergo surgery with the use of propofol, a popular general anesthetic. Once an anesthesiologist injects the milky liquid into a vein, it rapidly shuts down consciousness. Throughout surgery, the anesthesiologist carefully monitors a patient’s behavior (or lack thereof), heart rate, and other vital signs to adjust dosage in real-time. The goal is to keep the patient fully under without overdosing.
The team gave each person in the study a cap studded with 128 electrodes to capture the brain’s electrical chatter. This brain-recording method is called an electroencephalogram or EEG. It’s popular because the device sits on the scalp and is safe and non-invasive. But because it measures activity through the skull rather than directly from brain tissue, signals can be muffled or noisy.
To increase precision, the team developed a mathematical model to filter signals into five established brain wave types. Like radio waves, electrical activity oscillates across the brain at different frequencies, each of which correlates with a unique brain state. Alpha waves, for example, dominate when you’re relaxed but alert. Delta waves take over in deep sleep.
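The paper’s exact model isn’t detailed here, but splitting an EEG signal into these wave types is commonly done with band-pass filters. The sketch below shows the general idea; the band limits, sampling rate, and filter order are assumptions, not the team’s settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Commonly used EEG frequency bands in Hz (assumed; studies vary slightly).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def split_into_bands(eeg, fs=256):
    """Band-pass filter one EEG channel (a 1D array sampled at fs Hz)
    into the five canonical wave types."""
    bands = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        bands[name] = filtfilt(b, a, eeg)  # zero-phase filtering
    return bands
```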
The team isolated signals from nine areas of the brain previously implicated in consciousness. These included most of the usual suspects: a cortical region in the middle of the brain called the parietal cortex, another cortical region at the back of the skull, the thalamus, and a handful of other deeper structures.
While the patients were alert, their brains hummed with alpha-wave activity between the parietal cortex and thalamus, suggesting the regions were synchronized. Other areas across the cortex were also highly connected, like parts of a well-oiled machine.
But a dose of propofol broke down most of these communications.
Within 20 seconds after patients received the drug, alpha waves disintegrated, and electrical signals between the parietal cortex and thalamus fragmented. Different parts of the cortex also lost connectivity. Although the patients seemed to lose consciousness suddenly, like flipping a light switch, their brain signals showed a steadier decline in synchrony—more like a dimmer that gradually shifted activity from a state of coordination to one of disarray.
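How might that loss of synchrony be quantified? One common connectivity measure for band-limited signals, which may or may not be the one used in this study, is the phase-locking value: 1 means two regions’ oscillations stay perfectly in step, 0 means no consistent relationship.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two band-limited signals, e.g. the
    alpha-band activity of two brain regions (illustrative measure)."""
    phase_x = np.angle(hilbert(x))  # instantaneous phase of each signal
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```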
The results “emphasize the critical role” alpha waves play in “reflecting the dynamic shifts associated with loss of consciousness,” wrote the team.
Further tests in 46 people undergoing mild sedation showed similar desynchronization in alpha waves. But the breakdown between the parietal cortex and thalamus was smaller. That specific connection seems especially relevant in the transition to unconsciousness, wrote the team.
The results back up other studies suggesting the thalamus is a critical node in consciousness. But they could also fuel further debate about the importance of different cortex regions and their connections. Instead of the front or back of the brain as the root of consciousness, the team thinks the middle parietal cortex is key, at least for patients taking propofol. They’re now exploring whether other anesthetics change brain wave dynamics in different and unique ways.
As the debate over consciousness rages on, the team is focused on practical gains in the clinic. They’re aiming to simplify the brain recording setup so anesthesiologists could routinely use it to measure consciousness in their patients before, during, and after anesthesia.
The post This Brain Pattern Could Signal the Moment Consciousness Slips Away appeared first on SingularityHub.
2026-02-14 23:00:00
Aurora’s Driverless Trucks Can Now Travel Farther Distances Faster Than Human Drivers | Kirsten Korosec | TechCrunch
“Aurora’s self-driving trucks can now travel nonstop on a 1,000-mile route between Fort Worth and Phoenix—exceeding what a human driver can legally accomplish. The distance, and the time it takes to travel it, offers up positive financial implications for Aurora—and any other company hoping to commercialize self-driving semitrucks.”
OpenAI Sidesteps Nvidia With Unusually Fast Coding Model on Plate-Sized Chips | Benj Edwards | Ars Technica
“The model delivers code at more than 1,000 tokens (chunks of data) per second, which is reported to be roughly 15 times faster than its predecessor. To compare, Anthropic’s Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, although it is a larger and more capable model than Spark.”
This State’s Power Prices Are Plummeting as It Nears 100% Renewables | Alice Klein | New Scientist ($)
“The independent Australian Energy Market Operator’s (AEMO) latest report shows that the average wholesale electricity price in South Australia fell by 30 per cent in the final quarter of 2025, compared with a year earlier. As a result, the state had the lowest price in Australia, along with Victoria, which has the second highest share of wind and solar energy in the nation.”
Gene Editing That Spreads Within the Body Could Cure More Diseases | Michael Le Page | New Scientist ($)
“The idea is that each cell in the body that receives the initial delivery will make lots of copies of the gene-editing machinery and pass most of them on to its neighbors, amplifying the effect. This means that disease-correcting changes could be made to the DNA of more cells.”
The First Signs of Burnout Are Coming From the People Who Embrace AI the Most | Connie Loizos | TechCrunch
“The tools work for you, you work less hard, everybody wins. But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn’t a productivity revolution. It finds companies are at risk of becoming burnout machines.”
ALS Stole This Musician’s Voice. AI Let Him Sing Again. | Jessica Hamzelou | MIT Technology Review ($)
“[ALS patient Patrick Darling] was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this ‘voice clone’ to compose new songs. Darling is able to make music again.”
Chatbots Make Terrible Doctors, New Study Finds | Samantha Cole | 404 Media
“When the researchers tested the LLMs without involving users by providing the models with the full text of each clinical scenario, the models correctly identified conditions in 94.9 percent of cases. But when talking to the participants about those same conditions, the LLMs identified relevant conditions in fewer than 34.5 percent of cases.”
LEDs Enter the Nanoscale | Rahul Rao | IEEE Spectrum
“MicroLEDs, with pixels just micrometers across, have long been a byword in the display world. Now, microLED-makers have begun shrinking their creations into the uncharted nano realm. …They leave much to be desired in their efficiency—but one day, nanoLEDs could power ultra-high-resolution virtual reality displays and high-bandwidth on-chip photonics.”
Leading AI Expert Delays Timeline for Its Possible Destruction of Humanity | Aisha Down | The Guardian
“A leading artificial intelligence expert has rolled back his timeline for AI doom, saying it will take longer than he initially predicted for AI systems to be able to code autonomously and thus speed their own development toward superintelligence [and doom for humanity].”
CAR T-Cell Therapy May Slow Neurodegenerative Conditions Like ALS | Michael Le Page | New Scientist ($)
“Genetically engineered immune cells known as CAR-T cells might be able to slow the progress of the neurodegenerative condition amyotrophic lateral sclerosis (ALS) by killing off rogue immune cells in the brain. ‘It’s not a way to cure the disease,’ says Davide Trotti at the Jefferson Weinberg ALS Center in Pennsylvania. ‘The goal is slowing down the disease.'”
Meta Plans to Add Facial Recognition Technology to Its Smart Glasses | Kashmir Hill, Kalley Huang, and Mike Isaac | The New York Times ($)
“Five years ago, Facebook shut down the facial recognition system for tagging people in photos on its social network, saying it wanted to find ‘the right balance’ for a technology that raises privacy and legal concerns. Now it wants to bring facial recognition back. …The feature, internally called ‘Name Tag,’ would let wearers of smart glasses identify people and get information about them via Meta’s artificial intelligence assistant.”
I Tried RentAHuman, Where AI Agents Hired Me to Hype Their AI Startups | Reece Rogers | Wired ($)
“At its core, RentAHuman is an extension of the circular AI hype machine, an ouroboros of eternal self-promotion and sketchy motivations. For now, the bots don’t seem to have what it takes to be my boss, even when it comes to gig work, and I’m absolutely OK with that.”
AI Is Getting Scary Good at Making Predictions | Ross Andersen | The Atlantic ($)
“At first, the bots didn’t fare too well: At the end of 2024, no AI had even managed to place 100th in one of the major [forecasting] competitions. But they have since vaulted up the leaderboards. AIs have already proved that they can make superhuman predictions within the bounded context of a board game, but they may soon be better than us at divining the future of our entire messy, contingent world.”
Meet the One Woman Anthropic Trusts to Teach AI Morals | Berber Jin and Ellen Gamerman | The Wall Street Journal ($)
“As the resident philosopher of the tech company Anthropic, [Amanda] Askell spends her days learning Claude’s reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages. The aim is to endow Claude with a sense of morality—a digital soul that guides the millions of conversations it has with people every week.”
This Startup Thinks It Can Make Rocket Fuel From Water. Stop Laughing | Noah Shachtman | Wired ($)
“It’s an idea that’s been around since the Apollo era and has been touted in recent years by the likes of former NASA administrator Bill Nelson and SpaceX’s Elon Musk. But here’s the thing: No one has ever successfully turned water into rocket fuel, not for a spaceship of any significant size. A startup called General Galactic, led by a pair of twentysomething engineers, is aiming to be the first.”
The post This Week’s Awesome Tech Stories From Around the Web (Through February 14) appeared first on SingularityHub.
2026-02-14 07:41:11
A steady magnetic field protects the planet’s surface, and all those living on it, from harmful radiation.
While we have sent probes billions of kilometers into interstellar space, humans have barely scratched the surface of our own planet, not even making it through the thin crust.
Information about Earth’s deep interior comes mainly from geophysics and is at a premium. We know it consists of a solid crust, a rocky mantle, a liquid outer core and solid inner core. But what precisely goes on in each layer—and between them—is a mystery. Now our research uses our planet’s magnetism to cast light on the most significant interface in the Earth’s interior: its core-mantle boundary.
Roughly 3,000 kilometers beneath our feet, Earth’s outer core, an unfathomably deep ocean of molten iron alloy, endlessly churns to produce a global magnetic field stretching far out into space. Sustaining this “geodynamo,” and the planetary force field it has produced for the past several billion years (protecting Earth from harmful radiation), takes a lot of energy.
This was delivered to the core as heat during the Earth’s formation. But it is only released to drive the geodynamo as it conducts outwards to cooler, solid rock floating above in the mantle. Without this massive internal heat transfer from core to mantle and ultimately through the crust to the surface, Earth would be like our nearest neighbors Mars and Venus: magnetically dead.
Maps of how fast seismic waves (vibrations of acoustic energy) travel through the lowermost mantle, just above the core, reveal two vast regions close to the equator, beneath Africa and the Pacific Ocean, where the waves move more slowly than elsewhere.
What makes these “big lower-mantle basal structures,” or “Blobs” for short, special is not clear. They are made of solid rock similar to the surrounding mantle but may be higher in temperature, different in composition, or both.
Strong variations in temperature at the base of the mantle would be expected to affect the underlying liquid core and the magnetic field that is generated there. The solid mantle changes temperature and flows at an exceptionally slow rate (millimeters per year), so any magnetic signature from strong temperature contrasts should persist for millions of years.
Our study reports new evidence that these Blobs are hotter than the surrounding lower mantle. And this has had a noticeable effect on Earth’s magnetic field over at least the last few hundred million years.
As igneous rocks, recently solidified from molten magma, cool down at Earth’s surface in the presence of its magnetic field, they acquire a permanent magnetism that is aligned with the direction of this field at that time and place.
It is already well known that this direction changes with latitude. We observed, however, that the magnetic directions recorded by rocks up to 250 million years old also seemed to depend on where the rocks had formed in longitude. The effect was particularly noticeable at low latitudes. We therefore wondered whether the Blobs might be responsible.

The clincher came from comparing these magnetic observations to simulations of the geodynamo run on a supercomputer. One set was run assuming that the rate of heat flowing from core to mantle was the same everywhere. These either showed very little tendency for the magnetic field to vary in longitude or else the field they produced collapsed into a persistently chaotic state, which is also inconsistent with observations.
By contrast, when we placed a pattern on the core’s surface that included strong variations in the amount of heat being sucked into the mantle, the magnetic fields behaved differently. Most tellingly, assuming that the rate of heat flowing into the Blobs was about half as high as into other, cooler, parts of the mantle meant that the magnetic fields produced by the simulations contained longitudinal structures reminiscent of the records from ancient rocks.
A further finding was that these fields were less prone to collapsing. Adding the Blobs therefore enabled us to reproduce the observed stable behavior of Earth’s magnetic field over a wider range.
What seems to be happening is that the two hot Blobs are insulating the liquid metal beneath them, preventing heat loss that would otherwise cause the fluid to thermally contract and sink down into the core. Since it is the flow of core fluid that generates more magnetic field, these stagnant ponds of metal do not participate in the geodynamo process.
Furthermore, in the same way that a mobile phone can lose its signal by being placed within a metal box, these stationary areas of conductive liquid act to “screen” the magnetic field generated by the circulating liquid below. The huge Blobs therefore gave rise to characteristic longitudinally varying patterns in the shape and variability of Earth’s magnetic field. And this mapped on to what was recorded by rocks formed at low latitudes.
Most of the time, the shape of Earth’s magnetic field is quite similar to that which would be produced by a bar magnet aligned with the planet’s rotation axis. This is what makes a magnetic compass point nearly north at most places on Earth’s surface, most of the time.
Collapses into weak, multipolar states have occurred many times over geological history, but they are quite rare, and the field seems to have recovered fairly quickly afterwards. In the simulations at least, the Blobs seem to help make this the case.
So, while we still have a lot to learn about what the Blobs are and how they originated, it may be that in helping to keep the magnetic field stable and useful for humanity, we have much to thank them for.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Vast ‘Blobs’ of Rock Have Stabilized Earth’s Magnetic Field for Hundreds of Millions of Years appeared first on SingularityHub.