2026-03-19 22:00:00
The prevailing narrative suggests AI is ready to replace humans, but the evidence is more nuanced.
In the past few months, a wave of tech corporations have announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence.
Companies such as Atlassian, Block, and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.
The narrative these companies offer is consistent: AI is making human labor replaceable, and responsible management demands adjustment.
The evidence, however, tells a more nuanced story.
Genuine disruption is visible in specific corners of the labor market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.
Moreover, some occupations are more exposed to displacement than others: Computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.
The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5 percent of US employment would be at risk of job loss.
That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours, or earn lower wages than anyone else.
The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where employment growth has slowed that align with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration, and call centers.
In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3 percent in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22–25 entering AI-exposed occupations have fallen by around 14 percent since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.
These are meaningful signals, but they are sector-specific and concentrated—not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: What else might be driving these decisions?
The timing and framing of the layoffs attributed to AI warrant closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and pressure from investors to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.
While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.
There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75 percent of S&P 500 returns.
A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.
It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.
Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20 percent of its workforce, while simultaneously committing $600 billion to build data centers and recruit top AI researchers.
In this case, the workers being let go are not being replaced by AI today; they are subsidizing the AI bet their employer is making on the future.
The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.
At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56 percent across the industries analyzed.
Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.
AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.
Making this distinction is not merely an academic exercise. It shapes how policymakers, educators, and workers themselves understand the nature of the disruption they are navigating.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Tech Companies Are Blaming Massive Layoffs on AI. What’s Really Going On? appeared first on SingularityHub.
2026-03-18 05:54:35
As they imagine typing, implants translate brain signals into keystrokes on a standard digital keyboard.
It’s hard to picture a keyboard layout other than the one we know best. From laptops to smartphones, it’s an integral part of our digital lives.
Scientists at Massachusetts General Hospital have now restored the ability to communicate by keyboard to two people with paralysis—using their thoughts alone.
Both people already had brain implants that could record their minds’ electrical chatter. The new system translated brain signals in real time as each person imagined finger movements. The system then accurately predicted the character they were trying to type.
The system learned to translate brain activity to physical intent after just 30 sentences. Typing speeds reached 22 words per minute with few errors, nearly matching speeds of able-bodied smartphone users.
“To our knowledge, this system provides the fastest… [brain implant] communication method reported to date based on decoding from hand motor cortex,” wrote the team.
The participants are part of the BrainGate2 clinical trial, a pioneering effort to restore communication and movement by decoding neural signals in people who have lost the use of all four limbs and the torso. One of the participants previously used the implants to translate his inner thoughts into text, but with mixed success.
Controlling a digital keyboard is far more intuitive and familiar, which makes it easier to grasp. Once a person learns to use the system, they don’t have to look at the keyboard, giving their eyes a break as they type with their minds. It also gives users full control over when, or whether, to share their thoughts, preventing private musings from accidentally leaking onto a screen or being broadcast as AI-generated speech.
Parts of the brain hum with electrical activity before we speak. Over the past decade, brain implants—microelectrodes that listen in and decode signals—have translated these seemingly chaotic buzzes into text or speech, allowing paralyzed people to regain the ability to communicate.
Methods vary. Some hardware takes the form of wafer-thin disks sitting on top of the brain and gathering signals from vast regions; other devices are inserted into the brain for more targeted recordings.
These systems are life changing. In a recent example, an implant translated the neural activity controlling the vocal muscles of a man with ALS. With just a second’s delay, the system generated coherent sentences with intonation, allowing him to sing with an artificial voice. Another device turned a paralyzed woman’s thoughts into speech with nearly no delay, so she could hold a conversation without frustrating halts. People have also benefited from a method that uses the neural signals behind handwriting for brain-to-text communication.
Brain implants aren’t purely experimental anymore: China recently approved a setup allowing people with paralysis to control a robotic hand. It’s the first such device available outside of clinical trials.
Perhaps the most widely used clinical solution is eye-tracking. Here, patients move their eyes to focus on individual letters, one at a time, on a custom digital keyboard. But the pace is agonizingly slow and prone to error. And prolonged screen time strains the eyes, making extended conversations difficult.
“Those systems take far too long for many users,” said study author Daniel Rubin in a press release, causing them to abandon the technology.
For people who already know how to type, the standard keyboard layout—known as QWERTY—feels familiar and comfortable. Fingers stretch to hit letters in the upper row, tap directly down for ones in the middle, and curl into a loose claw to hit bottom letters and punctuation.
As fingers dance across the keyboard, parts of the motor cortex that control their motion spark with activity, precisely directing each placement. Mind-typing using a familiar keyboard, compared to a custom one, could feel more intuitive and relaxing.
Two people with tetraplegia gave the idea a shot. Participant T17 was diagnosed with ALS at 30, a disease that slowly destroys motor neurons, weakening muscles and eventually impairing breathing. Three years later, when he enrolled in the study, he’d lost control of his vocal muscles and relied on a ventilator. He could move only his eyes, but his mind was still sharp. The second participant, T18, was paralyzed by a spinal cord injury 18 months before enrollment. Both had multiple brain implants in different areas. These were connected to cables that shuttled recordings to a computer system for real-time processing.
The participants used a simplified QWERTY digital keyboard containing all 26 letters, a space key, and three types of punctuation—a question mark, comma, and period. To train the system, the volunteers imagined stretching, tapping, or curling their fingers to type text prompts, while implants captured and isolated neural signals for each finger. After training, a deep learning model predicted intended characters, and a language model continuously attempted to autocomplete the sentence.
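The decoding pipeline described above, a neural decoder proposing characters and a language model reweighting them toward likely words, can be sketched as a simple log-probability fusion. All probabilities below are invented for illustration; the study’s actual deep learning and language models are far more sophisticated.

```python
import math

# Hypothetical decoder output for one imagined keystroke after typing "th".
# (Made-up numbers; the real decoder scores neural signals from motor cortex.)
decoder_probs = {"e": 0.40, "r": 0.35, "w": 0.25}

# Hypothetical language-model probabilities for the next character given "th".
lm_probs = {"e": 0.70, "w": 0.25, "r": 0.05}

def fuse(decoder, lm, lm_weight=1.0):
    """Rank candidate characters by combined decoder and language-model
    log-probability (higher is better)."""
    scores = {
        ch: math.log(decoder[ch]) + lm_weight * math.log(lm.get(ch, 1e-9))
        for ch in decoder
    }
    return sorted(scores, key=scores.get, reverse=True)

# The decoder alone only slightly favors "e"; the language model widens the
# gap, since "the" is far likelier than "thw" or "thr".
print(fuse(decoder_probs, lm_probs))  # ['e', 'w', 'r']
```

Setting `lm_weight` to zero disables the language-model term, the same idea behind letting users type passwords or slang without being autocorrected.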
After practicing just 30 sentences, both participants could copy on-screen text or type whatever they wanted. When asked “what was the best part of your job,” T18 cheekily replied “the best part of my job was the end [of] the day.” Meanwhile, T17, a fan of The Legend of Zelda video games, told the researchers “you should try oracle of ages and seasons…another is skyward sword…the music in those games is great.”
Their typing speeds broke records. T18 communicated at 110 characters, or roughly 22 words, per minute, which the team wrote is 20 characters per minute faster than a previous state-of-the-art method based on handwriting. The rate is nearly on par with that of able-bodied smartphone users of a similar age. Typing errors were consistently low and neared perfection after practice.
T17, with incomplete locked-in syndrome due to ALS, typed 47 characters a minute at a higher error rate. He had full use of his vocabulary, unlike with previous systems that imposed word restrictions, and communicated much faster.
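For reference, character rates convert to words per minute using the five-characters-per-word convention common in typing research (whether the study used exactly this convention is an assumption here):

```python
# Convert typing speed from characters per minute to words per minute,
# treating one "word" as five characters (a standard typing-research
# convention; the study's exact definition may differ).
CHARS_PER_WORD = 5

def chars_to_wpm(chars_per_minute: float) -> float:
    return chars_per_minute / CHARS_PER_WORD

print(chars_to_wpm(110))  # 22.0, T18's rate
print(chars_to_wpm(47))   # 9.4, T17's rate
```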
The performance differences could be due to where their implants are located. T18’s microarrays are on both sides of the brain, with some covering an area that controls all four limbs. T17’s implants are on only the left half of his brain, with less coverage of finger motor areas.
The team is now tweaking the system for longer use tailored to individuals. As disease progresses, the link between brain signals and keyboard characters may drift and produce more errors. But updating the algorithm is easy. The system needs only a few sentences to learn, so users could start each day mind-typing some thoughts to keep things dialed in.
Updates to the digital keyboard, like adding numbers or the return and delete keys, are in the works. Temporarily disabling the language model could also let participants type strong gibberish passwords, internet slang (ikr, btw, lol), and other non-standard words without being autocorrected.
The brain implant “is a great example of how modern neuroscience and artificial intelligence technology can combine to create something capable of restoring communication and independence for people with paralysis,” said study author Justin Jude.
The post Brain Implants Let Paralyzed People Type Nearly as Fast as Smartphone Users appeared first on SingularityHub.
2026-03-17 07:20:21
The simulation encompasses nearly all of a cell’s molecules over roughly two hours.
Five years ago, scientists watched in wonder as synthetic bacteria grew and split into daughter cells. The bacteria’s extremely stripped-down genome still supported its entire life cycle. It was a crowning achievement in synthetic biology that shed light on life’s most basic processes.
These processes can now be viewed digitally. This month, a team at the University of Illinois at Urbana-Champaign developed a virtual model of the bacteria tracking nearly all of a cell’s molecules down to the nanoscale. The researchers made this digital cell by combining several large datasets covering thousands of molecules and then animating them as the bacteria split in two.
The model is the latest in a growing effort to make digital twins of living cells. Mimicking diseases or treatments in the digital world offers a bird’s-eye view of cellular changes and could speed up drug discovery and help researchers tackle complex diseases like cancer.
“We have a whole-cell model that predicts many cellular properties simultaneously,” study author Zan Luthey-Schulten said in a press release. The model could provide “the results of hundreds of experiments” at the same time, she said.
Every cell is a bustling metropolis. Proteins orchestrate a vast range of cellular responses. RNA molecules carry instructions from genes to the cell’s protein-building factories. Fatty acids in a cell’s membrane rearrange themselves to admit nutrients or ward off invaders. Working in tandem, they all keep the cell humming along.
This complexity makes cells hard to simulate. But with large datasets charting the genome, gene expression, and proteins alongside sophisticated AI, scientists have built static virtual cells that paint a near-complete picture with atomic-level resolution. More recent models can even predict molecular movements for a short period of time (often less than a second).
But they can’t simulate “the mechanics and chemistry that take place over minutes to hours in processes such as gene expression and cell division,” wrote the University of Illinois team.
Other efforts use physics to predict how molecular changes affect behavior in bacteria, yeast, and human cells. These treat cells as a “well-stirred system”—that is, a cup of molecular soup lacking details about where each molecule sits and how molecules vary from cell to cell.
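To make the “well-stirred” idealization concrete, here is a minimal sketch of the standard approach, Gillespie’s stochastic simulation algorithm, for a single protein being produced and degraded. The rate constants are made up, and a real whole-cell model tracks thousands of coupled reactions.

```python
import random

def gillespie(k_make=2.0, k_decay=0.1, t_end=100.0, seed=42):
    """Simulate one protein species in a well-mixed cell: constant
    production plus per-molecule decay, with random reaction timing."""
    rng = random.Random(seed)
    t, protein = 0.0, 0
    while t < t_end:
        # Reaction propensities: constant production, per-molecule decay.
        a_make, a_decay = k_make, k_decay * protein
        a_total = a_make + a_decay
        # Waiting time to the next reaction is exponentially distributed.
        t += rng.expovariate(a_total)
        # Pick which reaction fired, weighted by propensity.
        if rng.random() < a_make / a_total:
            protein += 1
        else:
            protein -= 1
    return protein

# Counts fluctuate around the deterministic steady state k_make/k_decay = 20.
print(gillespie())
```

Note what this model leaves out: nothing in it says where a reaction happens inside the cell, which is exactly the information spatial whole-cell models add.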
But location is key. As cells divide, some proteins gather around DNA to help copy it; others assemble near the membrane to recruit fatty molecules for its growth as the cell splits in two.
Simulating everything, everywhere, all at once during human cell division is beyond even the most powerful supercomputers. Minimal bacteria offer an alternative. These synthetic bacteria are stripped-down versions of the parasite Mycoplasma mycoides. The team focused on one of these known as JCVI-syn3A. Its 493-gene genome—roughly half the original—is the smallest set of DNA instructions to boot up a living bacterium that can still grow and divide.
In 2022, the team developed a 3D model of the bacteria’s metabolism, genes, and growth. But the software, Lattice Microbes, struggled to track division.
The new study added more data to the software. This included membrane changes and information about how ribosomes, the cell’s protein-making machines, assemble and move inside the cell’s gooey interior. They also added stochasticity, or unpredictability, to the model.
Changes to the location of chromosomes, which house DNA, are random as the cell divides, which makes them difficult to predict. But their position influences DNA replication and gene expression.
The first update nearly broke the software. It could map molecules involved in cell division, such as an enzyme critical for DNA copying. But adding chromosome location predictions slowed the model to a crawl, even when running on advanced GPUs. Most of the cells died before their simulations were complete.
Several tweaks helped. One was to add more computational power. The team used a GPU dedicated to chromosomes, while all other details were processed on a separate chip. The model also ran faster by rendering some proteins as inert spheres that could be largely ignored.
The upgrades worked. Leaving the model running over Thanksgiving, the team returned to find it had completed the bacteria’s whole life cycle. “All of a sudden, it was just this huge leap,” study author Zane Thornburg told Nature.
The simulation matched many real-world experiments, such as how the cells elongate and bubble into dumbbell-like shapes during division. The model also accurately predicted the length of a cell cycle and captured a wide range of cellular activity.
“I can’t overstate how hard it is to simulate things that are moving—and doing it in 3D for an entire cell was…triumphant,” said Thornburg.
Every cell is like a snowflake: Each contains similar molecules, but their amounts and locations differ. The model easily handled this diversity. Repeated simulations of the bacteria, each starting with slightly different genetic, molecular, and metabolic makeup, resulted in a similar cycle length and movement of chromosomes during division.
The results came at a cost: Simulating the cell’s 105-minute cycle took up to six days on a supercomputer. But the virtual cell could lend insights into the molecular dance that causes all cells to grow and divide. JCVI-syn3A doesn’t have the smallest genome. Its predecessor holds the record, but it also struggles to make normally shaped and functional daughter cells—suggesting some genes are essential for division. Simulation could help us understand why.
Other efforts using generative AI to build virtual cells are in the works. But because this study’s model was grounded in strict physical and biochemical rules, its results can be easily verified in the lab. AI-generated virtual cells, by contrast, are commonly trained on gene expression data alone, a snapshot of a cell’s state that often fails to predict complex cell responses.
The two approaches could inspire each other by homing in on principles that make a virtual cell run like the real deal. For example, they could show that capturing each molecule in space and time, rather than as a soup, vastly improves the model.
Although the model can’t simulate a cell atom-by-atom, the team wrote, it could “illuminate the interwoven nature of the biology, chemistry, and physics that govern life for cells.”
The post Digital Twin of a Cell Tracks Its Entire Life Cycle Down to the Nanoscale appeared first on SingularityHub.
2026-03-14 22:00:00
How Pokémon Go Is Giving Delivery Robots an Inch-Perfect View of the World
Will Douglas Heaven | MIT Technology Review ($)
“Niantic Spatial is using that vast and unparalleled trove of crowdsourced data—images of urban landmarks tagged with super-accurate location markers taken from the phones of hundreds of millions of Pokémon Go players around the world—to build a kind of world model, a buzzy new technology that grounds the smarts of LLMs in real-world environments.”
A Roadmap for AI, if Anyone Will Listen
Connie Loizos | TechCrunch
“The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. One path, which the declaration calls ‘the race to replace,’ leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.”
Startup Is Building the First Data Center to Use Human Brain Cells
Alex Wilkins | New Scientist ($)
“Data centers use huge amounts of energy and chips are in high demand—could brain cells be the answer? Australia-based start-up Cortical Labs has announced it is building two ‘biological’ data centers in Melbourne and Singapore, stacked with the same neuron-filled chips that it has demonstrated can play Pong or Doom.”
Why Do Humanoid Robots Still Struggle With the Small Stuff?
John Pavlus | Quanta Magazine
“‘I asked each researcher: Can your flagship robot—Boston Dynamics’ Atlas or Agility’s Digit, two of the most credible and pedigreed humanoids on Earth—handle any set of stairs or doorway?’ ‘Not reliably,’ Hurst said. ‘I don’t think it’s totally solved,’ Kuindersma said. …It’s 2026. Why are humanoids still this…hard?”
AI Isn’t Lightening Workloads. It’s Making Them More Intense.
Ray A. Smith | The Wall Street Journal ($)
“One of the great hopes for artificial intelligence—at least, among workers—is that it will ease workloads, freeing people up for more high-level, creative pursuits. So far, the opposite is happening, new data show. In fact, AI is increasing the speed, density and complexity of work rather than reducing it, according to an analysis of 164,000 workers’ digital work activity.”
Karpathy’s March of Nines Shows Why 90% AI Reliability Isn’t Even Close to Enough
Nikhil Mungel | VentureBeat
“The ‘March of Nines’ frames a common production reality: You can reach the first 90% reliability with a strong demo, and each additional nine often requires comparable engineering effort. For enterprise teams, the distance between ‘usually works’ and ‘operates like dependable software’ determines adoption.”
The Race to Solve the Biggest Problem in Quantum Computing
Karmela Padavic-Callaghan | New Scientist ($)
“Quantum computers are already here, but they make far too many errors. This is arguably the biggest obstacle to the technology really becoming useful, but recent breakthroughs suggest a solution may be on the horizon. ‘It’s a very exciting time in error correction. For the first time, theory and practice are really making contact,’ says Robert Schoelkopf at Yale University.”
Modular Yard Robot Mows Lawns, Plows Snow, Gathers Leaves and Trims Grass
Maryna Holovnova | New Atlas
“Homeowners usually end up with a garage filled with various equipment: a lawn mower, snow blower, shovels, and tools for clearing fallen leaves. Currently available on Kickstarter, the Yarbo M attempts to combine all those individual tools into one compact robotic platform that can automatically do all the yard work.”
These Self-Configuring Modular Robots May One Day Rule the World
Tom Hawking | Gizmodo
“Each unit has multiple points to which another unit can attach itself: 18 of them, to be precise, which means that just two units can be combined in 435 ways. The number of possible configurations explodes as the number of units increases, and by the time you get to five units, there are hundreds of billions of possible combinations.”
This SpaceX Veteran Says the Next Big Thing in Space Is Satellites That Return to Earth
Tim Fernholz | TechCrunch
“The reusable rocket has transformed the space industry in the last decade, and a new startup led by a SpaceX veteran wants to do the same for satellites. Brian Taylor, who helped build satellites for networks like SpaceX’s Starlink and Amazon’s Leo, founded Lux Aeterna in December 2024 to develop satellite structures with a built-in heat shield that will allow them to return to Earth with their payloads intact.”
Almost 40 New Unicorns Have Been Minted So Far This Year—Here They Are
Dominic-Madori Davis | TechCrunch
“Using data from Crunchbase and PitchBook, TechCrunch tracked down the VC-backed startups that became unicorns in 2026. While most are AI-related, a surprising number are focused on other industries like healthcare and even a few crypto companies.”
SETI Thinks It Might Have Missed a Few Alien Calls. Here’s Why
Matthew Phelan | Gizmodo
“A new study published by researchers at the SETI Institute, short for the Search for Extraterrestrial Intelligence, has tested the possibility that ‘space weather’ could render strong premeditated alien broadcasts into the kind of fainter radio signals that SETI typically ignores.”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 14) appeared first on SingularityHub.
2026-03-14 07:14:34
A single shot protected mice from the protein gunk implicated in Alzheimer’s disease.
Alzheimer’s disease and cancer have something in common: They’re hard to treat.
Despite decades of research, both still plague humanity, robbing people of longer, healthier lives. In Alzheimer’s, a protein called amyloid forms toxic clumps in the brain. Eventually, neurons supporting memory, decision-making, and movement wither.
Whether amyloid clumps cause Alzheimer’s is still hotly debated. Drugs that clear the proteins have slowed disease progress, but only mildly. The FDA recently approved two such drugs for early stage patients. The approvals were controversial, however, due to risks, like brain bleeds.
As scientists have struggled with Alzheimer’s, blood cancer treatments have undergone a revolution thanks to a therapy called CAR T. The treatment genetically engineers a patient’s T cells to hunt and destroy a handful of previously untreatable blood cancers.
Taking a page from CAR T, a team at the Washington University School of Medicine in St. Louis has now transformed brain cells, known as astrocytes, into amyloid-gobbling machines.
A single injection into mice with Alzheimer’s prevented the formation of amyloid clumps in the brain during the disease’s early stages. In mice whose brains were already riddled with the toxic plaques, the treatment cut amyloid levels roughly in half.
“This study marks the first successful attempt at engineering astrocytes to specifically target and remove amyloid beta plaques in the brains of mice with Alzheimer’s disease,” study author Marco Colonna said in a press release.
Alzheimer’s has baffled scientists for over a century. Genetics play a role. Some versions of a gene called APOE protect the brain against the disease; others exacerbate it. Inflammation in the skin, lungs, or gut may be an early trigger. Damage to waste-cleaning cells that normally wash toxic proteins out of the brain could also contribute to symptoms.
But the reigning theory of what causes Alzheimer’s is amyloid buildup. These sticky proteins aren’t inherently evil. At low levels, they tweak how neurons connect to make memories, support brain healing after injury, and may ward off infections.
In Alzheimer’s, however, amyloid clumps into toxic waste. The brain’s immune cells can initially clear them up. But amyloid eventually overwhelms these cells and causes them to spew inflammatory molecules that exacerbate the disease.
Amyloid clumps ignite a molecular cascade, leading to brain inflammation and ultimately neuronal loss, wrote Jake Boles and David Gate at Northwestern University, who were not involved in the study.
Existing drugs target amyloid proteins with antibodies that either physically prevent amyloid from clumping or mark the proteins for destruction by the immune system. But patients need regular treatment, and the risk of brain bleeds or stroke-like symptoms causes many to opt out.
Engineering brain cells to do the job could be a lasting solution.
In CAR T, scientists genetically engineer T cells to produce special proteins, or CARs, that latch onto cancer cells and destroy them. These have two sections: One outside the cell to recognize targets, and another inside the cell to trigger a biological effect. For CAR T cells, the interior trigger releases a blast of molecular bullets that destroys cancers or pathogens—and in one study, even amyloid proteins in mice. Another recent study modified a different type of immune cell to clear the toxic clumps, although its safety is still unknown.
These test cases suggest that CAR technology could reduce toxic amyloid buildup. But they used cell types that weren’t native to the brain and needed additional chemicals to keep them working. This mismatch could make treatment more complex and risk side effects.
In contrast, “genetically engineering resident brain cells to target…[amyloid]…could circumvent these persistence challenges,” wrote Boles and Gate.
The team turned to astrocytes. These support cells help repair brain injuries, provide nutrients to neurons, and impact memory and cognition. They also eat up dead cell fragments and some proteins, though they’re less efficient at this task than the brain’s immune cells.
To boost astrocytes’ appetite for amyloid, the team tested multiple CAR designs. Genetic instructions for two of these were then packaged into a harmless virus for delivery and injected into the veins of mice modeling Alzheimer’s disease.
Inside the brain, the treatment transformed naturally occurring astrocytes into CAR-A cells. In mice two and a half months old, roughly late adolescence in human years, the cells prevented amyloid clumps for at least three months. And a single shot slashed amyloid levels in half for older mice already suffering buildup. The treatment also protected neurons from further damage.
Though tailored to astrocytes, the gene therapy also caused the brain’s immune cells to more readily devour amyloid plaques, swept away malfunctioning ones, and lowered inflammation. Shifting some of the amyloid-clearing burden from immune cells to astrocytes could create a less toxic environment in the Alzheimer’s brain, wrote Boles and Gate.
Anti-amyloid antibodies in current Alzheimer’s drugs struggle to tunnel through the blood-brain barrier. But CAR-A cells are made inside the brain with minimal blood vessel damage, which could also lower the chances of deadly side effects.
“Consistent with the antibody drug treatments, this new CAR-astrocyte immunotherapy is more effective when given in the earlier stages of the disease,” said study author David Holtzman. “But where it differs, and where it could make a difference in clinical care, is in the single injection that successfully reduced the amount of harmful brain proteins in mice.”
While the new approach may improve on current treatments, it ran into the same problem: The shot reduced amyloid clumps without significantly improving memory or mood.
The results mirror those seen in anti-amyloid drug trials. It could be because over-zealous CAR-A cells nibble on neurons too and destroy their connections, offsetting any benefit. Or more fundamentally, it could be that targeting amyloid alone isn’t enough. Tau proteins, for example, also aggregate in neurons as the disease goes on, and higher levels are tied to mental decline.
Early treatments targeting tau proteins have universally failed. But the CAR-A platform could be redesigned to go after tau in a combination therapy to wipe out both toxic proteins.
Beyond Alzheimer’s, a similar strategy may work in other brain diseases or even against brain cancers. The team is now fine-tuning their designs to better detect targets.
“As CAR technologies mature and the ability to selectively neutralize toxic proteins improves, these approaches hold substantial promise for AD [Alzheimer’s disease] and other neurodegenerative disorders,” wrote Boles and Gate.
The post These Genetically Engineered Brain Cells Devour Toxic Alzheimer’s Plaques appeared first on SingularityHub.
2026-03-13 06:26:46
Finding water, generating power, and sheltering astronauts from radiation are just a few of the challenges NASA must solve.
A US Senate committee has directed NASA to begin work on a moon base “as soon as is practicable.” Under legislation advanced by Senate lawmakers, the outpost would serve as a science laboratory and proving ground, where astronauts would develop the capabilities to live and work beyond Earth’s orbit.
A recent executive order issued by the White House directs NASA to establish the initial elements of a permanent moon base by 2030.
Since 2017, Artemis has been the NASA-led program working towards a sustained human presence on the moon. This year, it will send astronauts around the moon for the first time in more than half a century. And following a shake-up of Artemis announced in late February, the space agency plans to greatly increase the frequency of Artemis missions and return humans to the lunar surface in 2028.
A vote will now decide whether the Senate legislation, known as the NASA Authorization Act of 2026, advances through Congress, where a second bill is also circulating. The two bills, which each break down this year’s funding for specific NASA programs, will be reconciled and voted on in both houses before becoming law.
Underlying some of the announced changes is a deepening concern in Congress and the current administration about the challenge rival powers pose to US leadership in space. A Chinese-Russian-led moon outpost known as the International Lunar Research Station is already under development.
A one-page summary accompanying the Senate bill calls for a US base “so we can get there before the Chinese” and to “dominate the Moon, control strategic terrain in space, and write the rules of the 21st century.”
The American habitat will be located at the moon’s south pole, a strategically important location which harbors valuable resources such as water ice. The water could support habitation systems at a lunar outpost and be turned into rocket propellant for onward exploration.
Where exactly the base is located will depend on the terrain, how much sunlight the site receives, how extreme the temperatures are, how easily astronauts can communicate with Earth, and their access to resources such as water. The rim of a 21-kilometer-wide depression known as Shackleton Crater (which may hold abundant ice deposits) and a flat-topped mountain called Mons Mouton are among the leading candidates, as both combine several of these favorable factors.
At high latitudes, such as the lunar poles, elevated crater rims can receive near-constant solar illumination. This makes them more thermally stable than many sites at the equator and provides a consistent supply of solar power. However, the strategic value of these sites lies in what are called permanently shadowed regions (PSRs). These impact craters, untouched by sunlight for billions of years, are believed to contain water-ice deposits.
While the south pole remains a primary focus in upcoming missions, other targets near the equator, such as Marius Hills and Mare Tranquillitatis, offer alternative advantages. These regions feature massive underground lava tubes formed by ancient volcanic activity that can act as natural shields against solar radiation and micrometeorite bombardments. They could insulate human outposts against extreme swings in temperature from 127° Celsius to -173° Celsius.
The interiors of lunar lava tubes are estimated to remain at about 17° Celsius year-round, making them ideal sites for human bases. However, unlike at the lunar poles, water in these regions is typically trapped as molecules within volcanic glass beads or minerals. Extracting this water to sustain human activities would require intensive heating and significant technological development.
The moon’s day-night cycle means that a given point on the lunar surface sees roughly 14 Earth days of continuous daylight followed by 14 days of darkness. While solar power is a viable entry point, it cannot sustain a permanent human presence through the freezing lunar night. To achieve the 2030 mandate for a “sustained presence,” NASA and the Department of Energy are developing nuclear fission reactors as a potential source of energy.
They have been working on 40-kilowatt-class reactors that are designed to be launched from Earth in an inert state and activated upon arrival. To protect the crew from radiation, the reactors will likely be placed at a distance or buried within the lunar regolith (soil), which serves as a natural radiation shield.
The deployment of lunar fission reactors raises practical governance questions under existing international space law. The US-led set of rules for operating in space, known as the Artemis Accords, establishes a framework for peaceful cooperation.
It calls for transparency about space agencies’ activities on the surface and proposes safety zones around nuclear infrastructure. However, this approach conflicts with the Outer Space Treaty of 1967, which guarantees the right of all nations to have unrestricted access to all areas of celestial bodies.
Given that energy security is a strong prerequisite for successful habitation systems, there is a clear need for the governance of the storage and disposal of the materials used for nuclear fission on the lunar surface.
A lunar base would likely be built up in stages. Early missions would use satellites and autonomous rovers to study the lunar surface, identify areas rich in resources, and confirm the presence of water. Under a 2030s timeline, robotic missions could be sent ahead to prepare landing sites by leveling the ground and melting the dusty surface into harder landing pads. This would help reduce the damage caused by highly abrasive lunar dust kicked up during landings.
The habitats themselves would probably be built by connecting different modules—a bit like the International Space Station. Current designs favor modules that can be reduced in size for transportation and then expanded after landing. One way to do this is with inflatable structures.
Later, more permanent architectures may use microwaves or lasers to sinter or melt the lunar regolith into solid structures. This would create protective shells around base modules to protect them against micrometeorites and cosmic radiation.
The moon serves as a testbed for the life-support, power, and robotic systems required to support human missions on Mars and other destinations in deep space.
The fiscal implications of sustained operations on the lunar surface also require a more realistic assessment of funding. With NASA’s topline budget remaining largely flat, the higher cadence (frequency) of lunar missions outlined in NASA’s changes to Artemis would increase pressure on agency resources.
This may intensify competition with existing science and Earth observation priorities, but it also strengthens the case for greater commercial participation and international cost-sharing. If these financial pressures can be managed effectively, the long-term legacy of sustained lunar surface operations could be a more durable framework for funding space exploration.
The coming decade will test not only our ability to operate through the lunar night, but also our capacity to build the logistical, legal, and cooperative frameworks needed for a durable human presence beyond Earth.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post The US Plans to Break Ground on a Permanent Moon Base by 2030. Here’s What It Will Take. appeared first on SingularityHub.