2026-01-17 23:00:00
We’re About to Simulate a Human Brain on a Supercomputer
Alex Wilkins | New Scientist ($)
“What would it mean to simulate a human brain? Today’s most powerful computing systems now contain enough computational firepower to run simulations of billions of neurons, comparable to the sophistication of real brains. We increasingly understand how these neurons are wired together, too, leading to brain simulations that researchers hope will reveal secrets of brain function that were previously hidden.”
Gemini Is Winning
David Pierce | The Verge
“Each one of [the] elements [you need in AI] is complex and competitive; there’s a reason OpenAI CEO Sam Altman keeps shouting about how he needs trillions of dollars in compute alone. But Google is the one company that appears to have all of the pieces already in order. Over the last year, and even in the last few days, the company has made moves that suggest it is ready to be the biggest and most impactful force in AI.”
Meet the New Biologists Treating LLMs Like Aliens
Will Douglas Heaven | MIT Technology Review ($)
“[AI researchers] are pioneering new techniques that let them spot patterns in the apparent chaos of the numbers that make up these large language models, studying them as if they were doing biology or neuroscience on vast living creatures—city-size xenomorphs that have appeared in our midst.”
Scientists Sequence a Woolly Rhino Genome From a 14,400-Year-Old Wolf’s Stomach
Kiona N. Smith | Ars Technica
“DNA testing revealed that the meat was a prime cut of woolly rhinoceros, a now-extinct 2-metric-ton behemoth that once stomped across the tundras of Europe and Asia. Stockholm University paleogeneticist Sólveig Guðjónsdóttir and her colleagues recently sequenced a full genome from the piece of meat, which reveals some secrets about woolly rhino populations in the centuries before their extinction.”
Finally, Some Good News in the Fight Against Cancer
Ellyn Lapointe | Gizmodo
“The findings, published Tuesday, show for the first time that 70% of all cancer patients survived at least five years after being diagnosed between 2015 and 2021. That’s a major improvement since the mid-1970s, when the five-year survival rate was just 49%, according to the report.”
A Leading Use for Quantum Computers Might Not Need Them After All
Karmela Padavic-Callaghan | New Scientist ($)
“Understanding a molecule that plays a key role in nitrogen fixing—a chemical process that enables life on Earth—has long been thought of as a problem for quantum computers, but now a classical computer may have solved it. …The researchers also estimated that the supercomputer method may even be faster than quantum ones, performing calculations in less than a minute that would take 8 hours on a quantum device—although this estimate assumes an ideal supercomputer performance.”
AI Models Are Starting to Crack High-Level Math Problems
Russell Brandom | TechCrunch
“Since the release of GPT 5.2—which Somani describes as “anecdotally more skilled at mathematical reasoning than previous iterations” — the sheer volume of solved problems has become difficult to ignore, raising new questions about large language models’ ability to push the frontiers of human knowledge.”
How Next-Generation Nuclear Reactors Break Out of the 20th-Century Blueprint
Casey Crownhart | MIT Technology Review ($)
“Demand for electricity is swelling around the world. …Nuclear could help, but only if new plants are safe, reliable, cheap, and able to come online quickly. Here’s what that new generation might look like.”
AI’s Hacking Skills Are Approaching an ‘Inflection Point’
Will Knight | Wired ($)
“The situation points to a growing risk. As AI models continue to get smarter, their ability to find zero-day bugs and other vulnerabilities also continues to grow. The same intelligence that can be used to detect vulnerabilities can also be used to exploit them.”
Anthropic’s Claude Cowork Is an AI Agent That Actually Works
Reece Rogers | Wired ($)
“[My experiences testing subpar agents] expose a consistent pattern of generative AI startups overpromising and underdelivering when it comes to these ‘agentic’ helpers—programs designed to take control of your computer, performing chores and digital errands to free up your time for more important things. …They just didn’t work. This poor track record makes Anthropic’s latest agent, Claude Cowork, a nice surprise.”
Ads Are Coming to ChatGPT. Here’s How They’ll Work
Maxwell Zeff | Wired ($)
“OpenAI could use a business like [ads] right about now. The decade-old company has raised roughly $64 billion from investors over its lifetime, and it generated only a fraction of that in revenue last year. Competition from rivals like Google Gemini has only amped up the pressure for OpenAI to monetize ChatGPT’s massive audience.”
Wing’s Drone Delivery Is Coming to 150 More Walmarts
Andrew J. Hawkins | The Verge
“So far, they’ve launched at several stores in Atlanta, in addition to Walmart locations in Dallas-Fort Worth and Arkansas. They currently operate at approximately 27 stores, and with today’s announcement, the goal is to eventually establish a network of 270 Walmart locations with Wing drone delivery by 2027.”
OpenAI Forges Multibillion-Dollar Computing Partnership With Cerebras
Kate Clark and Berber Jin | The Wall Street Journal ($)
“OpenAI plans to use chips designed by Cerebras to power its popular chatbot, the companies said Wednesday. It has committed to purchase up to 750 megawatts of computing power over three years from Cerebras. The deal is worth more than $10 billion, according to people familiar with the matter.”
China Just Built Its Own Time System for the Moon
Passant Rabie | Gizmodo
“As the global race to build a human habitat on the Moon heats up, there are several ongoing attempts to establish a universal lunar time that future missions can rely on. China, however, claims to be the first to set its lunar clocks and has made its new tool publicly available for use.”
The post This Week’s Awesome Tech Stories From Around the Web (Through January 17) appeared first on SingularityHub.
2026-01-17 05:27:09
When you click on the Spotify profile of Intelligent Band Machine you will see an image of three young men staring moodily back into the camera. Their profile confirms that they are a “British band,” “influenced by the post-punk scene,” and trying to capture the spirit of bands like The Cure “while carving out their own unique sound.” When you listen to their music you might be reminded of Joy Division’s Ian Curtis.
If you dig a little deeper and read about them on their record label’s page, you will find that Cameron is the lead singer, and his musical tastes were shaped by the concerts he attended at Nottingham’s Rock City nightclub. Tyler, the drummer, was indeed inspired by The Cure, as well as U2 and The Smiths, while guitarist Antonio blends his Italian mother’s love of classic Italian folk songs with his British father’s passion for The Beatles and The Rolling Stones.
What these profiles don’t say is that Intelligent Band Machine is not real, at least not in the human sense. And I should know, because I created them.
I used a range of generative artificial intelligence (GenAI) tools, as well as my skills as a professional songwriter and sound engineer to make their debut album, “Welcome to NTU,” and I released it on my dedicated AI record label, XRMeta Records, in May 2025.
You might ask why an independently releasing singer-songwriter and music producer like me would create an artificial band. As well as being a musician, I’m an academic with a background in computer science, carrying out research about how GenAI can be used for music.
I had reservations about these tools and how they might affect me as a musician. I had heard about various AI controversies, like the “fake” Drake track, and about artists like Grimes embracing GenAI in 2023. But I was also intrigued by the possibilities.
Over 100 million people have tried Suno, an AI music generation platform that can create songs with vocals and instrumentation from simple text prompts. More than 100 million tracks have been created using the Mubert API, which allows streaming to platforms like YouTube, TikTok, Twitch, and Instagram. And according to Deezer, 28 percent of the music uploaded to its platform each day is fully AI-generated.
It was time for me to investigate what these tools could do. This is the story of how I experimented with GenAI and was transformed from a dance artist to a post-punk soft rock band.
In my early days of songwriting, one of the first pieces of equipment I bought was a Panasonic RQ-2745, a small, slim portable cassette tape recorder that allowed me to record rough drafts of vocals on an audio cassette tape.
When cheap products like the Sony CFS-W30 boombox began to incorporate double cassette decks, I could overdub songs and add choruses or instruments, like flute or guitar, at home. If I wanted a quality recording, I had to book a recording studio. I became an expert at splicing tape to remove vocal parts from the tape recording or to fix tape jams.
Cutting and taping became cutting and pasting as I experimented with the very early free digital music sequencers that were included on a disk I found on the cover of a PC magazine. I felt liberated when sequencers like Cubase, Pro Tools, and Logic allowed high-quality recordings to be produced at home. This, along with the significant reduction in the cost of studio equipment, led to the emergence of the bedroom producer and the proliferation of the 808 sound. This deep, booming bass line can be heard in hits like “It’s Tricky” by RUN DMC, “Emergency Room” by Rihanna, and “Drunk in Love” by Beyoncé.
Digital distribution and social media then paved the way for self-releasing independent artists like me to communicate directly with fans, sell music, and bypass record labels.
Yet during all of these changes, musicians still needed the skills and knowledge to create their songs. Like many musicians, I honed my skills over several years, learning to play the guitar, flute, and piano and developing sound-engineering skills. Even when AI-powered tools began to be incorporated into digital audio workstations, a musician’s skill and knowledge were still needed to use these tools effectively.
Being able to create music from text prompts changed this.
Not since the introduction of music streaming services in the late 1990s has there been such a dramatic shift in music composition and listening technologies. Now non-musicians can create studio-quality music in minutes without the extensive training that I had and without having to buy instruments or studio equipment.
Now anyone can do this. It was time for me to learn what these tools could do.
I typically produce RnB/neo soul, nu-jazz, and dance music, although I can write songs for multiple genres of music. For the experiment, I wanted to try a genre that I do not usually produce music for.
I tested about 60 different GenAI tools and platforms. These included standalone tools that focus on one task, like MIDI generation (musical data that can be played back on a keyboard or music sequencer). I also tried AI music studios. These platforms have user-friendly interfaces that combine a range of AI tools to support lyric, music, image, and video creation.
Suno and Udio were two of the best platforms. They can generate songs with complex vocal melodies and harmonies across a range of genres, with the best outputs being difficult to distinguish from what human musicians can create. Both Telisha “Nikki” Jones and music mogul Timbaland are said to have used Suno to create music for their AI-generated artists.
In June 2025, Timbaland announced the signing of his AI artist TaTa to his dedicated AI record label, Stage Zero. In September 2025 Jones was reported to have signed a $3 million deal with Hallwood Media for her AI-generated artist Xania Monet.
At the time of my experiment in March/April 2025, both Suno and Udio had issues, such as silence gaps, tempo changes, inconsistent vocal quality, and variations in genre. Sometimes the voice might change within the song. There was limited control in terms of editing, and the audio quality could vary within a single track or across a series of songs.
After trying several GenAI music platforms I decided to use Udio due to the quality of its output and its favorable terms and conditions at that time. Taking inspiration from pop-rock and post-punk bands like Joy Division and The Cure, I started the journey towards creating a new persona.
Using GenAI to produce one or two good songs was quite simple. Producing an album of 14 songs that sounded as if they were played by the same band was more challenging, particularly generating the same male voice and musical style for each song.
The songs were either far too similar to each other or had other issues, such as the voice changing or the instruments sounding too different. A careful listen to the songs in Unfolded by the AI artist Xania Monet will reveal similar inconsistencies. For example, you can hear a difference in the voice that is generated for the first song, “This Aint No Tryout,” compared to “Back When Love Was Real.”
My first task was to create the lyrics. I generated about 1,000 songs using Udio and found repeated words and phrases in the lyrics like “neon,” “whisper,” and “we are, we are, we are,” appearing both within and across the two user accounts I created. Themes like darkness, shadows, and light were also repeated within the lyrics for a significant number of songs.
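To make that kind of repetition concrete, here is a minimal sketch of how overused words and phrases could be tallied across a folder of generated lyrics. It is illustrative only: the directory name and one-song-per-file layout are assumptions, not the tooling actually used here.

```python
from collections import Counter
from pathlib import Path
import re

def phrase_counts(lyrics_dir: str, n: int = 3) -> tuple[Counter, Counter]:
    """Tally single words and n-word phrases across all lyric files."""
    words, phrases = Counter(), Counter()
    for path in Path(lyrics_dir).glob("*.txt"):  # assumed layout: one song per file
        tokens = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
        words.update(tokens)
        phrases.update(" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return words, phrases

words, phrases = phrase_counts("generated_lyrics")
print(words.most_common(10))    # overused words, e.g. "neon," "whisper"
print(phrases.most_common(10))  # overused phrases, e.g. "we are we are"
```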
GenAI just couldn’t write lyrics with the complexity or playfulness I needed, so I chose to write the lyrics for the album myself, using a semi-autobiographical narrative. This allowed me to maintain a story across the album: from arriving at Nottingham Trent University and settling into student accommodation to experiencing university life, graduating, and leaving.
I could interweave current affairs like the closure of Nottingham’s Victoria Centre Market in the song “Goodbye Vicky Market.” I included lines that referenced Nottingham’s historical figures like Alan Sillitoe, who wrote “The Loneliness of the Long Distance Runner,” and the author D.H. Lawrence in the song, “Books.”
After writing the lyrics, I generated the music. There were issues with prompt adherence: I tested prompts of different lengths, and in some cases they were partly or wholly ignored. I might write a prompt asking for one genre, and a different genre would be produced.
There were also issues with the synthetic voice pronouncing some of the lyrics. For example, it could not pronounce “NTU” or “Sillitoe,” and I had to rewrite some of the lyrics phonetically or edit the audio to get the correct pronunciation for certain words.
I relied on my sound engineering skills, extending the outputs, editing, mixing, remixing, and manually recording vocals in Cubase to achieve a coherent final mix. This took a significant amount of time. In fact, editing the Udio outputs took so much time, it would have been easier to recreate the music myself. I can write a song in 10 minutes, and I sometimes record myself freestyling lyrics for an entire song directly in Cubase, so this was frustrating.
I encountered similar issues with prompt adherence when generating images and video. When using Kling AI to create images of the band members, I followed its prompt engineering guide. However, I had to generate hundreds of images and edit them with external tools to achieve the final band photos.
Generating video was equally tricky. One way to create a video is to upload a photo, which becomes the first frame. The rest of the video is generated based on the prompt. However, when I uploaded Cameron’s profile image to Kling AI, the initial frames of the 10-second video resembled him, but by the end he had often morphed into someone else. This happened frequently when generating video.
Prompts for camera instructions, such as zoom and pan, were frequently ignored. I also had to edit out scenes with other problems, such as the appearance of extra fingers or an additional leg on the band members.
All this wasn’t cheap either. With 8,000 Kling AI credits at a cost of $64.99, I could generate about 40 ten-second videos, but many were unusable.
Music generation is cheaper. Paying between $24 and $30 for a monthly subscription might allow a user to create between 2,000 and 3,000 songs, depending on how the credits are used. I was very surprised to discover how quickly these song credits can be consumed. Every error or song that didn’t suit my taste still cost credits.
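A rough back-of-the-envelope comparison of those figures (a sketch only; it assumes every credit goes to generation and flat pricing, and the 50 percent failure rate is an illustrative assumption):

```python
# Rough per-item costs from the figures quoted above.
video_cost = 64.99 / 40        # ~$1.62 per 10-second video generated
song_cost_low = 24 / 3_000     # ~$0.008 per song at the cheap end
song_cost_high = 30 / 2_000    # ~$0.015 per song at the expensive end

# Discarded outputs raise the effective cost: at an assumed 50% failure
# rate, each usable video costs roughly twice as much.
print(f"~${video_cost:.2f} per video generated, ~${2 * video_cost:.2f} per usable video")
print(f"~${song_cost_low:.3f}-${song_cost_high:.3f} per song generated")
```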
Eventually, after generating thousands of songs and hundreds of images and videos, using tools like Duck.ai to create the band’s biographies, and spending many hours editing the outputs, Cameron, Tyler, and Antonio began to emerge as the band.

I have always been passionate about creating my own music. As much as I love writing songs, the poor royalty payouts I was receiving had become disheartening. A song I recorded in 2001 and released in 2011, called “Only Heaven Can Compare,” was streamed about 1 million times in France during 2024, but I received only about £21 in royalties.
Prior to streaming, had my song been downloaded by just 10,000 people, I would have been paid about £6,900 (69 pence per download). Artists like Kate Nash have raised concerns about the poor royalty payouts to musicians, citing her £500,000 payout for over 100 million plays of her song “Foundations.”
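The gap between those two payment models is easy to quantify. This is just a sanity check on the figures above, nothing more:

```python
# Per-stream vs. per-download earnings, using the figures quoted above.
per_stream = 21.0 / 1_000_000          # ~£0.000021 per stream
download_income = 10_000 * 0.69        # £6,900 for 10,000 downloads

# Streams needed to match the old download income at this payout rate:
streams_needed = download_income / per_stream
print(f"£{per_stream:.6f} per stream")
print(f"~{streams_needed / 1e6:.0f} million streams to match £{download_income:,.0f} in downloads")
```

At that rate, it would take roughly 329 million streams to match what 10,000 downloads once paid.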
But as I created the band’s album, something unexpected started to happen. I began to enjoy creating music again. The frustrations of using GenAI were balanced by wonder and curiosity.
At times Udio was able to generate vocals that were so realistic I could hardly believe they were created by an AI model. There were moments when I laughed, when I was really moved, and even had chills when I heard some of the songs.
Lyrics that once lay dormant in multiple lever arch files on my bookshelf began to find new life through these generative tools, allowing me to rapidly test them across multiple genres.
I decided to take this experiment further.
After carefully selecting a set of songs I had written many years ago, I created a new persona, Jake Davy Smith. For his 14-track album, called I’ll Be Right Here, which was released on November 22, 2025, I used Suno’s v5 model to generate studio-quality music that matched my original vision.
Suno’s extensive editing tools allow users to upload vocals, create a cover song, and edit the music, lyrics, or voice with greater precision than its earlier models. This helped me nearly recreate my original songs. The track “Calling” is an example of a rock ballad I wrote years ago, recorded, and didn’t release.
Reflecting on this experiment, I found myself with conflicting views about using GenAI. These tools are fast and affordable (in some cases, completely free). They can produce instant results. I now have tools that I can use to quickly reimagine my old songs.
I can use multiple personas to bring my lyrics to life. I am Priscilla Angelique. I am Intelligent Band Machine. I am Jake Davy Smith. I am Moombahtman 25, a male African American moombahton artist who combines hip hop with Latin American beats, and I have many more personas.
I am a “multiple persona musician” or MPM, a term I’ve created to define my new musical identity. Musicians having alter egos isn’t new, but GenAI has completely changed how this is done.
However, there’s another side to this. Human musicians are now having to compete with algorithms capable of producing high-quality music at scale—as well as with each other.
These tools are improving rapidly, and the issues I experienced when using Udio to create the album for Intelligent Band Machine in March/April 2025 have already been addressed in Suno’s v5 model. It is now easier to create a persona with a consistent voice. Users can upload their own songs and also create cover versions of their songs.
Creating the album for Intelligent Band Machine took about a month, and there were multiple issues with trying to create consistent-sounding, high-quality AI-generated songs. I spent hours reviewing thousands of outputs and then more time editing the final set of curated songs in Cubase.
My experience was very different when I created the album for Jake Davy Smith. I used lyrics I had already written, generated between five and 20 versions of each song, and spent far less time editing them. The process was faster; however, there were still some issues. Changes in Jake’s voice occurred, though they were less frequent and easier to correct. There were also problems with pronunciation, but I could now quickly regenerate the audio. In essence, what had previously taken a month now took only a week.
Yet beneath this lies a further internal conflict related to the data used to train these AI models or, as music journalist Richard Smirke describes it, “the largest IP theft in human history.” It is this issue that has made a technology that ought to have been celebrated as one of the biggest technological achievements in decades, one of the most contested instead.
Chatbots like ChatGPT, estimated to have nearly 1 billion users worldwide, have been described by the linguist and activist Noam Chomsky as both “marvels of machine learning” and the “banality of evil.” Image generators like OpenAI’s DALL-E have also come under fire. Critics like Ted Chiang challenge whether AI can make art and other commentators have criticized the lack of cultural diversity in image generation.
In addition to this, in 2024 the UK government announced it was considering an exception to copyright law that would allow industry to use copyrighted works for AI training without compensating the creators. This led to protests. More than 1,000 musicians, including Kate Bush, Annie Lennox, Damon Albarn, and The Clash, released a silent album called “Is This What We Want?” in protest against unauthorized AI training.
Elton John and Paul McCartney also voiced their opposition to changes in copyright law that would benefit AI companies. The mystery about whether a band called The Velvet Sundown was AI-generated added fuel to the fire and sparked further debate during the summer of 2025.
Yet AI companies have been winning, or at least partially winning, court cases. In November 2025, Getty Images “lost its claim for secondary infringement of copyright” against Stability AI. Other AI companies are making deals, and this includes Udio and Suno’s recent deals with music companies. However, more alternative platforms are emerging. Klay.vision is negotiating with the big labels prior to launching, and Soundraw only uses music created in-house for AI training.
So GenAI is here to stay, and musicians will need to adapt. Library music, background music, and music for social media or film can easily be created with AI. However, there are risks: similar music may be generated for other users, uploaded songs may be used as training data, and these tools may inadvertently generate something that breaches someone else’s IP.
One way for musicians to safely use GenAI is by training models using their own data, as YACHT did when they used their back catalog of songs as training data for a new album. In this way musicians can have full control over the outputs. This is something I will be exploring for the next stage of my research.
My transformation has been anything but straightforward. It has been marked by the deep frustration I encountered when initially using these tools, an ongoing conflict about how these tools are trained, and moments of genuine amazement. The albums I created may be imperfect, but they are a clear departure from my usual style and show how GenAI can support musical creativity.

Financially, the albums are unlikely to recoup the cost of creating them, as independent musicians may need hundreds of millions of streams to earn a decent income from music. Even a few million streams of the songs will barely cover the roughly £140 in fees for music, image, and video generation. Merchandise, licensing, sync deals, and other revenue streams will likely remain important sources of income for musicians, whether they are human or AI-generated.
On the legal side, one possible way forward is for AI companies to make open-source versions of their models freely available for offline use. Some already have, but for those that haven’t, it seems fair that if they have used our data to build these systems, they should allow broader access to the models themselves.
New technologies might change how music is produced. We have gone from clapping to drumming and from using drum machines in recording studios to generating “new” sounds with AI. Yet now that I have completed these experiments, I realize that one thing remains the same.
Whether I am cutting tape using scissors, cutting and pasting in a sequencer like Cubase, or regenerating parts in an AI music studio like Suno, human creativity is still an essential part of the process. Using GenAI was transformative, yet it was my creative decisions that shaped the songs, the albums, the avatars for my personas, their biographies, and the overall vision. This is something that AI cannot do – at least, not for now.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post How I Used AI to Transform Myself From a Female Dance Artist to an All-Male Post-Punk Band appeared first on SingularityHub.
2026-01-16 02:32:38
A small trial is testing a decades-old method that transforms cancer cells into a “vaccine” to prevent recurrence.
Lights, vitamin, action. A combination of vitamin B2 and ultraviolet light hardly sounds like a next-generation cancer treatment. But a small new trial is testing the duo in people with recurrent ovarian cancer.
Led by PhotonPharma, this first-in-humans study builds on decades of work investigating whether we can turn whole cancer cells into “vaccines.” Isolated from cancer patients, the cells are stripped of the ability to multiply but keep all other protein signals intact. Once reinfused, the cells can, in theory, alert the immune system that something is awry.
Known as the Mirasol process, the approach was originally used to neutralize bacteria, viruses, and other pathogens in donated blood or plasma (the yellowish liquid part of blood). But a chance discovery led Ray Goodrich at Colorado State University to re-envision the method as a cancer treatment. White blood cells, he noticed, retained their structure but were inactivated after going through the process, turning into “zombies” that couldn’t divide or function.
Cancer cells disabled in the same way could train the immune system to recognize and attack the real tumor. Researchers pursuing the idea have already used it to keep recurrent cancers at bay in mice and dogs. Encouraged, Goodrich, who helped develop the Mirasol process, co-founded PhotonPharma to bring the treatment to people.
To be clear, the method doesn’t prevent cancer. It’s designed to keep pesky cancer cells from returning. The trial is set to begin early this year at City of Hope, a national cancer research and treatment center, and will gauge safety in six patients.
“We are thrilled to have reached this pivotal moment in our journey toward providing a novel treatment option for patients facing advanced ovarian cancer,” said Goodrich in a press release after being given the green light by the FDA to conduct the trial.
Cancer immunotherapies have come a long way. The blockbuster CAR T cell therapy, for example, has transformed treatment for deadly blood cancers. Here, T cells are removed from the patient and genetically engineered to more effectively recognize cancers. When infused back into the body, these enhanced cells can better hunt down and destroy their targets.
CAR T is a breakthrough, but it’s tough on an already fragile body. Patients undergo a short bout of chemotherapy or radiation to remove existing T cells. This step makes space for the newly engineered cells and reduces competition with existing immune cells. Scientists are working on ways to skip the process, such as a gene-editing injection that transforms T cells inside the body. But even these newer methods still struggle to tackle solid tumors, including ovarian cancer.
Cancer vaccines are also on the rise thanks to mRNA technology. Like Covid vaccines, these shots wake up a sluggish immune system and help immune cells penetrate the formidable barriers surrounding cancer cells.
The Mirasol process is different. Before boosting the immune system, it first neuters cancer cells.
Goodrich began working to purify donated blood during the HIV epidemic in the late 1980s, when contamination was a big concern. It took a decade, but he eventually found a curious duo: vitamin B2, or riboflavin, and ultraviolet (UV) light. Riboflavin latches onto DNA and RNA molecules, and a blast of UV light damages the genetic material in pathogens, rendering them incapable of growth and reproduction. This essentially cleanses the blood. The process can be conducted in a machine the size of a desktop printer and is approved in multiple countries in Europe and in Canada.
But the technology is more sledgehammer than scalpel. It also damages red blood cells, platelets, and plasma proteins. To his surprise, however, Goodrich noticed some white blood cells remained intact, though they could no longer replicate. It was this observation that led him to test the procedure on cancer cells.
Using whole cancer cells as vaccines isn’t a new idea. For decades, scientists have tried to use patients’ own tumors to ignite the immune system, often incapacitating the cells with radiation. Some tests made it to clinical trials for prostate cancers and other solid tumors. But the overall immune response was weak, and the idea soon lost support.
Still, whole cancer cells do have a leg up compared to current immunotherapies. Such treatments use only a small handful of proteins, called neoantigens, to help the immune system identify cancer cells. With Mirasol, the inactivated cancer cells retain all their neoantigens. In theory, this could more effectively alert the immune system: More danger signals should provoke a fiercer response. Using whole cancer cells also means scientists don’t need to spend time figuring out the best neoantigens to include in their treatment.
Unlike CAR T, the inactivated cancer cells can be infused back into patients without radiation or chemotherapy conditioning. And because they can no longer divide, there’s little chance of them growing into new tumors.
Early trials in dogs with solid tumors were promising. In an 11-year-old goldendoodle named Ella, cancer of the liver had spread into surrounding healthy tissue, making it difficult to remove completely. Rather than resorting to chemotherapy, her owner opted for Goodrich’s experimental process. After treatment, the cancer showed no signs of returning for three years, at which point Ella passed away from health issues unrelated to cancer.
The new trial will treat people with recurrent, advanced ovarian cancer and no other options.
“Traditionally, it’s been chemotherapy, but in 80 percent of those patients who get surgery and chemotherapy, it comes back…we know it’s really not curable at that point,” Mihae Song, a gynecologic oncologist at City of Hope, told The Colorado Sun.
The trial will primarily measure safety. The team will remove cancer cells from patients, inactivate them with UV light and riboflavin, and add several molecules commonly used in vaccines to boost the immune response. Each person will receive three doses of the cells, as the team watches out for side effects and gauges the response.
If it works, Goodrich and team want to recruit more participants and potentially expand the treatment to other types of solid tumors. But success isn’t guaranteed. Tumors eventually suppress immune attacks, and it’s unclear if UV-treated decoys can restore immune cell vigor.
Still, the trial offers an alternative way to tackle stubborn cancers. “You would always hope, but I think it’s too optimistic to think that it completely cures it,” Goodrich told The Colorado Sun. “But it’s maybe something that could add years of life, of good-quality life, onto someone’s lifespan who is diagnosed with this, because, again, the prognosis right now is not very good.”
The post In First Human Trial, Zombie Cancer Cells Train the Body to Fight Tumors appeared first on SingularityHub.
2026-01-14 05:44:52
“Generative biology is moving drug discovery from a process of chance to one of design.”
Antibodies touch nearly every corner of healthcare. These carefully crafted proteins can target cancer cells, control autoimmune diseases, fight infections, and destroy the toxic proteins that drive neurological disorders. They’re also notoriously difficult to make.
Over 160 antibody therapies have been approved globally. Their market value is expected to reach $445 billion in the next five years. But the traditional design process takes years of trial and error and is often constrained to structures similar to existing proteins.
With AI, however, we can now generate completely new antibody designs—never before seen in nature—from scratch. Last year, labs and commercial companies raced to build increasingly sophisticated algorithms to predict and generate these therapeutics. While some tools are proprietary, many are open source, allowing researchers to tailor them to a specific project.
Some AI-optimized antibodies are already in early clinical trials. In late September, Generate:Biomedicines in Somerville, Massachusetts, presented promising data from patients with asthma treated with an antibody designed with AI’s help. A shot every six months lowered asthma-triggering protein levels without notable side effects.
“Generative biology is moving drug discovery from a process of chance to one of design,” said Mike Nally, CEO of Generate, in a press release.
Nobel Prize winner David Baker at the University of Washington would likely agree. Known for his work on protein structure prediction and design, his team upgraded an AI last year to dream up antibodies for any target at the atomic level.
Pills containing small-molecule drugs like Tylenol still dominate healthcare. But antibody therapies are catching up. These therapies work by grabbing onto a given protein, like a key fitting into a lock. The interaction then either activates or inhibits the target.
Antibodies come in different shapes and sizes. Monoclonal antibodies, for example, are lab-made proteins that precisely dock to a single biological target, such as one involved in the growth or spread of cancer. Nanobodies, true to their name, are smaller but pack a similar punch. The FDA has approved one treatment based on the technology for a blood clotting disorder.
Regardless of type, however, antibody treatments traditionally start from similar sources. Researchers usually engineer them by vaccinating animals, screening antibody libraries, or isolating them from people. Laborious optimization procedures follow, such as mapping the exact structure of the binding pocket on the target—the lock—and tweaking the antibody key.
The process is tedious and unpredictable. Many attempts fail to find antibodies that reliably scout out their intended docking site. It’s also largely based on variations of existing proteins that may not have the best therapeutic response or safety profile. Candidates are then painstakingly optimized using iterations of computational design and lab validation.
The rise of AI that can model protein structures—and their interactions with other molecules—as well as AI that generates proteins from scratch has sparked new vigor in the field. These models are similar to those powering the AI chatbots that have taken the world by storm for their uncanny ability to dream up (sometimes bizarre) text, images, and video.
In a way, antibody structures can be represented as 3D images, and their molecular building blocks as text. Training a generative AI on this data can yield an algorithm that produces completely new designs. Rather than depending on chance, it may be possible to rationally design the molecules for any given protein lock—including those once deemed “undruggable.”
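As a loose illustration of the “building blocks as text” idea, an antibody’s amino-acid sequence can be tokenized like a sentence before being fed to a sequence model. This sketch is generic and illustrative; the example sequence is made up, and real antibody models add many refinements on top of this:

```python
# Illustrative only: treat an amino-acid sequence as text, mapping each
# residue to an integer token ID the way a chatbot tokenizes words.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence: str) -> list[int]:
    """Convert a protein sequence into token IDs for a sequence model."""
    return [VOCAB[aa] for aa in sequence.upper()]

# A made-up antibody-like fragment, not a real therapeutic sequence:
print(tokenize("EVQLVESGGGLVQPGG"))
```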
But biology is complex. Even the most thoughtful designs could fail in the body, unable to grasp their target or latching onto unintended targets, leading to side effects. Antibodies rely on a flexible protein loop to recognize their specific targets, but early AI models, such as DeepMind’s AlphaFold, struggled to map the structure and behavior of these loops.
The latest AI is faring better. An upgraded version of Baker lab’s RFdiffusion model, introduced last year, specifically tackles these intricate loops based on information about the structure of the target and location of the binding pocket. Improved prediction quickly led to better designs.
Initially, the AI could only make nanobodies (short but functional chunks of antibodies), generating candidates against a range of viruses, such as the flu, as well as antidotes against deadly snake venoms. After further tweaking, the AI suggested longer, more traditional antibodies against a toxin produced by a type of life-threatening bacteria that often thwarts antibacterial drugs.
Lab tests confirmed that the designer proteins reliably latched onto their targets at commonly used doses without notable off-target interactions.
“Building useful antibodies on a computer has been a holy grail in science. This goal is now shifting from impossible to routine,” said study author Rob Ragotte.
There have been more successes. One lab introduced a generative model that can be fine-tuned using the language of proteins—for example, adding structural constraints of the final product. In a test, the team selected 15 promising AI-made nanobody designs for cancer, infections, and other diseases, and each successfully found its target in living cells. Another lab publicly released an AI called Germinal that’s also focused on making nanobodies from scratch.
Commercial companies are hot on academia’s heels.
Nabla Bio, based in Cambridge, Massachusetts, announced a generative AI-based platform called JAM that can tackle targets previously unreachable by antibodies. One example is a highly complex protein class called G-protein-coupled receptors. These seven-arm molecules form the “largest and most diverse group” of protein receptors embedded in cell membranes. Depending on chemical signals, the receptors trigger myriad cell responses—tweaking gene activation, brain signaling, hormones—but their elaborate structure makes designing antibodies a headache.
With JAM, the company designed antibodies to target these difficult proteins, showcasing the AI’s potential to unlock previously unreachable targets. The company is releasing some of the data characterizing the antibodies from the study, but most of the platform is proprietary.
Momentum for clinical trials is also building.
After promising initial results, Generate:Biomedicines launched a large Phase 3 study late last year. The trial involves roughly 1,600 people with severe asthma across the globe and is testing an antibody optimized—not engineered from scratch—with the help of AI.
The hope is AI could eventually take over the entire antibody-design process: predicting target pockets, generating potential candidates, and ranking them for further optimization. Rational design could also lead to antibodies that better navigate the body’s nooks and crannies, including those that can penetrate into the brain.
It’ll be a long journey, and safety is key. Because the dreamed-up proteins are unfamiliar to the body, they could trigger immune attacks.
But ultimately, “AI antibody design will transform the biotechnology and pharmaceutical industries, enabling precise targeting and simpler drug development,” says Baker.
The post AI-Designed Antibodies Are Racing Toward Clinical Trials appeared first on SingularityHub.
2026-01-13 06:36:16
The skin could allow machines to dynamically blend into their surroundings or be used to create adaptive displays and artwork.
An octopus’s adaptive camouflage has long inspired materials scientists looking to come up with new cloaking technologies. Now researchers have created a synthetic “skin” that independently shifts its surface patterns and colors like these intelligent invertebrates.
The ability to alter an object’s appearance on demand has a host of applications, from allowing machines to dynamically blend into their surroundings to creating adaptive displays and artwork. Octopuses are an obvious source of inspiration thanks to their unique ability to change the color and physical structure of their skin in just seconds.
So far, however, materials scientists have struggled to replicate this dual control. Materials that change color typically use nanostructures to reflect light in specific ways. But changing a surface’s shape interferes with these interactions, making it challenging to tune both properties simultaneously.
Now, in a paper published in Nature, Stanford University researchers cracked the problem by creating a synthetic skin made of two independently controlled polymer layers: One changes color and the other shape.
“For the first time, we can mimic key aspects of octopus, cuttlefish, and squid camouflage in different environments: namely, controlling complex, natural-looking textures and at the same time, changing independent patterns of color,” Siddharth Doshi, first author of the paper, told The Financial Times.
The new camouflage system took direct inspiration from cephalopods, which use tiny muscle-controlled structures called papillae to reshape their skin’s surface while separate pigment cells alter color.
To recreate these abilities, the researchers turned to a polymer called PEDOT:PSS, which swells when it absorbs water. The team used electron-beam lithography—a technology typically used to etch patterns into computer chips—to control how much different areas of the polymer swell when exposed to liquid.
The team covered one layer of the polymer in a single layer of gold to create textures that switch between a shiny and matte appearance. They then sandwiched another layer of the polymer between two layers of gold, creating an optical cavity that could be used to generate a wide variety of colors as the distance between the gold sheets changes.
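The color tuning follows the textbook resonance condition for such a cavity; this is a standard relation, not a formula taken from the paper. Light bouncing between the two gold mirrors interferes constructively when the round trip fits a whole number of wavelengths:

$$2nd = m\lambda, \qquad m = 1, 2, 3, \ldots$$

Here $d$ is the spacing between the gold layers, $n$ is the refractive index of the polymer between them, and $\lambda$ is the resonant wavelength. As the polymer swells, $d$ grows and the resonance shifts toward longer, redder wavelengths; as it shrinks, the color shifts toward blue.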
The researchers can create four distinct visual states—texture combined with a color pattern, texture only, color only, and no texture or color pattern—by exposing each side of the skin to either water or isopropyl alcohol. The system switches between states in about 20 seconds, and the process is fully reversible.
“By dynamically controlling the thickness and topography of a polymer film, you can realize a very large variety of beautiful colors and textures,” Mark Brongersma, a senior author on the paper, said in a press release. “The introduction of soft materials that can expand, contract, and alter their shape opens up an entirely new toolbox in the world of optics to manipulate how things look.”
Applications could extend beyond camouflage, the researchers say: for instance, using texture changes to control whether small robots cling to or slide across surfaces, or creating advanced displays for wearable devices or art projects.
The current need to apply water to control the appearance of the skin is “a huge limitation,” Debashis Chanda, a physicist at the University of Central Florida, told Nature. But the researchers told The Financial Times they plan to introduce digital control systems to future versions of the skin.
They also hope to add computer vision algorithms to provide information about the surrounding environment the skin needs to blend in with. “We want to be able to control this with neural networks—basically an AI-based system—that could compare the skin and its background, then automatically modulate it to match in real time, without human intervention,” Doshi said in the press release.
While the research faces a long road from the lab bench to commercial reality, sci-fi style cloaking technology has taken a tiny step closer to reality.
The post Sci-Fi Cloaking Technology Takes a Step Closer to Reality With Synthetic Skin Like an Octopus appeared first on SingularityHub.
2026-01-11 03:10:13
Google Gemini Is Taking Control of Humanoid Robots on Auto Factory Floors
Will Knight | Wired ($)
“Google DeepMind is teaming up with Boston Dynamics to give its humanoid robots the intelligence required to navigate unfamiliar environments and identify and manipulate objects—precisely the kinds of capabilities needed to perform manual labor.”
Distinct AI Models Seem to Converge on How They Encode Reality
Ben Brubaker | Quanta Magazine
“Is the inside of a vision model at all like a language model? Researchers argue that as the models grow more powerful, they may be converging toward a singular ‘Platonic’ way to represent the world.”
Flu Is Relentless. CRISPR Might Be Able to Shut It Down.
David Cox | Wired ($)
“They believe CRISPR could be tailored to create a next-generation treatment for influenza, whether that’s the seasonal strains which plague both the northern and southern hemispheres on an annual basis or the worrisome new variants in birds and other wildlife that might trigger the next pandemic.”
Next-Level Quantum Computers Will Almost Be Useful
Dina Genkina | IEEE Spectrum
“The machine that Microsoft and Atom Computing will be delivering, called Magne, will have 50 logical qubits, built from some 1,200 physical qubits, and should be operational by the start of 2027. QuEra’s machine at AIST has around 37 logical qubits (depending on implementation) and 260 physical qubits, Boger says.”
AI Coding Assistants Are Getting Worse
Jamie Twiss | IEEE Spectrum
“In recent months, I’ve noticed a troubling trend with AI coding assistants. After two years of steady improvements, over the course of 2025, most of the core models reached a quality plateau, and more recently, seem to be in decline. A task that might have taken five hours assisted by AI, and perhaps ten hours without it, is now more commonly taking seven or eight hours, or even longer.”
Meta Unveils Sweeping Nuclear-Power Plan to Fuel Its AI Ambitions
Jennifer Hiller | The Wall Street Journal ($)
“Meta Platforms on Friday unveiled a series of agreements that would make it an anchor customer for new and existing nuclear power in the US, where it needs city-size amounts of electricity for its artificial-intelligence data centers. …Financial details weren’t disclosed, but the arrangements are among the most sweeping and ambitious so far between tech companies and nuclear-power providers.”
Even the Companies Making Humanoid Robots Think They’re Overhyped
Sean McLain | The Wall Street Journal ($)
“Billions of dollars are flowing into humanoid robot startups, as investors bet that the industry will soon put humanlike machines in warehouses, factories and our living rooms. For all the recent advances in the field, humanoid robots, they say, have been overhyped and face daunting technical challenges before they move from science experiments to a replacement for human workers.”
Former Google CEO Plans to Singlehandedly Fund a Hubble Telescope Replacement
Eric Berger | Ars Technica
“On Wednesday evening, former Google CEO Eric Schmidt and his wife, Wendy, announced a major investment in not just one telescope project, but four. Each of these new telescopes brings a novel capability online; however, the most intriguing new instrument is a space-based telescope named Lazuli. This spacecraft, if successfully launched and deployed, would offer astronomers a more capable and modern version of the Hubble Space Telescope, which is now three decades old.”
Uber’s Not Done With Self-Driving Cars Just Yet. It’s Designing a New Robotaxi With Lucid and Nuro
Sasha Lekach | Gizmodo
“The companies said that on-road testing [in San Francisco] started at the end of last year, which isn’t surprising as Nuro already holds driverless testing permits through the California DMV. Eventually, the trio plan to offer the Level 4 robotaxi prototype everywhere Uber has a presence—if all goes well, that is.”
Kawasaki’s Four-Legged Robot-Horse Vehicle Is Going Into Production
Bronwyn Thompson | New Atlas
“What was announced as a 2050 pipe dream by Kawasaki, the company’s hydrogen-powered, four-hooved, all-terrain robot horse vehicle Corleo is actually going into production and is now expected to be commercially available decades earlier—with the first model to debut in just four years.”
NASA’s Science Budget Won’t Be a Train Wreck After All
Eric Berger | Ars Technica
“On Monday, Congress made good on…promises [to fund most of NASA’s science portfolio], releasing a $24.4 billion budget plan for NASA as part of the conferencing process, when House and Senate lawmakers convene to hammer out a final budget. The result is a budget that calls for just a 1 percent cut in NASA’s science funding, to $7.25 billion, for fiscal year 2026.”
AI Is Being Used to Find Valuable Commodities in Our Trash
Ryan Dezember | The Wall Street Journal ($)
“Murphy Road executives say the technology allows them to sort up to 60 tons an hour of curbside recycling from around Connecticut and western Massachusetts into precisely sorted bales of paper, plastic, aluminum cans, and other materials. The material is sold to mills, manufacturers, and remelt facilities, which pay more for cleaner bales.”
The post This Week’s Awesome Tech Stories From Around the Web (Through January 10) appeared first on SingularityHub.