2025-03-08 23:00:00
Eerily Realistic AI Voice Demo Sparks Amazement and Discomfort Online
Benj Edwards | Ars Technica
“In late 2013, the Spike Jonze film ‘Her’ imagined a future where people would form emotional connections with AI voice assistants. Nearly 12 years later, that fictional premise has veered closer to reality with the release of a new conversational voice model from AI startup Sesame that has left many users both fascinated and unnerved.”
Inside the Start of Project Stargate—and the Startup Powering It
Abram Brown | The Information
“Just the scale of economics around [Stargate’s] Abilene [datacenter project] is enormous, and Lochmiller made sure I understood that by comparing it to a familiar sight: Marc Benioff’s billion-dollar skyscraper in downtown San Francisco. ‘In the Bay Area, the Salesforce Tower defines the city skyline, right?’ he said. ‘You take three Salesforce Towers, and that’s the amount of work that’s going on here.'”
This Kung Fu Robot Video Makes It Look Like the Uprising Has Already Started
Trevor Mogg | Digital Trends
“Folks often joke about the so-called ‘robot uprising,’ but a new video of Unitree’s advanced G1 robot pulling some kung fu moves could well wipe the smile off their faces. Shared on Tuesday, the 15-second clip shows a baton-wielding human retreating from a robot that then kicks the baton clean out of his hand. Let’s just say that again: a baton-wielding human retreating from a robot.”
De-Extinction Scientists Say These Gene-Edited ‘Woolly Mice’ Are a Step Toward Woolly Mammoths
Jessica Hamzelou | MIT Technology Review
“They’re small, fluffy, and kind of cute, but these mice represent a milestone in de-extinction efforts, according to their creators. The animals have undergone a series of genetic tweaks that give them features similar to those of woolly mammoths—and their creation may bring scientists a step closer to resurrecting the giant animals that roamed the tundra thousands of years ago.”
OpenAI Plots Charging $20,000 a Month For PhD-Level Agents
Stephanie Palazzolo and Cory Weinberg | The Information
“OpenAI executives have told some investors it planned to sell low-end agents at a cost of $2,000 per month to ‘high-income knowledge workers’; mid-tier agents for software development costing possibly $10,000 a month; and high-end agents, acting as PhD-level research agents, which could cost $20,000 per month, according to a person who’s spoken with executives.”
Firefly Releases Stunning Footage of Blue Ghost Landing on the Moon
Passant Rabie | Gizmodo
“The Texas-based company released a clip of Blue Ghost’s descent toward the moon followed by a smooth landing. The footage is a masterclass in lunar landings, capturing striking views of the lander emerging from a cloud of dust, its shadow stretching across the moon’s surface in a superhero-like stance.”
This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion.
Berber Jin and Deepa Seetharaman | The Wall Street Journal
“Silicon Valley’s hottest investment isn’t a new app or hardware product. It’s one man. AI researcher Ilya Sutskever is the primary reason venture capitalists are putting some $2 billion into his secretive company Safe Superintelligence, according to people familiar with the matter. The new funding round values SSI at $30 billion, making it one of the most valuable AI startups in the world.”
Driverless Race Car Sets a New Autonomous Speed Record
Andrew J. Hawkins | The Verge
“Look out: there’s a new fastest robot in the world. A Maserati MC20 Coupe with no one in the driver’s seat set a new land speed record for autonomous vehicles, reaching 197.7mph (318km/h) during an automotive event at the Kennedy Space Center last week.”
AI Reasoning Models Can Cheat to Win Chess Games
Rhiannon Williams | MIT Technology Review
“Facing defeat in chess, the latest generation of AI reasoning models sometimes cheat without being instructed to do so. The finding suggests that the next wave of AI models could be more likely to seek out deceptive ways of doing whatever they’ve been asked to do. And worst of all? There’s no simple way to fix it.”
SpaceX Starship Spirals Out of Control in Second Straight Test Flight Failure
Sean O’Kane | TechCrunch
“The ship successfully separated and headed into space, while the booster came back to the company’s launchpad in Texas, where it was caught for a third time by the launch tower. But at around eight minutes and nine seconds into the flight, SpaceX’s broadcast graphics showed Starship lose multiple Raptor engines on the vehicle. On-board footage showed the ship started spiraling end over end over the ocean.”
People Are Using Super Mario to Benchmark AI Now
Kyle Wiggers | TechCrunch
“Thought Pokémon was a tough benchmark for AI? One group of researchers argues that Super Mario Bros. is even tougher. Hao AI Lab, a research org at the University of California San Diego, on Friday threw AI into live Super Mario Bros. games. Anthropic’s Claude 3.7 performed the best, followed by Claude 3.5. Google’s Gemini 1.5 Pro and OpenAI’s GPT-4o struggled.”
AI Versus the Brain and the Race for General Intelligence
John Timmer | Ars Technica
“The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. …It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 8) appeared first on SingularityHub.
2025-03-08 04:43:37
Until last year, the US hadn’t visited the moon in over a half century. But now? Twice in a week.
A growing number of companies are eyeing the moon as a source of commercial opportunities. Two private landings in under a week suggest our nearest celestial neighbor is open for business.
Rapidly falling launch costs have opened the door for smaller companies to take on more ambitious space missions, including efforts to land on the moon. NASA has also encouraged this activity. In 2018, the agency launched the Commercial Lunar Payload Services (CLPS) program, incentivizing firms to build robotic landers and rovers in support of its plans to return humans to the moon.
Last year, Intuitive Machines’ Odysseus became the first private spacecraft to touch down on the lunar surface. But the vehicle toppled over onto its side in the process, limiting its ability to communicate and deploy experiments.
Last Sunday, however, US startup Firefly Aerospace achieved a clean touchdown with its Blue Ghost lander in the Mare Crisium basin. Meanwhile, Intuitive Machines experienced déjà vu on its second landing near the moon’s south pole on Friday when its Athena lander ended up on its side again.
Firefly’s 6.6-foot-tall lander launched on a SpaceX Falcon 9 rocket on January 15 and entered lunar orbit on February 13. The solar-powered vehicle is carrying 10 NASA science experiments designed to gather data on the lunar surface. It will now conduct a 14-day mission before the lunar night’s frigid temperatures set in and disable the lander.
Things haven’t turned out as well for Intuitive Machines, whose spacecraft took a speedier path to the moon after launching on a Falcon 9 on February 26. The company experienced a repeat of the problems that took the shine off its first landing. Issues with its laser range finders meant the lander lost track of its trajectory above the moon and didn’t touch down properly.
After assessing the spacecraft, Intuitive Machines, which could play an important role in NASA’s plans to return humans to the moon later this decade, said the craft was on its side, would likely be unable to recharge its batteries, and declared the mission over.
“With the direction of the sun, the orientation of the solar panels, and extreme cold temperatures in the crater, Intuitive Machines does not expect Athena to recharge,” the company wrote in a statement Friday. “The mission has concluded, and teams are continuing to assess the data collected throughout the mission.”
Athena was carrying the agency’s Polar Resources Ice Mining Experiment, or PRIME-1, which NASA hoped could help the agency assess how easy it will be for astronauts to harvest water ice.
The experiment featured a drill called TRIDENT to extract lunar soil from three feet beneath the surface and a mass spectrometer to analyze the sample for water. Previous observations have suggested significant amounts of water ice are locked up in the soil at the moon’s south pole. This ice could prove a valuable resource for any future long-term outpost.
Athena was also carrying several robots made by Intuitive Machines, US startup Lunar Outpost, and the Massachusetts Institute of Technology, as well as equipment from Nokia designed to power the moon’s first 4G cellular network.
The hope for both missions is that renewed interest in lunar exploration could soon spur a flourishing off-world economy with plenty of opportunities for the private sector.
In the short term, national space agencies like NASA are likely to be the primary customers for companies like Firefly and Intuitive Machines, which both received funding from the CLPS program. NASA is eager to find cheaper ways to get cargo to the moon on a regular basis to support its more challenging missions.
But there’s hope that in the longer term there could be opportunities for companies to carve out a niche harvesting resources like water ice to create rocket fuel and oxygen or the rare isotope helium-3, which could be used to power fusion reactors. These could be particularly attractive to other private companies looking to push further into the solar system and use the moon as a staging post.
Whether this vision pans out remains to be seen. But with several more private moon landings scheduled later this year, the first shoots of a burgeoning lunar economy seem to be emerging.
The post Two Moon Landings in a Week—One Dead, One Alive—Aim to Kickstart the Lunar Economy appeared first on SingularityHub.
2025-03-07 06:34:21
The project explores how life adapts to extreme environments—and hopes to inspire new drugs or even treatments to aid space travel.
A human can’t survive in the Mariana Trench without protection. At its deepest, the trench plunges 35,000 feet below the surface of the Pacific Ocean to a region reigned by crushing pressure and darkness.
Yet somehow life finds a way. The hadal snailfish, with delicate fins and translucent body, roams the dark and freezing waters. Giant shrimp-like creatures up to a foot long scavenge fallen debris, including wood and plastic, and transparent eels with fish-like heads hunt prey. A carpet of bacteria breaks down dead sea creatures and plankton to recycle nutrients.
We’ve only scratched the surface of what thrives in the deepest regions of the ocean. But a large project has now added over 6,000 new microbes to the deep-sea species tally.
In the Mariana Trench Environment and Ecology Research Project, or MEER for short, a team of scientists collected sediment from the hadal zone—the deepest part of the ocean—in the Mariana Trench and two other areas. The investigation revealed thousands of new species and two adaptations that allow the microbes to thrive under intense pressure.
Another team assembled the genomes of 11 deep-sea fish and found a mutated gene that could boost their ability to survive. Sequencing the genome of a giant shrimp-like creature suggested bacteria boosted its metabolism to adapt to high-pressure environments.
Studying these mysterious species could yield new medications to fight infections, inflammation, or even cancer. They show how creatures adapt to extreme environments, which could be useful for engineering pressure- or radiation-resistant proteins for space exploration.
“The deep sea, especially hadal zones, represents some of the most extreme and least explored environments on Earth,” wrote study author Shunping He and colleagues at the Chinese Academy of Sciences. The project hopes to “push the boundaries of our understanding of life” in this alien world, added Shanshan Liu and her team at BGI research, in a separate study.
Oceans cover roughly 70 percent of the Earth’s surface. Yet we know very little about their inhabitants, especially on the ocean floor.
Since the 1960s, multiple missions—some autonomous, others manned—have sought to explore the deepest part of the Pacific Ocean, the Mariana Trench. Over 30,000 feet deep, it could completely submerge Mount Everest.
The trench is an unforgiving environment. The pressure is over 1,000 times greater than that at sea level, and at Challenger Deep—the deepest point navigated to date—the temperature is just above freezing. The seabed there is shrouded in complete darkness.
Yet a manned descent 65 years ago found flatfish and large shrimp-like creatures thriving in the trench—the first signs that life could survive in such extreme environments. More recently, James Cameron, best known for directing films like Titanic, dived to nearly 36,000 feet and took footage that helped identify even more new species.
The deep sea, it seems, is a trove of alien species yet to be discovered. The MEER project is collecting specimens from the deepest trenches across the world to learn more.
MEER relies on a deep-sea submersible called Fendouzhe, which means striver or fighter in Chinese. Fendouzhe is self-propelled and can survive freezing temperatures and tremendous pressure. It holds three crew members and has two mechanical arms bristling with devices—cameras, sonars, drills.
The submersible reached the bottom of the Mariana Trench in 2020 followed by missions to the Yap Trench and Philippine Basin. Scientists on board gathered over 1,600 sediment samples from multiple hadal zones between 6 and 11 kilometers, or roughly 4 to 7 miles, under the sea.
Added to the punishing pressure and lack of light, the deep sea is low on environmental nutrients. It’s truly “a unique combination that sets it apart from all other marine and terrestrial environments,” wrote the authors.
Sediments hold genetic material that survives intact when brought to the surface for analysis.
One study sketched a landscape of living creatures in the deep ocean using an approach called metagenomics. Here, scientists sequenced genetic material from all microbes within an environment, allowing them to reconstruct a bird’s-eye view of the ecology.
In this case, the collection is “10-fold larger than all previously reported,” wrote the team. Over 89 percent of the genomes are entirely new, suggesting most belong to previously unknown microbial species living in the deep ocean.
Samples collected from different trenches have varying genetic profiles, suggesting the microbes adapted to a range of deep-ocean environments. But they share similar genetic changes. Several genes boost their ability to digest toluene as food. The chemical is best known for its use in manufacturing paints, plastics, medications, and cosmetics.
Other genes wipe out metabolic waste products called reactive oxygen species. In large amounts, these damage DNA and lead to aging and disease. The creatures also have a beefed-up DNA repair system. This could help them adapt to intense pressure and frigid temperatures, both of which increase the chances of these damaging chemicals wreaking havoc.
Meanwhile, other studies peered into the genetic makeup of fish and shrimp-like creatures in the hadal zone.
In one, scientists collected samples using the Fendouzhe submersible and an autonomous rover, covering locations from the Mariana Trench to the Indian Ocean. The team zeroed in on roughly 230 genes in deep-sea fish that boost survival under pressure.
Most of these help repair DNA damage. Others increase muscle function. Surprisingly, all 11 species of deep-sea fish studied shared a single genetic mutation. Engineering the same mutation in lab-grown cells helped them more efficiently turn DNA instructions into RNA—the first step cells take when making the proteins that coordinate our bodily functions.
This is “likely to be advantageous in the deep-sea environment,” wrote the team.
Top predators in the deep rely on a steady supply of prey—mainly, a shrimp-like species called amphipods. Whole genome sequencing of these creatures showed the shrimp thrive thanks to various good bacteria that help them defend against other bacterial species.
There are also some other intriguing findings. For example, while most deep-sea fish have lost genes associated with vision, one species showed gene activity related to color vision. These genes are similar to ours and could potentially let them see color even in total darkness.
Scientists are still digging through the MEER database. The coalition hopes to bolster our understanding of the most resilient lifeforms on Earth—and potentially inspire journeys into other extreme environments, like outer space.
The post Scientists Discover Thousands of New Microbial Species Thriving in the Mariana Trench appeared first on SingularityHub.
2025-03-05 02:14:21
PsiQuantum claims to have solved scalability issues that have long plagued photonic approaches.
American quantum computing startup PsiQuantum announced last week that it has cracked a significant puzzle on the road to making the technology useful: manufacturing quantum chips in large quantities.
PsiQuantum burst out of stealth mode in 2021 with a blockbuster funding announcement. It followed up with two more last year.
The company uses so-called “photonic” quantum computing, which has long been dismissed as impractical.
The approach, which encodes data in individual particles of light, offers some compelling advantages—low noise, high-speed operation, and natural compatibility with existing fiber-optic networks. However, it has long been held back by extreme hardware demands: photons travel at blinding speed, get lost easily, and are hard to create and detect.
PsiQuantum now claims to have addressed many of these difficulties. Last week, in a new peer-reviewed paper published in Nature, the company unveiled hardware for photonic quantum computing they say can be manufactured in large quantities and solves the problem of scaling up the system.
Like any computer, quantum computers encode information in physical systems. Whereas digital computers encode bits (0s and 1s) in transistors, quantum computers use quantum bits (qubits), which can be encoded in many potential quantum systems.
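As a reminder of the standard formalism (general to every qubit platform, not specific to PsiQuantum's hardware), a qubit's state is a normalized superposition of two basis states:

```latex
% A qubit is a normalized superposition of two basis states:
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad \alpha, \beta \in \mathbb{C},
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Measurement yields 0 with probability |α|² and 1 with probability |β|²; in the photonic approach, these basis states are encoded in properties of individual photons.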
The darlings of the quantum computing world have traditionally been superconducting circuits running at temperatures near absolute zero. These have been championed by companies such as Google, IBM, and Rigetti.
These systems have attracted headlines claiming “quantum supremacy” (where quantum computers beat traditional computers at some task) or the ushering in of “quantum utility” (that is, actually useful quantum computers).
In a close second in the headline grabbing game, IonQ and Honeywell are pursuing trapped-ion quantum computing. In this approach, charged atoms are captured in special electromagnetic traps that encode qubits in their energy states.
Other commercial contenders include neutral-atom qubits, silicon-based qubits, intentional defects in diamonds, and non-traditional photonic encodings.
All of these are available now. Some are for sale with enormous price tags, and some are accessible through the cloud. But fair warning: They are more for experimentation than computation today.
The individual bits in your digital computers are extraordinarily reliable. They might experience a fault (a 0 inadvertently flips to a 1, for example) once in every trillion operations.
PsiQuantum’s new platform has impressive-sounding features such as low-loss silicon nitride waveguides, high-efficiency photon-number-resolving detectors, and near-lossless interconnects.
The company reports a 0.02 percent error rate for single-qubit operations and 0.8 percent for two-qubit creation. These may seem like quite small numbers, but they are much bigger than the effectively zero error rate of the chip in your smartphone.
However, these numbers rival the best qubits today and are surprisingly encouraging.
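To put those percentages in perspective, here is a back-of-the-envelope sketch (illustrative only, assuming each operation fails independently at the quoted rate):

```python
# Probability that a circuit of n_ops operations runs error-free,
# assuming independent failures at a fixed per-operation error rate.
def survival_probability(error_rate: float, n_ops: int) -> float:
    return (1 - error_rate) ** n_ops

# A digital bit failing ~once per trillion operations vs. PsiQuantum's
# reported 0.8 percent two-qubit rate, over a modest 1,000-step circuit:
print(survival_probability(1e-12, 1_000))   # effectively 1.0
print(survival_probability(0.008, 1_000))   # roughly 0.0003
```

A thousand-step circuit almost always fails at a 0.8 percent per-step error rate, which is why error correction, not raw qubit quality alone, is the path to useful machines.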
One of the most critical breakthroughs in the PsiQuantum system is the integration of fusion-based quantum computing. This is a model that allows for errors to be corrected more easily than in traditional approaches.
Quantum computer developers want to achieve what is called “fault tolerance.” This means that, if the basic error rate is below a certain threshold, the errors can be suppressed indefinitely.
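The threshold idea has a standard quantitative form. For a distance-d error-correcting code (the textbook surface-code scaling, shown here for illustration rather than taken from PsiQuantum's paper), the logical error rate falls off as:

```latex
p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor}
```

Here p is the physical error rate, p_th the threshold, and A a code-dependent constant. As long as p < p_th, adding qubits to increase d suppresses logical errors exponentially; if p > p_th, adding qubits makes things worse, hence the scrutiny such claims attract.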
Claims of “below threshold” error rates should be met with skepticism, as they are generally measured on a few qubits. A practical quantum computer would be a very different environment, where each qubit would have to function alongside a million (or a billion, or a trillion) others.
This is the fundamental challenge of scalability. And while most quantum computing companies are tackling the problem from the ground up—building individual qubits and sticking them together—PsiQuantum is taking the top-down approach.
PsiQuantum developed its system in partnership with semiconductor manufacturer GlobalFoundries. All the key components—photon sources and detectors, logic gates, and error correction—are integrated on a single silicon-based chip.
PsiQuantum says GlobalFoundries has already made millions of the chips.
By making use of techniques already used to fabricate semiconductors, PsiQuantum claims to have solved the scalability issue that has long plagued photonic approaches.
Because PsiQuantum fabricates its chips in a commercial semiconductor foundry, the company argues that scaling to millions of qubits should be relatively straightforward.
If PsiQuantum’s technology delivers on its promise, it could mark the beginning of quantum computing’s first truly scalable era.
A fault-tolerant photonic quantum computer would have major advantages, including lower energy requirements than competing platforms.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Quantum Computing Startup Says It’s Already Making Millions of Light-Powered Chips appeared first on SingularityHub.
2025-03-04 07:36:42
The device also mimics lemonade and coffee—but fried eggs? Not so much.
“That Cajun blackened shrimp recipe looks really good,” I tell my husband while scrolling through cooking videos online. The presenter describes it well: juicy, plump, smoky, a parade of spices. Without making the dish, I can only imagine how it tastes. But a new device inches us closer to recreating tastes from the digital world directly in our mouths.
Smaller than a stamp, it contains a slurry of chemicals representing primary flavors like salty, sweet, sour, bitter, and savory (or umami). The reusable device mixes these together to mimic the taste of coffee, cake, and other foods and drinks.
Developed by researchers at Ohio State University, the device has a tiny gum-like strip linked to a liquid reservoir. It releases each taste component in a gel and pumps the resulting blend onto the tongue. The system is wireless and includes a sensor to control the chemical mixture. In a demonstration, one person dipped the sensor into some lemonade in San Francisco and transferred a facsimile of the taste to people wearing the devices in Ohio in real time.
Complex flavor profiles—say, a fried egg—are harder to simulate. And it’s likely awkward to have a device dangling from your mouth. But the work brings us a little closer to adding a new sense to virtual and augmented reality and making video games more immersive.
“This will help people connect in virtual spaces in never-before-seen ways,” study author Jinghua Li said in a press release. “This concept is here, and it is a good first step to becoming a small part of the metaverse.”
Gaming aside, future iterations of the device could potentially help people who have lost their sense of taste, including those living with long Covid or traumatic brain injuries.
We can taste food thanks to a variety of chemicals stimulating our taste buds. There are five main types of taste bud, each specializing in a different taste. When we chew food, our taste buds send electrical signals to the brain where they combine into a host of flavors—the bitterness of coffee, tanginess of a cup of orange juice, or richness of a buttery croissant.
But taste isn’t an isolated sensation. Smells, textures, memories, and emotions also come into play. One spoon of comfort food can take you back to happy days as a child. That magic is hard to replicate with a few spurts of chemical flavor and is partly why taste is so hard to recreate in digital worlds, wrote the team.
Virtual and augmented reality have mainly focused on audio and visual cues. Adding smell or taste could make experiences more immersive. An early version of the idea, dubbed Smell-O-Vision, dates back nearly a century when scents were released in theaters to heighten the film experience. It’s still employed in 4DX theaters today.
Cinema isn’t the only industry looking for a multi-sensory upgrade. At this year’s CES, a trailer for Sony’s hit game, The Last of Us, showed the technology at work in an immersive, room-size version of the game where players could smell the post-apocalyptic world.
Taste is harder to recreate. Older methods activated taste buds with electrical zaps to the tongue. While participants could detect very basic tastes, hooking your tongue up to electrodes isn’t the most comfortable setup.
More recently, a Hong Kong team developed a lollipop-like device that produces nine tastes embedded in food-safe gels. An electrical zap releases the chemicals, and upping the voltage delivers a stronger flavor. The approach is an improvement, but holding a lollipop in your mouth while gaming for hours is still awkward.
The new device offers a neater solution. Dubbed e-Taste, it has two main components: a sensing platform to analyze the taste profile of a food or drink and an actuator to deliver a mixture of liquid chemicals approximating the sampled taste.
The actuator, a cube the size of a shirt button attached to a gum-like strip, hangs on the lower teeth. The cube stores chemicals mimicking each of the five tastes—glucose for sweet and citric acid for sour, for example—in separate chambers. A tiny pump, activated by an electrical zap, pushes the liquids onto a gel strip where they mingle before being released onto the tongue. Each pump delivers the equivalent of a drop of water, which is enough to activate the taste buds.
A person using the device holds the strip inside their mouth with the cube dangling outside. Once the sensor captures a food or drink’s flavor profile—say, equal amounts of sweet, sour, salty, and savory—it wirelessly transmits the data to the actuator which releases the final taste mixture for roughly 45 minutes—plenty of time to experience a virtual foodie session.
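The sensor-to-actuator flow can be sketched as follows. Every name and number here is hypothetical, invented purely for illustration; the paper's actual interfaces and chamber volumes are not described in this summary:

```python
from dataclasses import dataclass, asdict

# Hypothetical model of the five-channel taste profile the sensor transmits.
# Field names and the 50-microliter drop size are illustrative assumptions.
@dataclass
class TasteProfile:
    sweet: float   # e.g. glucose channel, relative intensity 0-1
    sour: float    # e.g. citric acid channel
    salty: float
    bitter: float
    umami: float

def pump_commands(profile: TasteProfile, drop_ul: float = 50.0) -> dict:
    """Convert a sensed profile into per-chamber pump volumes (microliters)."""
    return {taste: round(level * drop_ul, 2)
            for taste, level in asdict(profile).items()}

# A sensed "lemonade" profile, wirelessly relayed to the actuator:
lemonade = TasteProfile(sweet=0.6, sour=0.9, salty=0.05, bitter=0.0, umami=0.0)
print(pump_commands(lemonade))  # {'sweet': 30.0, 'sour': 45.0, ...}
```

The design choice worth noting is that the actuator mixes a handful of primary channels rather than storing one chemical per food, which is what makes the device reusable across many tastes.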
After training e-Taste to understand which chemical mixtures best approximate various foods, the team asked 10 volunteers to name the food they were tasting from a list of possibilities.
Roughly 90 percent could pick out lemonade and gauge its sourness. Most could also identify the taste of cake. But not all foods were so easily mimicked. Participants struggled to name umami-heavy dishes, such as fried eggs or fish stew.
Rather than a bug in the device, however, this is an expected result. Taste is highly subjective, and our tolerance for spice or sourness varies widely.
Then there’s the weirdness of a virtual setup. We eat and drink with our eyes open and smell food too. One participant said that tasting coffee through the device without seeing a normal coffee maker led to some confusion. Scientists have long known the color of food is essential to our perception of flavor. Smell and texture are also crucial. The smell of a good southern barbecue joint sets expectations—even before we’ve tasted anything.
The team is exploring ways to enhance the experience by adding these senses. Shrinking the device is also on the menu.
Although the team developed e-Taste to enhance gaming, people could use something like it to sample food across the globe or items when grocery shopping online. Doctors could use it to detect if people have lost their sense of taste, an early indication of multiple diseases, including viral infections and Alzheimer’s disease. And more sophisticated versions could one day augment taste in people who’ve lost it.
The post You Can Taste Cake in Virtual Reality With This New Device appeared first on SingularityHub.
2025-03-01 23:00:00
Anthropic Launches the World’s First ‘Hybrid Reasoning’ AI Model
Will Knight | Wired
“Anthropic, an artificial intelligence company founded by exiles from OpenAI, has introduced the first AI model that can produce either conventional output or a controllable amount of ‘reasoning’ needed to solve more grueling problems. Anthropic says the new hybrid model, called Claude 3.7, will make it easier for users and developers to tackle problems that require a mix of instinctive output and step-by-step cogitation.”
Figure Will Start ‘Alpha Testing’ Its Humanoid Robot in the Home in 2025
Brian Heater | TechCrunch
“Figure is planning to bring its humanoids into the home sooner than expected. CEO Brett Adcock confirmed on Thursday that the Bay Area robotics startup will begin ‘alpha testing’ its Figure 02 robot in the home setting later in 2025. The executive says the accelerated timeline is a product of the company’s ‘generalist’ Vision-Language-Action (VLA) model, called Helix.”
New AI Text Diffusion Models Break Speed Barriers by Pulling Words From Noise
Benj Edwards | Ars Technica
“Mercury Coder Mini scores 88.0 percent on HumanEval and 77.1 percent on MBPP—comparable to GPT-4o Mini—while reportedly operating at 1,109 tokens per second compared to GPT-4o Mini’s 59 tokens per second. This represents roughly a 19x speed advantage over GPT-4o Mini while maintaining similar performance on coding benchmarks.”
Amazon Uses Quantum ‘Cat States’ With Error Correction
John Timmer | Ars Technica
“The system mixes two different types of qubit hardware to improve the stability of the quantum information they hold. The idea is that one type of qubit is resistant to errors, while the second can be used for implementing an error-correction code that catches the problems that do happen.”
‘It’s a Lemon’—OpenAI’s Largest AI Model Ever Arrives to Mixed Reviews
Benj Edwards | Ars Technica
“The verdict is in: OpenAI’s newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called ‘scaling laws’ cited by many for years have possibly met their natural end.”
Google’s Taara Hopes to Usher in a New Era of Internet Powered by Light
Steven Levy | Wired
“Instead of beaming from space, Taara’s ‘light bridges’—which are about the size of a traffic light—are earthbound. As X’s ‘captain of moonshots’ Astro Teller puts it, ‘As long as these two boxes can see each other, you get 20 gigabits per second, the equivalent of a fiber-optic cable, without having to trench the fiber-optic cable.'”
Next-Gen Nuclear Startup Plans 30 Reactors to Fuel Texas Data Centers
Alex Pasternack | Fast Company
“Last Energy, a nuclear upstart backed by an Elon Musk-linked venture capital fund, says it plans to construct 30 microreactors on a site in Texas to supply electricity to data centers across the state. The initiative, which it says could provide about 600 megawatts of electricity, would be the company’s largest project to date and help it develop a commercial pipeline in the US.”
The Physicist Working to Build Science-Literate AI
John Pavlus | Quanta Magazine
“Single-purpose systems like AlphaFold can generate scientific predictions with revolutionary accuracy, but researchers still lack ‘foundation models’ designed for general scientific discovery. These models would work more like a scientifically accurate version of ChatGPT, flexibly generating simulations and predictions across multiple research areas.”
Vinod Khosla: Most AI Investments Will Lose Money as Market Enters ‘Greed’ Cycle
Sri Muppidi | The Information
“Early OpenAI investor Vinod Khosla warned that most investments in artificial intelligence will lose money, particularly as more investors jump into the market, funding more startups. But he said some companies would grow to be worth hundreds of billions—and eventually trillions—of dollars and make up for the failures.”
A Protein Borrowed From Tardigrades Could Give Us Radiation Body Armor
Ed Cara | Gizmodo
“The strangely adorable and resilient tardigrade, or water bear, just might hold the key to making cancer treatment a lot more (water-) bearable. That’s because a team of researchers just found evidence that a protein produced by these microscopic creatures could protect our healthy cells from the ravages of radiation therapy.”
The World’s Smallest Lego Brick Is Here. It’s Literally Microscopic
Grace Snelling | Fast Company
“The brick in question is a microscopic sculpture created by UK-based artist David A Lindon. It’s made from a standard red square Lego, and it looks like one, too, aside from the fact that it measures just 0.02517 millimeter by 0.02184 millimeter (about the size of a white blood cell).”
Anthropic’s Latest Flagship AI Might Not Have Been Incredibly Costly to Train
Kyle Wiggers | TechCrunch
“Assuming Claude 3.7 Sonnet indeed cost just ‘a few tens of millions of dollars’ to train, not factoring in related expenses, it’s a sign of how relatively cheap it’s becoming to release state-of-the-art models. Claude 3.5 Sonnet’s predecessor, released in fall 2024, similarly cost a few tens of millions of dollars to train, Anthropic CEO Dario Amodei revealed in a recent essay.”
How North Korea Pulled Off a $1.5 Billion Crypto Heist—the Biggest in History
Dan Goodin | Ars Technica
“‘The Bybit hack has shattered long-held assumptions about crypto security,’ Dikla Barda, Roman Ziakin, and Oded Vanunu, researchers at security firm Check Point, wrote Sunday. ‘No matter how strong your smart contract logic or multisig protections are, the human element remains the weakest link.’”
Is It Lunacy to Put a Data Center on the Moon?
Dina Genkina | IEEE Spectrum
“The idea of putting a data center on the moon raises a natural question: Why? Lonestar’s CEO Christopher Stott says it is to protect sensitive data from Earthly hazards. ‘Data centers, right? They’re like modern cathedrals. We’re building these things, they run our entire civilization. It’s superb, and yet you realize that the networks connecting them are increasingly fragile.’”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 1) appeared first on SingularityHub.