Will AI Take Your Job? It Depends on These 4 Key Advantages AI Has Over Humans

2025-06-23 22:00:00

This framework can help you understand where AI provides value.

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope, and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

Speed

First, speed. There are tasks that humans do perfectly well, just not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy, or blurry images and making a crisper, higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.
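
The task itself is easy to express in code. Here is a deliberately naive, pure-Python nearest-neighbor upscaler, shown only to illustrate the job (learned models instead synthesize plausible detail, and specialized hardware runs them orders of magnitude faster than any human editor could work):

```python
# Naive nearest-neighbor upscaling: every source pixel becomes a
# factor-by-factor block of identical pixels. Illustration of the
# task only; AI upscalers predict new detail rather than copy pixels.

def upscale(image, factor):
    """Return `image` (a list of rows of pixel values) enlarged by `factor`."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in image for _ in range(factor)]

# A 2x2 checkerboard doubled to 4x4:
tiny = [[0, 255],
        [255, 0]]
print(upscale(tiny, 2))
# [[0, 0, 255, 255], [0, 0, 255, 255], [255, 255, 0, 0], [255, 255, 0, 0]]
```

Even this trivial rule touches every output pixel, which is why per-pixel work on large video streams quickly outruns what a human, or even naive code, can do in real time.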

AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

Real-time performance matters in these cases, and the speed of AI is necessary to enable them.

Scale

The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.

AI models can do this for every single product, TV show, website, and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.
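
As an illustration of the kind of decision made thousands of times per second, here is a toy bid rule: bid up to the impression's expected value as scored by a model. The numbers, function names, and single-factor model are all invented for the example; production bidding systems are far more elaborate:

```python
# Toy real-time-bidding decision (hypothetical numbers and model).

def expected_value(click_prob: float, value_per_click: float) -> float:
    """Expected revenue from showing the ad to this user."""
    return click_prob * value_per_click

def decide_bid(click_prob: float, value_per_click: float,
               floor_price: float) -> float:
    """Bid up to the impression's expected value; skip if below the floor."""
    ev = expected_value(click_prob, value_per_click)
    return ev if ev >= floor_price else 0.0

# A user the model scores as 2% likely to click on a $1.50-per-click ad:
bid = decide_bid(click_prob=0.02, value_per_click=1.50, floor_price=0.01)
print(f"bid ${bid:.2f}")  # roughly $0.03 for this impression
```

The scale advantage is that this tiny calculation, with a learned model behind `click_prob`, runs for every ad slot, on every page, for every user simultaneously.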

Scope

Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.

It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.

Sophistication

Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. Chess-playing systems of the 1990s, such as Deep Blue, succeeded by thinking a dozen or more moves ahead.

Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks account for complex interactions—often billions of them—among many factors. Neural networks now power the best chess-playing models and most other AI systems.
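
The contrast is easiest to see against the classical approach. Deep Blue's style of play rests on depth-limited game-tree search; the sketch below applies the same minimax idea to a toy take-away game (remove one or two stones, last stone wins) rather than chess, which would need far more code:

```python
# Depth-limited minimax on a toy game: players alternately remove 1 or
# 2 stones, and whoever takes the last stone wins. Classical chess
# engines applied this same search idea many plies deep over a vastly
# larger game tree.

def minimax(stones: int, maximizing: bool, depth: int) -> int:
    """Return +1 if the maximizing player can force a win, -1 if not."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # search horizon reached: score as unknown/draw
    scores = [minimax(stones - take, not maximizing, depth - 1)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones: int, depth: int = 12) -> int:
    """Choose how many stones to take by looking `depth` moves ahead."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, False, depth - 1))

print(best_move(5))  # 2 -- leaving 3 stones forces a win
```

Neural-network engines replace this explicit rule-by-rule lookahead with a learned evaluation whose internal reasoning is far harder to inspect.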

Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold 2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in chemistry in 2024, is another example.

This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.

But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.

Context Matters

Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly—yet accuracy isn’t the differentiator. The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.

Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.

For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics, and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.

It is in this “phase shift,” when changes in degree transform into changes in kind, that AI’s impacts on society are likely to be most keenly felt. All of this points to the places where AI can have a positive impact. When a system has a bottleneck in speed, scale, scope, or sophistication, or when one of these factors poses a real barrier to accomplishing a goal, it makes sense to think about how AI could help.

Equally, when speed, scale, scope, and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.

Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.

Where the Advantage Lies

Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope, and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Will AI Take Your Job? It Depends on These 4 Key Advantages AI Has Over Humans appeared first on SingularityHub.

This Week’s Awesome Tech Stories From Around the Web (Through June 21)

2025-06-21 22:00:00

Artificial Intelligence

This AI Model Never Stops Learning | Will Knight | Wired

“The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.”

Tech

SoftBank Proposes $1 Trillion Facility for AI and Robotics | Rocket Drew | The Information

“The project aims to replicate the thriving tech hub of Shenzhen, China, possibly by manufacturing AI-powered industrial robots. To this end, SoftBank has compiled a list of robotics companies in its portfolio, such as Agile Robots SE, that could set up shop in the Arizona hub, according to the report.”

Biotech

The FDA Just Approved a Long-Lasting Injection to Prevent HIV | Jorge Garay | Wired

“Clinical trials have shown that six-monthly injections of lenacapavir are almost 100 percent protective against becoming infected with HIV. But big questions remain over the drug’s affordability.”

Computing

Microsoft Lays Out Its Path to Useful Quantum Computing | John Timmer | Ars Technica

“While [Microsoft is] describing the [error-correction] scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.”

Computing

Meta’s Oakley Smart Glasses Have 3K Video—Watch Out Ray-Ban | Verity Burns | Wired

“[The glasses include] a 50 percent longer battery life, with a fully charged pair of Oakley Meta HSTN lasting up to eight hours of typical use compared with four hours on the Ray-Ban Meta. …That’s perhaps all the more surprising when you hear that the Oakley Meta also have a higher resolution camera, allowing you to share video in 3K, up from full HD in the Ray-Ban Metas.”

Artificial Intelligence

Study: Meta AI Model Can Reproduce Almost Half of Harry Potter Book | Timothy B. Lee | Ars Technica

“In its December 2023 lawsuit against OpenAI, The New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a ‘fringe behavior’ and a ‘problem that researchers at OpenAI and elsewhere work hard to address.’ But is it actually a fringe behavior? And have leading AI companies addressed it?”

Robotics

Waymo Has Set Its Robotaxi Sights on NYC | Kirsten Korosec | TechCrunch

“Of course, New York City has other challenges beyond regulations. The city is chock-a-block with cars, trucks, delivery vans, bicycles, buses, and, importantly, people, all of whom are scuttling about. San Francisco, one of the markets that Waymo operates in today, is also a bustling city with many of the same challenges. NYC takes that complexity to a factor of 10.”

Biotechnology

Scientists Discover the Key to Axolotls’ Ability to Regenerate Limbs | Anna Lagos | Wired

“‘The axolotl has cellular properties that we want to understand at the deepest level,’ says Monaghan. ‘While regeneration of a complete human limb is still in the realm of science fiction, each time we discover a piece of this genetic blueprint, such as the role of CYP26B1 and Shox, we move one step closer to understanding how to orchestrate complex tissue repair in humans.'”

Space

SpaceX’s Next Starship Just Blew Up on Its Test Stand in South Texas | Stephen Clark | Ars Technica

“SpaceX’s next Starship rocket exploded during a ground test in South Texas late Wednesday, dealing another blow to a program already struggling to overcome three consecutive failures in recent months. The late-night explosion at SpaceX’s rocket development complex in Starbase, Texas, destroyed the bullet-shaped upper stage that was slated to launch on the next Starship test flight.”

Tech

The Entire Internet Is Reverting to Beta | Matteo Wong | The Atlantic

“[Generative AI] tools can be legitimately helpful for many people when used in a measured way, with human verification; I’ve reported on scientific work that has advanced as a result of the technology, including revolutions in neuroscience and drug discovery. But these success stories bear little resemblance to the way many people and firms understand and use the technology; marketing has far outpaced innovation.”

Future

The Future of Weather Forecasting Is Hyperlocal | Thomas E. Weber | The Wall Street Journal

“NOAA’s High-Resolution Rapid Refresh system (HRRR, usually pronounced “hurr”), can zero in on an area of 1.8 miles. Contrast that with the Comprehensive Bespoke Atmospheric Model, or CBAM, developed by Tomorrow.io, a hyperlocal weather startup. Tomorrow.io says the CBAM can be run at resolutions as small as tens of meters, effectively predicting how the weather will differ from one city block to another.”

Space

Mars Trips Could Be Cut in Half With Nuclear Power | Mark Thompson | IEEE Spectrum

“Here’s how it works: Instead of burning fuel with oxygen, a nuclear reactor heats up a propellant like hydrogen. The super-heated propellant then shoots out of the rocket nozzle, pushing the spacecraft forward. This method is much more efficient than chemical rockets.”

‘Cyborg Tadpoles’ With Super Soft Neural Implants Shine Light on Early Brain Development

2025-06-21 05:57:16

Tofu-like probes capture the activity of individual neurons in tadpole embryos as they grow.

Early brain development is a biological black box. While scientists have devised multiple ways to record electrical signals in adult brains, these techniques don’t work for embryos.

A team at Harvard has now managed to peek into the box—at least when it comes to amphibians and rodents. They developed an electrical array using a flexible, tofu-like material that seamlessly embeds into the early developing brain. As the brain grows, the implant stretches and shifts, continuously recording individual neurons without harming the embryo.

“There is just no ability currently to measure neural activity during early neural development. Our technology will really enable an uncharted area,” said study author Jia Liu in a press release.

The mesh array not only records brain activity; it can also stimulate nerve regeneration in axolotl embryos with electrical zaps. The axolotl is an adorable amphibian known for its ability to regrow tissues, and research into it could inspire ideas for how we might heal damaged nerves, such as those in spinal cord injuries.

Amphibians and rodents have much smaller brains than us. Due to obvious ethical concerns, the team didn’t try the device in human embryos. But they did use it to capture single neuron activity in brain organoids. These “mini-brains” are derived from human cells and loosely mimic developing brains. Their study could help pin down genes or other molecular changes specific to neurodevelopmental disorders. “Autism, bipolar disorder, schizophrenia—these all could happen at early developmental stages,” said Liu.

Probing the Brain

Recording electrical chatter from the developing brain allows scientists to understand how neurons self-assemble into a mighty computing machine capable of learning and cognition. But capturing these short sparks of activity throughout the brain is difficult.

Current technologies mostly focus on mature brains. Functional magnetic resonance imaging, for example, is used to scan the entire brain as it computes specific tasks. This doesn’t require surgery and can help scientists stitch together brain-wide activity maps. But the approach lacks resolution and is laggy.

Molecular imaging is another way to record brain activity. Here, animals such as zebrafish are genetically engineered to grow neurons that light up under the microscope when activated. These provide real-time insight into each individual neuron’s activity. But the method only works for translucent animals.

Neural implants are the newest kid on the block. These microelectrode arrays are directly implanted into brain tissue and can capture electrical signals from large populations of neurons with millisecond precision. With the help of AI, such implants have already restored speech and movement and untangled neural networks for memory and cognition in people.

But even these are unsuitable for developing brains.

“The brain is very soft, like a piece of tofu. Traditional electronics are very rigid, when you put them into the brain, any movement of the electronics can cut the brain at the micrometer scale,” Liu told Nature. Over time, the devices cause scarring, which degrades the signals.

The problem is acute during development, as the brain dramatically changes shape and size. Rigid probes can’t continuously monitor single neurons as the brain grows and could damage the nascent organ.

Opening the Box

Picture the brain and a walnut-shaped structure etched with grooves likely comes to mind. But the organ begins life as a flat single-cell layer in the embryo.

Called the neural plate, this layer of cells lines the embryo’s surface before eventually folding into a tube-like shape. As brain cells expand and migrate, they generate tissues that eventually fold into the brain’s final 3D structure. This dimensional transition makes it impossible to monitor single neurons with rigid probes. But stretchable electronics may do the job.

In 2015, Liu and colleagues developed an ultra-flexible probe that could integrate into adult rodent brains and human brain organoids. The mesh-like implant had a stiffness similar to brain tissue and minimized scarring. The team used a class of materials called fluorinated elastomers, which are stretchy like gum but have the toughness of Teflon—and are 10,000 times softer than conventional flexible implants made of plastic-like materials. Implants made of the material captured single-neuron activity in mice for months and were relatively easy to manufacture.

Because of the probe’s stretchiness, the team wondered if it could also monitor developing embryonic brains as they folded up from 2D to 3D. They picked tadpoles as a test case because the embryos grow fast and are easy to monitor.

The first try failed. “It turns out tadpole embryos are much softer than human stem cell-derived tissue,” said Liu. “We ultimately had to change everything, including developing new electronic materials.”

The team came up with a new meshy material that can be embedded with electrodes and is less than a micrometer thick. They then fabricated a “holding” device to support tadpole embryos and gently placed the mesh onto the tadpoles’ neural plates during early brain formation.

“You need a very stable hand” for the procedure, said Liu.

The tadpoles’ developing brains treated the mesh as another layer of their own biology as they folded themselves into 3D structures, essentially stretching the device across their brains. The implant reliably captured neural activity throughout development on millisecond scales across multiple brain regions. The cyborg tadpoles grew into healthy frogs, which acted normally in behavioral tests and showed no signs of brain damage or stress.

The implant picked up different brain-activity dynamics as the tadpoles developed. Early brain cells synchronized into patterns of slow activity as the neural plate folded into a tube. But as the brain matured and developed different regions, each of these established its own unique electrical fingerprint with faster neural activity.

By observing these dynamics, scientists can potentially decipher how the brain wires itself into such a powerful computing machine and detect when things go awry.

Rebuilding Connections

The human nervous system has limited regenerative capabilities. Axolotls, not so much. A type of salamander, these cartoonish-looking creatures can rebuild nearly any part of their bodies, including their nerves. How this happens is still mysterious, but if we can discover their secret, we might use it to develop treatments for spinal cord injuries or nerve diseases.

In one test, the team implanted the recording mesh in an axolotl tadpole with a damaged tail. The critter’s brain activity spiked during regeneration. When they added carefully timed zaps from external electrodes mimicking post-injury neural patterns, the regeneration sped up, suggesting brain activity could play a role in tissue regeneration (at least in some species).

“We found that the brain activity goes back to its early [embryo] development stage, so this is maybe a unique reason why this creature has this regeneration ability,” said Liu. 

The team is giving the technology to other researchers to further probe life’s beginnings, especially in mammals such as rodents. “Preliminary tests confirmed that the devices’ mechanical properties are compatible with mouse embryos and neonatal rats,” they wrote.

Liu is clear the method isn’t ready for implantation in human embryos. Using it in frogs, axolotls, and human brain organoids is already yielding insights into brain development. But ultimately, his team hopes to help people with neurodevelopmental conditions.

“We have this foundation of stretchable electronics that could be directly translated to the neonatal or developing brain,” said Liu.

Honda Surprises Space Industry by Launching and Landing a New Reusable Rocket

2025-06-20 06:33:16

Honda’s been quietly working on a side hustle.

The private space race has been dominated by SpaceX for years. But Japanese carmaker Honda may be about to throw its hat in the ring after demonstrating a reusable rocket.

Space rockets might seem like a strange side hustle for a company better known for building motorcycles, fuel-efficient cars, and humanoid robots. But the company’s launch vehicle program has been ticking away quietly in the background for a number of years.

In 2021, company officials announced that they had been working on a small-satellite rocket for two years and had already developed an engine. But the company has been relatively tight-lipped about the project since then.

Now, it’s taken the aerospace community by surprise after successfully launching a prototype reusable rocket to an altitude of nearly 900 feet and then landing it again just 15 inches from its designated target.

“We are pleased that Honda has made another step forward in our research on reusable rockets with this successful completion of a launch and landing test,” Honda’s global CEO Toshihiro Mibe said in a statement. “We believe that rocket research is a meaningful endeavor that leverages Honda’s technological strengths. Honda will continue to take on new challenges.”

The test vehicle is modest compared to commercial launch vehicles, standing just 21 feet tall and weighing only 1.4 tons fully fueled. It features four retractable legs and aerodynamic fins near the nose of the rocket, similar to those on SpaceX’s Falcon 9, which are presumably responsible for steering and stabilizing the rocket on its descent.

Honda said the development of the rocket was built on core technologies the company has developed in combustion, control systems, and self-driving vehicles. While it didn’t reveal details about the engine, Stephen Clark of Ars Technica writes that the video suggests the rocket burns liquid cryogenic fuels—potentially methane and liquid oxygen.

Honda says the goal of the test flight, which took place on Tuesday in Taiki, Hokkaido, and lasted just under a minute, was to demonstrate the key technologies required for a reusable rocket, including flight stabilization during ascent and descent and the ability to land smoothly.

In a video of the launch shared by Honda, the rocket lifts off, retracts its four legs, and then rises smoothly to 890 feet. It then hovers briefly and extends its fins before returning to the launch platform, deploying its legs just before touchdown.

With this successful test flight, Honda joins an elite club of companies that have managed to land a reusable rocket, including SpaceX, Blue Origin, and a handful of Chinese startups. It’s also beaten Japan’s space agency (JAXA) to the milestone. The agency is developing a reusable rocket called Callisto alongside the French and German space agencies, but it has yet to conduct a test flight.

The company is currently targeting a suborbital launch—where the spacecraft reaches space but doesn’t enter Earth orbit—by 2029. But Honda says it has yet to decide if it will commercialize the technology.

Nonetheless, the company noted the technology could have synergies with its existing business by making it possible to launch satellite constellations that could help support the “connected car” features of its vehicles. And it is already developing other space technologies including renewable-energy systems and robots designed to work in space.

Whatever its decision, this launch shows the barriers to space are falling rapidly as a growing number of companies develop the capabilities necessary to push into Earth orbit and beyond.

Is a Quantum-Cryptography Apocalypse Imminent?

2025-06-17 22:00:00

New estimates suggest it might be 20 times easier to crack cryptography with quantum computers than we thought—but don’t panic.

Will quantum computers crack cryptographic codes and cause a global security disaster? You might certainly get that impression from a lot of news coverage, the latest of which reports new estimates that it might be 20 times easier to crack such codes than previously thought.

Cryptography underpins the security of almost everything in cyberspace, from WiFi to banking to digital currencies such as bitcoin. Whereas it was previously estimated that it would take a quantum computer with 20 million qubits (quantum bits) eight hours to crack the popular RSA algorithm (named after its inventors, Rivest–Shamir–Adleman), the new estimate reckons this could be done with 1 million qubits.
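
To see what is at stake, here is a toy RSA key pair built from deliberately tiny primes (real keys use primes hundreds of digits long). The scheme's security rests entirely on the difficulty of factoring the public modulus, which is exactly the problem Shor's algorithm, run on a sufficiently large quantum computer, would make tractable:

```python
# Toy RSA (illustration only; real moduli are thousands of bits).

p, q = 61, 53              # secret primes
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient, computable only via p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: e * d == 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(recovered == message)        # True

# An attacker who factors n recovers p and q, hence phi, hence d:
d_attacker = pow(e, -1, (61 - 1) * (53 - 1))
print(d_attacker == d)             # True
```

With 61 and 53 any laptop factors n instantly; with real key sizes, no known classical computer can, and that gap is what a cryptographically relevant quantum computer would close.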

By weakening cryptography, quantum computing would present a serious threat to our everyday cybersecurity. So is a quantum-cryptography apocalypse imminent?

Quantum computers exist today but are highly limited in their capabilities. There is no single blueprint for a quantum computer; several different design approaches are being pursued.

There are major technological barriers to be overcome before any of those approaches become useful, but a great deal of money is being spent, so we can expect significant technological improvements in the coming years.

For the most commonly deployed cryptographic tools, quantum computing will have little impact. Symmetric cryptography, which encrypts the bulk of our data today (and does not include the RSA algorithm), can easily be strengthened to protect against quantum computers.

Quantum computing might have a more significant impact on public-key cryptography, which is used to set up secure connections online—for example, to support online shopping or secure messaging. This has traditionally relied on the RSA algorithm, though an alternative called elliptic curve Diffie-Hellman is growing in popularity.

Public-key cryptography is also used to create digital signatures, such as those used in bitcoin transactions, which rely on yet another algorithm: the elliptic curve digital signature algorithm (ECDSA).

If a sufficiently powerful and reliable quantum computer ever exists, processes that are currently only theoretical might become capable of breaking those public-key cryptographic tools. RSA algorithms are potentially more vulnerable because of the type of mathematics they use, though the alternatives could be vulnerable too.

Such theoretical processes themselves will inevitably improve over time, as the paper about RSA algorithms is the latest to demonstrate.

What We Don’t Know

What remains extremely uncertain is both the destination and timelines of quantum computing development. We don’t really know what quantum computers will ever be capable of doing in practice.

Expert opinion is highly divided on when we can expect serious quantum computing to emerge. A minority seem to believe a breakthrough is imminent. But an equally significant minority think it will never happen. Most experts believe it a future possibility, but prognoses range from between 10 and 20 years to well beyond that.

And will such quantum computers be cryptographically relevant? Essentially, nobody knows. Like most of the concerns about quantum computers in this area, the RSA paper is about an attack that may or may not work and requires a machine that might never be built (the most powerful quantum computers currently have just over 1,000 qubits, and they’re still very error-prone).

From a cryptographic perspective, however, such quantum computing uncertainty is arguably immaterial. Security involves worst-case thinking and future-proofing. So it is wisest to assume that a cryptographically relevant quantum computer might one day exist. Even if one is 20 years away, this is relevant because some data that we encrypt today might still require protection 20 years from now.

Experience also shows that in complex systems such as financial networks, upgrading cryptography can take a long time to complete. We therefore need to act now.

What We Should Do

The good news is that most of the hard thinking has already been done. In 2016, the US National Institute of Standards and Technology (NIST) launched an international competition to design new post-quantum cryptographic tools that are believed to be secure against quantum computers.

In 2024, NIST published an initial set of standards that included a post-quantum key exchange mechanism and several post-quantum digital signature schemes. To become secure against a future quantum computer, digital systems need to replace current public-key cryptography with new post-quantum mechanisms. They also need to ensure that existing symmetric cryptography is supported by sufficiently long symmetric keys (many existing systems already are).
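
The arithmetic behind that advice is simple. Grover's algorithm gives quantum computers a quadratic speedup on brute-force key search, which roughly halves a symmetric key's effective bit strength, so doubling the key length restores the original margin. A minimal sketch (the halving is a rough rule of thumb, not an exact account of attack costs):

```python
# Rough security arithmetic behind "use longer symmetric keys."
# Grover's algorithm searches N keys in about sqrt(N) steps,
# i.e., it roughly halves the effective bit strength.

def effective_bits(key_bits: int, quantum: bool) -> int:
    """Approximate brute-force work factor, in bits, against a key."""
    return key_bits // 2 if quantum else key_bits

for key_bits in (128, 256):
    print(key_bits, "->",
          effective_bits(key_bits, quantum=False), "classical bits /",
          effective_bits(key_bits, quantum=True), "quantum bits")
```

A 128-bit key drops to roughly 64-bit quantum security, while a 256-bit key retains about 128 bits, which is why simply lengthening symmetric keys is considered a sufficient defense.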

Yet my core message is don’t panic. Now is the time to evaluate the risks and decide on future courses of action. The UK’s National Cyber Security Centre (NCSC) has suggested one such timeline, primarily for large organizations and those supporting critical infrastructure such as industrial control systems.

This envisages a 2028 deadline for completing a cryptographic inventory and establishing a post-quantum migration plan, with upgrade processes to be completed by 2035. This decade-long timeline suggests that NCSC experts don’t see a quantum-cryptography apocalypse coming anytime soon.

For the rest of us, we simply wait. In due course, if deemed necessary, the likes of our web browsers, WiFi, mobile phones, and messaging apps will gradually become post-quantum secure, either through security upgrades (never forget to install them) or the steady replacement of technology.

We will undoubtedly read more stories about breakthroughs in quantum computing and upcoming cryptography apocalypses as big technology companies compete for the headlines. Cryptographically relevant quantum computing might well arrive one day, most likely far into the future. If and when it does, we’ll surely be ready.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Is a Quantum-Cryptography Apocalypse Imminent? appeared first on SingularityHub.

Scientists Can Now Design Intricate Networks of Blood Vessels for 3D-Printed Organs

2025-06-16 22:00:00

It’s a crucial step toward the dream of printable organs on demand.

Bioprinting holds the promise of engineering organs on demand. Now, researchers have solved one of the major bottlenecks—how to create the fine networks of blood vessels needed to keep organs alive.

Thanks to rapid advances in additive manufacturing and tissue engineering, it’s now possible to build biological structures out of living cells in much the same way you might 3D print a model plane. And there are hopes this approach could one day be used to print new organs for the more than 100,000 people in the US currently waiting for a donor.

However, reproducing the complex networks of ultra-fine blood vessels that keep living tissues alive has proven challenging. This has restricted bioprinting to smaller structures where essential nutrients and oxygen can simply diffuse into the tissue from the surrounding environment.

Now though, researchers from Stanford University have developed new software to rapidly design a blood-vessel, or vascular, network for a wide range of tissues. And in a paper in Science, they show that bioprinted tissues containing these networks significantly boosted cell survival.

“Our ability to produce human-scale biomanufactured organs is limited by inadequate vascularization,” write the authors. “This platform enables the rapid, scalable vascular model generation and fluid physics analysis for biomanufactured tissues that are necessary for future scale-up and production.”

To date, tissue engineers have mostly used simple lattice-shaped vascular networks to support the living structures they design. These work for tissues with a low density of cells but can’t meet the demands of denser structures that more closely mimic real tissues and organs.

Existing computational approaches can generate more realistic vascular networks. But they are extremely computationally expensive—often taking days to produce models for more complex tissues—and limited in the types of tissues they work with, says the Stanford team.

In contrast, their new approach generates organ-scale vascular network models for more than 200 engineered and natural tissue structures. Crucially, it was more than 230 times faster than the best previous methods. They did this by combining four algorithms, each responsible for solving a different problem.

Typically, the algorithms used to create these kinds of structures recalculate key parameters across the entire network when each new section is added. Instead, the Stanford team used an algorithm that freezes and saves values for all the unchanged branches at each step, significantly reducing the computational workload.

They then added an algorithm that breaks the 3D structure into smaller, easier-to-model chunks, making it simpler to handle awkward shapes. They combined this with a collision-avoidance algorithm that prevents branching vessels from crossing paths, and a final algorithm that keeps every vessel connected to the rest of the network so the system forms a closed loop.
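The key speed-up described above, freezing and caching values for unchanged branches so that adding a vessel only updates the path back to the root, is essentially incremental computation over a tree. The toy Python sketch below (an illustration of the general idea, not the Stanford team's actual code; all names here are hypothetical) caches a per-branch subtree total and updates only the new branch's ancestors:

```python
# Toy sketch of incremental tree growth with cached ("frozen") values.
# Adding a branch updates only its ancestor path, not the whole network.

class Branch:
    def __init__(self, length, parent=None):
        self.length = length
        self.parent = parent
        self.children = []
        self.subtree_length = length  # cached value for this subtree

def add_branch(parent, length):
    child = Branch(length, parent)
    parent.children.append(child)
    # Walk up the ancestor path only; sibling subtrees stay frozen.
    node = parent
    while node is not None:
        node.subtree_length += length
        node = node.parent
    return child

root = Branch(10.0)
a = add_branch(root, 5.0)
b = add_branch(a, 2.0)
print(root.subtree_length)  # 17.0: updated without revisiting every branch
```

In a real vascular model the cached quantities would be hydraulic properties such as flow resistance rather than lengths, but the savings come from the same principle: each insertion touches O(depth) nodes instead of the entire network.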

The researchers used this approach to create efficient vascular networks for more than 200 models of real tissue structures. They also 3D printed some of the simpler networks to test their physical properties, and bioprinted one of them, showing it dramatically improved the viability of living cells over a seven-day experiment.

“Democratizing virtual representation of vasculature networks could potentially transform biofabrication by allowing evaluation of perfusion efficiency prior to production rather than through a resource-intensive trial-and-error method,” wrote the authors of an accompanying perspective article in Science about the new approach.

But they also noted it’s a big leap from simulation to real life, and it will probably require a combination of computational approaches and experiments to create biologically feasible vascular trees. Still, the approach is a significant advance toward the dream of printable organs on demand.

The post Scientists Can Now Design Intricate Networks of Blood Vessels for 3D-Printed Organs appeared first on SingularityHub.