TMT | 钛媒体 (TMTPost)

Founded in 2012, TMTPost is dedicated to building a world-leading financial and technology information service platform.

RSS preview of TMT | 钛媒体

[M&A Frontline] A lithium giant plans to acquire a 30% stake in 启成矿业 (Qicheng Mining) for RMB 2.08 billion, expanding its lithium holdings in western Sichuan; a salt-lake lithium extraction leader plans to acquire a 51% stake in 五矿盐湖 (Minmetals Salt Lake) for RMB 4.605 billion

2025-12-30 22:32:21

Three listed companies announced M&A and restructuring plans:

1. Continued consolidation within the "Minmetals system": a salt-lake lithium extraction leader plans to acquire a 51% stake in 五矿盐湖 (Minmetals Salt Lake) for RMB 4.605 billion

2. A lithium giant plans to acquire a 30% stake in 启成矿业 (Qicheng Mining) for RMB 2.08 billion, expanding its lithium holdings in western Sichuan

3. A global specialty-steel leader plans to acquire 100% of 富景特 for RMB 1.5 billion

See the full article for more M&A news and value analysis.


Exclusive: "I Believe in Humanity, Not AI," Fei-Fei Li Tells NextFin Founder

2025-12-30 21:53:00

Fei-Fei Li, known as the "Godmother of AI," had an in-depth, wide-ranging podcast conversation with Jany Hejuan Zhao, the founder and CEO of NextFin.AI, chair of TMTPost, and publisher of Barron's China.

NextFin -- As 2025 draws to a close, Fei-Fei Li, the Stanford University professor known as the “Godmother of AI,” has been ushering in wave after wave of new developments with World Labs, the frontier AI company she founded in 2023. These include the release of Marble, the first commercial “world model,” which has finally made people realize that “world models” are not merely a conceptual idea, but something real and practically useful. 

Looking back, my first meeting with the visionary AI pioneer dates back to 2017, inside an academic building at Stanford. That year, Tianqiao Chen, the founder of Shanda Group and a renowned tech philanthropist, who had just settled in Silicon Valley, introduced her to me and several other longtime friends, noting, “She is one of the most outstanding scientists in the United States.” At the time, the ImageNet initiative launched by Professor Li was still in full swing. It was also during that first meeting and conversation with her that I learned a new idea: why the size of a dataset determines the level of intelligence. This was the original intention of ImageNet—building the largest possible data pool to advance artificial intelligence (AI). Although the scale of data processed in the AI world today has grown by trillions of times, at that time ImageNet created the largest dataset ever. More importantly, the ImageNet project she led proved—amid widespread skepticism—to both academia and industry that “data,” just like algorithms, is a cornerstone of artificial intelligence development. 

Over the following eight years, we witnessed how ImageNet became a milestone in the history of generative AI. The AI pioneer’s efforts in the world of artificial intelligence have never slowed. From leading the ImageNet initiative—driving a major leap in datasets and the transition from AI 1.0 to AI 2.0—to taking on a new mission today: leading the development of “world models” to break through the limitations of large language models to generate the 3D world, she once again finds herself at the crossroads of a data bottleneck in world models. 

Driven by curiosity about her new entrepreneurial venture, I had an in-depth video podcast conversation with Professor Li, who served as Vice President at Google and Chief Scientist of AI/ML at Google Cloud during her sabbatical from January 2017 to September 2018. In the nearly two-hour discussion, which felt more like a relaxed chat, we covered a wide range of topics—from studying abroad as a teenager to choosing a scientific path; from becoming a member of the three most prestigious U.S. academies in arts and sciences, engineering, and medicine to starting a tech firm in Silicon Valley; from the different challenges AI has faced at various stages of its development to the possible solutions at each stage. Along the way, she has also endured rumors and doubts. This time, in response to my question, she did not shy away from public speculation about her family background, allowing me to see the story of a girl from an ordinary Chinese family who crossed the ocean and grew with resilience in an unfamiliar society and academic world.

In front of NextFin, the world's first AI agent platform for financial news and data analysis, which I founded as a serial entrepreneur, Li wove together the technological evolution of world models and spatial intelligence with her personal values, methodology, and entrepreneurial judgment into a coherent and clear narrative: the world is more than just language, and the next step for AI is enabling machines to “see, generate, and interact” within a continuous three-dimensional world; and before all grand promises, AI is, and will remain, a tool—the steering wheel must always be in human hands. “This is the agency humanity must never give up, and the belief humanity must never abandon ... AI is just a tool. I believe in humanity, not AI,” she said. The offhand remarks by the woman who revolutionized AI stirred a deep and lasting resonance within me.

This podcast conversation coincided with World Labs’ recent launch of Marble, its latest commercial spatial intelligence model. From a single image or text prompt, Marble can generate “a persistent, freely navigable, and geometrically consistent” 3D world, which can be exported in formats such as Gaussian Splat for exploration and further creation on the web and VR devices. It marks a tangible step from “content generation” to “world generation.” Media coverage has highlighted Marble’s “larger, clearer, and more consistent” worlds, as well as its usable engineering pipeline for creators and developers, including export, web and VR rendering, and interaction.

At the same time, world models are becoming a new battleground for the industry. Google DeepMind has successively launched Genie 3 and Gemini Robotics 1.5, emphasizing a model direction focused on “generating interactive environments with spatial understanding and planning capabilities.” Earlier this year, it also formed a dedicated world-modeling team focused on applications in gaming, film, and robotics.

Progress in the field has outpaced expectations from a year ago. In an episode of NextFin’s podcast Jany Talk, Professor Li predicted that the transition from “language generation” to “world generation” would bring an application-level explosion in spatial intelligence within the next two years. Since securing significant funding in 2024, World Labs has consistently advanced with the vision of Large World Models (LWM), pushing the boundaries of what AI can achieve.

Professor Li admitted the overwhelming pressure—fearing that her models might not be good enough, that she might let down the young coworkers who follow her, and that she might let down investors. But as she put it, “If you ever stop feeling uncertain, it means you’ve stopped being challenged, and that means what you’re doing may not matter as much.” She spoke calmly of setbacks: “If you fail, you fail—it’s not a big deal,” and emphasized the need for patience: “People always expect things to happen quickly, but they rarely do.” Amid the noisy restlessness of the AI world, her words felt like a steadying anchor.

On a personal note, I have embarked on a new entrepreneurial journey in Silicon Valley, launching NextFin.AI powered by native AI technology. Professor Li encouraged me, saying, “Your efforts to explore new AI product forms in media are absolutely in the right direction—AI should better serve humanity.” Her persistence amid global skepticism has also been a source of strength for me.  

The following is the transcript of a video podcast conversation between Fei-Fei Li, the founder of World Labs and a professor of computer science at Stanford University, and Jany Hejuan Zhao, a serial entrepreneur, the founder and CEO of NextFin.AI (the world’s first AI agent platform for financial news and data analysis), founder and CEO of TMTPost, and publisher of Barron’s China. It has been edited for brevity and clarity.

Staying Curious and Facing Fear

Jany Hejuan Zhao: Your book, The Worlds I See, left a very deep impression on me. The first time I read it, I cried several times. It was very touching. I even had my daughter read it too; she’s studying abroad, so she could really relate to it. My first question in Jany Talk is: do you have any advice or perspectives for teenagers on how to observe the world? This would be very helpful, not just for international students, but also for the current generation of teenagers in China.

Fei-Fei Li: Thank you for liking my book, and thank you for having your daughter read it too. To be honest, I think today's teenagers are really incredible. Whether it’s from my students or colleagues who are young entrepreneurs, I often feel that I’m learning more from them than they are learning from me. So, I’m a bit reluctant to say I have anything to teach teenagers, but I can share some thoughts. I think the first word in the subtitle of my book is key—curiosity. It really is the starting point for everything. Especially as a child, curiosity is so pure and the world is still simple, and we approach it with a lot of curiosity.

When I wrote this book, my biggest feeling was that it was a sort of sorting out of my own scientific journey. I feel very lucky because, whether due to my family or my educational path, my curiosity has been nurtured. Looking back, many people protected my curiosity, which I consider a blessing. And I hope to share that blessing and insight with young people. Life often starts with curiosity. Don’t lose that curiosity, because it can really light a fire in your heart. Whether it’s curiosity about the world or pursuing your dreams, this fire can accompany you for a long time and lead you to do many things.

Jany Hejuan Zhao: So, how can we maintain this curiosity? Actually, I found some of the perspectives you shared in the book about how you observe the world and the people around you really interesting.

Fei-Fei Li: I think there are many different factors. A child can’t deliberately maintain or even discover curiosity because I think curiosity is innate. But as I mentioned, I feel very lucky because I might have a strong curiosity built into my genes. Looking back at the path I’ve walked, there have been many people who nurtured my curiosity. When there are so many like-minded people around me who also maintain their curiosity, it becomes easier for me to keep it. So, there are indeed many factors.

Many people say that you can’t copy someone else’s path to success. To some extent, that’s true because everyone is unique. But I also believe some things are universal—children’s curiosity is universal. Many parents and teachers are willing to nurture children’s curiosity, and that’s also common.

Jany Hejuan Zhao: You’re very humble. But perhaps behind this luck, as you said, it’s not just about you—it’s about the people around you, including your parents and teachers who protected your curiosity. This is also quite inspiring for us adults. For us as adults, how can we protect a child’s curiosity? What can we do to avoid destroying it?

Fei-Fei Li: That’s a great question, Hejuan, because you’re a mother too, and so am I. I’ve also been a teacher for many years. So what is the essence of curiosity? Curiosity is essentially a source of joy. It’s not utilitarian—it’s not about getting more knowledge or better grades, or achieving more, just because you’re curious. That would be a superficial form of curiosity, one driven by more utilitarian purposes.

True curiosity is joyful. In science, in research, even if it’s a small discovery or something insignificant, when it satisfies your true curiosity, you’re happy. And I think as parents and teachers, we need to empathize with that joy. A child’s curiosity comes from their genuine joy.

If you can’t empathize with that joy, it’s difficult to appreciate the curiosity. So I think, from the heart, to nurture that joy is also a kind of joy for yourself. When you see a child happy because of their curiosity—whether it’s because their curiosity is satisfied or because it motivates them—you can feel that joy too.

I think adults often can’t feel that joy because they’re wearing too many lenses or filters: the lens of life, of utilitarianism, of pressure, of their own perceptions. These filters make it hard for adults to empathize with that joy, to feel that curiosity. So, unconsciously, we often fail to protect children’s curiosity.

Also, to be honest, as we grow older, there are many things that may seem not as good as when we were younger. But the joy of curiosity never changes. So, I think adults should also have the ability to experience joy. They should appreciate their own curiosity because it’s a source of happiness. Personally, I enjoy being with young people, learning from them, seeing all the things I don’t know, or sometimes learning something new. That’s a joy—it’s instinctual.

Jany Hejuan Zhao: That’s really important. Sometimes, adults lose the ability to feel joy, and we end up passing that pressure onto children. So, maybe we need to learn to feel joy ourselves first, and then teach them.

Fei-Fei Li: Yes, adults need to appreciate their own curiosity first.

Jany Hejuan Zhao: So, what brings you joy now?

Fei-Fei Li:  I think creating really brings me joy—whether it’s creating technology, creating a team to solve tough problems, or coming up with new ideas, or learning new ideas. All of these things make me incredibly happy. That’s why I’ve enjoyed working on the front lines of scientific research for many years, working with students, and even starting businesses with young people. All of this brings me immense joy.

Jany Hejuan Zhao: That’s wonderful. But we also know that along with joy, there’s often confusion. So, how do you deal with confusion or self-doubt? For example, when you transitioned from physics to AI, you must have had a long period of “hesitation,” deciding between practical applications or pure science. How did you get through that period?

Fei-Fei Li: Honestly, I’ve always felt profound apprehension. Because when you’re in the process of exploration, whether it’s in scientific research or in life, you’re always in a state of uncertainty. If you’re not apprehensive, then you’re too comfortable, and that means you’re not challenging yourself. I’m someone who likes to challenge myself, so I feel like I’ve been in a state of profound apprehension for my whole life. And since I’m always in fear, I’ve learned to accept it and deal with it. It’s just part of the process—you can’t completely get rid of fear.

I feel profound apprehension every day. But beyond apprehension, there are other things as well. First, excessive apprehension is not useful. You have to take things one step at a time. For example, if you’ve gone through a tough immigration experience, you’ll realize that with so many unknowns, all you can do is focus on today and finish it well. So, when I was chatting with a group of young entrepreneurs at Y Combinator in Silicon Valley recently, I told them about the concept of Gradient Descent in machine learning. It’s a way to deal with apprehension.
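The Gradient Descent idea she mentions can be sketched in a few lines: rather than solving the whole problem at once, you repeatedly take a small step in the direction that locally reduces the loss. The quadratic loss, starting point, and learning rate below are illustrative assumptions, not anything from the conversation:

```python
# Minimal gradient descent on an illustrative one-dimensional loss.
# f(x) = (x - 3)^2 has its minimum at x = 3; nothing here is from a real model.

def grad(x):
    # Derivative of f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

x = 0.0       # starting point: far from the answer, like facing an unknown problem
lr = 0.1      # learning rate: the size of each small, manageable step
for _ in range(100):
    x -= lr * grad(x)  # one step at a time toward lower loss

print(round(x, 4))  # converges to the minimum at x = 3
```

Each update only needs local information (the slope where you stand now), which is what makes it a fitting metaphor for handling uncertainty one day at a time.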

Another thing is to have faith. You need courage, belief, and confidence, even knowing that after working for a long time you might still lose. But that’s okay. If you fail, you fail, but at least you tried. Failure isn’t something to fear.

Jany Hejuan Zhao: So, what are you most fearful of right now?

Fei-Fei Li: I fear many things. Right now, I’m starting a company, and I’m worried our model isn’t right, or our product hasn’t found its position. But the biggest fear for me is that I have this amazing group of young people working with me, and I can’t let them down. That’s my biggest fear. Of course, I also don’t want to disappoint investors, but honestly, not letting down these young people feels more important to me than not disappointing investors.

I call my colleagues "young people" because they’re so young and talented. In my heart, these young people are my teachers, but they also trust me. So, I really care deeply—I try my best not to let them down.

Jany Hejuan Zhao: I see that some of your students are also starting their own businesses, including some robotics projects. How do you feel about your students starting businesses?

Fei-Fei Li: I support them very much, and I’m proud of them. Starting a business requires a certain belief—everyone has different beliefs. Especially as founders, you need to have more belief than others. These young people, who grew up in the era of AI, have broader perspectives than I did. So, I’m really happy for them.

Is “Spatial Intelligence” the Way to AGI, or Just AI?

Jany Hejuan Zhao: Perhaps because you come from a computer vision background, I can sense that you have a kind of persistence—almost an obsession—with computer vision. That is different from large language models.

But many people are also saying things like “language is the world” or “information is the world.” For example, the founder of Anthropic has said that the entire future world will be a datafied world. I think everyone is actually talking about the same thing: how we observe and understand the world, and how AI can ultimately represent the world. You may care more about vision, while large language models may focus more on language.

After more than a year of starting a company, do you feel that the obsession is still there? Or do you still believe that the world cannot be composed of language alone—that it is something richer, more three-dimensional, four-dimensional, or even more spatially intelligent?

Fei-Fei Li: Yes, I firmly believe that the world is not just language. But let me first explain my belief, because technically there is indeed a shared underlying concept, which is why I can understand why some people say “language is the world.”

At a high level, I firmly believe the world is not only language. If by language we mean this discrete, tokenized information—and relatively speaking, it is one-dimensional. Even though what language expresses does not have to be one-dimensional, the representation of language itself is still fairly one-dimensional.

I think the world is actually much richer. As I’ve emphasized repeatedly, spatial intelligence has many properties, including physical properties, that go beyond the concept of language. And many things—whether human behavior or natural phenomena—cannot be fully described by language, nor can language accomplish everything we want to do.

From the moment we open our eyes every day, just imagine our daily human lives—from survival to work, to creation, to feeling and perception, to richer human-to-human emotions and all aspects of life—these are not things that language alone can achieve.

Of course, saying “language is the world” sounds nice, and it doesn’t sound wrong, because it’s an extremely broad statement. When a statement is that broad, it’s kind of hard for it to be wrong.

But technically speaking, digitization is inevitable. That includes vision models, spatial intelligence, and robotics models—they will all be digitized. But if digits and language become exactly the same thing, then the concept has been replaced. If you call all digital representations “language,” then fine—everything is language, and there’s nothing left for me to argue about.
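Her distinction between one-dimensional and spatial representations can be made concrete: language models consume a flat sequence of discrete token ids, while visual and spatial models consume arrays with two or three axes. The shapes below are illustrative assumptions, not details of any real model or of World Labs' systems:

```python
# Illustrative only: contrasting the shape of language data with spatial data.
# None of these sizes come from any real model.

tokens = [17, 4, 92, 8, 51]                 # language: a 1D sequence of discrete ids
grid = [[0.0 for _ in range(64)]            # an image: a 2D grid of values
        for _ in range(64)]
voxels = [[[0.0 for _ in range(32)]         # a scene: a 3D volume of occupancy values
           for _ in range(32)]
          for _ in range(32)]

def ndim(x):
    # Count nesting depth: the number of axes in the representation.
    d = 0
    while isinstance(x, list):
        d += 1
        x = x[0]
    return d

print(ndim(tokens), ndim(grid), ndim(voxels))  # 1 2 3
```

In this sense "everything is digitized" is true of all three, but the number of axes, and the continuous geometry they encode, is what separates a token stream from a spatial representation.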

Jany Hejuan Zhao: Right, that’s a bit like Wittgenstein, who said “information is the world.” If we use the idea of “information is the world,” then perhaps everyone is actually understanding things under the same conceptual framework.

Fei-Fei Li: But in my view, information is not only language. It also includes spatial information. Spatial information is, I think, just as beautiful and just as significant as language-based information.

Jany Hejuan Zhao: But we’re also encountering the reality that spatial intelligence—or world models—haven’t progressed as fast as people imagined. This is also the direction you’re currently pursuing through your startup. So how long do you think it will take before people can really perceive tangible changes in this area, whether in entrepreneurship or exploration?

Fei-Fei Li: Honestly, it’s hard to say what counts as fast or slow. From the time we started the company to now, it’s only been a bit over a year. We’ve seen progress from video models to real-time video models, to multimodal models, and to our own 3D models. Even though we haven’t scaled them massively yet, that pace of change is actually quite fast.

But the broader AI environment has created extremely aggressive expectations for AI.

Jany Hejuan Zhao: It always feels like it’s still not fast enough.

Fei-Fei Li: Exactly. Whether something is fast or slow is subjective. But I can tell you why I chose to start a company: I felt the timing was right.

Entrepreneurship is different from academic research—it must align with the market and deeply respect it. Many entrepreneurs who are better than me say that timing is the most important thing. You can’t be too early, when the market and technology aren’t ready; and you can’t be too late, when there’s no room left for you.

When World Labs was founded, spatial intelligence was still a bit early—but not so early that it would take another 5 to 10 years. I believe that in the next one to two years, it will experience explosive growth.

Just look at the dramatic progress in video generation, and then at world models. I firmly believe we’ll see major advances within one or two years, and I can already see the potential for market applications. So I don’t know whether that’s fast or slow—I just think this is a very good time to work on spatial intelligence.

Jany Hejuan Zhao: You’re saying there could be explosive growth within one or two years? That’s already very fast—much faster than I imagined. I originally thought it would take at least five years.

Fei-Fei Li: I hope so. I find the models we’re building now very exciting.

Jany Hejuan Zhao: Then let’s talk about the current progress of World Labs’ models.

Fei-Fei Li: We’re working on world generation—generating worlds. And we see many applications, from digital creatives to game development, film, design, architecture, VR, XR, AR, and robotic simulation.

Each of these markets can be subdivided into many more niches, all of which have strong demands for 3D space. And generative AI has a special characteristic: by lowering the difficulty of things that were previously very hard to do, it opens up many markets you couldn’t have imagined before.

Generating 3D spaces is extremely difficult. How many people in the world truly have the ability to do that? The tools they use are very cumbersome. I’ve tried Blender and Unity myself—it was overwhelming.

But creators often have great ideas in their minds; they’re limited by tools, not by imagination. AI can empower them. It can empower existing creators, and it can also empower people who never realized they could do this before—because it used to be too hard.

People like me never used Blender or Unity before—I found them too annoying and didn’t have the time. But once AI gives me that capability, of course I’ll use it, because it brings new inspiration and new possibilities.

That’s why I think 3D world models are so exciting. They tackle something that’s very hard for ordinary humans to do. When AI lowers the barrier to that capability, it creates an incredible opportunity to open up the market.

Jany Hejuan Zhao: If you manage to conquer this fortress, does that mean the final bottleneck of artificial general intelligence is broken, and AGI is achieved?

Fei-Fei Li: I think without spatial intelligence—or without generative 3D world models—it doesn’t count as AGI. But AGI is like a door with many locks, each requiring a different key. I do believe spatial intelligence is one of those keys.

That said, the metaphor isn’t perfect, because the door isn’t simply open or closed—it opens gradually.

So I’ve always said that I don’t really know what “AGI versus AI” even means. Because AI and AGI seem to share the same dream. At its core, this is a scientific curiosity: can machines think, can they do things? That was the original dream of AI, and the dream of AGI doesn’t seem all that different. So I don’t really see a clear distinction between AI and AGI.

Regardless of whether we call it AI or AGI, this dream is realized step by step. With every step we take, we move a little closer to that dream.

Spatial intelligence is definitely part of that journey. Whether it’s empowering human creativity, applications from games and design to industry, robotics, or imagined worlds like the metaverse, AR, VR—spatial intelligence is essential.

Jany Hejuan Zhao: One example you gave left a deep impression on me—the story of trilobites and vision. Trilobites took hundreds of millions of years to evolve a complex visual system. Now we’re trying to give AI a similarly complex visual system—not just to perceive a simulated world, but to generate worlds. The difficulty is obvious: how can a few years compete with hundreds of millions of years?

Fei-Fei Li: I can’t even imagine it. But at the same time, you can’t think about it that way. Because I think engineering and mathematics follow paths that are very different from the path of biological evolution. So this is really a comparison between apples and oranges. To put it this way, evolutionary iteration is extremely slow—much, much slower than the iteration of algorithms. And carbon-based systems and silicon-based systems operate very differently in terms of computation. So from a time-scale perspective, I don’t think they are really comparable. Still, evolution gives us a lot of insight and inspiration.

For example—and this brings us back to data—why is data so important? Why did our lab originally emphasize the concept of data? A lot of that inspiration came from evolution. Because the long course of evolution is actually a course of big-data training, right? The difference is that today, in the digital age, we don’t need to wait billions of years to collect data. We can collect data at massive scale.

In the end, it all comes back to the same underlying idea—the concept is similar, but the way it is carried out is completely different. It is fundamentally different from how evolution and nature operate.

Jany Hejuan Zhao: It’s not a time-based evolutionary process. It might even be exponential rather than linear.

Fei-Fei Li: Because there’s so much data. In one pass, you might process as much data as evolution saw over tens of millions of years. So you really can’t make a direct comparison.

A Bias: Data Versus Algorithm

Jany Hejuan Zhao: Speaking of data, we can go back to when you first launched ImageNet. ImageNet was essentially about data. But it used a more community-driven approach and much larger-scale data to push AI forward by a big step.

Fei-Fei Li: Looking back now, it seems very small. But at the time, it really was the largest.

Jany Hejuan Zhao: But when you were doing it back then, you faced many challenges. A lot of people questioned it. First, they questioned ImageNet itself. Second, they questioned the underlying principle behind ImageNet—the idea that the more data and computing power you can train on, the higher the level of intelligence you can achieve.

At that time, hardly anyone believed in this principle. Looking back now, why were you able to firmly believe that this was something worth sticking to?

Fei-Fei Li: I don’t think believing in your own hypothesis is that strange. On the scientific path, after deep thinking, you naturally form some hypotheses—and you have to believe in some of them. Of course, as a scientist, you also have to accept that some hypotheses will turn out to be wrong. I’ve certainly had many hypotheses that turned out to be wrong.

But this particular hypothesis was something I had thought about for a long time. Mathematically, it’s a concept about generalization. I spent all my PhD years working on models and algorithms, so I accumulated a lot of insights and gradually realized this.

At the end of the day, AI—mathematically speaking—has always been about one thing: generalization. That’s really it. And how do you achieve generalization? There are two aspects: algorithms and data. And the two are tightly connected.

If the algorithm is too complex and the data is scarce, you overfit. If the data is abundant but the algorithm isn’t good enough, you also overfit. There’s a mathematical relationship between the two.

At the time, after thinking about it for so long, I firmly believed in this. And I was part of the earlier generation of computer vision PhD students who worked with machine learning. I was lucky—my PhD years coincided with a turning point in computer vision, when many machine learning concepts were being adopted. So I had a relatively deep understanding of this.

I wasn’t the only one who understood it, of course. But I saw the importance of data back then, so I stuck to it. It really came down to curiosity. I actually found the whole process quite fun. When you’re trying to prove a hypothesis, it’s exciting. You’re full of passion, and you just keep fighting your way forward—like battling monsters in a game. As long as you’re not defeated, you keep fighting.

Jany Hejuan Zhao: Like leveling up while fighting monsters. From ImageNet back then to World Labs today, you’re once again at a new crossroads between algorithms and data. Now, for world models or vision models, data has become an especially difficult problem again.

Fei-Fei Li: A bottleneck, yes.

Jany Hejuan Zhao: How do you break through this bottleneck? Because when you think about space—how do we acquire that data? I touch something and feel whether it’s hot or cold. That feels even harder.

Fei-Fei Li: Exactly. This is a spiral of progress. Back then, ImageNet gave computer vision its largest dataset, and the field flourished. Then the internet brought massive amounts of natural language data, and large language models flourished.

Now we’re back to vision—though AI as a whole is much bigger now, so it’s not just about vision. Look at how fast video models are developing—that’s because there’s a lot of video data. Look at how fast autonomous driving is developing—that’s because some companies have accumulated massive amounts of driving, road, and environmental data.

So you’re right: we’re back to data and algorithms. Actually, it’s not even “back”—we never left. But we are indeed at a very critical point again.

Jany Hejuan Zhao: Yes, exactly.

Fei-Fei Li: Sometimes I find it interesting that even today, people still place more emphasis on algorithms. But everyone who truly works in AI—whether in startups or large companies—knows that data, if not more important, is at least equally important.

Yet when people talk about it, algorithms still sound more “fancy.” Actually, data is truly a science.

Jany Hejuan Zhao: Yes, many people value algorithms so much that algorithm engineers are paid far more than data engineers. People’s perceptions of the difficulty and importance of these two things really are quite different. Data just doesn’t seem as “sexy.”

Fei-Fei Li: One of humanity’s weaknesses is bias.

Jany Hejuan Zhao: Do you think this is a very big bias?

Fei-Fei Li: Honestly, if it’s biased, then it’s biased. The world isn’t perfect anyway. As for me, I have pretty thick skin about this. If you ask me whether it’s a bias, fine—I’ll say it is. But does that mean I need to fight it? I’m too lazy to fight it. As long as I know the truth myself, that’s enough.

Jany Hejuan Zhao: So how does World Labs address this data bottleneck now?

Fei-Fei Li: That I can’t tell you.

Jany Hejuan Zhao: Because it’s a business secret?

Fei-Fei Li: Exactly.

Jany Hejuan Zhao: But I can imagine that if you truly believe there will be an explosion of progress within one or two years, then you must have found some ways to break through the data bottlenecks for world models. I’m really looking forward to seeing that.

Long Road Ahead For Robot Models

Jany Hejuan Zhao: Let’s go back to autonomous driving. I’ve been wondering—are autonomous driving models essentially a scaled-down or simplified version of world models?

Fei-Fei Li: They should be. They really should be. At least, I hope they are. Of course, I don’t know exactly what Tesla or Waymo have internally, or how much 3D information is involved in their systems.

Autonomous cars are actually robots—the earliest mass-produced robots created by humans. But they are extremely limited robots. What are they? They’re box-shaped robots, essentially rectangular boxes, operating in a largely two-dimensional world, because roads are basically two-dimensional, not three-dimensional. And in this 2D world, they do just one thing: avoid colliding with other objects. Those objects may be cars, pedestrians, or roadside obstacles. But in essence, it’s a box-shaped robot in a 2D world whose sole goal is not to bump into things.

Now think about the 3D robots we want to build in the future. In a three-dimensional world, their purpose is precisely to touch all kinds of objects—helping us wash dishes, cook, fold clothes. That comparison tells you how simple a robot a car really is.

Jany Hejuan Zhao: It really is very simple.

Fei-Fei Li: Exactly. That’s why I say the world model for cars is also simpler—it’s simpler because the task itself is simple. Of course, I’m not saying autonomous driving isn’t impressive. Tesla and Waymo are both remarkable. But from a scientific, macro-level perspective on world models and robotics, this is just the beginning. What comes next is far more complex.

Jany Hejuan Zhao: So if we think of current autonomous driving systems—the spatial perception models we can understand and experience today—as a low-end version of world models, they indeed handle relatively simple problems. They’re still very far from true robot models.

Fei-Fei Li: And generally speaking—though I truly don’t know what Tesla is doing internally—I don’t think their approach is centered on generative spatial models or world models, because they don’t really need generation. Maybe they use generation during training, but I don’t know. Their main tasks aren’t generation; they focus on judgment, recognition, detection, and so on.

So when it comes to Tesla’s “world model,” I don’t think it’s a strongly generative model, because it doesn’t need to be. But robots do need that. Robot training needs it. You simply can’t collect enough real-world data. What we’re doing is closely related to creativity and design, and those inherently require generation—generation itself is a use case.

Jany Hejuan Zhao: About robot models, I’ve seen that you also collaborate with Nvidia on robot-related models. In China, the robotics industry is very hot right now—lots of startups, lots of funding—but the focus is more on mechanical intelligence, manufacturing, and hardware. On the AI model side, especially generative models, breakthroughs seem more limited so far. How do you see the current stage of generative models for robotics?

Fei-Fei Li: I think robotics is fascinating. Robotics is incredibly hot in Silicon Valley right now. My own lab has been working on robotics for more than a decade, and many of my former students are now leading robotics research across startups and large companies alike. I really love this field and I’m very positive about it.

That said, I also believe we need to stay very calm and rational. Robotics research is still in its early stages. First, as we discussed, robots truly lack data. Think about autonomous driving—it’s been worked on for decades, and cars constantly collect data while people are driving them. Robots, on the other hand, have very limited commercial use cases, especially in daily life, so data collection is extremely difficult.

That’s why taking the generative AI route is both interesting and promising. Generative AI—especially video generation—opens up new possibilities for training. You can do simulations. What we’re doing with robot simulation is very promising. You can even use video models at inference time to assist with online planning.

So there are many exciting possibilities. In a way, robotics is benefiting from the rapid development of neighboring fields like generative AI. That’s why I’m excited—but we still need to wait and see. Robotics still has a long road ahead, especially when it comes to commercialization and everyday-use robots.

Jany Hejuan Zhao: Industrial robots may move faster, right?

Fei-Fei Li: Industrial robots have been in use for a long time already.

Jany Hejuan Zhao: I mean more intelligent industrial robots.

Fei-Fei Li: Yes, because their scenarios are relatively constrained. They operate in controlled environments and have access to plenty of data.

Jany Hejuan Zhao: If robotics still has a long way to go, and one major bottleneck is data, does that create new opportunities—like startups focused on robotic simulation data? Would data-focused startups around robotics be more promising than building robots directly?

Fei-Fei Li: Data companies can definitely be very successful. Just look at Scale AI—it’s a great example. So yes, data is a real business opportunity. But as the saying goes, the devil is in the details. How you do it, and how well you do it, really matters.

The most important things in a data business are: first, how big the market is; and second, whether you can deliver the data your customers actually need. Robot data is especially hard to collect, because you need robots to collect robot data. If humans collect it instead, scaling becomes very slow. It’s not like cars, which are already everywhere and can gather data very quickly.

Jany Hejuan Zhao: So the robotics industry currently faces two major challenges: data and application scenarios. Without sufficient data, application scenarios remain limited, and the two issues are closely linked. People also feel that there aren’t many compelling use cases yet—companies like Unitree, for example, are still largely focused on performance and demonstrations.

If we view robotics as being in a very early stage of development, what other challenges remain? And how many years might it take to complete this cycle? What key milestones still need to be crossed?

Fei-Fei Li: I can give you one data point or a simple fact. From the moment autonomous driving became a concept to real commercialization: Google formed a small autonomous driving team in 2006, and Waymo began operating on public roads around 2024. That’s nearly 20 years. There are similarities and differences here. The automotive industry was already very mature—its supply chain, OEMs, and use cases were well established—so that helped. But AI itself wasn’t mature back then, which is why autonomous driving had such a long AI development path.

Today, AI is far more mature, so that part should move faster. But aside from industrial robots or very limited scenarios, robotics doesn’t yet have application environments as mature as cars. So whether this journey will be faster than 20 years or slower is hard to say.

I do believe AI will accelerate things compared to autonomous driving back then. But as we said earlier, the problem is also harder—it’s a truly three-dimensional world. I’m often asked how many years this will take, and I honestly don’t like answering that question because it’s very complex. I can only say this: I believe that within our lifetimes, we will definitely see it.

Jany Hejuan Zhao: Let’s wait and see. I know there are commercial secrets involved, but if we imagine the long arc of spatial intelligence and complex visual systems—comparing it to the 400-million-year evolutionary journey from trilobites onward—where do you think we are now? Early stage, or already somewhere in the middle?

Fei-Fei Li: Wow, it is hard to compare. You asked a great question. I think about this myself sometimes. In some aspects, today’s spatial intelligence—especially multimodal models—has already far surpassed humans. For example, object recognition has long exceeded human capabilities. How many breeds of dogs, species of birds, or types of cars can an average person recognize? AI is far better than most people at that.

Another example is 3D generation. Humans have actually quite good 3D understanding, but we’re very poor at generating 3D mentally—unless you’ve had specialized training. Doing 3D generation purely in one’s head is generally weak. This is different from children playing with clay—there, the 3D creation involves embodied interaction. But if you ask someone to imagine a 3D structure in their mind and then draw it, most people perform quite poorly. In this respect, AI can already achieve some very, very impressive results.

But when it comes to the deep understanding humans have of the 3D world—the physical relationships between objects, materials, physical properties, and all the rich intelligence embedded in that understanding—AI still falls far short. And that’s not even mentioning social understanding: how humans understand each other, which is also a form of visual understanding.

Humans are extraordinarily complex. So in some dimensions, AI is already comparable to—or even beyond—humans, while in others, it remains far behind.

And even though I believe deeply in spatial intelligence as an AI researcher, my belief isn’t blind. It’s grounded in scientific understanding and years of work in this field—seeing both the opportunities and the direction of the technology. Passion is necessary, especially for entrepreneurship, but judgment about technology requires strong logic and scientific rigor.

Jany Hejuan Zhao: Scientific rigor and careful reasoning underpin it all.

Golden Time for Startups, or Big Tech Takes It All?

Jany Hejuan Zhao: Right now, most of our attention is focused on a few big tech companies. For example, Google Gemini, or OpenAI, which has grown from a small company into a giant. Anthropic has also effectively become a giant. Everyone is watching these giants. In the U.S. stock market, people talk about the “Magnificent Seven.”
Do you think small companies still have opportunities in this wave of AI development? And where do those opportunities lie, especially for new entrepreneurs?

Fei-Fei Li: I hope they do—because my own company is a small one. But hope aside, this is a valid question. When it comes to the integration of data, resources, computing power, and talent, companies that can consolidate these resources do have higher chances of survival and success.

That said, I don’t think we should only look at these more obvious factors. Obvious factors are easy to see, easy to talk about, and therefore spread easily.

Let me give a very simple example: AI coding. Microsoft was the first to do AI coding, right? Copilot. It had perfect timing, location, and people—everything working in its favor. It had all the resources, all the use cases, and even GitHub belongs to Microsoft. So why didn’t it fully dominate?

Today, what’s hot in Silicon Valley are Cursor and Claude Code. How is it that, under such circumstances, small companies were able to break through? This shows that obvious factors alone are not enough.

If everyone keeps judging solely based on these visible factors, their conclusions will be biased. In human history, there has never been an era where only big companies had a chance to win—never. In every era and in every society, big companies often had strong resource-integration capabilities as well. So what does this come down to? Creativity, opportunity, execution, and timing. These are all essential elements.

On top of that, AI is truly a horizontal technology. That means it creates opportunities at many application levels—far more than big companies can possibly cover. Small companies have countless opportunities to build applications extremely well, push them to the limit, and gradually carve open the market. All of that is possible.

Jany Hejuan Zhao: So for small companies, would choosing vertical application opportunities be better and more promising?

Fei-Fei Li: Exactly. It depends on what kind of small company. If you don’t have the capability to build foundation models or large models, then you definitely need to focus on applications. But applications aren’t only vertical. Take our company, for example—I don’t know whether you’d call it small or not; I’d still say it’s small. But we do have enough capability to build foundation models, so we also build models.

Jany Hejuan Zhao: To build models, you really need someone with your kind of background.

Fei-Fei Li: Right. Building models requires a very different talent structure.

Jany Hejuan Zhao: This brings us to a concept you often talk about: AI for Good. You believe AI should be more inclusive and bring benefits to ordinary people, rather than being controlled by a small elite. It should be used to serve humanity and promote good, not to do harm. This is a very interesting topic, and for scientists, it often has two sides.

You—and also Professor Geoffrey Hinton—have recently emphasized the need to be vigilant about AI’s potentially destructive power, even greater than nuclear weapons. But there’s another view that says we are still in a development phase and shouldn’t overemphasize AI risks right now. From your perspective, at this stage, should we focus more on development, or should we, like Professor Hinton suggests, simultaneously put more effort into safety and alignment?

Fei-Fei Li: I actually think this is just common sense. AI is a tool, and tools are double-edged swords. Every human tool—from something as small as fire or a stone axe, to nuclear weapons, biotechnology, or AI—is a double-edged sword. Of course, I believe tools should be used for good. But at the same time, we must prevent them from being misused—whether intentionally or unintentionally.

So I think both extremes are irrational. If we only focus on development and don’t care at all about safety or ethical use, that would be a disaster. But if we only talk about ethics every day and refuse to develop the technology, we would also miss many opportunities. Good technology can bring enormous benefits.

That’s why I often tell the media that I’m actually quite boring. I don’t say sensational things or take black-and-white positions. I always say the most boring things.

Jany Hejuan Zhao: But that’s the rigor of a scientist.

Fei-Fei Li: I don’t think this has anything to do with being a scientist. It’s just basic human common sense. Think about parenting: would you teach your child how to use fire? Of course you would—how to cook, for example. When you teach them, you explain the benefits of fire, but you also explain its dangers. That’s really just common sense.

Jany Hejuan Zhao: So how do we ensure that, in the development of AI, it becomes more widely accessible and benefits the public, instead of turning into a form of power? I increasingly feel that when technology is controlled by a few giants or by governments, it can become a tool of power. How do we prevent it from becoming a means of controlling humanity, and instead make it a way to benefit humanity?

Fei-Fei Li: You’re right. AI is a tool of power, and it is also a tool for good. It will always be a tool. In my view, this tool will become increasingly powerful. But before it becomes uncontrollable, it is still a human tool, and humans have the responsibility to keep it controllable.

Like all tools, we should never expect the tool itself to figure out what it ought to do. Whether it is used for good is a human responsibility. So controlling AI and guiding how it is used—that responsibility lies with humans: with laws, institutions, education, and society as a whole. Every society is different, and every individual is different, but the responsibility ultimately lies with humanity.

An Upper Hand for Humans or AI?

Jany Hejuan Zhao: You also mention this at the end of your book—that AI is not meant to replace humans. There are many things AI cannot replace, including empathy. Emotional connection and communication are deeply human needs. So in the development of AI, how can we design it—or guide its development—in a way that preserves the parts of humanity that shine most brightly, and ensures that humans themselves are not replaced?

Fei-Fei Li: That’s a very good question, Hejuan. I think we really need to look at AI rationally—understand what it is, and then think rationally about what society needs today. Take education, for example. In the age of AI, we urgently need to update our educational philosophies and methods. We need to let children use this tool, and help them understand that it can empower their creativity and learning in many ways. At the same time, we must also teach them about the potential problems this tool can bring.

And this isn’t just about educating children. I think the biggest issue in the adult world is that we assume children are the ones who need education, when in fact the people who most need to be educated are ourselves. So we need to educate ourselves, educate the public, provide the public with sufficient information, and give policymakers and lawmakers more opportunities to learn and understand these technologies. All of this is extremely important.

In the end, how we develop and govern AI is really about our own learning, growth, and self-governance. Ultimately, it all comes back to people.

Jany Hejuan Zhao: Yes, educating ourselves is actually harder—much harder. Struggling with human nature is often more difficult than grappling with AI.

Fei-Fei Li: That’s absolutely true.

Fei-Fei Li: I think in the age of AI—especially with tools that possess cognitive abilities—the real lesson for us is that we should understand ourselves better and govern ourselves better. That “self” refers both to individuals and to groups. Sometimes I feel that all the heated discussion around AI misses the point. In the end, what’s lacking isn’t discussion about AI, but self-reflection on human nature—both individual and collective.

Jany Hejuan Zhao: Perhaps during the development of AI, we actually need more opportunities to discuss the development of human nature itself. Many young people are confused right now. There have been layoffs in Silicon Valley recently. I hear from many people who studied computer science—once in extremely high demand, including Stanford graduates—who are now facing layoffs and uncertainty. People say AI will replace many jobs, leading to unemployment and many other ripple effects.

So in this process—whether through education, or through how we understand the world and reflect on ourselves—how should we view the impact AI may have on our work, our lives, and even our emotional well-being? What should we, as humans, do?

Fei-Fei Li: What individuals need to do and what society as a whole needs to do are different.

For individuals, the first thing is to recognize that the era is changing. Pretending nothing is happening—like an ostrich burying its head in the sand—is not helpful. The world is changing, and jobs will change. Every major technological revolution brings job transformations, and often periods of pain. Some transitions are smoother; others are not and can cause social disruption.

So as individuals, we need to learn and adapt. Again, it comes back to maintaining curiosity—curiosity about life and about the world. Even if that curiosity comes from fear in adulthood, that’s okay. At least it gives you the motivation to learn. That is what individuals need to self-reflect on.

As for society, I believe our educational structures urgently need reform. Take K–12 education, for example. We ask teenagers to spend years on exam-oriented learning or on finding standard answers. In the United States, it’s not purely exam-driven, but it still emphasizes testing, and many teaching methods are based on knowledge “filling.” These approaches can—and should—be updated, and urgently so.

AI is rapidly demonstrating that many tasks can be done by machines. Asking humans to spend decades learning to do things that machines can already do is a waste of human potential. That’s why I strongly call on those who think about education, shape education policy, and implement education to seize the opportunity of the age.

For more than 100 years, our educational methodology has barely changed. My greatest hope is that when historians look back a century from now, at the early decades of the 21st century, they will say that humanity carried out an educational revolution.

Jany Hejuan Zhao: What would that educational revolution look like to you? In terms of direction or concrete changes, what are you most hoping for?

Fei-Fei Li: I believe we should use AI to empower both educators and students. By using AI to save time and energy, we can allow students—under the guidance of teachers and through self-guidance—to develop cognition and capabilities that AI cannot achieve.

Humans have enormous potential. Every individual has immense potential. Our brains are not fully utilized, and neither individuals nor societies have realized their full potential. You only need to look at the vast differences between individuals to see how great that potential is. Some people possess almost superhuman abilities, which shows that such capacity exists within human nature—we just haven’t unlocked it for most people.

With AI as a tool—and even with the disruption AI brings to human work—we have an opportunity to rethink education entirely. Our educational methodology hasn’t fundamentally changed in over a century. Now is the moment to transform it completely—from knowledge-based education, to skills-based education, to cognitive development, and ultimately to education about being human.

Jany Hejuan Zhao: Yet what we’re seeing now is that AI development seems to be pushing societies—not only in the U.S. but also in China—to place even greater emphasis on STEM (science, technology, engineering and mathematics). Education focused on cognition or the humanities is becoming less valued. Even the U.S. is talking about manufacturing reshoring and training more engineers. I find this somewhat confusing.

Fei-Fei Li: If education truly changes, we shouldn’t divide it into science versus humanities anymore. AI can enable everyone to learn coding—so are those people scientists or humanists? AI can also help people better appreciate beauty, read literature, and even write poetry. The entire methodology can change. Previously, we separated disciplines; AI gives us the chance to move beyond that.

The other day, my child was reading Harry Potter and asked me about a complicated plot point in the fifth volume—something neither of us fully understood. So we asked AI. We used ChatGPT and Gemini, asking step by step: what did Dumbledore do at that moment? What did Harry do? What did McGonagall do? After a series of questions, we finally understood the situation. This small example shows how many opportunities AI gives us.

But in the end, it still comes down to how people use this tool. What I fear most is human surrender—when people think, “AI is so smart, there’s nothing left for me to do.” That’s very frightening.

Jany Hejuan Zhao: People just “lie flat” and give up.

Fei-Fei Li: Right? I hadn’t heard that phrase before. It’s very vivid, and it’s scary. Humans have immense potential, countless opportunities to shape the world, and countless opportunities to make the world a better place. AI is just a tool.

Jany Hejuan Zhao: Listening to you repeatedly say “AI is just a tool” today has really struck me. I know many people, including AI researchers, and ironically, those who don’t use or understand AI often think of it as a tool. But many people working in AI say the opposite—that AI isn’t just a tool, that AI is everything, the future, the world itself, and that we shouldn’t treat it merely as a tool.

Because you are a true AI expert and scientist, hearing this from you is particularly powerful. It’s a simple sentence, but it shapes how we perceive and understand AI. Language is a gate—it shapes how we understand the world.

Fei-Fei Li: Human nature—and human agency—is the most important thing. If we give up our agency, we give up our curiosity and motivation to change ourselves and the world.

I honestly don’t understand what people mean when they say “AI is the world.” I really don’t. One could just as well say “a single flower contains an entire world.” I don’t know what “AI is the world” means. Behind the phrase “AI is just a tool” is a view of the relationship between humans and AI: seeing AI as a tool means seeing humans as more important, and placing greater emphasis on humanity itself.

Fei-Fei Li: Ultimately, when I say “AI is a tool,” it reflects my faith in humanity—my faith in human nature and human society. I believe in humans. I do not believe in AI. 

Dark Side of AI and AI Safety

Jany Hejuan Zhao: Earlier you mentioned that your family gave you many precious things. Could you share one or two examples of what your family gave you that you treasure most?

Fei-Fei Li: I actually mention this in my book. After finishing it, I realized that the book was really about my mother, not about me.

Jany Hejuan Zhao: Yes. I read the stories about your mother and your father, and I found them deeply moving. Coming from an ordinary family and growing step by step through your own efforts—it’s very inspiring.

Fei-Fei Li: Yes, my family was very ordinary, and quite small. In my childhood memories, there was my maternal grandmother, but on my father’s side there was no one. It was just a small, very ordinary family. My mother was in very poor health. But again, this wasn’t anything unusual—many families are like this.

What it gave me, though, were many precious things that I only understood after growing up. When you’re young, you spend so much time under survival pressure. But once you’ve walked that road, you realize—first of all—it truly forges your willpower. “Forging willpower” sounds like a big, abstract phrase, something people say that doesn’t mean much. But once you’ve been through those experiences, you don’t need to say it—it’s already there.

Second, although my work as an AI scientist is very “machine-like”—working with computers, algorithms, and data—my life experiences gave me a deep understanding of human nature. Those experiences, especially witnessing birth, aging, illness, and death, and seeing human vulnerability, gave me many perspectives. I think these are extremely valuable perspectives.

In Chapter 10 of my book, I wrote specifically about my mother’s illness. Why did I do that? Because I’m one of the very few AI researchers—perhaps the first or second—who is also a member of the U.S. National Academy of Medicine. And why did I become a member of the Academy of Medicine? Because over many years, I didn’t only work on AI as a professor; I also did a lot of work related to healthcare, especially healthcare delivery. Decades of accompanying my mother meant that I was truly struggling and navigating within the healthcare system.

Jany Hejuan Zhao: Long illness makes a doctor.

Fei-Fei Li: Absolutely. So many surgeries, so many illnesses big and small, daily caregiving—every experience gave me a deep understanding of healthcare. When I later worked on AI-enabled healthcare projects, I realized that my understanding was very different from others’. I truly had deeper insight. That also allowed me to work better with colleagues in hospitals, because they felt I respected them. They could see that I wasn’t just someone who talked only about computer science. “You actually understood our work and our pain points!” That perspective was incredibly special.

Jany Hejuan Zhao: That’s remarkable—going from “long illness” to “a doctor,” and then to a member of the National Academy of Medicine.

Fei-Fei Li: No experience in life is wasted.

Jany Hejuan Zhao: So this was also driven by curiosity.

Fei-Fei Li: It was driven by both curiosity and survival—but ultimately, it was driven by love. I loved my mother and wanted her to live, to be healthy. That’s why I devoted so much energy. That motivation really came from love.

Jany Hejuan Zhao: Professor Li, I personally have a major concern. As you know, the media industry has been discussing this a lot in recent years. First, AI has a huge impact on journalism, and next it will prompt our industry to undergo major changes. I’m also working on new products myself, hoping to build a better company and better products in the AI era.

But on the other hand, we’re seeing growing conflicts with long-held professional journalistic values. AI can now generate enormous amounts of text and images, and fake news and fabricated images are everywhere online. Many people can’t tell what’s real anymore. Even videos can be faked. How should we view the flood of misinformation AI may bring? I know you’ve also been personally affected—as a public figure, you’re especially vulnerable to online rumors. How should we think about this?

Fei-Fei Li: That’s true. And honestly, I deeply empathize. People often call me asking, “Where are you? What happened to you?” And I say, “Nothing—I’m at home sleeping.” There are all kinds of rumors. Some people who care about me are so worried that they don’t even dare to call, saying, “Something so big happened to you, we didn’t want to disturb you.” And I have to tell them: nothing happened.

Jany Hejuan Zhao: Exactly—everything’s fine.

Fei-Fei Li: Yes. So I really understand and empathize. As I said earlier, there are several layers to this issue. The first is public education. AI is a new thing. When cars were first invented, they were extremely unsafe—there were no seat belts, speeds were uncontrolled, and many problems existed. Humanity paid a heavy price in blood and tears before we gradually made cars safer. Even today, when your child becomes a teenager and starts learning to drive, you’re extremely nervous and provide extensive education.

I remember when I was young, my father repeatedly told me, “Never touch an electrical outlet with wet hands.” He must have said it 200 times. That’s education. When it comes to AI’s risks and fake news, public education is absolutely critical. This tool will always be used by people. In media, I’m sure you see how impossible it is to guard against everything. When fake news about me appeared recently, I asked a journalist friend, “Why are there so many fake stories? It’s unbelievable.”

Jany Hejuan Zhao: When I saw those stories, I wanted to message you and ask whether they were true.

Fei-Fei Li: Exactly. I didn’t even realize at first that AI was involved. Someone with ulterior motives could write one piece of content, and in the past, without AI, they’d have to write it themselves or hire others. Now with AI, they press a button and generate 1,000 or even 10,000 pieces instantly. AI truly empowers harmful behavior as well.

In this situation, I think the first step—for individuals and for society—is understanding. We will all encounter these things, and we need to recognize them. Once I realized something was AI-generated, I actually found it amusing and became curious about how AI could write like that—and I stopped feeling hurt.

More seriously, though: first, individual education and collective education are essential. Second, institutions and policies matter—and they must be built on understanding. Without recognizing the destructive potential of this tool, we won’t just face fake news, but many other harms. Without awareness, we’ll never develop better systems or ways to deal with it.

Third, I strongly support what you’re doing. The tool is here—it has both good and bad sides. As a media professional, how do you update your products? How do you use your wisdom and execution to create new products that are original and distinctive? For example, when I read your writing, I know it wasn’t written by AI—or even if AI was used, it still delivers real value. That depends on human creativity. It’s not easy.

In the AI era, we are sometimes victims and sometimes deeply impacted. But as I said before, the responsibility ultimately lies with us—how we use the tool, how we avoid being harmed by it, how we prevent it from harming others, and how we accomplish what we want to do. All of these responsibilities remain human responsibilities.

Jany Hejuan Zhao:
Your remarks are insightful. Many young people admire you greatly. If you could say something to your 16-year-old self—just starting out on her journey of study—what message would you give to today’s youth?

Fei-Fei Li:
I’m actually very poor at giving advice to young people. But I truly believe this is an extraordinary era. Technology is changing, society is changing, and young people today have countless opportunities. In the end, it’s up to you to seize them.

Carry your curiosity, and what I call the “North Star” in my book—the passion, belief, and sense of purpose in your heart. Be yourself, and work to change the world. That, I think, is the greatest opportunity this era offers you. I hope young people recognize this opportunity and give themselves the chance to go for it—just do it.

Jany Hejuan Zhao:
Thank you, Professor Li. To close today’s conversation, I’d like to end with one of your own sentences: I hope humanity never gives up on itself.

Thank you again, Professor Li, for taking time out of your very busy schedule to have this conversation with me.

Fei-Fei Li:
Thank you.

For more great content, follow TMTPost on WeChat (ID: taimeiti) or download the TMTPost App

A RMB 30 Million “Rescue” and a RMB 130 Million Write-Off: *ST Huicheng Narrowly Keeps Its Listing, but the Real Test Is Just Beginning

2025-12-30 21:22:54

Image generated by AI

As the year draws to a close, *ST Huicheng (002168.SZ), struggling on the edge of delisting, has secured a last-minute lifeline.

On December 29, *ST Huicheng announced that restructuring investor Zhien Biotech (植恩生物) had paid the company a gratuitous cash donation of RMB 30 million in full before the year-end window closed. The terms state the donation is “unconditional and irrevocable.” The funds will be recognized in the company’s 2025 financial statements, directly boosting its cash holdings, capital reserves, and net assets attributable to shareholders, and resolving its immediate survival problem.

Three months earlier, major creditor Lüfa Assets (绿发资产) had forgiven RMB 130 million of the company’s debt, repairing its balance sheet. Within a single quarter, this company under delisting risk warning has received a combined RMB 160 million in debt forgiveness and cash donations, buying itself precious breathing room through sustained external “transfusions.”

In the short term, *ST Huicheng has most likely locked in its listing status for the year. In the long term, the company, whose traditional businesses are electrical equipment manufacturing and new energy and which is pivoting deeply into biopharmaceuticals, still faces multiple challenges: the effectiveness of business integration, the sustainability of profits, and the completion of its restructuring process.

Two Windfalls Aimed at the Year-End Financial Lifeline

*ST Huicheng’s path to preserving its listing is gradually being cleared by debt forgiveness and cash donations.

The company has been under delisting risk warning since April 30. The core problems: its audited 2024 net assets attributable to shareholders were negative, its net profit excluding non-recurring items was negative, and its revenue fell below RMB 300 million.

At the end of September, *ST Huicheng obtained forgiveness of RMB 130 million in loan principal from creditor Chongqing Lüfa Assets, the key driver behind its net assets turning positive in the third quarter. Before the forgiveness, the company still owed Lüfa Assets a combined RMB 190 million in loan principal, including RMB 50 million in entrusted loan principal.

The forgiveness was likewise declared “unilateral, unconditional, unalterable, and irrevocable.” Treated as a capital contribution, it directly reduced the company’s liabilities and increased capital reserves, improving the balance sheet structure at its root.

The company’s 2025 operating data already show signs of improvement.

In the first three quarters of this year, the company posted revenue of RMB 284 million, up a sharp 93.13% year-on-year and approaching the RMB 300 million “delisting red line.” Its net loss attributable to shareholders narrowed sharply from RMB 83.09 million in 2024 to RMB 43.22 million, and period-end net assets (total owners’ equity) turned positive, from -RMB 38.87 million to RMB 58.80 million.

The RMB 30 million cash donation now lands precisely at the company’s most critical financial milestone of the year, serving as the final push to preserve its listing.

According to the announcement, the funds will directly increase the company’s 2025 cash holdings and capital reserves, not only offsetting fourth-quarter operating losses but also further shoring up the net-asset indicator and significantly reducing the risk of delisting triggered by failing financial thresholds.

With this double support, *ST Huicheng’s case for keeping its listing now looks fairly solid.

Based on quarterly revenue this year of RMB 61 million, RMB 119 million, and RMB 104 million, up 94.47%, 48.69%, and 191.12% year-on-year respectively, full-year revenue will very likely exceed RMB 300 million, essentially defusing the revenue alarm.

Add the balance-sheet repair from debt forgiveness and the profit backstop from the cash donation, and improvement across all three core financial dimensions means the company’s odds of preserving its listing have risen. It also reveals the urgency felt by the restructuring parties: keeping the listed status intact is the foundation of all subsequent restructuring value.

After the Shell Rescue: Meeting Financial Targets Does Not Equal a Fundamental Turnaround

The two windfalls should lift *ST Huicheng’s short-term delisting alarm, but the long-term predicament has not gone away. The company’s fundamental challenges are a weak core business and a strategic transformation still in progress.

Looking back: in August, creditor Lüfa Assets applied for *ST Huicheng’s pre-restructuring on the grounds that the company could not repay debts as they fell due and clearly lacked repayment capacity. In September, Zhien Biotech was confirmed as the restructuring investor and signed a Restructuring Investment Agreement, under which it may provide up to RMB 70 million in liquidity support during the transition period to keep the listed company running stably. The RMB 30 million donation is one step in implementing that agreement.

As industrial capital with more than 20 years in biopharmaceuticals, Zhien Biotech’s business layout aligns with *ST Huicheng’s transformation direction.

Founded in 2001, Zhien Biotech centers on the MAH (Marketing Authorization Holder) model. Its business covers APIs, formulations, and pharmaceutical intermediates, with production lines for hard capsules, tablets, and more. Its planned R&D pipeline spans diseases of the metabolic, nervous, musculoskeletal, respiratory, blood, cardiovascular, and digestive systems.

Zhien Biotech’s key marketed products include orlistat capsules (weight loss), donepezil hydrochloride tablets (Alzheimer’s disease), ropinirole hydrochloride tablets (Parkinson’s disease), quetiapine fumarate extended-release tablets (antipsychotic), tropisetron hydrochloride injection (antiemetic for chemoradiotherapy and post-surgery use), suplatast tosilate granules (type 2 inflammation such as allergic asthma), and tadalafil tablets (men’s health), among more than 50 formulation and API products.

In 2023 and 2024, Zhien Biotech posted revenue of RMB 964 million and RMB 1.193 billion and net profit of RMB 110 million and RMB 48.95 million respectively, with net assets of RMB 1.63 billion at the end of 2024, indicating relatively stable operations and financial strength.

*ST Huicheng, by contrast, has seen its traditional businesses of medium- and low-voltage power distribution equipment and EV charging piles stagnate. Its net profit excluding non-recurring items has been negative for five straight years, and it was insolvent at the end of 2024.

In January this year, the company paid RMB 47 million in cash for the 51% stake in Ruien Pharmaceutical (锐恩医药) held by Zhien Biotech, entering biopharmaceuticals via acquisition, but integrating the new business and realizing its contribution will take time.

The current revenue surge stems mainly from consolidation. After the acquisition, Ruien Pharmaceutical began contributing results: the interim report shows pharmaceutical business accounted for 57.41% of total revenue, with Ruien posting RMB 122 million in first-half revenue and RMB 23.44 million in net profit, though its sustainability and profitability remain to be seen.

Moreover, *ST Huicheng is still in pre-restructuring and has yet to obtain final court approval. Whether it can proceed smoothly into formal restructuring, and when the restructuring plan will be implemented, remain uncertain.

Zhien Biotech’s cash donation ends *ST Huicheng’s 2025 on a hopeful comma: it temporarily disperses the cloud of delisting and buys a precious window for strategic restructuring. But the money only solves the problem of surviving. Building a sustainably competitive core business, so that the company not merely survives but thrives, is what will ultimately decide whether this restructuring succeeds. (By Company Observer; written by Cao Qian, edited by Cao Shengyuan)


The First “AI Embodied Home Robot” Stock Goes Public, with 95% of Its Revenue from Overseas

2025-12-30 21:20:31

Image source: 卧安机器人

On December 30, 卧安机器人 (ticker: 6600.HK), the company behind the SwitchBot brand and regarded as the “first AI embodied home robot stock,” officially listed on the Main Board of the Hong Kong Stock Exchange at an offering price of HK$73.80 per share, raising a total of HK$1.64 billion in its IPO.

As of the close of its first trading day, the stock was up 0.07% at HK$73.85, for a market capitalization of HK$16.411 billion, about RMB 14.77 billion.

卧安机器人 was co-founded by Li Zhichen and Pan Yang in 2015. Ten years on, it is not a robotics company conjured up to ride the embodied-intelligence boom.

In terms of ownership, Li Zhichen, Pan Yang, and parties acting in concert together hold 44.53%, making them the controlling shareholders. Li Zexiang, known as the “father of DJI,” holds a combined 12.98% through affiliated entities, and Gao Bingqiang indirectly holds 9.72%; both industry authorities are deeply involved in the company’s development as non-executive directors.

Before listing, 卧安 had completed eight funding rounds totaling RMB 391 million, with shareholders including Hillhouse, Source Code Capital, Fortune Capital, and other well-known institutions.

卧安 did not start out building humanoid robots; it first made its name in the smart home market.

In 2017, 卧安机器人 launched the world’s first finger robot, SwitchBot Bot, the first product under its “SwitchBot” brand. It later rolled out curtain robots, door-lock robots, and more.

According to the prospectus, the “SwitchBot” brand’s product lines cover an AI embodied home robot series, including enhanced execution robots and perception-and-decision systems, along with other smart home products and services, spanning seven product categories and 42 SPUs.

Compared with the influence of Xiaomi and other companies in AI embodied home robot systems, 卧安 is rarely mentioned in the domestic market because its main sales channels are in Japan, Europe, and North America. More than 95% of its revenue comes from overseas, and Japan is its largest market.

The prospectus shows that from 2022 through the first half of 2025, revenue from Japan, Europe, and North America together accounted for 95.5%, 95.6%, 95.0%, and 96.6% of total revenue.

Screenshot from 卧安机器人’s prospectus

From 2022 to 2024, Japan accounted for 61.4%, 62.3%, and 57.7% of 卧安’s revenue. In the first half of 2025, revenue from Japan reached RMB 268 million, lifting its share further to 67.7%, nearly seventy percent.

By retail value, 卧安机器人 ranked first in Japan’s AI embodied home robot system industry from 2022 to 2024, with a 20.7% market share in 2024. In the first half of 2025, Europe contributed RMB 68 million in revenue (17.2%) and North America RMB 46 million (11.7%).

卧安’s revenue grew from RMB 275 million in 2022 to RMB 610 million in 2024, a compound annual growth rate of 49%.

In the first half of 2025, 卧安 posted revenue of nearly RMB 400 million, equivalent to 65% of its full-year 2024 total. Gross margin climbed to 54.2%, and the company swung to profit in the period, with adjusted EBITDA of RMB 54.14 million.

On R&D, 卧安 has long kept research spending at around 20% of revenue, focused on robot positioning, AI vision, edge computing, and humanoid robot technology, while building data-collection factories to optimize its VLA models.

Beyond these established products, 卧安 is also opening new tracks, chiefly sports robots (such as Acema, its already-launched AI tennis robot that rallies with human players), companion robots (such as Kata Friends, an AI companion robot running a local large model), and humanoid robots.

Notably, 卧安 plans to launch its first humanoid household robot, the H1, in January 2026. It is a robot dedicated solely to housework.

In a conversation with Source Code Capital, 卧安 CEO Li Zhichen said that if there were one super general-purpose robot acting as butler, nanny, guard, and assistant all at once, people might worry it could replace them. But if each robot handles only one specific duty, say a nanny robot or a guard robot, that worry disappears; humans will always remain the masters of the future home, and this bears on the human-machine relations of the future.

In his view, judged by results, specialized robots are the optimal solution: every human need has one product form that satisfies it best. Hence 卧安’s product matrix: pet robots provide emotional value and butler functions, nanny robots (like the H1) do the housework, sports robots serve as training partners, and other specialized robots each perform their own duties.

He believes the future home will not have one omnipotent robot take over everything; it will carry on the division of labor long established in human society. A general-purpose body is not technically impossible, but in the highly private, long-cohabitation setting of the home, it is not the optimal form of human-machine relations: robots need clear role boundaries, with things they do and things they deliberately do not.

The core challenge of home scenarios lies in the long tail of tasks and distribution drift: the same task throws up large numbers of edge cases across different households, lighting conditions, and object shapes, which rules or manual enumeration alone cannot cover over the long run. That calls for the generalization capability of learning-based models, combined at the execution layer with traditional control and safety constraints to close the engineering loop.

Li Zhichen judges that whether the industry truly approaches a “GPT moment” depends on whether a scalable data loop can take shape and whether learning-based models can demonstrate sustained generalization in real home scenarios, letting people see scaling laws play out in the embodied domain. The key is not “stacking parameters” but a sustainable data loop, reusable task representations and interfaces, and aligned evaluation benchmarks converging first; only then will the returns to scale show up reliably.

Competition on the humanoid-robot track is growing ever fiercer. As a relatively scarce commercialized play on embodied intelligence in the home, 卧安 has arguably caught the current wave of AI’s physical deployment.

From lightweight smart-retrofit accessories, to sports and companion robots, to the humanoid track ahead, 卧安’s product roadmap and growth path may look less eye-catching than dancing humanoid robots, but its products have real deployment scenarios and generate their own cash. That may be an important reason 卧安 managed a successful IPO and won the capital market’s favor. (Written by Zhang Min, edited by Li Chengcheng)


Regulators Crack Down on “Shell-Preserving Financial Engineering”: Who Is Swimming Naked on the Edge of Delisting? | Focus

2025-12-30 21:10:14

Image generated by AI

As the year-end delisting reckoning approaches, the A-share “shell preservation” battlefield is thick with smoke. A batch of ST and near-delisting companies have rolled out unconventional measures in rapid succession, trying to hit the brakes before the regulatory red line and setting off a new wave of financial and capital maneuvering.

From shareholders repaying debts on others’ behalf and debt forgiveness, to RMB 1 asset fire sales and cross-sector M&A restructurings, shell-preservation tactics keep multiplying. Behind them lies controlling shareholders’ desperate fight for control, financing platforms, and wealth security, as well as a complex game, amid industrial transformation and upgrading, between local stability concerns and the rediscovered value of listed-company platforms.

Under deepening registration-based reform and the regulatory keynote of “delist all that should be delisted,” last-minute financial engineering faces unprecedented see-through scrutiny. Switching auditors cannot hide the flaws, and formal investigations freeze restructurings; those swimming naked on “technical shell preservation” will be exposed as the tide goes out.

From Debt Bailouts to RMB 1 Fire Sales: Shell-Preservation Tricks Keep Coming

The year-end shell-preservation sprint has turned white-hot. Companies already “capped” with ST labels, and those hovering at the edge, are rolling out unconventional moves in quick succession, trying to brake before the delisting red line.

Repaying debts on a shareholder’s behalf has become a shortcut for erasing historical stains. On December 29, ST Lutong announced that its largest shareholder, Wu Shichun, had repaid on behalf of the former actual controller RMB 10.2254 million in misappropriated funds, principal plus interest. The payment covers RMB 8.69 million in previously unbooked principal and the interest on it, directly clearing the core obstacle to the company’s application to lift its other risk warning (ST).

Looking back: from September 2021 to July 2022, the company’s actual controller and affiliates misappropriated a cumulative RMB 155 million. Because the problem was not resolved within one month, ST Lutong’s shares were placed under other risk warning from February 1, 2023.

Debt forgiveness and gratuitous cash gifts, meanwhile, have become first-aid kits for net assets. On the same day, Jigao Development announced that controlling shareholder Jinan High-tech Urban Construction and its affiliates had forgiven a combined RMB 378 million in debt. Under accounting standards, the amount is booked to capital reserves, directly thickening net assets.

As of the end of Q3 2025, Jigao Development’s net assets had shrunk to RMB 43.87 million, down 44.27% from the prior year-end. Revenue for the first three quarters was RMB 222 million, down 17.59% year-on-year, with a net loss attributable to shareholders of RMB 82.05 million. Without external support, the company was approaching the red line where negative net assets trigger a delisting risk warning.

Some companies wield a combination of debt forgiveness plus restructuring. On December 26, *ST Xinyuan announced that industrial investors Beijing Suirui Xinyuan Innovation Technology Center and Suirui Technology Group plan to coordinate affiliates to donate up to RMB 330 million in cash assets, while actual controller Zhu Yesheng will forgive RMB 50 million of a subsidiary’s debt.

*ST Xinyuan also announced that, under the restructuring investment plan submitted by the Suirui-Lüjixing-Qichen consortium, Suirui Technology and Lüjixing will inject core quality businesses into the company. Combined with the restructuring plan already approved by the court, the company is attempting rebirth through “debt forgiveness plus asset injection.”

Some companies have chosen to cut off a limb to survive, divesting loss-making assets. On December 26, veteran property developer Jintou Chengkai transferred all assets and liabilities tied to its real estate development business to affiliate Chengyun Development for RMB 1. *ST Nanzhi did the same: on December 3 it transferred the assets and liabilities of its real estate development and leasing businesses, covering 17 equity stakes and RMB 11.582 billion in debt, to affiliate Shanghai Longlin Real Estate for RMB 1. These moves could quickly turn both developers’ net assets positive and shake off delisting risk.

Still others bet on bankruptcy restructuring plus cross-sector M&A. *ST Dongyi announced on December 21 that the Beijing No. 1 Intermediate People’s Court had approved its restructuring plan. If executed smoothly, the plan will materially improve the company’s balance sheet and markedly raise its odds of keeping the shell. Its industrial investors had earlier pledged RMB 345 million and plan to steer the business toward dual mainstays of “AI home renovation plus computing power.”

The Survival Logic and Market Froth Behind Shell Preservation

That listed companies preserve their shells at any cost is a strategic response driven by multiple real-world interests.

“Listed status is itself a brand endorsement that sustains the trust of customers and suppliers and keeps the business running. Once delisted, that credit chain collapses quickly.” As one veteran market participant put it, delisting usually means narrower financing channels and lower credit ratings, and can even trigger debt defaults; the actual controller may face margin calls on pledged shares, shrinking wealth, and loss of control.

ST Lutong is a textbook case. The company was placed under other risk warning in 2023 after its former actual controller misappropriated more than RMB 150 million. After a fierce battle for control, the new largest shareholder, Wu Shichun, chose to cover his predecessor’s misappropriated funds. The move certainly helps speed up removal of the ST label, but it is aimed even more at signaling to the exchange, minority shareholders, and creditors that he is no “barbarian at the gate,” demonstrating both the ability and the will to fix historical problems and improve corporate governance.

Viewed against Wu Shichun’s current 75% pledge ratio on his ST Lutong shares, the bailout also carries an urgent strategic calculus: a small cash outlay to defend the safety margin on pledged assets worth hundreds of millions of yuan.

The involvement of state-owned shareholders mixes in more non-market factors. Against the backdrop of fiscal transition and industrial upgrading, some local governments and local SOEs, motivated by stability, employment, or avoiding the sinking of earlier investments, open their wallets for near-delisting companies even while their own operations are under pressure.

For example, Jigao Development’s controlling shareholder, Jinan High-tech Urban Construction, posted a net loss of RMB 471 million in 2024 yet still forgave the listed company’s massive debt; *ST Jianyi’s controlling shareholder, sitting on paper losses four years after buying in and shouldering more than RMB 5 billion in guarantees and loans, still provided RMB 1.8 billion in support.

“Most critically, the prospect of shell resources being used for backdoor listings, asset injections, and capital operations is highly attractive to big shareholders,” the aforementioned market participant added. The successive release of the new “Nine National Measures” and the “Six M&A Measures” has opened room for the market to revalue shell resources and ignited speculative enthusiasm in the secondary market.

Since the start of 2025, the ST sector has gained 28.33% overall, far outpacing the CSI 300 index (18.21%). *ST Yushun and *ST Yazhen have surged 762.36% and 675.53% this year respectively, ranking third and fourth among all A-shares.

By incomplete count, of the 177 ST companies currently in A-shares, 157 have pursued M&A-related moves within the past year; among them, 91 underwent changes of control and 26 launched major asset restructurings. M&A has become the core route for ST companies to improve their fundamentals.

Take *ST Yushun. Its cross-sector acquisition into the intelligent-computing track rapidly ignited market sentiment. In April 2025 it announced a plan to acquire 100% of Zhongenyun and two other companies for RMB 3.35 billion; by November it had paid 51% of the consideration and begun consolidating their financials, marking a key step in the restructuring.

But not every shell-rescue-by-acquisition story ends well. Much of the speculation around “shell preservation” and “backdoor listings” exploits investors’ gambling instincts, and when the bubble bursts, investors suffer heavy losses.

A classic example is *ST Shuangcheng. In September 2024 it announced a plan to acquire semiconductor company 奥拉股份; under delisting pressure, the plan set off a market frenzy, and the stock soared tenfold within three months of resuming trading. When the restructuring was terminated in March 2025, hopes of saving the shell evaporated, the company was promptly “capped,” and its current market value is down more than 70% from the peak.

Regulators Draw the Sword, and “Technical Shell Preservation” Recedes

“Companies racking their brains to preserve shells reflects a distorted incentive structure that values getting listed over running the business,” one market participant said bluntly. Some companies treat listing as the finish line rather than the starting line and lack the capacity to keep creating value, while the capital market’s pursuit of “concepts” and “stories” feeds the speculative mindset of shell life-support.

Most companies on the verge of delisting suffer from weak internal controls, tunneling by major shareholders, or financial fraud. So-called “fancy shell preservation” usually relies on exploiting loopholes and financial tricks, masking rather than solving the root problems and exposing governance structures that exist in name only.

As regulators shift from “checking whether the reported numbers clear the bar” to “checking whether the business is real and whether the company has going-concern capacity,” the space for “technical shell preservation” is being systematically compressed.

By this author’s incomplete count, more than 100 listed companies have announced plans to change accounting firms since December, with ST and high-risk companies notably over-represented. Frequent auditor switches do not violate the rules in themselves, but coming at the critical shell-preservation juncture, they inevitably raise market suspicions of “audit arbitrage” or of dodging adverse opinions.

*ST Yanshi, for example, citing “overall consideration of the company’s business development and audit needs,” switched its accounting firm from Zhongxing Caiguanghua to Younitai Zhenqing. By this author’s observation, Zhongxing Caiguanghua had issued qualified audit opinions on *ST Yanshi’s annual reports for the past three consecutive years.

On December 13, *ST Jinglun disclosed its reply to an inquiry letter from the Shanghai Stock Exchange, covering the business model and supplier relationships behind a new RMB 230 million computing-server contract, and whether the last-minute revenue push complies with the rules on revenue deductions. At this critical moment, Zhongshen Zhonghuan, its accounting firm of roughly 24 years, abruptly resigned without expressing a view on the revenue-deduction matter.

*ST Xinyuan also changed auditors at year-end. On December 2, citing the need to “better adapt to the company’s future business development and standardization,” it proposed engaging Beijing Dehao International as its 2025 auditor. Notably, *ST Xinyuan’s shares had been placed under delisting risk warning after its previous auditor issued a disclaimer of opinion on its 2024 financial statements.

On another front, a formal investigation has become a death sentence for shell preservation. On December 29, *ST Xiongmao was placed under CSRC investigation for suspected information-disclosure violations. *ST Huke, ST Huluwa, and ST Changyuan have also recently announced that they are under investigation.

Under the Measures for the Administration of Major Asset Restructuring of Listed Companies, a listed company under investigation may not, in principle, plan or implement a major asset restructuring. This means that even if a company’s financial indicators barely clear the bar, any suspicion of disclosure or accounting fraud will outright freeze its path of prolonging life through asset injection.

The deeper change is that regulators are tightening how they recognize revenue and going-concern capacity: for instance, excluding trading-type revenue that involves no core technology, no customer stickiness, and no logistics or warehousing, and requiring see-through verification of the commercial substance of large contracts signed at year-end.

This marks A-shares’ gradual farewell to dependence on “shell value” and a turn toward investment logic truly grounded in companies’ intrinsic value. Genuinely valuable companies do not need financial tricks to stay alive, and a genuinely healthy market should not foot the bill for “zombie companies.” Those still relying on “balance-sheet magic” and regulatory arbitrage will ultimately lose their last refuge before institutional rigor and market rationality. (By Company Observer; written by Ma Qiong, edited by Cao Shengyuan)


Li Auto and XPeng Build the “Future”; NIO and Leapmotor Fight for “Tomorrow”

2025-12-30 19:17:06

By 谈擎说AI

As the old saying goes: three years to enter a trade, five years to understand it, ten years to master it.

With 2025 about to close, a count shows the once-new car-making forces are already past their tenth year and can hardly be called “new” anymore.

According to Tianyancha, NIO and XPeng are the elders, both founded in the second half of 2014; Li Auto and Leapmotor were both founded in 2015.

More than 60 other new forces were founded in the same period a decade ago, including WM Motor, Neta Auto, and Singulato.

After ten years of the market’s winnowing, the new-force brands still doing reasonably well are down to essentially these four: NIO, XPeng, Li Auto, and Leapmotor.

WM Motor entered bankruptcy restructuring in 2023; the most recent to reach the end was Neta Auto, which entered bankruptcy restructuring in June.

And even that is not the end of it.

2025 marks another major turning point for the NEV market: NEV penetration exceeded that of gasoline cars for the first time. The first half of the contest, electrification, has just concluded, and the competition over intelligence follows hard on its heels.

At this new juncture, where do these four companies’ futures begin?

Changes and Hidden Worries at the Four New Forces

To understand the future, first understand the past and the present.

Looking back at 2025, new shifts emerged within the new forces themselves.

Leapmotor, once expected to fail and long a “wallflower” in the mainstream narrative, became the biggest dark horse among them.

In November, Leapmotor’s deliveries reached 70,000 vehicles, the highest among the new forces, and it became the second new-force automaker after Li Auto to achieve profitability.

This year it has posted three consecutive profitable quarters and hit its annual target of 500,000 vehicles ahead of schedule. In a recent letter to all employees, founder Zhu Jiangming showed strong confidence: Leapmotor is no longer content with the “new force” label; it benchmarks itself against the world’s top automakers and has set a goal of sprinting to one million annual sales in 2026.

Interestingly, while Leapmotor staged its comeback, NIO, XPeng, and Li Auto have ended up back on the same starting line.

Their November sales were 36,275, 36,728, and 33,181 vehicles respectively.

On the same starting line, though, some are celebrating and some are worrying.

XPeng and NIO delivered their best quarterly results since founding. XPeng’s Q3 net loss was RMB 380 million, a hair’s breadth from turning profitable in Q4; NIO’s Q3 net loss narrowed to RMB 3.48 billion, still short of Q4 profitability but moving in the right direction.

Li Auto has it hardest. It had long held the top seat among the new forces, with net profit of RMB 8 billion last year, yet in Q3 it swung to a net loss of RMB 624 million.

This confirms the defining charm of China’s auto market today: you never know which will arrive first, tomorrow or the unexpected.

A year ago Leapmotor was still losing money, with gross margin as low as 1.1%; riding its precise “half-price Li Auto” positioning, it quickly vaulted into the first tier.

The same goes for XPeng and NIO, both of which went through their darkest hours in 2024. XPeng fought its way back on the strength of a single model, the MONA M03. NIO was widely written off early in the year; Li Bin once said fewer than 1% of people believed NIO could turn profitable in Q4, but strong sales of the Onvo L90 and the NIO ES8 have restored expectations.

This is the charm of the car market: on the back of one model, a company can go from fighting for its life to climbing back ashore.

But the reverse also holds: an unexpected blow from a single model can drag a company back toward its darkest hour.

A large part of Li Auto’s loss traces to October 2025, when it recalled 11,411 units of the 2024 Mega, its pure-electric MPV, because inadequate anti-corrosion performance of the coolant posed a safety hazard, booking roughly RMB 1.1 billion in warranty provisions.

This outsized expense dragged gross margin from 21.5% a year earlier down to 16.3%, a major trigger for the loss.

Second, Li Auto’s “fridge, TV, big sofa” formula for extended-range vehicles does not constitute a moat and is being eaten away by other automakers.

No need to look far: NIO and Leapmotor are its main competitors.

The NIO ES8, with its extra-large cabin, benchmarks the Li L8 and L9, and as a pure electric model it carries a point of differentiation.

Leapmotor needs no elaboration: its “half-price Li Auto” positioning is exactly how it earned its current standing.

Li Auto crossed the river by feeling for the stones; its rivals crossed by feeling for Li Auto.

Meanwhile, as battery range has leapt forward, the “transitional” role of extended-range vehicles is rapidly taking its bow: the BEV-to-EREV sales ratio among the new forces has flipped from 49:51 last year to 74:26 this year. Pure electric has not merely overtaken extended range; it has left it far behind.

Li Auto, the founding father of the EREV, is going through a strategic adjustment: sales of its extended-range L series are declining, while the new i8 and i6 are each selling around 6,000 units a month, below expectations, leaving the company in transition pains.

At the earnings briefing, Li Xiang reflected on this, flatly repudiating the past three years’ effort to move toward a professional-manager system and pledging a return to startup-style management.

Li Auto in effect holds up a mirror for the other new forces: in today’s auto market, staying on top indefinitely is no easy feat.

Beyond Li Auto, the three new forces still advancing carry hidden worries of their own.

First, their growth shares a common trait: all rely on a “low price, high spec” strategy.

Take the Leapmotor B10, which at a presale price of RMB 129,800 offered a lidar-equipped assisted-driving version, hardware usually found only on models above the RMB 200,000 class.

Leapmotor’s C series and B series execute “half-price Li Auto” and “half-price Model 3” strategies respectively, taking precise aim at price-sensitive buyers.

This devastating price war, however, is a double-edged sword: it buys sales volume while severely compressing profit margins.

Leapmotor earned RMB 150 million in the first three quarters, but its net margin was just 0.77%; the profit is fragile.

This model means that if the industry price war intensifies or raw-material costs fluctuate, profits face rapid erosion. Should battery costs rise or competitors cut prices further, Leapmotor’s margin could be squeezed out entirely, tipping it from profit back into loss.

At the same time, Leapmotor still lags the leaders in intelligent-driving technology and may face greater pressure in the coming L3 competition.

XPeng, for its part, used the MONA M03’s “low price plus high-spec intelligence” combination to break through the assisted-driving ceiling of traditional A-segment cars.

But today’s XPeng leans rather too heavily on the MONA M03 and faces a seriously unbalanced sales mix. Nominally a premium brand, it draws more than 40% of total sales from the roughly RMB 100,000 MONA M03, while its other models each account for no more than 16%.

This lopsided structure will inevitably hamper XPeng’s future development.

NIO’s version of low price, high spec shows mainly in fairly aggressive price cuts:

  • The Onvo L90 launched at RMB 265,800-299,800, RMB 14,000-22,000 below its presale range;
  • The new ES8’s starting price fell from RMB 528,000 to RMB 416,800, a cut of more than RMB 110,000.

The hidden cost is that this volume-for-price strategy reads as a stab in the back to existing owners. In August, a video of an angry owner berating Li Bin for five minutes shot up the trending lists, with the owner exclaiming that “NIO’s resale value is even lower than bankrupt HiPhi’s.”

Moreover, in Q3 2025 the Onvo brand (including the L90 and L60) delivered 37,656 vehicles, surpassing the main NIO brand (36,928 vehicles) for the first time.

NIO, once the “benchmark of premium pure electric,” is now being upstaged by its value-oriented sub-brand, an inversion that is quietly eroding its brand halo.

Clearly, however the new forces are faring today, each mountain has its own sorrows. The “involuted” competition in NEVs is far from over. With 2026 arriving, subsidies tapering, and L3 advancing, competition will intensify further. Facing the future, what decisions will these four automakers make?

Facing the Future, How Will the Four New Forces Choose?

With L3 approaching, the software-defined car is an ever clearer trend, and the car moves one step closer to becoming a four-wheeled robot.

The logic of competition may change: fridges, TVs, and big sofas still count for now, but as intelligent driving and interaction technology advance, the contest is tilting toward intelligence.

Facing the future, the four new forces have split into two camps. One is the idealist camp, Li Auto and XPeng, developing embodied intelligence as a second growth curve;

the other is the pragmatist camp, Leapmotor and NIO, focused on selling cars.

XPeng’s move into embodied intelligence is no surprise. From building flying cars to going all in on AI, it has marched in a sci-fi direction since its founding, and its humanoid robot’s recent catwalk strut briefly went viral.

Li Auto is the most radical. Just recently, Li Xiang gave his final answer for the company’s future: over the next decade, Li Auto will define itself as a “car robot.”

By contrast, Leapmotor’s goal for the next decade is to become a world-class automaker at the 4-million-unit scale, staying focused on automobiles.

On December 28, Leapmotor held its tenth-anniversary launch event, unveiling twin flagships: its first large SUV, the D19, and its first MPV, the D99.

The D series is set to charge at the premium segment.

Evidently, amid all the external change, Leapmotor keeps steadily expanding its product matrix.

Asked for NIO’s view on embodied intelligence, Li Bin said NIO would stay more focused. Robots undoubtedly have a future, but NIO will not build them today; rather than entering the field now, he is more interested in whose robots will end up using NIO’s chips.

There is something quite interesting here. Neither Leapmotor’s nor XPeng’s choice is surprising: Leapmotor prized the automotive supply chain from the start and belongs to the pragmatist camp, while XPeng has been full of sci-fi flavor from day one and belongs to the idealist camp.

The surprises are NIO and Li Auto.

A company’s choice on embodied intelligence in fact resembles the earlier choice of whether to go pure electric.

On the pure-electric route, NIO was the resolute idealist, holding the BEV line, while Li Auto was the pragmatist, choosing extended range as a transition.

Yet facing the next frontier of automotive intelligence, embodied intelligence (treating the car as a four-wheeled robot), the two have swapped roles dramatically: the pragmatic Li Auto now embraces the robot “idealism,” while the one-time idealist NIO looks the more pragmatic, returning its focus to the car itself.

This seemingly contradictory role reversal in fact reflects the two companies’ different development stages and financial conditions.

Li Bin has already paid dearly for the pure-electric ideal; NIO’s cumulative losses exceed RMB 100 billion.

To preserve its BEVs’ competitiveness against extended-range rivals, it spared no expense rolling out battery-swap stations at scale.

NIO and Li Bin have paid too high a price holding the pure-electric line. Perhaps Li Bin is tired, or perhaps he has changed and become more pragmatic; either way, he chose not to build robots.

Moreover, NIO’s current condition leaves Li Bin little choice but to compromise with reality.

NIO’s top priority now is profitability: it owes its long-standing investors an answer, and consumers one as well.

Li Bin has said in internal talks that roughly 30%-40% of potential buyers pass on NIO out of fear that it might collapse.

Although NIO has more than RMB 30 billion in cash on its books, consumers have watched too many NEV makers go under, and the seed of fear has been planted.

Frankly, NIO’s margin for error is slim, and it has no choice but to become pragmatic.

Li Auto, by contrast, was the more pragmatic player before; the success of its extended-range strategy long ago won it “financial freedom,” with nearly RMB 100 billion of cash on its books.

This grants it greater room for trial and error and the capital for long-term positioning.

And as the extended-range era gradually draws to a close, Li Auto finds itself back at a new starting point, rethinking and repositioning its future at a fresh crossroads.

Li Auto needs a new wall of competitive differentiation.

If the car’s ultimate destination is the robot, then choosing now to pour resources into embodied intelligence and robotics is not only a vision of the car’s future form but an attempt to build a new moat for the coming competition, trying, as with extended range years ago, to lead through differentiation once again.

In a recent interview, Li Xiang said the car robot of the future should eventually have no infotainment cockpit; the car becomes an embodied intelligence of autonomous driving plus spatial intelligence plus body control.

But imagination is beautiful and reality harsh. NIO, while merely waiting for the pure-electric spring to arrive, had to build swap stations and develop supercharging, spending vast sums of money and energy.

One can see that Li Auto, in building cars from a robot’s standpoint, will inevitably meet more tribulations along the way.

Of course, if Li Auto succeeds, it may well open another era of technological renewal.

In summary, the merits of these strategic choices are dynamic.

In the short term, before the technical tipping point of embodied intelligence arrives, the more pragmatic strategies of NIO and Leapmotor, focused on today’s car products and profitability, may serve them better in holding their ground amid fierce competition.

In the long term, once the breakthrough in embodied intelligence comes, early movers in the idealist camp like Li Auto and XPeng may leverage their accumulated technology to strike from a higher dimension, while the pragmatists may face the pains of transition, just as Li Auto today must endure the inevitable pains brought by the close of the extended-range era.

Of course, whether an automaker chooses to cultivate the present or to imagine the future, each is a rational choice grounded in its survival situation and stage of development.

NIO and Leapmotor represent the logic of “survival first”; Li Auto and XPeng represent the logic of “evolution first,” seeking to reshape the dimensions of competition through embodied intelligence.

On the eve of a technology explosion, the collision between pragmatism and idealism concerns not only the fates of these companies but also the transformation of the auto industry over the next decade. We look forward to seeing China’s new forces, exploring these different paths, cross the cycle and arrive at a more mature tomorrow.
