DeepSeek Founder Liang Wenfeng Named One of Nature's Top 10 People Who Shaped Science in 2025

Nature named DeepSeek founder Liang Wenfeng among its top 10 people who shaped science in 2025, citing the company's influential AI models. DeepSeek's V3 (December) and R1 (January) matched leading Western systems at far lower training cost and helped trigger a market sell-off that erased nearly US$1 trillion in tech value. By open-sourcing its models and publishing R1's training details in Nature, DeepSeek accelerated research adoption and altered assumptions about AI leadership and costs.

Liang Wenfeng, the 40-year-old founder and CEO of Hangzhou-based AI firm DeepSeek, has been named by the British journal Nature as one of its top 10 "People Who Shaped Science in 2025." Nature described Liang as a "Chinese finance whizz" whose breakthrough AI models drew global attention and prompted debate about the future of artificial intelligence and the tech industry.
The profile highlights the disruption following the January release of DeepSeek-R1, a reasoning model whose performance suggested that the United States might not be as far ahead in AI as commonly assumed. DeepSeek — a spin-off from Liang's High-Flyer Quantitative Fund — attracted intense scrutiny after a market sell-off on 27 January erased nearly US$1 trillion in technology market value, including roughly US$600 billion from Nvidia alone.
Nature noted that DeepSeek's R1 and an earlier model, V3 (released in December), matched the capabilities of comparable systems from leading Western labs at a fraction of the training cost. For context, Nature reported that training Meta Platforms' Llama 3 405B model cost more than ten times DeepSeek's reported expenditure.
DeepSeek's decision to publish its model code and training details as open source was singled out as a pivotal move. Researchers worldwide have adapted the models for domain-specific applications, and the release prompted other firms in China and the US to follow suit with open models of their own.
"In many ways, DeepSeek has been hugely influential," said Adina Yakefu, a researcher at AI platform Hugging Face, in Nature's profile.
Despite US restrictions limiting mainland Chinese firms' access to advanced AI chips, DeepSeek demonstrated that alternative engineering approaches and training techniques can produce high-performing models. In September, DeepSeek's engineers published in Nature the technical details of how R1 was built and trained, making R1 the first major large language model to undergo peer review in the journal and giving researchers a practical "recipe" for training reasoning models.
Nature's feature also highlighted several other notable scientists on its 2025 list. Among them was Du Mengran, a geoscientist from the Chinese Academy of Sciences, whose 2024 dives into the hadal zone uncovered the deepest-known animal ecosystem at the bottom of the Kuril-Kamchatka Trench. Others on the list included immunologists, public-health negotiators, systems biologists, entomologists, neurologists, physicists, and the first patient saved by personalized CRISPR therapy.
DeepSeek's rise raises important questions about transparency, open science and the economics of AI development. Liang's inclusion in Nature's top 10 underscores how technical breakthroughs, publication choices, and market impacts can intersect to reshape scientific and commercial landscapes.
This article is based on reporting by the South China Morning Post and Nature's 2025 "Nature's 10" feature.