AI Gives an Illusion of Expertise — New Report Warns It’s Eroding Workers’ Core Skills
The Work AI Institute report warns that generative AI can create an "illusion of expertise," making workers feel more capable while their underlying skills erode. The risk is greatest in creative and knowledge‑intensive roles and among early‑career employees who miss out on apprenticeship learning. Researchers contrast a beneficial "cognitive dividend," when AI complements existing expertise, with a harmful "cognitive debt," when it becomes a reflexive shortcut. The report urges leaders to measure AI by outcomes (quality, customer satisfaction, innovation) rather than raw usage metrics.
A new report from the Work AI Institute, produced with researchers from institutions including the University of Notre Dame, Harvard University, and UC Santa Barbara, warns that generative AI is giving many office employees a false sense of expertise while their fundamental skills quietly deteriorate.
Rebecca Hinds, head of the Work AI Institute at workplace search company Glean and a coauthor of the report, told Business Insider that AI often blurs the line between a person's knowledge and the machine's suggestions. "AI is putting expertise into our hands in a way that's not always predictable," she said. "There's often this illusion that you have more expertise, more skills than you actually do. Even if you're very well aware you're using the technology, it's often unclear where your knowledge ends and where the technology begins."
How the Illusion Forms
Hinds compares the phenomenon to the early days of search engines, when easy access to information sometimes substituted for genuine understanding. With generative AI, the effect is more powerful: workers can quickly produce drafts, ideas, and analyses that feel polished — but may hide gaps in underlying judgment and domain knowledge.
The report highlights a common pattern: employees use AI to beat the "blank page" by generating first drafts or initial solutions. While this accelerates output, it also removes the messy, time-consuming process of wrestling with ideas — the very practice that builds deep understanding and the confidence to defend work in meetings. "That process is highly inefficient," Hinds said, "but it's also really healthy." If workers lean too heavily on AI to skip it, she warned, their skills are going to atrophy.
Cognitive Dividend vs. Cognitive Debt
The researchers describe AI's effects as either a "cognitive dividend" or a "cognitive debt." When used intentionally as a partner in areas where workers already have expertise, AI can free time and sharpen judgment. But when it becomes a reflexive shortcut into unfamiliar domains, it produces weaker skills and misplaced confidence.
Who’s Most At Risk
Early-career employees face the greatest exposure. Roles that traditionally function as apprenticeships — junior developers learning from senior engineers, entry-level marketers learning campaign craft, or young analysts building models from scratch — may be hollowed out if routine tasks are automated or juniors rely entirely on AI. Without hands-on practice and mentoring, these employees may never develop the foundational skills they need to advance.
Perverse Incentives From Metrics
Hinds warns that organizational measurement can unintentionally make the problem worse. Some companies equate AI adoption with clicks or usage metrics and even tie those numbers to performance reviews. That encourages employees to maximize tool use rather than invest time in genuine understanding. "Organizations stack-ranking employees based on how many times they're clicking an AI tool is a big red flag," she said.
How Leaders Should Respond
Hinds does not recommend rejecting AI. Instead, she urges deliberate adoption that aligns with business outcomes. She suggests leaders and teams ask three guiding questions:
- What roles should stay deeply human? Identify the parts of work that develop judgment, creativity, and motivation — and resist fully automating those.
- Where is AI truly adjacent? Use AI where it supplements existing expertise, not as a shortcut into domains employees don't understand.
- What are you measuring? Focus less on raw tool usage and more on whether AI improves outcomes like quality, customer satisfaction, and innovation.
“AI does not magically transform you as a leader,” Hinds said. “More often, it amplifies what already exists within the organization.”
Measured and contextualized, AI can be a force multiplier. Left unchecked, however, it risks producing a workforce that appears more expert than it truly is — and a generation of employees with weakened core skills.
Read the original reporting on Business Insider.