Anthropic Study: AI Boosts Engineer Productivity — But Raises Concerns About Skill Atrophy and Apprenticeship Loss

Anthropic’s internal study finds AI significantly boosts engineer productivity — self-reported gains rose from about 20% to roughly 50% year over year — by assisting with testing, documentation, and minor bug fixes. However, senior staff warn that growing reliance on AI may erode core skills and reduce apprenticeship interactions, threatening the pipeline of senior talent. The article argues the future of work will shift from execution to judgment and orchestration, requiring targeted training to preserve the ability to evaluate and supervise AI systems.

You have to give Anthropic credit. Over the past year the company has been unusually open about how artificial intelligence is reshaping work, using its own staff as a live case study. In a study published last week, Anthropic reported that AI is delivering substantial gains in both productivity and quality — though the effects are nuanced and sometimes troubling.
Within Anthropic, engineers and researchers self-reported productivity increases from roughly 20% to about 50% year over year. Employees say they rely on AI for a wide range of tasks: exploratory code testing, drafting technical documentation, and fixing so-called “papercuts” — the small, noncritical bugs and annoyances that often get postponed in favor of higher-priority work. This does not mean Claude is independently conceiving and designing solutions; rather, the model is becoming increasingly effective at reducing engineers’ workloads, shortening development cycles, and accelerating learning.
Gains — And New Risks
The news is not all positive. A recent example from medicine highlights a key concern: gastroenterologists who relied on AI assistance to improve colonoscopy accuracy later showed declines in their baseline detection skills once the AI was removed. Some senior engineers at Anthropic voice a similar worry: that core technical skills could erode as AI takes on a larger share of the work.
“More junior people don’t come to me with questions as often,” one senior engineer told Anthropic researchers, acknowledging that AI can answer questions faster and often more effectively — while also reducing everyday interpersonal mentorship opportunities.
That decline in apprenticeship interactions matters. Asking supervisors questions and learning through hands-on apprenticeship are tacit workplace practices that help junior staff develop judgment, communication norms, and technical depth. If AI removes the lower rungs from the apprenticeship ladder, the long-term supply of senior talent could be jeopardized.
What Matters Going Forward
The broader lesson may be that the locus of value in knowledge work is shifting from execution toward judgment, architecture, and orchestration. As routine tasks are increasingly automated, human roles will emphasize evaluating AI outputs, setting system architecture, and making high-stakes decisions that require contextual judgment.
To make that transition successfully, organizations and policymakers should invest in training and education that preserve essential technical understanding and cultivate tacit skills — mentoring, critical questioning, and supervisory judgment. Even when humans act primarily as algorithmic supervisors, they must be able to assess the quality of machine-produced outputs and intervene when systems err.
Anthropic’s engineers provide a real-time preview of these dynamics: productivity rises and drudgery is offloaded to tireless systems, while human work shifts toward higher-level assessment and supervision. The risk is not predominantly mass unemployment but a growing scarcity of the human capabilities needed to guide and validate increasingly autonomous tools.
Brent Orrell is a senior fellow at the American Enterprise Institute, specializing in job training and workforce development.