CRBC News

How AI Will Reshape Reporting in Five Years — What ChatGPT and Gemini Predict

AI tools are changing how work gets done. I tested ChatGPT and Gemini by asking each to predict how my reporting role covering the Big Four and workplace culture will evolve in five years. Both tools said drafting and basic research will become more automated, shifting journalists' time toward verification, secure sourcing and interpretation. Gemini urged greater caution — warning of surveillance risks and recommending RAG literacy — while ChatGPT emphasized prompt skills and critical evaluation. The common thread: reporters who master AI tools and ethics will retain their value as veracity gatekeepers.

Artificial intelligence is already reshaping how professionals work, and over the next five years many newsroom roles are likely to shift as a result. Big Four firm EY has built an internal initiative, AI Now 2.0, in which employees answer questions about their roles and feed the responses into EYQ, the firm's internal ChatGPT-like tool, to see how AI might change their jobs and what skills they should develop.

Why this matters for reporters

Professional services firms must make AI work responsibly in-house, a mandate that is already forcing rapid changes in pricing, staffing and services. Newsrooms face a parallel set of pressures: new tools promise efficiency gains, but they also force journalists to rethink sourcing, verification and the skills that define their value.

My experiment: asking ChatGPT and Gemini

To test how public chatbots would analyze a journalist's future role, I asked ChatGPT and Google’s Gemini to act as organizational strategists and produce a concise future-role analysis. I described myself as a reporter covering the Big Four and workplace culture, listed core responsibilities, and asked each system to highlight the most consequential changes coming in five years.
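I ran the test in each chatbot's consumer interface, but the same comparison can be scripted. The sketch below uses the vendors' official Python SDKs; the model names and the prompt wording are my illustrative assumptions, not a record of the exact prompts I used.

```python
# Hypothetical reproduction of the experiment via the two vendors' Python
# SDKs. The actual test was run in the chat interfaces, so the model names
# and prompt text here are illustrative assumptions.
import os

from openai import OpenAI            # pip install openai
import google.generativeai as genai  # pip install google-generativeai

PROMPT = (
    "Act as an organizational strategist. Produce a concise future-role "
    "analysis for a reporter covering the Big Four and workplace culture: "
    "how will the role change over five years, and which skills matter most?"
)

# ChatGPT via the OpenAI API (API key read from OPENAI_API_KEY).
openai_client = OpenAI()
chatgpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",  # assumed model choice
    messages=[{"role": "user", "content": PROMPT}],
)
print("ChatGPT:", chatgpt_reply.choices[0].message.content)

# Gemini via Google's generative AI SDK (key read from GOOGLE_API_KEY).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_reply = genai.GenerativeModel("gemini-1.5-pro").generate_content(PROMPT)
print("Gemini:", gemini_reply.text)
```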

ChatGPT’s verdict

ChatGPT predicted that AI will increasingly handle structural drafting, background research and context generation. Newsrooms will gain built-in tools — smart templates, instant retrieval of past coverage and automated research aides — that speed up the reporting workflow. That shift, the model argued, will move a reporter's comparative advantage toward sourcing, judgment and interpretive context that AI cannot reliably produce: leaked documents, off-the-record nuance, organizational politics and subtle narrative framing.

On skills, ChatGPT urged reporters to develop AI fluency: learn to craft effective prompts, evaluate outputs critically, and use analytics to surface story leads earlier. Its ethics guidance was simple and familiar: don’t trust AI outputs uncritically; verify and cross-check.

Gemini’s analysis

Gemini produced a much longer, more detailed strategy document and sounded a more cautious note. It warned that advanced corporate surveillance and secrets-detection tools could make scoops harder to land, and it recommended an immediate upgrade in secure sourcing tradecraft. That recommendation was striking: journalists already take pains to avoid leaving digital traces that could expose sources, and it is still unclear how AI-driven surveillance will change that tradecraft.

Like ChatGPT, Gemini predicted that AI will augment research and drafting, shifting a reporter’s time toward verification. It advised building specific skills such as RAG (retrieval-augmented generation) literacy and using verification tools like Reality Defender. Gemini emphasized that adopting AI tools alone won’t guarantee job security; reporters must evolve into ethical supervisors and veracity gatekeepers for the information that underpins stories.
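Gemini's "RAG literacy" recommendation is less exotic than it sounds. Retrieval-augmented generation just means fetching the passages most relevant to a question from a trusted archive and handing only those to the model, which keeps its answers checkable against named sources. Here is a deliberately tiny, dependency-free Python sketch of that loop; the archive entries and the word-overlap scoring are toy assumptions, and production systems use vector embeddings instead.

```python
# Toy retrieval-augmented generation (RAG) loop: score a small archive of
# past coverage against a question, keep the top matches, and build a
# grounded prompt. Real systems replace the word-overlap scoring with
# vector embeddings, but the retrieve-then-generate shape is the same.

ARCHIVE = [  # hypothetical archive entries for illustration only
    "2023 story: EY piloted an internal chatbot, EYQ, for staff questions.",
    "2022 story: Big Four firms expanded hybrid-work policies post-pandemic.",
    "2024 story: Deloitte and KPMG announced AI training for auditors.",
]

def score(question: str, passage: str) -> int:
    """Crude relevance score: count question words appearing in the passage."""
    q_words = set(question.lower().split())
    return sum(1 for w in passage.lower().split() if w in q_words)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k archive passages that best match the question."""
    return sorted(ARCHIVE, key=lambda p: score(question, p), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that confines the model to the retrieved sources,
    which is the step that makes its output checkable."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the sources below; say so if they are insufficient.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What has EY done with internal AI tools?"))
```

The point for reporters is the final prompt: because the model is confined to retrieved sources, every claim in its answer can be traced back to a document a human can verify.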

What I took away

The exercise confirmed some familiar themes: AI can boost efficiency and assist research and drafting, but verification and source-building remain essential. The most valuable human skills are likely to be empathy, judgment, secure sourcing, and the ability to interpret signals that AI cannot surface reliably.

Simon Brown, global learning and development leader at EY, says the firm’s tool "helps to show and bring to life in a totally relevant way where AI might be able to help them." I did not see EYQ's internal outputs or prompts, so this comparison is not a scientific side-by-side of proprietary and public models.

Overall, the chatbots did not offer a radical surprise. They reinforced that reporters who learn to use AI thoughtfully — while doubling down on verification, secure sourcing and interpretive judgment — will remain essential. The experiment was a useful prompt to think concretely about what to learn next and to explore verification and tradecraft tools as AI becomes more embedded in newsrooms.
