Beyond LLMs: The Hidden Dangers of AI in Scientific Research

My focus was fixed on the disruptions of LLMs like ChatGPT until I realised, through Niall Ferguson's insights, that specialised scientific AI is the greater global threat.

A Faustian bargain being made?

Realising My Attention Was on the Wrong Type of AI

I genuinely believed I understood the risks associated with artificial intelligence. Like many others, my focus had been heavily on large language models, such as ChatGPT and Claude—tools I interact with daily and greatly value.

I thought these sophisticated chatbots embodied AI's biggest threats, from misinformation to the potential displacement of countless jobs. But recently, something shifted dramatically in my perspective.

An Eye-Opening Insight from Niall Ferguson

My realisation came after reading an enlightening interview with historian Niall Ferguson, a respected expert known for connecting historical events with contemporary geopolitical strategies.

Ferguson clearly explained something I'd missed: while language models capture headlines and public imagination, they're only a small part of AI’s broader landscape. The real threat is something quieter, highly specialised, and genuinely alarming.

“Mankind is in the process of creating an intelligence it cannot fully understand, and whose capacities it may not be able to control.”
— Henry Kissinger, Genesis: Artificial Intelligence, Hope, and the Human Spirit

The True, Hidden Power of Specialised AI

According to Ferguson, the critical threat lies in AI systems explicitly engineered for research, strategic, and military uses:

  • AI designed to create highly potent viruses, potentially more lethal and infectious than any naturally occurring pathogen.
  • Autonomous weapon systems capable of identifying and engaging targets more rapidly and accurately than human operators.
  • AI-driven platforms that optimise nuclear strategy, geopolitical analysis, and real-time decision-making at a frighteningly precise level.

“It’s the power of the scientific AI that should worry us.”
— Niall Ferguson

These aren't speculative scenarios or distant threats; they already exist. Backed by massive computational resources, extraordinary financial investments, and protected by stringent secrecy, these systems directly influence global power dynamics.

What makes them especially dangerous is the Faustian dynamic: even if only states and multinationals have the resources to build them, the knowledge and tools they create can leak, mutate, or be repurposed—putting catastrophic capabilities within reach of much smaller actors. This is the contradiction Henry Kissinger foresaw. While nations fight to control compute, the real instability may come from diffusion: scientific AI that outpaces the institutions meant to contain it.

Strategic AI and the Rise of Authoritarian Power

Ferguson underscores the critical advantage authoritarian states, particularly China, gain through strategic AI deployment.

Combining vast surveillance data with centralised political control and immense computing power, these nations create powerful tools for both domestic control and international influence.

Democracies face a substantial challenge responding effectively to this concentrated power.

Why This Matters for the Future of Geopolitics

This insight profoundly reshaped my understanding of geopolitical competition. While public debate fixates on regulating chatbots or tackling AI-generated content in social media, the real and critical contest unfolds quietly, hidden from view.

It's a competition around infrastructure, energy consumption, computational power, and strategic ambition. But it’s also a contest of restraint: the race to develop such tools is outpacing our capacity to imagine their consequences—or to govern their misuse.

Shifting My Focus

Ferguson's insights fundamentally challenged my earlier beliefs, prompting me to realign my attention from consumer-level AI to the specialised, strategically significant systems quietly shaping global power today. It’s a confronting shift—one that has left me both more curious and more uneasy.

Over the coming weeks, I’ll be updating my reading list to reflect this new focus: not just books about LLMs and ethics, but deeper works on AI’s role in science, security, and strategy. If we want to understand where this is going, we need to look well beyond the chat window.

Further reading:

  • “America Is in a Late Republic Stage Like Rome” — Niall Ferguson, NOEMA. History suggests republics don’t last more than 250 years.
  • Kissinger — Niall Ferguson. Winner of the Council on Foreign Relations Arthur Ross Book Award.
  • Genesis: Artificial Intelligence, Hope, and the Human Spirit — Henry Kissinger’s final book.