
Title: Five Key Issues to Watch in AI in 2025

Author: Andrew Imbrie
Date Published: December 13, 2024

A former colleague of mine who worked as a speechwriter in the U.S. government once said the job felt like going back to college but changing his major every week. 

 I’ve been thinking about that analogy lately as someone who tries to follow the vertiginous developments in artificial intelligence (AI) policy over the last few years. It’s not just the pace of progress or the volume of information that can feel disorienting; it’s also the stakes for governing a general-purpose technology that promises to impact almost every field of endeavor. 

 The task of navigating the road ahead in AI will require equal parts ambition, humility, and comfort with uncertainty. For those of you who are new to these debates and for those who are tracking them closely day to day, it’s helpful to step back and think about the major questions that will define the AI policy landscape in 2025.  

 Here are five that I’m tracking: 

1. Is data really the “new oil”?

There’s no question that today’s large language models (LLMs) use a lot of data for training, but it’s not simply the quantity of data that matters. The quality of the data and its relevance to the task are just as important. If you talk to a data scientist, they will emphasize that it takes a great deal of effort to turn raw data into a form that is usable for machine learning. The story gets even more complicated when we focus on real-time data integration and common standards for data sharing in mission-critical organizations and with trusted allies and partners. Data matters, but too often we reduce its value to simple propositions or overlook the role of bias in shaping outcomes that privilege some interests over others. Sociotechnical perspectives compel us to reflect on how we interpret the data and the way our assumptions can interact with data collection processes to worsen inequalities or obscure understanding of our social and strategic environments. 
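To make that point concrete, here is a minimal sketch of the kind of preparation a data scientist might do before any training begins. The file name and column names are illustrative assumptions, not a reference to any particular dataset, and the steps shown are only a small slice of real data work.

```python
# A minimal, illustrative sketch of pre-training data preparation.
# The file name and the "text"/"label" columns are hypothetical.
import pandas as pd

df = pd.read_csv("raw_examples.csv")  # hypothetical raw export

# Drop rows with missing text or labels, and exact duplicate texts.
df = df.dropna(subset=["text", "label"]).drop_duplicates(subset=["text"])

# Normalize whitespace and discard very short, boilerplate-like entries.
df["text"] = df["text"].str.strip().str.replace(r"\s+", " ", regex=True)
df = df[df["text"].str.len() > 20]

# Check class balance before training; skewed data can bake bias into outcomes.
print(df["label"].value_counts(normalize=True))

df.to_csv("clean_examples.csv", index=False)
```

Even a toy pipeline like this makes clear why quality and relevance, not raw volume, dominate the conversation among practitioners.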

Amid the hype, one thing is clear: the hunt for high-quality data to power innovation and train ever-more capable AI models proceeds apace. Keep an eye on the legal questions playing out in the courts over training LLMs on copyrighted materials, not least the debates over the rights of digital artists and other creators. With some predicting bottlenecks on the stock of publicly available text, it will be worth tracking efforts at pioneering small data approaches to AI and the use of synthetic data, multi-sensory learning, privacy-enhancing technologies, and data inventories and public-private partnerships to push the curve of discovery. The role of biological data for training will also raise complex policy questions and require a nuanced approach to manage risks while enabling continued advances in the life sciences.   

2. Where will the next breakthroughs come from?

While some are banking on computing power as the linchpin of leadership in AI (see the next question), others point to algorithmic innovations and new systems architectures as the key enablers of progress. Over the coming year, the smart money may be on what one of my students has termed “system-to-model innovation.” OpenAI’s GPT, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama have gone mainstream, but the development ecosystems around them are equally critical. It’s no coincidence that OpenAI used reinforcement learning to build chain-of-thought reasoning, a system-level innovation that teaches the model to spend more time working through intermediate steps before responding to a query, into its latest o1 model.
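For readers new to the idea, here is a toy sketch of what a chain-of-thought style prompt looks like next to a direct one. The prompt wording and the generate() placeholder are illustrative assumptions on my part; they are not OpenAI’s training setup, only the prompting pattern the research literature describes.

```python
# Toy illustration of chain-of-thought prompting (not any lab's internal method).
# `generate` is a stand-in for whatever text-generation model or API you use.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in the model or API of your choice")

question = "A train leaves at 2:40 pm and arrives at 4:15 pm. How long is the trip?"

# A direct prompt asks for the answer immediately.
direct_prompt = f"Answer with the duration only.\nQ: {question}\nA:"

# A chain-of-thought prompt nudges the model to spend more tokens on
# intermediate reasoning before committing to a final answer.
cot_prompt = (
    "Work through the problem step by step before giving the final answer.\n"
    f"Q: {question}\n"
    "A: Let's think step by step."
)

print(cot_prompt)
```

Systems such as o1 take this idea further by training the model, rather than the prompt, to allocate more reasoning effort at inference time.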

Researchers will continue to explore novel paradigms for AI development that focus on reasoning capabilities and what psychologists call “System 2” thinking. New and less compute-intensive approaches will attract some of the brightest minds. Decentralized training runs for language models will help smaller firms compete even as such methods pose fresh challenges for AI governance. Policies, systems interfaces, evaluations, and methodologies for human-machine teaming will evolve to harness the benefits of collaboration while mitigating the risks of automation bias and other potential harms. Broader definitions of AI talent and investments in education and workforce development, including nontraditional pathways, will focus critically important policy attention on AI readiness and the impact of the technology on workers and livelihoods. The application of AI to science will be especially noteworthy. Recent advances in biomolecular modeling and the rise of DNA foundation models highlight AI’s tremendous potential for advancing drug discovery and public health as well as the need for responsible practices and policy guidelines to balance the tradeoffs between security, privacy, and transparency.   

Other parts of the so-called “technology stack” deserve equal attention. Investments through the CHIPS and Science Act are promoting a revival of semiconductor manufacturing in the United States, and the National Semiconductor Technology Center is well-placed to support workforce needs and next-generation research and development. For those squinting hard at the data and failing to see traces of an AI productivity boom, it’s also important to track competition dynamics in the marketplace and the diffusion of AI across societies, economies, and governmental institutions. Diffusion of the technology will increase the likelihood of identifying new applications and seems poised to shuffle the deck on the relative share of computing resources devoted to running AI workloads (also known as “inference”) over chip clusters for pre-training large models. Updates to the big foundation models will capture the headlines, but the speed, scale, and depth of adoption of AI may be more critical for a nation’s long-term growth and competitiveness.  

3. Will “scaling laws” continue to hold for AI, and if so, for how long, at what cost, and who benefits?

Perhaps no question fires the engines of debate like this one—and for good reason. AI researchers often speak in terms of “scaling laws”: larger neural networks are more powerful, and the more data and computational resources for training, the better—or at least that’s the generalized observation. Recent developments provide some evidence that the leading models may be underperforming the expectations of scaling laws, though others point to inference scaling laws as the next frontier. It’s worth underscoring that many of today’s estimates of diminishing returns on data and computing power apply specifically to LLMs. Other countries are not necessarily putting all their eggs in the LLM basket, and one of the key questions for policymakers and science funding organizations over the next months and years is whether and how to place a more diverse portfolio of bets on the future of AI innovation.
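To make the idea concrete, one widely cited formulation (the “Chinchilla” analysis by Hoffmann and colleagues) models the training loss of a language model as a power law in parameter count and training tokens; the symbols below follow that paper, and the constants are fitted empirically rather than assumed here.

```latex
% A common empirical form of LLM scaling laws (Hoffmann et al., 2022):
% N = model parameters, D = training tokens; E, A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Under this form, returns to more parameters or more data diminish smoothly rather than hitting a wall, which is why the debate turns on the size of the exponents and the cost of continuing to push them.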

Here’s another wrinkle: modern AI is not only data-hungry but energy-intensive. With climate change worsening and AI driving up the greenhouse gas emissions of the major tech companies, policymakers will face difficult tradeoffs as they contend with sustaining AI’s cost trajectories through 2030. The demand for new data centers and larger training runs will collide with an already strained energy grid, permitting obstacles, and the complex geopolitics of building AI infrastructure abroad and managing risks to supply chains. These policy and energy dynamics are inducing a scramble over small nuclear reactors and renewed attention to “point load” management for data centers and upgrading electrical infrastructure. 

Over the next year, one of the trends to watch is whether AI will accelerate or disrupt the clean energy transition. Another is whether algorithmic improvements and new approaches to AI will temper the soaring costs of scaling. Those of us who focus on national security will also have much to discuss. As the U.S. National Security Memorandum on AI makes clear, governments increasingly worry about the security of AI model weights—the mathematical parameters that derive from the training process—and the theft or misuse of AI models to enable sophisticated cyber attacks, fuel disinformation, and uplift biological attack capabilities. Given these stakes and the challenges of securing AI infrastructure, it’s all the more critical to ask: Who benefits from AI today and whose voices are at the table to represent the public’s interest?

4. Should we pump the brakes, press on the gas, or apply shock absorbers?

Let’s start with pumping the brakes. AI models are unreliable. They hallucinate and reflect our biases back at us. While their inner workings are understood at some level, notably thanks to breakthroughs over the past year, neural networks with hundreds of billions, if not trillions, of parameters are not easily interpretable at scale. Optimization techniques, such as retrieval augmented generation and knowledge graphs, can improve performance, but integrating them into enterprise applications is as much an art as a science. AI technologies can also act in ways that confound our intentions. Calibrating when we should rely on them to make decisions remains a significant challenge, all the more so in light of efforts to create general-purpose AI agents that can plan and take independent actions in the real world. As if these issues weren’t pressing enough, we also have to guard against the threat of bad actors, state and nonstate alike, misusing AI, and against the structural risks that may arise as organizations deploy AI in safety-critical systems.
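As a concrete illustration of retrieval augmented generation, here is a toy sketch that retrieves the most relevant passages for a query and prepends them to the prompt so the model can ground its answer in sources rather than guess. The sample documents and the simple TF-IDF retriever are simplifications chosen to keep the example self-contained, not a production design.

```python
# Toy sketch of retrieval augmented generation (RAG): retrieve relevant
# passages first, then ground the model's prompt in them.
# The documents and the TF-IDF retriever are deliberate simplifications.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The CHIPS and Science Act funds semiconductor manufacturing in the US.",
    "Retrieval augmented generation grounds model outputs in source documents.",
    "Scaling laws relate model size and training data to loss.",
]

query = "How does retrieval augmented generation reduce hallucinations?"

# Embed the documents and the query with TF-IDF, then rank by cosine similarity.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
top_k = np.argsort(scores)[::-1][:2]

# Prepend the retrieved context so the model answers from it rather than from memory.
context = "\n".join(documents[i] for i in top_k)
prompt = (
    "Use only the context below to answer.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}\nAnswer:"
)
print(prompt)
```

The hard part in practice is everything around this loop: chunking documents sensibly, keeping the index fresh, and evaluating whether grounding actually improved the answers, which is why integration remains as much an art as a science.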

It is understandable that some have called for a pause on AI development or, at a minimum, slowing the rate of progress until our societies can figure out how to govern this technology responsibly. Such calls reflect longstanding historical and socioeconomic patterns, where new technologies raise enduring concerns over loss of jobs, identities, stability, and public trust.    

Others make just the opposite case: market opportunities combined with promising developments in AI and energy innovations demand a full-speed-ahead approach. If democratic governments intrude with heavy hands, they will cede the advantage to authoritarian competitors and undermine the potential benefits of AI for economic growth and material abundance, human health, and planetary welfare. 

For those who wish to see AI progress continue but worry about the risks, there’s a third approach gathering steam: apply shock absorbers and a better GPS. AI is riding a wave of exuberance and investment, but it’s important to anticipate the risks before they spiral into widespread harms and to create incentives across the life cycle of AI development that foster transparency, participation, and accountability. The growing network of AI Safety Institutes will refine our safety benchmarks and improve the science of model evaluations. Debates at the state, federal, and international levels will continue on AI incident tracking, third-party auditing, content authenticity tools, data protections, safety cases, risk management, and liability for AI harms. Progress on identifying and managing the risks will allow for sensible regulations that enable trust, promote adoption, and encourage responsible use. Attention to these issues will also provide crucial ballast for effective governance: upgrading the AI literacy of policymakers, legislators, and regulators while also improving governmental capacity to deliver services and articulate a public vision for AI.

5. What is the right goal: AI for democracy, or democratizing AI?

All of the preceding trends raise the question: What is the right goal for AI policy? Should it be to empower coalitions of democracies to lead globally against increasingly emboldened authoritarian competitors, or should policymakers seek to facilitate open access, open documentation, and wider availability of AI models, tools, and infrastructure to promote innovation and safety while monitoring the risks? The former speaks to the intensity of geopolitical rivalries around AI and the array of tools available to governments to protect sensitive technologies; the latter reflects the traditions of openness that have long defined the field. The two perspectives are not necessarily opposed to one another, and evaluating the risks and benefits of model safeguards and release methods cannot be reduced to an easy binary of open versus closed.

A third perspective, though, is worth taking seriously as we head into the coming year: the global majority. For many countries, the key question for AI policy isn’t whether democracies win or authoritarian competitors gain the upper hand; it’s about access, opportunity, and the application of AI to meet domestic needs and make progress toward the UN’s Sustainable Development Goals, such as education, health care, food security, and decent work. And it’s not just access and adoption that matter. Countries around the world are grappling with the legal and ethical implications of this technology and demanding a greater voice in AI governance. Creating opportunities for a broader, more inclusive global dialogue on AI will be imperative, even as local and regional initiatives gather steam. The promise of AI is inseparable from the looming dangers, but much of the story of 2025 may be written in capitals far from Washington, London, Brussels, Moscow, and Beijing. Emerging markets around the world will continue to adapt AI to local conditions, and the diffusion of its risks and benefits will entangle us all. President Franklin D. Roosevelt’s words ring truer than ever today: “The world is getting so small that even the people of Java are getting to be our neighbors now.”

As the laboratory of experiments on AI unfolds around the world, nations will need to find ways to collaborate, avoid regulatory fragmentation, and network increasingly distinctive governance regimes. Diplomats will hash out norms for responsible use and propose steps to lessen the risks of escalating competitive pressures; researchers will develop nascent efforts to promote AI verification and creative institutional design; and militaries will continue to test and deploy AI-enabled capabilities on the battlefield, creating new exigencies for global cooperation amid intensifying transnational challenges and geopolitical rivalry. If ever there was a time to change one’s major to AI and international affairs, it’s now. 

Andrew Imbrie is Associate Professor of the Practice in the Gracias Chair in Security and Emerging Technology at Georgetown’s School of Foreign Service. He is also a faculty affiliate at the Center for Security and Emerging Technology.