
A Policymaker’s Guide to Foundation Models

By Christina Montgomery, Chief Privacy & Trust Officer, IBM; Francesca Rossi, AI Ethics Global Leader, IBM Research; Joshua New, Senior Fellow, IBM Policy Lab
May 1, 2023

 

The last few years, even the last few months, have seen AI breakthroughs come at a dizzying pace. AI systems that can generate paragraphs of text as well as a human, create realistic imagery and video from text, or perform hundreds of different tasks have captured the public's attention thanks to their high level of performance, creative potential, and, in some cases, the ability for anyone to use them with little to no technical expertise.

This wave of AI is attributable to what are known as foundation models. As the name suggests, foundation models can serve as the foundation for many kinds of AI systems. Using machine learning techniques, these models can apply information learned about one situation to another. While the amount of data these models require is considerably more than a person needs to transfer understanding from one task to another, the result is broadly similar. Once you spend enough time learning to cook, for example, you can figure out how to cook almost any dish without too much effort, and even invent new ones.

This wave of AI looks set to replace the task-specific models that have dominated the AI landscape to date, and the potential benefits of foundation models to the economy and society are vast. There are also concerns about their potential to cause harm in new or unforeseen ways, prompting the public and policymakers to question whether existing regulatory frameworks are equipped to protect against these harms. Policymakers should take productive steps to address these concerns, recognizing that a risk- and context-based approach to AI regulation remains the most effective strategy for minimizing the risks of all AI, including those posed by foundation models.

Benefits of Foundation Models

Like the printing press, steam engine, and computer, the utility of foundation models cements AI’s potential as a general-purpose technology with broad applicability throughout the whole economy and many spillover effects. But as with any emerging technology, reliably identifying all the potential benefits of foundation models is an impossible task because new use cases will emerge as they become increasingly integrated into society and the economy. However, there are already clear indications of how impactful foundation models will be.

Foundation models show significant promise for helping solve some of the most challenging problems facing humanity. For example, identifying candidate molecules for novel drugs or identifying suitable materials for new battery technologies requires sophisticated knowledge about chemistry and time-intensive screening and evaluation of different molecules. IBM's MoLFormer-XL, a foundation model trained on data about 1.1 billion molecules, helps scientists rapidly predict the 3D structure of molecules and infer their physical properties, such as their ability to cross the blood-brain barrier. IBM recently announced a partnership with Moderna to use MoLFormer models to help design better mRNA medicines. IBM also partners with NASA to apply foundation models to geospatial satellite data, better informing efforts to fight climate change. Such applications are just emerging, but they hold the potential to rapidly accelerate progress on pressing problems in health care, climate change, and beyond.

Other promising applications of foundation models rely on their generative capabilities, such as powering productivity aids that can help users rapidly generate code, text, and other types of media. Partially automating time-intensive tasks can also be beneficial in professional and personal contexts, allowing people to devote more of their attention to more challenging or enjoyable tasks while increasing their output. For instance, a Deloitte study found that developers using a code generation tool were able to increase their development speed by 20%. These applications can also make certain kinds of work more accessible to a broader population. A small business using generative tools, for example, could create a website or app, develop marketing materials, and design their product with relatively little technical ability or resources.

While the potential benefits are enormous, it is important not to overestimate the capabilities of foundation models, as doing so detracts from a correct understanding of the real benefits and harms they can create.

Risks of Using Foundation Models

Some risks of using foundation models are similar to those of other kinds of AI, but they can also pose new risks and amplify existing risks. These can be grouped into three broad categories:

  1. Potential risks related to input,
  2. Potential risks related to output, and
  3. Potential general governance risks.

 

  1. Input

Potential risks related to the inputs of foundation models (training data and the processes that influence how a model is developed) are mostly the same as for other kinds of AI. For example, risks related to bias, whether training data includes personally identifiable information, and whether training data is “poisoned” (i.e., deliberately and maliciously manipulated to influence a model's performance) are not unique to foundation models.

The risks amplified by foundation models tend to stem from the large amounts of unorganized data often used to develop these models. They include IP and copyright issues with the training data, such as the use of licensed works, as well as transparency and privacy issues, including how developers and deployers disclose information about their training data and their ability to provide data subject rights such as the “right to be forgotten.”

New kinds of risks include training or re-training models on data generated by an AI system, as it can perpetuate or reinforce undesirable behaviors. There are also concerns related to how foundation models are developed, as the values used to guide those choices can be reflected in the model’s behavior.

  2. Output

Risks related to the output of foundation models are mainly new and often stem from their generative capabilities. For example, they can pose misalignment concerns. These include hallucination, the generation of false yet plausible-seeming content, as well as the generation of toxic, hateful, or abusive content.

Foundation models can also be deliberately designed or used for malicious purposes, including to spread disinformation and to deceptively generate content without disclosure. Their robustness to adversarial attacks also raises new challenges, as techniques like prompt injection, in which a user tricks a model into performing a task that would otherwise be prohibited by its controls, are directly related to the new capabilities offered by generative AI.

  3. General Governance

Foundation models pose several new governance challenges related to how they are developed and deployed. First, developing and operating a foundation model can require significantly more energy than other kinds of AI, as performance scales with model size and the amount of computing power used in training.

Second, how responsibility should be allocated along the AI value chain can cause confusion, as relationships between developers and deployers can be complex. For example, a common business model involves a developer providing a foundation model to a deployer, which then fine-tunes the model to its specific use case before deploying it in a real-world application. When performance issues are identified, it may be difficult to determine where corrective action can and should be taken and who has the responsibility for taking it.

Lastly, given the complexity of this value chain, it can be difficult to determine ownership rights of foundation models and their downstream applications. 

A Risk-Based Approach

One of the key principles of AI governance is that policymakers should adopt a risk-based approach for regulating AI systems. Different applications of AI can have significant differences in their potential to cause harms, and regulatory obligations should be proportionate to the level of risk involved. For example, an AI system used to recommend TV shows to consumers poses little risk of harm whereas an AI system that screens job applications can have an enormous impact on a person’s economic opportunity. In the latter case, high standards for transparency, accuracy, and fairness would be appropriate to reduce the risk the system unfairly discriminates against job applicants. Such requirements would offer little benefit in the former.

Policymakers around the world have endorsed such an approach, with major policy frameworks for AI governance prioritizing oversight based on the level of risk posed by an AI system. For example:

  • The draft European Union AI Act establishes tiers of risk that an AI system could pose based on its intended application, ranging from “unacceptable risk” to “low or minimal risk.” Regulatory obligations are proportionate to the risk level in different contexts, such as outright prohibition of systems that have a significant potential to exploit vulnerable groups, and requirements for data governance, transparency, and human oversight for systems that have implications for fundamental human rights.
  • In the United States, the National Institute of Standards and Technology (NIST) AI Risk Management Framework is centered on the principle of risk prioritization, where AI systems that pose higher risks warrant stronger risk management. It also recognizes that the risk posed by an AI system is highly contextual, depending on where and how it is deployed. While the framework does not create formal regulatory obligations, it can serve as the foundation for future policymaking informed by real-world, practical examples of responsible AI governance.
  • In Singapore, regulators have developed the Model Artificial Intelligence Governance Framework to help companies deploy AI systems with appropriate governance measures. The framework uses a risk-based approach to identify the most effective features for facilitating stakeholders’ trust in AI, and it recognizes that the risk and severity of potential AI harms will vary depending on the context of application.

Tailoring regulation to the risk an AI system poses applies high levels of protection precisely where they are needed while still enabling a flexible and dynamic regulatory framework. It minimizes unnecessary regulatory burdens, allows for a higher rate of AI adoption throughout the economy, and ensures robust consumer protections regardless of the underlying technology. This technology neutrality is critical, as it ensures any rulemaking will be future-proof.

With the proliferation of foundation models and the complexities they introduce to the AI value chain, some have called for a deviation from this risk-based approach. They argue that because foundation models can be adapted to a wide variety of applications, some of which could be harmful or high-risk, the technology itself should be considered inherently high-risk. They also argue that because deployers of AI systems built on foundation models do not necessarily control the development of the underlying model, developers should bear some of the responsibility for meeting the regulatory obligations of downstream applications. This would be a serious error.

A risk-based approach ensures AI deployers understand their responsibility for ensuring that whatever AI system they deploy, regardless of whether it uses a foundation model, complies with the relevant regulatory requirements for a particular use case. If a deployer implements a foundation model in a high-risk context, they will have to comply with whatever obligations that entails. Deployers can communicate these needs to developers and implement proportionate safeguards as appropriate. Placing the responsibility for downstream applications on developers could make compliance impossible, as developers cannot predict all potential applications and thus cannot identify and mitigate every conceivable risk. Imposing significant regulatory burdens arbitrarily can hinder innovation and limit the benefits the technology can provide in low-risk domains.

Recommendations for Policymakers

The best way policymakers can meaningfully address concerns related to foundation models is to ensure any AI policy framework is risk-based and appropriately focused on the deployers of AI systems. This will guarantee all AI systems, including those based on foundation models, are subject to precise and effective governance to minimize risk. There are several opportunities for policymakers to address the new risks foundation models pose in a productive manner.

  1. Promote Transparency

Deployers of foundation models should have enough understanding of and visibility into the models to ensure they are deploying them responsibly and can meet relevant regulatory obligations. Information about risk management, data governance, technical documentation, record keeping, transparency, human oversight, accuracy, and cybersecurity of foundation models can be critical in determining whether a particular model is appropriate to use in a given context.

To that end, IBM has developed AI FactSheets, a tool to facilitate better AI governance and provide deployers and users with relevant information about how an AI model or service was created. FactSheets can provide an array of useful information about foundation models, including training data, performance metrics, evaluation of biases, and energy consumption. FactSheets are flexible tools that can be tailored to the needs of different customers and AI models, and they are modeled after a Supplier’s Declaration of Conformity, which is used in many industries to show a product conforms to a standard or technical regulation.
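To make this concrete, the sketch below shows the kind of structured record such documentation could capture for a foundation model. The field names, values, and Python representation are illustrative assumptions for this example only, not IBM's actual FactSheet format.

    # Illustrative sketch of a FactSheet-style disclosure for a foundation model.
    # All field names and values are hypothetical examples, not IBM's FactSheet schema.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelFactSheet:
        model_name: str
        developer: str
        intended_uses: list[str]
        training_data_summary: str             # provenance and licensing notes for the training data
        performance_metrics: dict[str, float]  # benchmark results the developer chooses to report
        bias_evaluation: str                   # summary of fairness testing performed
        energy_consumption_kwh: float          # estimated energy used during training
        known_limitations: list[str] = field(default_factory=list)

    # Example record a developer might hand to a deployer alongside the model.
    factsheet = ModelFactSheet(
        model_name="example-foundation-model-v1",  # hypothetical model name
        developer="Example AI Lab",
        intended_uses=["text summarization", "internal policy Q&A"],
        training_data_summary="Public web text plus licensed corpora; PII filtering applied.",
        performance_metrics={"summarization_benchmark_rougeL": 0.41},
        bias_evaluation="Outputs evaluated for toxicity and demographic skew on a held-out test set.",
        energy_consumption_kwh=120000.0,
        known_limitations=["may generate plausible but false statements (hallucination)"],
    )

    # A deployer could review this record when deciding whether the model fits a given use case.
    print(json.dumps(asdict(factsheet), indent=2))

In practice, the exact contents would be tailored to the model and the deployment context; the value of standardizing such documentation is that deployers can assess models against their regulatory obligations in a consistent way.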

Policymakers should formalize transparency requirements for AI developers to provide documentation like FactSheets, including developing best practices about what information should be included. Doing so would significantly help mitigate the risks posed by foundation models and simplify compliance.

Additionally, policymakers should develop requirements for deployers to disclose to consumers when a foundation model is being used in certain contexts. Such requirements should be proportionate to the level of risk of the AI application.

  2. Leverage Flexible Approaches

As policymakers develop risk-based regulatory frameworks for AI, they should also recognize the value flexible soft law approaches can have for AI governance. This is particularly relevant for clarifying how responsibility should be assigned for different actors along the AI value chain. For instance, it’s typically appropriate for questions about responsibility for the performance of an AI system to be addressed through contractual means between a developer and a deployer. It’s also appropriate for a deployer to stipulate that their developer resolves performance issues with the underlying foundation model after their AI system is deployed. Given the variety of potential applications of generative AI and the different levels of control AI developers and deployers may want to have, policymakers should protect the ability for these actors to negotiate and define responsibilities contractually.

Policymakers should also support national and international standards development work focused on establishing common definitions, specifications for risk management systems, risk classification criteria, and other elements of effective AI governance.

  3. Differentiate Between Different Kinds of Business Models

The type and severity of risks posed by foundation models depend on where and how they are deployed. For example, an enterprise-facing chatbot that employees use to understand internal policies and guidance could generate aggressive or unpredictable output, raising concerns about workplace safety and whether workers can access accurate compliance information. The risk profile of this application would be significantly different if the chatbot were consumer-facing, as the number of people exposed to its behavior could be much higher and include minors. In many cases it would be appropriate for policymakers to differentiate between open-domain applications, such as generative AI for augmenting web search, and closed-domain applications, such as enterprise AI assistants. Closed-domain applications like an enterprise-facing chatbot can pose significant risks, but the narrower functionality of the AI system limits the scope of risk it can pose. By contrast, open-domain applications, likely trained on much broader data and capable of a much wider range of output, can pose a wider variety and greater intensity of risks.

Additionally, different actors along the AI value chain have different levels of control and responsibility over how to deploy an AI system, and the distribution of regulatory burdens should reflect this. For instance, policymakers should distinguish between developers and deployers. As mentioned, developers should be required to provide documentation like AI FactSheets. However, the focus of regulation should be on the end of the AI value chain, where deployers fine-tune foundation models and introduce AI systems into the world. Deployers have final say about when, where, and how to deploy AI systems and are best positioned to address the risks.

  4. Carefully Study Emerging Risks

Foundation models are a nascent technology without meaningfully widespread deployment. As they become more powerful and integrated into the economy and society, new risks may emerge, some expected risks may not manifest, and social norms or market forces may mitigate certain risks without the need for policymaker intervention. In addition to developing regulatory frameworks for AI, policymakers should devote significant resources to identify and understand emerging risks posed by increasingly powerful AI.

One topic that would particularly benefit from increased study and consensus-building is the potential for IP challenges posed by generative AI, particularly the confusion about ownership rights, licensing, and downstream obligations. Policymakers should work closely with industry, artists, content creators, and other relevant stakeholders to better understand such issues and develop clear legal guidance to protect IP rights and innovation. Given how consequential these topics can be for innovation and competition, waiting for slow-moving litigation to address confusion on an ad-hoc basis is undesirable.

Another critical component for studying the emerging risks is access to the technical infrastructure necessary to develop and evaluate AI systems. Developing foundation models requires large amounts of computing power and can be cost-prohibitive to smaller research labs, universities, and other stakeholders interested in scrutinizing AI systems. Policymakers should invest in creating a common research infrastructure. For example, the United States National AI Research Resource Task Force recommended the U.S. invest $2.25 billion in the technical resources necessary to conduct this research, including computing, data, training, and software resources.

Finally, policymakers should support developing better scientific evaluation methodologies for foundation models. While there are many metrics designed to measure performance, bias, accuracy, and other critical elements of an AI system, generative AI can make these metrics unreliable or ineffective. Policymakers and industry alike should prioritize the advancement of evaluation methodologies just as they prioritize the advancement of the technology itself.

Conclusion

Given the incredible benefits of foundation models, effectively protecting the economy and society from their potential risks will help to ensure that the technology is a force for good. Policymakers should act swiftly to better understand and mitigate the risks of foundation models while still ensuring the approach to governing AI remains risk-based and technology neutral.

Terminology 

A barrier to understanding and addressing the benefits and risks posed by foundation models is that many of the relevant definitions are not widely agreed upon. This patchwork of conflicting and potentially overlapping definitions is a serious impediment to policymaking, and it will take time for researchers, industry, policymakers, and other stakeholders to come to a consensus.

Artificial Intelligence (AI): AI is a field of computer science focused on leveraging computers and machines to solve problems commonly understood to require human intelligence. AI encompasses a wide array of techniques including reasoning, optimization, and knowledge representation. Machine learning is one such technique responsible for the resurgence of AI over the past decade. It involves training an algorithm on data for it to learn to perform tasks.

AI deployer: A person or organization that puts into service an AI system, whether developed totally or in part by themselves or others. 

AI developer: A person or organization that creates an artificial intelligence model or system with the intent to deploy it themselves or provide it to others.

AI model: An algorithm built with AI techniques, typically machine learning, to perform tasks. 

AI system: “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” as defined by the Organization for Economic Co-operation and Development. An AI system is a complete application deployed to accomplish a specific function. AI systems are made up of a variety of components, including an AI model, additional training and data, tooling, and other modifications and customizations.

Foundation model: An AI model that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale models (e.g., billions of parameters) trained on unlabeled data using self-supervision. While all foundation models are built using generative AI, and therefore have the capability to generate content, they can be used in ways that do not rely on this capability. Foundation models are sometimes called “general-purpose AI.”

Generative AI: AI techniques capable of generating content or data of various kinds, including audio, code, images, text, simulations, 3D objects, videos, or other artifacts.