Latest News

Trustworthy AI at scale: IBM’s AI Safety and Governance Framework

February 07, 2025

In 2024 IBM signed the AI Seoul Summit’s commitments for AI Frontier Safety, a stride toward safe and trustworthy AI which IBM has long advocated for and pioneered. Today, we’re publishing...

In 2024 IBM signed the AI Seoul Summit’s commitments for AI Frontier Safety, a stride toward safe and trustworthy AI which IBM has long advocated for and pioneered.   Today, we’re publishing more details about how IBM’s AI governance framework and our organizational culture supports the responsible development and use of AI and aligns with the Seoul commitments' core objectives.

AI is a tool that performs according to how it is developed, trained, tuned, and used. AI governance at IBM comprises organizational governance structures, human oversight and state-of-the-art technology to uphold ethics and guardrails throughout all phases of an AI system’s lifecycle and mitigate a wide spectrum of risks.

IBM provides AI-enabled technologies, including AI models, and services to government and corporate entities in areas such as financial services, telecommunications, and healthcare. These entities rely on IBM as the trusted partner helping to advance responsible AI with a multidisciplinary, multidimensional approach.  Our holistic investment in culture, processes and tools gives us a more robust understanding of the potential risks and enables us to employ diverse evaluation and mitigation guardrails.

IBM’s Principles for Trust and Transparency, Pillars of Trustworthy AI, and AI Ethics Governance Framework, are the cornerstones of our approach to safety and ethics. Internal data management practices, and the holistic approach we apply through our Integrated Governance Program, allow for better model and data lineage tracking, and more dynamism in adapting to new challenges, reinforcing these best practices.

In October 2024 we published IBM’s Responsible Use Guide for IBM’s Granite model, which was recognized by Stanford University as one of the most transparent LLMs in the world, 

The Guide highlights IBM’s approach to development – and outlines AI safety choices faced by advanced model developers, overviews risk mitigation taxonomies, tools and resources, provides energy calculations for the sustainable use of IBM’s Granite models and sets forth how IBM applies these considerations when building its Granite models. The Guide presents a comprehensive process for systematically identifying, mitigating, and addressing the potential risks associated with AI through four key steps:

 Step 1: Data Preparation

Step 2: Training and Alignment

Step 3: Model Evaluation and Vulnerabilities Screening; and

Step 4: Continuous Feedback and Iterative Improvements

You can download it here.

IBM will continue to evolve its tools and frameworks to meet the technical and ethical challenges of responsible innovation. In this light, IBM is steadfast in its belief that the transformative changes of AI can only be harnessed by all when the future of AI is open.  As the science around safety is developing, broad community engagement is essential. Open source and permissively licensed AI models are a key part of an open innovation ecosystem for AI, as are open-source toolkits and resources, open datasets, open standards, and open science.  In addition, open models encourage greater involvement and scrutiny from the community, increasing the likelihood that vulnerabilities are identified and patched, and ensuring AI is responsibly developed and deployed.

IBM has a long history of leading and innovating in the open community. Some recent highlights include:

  • The IBM Granite model is recognized as one of the most transparent LLMs in the world, giving information about data sources and training methods, and enabling clients to be more confident in the safety of the AI they create with us.
  • We released the Granite Guardium models, a collection of models designed to detect risks in prompts and responses, trained in a transparent manner and according to IBM’s AI Ethics principles, under the Apache 2.0 license for research and commercial use.
  •  In May 2024, IBM and Red Hat launched InstructLab, an open-source project for enhancing LLMs through constant incremental contributions, much like software development has worked in open source for decades.
  •  In December 2023, IBM and Meta co-founded the AI Alliance, which has grown from 50 founding members and collaborators to an active, international community of more than 140 leading organizations across industry, startup, academia, research, and government coming together to support open innovation and open science in AI. For example, one key AI Alliance working group is focused on Trust and Safety to get the benefit of community engagement on this developing area of science (See https://thealliance.ai/focus-areas/trust-and-safety)
  • The Partnership on AI, that IBM co-founded in 2016, continues to evolve guidelines for safe deployment of foundation models.
  • Since 2018, IBM Research has developed and donated several trustworthy AI toolkits to the open-source community so that anyone, anywhere in the world can use trusted tools.
  • The MIT-IBM Watson AI Lab is a community of scientists at MIT and IBM Research who conduct poignant research and work with global organizations to bridge algorithms to their impact on business and society.
  • The Notre Dame-IBM Tech Ethics Lab was formed to address the many diverse ethical questions implicated by the development and use of advanced technologies, including AI, machine learning (ML) and quantum computing.
  • IBM has developed several methods to help with bias issues like FairIJ, Equi-tuning, and FairReprogram.
  • In July 2024, IBM partnered with the Data & Trust Alliance and 18 other enterprises to co-create and test the Data Provenance Standards, the first cross-industry standards for metadata to help describe data origin, lineage and suitability for purpose. 

We are also actively contributing to diverse, global efforts in shaping AI metrics, standards and best practices with and through alliances, affiliations, and governments. The partnerships highlighted here are just some of all that we are involved with, and we enter new partnerships and collaborations often to further AI ethics around the world.

To read our latest case studies, POVs, blogs, and news, visit the IBM AI Ethics homepage: https://www.ibm.com/impact/ai-ethics

 

Article Categories