
Davos Panel Hosted By IBM CEO Ginni Rometty Explores Precision Regulation Of AI & Emerging Technology

Event formally launches IBM Policy Lab, a new forum to advance bold, actionable policy recommendations for a digital society and foster trust in innovation
Also follows IBM Policy Lab publication of industry-leading priorities to guide regulation of artificial intelligence based on accountability, transparency, fairness and security
Jan 22, 2020

DAVOS, Switzerland, Jan. 22, 2020 /PRNewswire/ -- Today at the World Economic Forum in Davos, IBM (NYSE: IBM) launched the IBM Policy Lab - a new global forum aimed at advancing bold, actionable policy recommendations for technology's toughest challenges - at an event hosted by IBM Chairman, President and Chief Executive Officer Ginni Rometty that explored the intersection of regulation and trust in emerging technology. 


The IBM Policy Lab, led by co-directors Ryan Hagemann and Jean-Marc Leclerc, two long-standing experts in technology and public policy, provides a global vision and actionable recommendations to help policymakers harness the benefits of innovation while building societal trust in a world reshaped by data and emerging technology. Its approach is grounded in the belief that technology can continue to disrupt and improve civil society while protecting individual privacy, and that responsible companies have an obligation to help policymakers address these complex questions. 

Christopher Padilla, Vice President of Government & Regulatory Affairs, IBM, said:
"The IBM Policy Lab will help usher in and build a new era of trust in technology. IBM pushes the boundaries of technology every day, but we also recognize our responsibility relating to trust and transparency and address how technology is impacting society. I see an abundance of technology but a shortage of actionable policy ideas to ensure we protect people while allowing innovation to thrive. The IBM Policy Lab will set a new standard for how business can partner with governments and other stakeholders to help serve the interests of society."

Ahead of the launch event, the IBM Policy Lab released landmark priorities for the precision regulation of artificial intelligence, as well as a new Morning Consult study on attitudes toward regulation of emerging technology. The perspective, Precision Regulation for Artificial Intelligence, lays out a regulatory framework, based on accountability, transparency, fairness, and security, for organizations involved in developing or using AI. This builds upon IBM's calls for a "precision regulation" approach to facial recognition and illegal online content - laws tailored to hold companies more accountable without becoming so broad that they hinder innovation or the larger digital economy. These approaches are reinforced by a Morning Consult survey, sponsored by IBM, which found that 62% of American and 7 in 10 European respondents prefer a precision regulation approach for technology, with fewer than 10% of respondents in either region supporting broad regulation of tech. 

IBM's policy paper on AI regulation outlines five policy imperatives for companies, whether they are providers or owners of AI systems, that can be reinforced with government action. They include: 

  1. Designate a lead AI ethics official. To oversee compliance with these expectations, providers and owners should designate a person responsible for trustworthy AI, such as a lead AI ethics official. 
  2. Different rules for different risks. All entities providing or owning an AI system should conduct an initial high-level assessment of the technology's potential for harm, and regulation should treat different use cases differently based on their inherent risk.
  3. Don't hide your AI. Transparency breeds trust, and the best way to promote transparency is through disclosure: making the purpose of an AI system clear to consumers and businesses. No one should be tricked into interacting with AI.
  4. Explain your AI. Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.
  5. Test your AI for bias. All organizations in the AI development lifecycle share some level of responsibility for ensuring the AI systems they design and deploy are fair and secure. This requires testing for fairness, bias, robustness, and security, and taking remedial actions as needed, both before sale or deployment and after the systems are operationalized. For higher-risk use cases, this should be reinforced through "co-regulation", where companies implement testing and government conducts spot checks for compliance. 

These recommendations come as the new European Commission has indicated that it will legislate on AI within the first 100 days of 2020, and the White House has released new guidelines for regulation of AI.

The new Morning Consult study commissioned by the IBM Policy Lab also found that 85% of Europeans and 81% of Americans surveyed support consumer data protection in some form, and that 70% of Europeans and 60% of Americans responding support AI regulation. Moreover, 74% of American and 85% of EU respondents agree that artificial intelligence systems should be transparent and explainable, and strong pluralities in both regions believe that disclosure should be required for companies creating or distributing AI systems. Nearly 3 in 4 European and two-thirds of American respondents support regulations such as conducting risk assessments, performing pre-deployment testing for bias and fairness, and reporting to consumers and businesses that an AI system is being used in decision-making.

In addition to its new AI perspective, the IBM Policy Lab has released policy recommendations on regulating facial recognition, technological sovereignty, and climate change, as well as principles to guide a digital European future. Learn more about the IBM Policy Lab at ibm.com/policy.

The IBM-hosted event in Davos, Walking the Tech Tightrope: How to Balance Trust with Innovation, also featured the President and CEO of Siemens AG Joe Kaeser, White House Deputy Chief of Staff for Policy Coordination Chris Liddell, and Organisation for Economic Co-operation and Development Secretary-General José Ángel Gurría Treviño. CNN International Anchor and Correspondent Julia Chatterley moderated the discussion.

You can read the transcript here.

 

SOURCE IBM


For further information: Jordan Humphreys, jordan.humphreys@ibm.com