Docket Id No. OMB_FRDOC_0001-02611
Business Roundtable, an association of chief executive officers of the United States’ largest employers, appreciates this opportunity to comment on the draft OMB memorandum on Guidance for Regulation of Artificial Intelligence Applications. Artificial intelligence (AI) is a powerful and versatile technology, with potential applications in virtually every corner of the economy.
Business Roundtable members are developers and users of AI across many different industries and functions. They strongly support a government framework for AI that spurs innovation and U.S. AI leadership globally. In comments to the National Institute of Standards and Technology (NIST), Business Roundtable recommended three principles to guide the government:
- public-private collaboration is critical to advancing AI innovation and deployment;
- AI applications vary significantly across sectors, and standards and tools should reflect this reality; and
- the United States must actively participate in international standard setting.
Business Roundtable also advocates for a smarter approach to regulation in general – one that more cost-effectively achieves regulatory goals such as protecting human health and safety, maintaining people’s privacy, promoting U.S. innovation, creating jobs and growing economic opportunity.
American companies lead the world in developing and deploying AI, but maintaining American leadership will require federal agencies to choose thoughtfully between regulatory and nonregulatory approaches to AI to avoid creating unnecessary barriers to the important and widespread benefits that AI innovations offer.
In that respect, the draft memorandum provides excellent guidance. Indeed, it may contain the most comprehensive inventory of smart regulation principles of any publication of the Executive Office of the President, at least since Executive Order 12866.
That said, the Roundtable urges OMB to supplement the memorandum to more fully reflect several important considerations:
I. When Agencies Should Consider New Regulation
Agencies should determine whether to consider new regulation in accordance with Section 1 of E.O. 12866, which can be reduced to two questions:
- Is there a specific market failure or other compelling need (e.g., health, safety, security) that requires government intervention?
- If so, could the need be addressed by non-regulatory approaches that are less intrusive and impose fewer costs on society (e.g., reliance on private standard-setting, creation of economic incentives, or dissemination of information)?
An agency should not regulate unless the answer to these questions is “yes” and “no,” respectively. The draft should highlight the centrality of this decision and discuss how the portions of the memorandum that follow bear on it. AI is rapidly evolving, and unnecessary or overly prescriptive regulation could stifle its full innovative potential. Federal agencies should strongly consider whether action is necessary and, if so, whether non-regulatory measures are appropriate. Agencies should also consider the potential for existing regulations to address a problem before proposing new regulations.
Business Roundtable also appreciates the draft’s discussion of retrospective review. Agencies very rarely perform retrospective review. Yet, when performed well, it often results in regulatory alternatives that better meet regulatory goals while imposing fewer costs. We believe periodic retrospective review will be particularly important with respect to AI, where technology and its applications develop at a rapid rate.
II. Fairness and Non-Discrimination
One important concern regarding new AI applications is that they will produce unfairly discriminatory outcomes, whether as a result of unconscious bias on the part of the AI model’s designers, the use of erroneous or inherently biased data, or as a result of systemic errors in the algorithm itself (commonly called model bias). There are instances where it is appropriate for an AI tool to be intentionally focused on a particular population (for example, some medical applications are properly focused on a target patient population). But AI should not be used to discriminate intentionally against individuals on the basis of prohibited factors in areas such as employment, housing, credit, insurance, and the provision of important services. Mechanisms for testing, mitigating and explaining AI throughout its lifecycle are important for providing assurances of fairness.
The federal government can and must play a greater role in supporting research that facilitates fairness and bias detection and mitigation, and that produces technologies that are explainable and, where feasible, produce traceable outputs. While there is a clear demand for “explainable” technologies, stakeholders have yet to reach a common understanding of what this means or how it should be achieved. For this reason, initiatives such as DARPA’s Explainable AI project have tremendous potential to drive the conversation forward.
III. Safety and Security
AI systems need to be designed with safety and security from the ground up and throughout the lifecycle of the system. Business Roundtable supports the draft’s discussion of the issue but urges OMB to highlight it more prominently in the document. For example, OMB might urge agencies to ensure that their regulatory and nonregulatory activities consider (i) the importance of developing and executing internal governance models that involve secure lifecycle development, management and threat modeling; (ii) DOD’s ethical AI commitment to reliable uses and governable capabilities, which include testing and assurance of AI capabilities within defined uses across their entire lifecycles; and (iii) NIST’s efforts to establish a Taxonomy and Terminology of Adversarial Machine Learning as a baseline to inform future standards and best practices. Because it is impossible to guarantee the security of a system from cyber threats, OMB should endorse implementation of risk-based security practices and protocols.
Relatedly, a higher degree of regulatory scrutiny – and explanation – might be appropriate for AI developed for higher risk applications, particularly military, law enforcement or national security purposes. But standards developed for higher risk uses should not be routinely applied to AI developed for other uses.
IV. Transparency
AI systems must take into account the principle of transparency in order to build trust and confidence in their use. Business Roundtable supports the inclusion of this principle and the draft’s guidance that appropriate levels of disclosure and transparency are context-dependent. Beyond the magnitude of potential harms, the technical state of the art, and the potential benefits of the AI system, appropriate transparency must also ensure that proprietary information remains protected and that malicious actors are not enabled to bypass the AI system.
V. Engagement in Standard Setting Activities
Numerous efforts at setting standards for AI applications are underway in the United States and at the international level. A number of companies, including Business Roundtable members, have adopted principles governing their use of AI technology. Bills introduced in Congress and individual states propose rules for conducting impact assessments and for the regulation of facial recognition technology. The OECD AI Principles and the European Commission’s White Paper on Artificial Intelligence also aim to advance a common set of principles for the development and deployment of AI applications.
OMB should direct federal agencies to ensure that any rules governing AI are generally uniform across agencies and across the country, while aiming to achieve consistency or interoperability with rules that are developed globally. OMB should encourage federal agencies to drive voluntary standard-setting activities in the areas of greatest interest to stakeholders and in which the need for consistency is most acute (e.g. traceability, detection and prevention of bias, and use of simulated data in AI development).
As directed by OMB Circular A-119, federal agencies should actively involve themselves with relevant Standards Development Organizations (SDOs), including using SDOs as the initial forum for identifying and attempting to address issues of concern to those agencies. This includes not only U.S.-based entities, but also leading international cross-sectoral standard-setting bodies (e.g., IEEE, IETF, the ISO/IEC Joint Technical Committee) and fora (e.g., Partnership on AI). We particularly encourage the United States to be an active participant in the OECD’s recently launched AI Policy Observatory and to encourage U.S. academic and research institutions to explore ways to collaborate. Active U.S. participation in existing international bodies, such as those noted above, is thus another of the Business Roundtable principles for AI and is critical to maintaining U.S. AI leadership.
VI. Access to Government Data Sets
The federal government possesses an enormous number of fundamentally important and comprehensive datasets on a wide variety of subjects. Agencies should work with AI technology and application developers, and other stakeholders, to make this information publicly accessible in readily usable and shareable frameworks, with appropriate protections for personal privacy, business confidentiality and national security.
Additionally, a key OMB memorandum interpreting the Information Quality Act (IQA) recently explained that “[t]he touchstone” for determining the utility of information “is ‘fitness for purpose.’” AI is by definition an information-based activity, and it is especially important that the information used to develop, train and validate AI applications be assessed to assure that it is fit for that purpose, and to avoid giving rise to applications that are biased or unreliable. The final version of this memorandum should reference the IQA and discuss how agencies should engage with developers leveraging government datasets as those developers assess the datasets’ fitness for purpose.
VII. Sector-Specific Approaches
The draft highlights sector-specific approaches only in the context of non-regulatory actions, but the concept is equally applicable (and more important) in the regulatory context. A clear set of rules may be necessary for AI when put to certain uses or in certain sectors, such as in the autonomous vehicle market. And a set of standards might be applicable across technologies at a high level of generality, with differentiation at levels of greater specificity. But an approach that applies exactly the same standards to both autonomous vehicles and use cases such as virtual assistants or fraud prevention applications would likely be ill-advised.
A sector-specific approach harnesses industry expertise to inform best-fit standards and tools for AI development and avoids overly prescriptive frameworks that could frustrate AI deployment at scale. This is particularly true in the area of performance measures, which are usually sector-specific. At the same time, federal agencies should conduct a risk assessment prior to adopting any sector-specific rules in order to ensure that such rules do not have negative effects on competition.
VIII. International Regulatory Cooperation
International regulatory cooperation has been an increasingly important topic for the Roundtable, and we are pleased to see the memorandum specifically use that term. Consistent with the theme of maintaining American leadership, however, Business Roundtable urges the Administration to adopt a more proactive perspective.
On page 5, the memorandum says that, “[t]o advance American innovation, agencies should keep in mind international uses of AI, ensuring that American companies are not disadvantaged by the United States’ regulatory regime.” Agencies should also ensure that AI regulatory regimes are coordinated globally to ensure that American companies are not disadvantaged by foreign regulatory regimes. The European Union’s General Data Protection Regulation (GDPR) is a prominent example of a non-U.S. regulatory regime that is having a significant extraterritorial effect on U.S. companies in the absence of a domestic equivalent. The European Commission’s recent White Paper suggests that U.S. companies may face the same challenges once legal instruments regulating AI are drafted. In response, the United States should seize opportunities to lead in setting global standards for AI. Business Roundtable is pleased that the Office of Science & Technology Policy is sensitive to the potential for European Union regulation of AI to become the next GDPR. The final memorandum should advise agencies to coordinate and engage with their foreign counterparts, directly and through fora such as OECD, to ensure that any European standards for AI are interoperable with U.S. approaches.
IX. State and Local Regulation
States can serve as laboratories of democracy, piloting an approach to allow for assessment of its costs and benefits before it is applied nationally. The opportunities for state experimentation may be relatively more limited in the AI context than in other regulatory contexts, however. Indeed, as the draft recognizes, it might be appropriate to preempt state regulation that prevents the emergence of a national market. For example, since people, data, and their devices constantly travel across state borders, there should be a comprehensive federal consumer privacy law that guarantees the same set of privacy protections for every American.
* * *
Business Roundtable commends the Administration for developing a draft document that comprehensively summarizes the principles of smart regulation and explains how they apply to AI. Supplemented as proposed above, the final document should be exceptionally valuable in maintaining American leadership in AI.
Thank you for consideration of these comments. Business Roundtable appreciates the opportunity to continue engagement with the Administration on its AI policy efforts.
We would be happy to discuss these comments or any other matters you believe would be helpful. Please contact Denise Zheng, Vice President for Technology and Innovation Policy at Business Roundtable, at email@example.com or (202) 496-3274.
1. 85 Fed. Reg. 1825 (January 13, 2020).
2. Comments of Business Roundtable on “RFI: Developing a Federal AI Standards Engagement Plan” (June 10, 2019).
3. Memorandum for the Heads of Executive Departments and Agencies re “Improving Implementation of the Information Quality Act” (April 24, 2019).
4. CNBC, “EU launches plan to regulate A.I., taking aim at Silicon Valley giants” (Feb. 19, 2020) (quoting Michael Kratsios), available at https://www.cnbc.com/2020/02/19/eu-launches-plan-to-regulate-ai-aimed-at-silicon-valleygiants.html.