Washington - Business Roundtable today released the following statement on comments the organization submitted in response to the Office of Science and Technology Policy’s (OSTP) Request for Information (RFI) on National Priorities for Artificial Intelligence (AI).
“AI’s continued advancement and adoption are poised to benefit American businesses, consumers and workers. To realize that potential, business and government have a shared responsibility to maximize the societal and economic benefits of AI, while minimizing risks,” said Business Roundtable CEO Joshua Bolten. “Business Roundtable members, whose companies are among the world’s largest developers and users of AI, are committed to building trust in and acceptance of AI by responsibly developing and deploying these technologies.”
The Roundtable’s comments reference the organization’s Roadmap for Responsible AI (RAI), which outlines principles to guide businesses’ responsible development and use of AI technologies. The Roadmap was released along with a set of policy recommendations as part of the launch of the Business Roundtable RAI Initiative in early 2022.
The filing reads in part:
“… Business Roundtable’s Roadmap includes principles for organizations developing and deploying AI to ensure Americans’ rights and safety are protected at every stage of the AI lifecycle, and that companies’ AI oversight and governance will result in responsible AI deployment:
- Implement safeguards against unfair bias where AI systems may produce significant and high-consequence outcomes for individuals, while recognizing opportunities for AI to mitigate human bias.
- Explain the relationships between AI systems’ inputs and outputs where possible and appropriate, particularly for systems that may result in significant and high-consequence outcomes for individuals, as well as the extent to which such inputs and outputs are governed by human oversight.
- Equip deployers of AI systems with sufficient information and training to support responsible and trustworthy downstream use.
- Disclose to end users when they are directly interacting with AI agents that simulate human interactions (e.g., chatbots).
- Evaluate and monitor model fitness and impact continuously to allow for adjustments for fitness for purpose, accuracy and resilience.”
The Roundtable’s comments also outlined principles for government oversight, which are included in its 2022 policy recommendations.
“To the extent that measures are implemented through government rules, regulations or standards, it is important to remember that the context and corresponding risk levels of AI applications exist in a wide spectrum. As such, requirements should be outcome-focused and take a risk-based approach to avoid over-regulating uses of AI which have no significant impact on individuals or do not pose potential for societal harm. For cases in which government action is necessary to protect people’s rights and security, such action should be consistent and compatible with the following principles:
- Any regulatory approaches to AI considered or adopted in the United States should be contextual, risk-based, proportional and use-case specific. Any frameworks, guidance and regulation should be tailored to specific AI use cases, rather than broadly regulating any technology or application outright, and should be appropriately calibrated depending upon the risk of substantial harm.
- AI measures should incentivize good-faith and demonstrated efforts to adhere to requirements, norms and standards.
- In developing these measures, policymakers should conduct a thorough assessment of existing regulatory gaps before establishing new regulations to avoid overlapping or inconsistent rules and to understand where guidance is most needed.
- AI measures should include clear definitions with case study examples informed by continued dialogue with industry stakeholders.
- Policymakers should explore the use of evidence-based regulatory approaches and tools that allow for the iteration of governance practices (e.g., regulatory sandboxes) and opportunities for industry to discover and share best practices.
- Finally, government could incentivize industry to engage in self-assessments, whether such work is performed internally or against external guidelines or standards.”
Additionally, the Roundtable provided examples of how companies are aligning their AI use with the Roadmap’s principles. The comments also underscored the need for a national data privacy law, cited the NIST AI Risk Management Framework as a strong example of public-private partnership and highlighted the importance of public and private investments in a future-ready workforce.
For the full comments, click here.