Business Roundtable Response to Request for Information on Regulatory Reform on Artificial Intelligence
October 27, 2025
Stacy Murphy
Deputy Chief Operations Officer and Security Officer
White House Office of Science and Technology Policy (OSTP)
1650 Pennsylvania Avenue, NW
Washington, DC 20502
Re: Business Roundtable Response to Request for Information (RFI) on Regulatory Reform on Artificial Intelligence
Dear Ms. Murphy,
These comments are submitted on behalf of Business Roundtable, an association of more than 200 chief executive officers (CEOs) of America’s leading companies, representing nearly every sector of the U.S. economy. Business Roundtable CEOs lead U.S.-based companies that support one in four American jobs and almost a quarter of U.S. GDP. We appreciate the opportunity to respond to the Office of Science and Technology Policy’s (OSTP) Request for Information regarding Regulatory Reform on Artificial Intelligence under the White House AI Action Plan.
Business Roundtable applauds OSTP and the White House for this RFI and the leadership demonstrated in the AI Action Plan to advance our shared goals of U.S. competitiveness and leadership, and we urge continued cooperation with the private sector as this process moves forward. AI presents extraordinary opportunities to enhance productivity, strengthen competitiveness and improve the customer experience across every sector. However, outdated, fragmented or unclear regulatory expectations can slow development and deter responsible deployment. Above all, businesses need clarity and consistent guardrails, not surprises, conflicting rules or punitive uncertainty. Business Roundtable supports balanced approaches that protect consumers, bolster innovation and strengthen U.S. leadership in global AI development and deployment. Ultimately, numerous factors beyond regulatory reform contribute to U.S. leadership in AI development, deployment and adoption and will need to be addressed, including export controls, federal permitting reform to accelerate infrastructure development and comprehensive federal consumer data privacy legislation. Business Roundtable looks forward to continuing to work with the Administration across these areas.
Below, we discuss the need for preemption of state AI laws, offer recommendations for federal AI policy and provide examples of current rules that should be reviewed, revised or removed.
Priorities for U.S. Federal AI Policy
A growing patchwork of state laws regulating AI risks undermining innovation and U.S. global leadership. As of October 2025, over 1,100 AI-related bills have been introduced across 50 states, nearly double the number introduced in 2024.[1] Business Roundtable strongly supports broad federal preemption of state laws related to AI, including through a federal regulatory approach that facilitates the development and deployment of innovative AI technologies in the United States and avoids a patchwork of state regulation. Such an approach does not require comprehensive legislation to regulate AI technology; rather, targeted guardrails, where appropriate, can be adopted through a collection of legislative, regulatory and executive actions. In addition, the lack of a consistent federal approach creates a vacuum where the United States should be leading, leaving other countries to move forward with their own divergent regulatory environments.
If a federal approach to AI is adopted, it should:
- Preempt certain state regulations that complicate efforts to develop and deploy AI;
- Spur innovation by reviewing and revising or removing rules and procedures that are not well suited to fast-moving, emerging and transformative technologies like AI;
- Promote safe and secure development and deployment through an incremental, agile and collaborative approach to AI governance; and
- Provide certainty for AI innovation through well-crafted regulatory guardrails.
Preempt AI-specific state laws and regulations
Establishing a preemptive federal approach to AI regulation would accelerate innovation by providing certainty, reducing fragmentation and lowering compliance costs. The proliferation of state-level AI laws and regulations has already complicated efforts to develop and deploy AI technology across the entire United States and hindered widespread adoption of such technology. The emerging patchwork of overlapping requirements creates uncertainty and raises compliance costs associated with certain AI research and development activities, as well as the adoption of AI technologies for specific use cases. This impacts Americans not only in states with AI-specific laws[2] (e.g., California, Colorado and New York) but across the entire country, as businesses seek to integrate AI technologies into products and services uniformly for all U.S.-based users. A federal approach should consider existing state sectoral regulations and the need for specific guidance for certain industries regulated at the state level, like insurance.
Spur AI innovation
The United States’ leadership on AI is strengthening all sectors of the American economy and actively improving the lives of individuals and the functioning of businesses, government and civil society. To sustain this leadership and facilitate innovation, any regulation impacting AI must be designed to adapt to and keep pace with the AI ecosystem's complex, rapidly evolving nature.
Business Roundtable believes that the Administration should establish formal processes for creating regulatory sandboxes to support businesses developing and deploying AI technologies, in line with the AI Action Plan. Regulatory sandboxes would provide a controlled environment wherein businesses can test new AI tools or adapt existing technologies for emerging use cases, while maintaining the essential protections that current regulations afford in non-AI contexts. These sandboxes would allow companies to apply for targeted modifications or waivers to existing regulations, enabling innovation without waiting for full-scale regulatory reform. This approach would also reduce legal uncertainty and mitigate liability concerns that otherwise come with deregulation by offering clear, affirmative regulatory approval.
To maximize their effectiveness, these sandboxes should be coordinated across the U.S. federal government and between sectors, ensuring consistency while allowing each agency to decide whether a particular regulation warrants a modification or waiver, based on whether the regulation and its requirements are consistent with the AI Action Plan’s goal of advancing U.S. AI innovation and leadership. Federal agencies should develop a process for capturing lessons from sandbox activities and use them to refine regulations and issue improved guidance to help organizations navigate rules. Finally, the federal government must also articulate a clear strategy for transitioning businesses out of the sandbox towards full compliance, ensuring that new AI tools can remain in use over the long term.
Business Roundtable also recommends that the Administration ensure access to the technical resources necessary for AI innovation. Since advanced AI research requires vast amounts of computing power and high-quality data, new entrants currently face high barriers to entry. By providing access to technical resources, the United States can ensure that more researchers can contribute to AI research and development, accelerating innovation and empowering entrepreneurs to build new AI-driven businesses.
An important technical resource for AI innovation is government datasets, which are typically much larger in size and scope and more representative of diverse populations than non-governmental datasets. This makes them uniquely valuable for conducting research, testing and advancing AI models that better serve the public and further U.S. AI leadership. But while open data is encouraged and often required in government, federal agencies typically lack the resources to publish high-impact datasets. The Administration should consider initiatives across federal agencies to publish datasets that are easily accessible and formatted for AI research.
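As a minimal illustration of what "easily accessible" can mean in practice, the sketch below programmatically searches the federal open-data catalog at catalog.data.gov, which exposes the standard CKAN search API; the query term and result handling are illustrative assumptions, not recommendations from this letter.

```python
# Minimal sketch: discovering federal open datasets programmatically.
# catalog.data.gov exposes CKAN's documented "package_search" action;
# the search term below is hypothetical and purely illustrative.
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "water quality", "rows": 5},  # hypothetical query
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    # Each CKAN package record includes a human-readable title.
    print(dataset["title"])
```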
Finally, spurring AI innovation requires increased clarity around intellectual property, which impacts the ability to access and use training data, inputs and outputs. Policymakers should clarify intellectual property law where necessary to foster continued innovation and U.S. AI leadership.
Promote safe and secure AI development and deployment
As AI becomes more widely used, policymakers will need to take a pragmatic approach to designing effective guardrails, as necessary, to ensure public trust and consumer safety.
The complexity of frontier AI technologies makes them more difficult to understand than other types of AI models. To help foster the public trust necessary to drive widespread adoption of AI systems, the U.S. government should encourage targeted transparency and accountability requirements for frontier models such as model cards, evaluation disclosures, and safety and security frameworks.
For example, model cards are concise documents that describe a model’s intended use, performance evaluation procedures and observed output metrics. These model cards not only help build trust by clarifying how models are designed and tested but also allow developers to compare their systems with others built for similar purposes, supporting innovation. For those seeking deeper technical insights, cards can also be accompanied by more comprehensive technical reports. The National Telecommunications and Information Administration has noted that this approach aligns with current industry best practices, as many AI developers already produce such artifacts.[3]
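To make the artifact concrete, the following is a minimal, hypothetical model card expressed as structured data. The field names mirror common industry practice (intended use, evaluation procedure, observed metrics) rather than any mandated schema, and every value is invented for illustration.

```python
import json

# Hypothetical, minimal model card; field names follow common practice,
# not a required standard, and all values are illustrative.
model_card = {
    "model_name": "example-summarizer-v1",  # invented model name
    "intended_use": "Summarizing customer-service call transcripts",
    "out_of_scope_uses": ["Medical, legal or financial advice"],
    "evaluation": {
        "procedure": "Held-out test set of call transcripts",
        "metrics": {"rouge_l": 0.41, "factual_consistency": 0.93},  # illustrative
    },
    "safety_and_security": "See accompanying technical report",
}

# Publishing the card as JSON keeps it both human- and machine-readable.
print(json.dumps(model_card, indent=2))
```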
Business Roundtable urges policymakers to enact and enforce necessary safeguards against AI-enabled deceptive content that will complement private sector efforts to protect users, particularly against unauthorized representations of an individual’s image, voice or visual likeness online. For example, the Administration should work with Congress on measures to prohibit a person or entity from intentionally and knowingly committing deceptive acts through the public communication of a representation of an individual’s image, voice or visual likeness. Implemented appropriately, these safeguards can enhance user protections while promoting continued innovation in AI.
Security of the AI ecosystem is an essential pillar of responsible AI development and deployment. Policymakers should support the secure development and deployment of AI, including through integration of “secure by design” principles into any AI governance framework and public-private collaborations to provide greater clarity around defining, measuring, mitigating and addressing risks.
Additionally, Business Roundtable recommends the Administration maintain and build upon voluntary, harmonized and flexible risk-based standards to ensure organizations are equipped to evaluate and implement AI tools, systems and services.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) is an example of strong existing risk management guidance developed through robust public-private partnership. While the NIST AI RMF may need to be refined over time, including updates to account for different types and uses of AI, the framework provides a cross-industry supported foundation for developing innovation-enabling standards.
Provide certainty for AI innovation
While overregulation can certainly impede innovation, so too can the uncertainty created by the absence of regulation and agency guidance. When legal requirements are ambiguous, companies must devote significant time and resources to confirming that they are not in violation of the law.
Many existing regulations not specifically drafted to address AI are already well equipped to govern the deployment of AI systems even for novel use cases, particularly in heavily regulated sectors. To promote clarity and ensure coherent application of these regulations, regulators should issue guidance on how they apply to AI models, tools and infrastructure. This will also provide companies with confidence that they are meeting their compliance obligations and that they will face predictable and transparent processes with regulators, while strengthening consumer trust and supporting American leadership in AI.
When considering regulations pertaining to AI, the Administration should consider the following criteria:
- Context-specific: Regulations should be aligned with clearly defined AI use cases or deployment contexts rather than specific to AI models, systems or algorithms. Restrictions on a given technology or broad application will necessarily miss important context and are likely to misjudge risk.
- Risk-based: Regulation should focus on real-world, demonstrable risks rather than speculative or generalized harms. Any regulations targeting AI use cases should be tightly tailored to identified, high-risk use cases.
- Adaptive: Regulatory approaches must be able to evolve alongside AI technologies, use cases and markets. Regulation should define desired outcomes rather than prescribe detailed technical or procedural requirements. Policymakers should consider the frequency and feasibility of updating regulations to keep pace with advances in AI technology. Voluntary standards, sector-specific risk management frameworks and opt-in guidance, such as those led by NIST, can better serve AI adoption and innovation than prescriptive regulation.
- Clear: Enforcement of regulations should be clearly targeted across different businesses in the AI value chain and technology stack to ensure accountability and distinct roles, based on clear guidance from regulators.
- Proportional: Enforcement standards should be effective, proportional, and clearly articulated such that they focus on bad actors and reflect the contextual and evolving nature of AI and its use.
- Universal: Regulations, tailored to particular risks and use cases of AI applications, should apply to all companies, regardless of size or revenue, to ensure that all organizations are subject to the same rules and to avoid small-to-medium-sized enterprises facing a regulatory cliff as they expand. Exemptions based on company size are likely to indicate that requirements are potentially overly burdensome and not grounded in considerations of risk.
- Coherent: Regulatory guardrails should account for existing sectoral frameworks to avoid imposing duplicative or confusing obligations in heavily regulated sectors where existing regulations are already well equipped to govern AI systems.
- Consistent: Where appropriate, regulations should be standardized within and between sectors, rather than exist in a patchwork of overlapping regulations with slightly differing criteria.
Specific Federal Regulations for Review
Business Roundtable has identified regulations related to federal permitting, healthcare, finance, liability doctrines and training data that may need to be improved or removed to promote broad AI development and deployment. This is not an exhaustive list, but it is representative of the issues companies face as they seek to develop and use AI innovations.
Federal Permitting
Rapid growth in AI development and deployment is driving increased infrastructure demand across the United States. Overly costly, complex and lengthy permitting processes constrain the construction of new infrastructure necessary to support AI development and deployment, such as data centers and fiber networks. Though down from nearly four years in 2018, the median time to complete an environmental impact statement (EIS) for major infrastructure projects is currently around 26 months (2.2 years). A streamlined permitting process would speed construction and grow available energy resources, accelerating AI infrastructure expansion.
- Business Roundtable recommends improving construction speed and reducing costs by streamlining permitting processes to shorten decision timelines, including embracing National Environmental Policy Act (NEPA) reforms.[4]
Healthcare
The healthcare sector is highly regulated, and a number of regulations could be improved, revised or removed to facilitate increased AI adoption and innovation to the benefit of consumers and patients. For example:
- Business Roundtable recommends that the Department of Health and Human Services continue to modernize the implementation of the Health Insurance Portability and Accountability Act (HIPAA) to facilitate responsible data use for AI development while safeguarding patient privacy.
- Business Roundtable recommends that the Food and Drug Administration (FDA) revise its guidance on Predetermined Change Control Plans (PCCPs) for medical devices to allow more flexibility in premarket submission types, including for AI-enabled devices. FDA’s guidance on PCCPs, which dictates what modifications may be made to a device and how those modifications will be assessed, is currently overly restrictive and inconsistent with FDA’s statutory authority. Specifically, the prohibition of certain premarket submission types removes the intended flexibility that allows PCCPs to help regulatory frameworks keep pace with rapid AI innovation.
Finance
The financial services sector is subject to extensive oversight regarding use of technology under the Securities and Exchange Commission (SEC), Office of the Comptroller of the Currency (OCC), Federal Reserve Board and Consumer Financial Protection Bureau (CFPB). While many existing laws, regulations and risk management frameworks are appropriately technology neutral and should not be duplicated via AI-specific regulation, some regulations could be improved, revised or removed to facilitate increased AI adoption and reduce compliance costs. For example:
- Business Roundtable recommends that the Federal Reserve clarify that the Supervisory Guidance on Model Risk Management (SR 11-7) does not apply to generative AI and agentic AI. Moreover, Business Roundtable recommends that financial regulators exempt AI-powered call transcriptions and summaries from supervisory authority requests. Extensive oversight requires significant governance to meet compliance obligations, particularly around retention requirements for AI-powered call summaries.
Liability Doctrines
Liability doctrines, which have an outsized impact on potential AI adoption and innovation, are currently unclear and fragmented across federal agencies. This issue was acknowledged under Pillar I: Accelerate AI Innovation in the AI Action Plan. Several regulations could be improved, revised or removed to facilitate increased AI adoption and reduce compliance costs. For example:
- Business Roundtable recommends that the FTC modify or set aside its ruling In the Matter of Rytr LLC, in which the FTC found that a technology platform had committed an unfair business practice by providing a tool that had legitimate uses but could be used to create fraudulent online reviews. This legal theory could expose companies and developers to liability for technology tools used by third parties to facilitate fraudulent conduct, even if the companies did not design the tools for fraud or know of their fraudulent use. When applied in an overly broad manner, the FTC’s regulations on unfair and deceptive practices increase the risk of legal action or onerous penalties against companies developing and deploying AI.
Training Data
The federal government holds many different high-quality datasets. However, privacy and data restrictions often prevent their release or integration into industrial AI solutions. For example:
- Business Roundtable recommends that the Administration develop standardized, secure and privacy-preserving frameworks for sharing datasets and pursue interagency alignment in order to expedite dataset release. Environmental, water quality and hydrological datasets are currently siloed across federal agencies with inconsistent access rules, including barriers based on privacy and security regulations for government data.
Conclusion
A federal regulatory approach to AI would promote American leadership by making it easier to develop and deploy AI in the United States. Federal preemption of AI-specific state laws would reduce fragmentation and uncertainty, making it easier to safely develop and deploy AI. As OSTP reviews regulations designed to put guardrails on AI technology, any rules that are not context-specific, risk-based, adaptive, clear, proportional, universal, coherent and consistent should be revised or removed.
Business Roundtable appreciates your consideration of our comments and looks forward to working with the Administration to continue U.S. leadership and innovation in AI. For any questions, please contact Amy Shuart, Vice President of Technology & Innovation, Business Roundtable, at ashuart@brt.org or (202) 496-3290.
Footnotes
1. Multistate.AI Legislative Tracker, accessed at: https://www.multistate.ai/artificial-intelligence-ai-legislation
2. For example, the Transparency in Frontier Artificial Intelligence Act (TFAIA) (SB 53), signed on September 29, 2025, available at: https://leginfo.legislature.ca.gov/faces/billVersionsCompareClient.xhtml?bill_id=202520260SB53, and the Generative Artificial Intelligence Training Data Transparency Act (AB 2013), signed on September 28, 2024, available at: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2013.
3. NTIA AI Accountability Policy Report, discussion of AI System Disclosures, available at: https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/developing-accountability-inputs-a-deeper-dive/information-flow/ai-system-disclosures
4. See Business Roundtable recommendations for permitting reform in Building a Prosperous Future (September 2025), https://www.businessroundtable.org/building-a-prosperous-future