How Healthcare Organizations Can Balance Federal and State AI Regulations

In the third part of her Pharma Commerce video interview, Linda Malek, JD, partner at Crowell & Moring, explains how healthcare organizations can harmonize both approaches by prioritizing fairness, transparency, and adaptable systems.

The White House AI Action Plan is designed to position the United States as a global leader in artificial intelligence by accelerating development, adoption, and innovation across industries, including healthcare. According to Linda Malek, JD, partner at Crowell & Moring, the plan’s intent is twofold: to maintain US competitiveness against other nations advancing rapidly in AI, and to create an environment that fosters safe, efficient, and scalable AI implementation.

For hospitals, health systems, and health-tech companies, the plan could significantly shape the pace and scope of AI adoption through several key mechanisms. One of the most notable is the introduction of regulatory sandboxes—controlled environments where new AI tools can be tested and validated with reduced bureaucratic friction. These sandboxes are intended to streamline approval processes for AI-driven medical devices, clinical software, and digital tools, effectively “cutting red tape” and allowing innovations to reach clinical settings faster.

Another major component involves public investment in infrastructure. The plan calls on agencies such as the National Institute of Standards and Technology (NIST) and the U.S. Food and Drug Administration (FDA) to expand national data and computing infrastructure. This includes developing large-scale data centers and improving access to high-quality, diverse datasets—resources essential for training and validating AI models.

By emphasizing both regulatory flexibility and technical capacity-building, the Action Plan aims to lower barriers to entry for healthcare innovators while ensuring oversight and accountability remain in place. Ultimately, Malek suggests, this initiative could enable faster clinical integration of AI tools, strengthen domestic innovation pipelines, and help U.S. healthcare systems leverage AI more effectively for patient care, operational efficiency, and research advancement.

Malek also dives into the role sandboxes could play in helping healthcare organizations test and implement AI safely, while protecting patients; how providers and health systems prepare for potentially conflicting requirements when it comes to AI-related regulations; the steps healthcare organizations can take now to align with the Action Plan and position themselves for regulatory changes in the next few years; and much more.

A transcript of her conversation with PC can be found below.

PC: With both federal and state governments pursuing AI-related regulations, how should providers and health systems prepare for potentially conflicting requirements?

Malek: That is a challenge, certainly, because the federal signaling that we're getting is to kind of open things up and create fewer obstacles, fewer regulations, more innovation, more freedom to innovate, more building of large databases and use of the data in those databases. On the state side, you have Colorado and Utah that have specific AI legislation, but you also have quite a number of other states that regulate in the context of AI under their consumer protection laws, and those laws are very focused on individual privacy rights and on deception—companies not telling consumers how they may use their data in an AI context.

I think that there are ways to harmonize both of those requirements and sets of goals. I think that one thing that's important is to sort of map what the compliance vulnerabilities might be as you're developing your AI tools, looking at the most stringent laws, which at this point are probably the state laws. I think it's also important to focus on themes that are going to be relatively important to the federal government as well as to the state governments: themes like transparency, fairness, and accuracy as they relate to AI output, and as they relate to disclosures to consumers about how their data is going to be used for AI purposes.

Really looking at those key themes as you're developing your AI will be important to try to navigate the requirements as they sit between the state and federal governments. It's also important, obviously, to document everything and to ensure that your systems are adaptable, because the laws are going to develop and they're going to change, and so your systems are going to have to adapt to meet those requirements.