AI regulation is no longer theoretical; it is unfolding in real time across states, courtrooms and corporate boardrooms. Between New York’s RAISE Act, California’s SB 53 transparency mandates and the US House’s proposed 10-year moratorium on state AI laws, policymakers and technology companies are weighing how to build trust in these systems without slowing innovation. At the center of this discussion are disclosure requirements, safety reporting mandates and training data transparency rules, tools used together to make advanced AI models more accountable. The round-up below offers perspectives on balancing innovation, federal action, state legislation and the growing demand for transparency.

New York’s RAISE Act Is the Blueprint for AI Regulation to Come (Bloomberg Law)

The RAISE Act joined California’s SB 53 in establishing a disclosure-driven framework for governing the most powerful AI models in the market. These pieces of legislation don’t feature radically new ideas. Instead, they signal how targeted, state-level AI governance is likely to scale, converge, and settle over the next few years.

Trump’s executive order can’t prevent states from passing AI laws, but the threat of litigation and funding consequences is meant to chill state-level action. That New York proceeded anyway suggests confidence that disclosure-based regulation can withstand legal challenges, or that the political calculus favored moving forward despite the risk. And the risk may have been worth it, as Utah is now considering a bill inspired by the New York and California laws.

After Setback, Tech Firms Renew Push for Federal AI Regulation (WSJ)

As Congress debated the proposed moratorium in recent weeks, Microsoft released a report saying the company was “well prepared to comply” with emerging AI regulations.

Though Microsoft has lobbied for a federal AI law, it supports a compromise allowing states to regulate certain areas, including protecting consumers from the use of AI in fraud. “We will continue to advocate for state and federal public policies that support our business goals,” a company spokesperson told The Wall Street Journal.

How to Regulate, or Not Regulate, AI (The Regulatory Review)

Despite its global popularity, data-driven generative AI is fairly new. Although experts understand how it works, for most users AI is a black box that they ask for information and advice, much as they would consult a human expert such as a doctor, a lawyer, or an engineer. Increasingly, AI is shaping all kinds of human decision-making. But to what long-term effect? We do not know.

The real governance challenge that AI forces us to confront may be quite different from our initial impulses. Rather than quickly addressing a few obvious goals, real—and really smart—AI regulation may require a rewiring of our regulatory processes and institutions, not only to facilitate but also to embrace with humility the constant need to observe, learn, and adjust.

New York mandates AI model safety requirements (CIO Dive)

It is the first state AI law to be enacted since Trump signed his executive order earlier this month, which mandates creation of the AI Litigation Task Force to challenge state AI laws that interfere with existing federal laws or “unconstitutionally regulate interstate commerce.” The order was the culmination of a year spent removing federal agency enforcement power and reducing regulations. It also aligns with outcry from tech companies about the burdensome patchwork of state AI laws.

The law implements a 72-hour reporting requirement for AI model safety incidents, which could include frontier models autonomously engaging in behavior outside of what a user requested, critical failure of technical controls, or theft, malicious use or unauthorized access to the frontier model. 
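To make the reporting rule concrete, here is a minimal sketch of how a compliance team might model the incident categories and the 72-hour window described above. The incident types come from the article's summary of the law; all class, field, and function names are illustrative assumptions, not statutory language.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

# Categories paraphrased from the article's summary of the law;
# names are illustrative assumptions, not the statute's own terms.
class IncidentType(Enum):
    AUTONOMOUS_BEHAVIOR = "frontier model acted outside what a user requested"
    CONTROL_FAILURE = "critical failure of technical controls"
    THEFT_OR_MISUSE = "theft, malicious use, or unauthorized access to the model"

REPORTING_WINDOW = timedelta(hours=72)  # 72-hour reporting requirement

@dataclass
class SafetyIncident:
    incident_type: IncidentType
    discovered_at: datetime
    description: str

    def reporting_deadline(self) -> datetime:
        """Latest time a report can be filed under a 72-hour rule."""
        return self.discovered_at + REPORTING_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        """True if the reporting window has already closed."""
        return now > self.reporting_deadline()
```

A tracker like this simply flags when the clock runs out; what counts as a reportable incident, and to whom the report goes, is defined by the law itself.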

Inspire Versus Require: The New Mandate For AI Leadership (Forbes)

One of the clearest lessons from our own journey is that AI adoption works best when it starts with a real pain point owned by the people closest to the work. In our case, that came from our customer success operations team.

The new leadership mandate is clear: Inspire first. Require next. The companies that master this balance and cultivate a culture of curiosity, agency and urgency will build organizations that don’t just keep up, but adapt ahead of the market.

Because today’s AI isn’t merely about automating what’s been done before, it’s about surfacing what could be done differently. Leaders who hold teams to high standards and simultaneously give them the freedom to explore will unlock not only adoption but transformation.

Anthropic backs California bill that would mandate AI transparency measures (NBC News)

Among other conditions, the bill would require large AI companies offering services in California to create, publicly share and adhere to safety-focused guidelines and procedures stipulating how each company attempts to mitigate risks from AI. The bill would also strengthen whistleblower requirements by creating stronger pathways for employees to flag concerns about severe or potentially catastrophic risks that might otherwise go unreported.

The new California bill would apply only to AI companies building cutting-edge models that demand massive computing power. Within that subset of AI companies, the strictest requirements in the bill would apply only to those with annual revenues exceeding $500 million.

SB 53 would also establish an emergency reporting system through which an AI developer or members of the public could report critical safety incidents related to a model.

US House Passes 10-Year Moratorium on State AI Laws (Tech Policy Press)

Supporters of the moratorium say it would stop a confusing patchwork of state AI laws that have cropped up nationwide and give Congress space to craft its own AI legislation while preserving American leadership. Opponents call it a dangerous giveaway to tech firms that would leave consumers — particularly vulnerable communities and children — unprotected and wipe out a flurry of state laws that address everything from deepfakes to discrimination in automated hiring.

2026 Year in Preview: AI Regulatory Developments for Companies to Watch Out For (Wilson Sonsini)

The cyber insurance market is undergoing an AI-related transformation, with many carriers increasingly conditioning coverage on the adoption of AI-specific security controls. Insurers have begun introducing “AI Security Riders” that require documented evidence of adversarial red-teaming, model-level risk assessments, and specialized safeguards as prerequisites for underwriting. We expect this trend to continue in 2026, and it will become increasingly common for insurance carriers to require alignment with recognized AI risk management frameworks as a baseline for “reasonable security.”

Comply with new AI training data transparency requirements. Under California AB 2013, developers of generative AI systems (subject to narrow exceptions) must now publicly disclose information about their systems’ training data, including detailed summaries of the datasets used for training. Covered developers must disclose, among other things, the number of data points within the datasets, whether the datasets include protected intellectual property or personal information, and whether the datasets were purchased or licensed.
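The disclosure fields named above can be sketched as a simple per-dataset record that a developer might render for public posting. The fields mirror the article's summary of AB 2013; the schema, field names, and JSON format are illustrative assumptions, not the statute's prescribed layout.

```python
from dataclasses import dataclass, asdict
import json

# Fields mirror the AB 2013 summary above; names and structure
# are illustrative assumptions, not the statute's own schema.
@dataclass
class DatasetDisclosure:
    dataset_name: str
    num_data_points: int
    contains_protected_ip: bool
    contains_personal_info: bool
    purchased_or_licensed: bool

def publish_disclosure(datasets: list[DatasetDisclosure]) -> str:
    """Render per-dataset summaries as a JSON document for public posting."""
    return json.dumps([asdict(d) for d in datasets], indent=2)
```

In practice the actual disclosure would follow whatever format and level of detail the statute and implementing guidance require; this only shows the kind of structured record the listed fields imply.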

Will AI regulation allow for more trust, transparency and protection without slowing advancements?

FischTank PR is a top technology PR firm working with companies that innovate across industries through AI offerings. If you’re seeking a proactive media relations program to highlight your solutions or industry commentary, reach out to us at [email protected].

***News roundup guest post from FischTank PR intern Abby Collins***