AI policy is one of those topics that sounds boring until you realize it determines whether you can use the tools you depend on, whether your company faces million-dollar fines, and whether the AI industry develops in a way that actually benefits people.
So let’s talk about what’s happening in AI policy right now, because a lot is happening and most of it isn’t making headlines.
The US Policy Space: Organized Chaos
The United States still doesn’t have a comprehensive federal AI law, and at this point, it probably won’t get one anytime soon. What it has instead is a growing collection of executive orders, agency guidance, and state-level legislation that adds up to something resembling a policy framework — if you squint.
The Biden-era executive order on AI safety established some important precedents: reporting requirements for large AI training runs, safety testing standards, and guidelines for government AI use. The current administration has kept some of these in place while rolling back others, creating uncertainty about which rules actually apply.
Meanwhile, federal agencies are doing their own thing. The FTC is going after companies that make misleading claims about AI capabilities. The FDA is developing frameworks for AI in medical devices. The SEC is looking at AI in financial services. The EEOC is concerned about AI in hiring. Each agency has its own approach, its own timeline, and its own enforcement priorities.
The result: if you’re building AI in the US, you need to track dozens of different regulatory bodies and their evolving positions. It’s manageable for large companies with legal teams. It’s a nightmare for startups.
State-Level AI Laws Are Exploding
This is the story that deserves more attention. While Congress debates and delays, state legislatures are acting.
Colorado passed one of the first comprehensive AI governance laws, requiring companies to disclose when AI is used in consequential decisions and to conduct impact assessments. California has multiple AI-related bills in various stages of progress. Illinois, Texas, New York, and others are all working on their own approaches.
The problem: these laws don’t always agree with each other. A company operating in all 50 states could theoretically need to comply with 50 different AI regulatory frameworks. That’s not sustainable, and it’s one of the strongest arguments for federal legislation — not because federal regulation is inherently better, but because a single framework is easier to comply with than 50 different ones.
International Policy Developments
The EU AI Act implementation. The European Commission is publishing guidance documents to help companies understand their obligations under the AI Act. Standards bodies are developing technical standards for compliance. The AI Office is building enforcement capacity. It’s a massive undertaking, and the details matter enormously for companies operating in Europe.
The UK’s evolving approach. The UK government is developing a more structured AI regulatory framework, moving away from the previous government’s lighter-touch approach. The details are still emerging, but expect something between the EU’s comprehensive regulation and the US’s sector-specific approach.
Global South engagement. Countries in Africa, Latin America, and Southeast Asia are increasingly active in AI policy discussions. Many are developing their own frameworks, often drawing on elements from the EU, US, and Chinese approaches. The risk: AI policy becomes another area where wealthy countries set the rules and developing countries have to follow them.
International coordination efforts. The OECD, G7, and various UN bodies are all working on AI governance frameworks. Progress is slow, but the conversations are happening. The most concrete outcome so far: the OECD AI Principles, which provide a common reference point even if they’re not legally binding.
The Policy Issues That Matter Most
AI and employment. How should governments handle AI-driven job displacement? Retraining programs? Universal basic income? New labor protections? Every country is grappling with this, and nobody has a great answer yet.
AI and intellectual property. Can AI-generated content be copyrighted? Can AI models be trained on copyrighted material? Different jurisdictions are giving different answers, and the legal battles are just beginning.
AI and national security. Export controls on AI chips and technology are reshaping the global AI landscape. The US restrictions on AI chip exports to China are the most significant, but other countries are implementing their own controls.
AI transparency and accountability. When AI makes a decision that affects your life — a loan application, a job interview, a medical diagnosis — should you have the right to know how that decision was made? Most policymakers say yes, but the technical and practical challenges of AI transparency are significant.
What to Watch
The first EU AI Act enforcement actions. These will set precedents that shape how the law is interpreted for years to come.
US federal AI legislation attempts. Several bills are in various stages of development. None are likely to pass soon, but the debates will signal where policy is heading.
AI in elections. With major elections happening globally, AI-generated political content is a hot-button issue. Expect new rules and enforcement actions around deepfakes, AI-generated ads, and automated campaigning.
AI liability court cases. As AI systems cause more real-world harm, courts will increasingly be asked to determine who’s responsible. These decisions will create de facto policy in areas where legislation is unclear.
My Take
AI policy in 2026 is moving fast but not fast enough. The technology is advancing faster than policymakers can respond, and the gap between what AI can do and what the rules say about it is growing.
The most productive approach for companies: don’t wait for regulation to tell you what to do. Build responsible AI practices now, document your decisions, and be prepared to adapt as rules evolve. The companies that invest in governance early will have a significant advantage when enforcement begins in earnest.
🕒 Originally published: March 12, 2026