At first glance, the current landscape of artificial intelligence (AI) policy in the United States appears deregulatory. Prominent leaders, including JD Vance, advocate minimal regulation as a way to spur innovation. Congress even contemplated a decade-long ban on state-level AI legislation, and the Trump administration's "AI Action Plan" echoes calls to cut bureaucratic red tape while the technology is still young.
However, this narrative of deregulation is a misconception. While the US federal government takes a relatively hands-off stance toward consumer-facing AI applications such as chatbots and image generators, it remains deeply involved in the foundational elements that power those systems. Both the Biden and Trump administrations have closely regulated AI chips, the hardware on which advanced AI depends. The Biden administration restricted chip exports to countries such as China, citing national security, while the Trump administration pursued international deals to preserve the United States' competitive edge in AI.
This duality reveals that the US is not stepping back from AI regulation but redirecting its focus toward less visible layers. Beneath the free-market rhetoric lies significant intervention in the building blocks of AI systems. This regulation happens discreetly, often overlooked amid debates over consumer applications. Key components, including hardware, data centers, and underlying software, are now the focal points of emerging government policy.
Globally, regulatory focus is shifting toward the foundational components of AI systems rather than only the visible applications. Europe's earlier frameworks, such as the EU's AI Act, concentrated on high-risk applications (in health care, employment, and law enforcement, for example) to mitigate societal harms, while China has enacted restrictions aimed at deepfakes and inauthentic content. The US positions itself as a major player in this dynamic, prioritizing national security by controlling exports of advanced chips and even model weights, the numerical parameters that encode what a model has learned. Often couched in dense administrative language, such as "Implementation of Additional Export Controls," these rules carry significant implications for the future of AI governance.
Countries are moving beyond initial, application-focused rules aimed at societal protection toward a more multifaceted approach that weighs national security alongside societal concerns. The emerging landscape is a hybrid model suited to the complexity of modern AI: by breaking down silos between these concerns, it reduces redundant rules and makes oversight more effective, a clear trend toward integrated governance.
The rhetoric of laissez-faire often obscures the reality of US AI regulation. Viewed across the entire AI technology stack, US policy signals not a retreat but a strategic reorientation of where rules apply: a light touch on visible applications, strict control over core components.
Any international regulatory framework must acknowledge that the US, as a leading nation in AI development, cannot sustain a façade of non-regulation without consequences. Given its significant interventions targeting critical elements such as AI chips, US AI policy clearly operates under a different paradigm than laissez-faire as commonly understood. Recognizing this duality is essential for effective global cooperation and for transparency, the first step toward meaningful AI governance in an interconnected world.
As these frameworks evolve and the need for clarity in AI regulation grows, the dialogue around global governance must deepen. Without comprehensive insight into how and why governance decisions are made, the conversation can only remain an echo of what it could be: an effective, transparent, and inclusive discourse.

