“AI execs used to beg for regulation. Not anymore.”

Sam Altman, CEO of OpenAI, cautioned during a Senate hearing on Thursday that requiring government approval prior to releasing advanced artificial intelligence systems could severely hinder the United States’ competitive edge in the global AI landscape. Altman described such regulation as potentially “disastrous” for American innovation.
The comments represent a notable departure from Altman’s previous stance. In a Senate appearance two years earlier, he recommended establishing a federal agency to license and oversee AI development, calling it his top suggestion for ensuring the technology’s safe deployment.
This change reflects a broader realignment in U.S. policy and industry attitudes toward artificial intelligence. Once dominated by concerns about AI posing existential threats, the prevailing sentiment among leading tech executives and policymakers, particularly under the Trump administration, has shifted toward accelerating AI development to secure economic and geopolitical advantages, especially over China.
Senator Ted Cruz (R-Texas), chair of the Senate Committee on Commerce, Science, and Transportation, echoed that sentiment, stating that U.S. leadership in AI depends on minimizing regulatory barriers that might slow innovation.
This pro-growth stance is now supported by several former venture capitalists serving in the Trump administration, including Vice President JD Vance, who has emerged as a strong advocate for a laissez-faire approach to AI both domestically and internationally.
However, critics argue that this deregulation-first strategy overlooks pressing real-world harms associated with AI. Researchers have documented biases in AI systems and the proliferation of AI-generated nonconsensual explicit content, including deepfakes and child exploitation imagery. In response, Congress passed bipartisan legislation in April criminalizing the publication of such content.
Rumman Chowdhury, who served as the U.S. science envoy for AI under the Biden administration, accused the tech industry of diverting attention from immediate risks by emphasizing long-term, speculative threats such as “superintelligent” AI. She argued that this narrative enabled companies to position national security concerns as justification for delaying regulation.
The AI boom, ignited by OpenAI’s release of ChatGPT in November 2022, spurred both enthusiasm and anxiety. Industry leaders and researchers, some aligned with the AI safety movement, warned about the need to manage hypothetical future risks. In May 2023, hundreds of executives and academics signed a statement calling AI an existential threat on par with pandemics and nuclear weapons.
These concerns initially gained traction in policy circles, influencing global forums such as the 2023 U.K. AI Safety Summit, where then-Vice President Kamala Harris emphasized the need to address both present and future dangers posed by AI technologies.
By contrast, the 2025 follow-up summit in Paris marked a clear pivot. Safety concerns were downplayed, and world leaders emphasized accelerating AI innovation. Vice President Vance used the platform to argue against overregulation, singling out the European Union’s AI regulatory framework as overly restrictive.
In line with this new policy direction, President Trump swiftly repealed Biden-era AI regulations upon taking office. Those rules had mandated transparency and safety testing for the most powerful AI models. Critics in the tech sector had argued that the regulations disproportionately benefited large companies and stifled startups.
Trump’s administration has since appointed prominent tech industry figures, such as David Sacks, to leadership roles shaping AI and cryptocurrency policy. The administration has also reversed restrictions on semiconductor exports that were designed to curb China’s AI development.
Tech companies have responded by aligning with the new regulatory climate. Microsoft President Brad Smith, who previously advocated for a dedicated AI oversight agency, now supports a “light-touch” regulatory framework. He cited permitting delays for AI data centers as a primary obstacle to expansion.
Other leading AI firms, including Google DeepMind, OpenAI, Meta, and Anthropic, have also updated their policies to allow engagement with defense and military projects—reversing earlier commitments to avoid such uses.
Some experts continue to urge caution. MIT professor Max Tegmark, president of the Future of Life Institute, criticized the lack of oversight, arguing that AI firms face fewer safety requirements than small businesses: a restaurant must pass a health inspection before serving food, he noted, while powerful AI models can be released with no comparable review.
Tegmark’s group and others remain active in researching and advocating for safeguards. A recent summit in Singapore brought together AI safety researchers to advance those efforts, with Tegmark describing it as a hopeful sign following setbacks in Paris.
Source article posted here: https://css.washingtonpost.com/technology/2025/05/08/altman-congress-openai-regulation/