
Musk, Trump, and the EU's AI Regulation

Photo credit: www.cnbc.com

U.S. President-elect Donald Trump, accompanied by Elon Musk, watched the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas, on November 19, 2024.

The upcoming year is anticipated to bring significant changes to the U.S. political landscape, especially concerning the regulation of artificial intelligence (AI). Trump is set to be inaugurated on January 20, and his administration is expected to feature influential advisors from the tech industry, including Musk and biotech entrepreneur Vivek Ramaswamy. Their presence is likely to shape policies on emerging technologies like AI and cryptocurrencies.

In contrast, the regulatory environments across the Atlantic reveal a divergence between the U.K. and the European Union (EU). While the EU has adopted a stricter regulatory stance towards major tech companies influencing AI, the U.K. has opted for a more lenient approach.

As 2025 approaches, the global landscape of AI regulation stands at a pivotal moment. Key developments, including the EU’s AI Act and the actions of the forthcoming Trump administration, will play a significant role in shaping the future of AI governance.

Musk’s Influence on U.S. Policy

Elon Musk walks the halls of Capitol Hill on the day of a meeting with Senate Republican Leader-elect John Thune in Washington on December 5, 2024.

Despite not being a focal point during Trump’s election campaign, AI is anticipated to become a crucial area of focus for the incoming administration. Musk has been appointed to help lead the “Department of Government Efficiency,” a role aimed at integrating business insights into government operations. He will share this position with Ramaswamy, who previously withdrew from the presidential race to back Trump.

According to Matt Calkins, CEO of Appian, Trump's close relationship with Musk could put the U.S. in a favorable position on AI. Musk's experience as a co-founder of OpenAI and founder of his own AI lab, xAI, is viewed as an asset in formulating informed policy.

Calkins stated that having a knowledgeable individual like Musk in the administration could contribute significantly to AI strategy. Musk has been vocal about the potential risks associated with AI, emphasizing the need for regulations to prevent adverse outcomes. Hence, it is conceivable that he might advocate for safeguards against catastrophic AI risks, given his long-standing concerns in this area.

Currently, the U.S. lacks a cohesive federal framework governing AI, leading to a patchwork of state and local regulations that vary widely across the country.

The EU AI Act

On the global stage, the EU leads the charge with its comprehensive AI regulatory framework, known as the AI Act.

This landmark legislation came into effect earlier this year and has generated significant discussions within the tech industry, particularly among American companies wary of its stringent requirements. Though it is not yet fully implemented, the Act has already raised concerns that its regulations could stifle innovation.

In December, a newly established body within the EU, tasked with overseeing compliance with the AI Act, released a second draft code of practice related to general-purpose AI models. This draft offers exemptions for certain open-source AI providers, promoting public access while requiring that developers of significant AI systems conduct thorough risk assessments.

However, organizations representing large technology firms, including Amazon and Google, have voiced concerns regarding aspects of the draft code that extend beyond the initially agreed-upon parameters, particularly around copyright issues.

As the EU moves towards full implementation, pressure on American tech firms is expected to increase, especially with the upcoming enforceable provisions focused on high-risk AI applications.

UK’s Approach to AI Regulation

The United Kingdom, under Prime Minister Keir Starmer, is signaling its intent to develop AI legislation while favoring a principles-based approach that contrasts with the EU’s more stringent regulations.

Previously cautious about imposing heavy regulations on AI model developers due to potential constraints on innovation, the U.K. government has recently announced consultations aimed at addressing copyright concerns in AI training. This decision acknowledges the ongoing debate surrounding the use of copyrighted materials in training AI systems.

The proposed measures would allow for an exception to existing copyright law specifically for AI training, affording rights holders the option to exclude their works from such use. Analysts suggest that the U.K. could potentially lead the international conversation on copyright issues arising from AI, benefitting from a relative lack of lobbying pressure that has influenced U.S. policy.

Geopolitical Tensions and AI Regulation

The reemergence of Donald Trump as President could add another layer of complexity to U.S.-China relations, particularly in the realm of AI governance.

Trump’s previous term saw a series of confrontational policies towards China, including trade restrictions on technology companies like Huawei and attempts to ban apps like TikTok. These measures reflected broader concerns about technological competition, specifically in the rapidly evolving AI sector.

China is investing heavily to gain a competitive edge in AI, while also trying to mitigate U.S. restrictions on critical technology access. The race for AI superiority raises fears about the possibility of developing AI systems that might surpass human intelligence, with competing nations potentially creating advanced technologies without adequate safeguards.

As the geopolitical landscape shifts, experts like Max Tegmark warn of the implications of an AI arms race between the superpowers. He advocates for unilateral safety standards to ensure that companies within each nation operate responsibly, thereby mitigating the risks of uncontrollable advancements in AI.

To proactively address potential issues surrounding AI regulation, international discussions have already begun, such as the global AI safety summit hosted by the U.K. in 2023, where key representatives from both the U.S. and China participated.

– CNBC’s Arjun Kharpal contributed to this report

Source
www.cnbc.com
