Europe’s AI Law Needs a Smart Pause, Not a Full Stop | Mint-OxBig News Network


(Bloomberg Opinion) — There’s a common tool in the arsenal for anyone trying to change the course of artificial intelligence: the pause. Two years ago, Elon Musk and other tech leaders published an open letter calling on AI labs to delay their development work for six months to better protect humanity. Now the target has shifted. Amid a growing fear of getting left behind in a race to build computers smarter than humans, a group of European corporate leaders is pointing the “pause” gun at the European Union, the world’s self-styled AI cop.

Like the tech bros’ letter two years ago, this is a blunt suggestion that misses the nuance of what it’s trying to address. A blanket pause on AI rules, which more than 45 companies are now demanding, won’t help Europe catch up with the US and China. It ignores a more fundamental problem: the funding that the region’s tech startups desperately need to scale up and compete with their larger Silicon Valley rivals. The idea that Europe has to choose between being an innovator and a regulator is a narrative successfully spun by Big Tech lobbyists who would benefit most from a lighter regulatory touch.

But that doesn’t mean the AI act itself couldn’t do with a pause, albeit a narrower version of what firms including ASML Holding NV, Airbus SE and Mistral AI called for in their “stop the clock” letter published on Thursday, which demands that the president of the European Commission, Ursula von der Leyen, postpone rules they call “unclear, overlapping and increasingly complex.”

On that they have a point, but only for the portion of the 180-page act that was hastily added in the final negotiations to address “general-purpose” AI models like ChatGPT. The act was originally drafted in 2021, almost two years before ChatGPT sparked the generative AI boom. It aimed to regulate high-risk AI systems used to diagnose diseases, give financial advice or control critical infrastructure. Those types of applications are clearly defined in the act, from using AI to determine a person’s eligibility for health benefits to controlling the water supply. Before such AI is deployed, the law requires that it be carefully vetted by both the tech’s creators and the companies deploying it.

If a hospital wants to deploy an AI system for diagnosing medical conditions, that would be considered “high-risk AI” under the act. The AI provider would not only be required to test its model for accuracy and biases, but the hospital itself must have humans overseeing the system to monitor its accuracy over time. These are reasonable and straightforward requirements.

But the rules are less clear in a newer section on general-purpose AI systems, cobbled together in 2023 in response to generative AI models like ChatGPT and image-generator Midjourney. When those products exploded onto the scene, AI could suddenly carry out an infinite array of tasks, and Brussels addressed that by making their rules wider and, unfortunately, vaguer.

The problems start on page 83 of the act, in the section that claims to identify the point at which a general-purpose system like ChatGPT poses a systemic risk: when it has been trained using more than 10 to the 25th power, or 10^25, floating-point operations (FLOPs), meaning the computers running the training performed at least 10,000,000,000,000,000,000,000,000 calculations during the process.
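To put that threshold in perspective, here is a rough back-of-the-envelope sketch, not drawn from the act itself: it uses the widely cited "6 FLOPs per parameter per training token" rule of thumb to estimate a model's training compute and compare it against the 10^25 figure. The model sizes below are illustrative, not references to any real system.

```python
# Illustrative sketch only. The "6 * parameters * tokens" estimate is a
# common rule of thumb for transformer training compute, not a formula
# from the AI act; the act specifies only the 10**25 FLOP cutoff.

SYSTEMIC_RISK_THRESHOLD = 10**25  # FLOPs, per the act's systemic-risk cutoff

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

def crosses_threshold(flops: float) -> bool:
    """True if estimated training compute exceeds the act's cutoff."""
    return flops > SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = estimated_training_flops(70e9, 2e12)  # ~8.4e23 FLOPs
print(crosses_threshold(flops))               # → False, well under 10**25
```

The sketch illustrates the critics' point: the threshold is a single number about training compute, and says nothing about what the resulting model can actually do.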

The act doesn’t explain why this number is meaningful or what makes it so dangerous. In addition, researchers at the Massachusetts Institute of Technology have shown that smaller models trained with high-quality data can rival the capabilities of much larger ones. “FLOPs” don’t necessarily capture a model’s power — or risk — and using them as a metric can miss the bigger picture.

Meanwhile, the act offers no such thresholds, or any other precise criteria, to define what “general-purpose AI” or “high-impact capabilities” mean, leaving those terms open to interpretation and frustratingly ambiguous for companies.

“These are deep scientific problems,” says Petar Tsankov, chief executive officer of LatticeFlow AI, which guides companies in complying with regulations like the AI act. “The benchmarks are incomplete.”

Brussels shouldn’t pause its entire AI law. It should keep on schedule to start enforcing rules on high-risk AI systems in health care and critical infrastructure when they roll out in August 2026. But the rules on “general” AI come into effect much sooner, in three weeks, and those need more time to get right. Tsankov recommends two more years.

Europe’s AI law could create some much-needed transparency in the AI industry, and were it to roll out next month, companies like OpenAI would be forced to share secret details of their training data and processes. That would be a blessing for independent ethics researchers trying to study how harmful AI can be in areas like mental health. But the benefits would be short-lived if hazy rules allowed companies to drag their heels or find legal loopholes to get out.

A surgical pause on the most ambiguous parts of the act would help Brussels avoid legal chaos, and make sure that when the rules do arrive, they work.


This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.”

More stories like this are available on bloomberg.com/opinion
