
Harmful AI rules: Now brought to you by Europe & Co., Inc.


Companies, many of them from outside Europe, will play a key role in deciding the details of the European Union’s planned rules for potentially dangerous artificial intelligence. But corporate influence over decisions that risk human rights has some activists worried.

The EU’s new law on artificial intelligence aims to protect people from harmful AI by cracking down on discriminatory, opaque and uncontrolled algorithms that are increasingly being used to make life-changing judgments on immigration, policing, social benefits and schooling.

The rules don’t target “normal kinds of products,” but aim to halt “potential violation of constitutional rights, whether it’s about the use of biometric surveillance, discrimination or your access to employment and education,” said Iverna McGowan, the director of the Center for Democracy and Technology in Europe. She criticizes entrusting “private-sector dominated” European standards groups to shape the final rules.

But that’s the way the EU’s Artificial Intelligence Act is designed. It leans on industry forums, such as CEN-CENELEC and ETSI, to outline the technical instructions that ensure AI systems are trained on unbiased data and ultimately determine how much human oversight is needed and what needs to be done to prevent the software from going off-track.

France-based ETSI counts over 900 members, including tech giants like Microsoft and Facebook’s parent Meta Platforms as well as European defense companies like Thales and Chinese telecoms equipment provider Huawei. The ETSI group coordinating AI work is led by executives from Japanese telecoms company NEC, China-based Huawei and U.S. chipmaker Intel. 

The ETSI organization has what researchers in the United Kingdom have described as a “pay-to-play” model that gives members paying higher subscription fees more votes in meetings. That makeup can give an advantage to larger and richer corporations, and to global companies able to sign up many national chapters as distinct members. Huawei, for instance, is represented by six members (from Huawei Technologies to HUAWEI TECH. GmbH).

CEN-CENELEC includes industry standards experts from 34 European countries, including some from non-EU countries such as the U.K., Serbia and Turkey. The group ultimately represents thousands of EU and non-EU companies. 

However, some say that participation will also draw non-EU companies into embracing European industrial standards, since companies building high-risk AI systems will have to assess their own compliance against those standards.

Conflict of interest

Standards organizations set the rules that make products and services work. They draw up technical specifications to determine the quality and safety of everything from teddy bears and batteries to complicated machinery and even data transfers. European groups like CEN-CENELEC and ETSI have been crucial in getting telecom companies to agree on global shared standards for mobile networks and cybersecurity.

But using the industry’s bureaucrats to figure out how to bring ethics into AI is a step too far for some, when companies’ primary aim is to grab a share of a global AI market estimated to be worth more than €1 trillion by 2029.

“AI is a hugely profitable endeavor that is reshaping multiple areas of society and is not going to be fixed by a piece of legislation that treats it like a toy, a radio or a piece of protective equipment,” said Michael Veale, an associate professor in digital rights and regulation at University College London.

MEP Dragoș Tudorache is one of the European Parliament’s point persons on the AI rules | Alain Rolland/European Union

Companies focus on getting “products to the market but the AI Act seeks to limit harms,” said Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties.

Engineers and technical experts in standards groups will likely struggle with ethical questions when they are tasked with translating an AI law into standards, Shrishak said. They will have to decide what constitutes fair and representative data sets for algorithms, and how much documentation, transparency and human control an AI program requires.

These decisions are critical. Flawed data at the heart of AI programs can reinforce social inequalities and prejudices and have far-reaching consequences like misdiagnosing diseases for minority racial groups and limiting job opportunities for women.

Ethical and geopolitical standards

Standards groups don’t see themselves as vessels of corporate influence. On the contrary, they argue their working methods show they can reach “solutions that take all points of view into account,” according to Markus Mueck, an Intel engineer and first vice chair of ETSI’s coordinating group on AI.

They’re not being asked “to start developing standards about ethics” but merely to implement them, said Constant Kohler, who manages the work on AI for CEN-CENELEC.

The European Commission said that it would have the last say in checking the standards drafted for the AI Act. It also wants European standards groups to involve human rights campaigners in the decision-making on harmful AI.

At the same time, standards groups are under pressure to overhaul how they work as geopolitical tensions brew over supply chains. The Commission is pushing standards organizations to limit the influence of large companies and reform their governance by 2022 to “fully represent the public interest.” Effectively, that means drafting more nongovernmental organizations and curbing the power of non-European companies.

Renew MEP Dragoș Tudorache, one of the European Parliament’s point persons on the AI rules, believes industry should continue to drive standard-setting, but wants rules to go a little further in banning companies controlled by some authoritarian regimes from industry standard-setting.

“Industry is not the enemy,” he said, but it now has “increased responsibility… in forging Europe’s digital path.”

This article has been updated to better clarify MEP Dragoș Tudorache’s position on industry standard-setting.
