The European Parliament overwhelmingly approved the AI Act, the first legislation of its kind, which aims to “promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.”
This week, the European Union (EU) once again delivered proof that it is leading the way when it comes to curbing the power of Big Tech, protecting the privacy of users, and shielding the public from potentially harmful technologies.
On Wednesday, the European Parliament overwhelmingly approved the AI Act, which places obligations on providers of artificial intelligence (AI) systems and those who deploy them. This landmark legislation, the first of its kind, aims to “promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.”
For example, it requires AI-generated content to be clearly labeled as such.
“All eyes are on us today,” said Member of the European Parliament Brando Benifei. “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose.”
MEP Dragos Tudorache echoed that sentiment, stating that the goal of the AI Act is not to prohibit this new technology, which he says is “set to radically transform our societies through the massive benefits it can offer,” but rather to ensure that it “evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law.”
The law does prohibit a number of practices, such as predictive policing systems, untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases, and emotion recognition systems in law enforcement or the workplace.
The EU has led the way on a number of different initiatives meant to protect Internet users, such as the General Data Protection Regulation, which is a sweeping data privacy and security law.
In addition, a new law with the goal of combating misinformation, the Digital Services Act, will go into effect in August. It requires the major social media platforms doing business in the EU to take a series of steps designed to fight disinformation, such as making it easier for users to report hate speech and other illegal content, ensuring a high level of safety and privacy for minors, and preventing their algorithms from amplifying disinformation.
Violations of any of these laws can result in hefty fines; in extreme cases, the EU can impose penalties of up to 6 percent of a platform’s global revenue.
But that’s not all. The EU is also using its antitrust laws to ensure that tech companies do not have too much influence.
Also on Wednesday, the European Commission issued a preliminary finding that search engine giant Google has “abused its dominant positions” in the advertising technology market and may have to sell off part of its business to comply with European antitrust laws.
“Google has a very strong market position in the online advertising technology sector. It collects users’ data, it sells advertising space, and it acts as an online advertising intermediary,” said Margrethe Vestager, the Commission’s executive vice president in charge of competition policy. “So Google is present at almost all levels of the so-called adtech supply chain. Our preliminary concern is that Google may have used its market position to favor its own intermediation services. Not only did this possibly harm Google’s competitors but also publishers’ interests, while also increasing advertisers’ costs. If confirmed, Google’s practices would be illegal under our competition rules.”
The Commission suggested that it would not be possible to find a “behavioral remedy” that would put Google in compliance with the law and that “only the mandatory divestment by Google of part of its services would address its competition concerns.”