More than 100 companies voluntarily sign EU AI Pact



While Meta is not among the signatories, a spokesperson told Euractiv that it hasn’t ruled out the possibility of joining the pact at a later stage.

More than 100 SMEs and multinational companies, including big names such as OpenAI, Google, Microsoft and Amazon, have become the first signatories of the EU Artificial Intelligence Pact and its voluntary pledges. However, some Big Tech companies, such as Meta and Apple, are notably absent.

The EU AI Pact calls for the participating companies to work towards future compliance with the AI Act, identify systems that might be categorised as high-risk under the Act and promote AI literacy among staff. In addition to the core pledges, more than half of the signatories signed additional pledges to ensure human oversight, mitigate risks and transparently label AI-generated content.

Currently, there are 116 signatories on the AI Pact, with the EU continuing to update the list as new pledges are signed.

A spokesperson for Meta told Euractiv yesterday (24 September) that it hasn’t ruled out the possibility of joining the pact at a later stage.

“We welcome harmonised EU rules and are focusing on our compliance work under the AI Act at this time, but we do not rule out our joining the AI Pact at a later stage.”

The landmark EU AI Act entered into force on 1 August this year and aims to regulate AI through a risk-based approach. Simply put, the higher the risk an AI system poses, the more rules apply to it. The Act will work in conjunction with existing data legislation, including the Digital Markets Act, Digital Services Act, Data Governance Act and Data Act.

The EU AI Pact is a voluntary measure that developers and organisations can sign to adopt the Act’s key measures ahead of legal deadlines.

The AI Act carries stringent penalties: companies face fines of up to 7pc of their global annual turnover for violations involving banned AI applications, up to 3pc for breaches of other obligations, and up to 1.5pc of global turnover for supplying incorrect information.

While the AI Act has been viewed as landmark regulation to rein in the growing power of AI, some have criticised parts of the legislation.

Dr Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, told SiliconRepublic.com earlier this year that the AI Act relies too much on “self-assessments” when it comes to risk.

“Companies get to decide whether their systems are high risk or not,” he said. “If high risk, they only have to perform self-assessment. This means that strong enforcement by the regulators will be the key to whether this regulation is worth its paper or not.”


Suhasini Srinivasaragavan is a sci-tech reporter for Silicon Republic

editorial@siliconrepublic.com