In 2023, the number of users of artificial intelligence (AI) tools in Europe grew to almost 80 million. In the same year, over a third of European companies adopted AI.
With the increased use of AI come increased concerns about how it is used: deepfakes, privacy violations, algorithmic bias, and so on.
This is where the EU AI Act comes into play. It was proposed by the European Commission in 2021 and approved by the European Parliament this year. The Act is similar in spirit to Europe's General Data Protection Regulation (GDPR), imposing requirements on firms that use or design AI within the European Union. It is considered the world's first comprehensive AI regulation from a major regulator.
Regulatory non-compliance could lead to fines ranging from €7.5 million or 1% of worldwide annual turnover to €35 million or 7% of worldwide annual turnover, whichever is higher. Most violations of the Act are expected to cost companies up to €15 million or 3% of annual global turnover.
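To make the fine structure concrete, here is a minimal sketch of the "whichever is higher" rule, with the tier figures hard-coded from the ranges above. It is an illustration only, not legal advice:

```python
# Illustrative sketch of the AI Act's "whichever is higher" fine rule.
# Tier figures are taken from the ranges quoted above; not legal advice.

def max_fine(fixed_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier: the greater of the
    fixed amount and the percentage of worldwide annual turnover."""
    return max(fixed_eur, pct_of_turnover * turnover_eur)

turnover = 2_000_000_000  # hypothetical firm with €2 billion annual turnover

# Prohibited-practice tier: €35 million or 7% -> 7% applies here (€140 million)
print(max_fine(35_000_000, 0.07, turnover))

# Most other violations: €15 million or 3% -> 3% applies here (€60 million)
print(max_fine(15_000_000, 0.03, turnover))
```

For a small firm the fixed amount dominates; for a large one the turnover percentage does, which is what gives the rule its teeth.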
To ensure better conditions for the development and use of AI, the EU wants to regulate it. To transform Europe into a global hub for trustworthy AI, the Act lays out rules governing how AI is used, developed, and marketed.
The EU AI Act aims to ensure that, within the EU, AI systems are safe and respect fundamental rights. Furthermore, it is meant to encourage a single EU AI market and promote innovation and investment in AI.
The AI Act is distinguished by its classification of AI systems into four risk categories: unacceptable, high, limited, and minimal.
Unacceptable
AI systems posing these risks are prohibited outright. Examples include biometric categorization systems, social scoring, the compilation of facial recognition databases via untargeted scraping, and the use of AI to exploit people's vulnerabilities in order to distort their behavior.
High
These AI systems are permitted, but they must comply with multiple requirements, including an accountability framework, documentation, and rigorous testing. Conformity assessments for high-risk systems must be completed before the products are placed on the market.
Examples include AI systems embedded in certain products, such as cars, aircraft, and medical devices, and systems used in specific areas, such as law enforcement, employment, and migration.
Limited & Minimal
Systems that fall outside the prohibited and high-risk categories are considered limited or minimal risk.
Examples of limited-risk systems include personalized recommendations on eCommerce platforms, spam filters, and basic image recognition software. These may face transparency requirements: for instance, users must be informed when they are interacting with an AI system or viewing AI-generated content.
Generative AI, such as ChatGPT, will need to follow both EU copyright law and transparency requirements. This includes disclosing summaries of the copyrighted data used for training, labeling content as AI-generated, and preventing the model from generating illegal content.
General-purpose, high-impact models that could pose a systemic risk, such as GPT-4, will need to undergo rigorous evaluations, and serious incidents will have to be reported to the European Commission.
Minimal-risk systems are thought of as the “safest” AI systems. They often use non-sensitive data and minimally impact people’s lives. Examples might include filters that adjust image brightness or AI-driven chess games. For these, the EU imposes minimal or no regulations.
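For teams taking stock of their AI portfolio, the four-tier structure maps naturally onto a simple classification table. The sketch below is purely illustrative: the RiskTier labels, obligation summaries, and example mappings are our own shorthand for the categories described above, not terms from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Shorthand labels for the AI Act's four risk categories (illustrative)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted with conformity assessment, documentation, and testing"
    LIMITED = "permitted with transparency obligations"
    MINIMAL = "permitted with minimal or no obligations"

# Illustrative mapping of the examples discussed above to tiers.
# A real classification exercise must follow the Act's annexes and legal counsel.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "untargeted facial-recognition scraping": RiskTier.UNACCEPTABLE,
    "medical-device AI": RiskTier.HIGH,
    "employment screening": RiskTier.HIGH,
    "eCommerce recommendations": RiskTier.LIMITED,
    "spam filter": RiskTier.LIMITED,
    "image-brightness filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```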
Under the Act, all parties involved in the use, distribution, manufacture, development, or import of AI systems will be held accountable. The Act also applies to deployers and providers of AI technology located outside the EU, for example in Switzerland, if the system's output is intended for use within the EU.
Beyond its direct scope, the Act is expected to shape how AI technology is designed, marketed, and deployed around the world. It may even inspire domestic policy changes in other countries: like the GDPR, the EU AI Act could become a worldwide gold standard.
On 13 March 2024, the AI Act was adopted by the European Parliament. It is expected to come into force 20 days after publication in the Official Journal of the European Union, and to become fully applicable 24 months after that.
However, certain parts will be applicable sooner, such as codes of practice, the banning of unacceptable-risk AI systems, and requirements for general-purpose AI systems.
Certain high-risk systems will be given more time to comply: 36 months after entry into force.
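The staggered deadlines are easy to project once a publication date is known. The sketch below uses a placeholder publication date (the real date depends on when the Act appears in the Official Journal), and the add_months helper is our own:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (clamping day overflow)."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30,
                31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

# Placeholder publication date in the Official Journal (hypothetical).
published = date(2024, 7, 1)

entry_into_force = published + timedelta(days=20)        # publication + 20 days
fully_applicable = add_months(entry_into_force, 24)      # + 24 months
high_risk_deadline = add_months(entry_into_force, 36)    # + 36 months

print("Entry into force:", entry_into_force)
print("Fully applicable:", fully_applicable)
print("Certain high-risk systems:", high_risk_deadline)
```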
The EU AI Act stands as a watershed moment in the global AI and ML landscape, with implications for countries around the world.
As the Act comes into force, it is crucial for policymakers and stakeholders worldwide to engage proactively with the evolving regulatory environment. By preparing for the new rules on artificial intelligence, businesses can demonstrate responsible AI use, deployment, and development.
Silverskills and its subsidiary, Silverse, can provide services that will help you successfully navigate the Act’s intricacies, from regulatory compliance to AI/ML services. Contact us now to get started.