
Artificial intelligence has already become a fixture in many areas of our professional and private lives. The technical solutions it provides are also finding their way into our day-to-day work rather than remaining the preserve of niche applications or the training environment. The problem is that it is getting harder to tell at a glance whether these solutions rely on foundation models such as OpenAI’s large language models, or indeed which technology they use at all. There are several reasons for this: providers fail to clearly label their services or results as ‘generated by AI’, integration into popular software tools such as PowerPoint keeps improving and, last but not least, authors do not always consistently cite external sources.

Apart from the obvious question of copyright in documents and texts, we also need to get the basics right, especially when it comes to AI models. This is where the EU AI Act comes into play: a framework that puts AI-based solutions and what they offer firmly in the spotlight.

A brief look back at how the EU AI Act came into being

Given that US-based technology companies such as Alphabet Inc. (the Google group) and Meta (which includes Facebook, WhatsApp and Threads) had effectively secured sovereignty over personal data, experts in the EU decided they were no longer content to twiddle their thumbs when it came to AI. In 2018, a strategy paper was drawn up that, among its key decisions on the digital and technological direction of the bloc, called for AI to be subject to consistent regulation. Various expert commissions spent the next three years refining this digital strategy.

Once the EU had confirmed what it wanted to achieve and a large number of amendments from the Member States had been incorporated, the European Parliament held an initial plenary session with an extensive debate on the draft bill in October 2022. The input from these discussions, from various parliamentary committees and again from the Member States then enabled the European Council to publish a watered-down version of the act in December, which the Parliament adopted as the final proposal on 14 June 2023.

An agreement on the last disputed points is expected before the end of the year. We expect minor adjustments, for example on whether national authorities are permitted to monitor public spaces using biometric facial recognition or whether this use case warrants an explicit exception. After that, companies have a period of 15 months to comply with the requirements. This period – the ‘grace period’ – begins once the act comes into force; the act is legally binding in all Member States without the need for national ratification. Companies that do not meet the requirements by this deadline must expect sanctions that are designed to go beyond those of the GDPR.

What is the EU AI Act?

The proposal for the EU AI Act begins with the definition of AI, which covers machine learning concepts, statistical approaches as well as logic- and knowledge-based approaches. This quickly expands the scope of application to include traditional recommendation engines or chatbots that do not use ‘new’ AI technologies.

The main focus here is on both the risk classification and the definition. The act distinguishes between:

  • AI systems with low risk,
  • AI systems with high risk (defined in Article 6) and
  • prohibited AI systems (defined in Article 5).

While low-risk AI systems will only feel the changes introduced by the act to a limited extent – a labelling obligation, for example – the impact on the other two risk classes is far greater.
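In project practice, this classification is often the starting point for an internal inventory of AI-based solutions. The following Python sketch is purely illustrative and not prescribed by the act; the `RiskClass` enum, the `AISystemRecord` structure and the example entry are our own assumptions about how such an inventory could be modelled.

```python
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    """Risk classes distinguished by the EU AI Act (illustrative mapping)."""
    LOW = "low risk"            # e.g. subject to a labelling obligation
    HIGH = "high risk"          # Article 6 / Annexes II and III
    PROHIBITED = "prohibited"   # Article 5


@dataclass
class AISystemRecord:
    """Minimal entry for a hypothetical internal inventory of AI systems."""
    name: str
    purpose: str
    risk_class: RiskClass


# Hypothetical example entry for a customer-facing chatbot
chatbot = AISystemRecord(
    name="support-chatbot",
    purpose="Answer customer questions about existing contracts",
    risk_class=RiskClass.LOW,
)
print(f"{chatbot.name}: {chatbot.risk_class.value}")
```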

Prohibited AI systems include:

  • Subliminal influence techniques that operate beyond a person’s conscious awareness
  • Schemes that exploit the weakness or vulnerability of a particular group of people because of their age or physical or mental disability
  • Systems designed to assess or classify the trustworthiness of natural persons over time on the basis of their social behaviour or known or predicted personal characteristics or personality traits
  • The use of real-time remote biometric identification systems in publicly accessible areas for law enforcement purposes

As you can see, protecting consumers and natural persons is very much the focus here.

The term ‘high-risk systems’ refers to all systems from the (application) areas listed in Annexes II and III of the act:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructures (CRITIS)
  • Education and training
  • Employment, personnel management and access to self-employment
  • Administration of justice and democratic processes

A variety of safeguards need to be put in place for these use cases so that the aim of the EU AI Act – the protection of EU citizens – is upheld. These measures include safeguarding software and technology by traditional means as well as covering AI-specific aspects such as traceability and (cyber) security.
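What such AI-specific safeguards look like in code depends heavily on the system in question. As a purely illustrative sketch – the function name, fields and file format are our own assumptions, not requirements taken from the act – a simple audit log for model predictions shows one way traceability could be supported:

```python
import hashlib
import json
from datetime import datetime, timezone


def log_prediction(model_name: str, model_version: str,
                   features: dict, prediction,
                   log_file: str = "audit_log.jsonl") -> None:
    """Append a traceable record of a single model prediction.

    The input features are hashed so the record can later be matched to the
    original data without storing personal data in the log itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Hypothetical usage: record a credit-scoring decision for later review
log_prediction(
    model_name="credit-scoring",
    model_version="1.4.2",
    features={"income": 52000, "employment_years": 7},
    prediction="approved",
)
```

In practice, such a log would be one small building block alongside documentation, human oversight and the security measures the act calls for; it is meant here only to make the idea of traceability tangible.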

The future starts now

The EU’s AI Act is not yet final. Nevertheless, since AI, data and data processing play an important role in our projects, we should start engaging with the act now and internalise it. We have to be aware that, with our understanding of these technologies, we are pioneering their use, as the majority of people today still have no access to them. It is also still difficult to predict exactly when the technology will fully penetrate everyday life, in other words how widely it will be used among the population. And we must never forget that we have to think about tomorrow’s world today. That means setting the course in our projects and plans now and creating a framework that allows us to use the results in the years to come in compliance with the regulation.

Conclusion

As it did with the General Data Protection Regulation, the EU is once again leading the way – and that is an opportunity for us all. We should not see the act as a brake, but as an opportunity. The Brussels effect – the idea that good initiatives find imitators worldwide – can also be expected in this case; initial moves in this direction, particularly from the US, are already visible.

Would you like to find out more about AI and what we can do to support you? Then check out our website.

You will find more exciting topics from the adesso world in our latest blog posts.


Author Christian Hammer

After successfully completing his degree in business informatics at the University of Applied Sciences in Würzburg with a focus on e-commerce, Christian Hammer built a career spanning several roles and technologies in the development of data analytics solutions. Over the years, he took on increasing responsibility, first as a lead developer, later as an architect and project manager – including in the merger of E-Plus and O2. Today, he almost exclusively takes on consulting assignments in strategy consulting or as a project or programme manager. Christian focuses on business analysis in the context of data integration, data platforms, big data and artificial intelligence.
