
1. The AI Act defines so-called "high-risk AI". From an industrial perspective, this includes medical devices, vehicles, HR software, banking applications, training tools, products for critical infrastructure, and safety-related products. The requirements apply to both providers and users of high-risk AI applications.

2. Companies operating in the high-risk AI sector will have to undergo a certification process covering data governance, transparency, robustness, cybersecurity, and risk and quality management. Certified applications will then be registered in a public database.

3. Applications in the fields of defense and national security, as well as in research and, in part, open-source approaches, are excluded from the AI Act.

4. Foundation models must ensure transparency. This involves technical documentation, disclosure of training data, and compliance with copyright and intellectual property rules.

5. Generative AI must identify itself as such to humans. It must be recognizable to the user that, for example, an image was created by an AI. Transparency is therefore becoming increasingly important.

6. Penalties: up to 7 percent of annual turnover, staggered according to the severity of the offense.

7. Now the standardization bodies and the standardization experts in their working groups must bring the AI Act to life. Companies such as Trustifai from Austria are ready and are confident that they will be able to carry out the first certifications as early as next year.

Tip: Applied AI has been collecting use cases for several months and assigning each one a risk classification. The database provides a good overview. Find out more here.