"It's been a hot topic in recent months. As the use of new tools has become more widespread, so too have the threats and how often they appear," says Ivana Kočíková, a security expert at Aricoma who specialises in this emerging field, including the related legislation. The European Union is the first in the world to introduce comprehensive regulation of artificial intelligence, the AI Act, which sets out rules, obligations and requirements for the development, deployment and use of AI systems.
New opportunities, new risks
"We already know that companies and organisations approach AI tools in different ways. Some deploy them quickly and without in-depth analysis of the impacts, while others are more cautious. Those of us who work in the field of cyber security add context to this – in recent months, we have seen an increase in the misuse of artificial intelligence in targeted fraud, phishing campaigns and automated attacks. There has also been a significant increase in deepfake attacks, in some cases by hundreds to thousands of percent. Just as ordinary users have learned to use these new possibilities effectively, attackers are now able to deploy them with increasing sophistication," warns Kočíková.
This is why the AI Act brings new obligations for some organisations, for example in the areas of risk management, data management and cyber security. In the Czech Republic, the Ministry of Industry and Trade plays a key role in coordinating this European regulation and is currently preparing follow-up rules and procedures.
From February 2025, a ban on certain unacceptable practices, such as manipulative techniques that exploit people's vulnerabilities, will apply throughout the EU, and further legal barriers will be put in place.
Among other things, artificial intelligence systems will be classified by risk level: prohibited, high risk, limited risk and minimal risk. Depending on which category the systems they use fall into, companies and institutions will be subject to specific obligations and required measures.
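The tiered logic described above can be sketched in code. The tier names come from the article; the obligations listed per tier are an illustrative assumption, not an exhaustive reading of the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels named in the AI Act's classification."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping only -- the actual obligations per tier
# are defined in the regulation itself and follow-up guidance.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["must not be deployed"],
    RiskTier.HIGH: ["risk management", "data governance",
                    "record-keeping", "human oversight"],
    RiskTier.LIMITED: ["transparency towards users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list:
    """Return the example obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of such a mapping in practice is that an organisation first has to know which tier each of its systems falls into before it can know which duties apply.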
Understanding as the best defence
"Some people are concerned about all the new things they will have to comply with and set up because of the new rules. But the question of what we actually use at work and how we secure it is important in itself, regardless of whether legal regulations exist or not," says Ivana Kočíková.
According to her, the main difference is often whether the tools are used in a coordinated and conscious manner within the framework of company rules, or individually, without knowledge of the possible impacts. That is why it is important to focus on education right now. "Not only about what the AI Act says, but mainly about how artificial intelligence tools work, what situations can arise in practice and what risks arise from them. If people understand the context, they can make more responsible and safer decisions," the security expert points out. In short, the importance of critical thinking will only grow in the future.
No control without context: why monitor AI system inputs
Some of the obligations defined by the AI Act are already familiar to companies and institutions in strategic sectors. For them, mapping and classifying the tools in use will be fundamental, and risk analysis will not be new to many. Systems using artificial intelligence, however, carry an additional requirement: the ability to retrospectively document what data the system worked with.
"If we start using such a system and it begins generating bad decisions for some reason, it must be possible to trace back what inputs were used and when. Deliberate manipulation using incorrect or harmful inputs can also be a form of attack. That is why it is important to record not only what output the model generated, but also the context in which it made its decisions. For high-risk systems used in banking, healthcare or energy, this can have a major impact," explains Ivana Kočíková, adding that this time we are in a relatively good starting position compared with previous technological shifts. Now is the right time to set basic rules so that unnecessary complications do not have to be dealt with later.
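The traceability idea described here — recording inputs, context and outputs so that bad decisions can be traced back later — can be sketched as a minimal audit log. All names and the credit-scoring scenario are hypothetical illustrations, not part of any specific product or of the regulation's text:

```python
import json
import hashlib
from datetime import datetime, timezone

class DecisionAuditLog:
    """Append-only record of model inputs, context and outputs.

    A minimal sketch: each decision is stored with a timestamp,
    a model version and a hash of its inputs, so that problematic
    outputs can later be traced back to the data that produced them.
    """

    def __init__(self):
        self.entries = []

    def record(self, inputs: dict, output, model_version: str) -> dict:
        # Hash a canonical serialisation of the inputs so tampering
        # with the stored record can be detected later.
        serialized = json.dumps(inputs, sort_keys=True)
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "input_hash": hashlib.sha256(serialized.encode()).hexdigest(),
            "output": output,
        }
        self.entries.append(entry)
        return entry

    def trace(self, predicate) -> list:
        """Find past decisions matching a condition, e.g. a bad output."""
        return [e for e in self.entries if predicate(e)]

# Usage: log two hypothetical credit-scoring decisions, then trace
# back every rejection to the exact inputs that produced it.
log = DecisionAuditLog()
log.record({"income": 52000, "age": 41}, "approved", "scoring-v1.2")
log.record({"income": 9000, "age": 19}, "rejected", "scoring-v1.2")
rejected = log.trace(lambda e: e["output"] == "rejected")
```

A real deployment would persist such records in tamper-evident storage rather than in memory; the sketch only shows the shape of the requirement — output plus the context in which the decision was made.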