How does your company work with AI? Now is the time to map it out, and not just because of the law

2. 7. 2025

What AI tools do you use? Do you know which tools your colleagues use? Do you know where the data you put into them ends up? To be clear, this article is not meant to scare you. But we also know that when managers are asked those three questions, many of them become uncertain. And because the use of artificial intelligence will only continue to grow, now is the right time to set boundaries so that a great tool does not become a security threat.

"It's been a hot topic in recent months. As the use of new tools has become more widespread, so have the threats, and how often they appear," says Ivana Kočíková, a security expert at Aricoma who specialises in this emerging field, including the related legislation. The European Union is the first in the world to introduce comprehensive regulation of artificial intelligence, the AI Act. It sets out rules, obligations and requirements for the development, deployment and use of AI systems.

New opportunities, new risks

"We already know that companies and organisations approach AI tools in different ways. Some deploy them quickly and without in-depth analysis of the impacts, while others are more cautious. Those of us who work in the field of cyber security add context to this – in recent months, we have seen an increase in the misuse of artificial intelligence in targeted fraud, phishing campaigns and automated attacks. There has also been a significant increase in deepfake attacks, in some cases by hundreds to thousands of percent. Just as ordinary users have learned to use these new possibilities effectively, attackers are now able to deploy them with increasing sophistication," warns Kočíková.

This is why the AI Act brings new obligations for some organisations, for example in the areas of risk management, data management and cyber security. In the Czech Republic, the Ministry of Industry and Trade plays a key role in coordinating this European regulation and is currently preparing follow-up rules and procedures.

Since February 2025, a ban on certain unacceptable practices, such as manipulative techniques that exploit people's vulnerabilities, has applied throughout the EU, and further legal barriers will follow.

Among other things, artificial intelligence systems are classified according to their level of risk: prohibited, high risk, limited risk and minimal risk. Depending on which category the systems they use fall into, companies and institutions will be subject to specific obligations and required measures.
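The four risk tiers above can be thought of as a simple mapping from category to obligations. A minimal sketch in Python follows; the tier names come from the article, but the obligation summaries are illustrative shorthand only, not a legal reference:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk categories named in the AI Act, as listed above."""
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Illustrative summaries only -- consult the AI Act itself for the
# actual obligations attached to each category.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "may not be deployed at all",
    RiskTier.HIGH: "risk management, data governance, logging, human oversight",
    RiskTier.LIMITED: "transparency duties, e.g. disclosing AI-generated content",
    RiskTier.MINIMAL: "no specific obligations beyond general law",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the (illustrative) obligations for a given tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The point of such a mapping is simply that the first step of compliance is classification: until an organisation knows which tier its systems fall into, it cannot know which obligations apply.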

Understanding as the best defence

"Some people are concerned about all the new things they will have to comply with and set up because of the new rules. But the question of what we actually use at work and how we secure it is important in itself, regardless of whether legal regulations exist or not," describes Ivana Kočíková.

According to her, the main difference is often whether the tools are used in a coordinated and conscious manner within the framework of company rules, or individually, without knowledge of the possible impacts. That is why it is important to focus on education right now. "Not only about what the AI Act says, but mainly about how artificial intelligence tools work, what situations can arise in practice and what risks arise from them. If people understand the context, they can make more responsible and safer decisions," the security expert points out. In short, the importance of critical thinking will only grow in the future.

No control without context: why monitor AI system inputs

Some of the obligations defined by the AI Act are already familiar to companies and institutions in strategic sectors. Mapping and classifying their tools will be fundamental, and risk analysis will not be new to many of them. Systems using artificial intelligence, however, carry an additional requirement: the ability to retrospectively document what data the system worked with.

"If we start using it and it starts generating bad decisions for some reason, it must be possible to trace back what inputs were used and when this happened. Deliberately feeding the system incorrect or harmful inputs can also be a form of attack. That is why it is important to record not only what output the model generated, but also the context in which it made its decisions. For high-risk systems used in banking, healthcare or energy, this can have a major impact," explains Ivana Kočíková, noting that this time, compared with other technological changes, we are in a relatively good starting position. Now is the right time to set basic rules so that unnecessary complications do not have to be dealt with in the future.
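To illustrate the traceability requirement described above, here is a minimal sketch of an audit log that records each model interaction so that inputs can be traced retrospectively. All names are hypothetical, and the AI Act does not prescribe any particular format; a real deployment would write to durable, tamper-evident storage rather than an in-memory list:

```python
import hashlib
from datetime import datetime, timezone

class AuditLog:
    """Append-only record of model inputs and outputs (illustrative sketch)."""

    def __init__(self, model_version: str):
        self.model_version = model_version
        self.entries = []

    def record(self, prompt: str, output: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            # A hash of the raw input lets us detect later tampering
            # without necessarily storing sensitive data verbatim.
            "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "prompt": prompt,
            "output": output,
        }
        self.entries.append(entry)
        return entry

    def trace(self, output_fragment: str) -> list:
        """Find which recorded inputs led to outputs containing a fragment."""
        return [e for e in self.entries if output_fragment in e["output"]]

log = AuditLog(model_version="demo-1.0")
log.record("What is the credit limit for client X?", "Approved: 10000")
suspicious = log.trace("Approved")
print(suspicious[0]["prompt"])
```

The `trace` method shows the point of the exercise: when a system "starts generating bad decisions", the organisation can work backwards from a suspicious output to the exact inputs, timestamps and model version involved.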
