AI & Automation
March 17, 2026

EU AI Act Readiness: Processes, Roles, Evidence — the Reality Check

EU AI Act Readiness explained: What processes, roles, transparency and evidence must be in place by August 2, 2026

The EU AI Act is often still treated as a topic for the future. In reality, it is more of a stress test for how organizations use AI. By August 2, 2026, at the latest, it will no longer be enough to simply introduce AI as a tool. Companies must be able to demonstrate in a transparent manner how AI systems are operated: with clear labeling, documented decisions, human oversight, and clearly defined responsibilities.

What the EU AI Act Actually Regulates

The EU AI Act is a European regulation. This means it applies directly in all member states without requiring national laws to be transposed first.

The regulation follows a risk-based approach. AI systems are regulated to varying degrees depending on their risk. Four categories are central:

Prohibited Applications (Unacceptable Risk)

Some applications are strictly prohibited, such as social scoring or certain forms of emotion recognition in the workplace.

High-Risk Systems

These include AI applications in sensitive areas such as personnel decisions, lending, or critical infrastructure. These systems are subject to strict requirements regarding documentation, risk management, and human oversight.

Systems with transparency requirements

These include, among others, chatbots or generative AI systems. Users must be able to recognize that they are interacting with an AI or that content has been artificially generated.

Minimal risk

Many AI systems—such as spam filters or recommendation systems—fall into this category and are subject to very few requirements.

The key point: In practice, most companies fall somewhere between the “transparency requirement” and “high-risk” categories. And that is where the organizational challenges arise.

The key deadline: August 2, 2026

The EU AI Act does not come into full effect all at once. Its provisions take effect in stages, following the official implementation timeline.

Some prohibitions have been in effect since February 2025, and obligations for providers of general-purpose AI models have applied since August 2025.

However, the most important date for many companies is August 2, 2026. From this date onward, key requirements apply to:

  • High-risk AI systems
  • Transparency obligations under Article 50
  • Obligations to provide evidence to supervisory authorities

Actual enforcement also begins on this date. Authorities can then impose fines if organizations fail to meet their obligations.

Article 50: Transparency Requirements for AI

Article 50 of the EU AI Act concerns transparency. Certain AI systems must disclose that they use artificial intelligence. Examples include chatbots, generative text systems, synthetic images or videos, and so-called deepfakes. In such cases, it must be clearly recognizable that content or interactions were generated by AI.

In practice, this means more than just a small note in the interface. Transparency must be ensured both technically and organizationally. Organizations need clear rules regarding:

  • when content is labeled
  • how labeling is performed automatically
  • who reviews content before publication
  • how these processes are documented

Provider vs. Deployer – two roles, two responsibilities

The AI Act distinguishes between two central roles.

Providers develop an AI system themselves, or have it developed, and place it on the market or put it into service under their own name or brand. They therefore bear primary responsibility for compliance, documentation, and risk management.

Deployers use an AI system under their own authority and must organize its safe, compliant operation, including human oversight and adherence to internal guidelines.

The distinction is important: A deployer can become a provider themselves if they significantly modify a high-risk system, change its intended purpose, or operate it under their own name.

Where the EU AI Act becomes relevant in practice

Chatbots and assistance systems

Many companies already use internal or external chatbots. These often fall under transparency obligations: labeling, logging, approvals, and monitoring must be properly organized.

HR and personnel processes

AI systems used to support job applications or performance evaluations can quickly be classified as high-risk systems. In such cases, clear processes for human review and documentation are required.

Marketing and Content

Generative AI is increasingly being used for text, images, and campaigns. As soon as content is published, transparency obligations may apply. Organizations therefore need clear rules regarding when content must be labeled and how this process is technically implemented.

Knowledge Systems with RAG

Many modern AI applications use so-called Retrieval-Augmented Generation (RAG). In this process, the model accesses corporate documents to generate answers. This improves traceability and source attribution but raises new questions regarding data access, permissions, and documentation.
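The retrieval step can be illustrated with a deliberately minimal sketch. A real system would use a vector store and an LLM call; here a toy keyword-overlap ranking stands in for retrieval, and all names are hypothetical.

```python
# Minimal RAG sketch: retrieve matching company documents and build a
# prompt that cites its sources. Keyword overlap stands in for a real
# embedding-based vector search; the LLM call itself is omitted.

def retrieve(query: str, documents: dict[str, str], k: int = 2) -> list[str]:
    """Rank document ids by simple term overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc_id: len(terms & set(documents[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: dict[str, str]) -> str:
    """Assemble a prompt with the retrieved passages and their sources."""
    hits = retrieve(query, documents)
    context = "\n".join(f"[{doc_id}] {documents[doc_id]}" for doc_id in hits)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
```

Because each retrieved passage carries its document id into the prompt, answers can be traced back to a source, which is exactly where the governance questions about access rights and documentation attach.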

A Pragmatic Readiness Check

Five questions are particularly crucial:

  1. Is there a clear governance structure for AI?
  2. Are all AI systems inventoried and assessed for risk?
  3. Is there robust documentation for relevant systems?
  4. Have transparency obligations been implemented technically and organizationally?
  5. Are there monitoring, incident processes, and human oversight?
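Question 2, the system inventory, can be made tangible with a sketch of what one inventory entry might record. The four risk tiers mirror the Act's categories; the gap-check logic is a simplified assumption, and the actual classification of a system always requires a case-by-case legal assessment.

```python
from dataclasses import dataclass

# Illustrative sketch of an AI system inventory entry. Field names and
# the gap checks are assumptions for demonstration, not a legal standard.
RISK_TIERS = ("prohibited", "high-risk", "transparency", "minimal")

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str           # one of RISK_TIERS, set after assessment
    owner: str               # accountable role, not just a team name
    human_oversight: bool
    documented: bool

    def readiness_gaps(self) -> list[str]:
        """Return open readiness items for this system."""
        gaps = []
        if self.risk_tier not in RISK_TIERS:
            gaps.append("risk tier not assessed")
        if self.risk_tier == "high-risk" and not self.human_oversight:
            gaps.append("human oversight missing")
        if not self.documented:
            gaps.append("documentation missing")
        return gaps
```

Even a simple inventory like this forces the questions that matter: who owns the system, how risky is it, and what evidence exists.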

Organizations that cannot clearly answer several of these questions are usually still in the early stages of their AI Act readiness. In many cases, the problem lies not with the technology, but with a lack of governance, unclear responsibilities, or insufficient documentation.

AI Literacy: AI Competence Becomes an Organizational Requirement

The EU AI Act concerns not only technology but also competence in handling it. Organizations must ensure that employees working with AI systems have sufficient knowledge. This so-called AI literacy is explicitly part of the regulation.

This does not mean a deep technical understanding of machine learning. Rather, what is crucial is that employees understand the fundamental characteristics and limitations of AI systems. This includes, for example, understanding that AI models can make mistakes, that results must be verified, and that sensitive data must not be entered into systems without proper oversight. This makes AI literacy a formal organizational requirement for the first time. Anyone using AI must also ensure that people can handle it responsibly.

The EU AI Act and the parallels to the GDPR

For many organizations, the EU AI Act brings to mind the introduction of the GDPR. The comparison is no coincidence. In both cases, the focus is less on individual technical measures and more on demonstrable organizational structures.

The GDPR transformed data protection from a purely technical issue into a management task. Suddenly, there was a need for data protection officers, documentation, incident response processes, and clear responsibilities.

The EU AI Act follows a similar logic, only this time for artificial intelligence. Here, too, requirements for governance, risk assessment, documentation, and oversight are emerging.

In a nutshell: The GDPR focuses primarily on data. The EU AI Act focuses on decision-making processes, risks, and the transparency of AI systems.

Common Misconceptions About the EU AI Act

Several common misconceptions have already emerged regarding the EU AI Act. A frequent mistake is the assumption that the AI Act is primarily a matter for the legal department. In practice, however, it affects multiple areas simultaneously: IT, data protection, compliance, business units, and management.

A second misconception concerns the role of the technology itself. It is often assumed that a specific tool is “AI Act-compliant” or “non-compliant.” However, it is not the tool alone that is regulated, but the specific context of use.

Finally, the organizational effort involved is frequently underestimated. Many requirements of the AI Act pertain to documentation, governance, and evidence management. That is where the biggest gaps arise in practice, not with the technology itself.

Conclusion

The EU AI Act is not just a paper problem. It is a stress test for how organizations implement, manage, and account for AI. By August 2026, it won’t be enough to roll out a tool, write a few guidelines, and hope for common sense. Anyone using AI productively needs clear roles, robust processes, solid evidence, and operations that continue to function even when things go wrong.

FAQ

When does the EU AI Act take effect?

The EU AI Act is being implemented in phases. Some prohibitions have been in effect since February 2025. For many companies, however, August 2, 2026 is the decisive date. From then on, key requirements for high-risk AI and transparency obligations will take effect.

What are high-risk AI systems under the EU AI Act?

High-risk AI encompasses applications that can have a particularly significant impact on people. These include systems used in human resources, credit decisions, education, or critical infrastructure. These systems are subject to stricter requirements regarding documentation, risk management, and human oversight.

What penalties apply for violations of the EU AI Act?

The EU AI Act provides for substantial fines. Depending on the violation, penalties of up to 35 million euros or 7% of global annual turnover may be imposed. Particularly high penalties apply to prohibited AI applications.

Does the EU AI Act also apply to generative AI such as ChatGPT?

Yes. Generative AI systems are specifically subject to the transparency obligations of the EU AI Act. Users must be able to recognize when content or interactions have been generated by AI, such as with chatbots or AI-generated images.

What does Article 50 of the EU AI Act regulate?

Article 50 of the EU AI Act sets out transparency obligations for certain AI systems. When users interact with an AI or when content has been generated by AI, this must be clearly recognizable. This includes, for example, chatbots, generative text systems, or AI-generated images and videos.

Less manual, more automated?

Let's arrange an initial consultation to identify your greatest needs and explore potential areas for optimisation.

To achieve the best possible results, we limit the number of companies we work with to a maximum of six per quarter.

FAQ

Your questions, our answers

What does bakedwith actually do?

bakedwith is a boutique agency specialising in automation and AI. We help companies reduce manual work, simplify processes and save time by creating smart, scalable workflows.

Who is bakedwith suitable for?

For teams ready to work more efficiently. Our customers come from a range of areas, including marketing, sales, HR and operations, spanning from start-ups to medium-sized enterprises.

How does a project with you work?

First, we analyse your processes and identify automation potential. Then, we develop customised workflows. This is followed by implementation, training and optimisation.

What does it cost to work with bakedwith?

As every company is different, we don't offer flat rates. First, we analyse your processes. Then, based on this analysis, we develop a clear roadmap including the required effort and budget.

What tools do you use?

We adopt a tool-agnostic approach and adapt to your existing systems and processes. It's not the tool that matters to us, but the process behind it. We integrate the solution that best fits your setup, whether it's Make, n8n, Notion, HubSpot, Pipedrive or Airtable. When it comes to intelligent workflows, text generation, or decision automation, we also use OpenAI, ChatGPT, Claude, ElevenLabs, and other specialised AI systems.

Why bakedwith and not another agency?

We come from a practical background ourselves: founders, marketers, and builders. This is precisely why we combine entrepreneurial thinking with technical skills to develop automations that help teams to progress.

Do you have any questions? Get in touch with us!