AI & Automation
January 19, 2026

AI and data protection — what German companies should pay attention to when it comes to US tools

AI and data protection are important topics for German companies: GDPR, the AI Act, US tools, risks, and obligations, explained in an understandable way.


More and more companies are using artificial intelligence to support various business processes, whether in the form of assistants such as ChatGPT or as part of specific departmental software. According to figures from the Federal Statistical Office, these technologies were in use at one in five German companies in 2024, an increase of a substantial 8 percentage points over the previous year.

However, AI and data protection often remain a big question mark, not least because many of these tools originate in the US, where different standards apply to the handling of data than in the EU. Added to this are concerns in connection with the EU AI Regulation. Perhaps you too are unsure about what you need to bear in mind if you want to integrate such applications into your workflows in a legally compliant manner. This article sheds some light on the matter.

Disclaimer: Before we dive deeper, please note that the information provided here cannot replace individual legal advice and should be understood solely as an initial guide. If you use AI in a data protection-sensitive area, you should consult a qualified legal expert. AI consulting from an AI agency is also advisable.

Why is data protection such an important issue in AI?

At first glance, artificial intelligence can quickly appear to be purely an efficiency tool. It takes over routines, speeds up processes, and helps with creative bottlenecks. However, it is very often used to analyze large amounts of data – and this is precisely where AI and data protection become particularly relevant. Caution is especially important where personal information is deliberately processed. But bear in mind that data also plays a role beyond this specific use case, because it forms the basis of every AI application.

Basically, AI does not work without information. Systems based on machine learning improve their results through the continuous evaluation of large data sets. The more comprehensive and accurate this data is, the more powerful the system becomes. In practice, however, these data sets often contain personal information, i.e., information that can be traced back to an identifiable person, without this being clearly indicated.

This lack of transparency in many AI systems also extends to other areas. For example, the internal decision-making processes of modern algorithms are often difficult to understand. This makes it difficult for companies to assess how data is processed or reused. If unbalanced or incorrect training information is added, the risk of distorted or discriminatory results increases. In particular, when you use artificial intelligence to write texts, this can also lead to copyright issues or unintended similarities.

Combined with the fact that international AI tools are often developed according to legal requirements that do not fully match European data protection standards, this creates a high level of sensitivity. This combination of factors makes AI and data protection a particularly important issue for companies.

The GDPR as the main point of reference for proper data protection and AI in companies

If you use AI software from providers outside the EU, the same legal requirements apply as for other international business software: the General Data Protection Regulation (GDPR) is decisive. It applies whenever personal information of people in the EU is processed, regardless of where the provider of the tool is located.

The key difference lies in the scope of the review. Due to the special functioning of artificial intelligence, the GDPR requires a much more precise, documented, and reflective application of its rules in this area. The explicit aim is not to slow down innovation, but to make risks controllable.

A key element is the question of the legal basis on which data is processed. The GDPR only allows this if there is a clear reason or legitimate interest – in other words, you cannot collect information arbitrarily. In this context, the express consent of the persons concerned is usually decisive. You are already familiar with this logic from other digital applications, but in the case of AI, it applies in several areas at once.

The data protection principles set out in Article 5 of the GDPR are particularly relevant. This article stipulates that data must be processed lawfully, transparently, and for a specific purpose. It also requires that information be factually correct and only used to the extent that is actually necessary. It is precisely these points that often present companies with new and unexpected challenges when using AI.

The sticking point: personal data

The performance of AI systems depends heavily on the quality and quantity of the data used. In many use cases, this data consists of personal information that is protected by the GDPR. This applies not only to obvious facts such as names or email addresses, but also to indirect data such as user behavior, text entries, or metadata.

Personal data can be processed throughout the entire life cycle of an AI system. This begins with the collection of training data, extends to the actual learning process, and continues through to use in ongoing operations, such as the evaluation of inputs and outputs. Each of these processing operations requires a legal basis and must be documented in a traceable manner.
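
To make this documentation requirement more tangible, here is a minimal sketch, assuming a simple Python-based internal tool, of how such processing steps could be recorded per lifecycle stage. The structure and field names are our own illustration, not an official GDPR template:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative structure for documenting AI-related processing operations;
# the field names are our own suggestion, not an official GDPR template.
@dataclass
class ProcessingRecord:
    lifecycle_stage: str   # e.g. "training data collection", "model training", "inference"
    data_categories: list  # e.g. ["name", "email", "usage logs"]
    legal_basis: str       # e.g. "consent (Art. 6(1)(a) GDPR)"
    purpose: str           # the clearly defined purpose of this processing step
    documented_on: date = field(default_factory=date.today)

# Example: documenting the inference stage of a support chatbot
record = ProcessingRecord(
    lifecycle_stage="inference",
    data_categories=["prompt text", "customer ID"],
    legal_basis="legitimate interest (Art. 6(1)(f) GDPR)",
    purpose="answering customer support requests",
)
print(f"{record.documented_on}: {record.lifecycle_stage} -> "
      f"{record.purpose} ({record.legal_basis})")
```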

In addition to legality, other principles play an important role. The requirement for transparency demands that data subjects be able to understand what happens to their data. The purpose limitation principle stipulates that data may only be used for clearly defined purposes. In addition, there are requirements for data accuracy and the obligation to minimize data, i.e., to limit it to what is really necessary (more on this in a moment).

Taking all of this into account is not easy even when using standard tools – but AI takes the whole thing to the extreme due to its complexity.

The GDPR also applies to US providers – but with additional challenges

As soon as a company in the European Union processes personal data or has it processed by third parties, the GDPR applies. It does not matter whether the provider of the software used is based in Germany, Europe, or the US. This basic rule applies to classic cloud software as well as to modern AI systems.

In practice, however, the crucial difference arises when data is processed outside the EU or made accessible from there. Many AI tools from the US store, analyze, or maintain data in so-called third countries. It is precisely this point that makes the use of such applications challenging in terms of data protection law.

In legal terms, this is referred to as data transfer to a third country. The GDPR regulates such cases in Articles 44 ff. These provisions are intended to ensure that personal information retains a comparable level of protection outside the EU. For companies, this means increased auditing efforts.

A key instrument is the standard contractual clauses, often referred to as SCCs. These are contract templates specified by the EU that oblige providers to comply with European data protection standards. For US providers, these clauses are usually mandatory. However, they are not sufficient on their own.

In addition, companies must check whether national laws in the third country conflict with data protection. In the US, this primarily concerns government access to data. The GDPR requires a realistic assessment of potential risks in this regard. On this basis, additional protective measures must be implemented, such as encryption, anonymization, or clear access restrictions.
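
As one concrete example of such an additional protective measure, the following minimal sketch shows client-side encryption with the Python cryptography package, so that a third-country service only ever sees ciphertext. Note the caveat: this only works for storage and transfer; AI functions that need to read the content require plaintext, so this measure does not fit every use case.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is generated and kept inside the EU (in practice: in a managed
# key store), so data stored in a third country remains unreadable there.
key = Fernet.generate_key()
cipher = Fernet(key)

record = "Customer: Erika Mustermann, erika@example.com".encode("utf-8")
ciphertext = cipher.encrypt(record)  # safe to store with a US provider
plaintext = cipher.decrypt(ciphertext).decode("utf-8")  # only with the EU-held key
```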

With AI tools, this assessment is often more difficult than with traditional software. Training data, log data, or support access are not always clearly defined. It often remains unclear for what purposes the information entered will be used in the long term.

AI increases typical GDPR risks

Compared to conventional software, solutions with artificial intelligence bring additional data protection issues. These arise less from the technology itself than from its flexible, learning-based mode of operation.

A key issue is purpose limitation. The GDPR not only requires a legal basis such as a legitimate interest in data processing, but also that information be used only for predefined and clearly described purposes. However, AI systems benefit from versatile data sets. This becomes problematic when text entries or usage data are processed for training models or for general product improvement. Without clear boundaries, this may violate the original purpose definition.

Data minimization also poses challenges for companies. Many AI applications collect more information than is necessary for their actual function. This includes complete prompts, usage profiles, or technical metadata. The GDPR requires a conscious restriction to the necessary minimum.
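
To illustrate what data minimization can look like in practice, here is a deliberately naive sketch that removes two obvious types of identifiers from a prompt before it leaves the company. The regex patterns are only meant to show the principle; a real deployment would need far more robust PII detection:

```python
import re

# Two naive patterns for obvious identifiers; real systems would use
# dedicated PII detection (e.g. NER models) instead of simple regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def minimize(prompt: str) -> str:
    """Replace recognizable identifiers before the prompt is sent to an AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(minimize("Please summarize the email from max@firma.de (+49 170 1234567)."))
# -> "Please summarize the email from [email removed] ([phone removed])."
```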

Another critical issue concerns transparency. Companies must be able to explain what data is being processed, how the system handles it, and whether decisions are made automatically. Especially with complex AI models, this traceability is limited. Black box approaches make it difficult to provide understandable information.

The situation becomes particularly sensitive when it comes to automated decisions. Article 22 of the GDPR protects individuals from being evaluated or disadvantaged solely on the basis of automated processes. If AI is used in scoring procedures, pre-selection processes, or performance evaluations, for example, additional obligations arise. These include information rights, opportunities for human intervention, and increased documentation requirements.
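
One conceivable technical safeguard for such cases is a gate that never lets a significant automated decision take effect without human intervention. The following minimal sketch (with hypothetical class and field names) illustrates the idea:

```python
from dataclasses import dataclass

# Hypothetical Article 22 safeguard: AI recommendations with significant
# effects on a person are queued for human review instead of auto-applied.
@dataclass
class AiRecommendation:
    subject: str             # the person concerned
    decision: str            # e.g. "reject application"
    significant_effect: bool

def route(rec: AiRecommendation) -> str:
    if rec.significant_effect:
        # mandatory human intervention before the decision takes effect
        return f"queued for human review: {rec.decision} ({rec.subject})"
    return f"auto-applied: {rec.decision} ({rec.subject})"

print(route(AiRecommendation("applicant #4711", "reject application", True)))
```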

Other important legal obligations – from copyright to advertising guidelines

In addition to the GDPR, a whole host of other laws may be relevant when using AI. These include the Copyright Act, for example when protected works are used as data for training an AI system or AI results are to be exploited commercially. Companies must check whether they have rights of use and how content may be further processed.

In heavily regulated industries, the requirements are even more stringent. In the healthcare sector, the Medical Device Regulation (MDR) and the European Health Data Space (EHDS) also play a role. AI software can be classified as a medical product, which entails special testing and approval requirements. The EHDS regulates the handling of health data across Europe.

Clear rules also apply to customer acquisition. If AI is used in connection with advertising, the Unfair Competition Act (UWG) must be observed. Misleading statements, hidden advertising, or unlabeled content generated by marketing automation can have legal consequences.

What does the EU AI Regulation mean for artificial intelligence and data protection in companies?

Many people think that the EU AI Act (EU Artificial Intelligence Act) is only relevant for developers of AI software, but that is not entirely true. With the AI Regulation, the European Union has created the first comprehensive legal framework for artificial intelligence, which also affects companies that merely use AI.

But does this have an impact on data protection? Not directly, but since the AI Act is also about making artificial intelligence and the associated data processing legally compliant and secure, the two areas are very closely linked. This is probably why the AI Regulation is so often mentioned in direct connection with AI and data protection.

The AI Act is primarily a technology-specific regulation with a product safety law character. It was published in the Official Journal of the EU on July 12, 2024, and came into force on August 1, 2024. It applies in stages; the first provisions have applied since February 2, 2025.

The aim of the regulation is to build trust in AI, enable innovation, and protect fundamental rights. To this end, the legislator is pursuing a risk-based approach. The higher the potential risks to human rights and security, the stricter the obligations. The focus is on so-called high-risk AI systems. These can include tools for applicant selection, performance evaluation, credit checks, or for organizing access to education or services. This is where specific operator obligations regulated by the AI Act come into play, but these are not directly related to data protection.

The following distinction is important for classification in the context of AI and data protection:

• AI Act = product and risk regulation

• GDPR = data protection law

For you as a user, this means:

• The GDPR determines how data processing is handled in your systems.

• The AI Act regulates the use, control, and responsibility of AI (where data is, of course, always involved).

Checklist: What you should always pay attention to for the data protection-compliant use of US AI software

Clear guidelines are needed to ensure that AI and data protection are compatible in everyday business life. The following points serve as initial pointers to help you identify typical risks and contain them in a structured manner.

• Clarify the legal basis: For each instance of personal data processing, check the basis on which it is carried out. Define the legitimate interest or another legally secure basis.

• Clearly define purposes: Specify what data may be used for. You should never automatically reuse (or allow the reuse of) text entries, usage information, or analysis results for other purposes.

• Limit the scope of data: Reduce inputs and stored information to the bare minimum. Less data reduces risks and makes it easier to comply with the GDPR.

• Ensure transparency: Document in an understandable way what information is collected, how the AI system works, and whether automated decisions are made. These facts must be traceable internally.

• Check third-country transfers: With US tools, it is particularly important to clarify whether data is processed or accessible outside the EU. Standard contractual clauses are mandatory, and additional protective measures are often required; a structured version of this check is sketched after this list.

• Ensure technical security: Encryption, access restrictions, and anonymization reduce the risk of unauthorized access. This is highly relevant to the GDPR.

• Secure automated decisions: As soon as AI delivers results that affect people, very close control is necessary. Individuals must not be evaluated solely by machines.

• Take documentation seriously: Keep written records of audits, decisions, and protective measures. This evidence is essential in case questions or checks from public authorities follow.

• Keep an eye on other laws: In addition to the GDPR, the German Federal Data Protection Act (BDSG), copyright law, the Medical Device Regulation, and, in the future, the European Health Data Space may also be relevant.

• Consider obligations under the AI Act: Check whether your use of AI is classified as low or high risk and implement the appropriate requirements for users.
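
To make the third-country check mentioned above more tangible, here is a simplified sketch of how such a vendor review could be structured. The provider name and the criteria are illustrative only and by no means exhaustive:

```python
# Simplified vendor check for third-country transfers; illustrative only.
vendor = {
    "name": "ExampleAI Inc.",          # hypothetical US provider
    "processing_outside_eu": True,     # data stored or accessible in the US?
    "sccs_signed": True,               # standard contractual clauses in place?
    "encryption_at_rest": True,
    "inputs_used_for_training": True,  # contractually excluded or not?
}

issues = []
if vendor["processing_outside_eu"] and not vendor["sccs_signed"]:
    issues.append("third-country transfer without SCCs")
if not vendor["encryption_at_rest"]:
    issues.append("no encryption at rest documented")
if vendor["inputs_used_for_training"]:
    issues.append("inputs reused for training: check purpose limitation")

print(issues or "no open issues found in this (simplified) check")
# -> ['inputs reused for training: check purpose limitation']
```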

Conclusion

With this article, we have, of course, only been able to scratch the surface of the topic of AI and data protection. The legal and technical contexts are complex and closely linked to the specific use of the relevant tools. However, one thing should be clear: the GDPR remains the central benchmark for the handling of personal data, and in the context of AI tools from the US, it must be applied even more rigorously than before. Since the topic is difficult to grasp, you should consider legal support and expert AI consulting for the integration of artificial intelligence not as an optional extra, but as an important part of responsible, legally compliant implementation.

FAQ

Why are AI and data protection so important for companies?

Because AI relies on data and often processes personal information. Without clear rules, there are significant legal risks.

Does AI violate the GDPR?

No, not fundamentally. AI is permitted as long as the requirements of the GDPR are met. The decisive factors are ensuring the legal basis, a specific purpose limitation, transparency, and the implementation of appropriate safeguards.

Is the EU AI Act important for AI and data protection?

Yes, because both sets of regulations are intertwined. The GDPR determines how personal data may be processed. The EU AI Act specifies the conditions under which AI must be used, monitored, and controlled. This means that companies have a duty to consider data protection and AI risks together, especially in the case of (highly) sensitive applications.

