February 4, 2026

Plan, test and use artificial intelligence in marketing and PR in a legally compliant manner.

AI is revolutionizing communication — but without clear rules, there is a risk of fines and reputational damage

Artificial intelligence (AI) is rapidly changing marketing and PR. Whether in text creation, image generation or data analysis, AI tools provide enormous efficiency gains and creative freedom.

But as opportunities grow, risks also increase: data protection, copyright and new EU requirements make the use of AI a legal challenge.

Especially in public relations and marketing, where trust, transparency and adherence to facts are decisive, careless use of AI can quickly become expensive. Incorrectly generated content (“hallucinations”), unclear data sources or unlabeled deepfakes jeopardize not only your company's credibility but also its legal security.

This checklist shows in a practical way what marketing and communication teams must pay attention to in order to use AI responsibly and in accordance with the law.

Checklist: Legally compliant use of AI in marketing and PR

1. Clarify responsibilities and approvals

Before AI tools are used, responsibilities must be clearly defined.

  • Employees: Employees generally need their employer's approval before they can use an AI tool.
  • Agencies: Agencies and external partners must clarify whether the client allows the use of AI. Exception: purely auxiliary functions such as spelling correction or layout visualizations. Even then, an internal agreement on use is recommended.

Important: In both cases, the approval should be documented in writing.

2. Avoid banned AI applications

The EU AI Regulation (Art. 5) prohibits systems that deceive or manipulate people in ways that impair their ability to make informed decisions (Art. 5 para. 1 lit. a and b):

Examples include chatbots that broker credit agreements and

  • subliminally influence or intentionally manipulate users, or
  • exploit people's need for protection, e.g. due to young or old age or an economically weak situation.

Marketing managers should check whether AI systems exploit emotional weaknesses and must strictly rule out such practices. Violations could result in fines of up to 7% of annual turnover or up to 35 million euros, whichever is higher (Art. 99 EU AI Regulation).

3. Use high-risk AI only with controls

Certain applications are considered high-risk AI (Art. 6 EU AI Regulation) and require certification of the systems used, documentation and human control. This includes, for example, systems that make personnel decisions or analyze biometric data. The last point in particular can be relevant in marketing, e.g.:

  • Face recognition at events or at the point of sale
  • Voice analysis or body language evaluation as a basis for addressing potential customers or users

If high-risk AI systems are used without compliance with the EU AI Regulation, there is a risk of fines of up to 15 million euros or 3% of annual turnover, whichever is higher (Art. 99 EU AI Regulation).

4. Maintain data protection and security

When AI processes personal data, data protection plays an essential role. Pay particular attention to the following points:

4.1. Applicability of the GDPR

  • Personal data includes names, email addresses, customer numbers, IP addresses, telephone numbers, and the content of customer accounts.
  • AI may only process personal data after prior review and express approval (a simple automated pre-check is sketched below).
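
Such a review can be supported, though never replaced, by a simple automated pre-check. The following is a minimal Python sketch, assuming a regex scan for obvious patterns such as email addresses and phone numbers; the patterns, function name and example prompt are illustrative, and subtler personal data (names, customer numbers) will not be caught:

    # Minimal sketch: flag obvious personal data in a prompt before it is
    # sent to an external AI tool. A regex scan only catches simple patterns
    # and does not replace the legally required review and approval.
    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\+?\d[\d /()\-]{6,}\d"),
    }

    def find_personal_data(prompt: str) -> dict:
        """Return all matches per category so a human can review them."""
        hits = {}
        for name, pattern in PATTERNS.items():
            matches = pattern.findall(prompt)
            if matches:
                hits[name] = matches
        return hits

    hits = find_personal_data("Reply to max.mustermann@example.com, +49 30 1234567.")
    if hits:
        print("Human review required before sending:", hits)

A hit should route the prompt to a human reviewer rather than silently block or strip the data.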

4.2. Legal bases for data processing

  • Consent: Required if, for example, photos are to be edited or customer or citizen data are to be evaluated.
  • Legitimate interests: Possible in certain cases, e.g. when using publicly available data of adults, who retain a right to object. Whether these requirements are met must be checked by a specialist.

4.3. Contracts with external AI providers

  • Data processing agreements (German: AVV) are required.
  • Necessary if the provider processes data exclusively on behalf of the client and does not use it for its own purposes.
  • Note: Many AI providers only offer a data processing agreement as part of paid plans.

4.4. AI providers outside the EU (third countries)

An adequate level of data protection must be ensured through one of the following:

  • adequacy decision (e.g. Switzerland, Canada, UK), or
  • standard contractual clauses (SCC) in the absence of an adequacy decision, or
  • the EU-US Data Privacy Framework for certified providers in the USA (e.g. Google or Microsoft).

4.5. Data protection notices

  • All AI service providers used must be named in the privacy policy.
  • The respective purpose of the processing must be clearly described.

4.6. Liability

  • As a rule, users of AI systems remain legally responsible, even when using external providers.
  • Clarify in advance whether and to what extent the provider is liable for its own errors.

4.7. Attention: Personal liability

Managing and executive employees may be personally liable if:

  • there are no effective compliance rules for the use of AI and advertising measures,
  • employees are not trained, or
  • compliance with the requirements is not adequately monitored.

Violation of the requirements of the GDPR could result in fines of up to 20 million euros or up to 4% of the worldwide annual turnover of the previous financial year, whichever is higher.

5. Protect trade secrets

Not only personal data but also business and trade secrets must be checked before they are made available to an AI. The following points should be considered:

  • Trade secrets: Trade secrets include confidential company information, such as marketing strategies, budget and financial data, product or campaign concepts, internal analyses, and technical or organizational procedures.
  • Service secrets: Service secrets are information that has been obtained in the course of professional activity or public service and whose unauthorised disclosure could damage the interests of the employer or institution.
  • Only with consent: Business and service secrets may not be entered into external AI systems without the consent of the owners of the trade or service secrets. Many external AI tools store user input for the purpose of training the AI models. As a result, there is a risk that sensitive content may be unintentionally processed or disclosed.

If an AI can memorize and reproduce trade secrets, the owner loses trade secret protection. If a third party's trade secret is affected, damages and contractual penalties may be due. In the case of service secrets, there is a risk of disciplinary consequences. Before each entry, it should therefore be checked whether the content has been approved for external use.

6. Copyright protection of AI content and prompts

Texts, images, or videos generated by AI are not protected by copyright, as only human-made works are protected. Even complex prompts don't change that. AI results can therefore be copied freely, and others can also adopt your AI content without consent. The important thing is:

  • Content contracts: The use of AI must be expressly permitted by contract, as AI results can be freely copied due to lack of protection.
  • No legal protection for prompts: Copyright protection requires individual, personal creations that stand out from the everyday norm. Prompts are therefore usually not protected by copyright, as they typically consist of factual or instructive wording. Only exceptionally creative and complex prompts could be protected. In addition, complex and business-relevant prompts can be protected as trade secrets.

7. Copyright infringement due to AI results

AI-generated content rarely infringes third-party copyrights, unless you deliberately provoke such infringements.

  • Basically safe: Modern AI works with abstract knowledge patterns and does not store any specific works of art.
  • AI copies: Risks only arise when the AI is specifically instructed to copy or closely imitate protected works, e.g. “Create a graphic that looks like my original” or “Create an image with Mickey Mouse.”
  • Styles: Imitating styles or general design principles is allowed because styles are not protected, e.g. “create a graphic/text in the style of...” However, you have to be careful that the results do not copy other people's works.

In the event of an infringement of third-party copyrights, there is usually a risk of warnings with demands for injunctive relief and the obligation to pay the resulting warning costs. In addition, claims for damages can be asserted, for example in the form of fictitious license fees or, in the event of violations of personal rights, including intangible damage.

However, the risk of unintentional infringements can be further reduced by reverse image searches and plagiarism checks.

8. Third-party content as AI templates

Third-party texts or images are often fed into AI tools in order to generate modified content on that basis. This is generally permitted, provided certain conditions are met.

  • Permitted text and data mining: The copyright exception under Section 44b UrhG allows the reading of third-party content if this serves only to obtain facts, patterns, styles or ideas and the originals are then deleted.
  • Reservation of use as a prohibition: Authors can prohibit text and data mining through a machine-readable reservation of use, e.g. “Use for text and data mining is reserved” in the legal notice of a website or book. This must be clearly visible, for example in the legal notice, in terms of use or in the website code. The burden of proof that there was no reservation of use lies with the user of the AI and may result in significant documentation requirements.

In practice, automatic AI crawlers in particular respect such reservations of use. When users upload content manually, however, the reservations are often overlooked or ignored. Although this makes violations difficult to prove, they are by no means ruled out. Employees should therefore never upload third-party copyrighted content to AI systems without the express permission of their employer.
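
For manual use, teams can at least partially automate this check. Below is a minimal Python sketch, assuming the reservation is expressed via robots.txt rules for known AI crawlers (the user-agent names and URLs are illustrative); reservations declared only in a legal notice or in terms of use are not detected this way, so a human check remains necessary:

    # Minimal sketch: check a site's robots.txt before feeding its content
    # into an AI tool, mirroring what well-behaved AI crawlers do.
    # Note: this detects only robots.txt opt-outs, not reservations declared
    # in a legal notice or terms of use.
    from urllib import robotparser

    # Example AI crawler user agents (illustrative; verify current names)
    AI_USER_AGENTS = ["GPTBot", "CCBot", "Google-Extended"]

    def tdm_allowed(page_url: str, robots_url: str) -> bool:
        """Return True only if no listed AI user agent is disallowed."""
        parser = robotparser.RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # fetch and parse robots.txt
        return all(parser.can_fetch(agent, page_url) for agent in AI_USER_AGENTS)

    if tdm_allowed("https://example.com/article.html",
                   "https://example.com/robots.txt"):
        print("No robots.txt opt-out found; document this check.")
    else:
        print("Machine-readable reservation found: do not use as an AI template.")

Logging the result of each check also helps with the documentation requirements mentioned above, since the burden of proof lies with the user of the AI.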

9. Violations of personal rights due to AI results

AI content can infringe personal rights, in particular the right to one's own image or voice. There is usually no labeling requirement as long as the AI creates completely fictitious people.

  • AI-generated fictional people: If the AI creates purely artificial, non-existent people, there are no image rights. Such representations can be used freely.
  • Similarities to real people: It becomes critical when AI specifically imitates real people or their doppelgangers, in particular celebrities. The use of such images for marketing purposes regularly constitutes a violation of personal rights.

AI results that violate personal rights likewise risk demands for injunctive relief and compensation for material and intangible damage, for example due to unauthorized use of a real person's image or voice. In addition, preliminary injunctions, lawsuits and the associated high legal and court costs may follow, particularly if prominent persons are affected.

10. Transparency & Labeling Obligations

Labeling requirement for deepfakes (images, videos and audio)

From August 2026, deepfakes must be clearly identifiable as AI works.

According to the EU AI Regulation, a deepfake only exists if all of the following criteria are met:

  • AI-generated or AI-manipulated images, videos, or voices,
  • replica of a real person, place, object or event,
  • realistic presentation with deceptive effect.

Not a deepfake: Conversely, not every realistic AI representation is a deepfake. Completely artificial persons, places or events are not subject to labeling as long as no real element is reproduced.

If content qualifies as a deepfake, a clearly visible and accessible label such as “AI result” is required, for example in the image description or as a visible watermark in the image.
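
Such a watermark can be applied automatically in the production workflow. The following is a minimal sketch using the Python imaging library Pillow; the label text, box size and placement are illustrative assumptions and should be adapted so the label remains clearly visible:

    # Minimal sketch: stamp a visible "AI result" label onto an image
    # using Pillow (pip install pillow). Position and styling are examples;
    # assumes the image is larger than the label box.
    from PIL import Image, ImageDraw

    def add_ai_label(input_path: str, output_path: str, label: str = "AI result") -> None:
        img = Image.open(input_path).convert("RGBA")
        overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        # Semi-transparent box in the bottom-left corner keeps the label legible
        draw.rectangle([(10, img.height - 40), (30 + 8 * len(label), img.height - 10)],
                       fill=(0, 0, 0, 160))
        draw.text((20, img.height - 33), label, fill=(255, 255, 255, 255))
        Image.alpha_composite(img, overlay).convert("RGB").save(output_path)

    add_ai_label("campaign_visual.png", "campaign_visual_labeled.png")

The same label should also appear in the image description or caption, since a watermark alone may not be accessible to all users.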

Labeling requirements for texts

AI texts are only subject to labeling if they are published to inform the public about matters of public interest. This includes, for example, news texts or press releases and especially journalistic texts. However, the labeling requirement does not apply if effective human review or editorial control has taken place and a person assumes editorial responsibility.

Labeling requirements of platforms

Regardless of the law, online platforms and social networks, such as Facebook and Instagram, are beginning to impose their own labeling requirements for AI content. In doing so, they sometimes go further than the law. Facebook and Instagram, for example, require an AI label for all “realistic-looking” AI images, regardless of whether real or fictional people are shown.

Recommendation: Due to differing legal and platform-specific requirements, it is advisable, as a rule, to label realistic-looking AI content as AI works, unless it is certain that no labeling requirement applies.

You should only refrain from labeling if you are absolutely certain that no labeling requirement applies. A breach of the labeling requirement risks significant sanctions, in particular fines of up to 3% of annual global turnover or up to 15 million euros, whichever is higher.

In addition, there may be further legal consequences, for example due to consumer deception or violations of personal rights, if real people or places have been reproduced in a deceptively real way without identification.

11. Liability and human control

AI is powerful but not infallible. In particular, faulty or fictional content (“hallucinations”) can have serious consequences, for example if it contains false statements about other people or your own products.

That is why:

  • All AI results must be reviewed professionally, legally and in terms of content before they are published.
  • The information provided to the AI (the so-called data basis) should also be checked for errors that the AI might adopt.

Teams should establish internal review processes. This is the only way to ensure quality, truthfulness and legal certainty.

12. Summary and practice recommendation

AI offers great potential for marketing and public relations, but at the same time, requirements are increasing due to the EU AI Regulation, data protection, copyright and media law, as well as transparency and labeling obligations. Secure deployment succeeds only if internal processes are clearly defined, approvals are documented and employees know what legal risks AI can pose and how to avoid them.

Only clear internal rules, documented approval processes and, above all, appropriately trained and risk-aware employees ensure that AI works efficiently without triggering legal risks.

In summary: Legally compliant use of AI requires knowledge, clear processes and trained employees.

Conclusion: Safely into the future of AI with certready.eu

Avoiding the legal violations, warnings and fines described above requires the AI competence expressly mandated by law (Art. 4 EU AI Regulation).

The online AI skills training from certready.eu makes marketing and PR teams fit for the responsible use of AI.


Participants learn in a hands-on way:

  • to implement the legal requirements of the EU AI Regulation,
  • to comply with data protection, copyright and transparency obligations,
  • and to use AI as an opportunity, without taking legal risks.

This is how the balancing act between innovation and compliance is achieved.

Secure AI expertise now

