4.2.2026
Artificial intelligence (AI) is rapidly changing marketing and PR. Whether in text creation, image generation or data analysis, AI tools provide enormous efficiency gains and creative freedom.
But as opportunities grow, risks also increase: data protection, copyright and new EU requirements make the use of AI a legal challenge.
Especially in public relations and marketing, where trust, transparency and factual accuracy are decisive, careless use of AI can quickly become expensive. Incorrectly generated content (“hallucinations”), unclear data sources or unlabeled deepfakes jeopardize not only your company's credibility but also its legal security.
This checklist shows in a practical way what marketing and communication teams must pay attention to in order to use AI responsibly and in accordance with the law.
Before AI tools are used, responsibilities must be clearly defined.
Important: In both cases, the documentation should be in writing.
The EU AI Regulation (Art. 5) prohibits systems that deceive or manipulate people into making decisions they would not otherwise have made (Art. 5 para. 1 lit. a and b).
Examples include chatbots that broker credit agreements by exploiting users' emotional weaknesses.
Marketing managers should check whether AI systems exploit emotional weaknesses, and strictly rule out such practices. Violations can result in fines of up to 7% of annual worldwide turnover or up to 35 million euros, whichever is higher (Art. 99 AI Regulation).
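A hypothetical illustration of the “whichever is higher” rule (the figures are our own example, not taken from the regulation): for a company with an annual worldwide turnover of 600 million euros, 7% amounts to 42 million euros, so the higher cap of 42 million euros applies; for a company with 100 million euros in turnover, 7% is only 7 million euros, so the fixed cap of 35 million euros is the relevant maximum.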
Certain applications are considered high-risk AI (Art. 6 AI Regulation) and require certification of the systems used, documentation and human oversight. This includes, for example, systems that make personnel decisions or analyze biometric data. The last point in particular can be relevant in marketing.
If high-risk AI systems are used without complying with the AI Regulation, fines of up to 15 million euros or 3% of annual worldwide turnover, whichever is higher, may be imposed (Art. 99 AI Regulation).
Data protection plays a central role whenever AI processes personal data. It is therefore essential to pay attention to the following points:
4.1. Applicability of the GDPR
4.2. Permission bases for data processing
4.3. Contracts with external AI providers
4.4. AI providers outside the EU (third countries)
Ensure an adequate level of data protection, for example through an EU adequacy decision (Art. 45 GDPR) or standard contractual clauses (Art. 46 GDPR).
4.5. Data protection notices
4.6. Liability
4.7. Attention: Personal liability
Managing directors and executive employees may be personally liable in certain circumstances.
Violation of the requirements of the GDPR could result in fines of up to 20 million euros or up to 4% of the worldwide annual turnover of the previous financial year, whichever is higher.
Not only personal data but also business and trade secrets must be checked before they are made available to an AI system. The following points should be considered:
If an AI system can memorize and reproduce trade secrets, their owner loses trade secret protection. If the secret belongs to a third party, damages and contractual penalties may be due, and regulatory consequences are also a risk. Before each input, it should therefore be checked whether the content has been approved for external use.
Texts, images, or videos generated by AI are not protected by copyright, as only human-made works are protected. Even complex prompts don't change that. AI results can therefore be copied freely, and others can also adopt your AI content without consent. The important thing is:
AI-generated content rarely infringes third-party copyrights, but such infringements cannot be ruled out.
An infringement of third-party copyrights typically risks cease-and-desist letters demanding injunctive relief, together with the obligation to pay the resulting legal costs. In addition, claims for damages can be asserted, for example in the form of notional license fees or, where personality rights are violated, compensation for non-material damage.
However, the risk of unintentional infringements can be further reduced by reverse image searches and plagiarism checks.
Third-party texts or images are often fed into AI tools in order to generate modified content on that basis. This is generally permitted, provided that certain conditions are met.
In practice, automated AI crawlers in particular respect such usage restrictions; when content is uploaded manually by users, however, the restrictions are often overlooked or ignored. Although this makes violations difficult to prove, they are by no means ruled out. Employees should therefore never upload third-party copyrighted content to AI systems without their employer's express permission.
AI content can infringe personality rights, in particular the right to one's own image or voice. There is usually no labeling requirement as long as the AI creates completely fictitious persons.
Violations of personality rights through AI output likewise risk demands for injunctive relief and compensation for material and non-material damage, for example for the unauthorized use of a real person's image or voice. In addition, preliminary injunctions, lawsuits and the associated high legal and court costs may follow, particularly where prominent persons are affected.
From August 2026, deepfakes must be clearly identifiable as AI-generated. The labeling requirement covers deepfakes in images, videos and audio.
According to the AI Regulation, a deepfake exists only if all of the following criteria are met: the content was generated or manipulated by AI; it depicts real persons, objects, places, entities or events; and it would falsely appear authentic or truthful to a viewer.
Not a deepfake: Conversely, not every realistic AI representation is a deepfake. Completely artificial persons, places or events are not subject to labeling as long as no real element is reproduced.
If there is a deepfake, a clearly visible and accessible label such as “AI-generated” is required, for example in the image description or as a visible watermark in the image itself.
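As a practical illustration of such a watermark, the following minimal sketch stamps a visible “AI-generated” label onto an image using the Python library Pillow. The file names, label text and placement are our own assumptions for illustration; the AI Regulation does not prescribe a specific tool, wording or position.

```python
# Minimal sketch: adding a visible "AI-generated" label to an image
# with Pillow (pip install Pillow). Label text, file names and placement
# are illustrative assumptions, not requirements of the AI Regulation.
from PIL import Image, ImageDraw, ImageFont

def add_ai_label(src_path: str, dst_path: str, text: str = "AI-generated") -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # swap in a larger TTF font for production use
    # Measure the rendered text so the label can be anchored bottom-right.
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    text_w, text_h = right - left, bottom - top
    x, y = img.width - text_w - 10, img.height - text_h - 10
    # A dark background box keeps the label legible on any image content.
    draw.rectangle((x - 5, y - 5, x + text_w + 5, y + text_h + 5), fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255), font=font)
    img.save(dst_path)

add_ai_label("campaign_visual.png", "campaign_visual_labeled.png")
```

A caption in the image description or a platform's built-in AI label can serve the same purpose; a watermark simply has the advantage that the label travels with the file.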
AI texts are only subject to labeling if they are published to inform the public about matters of public interest. This includes, for example, news texts or press releases and especially journalistic texts. However, the labeling requirement does not apply if effective human review or editorial control has taken place and a person assumes editorial responsibility.
Regardless of the law, online platforms and social networks, such as Facebook and Instagram, are beginning to impose their own labeling requirements for AI content. In doing so, they sometimes go further than the law. Facebook and Instagram, for example, require an AI label for all “realistic-looking” AI images, regardless of whether real or fictional people are shown.
Recommendation: Given the differing legal and platform-specific requirements, it is advisable to label realistic-looking AI content as AI-generated by default, unless it is certain that no labeling requirement applies.
You should only refrain from labeling if you are absolutely certain that no legal requirement is being breached. A breach of the labeling requirement risks significant sanctions, in particular fines of up to 3% of annual global turnover or up to 15 million euros, whichever is higher.
In addition, further legal consequences are possible, for example for consumer deception or violations of personality rights, if real people or places have been reproduced in a deceptively realistic way without labeling.
AI is powerful but not infallible. Faulty or invented content (“hallucinations”) in particular can have serious consequences, for example if it contains false statements about other people or about your own products.
That is why teams should establish internal review processes. This is the only way to ensure quality, truthfulness and legal certainty.
AI offers great potential for marketing and public relations, but at the same time the requirements are increasing: the EU AI Regulation, data protection, copyright and media law, and transparency and labeling obligations. AI can only be deployed securely if internal processes are clearly defined, approvals are documented, and employees know which legal risks AI can pose and how to avoid them.
Only clear internal rules, documented approval processes and, above all, properly trained and aware employees can ensure that AI works efficiently without creating legal risks.
In summary: Legally compliant use of AI requires knowledge, clear processes and trained employees.
To avoid the legal violations, cease-and-desist letters and fines described above, the AI competence expressly required by law is indispensable (Art. 4 AI Regulation).
The online AI skills training from certready.eu makes marketing and PR teams fit for the responsible use of AI.
Participants learn in a hands-on way how to put these requirements into practice.
This is how the balancing act between innovation and compliance is achieved.