Understanding AI’s ‘Fair Use’ Copyright Defense: A Guide for PR Professionals 

A Precedent for AI Model Training 

A federal judge recently ruled that Anthropic’s use of copyrighted books to train its Claude AI model qualifies as “fair use” under U.S. copyright law. This decision contributes to the ongoing debate about the legality of using copyrighted works in training large language models for generative AI services.  

For context, the “fair use” doctrine permits limited use of copyrighted material without requiring explicit permission, balancing the interests of copyright owners and the public. In this case, Anthropic argued that its use of these books fostered human creativity and aligned with copyright law’s purpose of promoting innovation. 

Nevertheless, the judge also held Anthropic accountable for maintaining a digital library of pirated works. This split decision emphasizes that while AI companies may find new opportunities through legal clarity, they must also carefully navigate their ethical responsibilities.  

By the same token, the ruling underscores how important it is for PR and communications professionals to understand the legal nuances of AI-driven content creation. 

The Content Landscape 

From owned media and social platforms to podcasts and videos, the volume of content – and its duplication across channels – has surged. But with this growth comes complexity, especially around intellectual property. As more content is created, shared, and repurposed with AI, the risk of infringement has never been greater. 

4 Things PR Pros Should Know About Copyright for AI-Generated Content 

1. Copyright Protection Requires Human Creativity 

AI-generated content is not eligible for copyright protection on its own; protection requires meaningful human authorship, such as substantial editing of the output. That said, the US Copyright Office’s Copyright and Artificial Intelligence Report (Part 2) suggests creators may have stronger copyright claims if they use “expressive inputs,” like a copyrighted work, as prompts. 

ArentFox Schiff’s Dan Jasnow explains the concept like this: if a creator uploads an original image to an AI tool and directs specific modifications, they have a stronger authorship claim. This is because the output is closely linked to the creator’s input, making copyright protection more likely. It’s worth noting that the Copyright Office views AI tools used this way as ones that “assist” in enhancing human creativity, meaning this kind of use does not change the copyright protection of the original work. 

2. Always Document Content Sources and Licensing 

Documenting the sources and licenses of content is essential, especially as Part 2 of the report highlights that prompts alone do not give creators the control needed for copyright protection over gen AI-produced content. 

Jasnow, the law firm’s AI Group Co-Leader, also advises that, for legal clarity and protection, it is crucial to maintain detailed records of content origins and the extent of human input, particularly because the Office remains open to future changes if technological advances allow greater human control over AI-generated content. 

Therefore, PR teams should also keep records of where an AI model’s training data and AI-generated outputs originate, and carefully consider which AI tools they use to ensure they comply with copyright law. 
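
To make this kind of record-keeping concrete, here is a minimal sketch of what a provenance log for AI-assisted content might look like, written as a Python dataclass appended to a JSON Lines file. The field names, tool name, and file path are illustrative assumptions, not requirements from the Copyright Office report or any specific platform.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ContentProvenanceRecord:
    """One audit-log entry describing how a piece of content was produced."""
    asset_id: str                 # internal ID for the deliverable
    ai_tool: str                  # generative AI product used (hypothetical name below)
    model_version: str            # model/version string reported by the vendor
    prompt_summary: str           # short description of the prompts supplied
    source_materials: list[str]   # owned or licensed inputs fed to the tool
    licenses: list[str]           # license terms covering each input
    human_edits: str              # description of the human revisions made
    reviewed_by: str              # who approved the final version
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry for a blog draft produced with AI assistance
record = ContentProvenanceRecord(
    asset_id="blog-2025-06-fair-use",
    ai_tool="ExampleGenAITool",   # hypothetical tool name
    model_version="v2.1",
    prompt_summary="Outline plus client-approved key messages",
    source_materials=["client brand guidelines (owned)", "licensed stock image"],
    licenses=["internal use", "standard stock license"],
    human_edits="Restructured argument, rewrote intro and conclusion, added quotes",
    reviewed_by="account lead",
)

# Append the entry to a simple JSON Lines audit log
with open("content_provenance.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```

Even a lightweight log like this gives a team something concrete to point to if the provenance of a piece of content is ever questioned.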

3. Monitor and Adapt to Ongoing Legal Developments 

As copyright law continues to evolve, successful PR teams will need to stay informed about new rulings and adapt their strategies to minimize legal exposure and maximize compliance. BakerHostetler has an artificial intelligence, copyright, and class actions tracker that monitors key litigation related to the creation and use of generative AI. 

4. Transparency and Attribution Matter 

Transparency and attribution are crucial when integrating AI into content creation. Clearly disclosing AI’s role not only informs audiences about the technology behind the content but also establishes trust and accountability. Whether for internal communications, client deliverables, or public distribution, maintaining transparency builds credibility and strengthens relationships with stakeholders. In addition, this openness is vital in mitigating potential reputational and legal risks by demonstrating a commitment to ethical practices in AI-driven content development. 

Where to from here? 

The Anthropic ruling signals a new, more flexible era for AI in communications. By understanding and applying copyright principles, PR professionals can leverage AI’s potential with confidence and foresight, ensuring their brands remain both innovative and protected. 

What are your thoughts on the balance between innovation and copyright protection in AI use? We’d love to hear your perspective in the comments below.

Consuming content is part of everyday life. In fact, it’s become such a norm that the average American spends up to six hours per day streaming content, while younger generations, like Gen Z, scroll for up to seven hours.  

Research shows that more than four in ten Americans (42%) admit to feeling they consume ‘too much’ media, and more than a third (36%) say their mood is often negatively impacted by something they’ve seen on social media. 

It’s no surprise that content consumption stirs a range of emotions, including guilt, anger, happiness, and contempt. After all, we can watch, listen, and learn about anything and everything at the touch of a button. However, if we lose our inquisitive nature and stop checking sources and facts, we risk falling for disinformation.  

What is Disinformation Awareness Month? 

April marks Disinformation Awareness Month, an initiative spearheaded by the Institute of Public Relations (IPR) to equip communications professionals with strategies to combat disinformation. It also serves as a reminder for PR professionals to uphold integrity in the role they play between brands, the public, and the media, despite the common misconception that the industry merely ‘spins’ stories and narratives.  

Notably, 60% of Americans view disinformation as a major issue and increasingly hold politicians, social media, and companies responsible for its spread. More alarmingly, 61% consider public relations professionals ‘somewhat’ culpable for spreading disinformation. This perception heightens the industry’s responsibility to uphold truth amid growing public distrust. 

Public relations’ role in combating disinformation  

With Americans holding sources such as social media platforms, political parties, and companies responsible for the spread of false information, it is more important than ever for the PR industry to play a pivotal role in combating distorted truths.  

Here are three ways communications professionals can combat disinformation: 

Improve media literacy 

Media literacy is no longer just desirable; it is essential to everyday practice. PR professionals are experts at navigating the media landscape: they understand, analyze, and evaluate the messages the public receives, and they use this knowledge to gauge public sentiment and build trust. Detecting disinformation draws on the same skills.  

PR professionals will need to become experts in recognizing the tell-tale signs of AI-generated and manipulated imagery, the psychological tactics used to provoke extreme emotional reactions, and frameworks for ethical AI integration.  

Leverage generative AI  

Widespread AI adoption brings plenty of opportunities and challenges to the PR industry. It increases productivity and efficiency in daily tasks, but it also demands a new skillset: proficiency in tools that generate content, subedit copy, and optimize outreach.  

Generative AI has also transformed the disinformation landscape by making deepfakes and manipulated content easier to create. Easy access to these tools lowers the barrier for bad actors. To counter this, and to keep building and maintaining trust with the public – a fundamental of the communications industry – PR professionals should embrace preventative measures, like developing risk and resilience strategies. 

When integrated ethically, generative AI can be a great ally in combating disinformation. For example, PR professionals can use natural language processing to identify suspicious or manipulated content patterns. From there, AI-powered tools can help authenticate the image, and the PR professional can gauge public sentiment toward it by understanding, analyzing, and evaluating the reactions.   
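
As a rough illustration of that workflow, and not a vetted detection methodology, the sketch below uses the open-source Hugging Face transformers library to triage example posts with zero-shot classification and then gauge audience sentiment in the replies. The candidate labels, confidence threshold, and sample text are assumptions made purely for demonstration.

```python
# Illustrative triage of suspicious content plus sentiment analysis of audience reactions.
# Requires: pip install transformers torch
from transformers import pipeline

# Zero-shot classification can flag posts whose framing resembles manipulation tactics
classifier = pipeline("zero-shot-classification")
sentiment = pipeline("sentiment-analysis")

candidate_labels = ["factual reporting", "emotional manipulation", "conspiracy framing"]

posts = [
    "BREAKING: they don't want you to see this leaked photo!!!",
    "The company published its quarterly sustainability report today.",
]

for post in posts:
    result = classifier(post, candidate_labels=candidate_labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label != "factual reporting" and top_score > 0.6:  # illustrative threshold
        print(f"Flag for human review: {post!r} (reads as {top_label})")

# Separately, gauge how audiences are reacting to a flagged post
replies = ["This is clearly fake.", "I can't believe this is real, I'm furious."]
for reply, verdict in zip(replies, sentiment(replies)):
    print(reply, "->", verdict["label"], round(verdict["score"], 2))
```

Outputs like these are only a starting point; the judgment call about what is genuine, and how to respond, stays with the practitioner.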

The blending of AI assistance and human expertise can create resilient defenses against disinformation and contribute to a healthier information ecosystem.  

Take the Disinformation Awareness Month Pledge 

Confronting disinformation and developing the tools to combat it helps rebuild and maintain public trust. That is why the Institute of Public Relations continues to be at the forefront of the issue, progressing from conducting signature studies to launching the IPR Behavioral Insights Research Center. 

By enhancing our capacity to fight disinformation, we not only protect reputations but also contribute to less polarizing news cycles for the betterment of society.  

Précis AI has proudly taken the pledge to combat disinformation and promote media literacy. We encourage all PR practitioners to visit the IPR website, participate in the Disinformation Awareness Month activities, and sign the commitment to help discern fact from fiction within content consumption. 

Ethical Implications for AI-Driven PR 

According to PR Week’s 2024 Global Comms Report, the use of AI in public relations is still largely experimental. Only 14% of U.S. PR agencies report regularly using generative AI for content creation, while 45% are experimenting (largely with ChatGPT), and a significant 41% aren’t using it at all. Part of the reason for these surprising numbers is that effective PR-specific AI tools haven’t yet appeared in the marketplace. 

However, this is changing. Précis Public Relations is the first comprehensive AI platform specifically built and trained for PR, and others will likely follow. Data gathered from a 28-country, five-month beta test of the Précis platform showed dazzling time savings, suggesting that AI is set to become omnipresent in the PR industry. After all, what agency or corporate comms department will be able to resist creating quality content 61% faster? 

AI Tools for Marketing & PR: A Technological Revolution 

The public relations industry is clearly standing on the precipice of a technological revolution. The integration of artificial intelligence (AI) into daily practices is both inevitable and transformative. However, with great power comes great responsibility, and the ethical use of AI in PR demands the immediate and undivided attention of the entire industry. 

Recent guidelines issued by leading industry bodies underscore the importance of ethical considerations in the deployment of AI tools. The PR Council and the Public Relations Society of America (PRSA) have both provided comprehensive frameworks to ensure that AI enhances, rather than undermines, the integrity of the PR profession. 

AI Marketing & PR: Balancing Benefits and Ethical Challenges 

AI’s potential to revolutionize PR is undeniable. From generating content to analyzing data and engaging with customers, AI tools can significantly boost productivity and efficiency. However, these benefits come with significant ethical challenges. AI lacks human judgment and understanding, which can lead to the propagation of misinformation, bias, and privacy violations. For instance, generative AI tools, while capable of producing vast amounts of content, often do so without the nuanced understanding that human professionals bring to the table. 

The PRSA’s Code of Ethics provides a robust framework for navigating these challenges. Key principles such as ensuring the free flow of information, promoting fair practices, and safeguarding confidences are more relevant than ever in the age of AI. These principles serve as a reminder that while AI can assist with PR work, it cannot replace the ethical judgment and critical thinking that are the hallmarks of the PR and marketing profession. 

Addressing Bias and Ensuring Transparency 

One of the most pressing concerns with AI in PR is the potential for bias. AI systems are trained on large datasets, which can inadvertently include biases present in the data. This can lead to biased outputs, which can perpetuate stereotypes and misinformation. The PR Council’s guidelines emphasize the importance of validating AI-generated content for accuracy and potential biases. This involves not only checking the content for factual accuracy but also ensuring that it does not inadvertently propagate harmful stereotypes or misinformation. 

Another critical aspect of ethical AI use in PR is transparency. The PR Council recommends that agencies disclose the use of AI tools in their creative processes to clients. This transparency builds trust and ensures that clients are fully aware of how their content is being generated. It also addresses legal concerns, as AI-generated content may not be fully protected under current copyright laws. 

The Future of AI in PR: Ethical Considerations and Professional Integrity 

The integration of AI into PR practices also raises questions about the future of the profession. As AI takes over more routine tasks, there is a risk that junior professionals may miss out on valuable learning opportunities. The PRSA’s guidance highlights the importance of human oversight in AI applications, particularly in sensitive areas such as hiring and financial reporting. This oversight ensures that AI is used responsibly and that junior professionals continue to develop the skills they need to succeed in the industry. 

PR agencies are increasingly developing their own guidelines to govern the use of generative AI. These internal policies are designed to align with broader industry standards while addressing specific needs and concerns within each agency. By establishing clear protocols for AI use, agencies can better manage the ethical and practical implications of this technology. This proactive approach not only safeguards the integrity of the profession but also enhances the agency’s ability to deliver high-quality, ethical services to clients. 

Navigating the Ethical Landscape of AI in PR 

The Précis beta test results highlight the transformative potential of AI, with users reporting an average time savings of 61% on individual tasks and over 10 hours saved weekly by heavy content producers. These productivity gains are not just impressive; they are game-changing. They allow PR professionals to focus more on strategic and creative tasks, ultimately delivering greater value to clients. 

As agencies and in-house teams continue to adopt and refine AI tools, the landscape of public relations will be irrevocably transformed. The challenge lies in ensuring that this transformation is guided by ethical principles and a commitment to professional integrity. 

As AI technology continues to evolve, PR professionals must remain vigilant about its ethical implications. This includes staying informed about the latest developments in AI and continuously evaluating the impact of AI on work product. The PR Council’s guidelines recommend regular training on best practices for using AI, avoiding potential biases, and maintaining transparency with clients and stakeholders. 

The ethical use of AI in PR is a complex and multifaceted issue. While AI offers exciting opportunities to enhance work products, it is crucial that its use is approached with caution and a strong ethical framework. By paying attention to industry guidelines like those of the PR Council and the PRSA, PR professionals can ensure that AI serves to enhance, rather than undermine, the integrity of the profession.  

###