
Charlyn Ho
Contributor

Creator’s dilemma: Dissonance in copyright law at the heart of GenAI

Opinion
Aug 13, 2025 | 10 mins
Generative AI, Government, Regulation

AI’s changing the creative game, but the rules on who owns what are fuzzy, leaving creators stuck in a messy copyright grey zone.

Credit: WarnerMedia

Generative AI is reshaping how businesses create, scale and distribute content. From marketing copy to legal summaries and original art, we’re witnessing a seismic shift in creative workflows. But with this transformation comes legal tension — and few areas are as murky as intellectual property rights.

Creators in particular have been uniquely impacted by the rise of GenAI. Unlike traditional tools, GenAI systems can ingest and replicate vast swaths of existing creative works, often without a proper license or even the original creator’s knowledge. This has sparked widespread concern across creative industries, as the value and ownership of intellectual property are threatened not just by direct imitation, but by the potential displacement of human-made content in favor of algorithmically generated alternatives.

As an example, consider the recent trend of using AI image filters to produce Studio Ghibli-style artwork in a fraction of the time that it took Studio Ghibli director Hayao Miyazaki to create these iconic works. Many fans have commented that this AI filter goes against the ethos of Miyazaki, who has said regarding AI-generated art, “I strongly feel that this is an insult to life itself.”

Over the past few years, I’ve advised clients ranging from startups to Fortune 50 companies on how to use GenAI responsibly and in compliance with law. One pattern that keeps emerging is a dual challenge: creators (whether businesses or individuals) often want to protect the outputs they generate with AI tools (for commercial purposes or a myriad of other reasons), while also ensuring their proprietary content isn’t being used to train others’ models without their explicit consent.

But if more creators opt out of contributing their content to training datasets, how can GenAI models continue to improve without access to the high-quality data that makes them useful in the first place? Yes, GenAI developers can try to license every single piece of copyrighted material used in training, but given the vast volume and diversity of data required to effectively train and fine-tune large language models (LLMs) over time, is that truly realistic or operationally feasible? 

This is what I call the creator’s dilemma: under current US law and regulatory guidance, you can’t generally copyright GenAI-generated content — but others may potentially use your copyrighted works to train their models. However, the law here is still unsettled and murky at best. Here’s what is behind the conflict, and how companies can navigate it.

GenAI outputs can’t be copyrighted without sufficient human authorship 

In 2023, the US Copyright Office launched a comprehensive initiative examining the copyright law and policy issues raised by AI. One tangible result of this initiative is a three-part report series analyzing these issues. Part 2 of the series, published in January 2025, focused specifically on whether GenAI outputs are eligible for copyright protection. The short answer: only if they include sufficient human authorship.

The Office reaffirmed that copyright law — based on Article I, Section 8 of the US Constitution — requires originality and human creativity. Simply generating content from a prompt like “write a children’s book about space whales” doesn’t make the result copyrightable. The Office based its guidance on the Supreme Court’s 1991 ruling in Feist Publications, Inc. v. Rural Telephone Service Co., where the Court stated that some level of creativity must be present for a work to be protectable, but “the requisite level of creativity is extremely low; even a slight amount will suffice.”

That said, there are a few exceptions. In its guidance, the Office clarified that copyright protection may apply when:

  • The human uses the AI as a tool, exercising creative control over the output 
  • The output includes perceptible excerpts from a human-authored work 
  • The AI-generated material is modified or arranged in a creatively meaningful way 

If the AI is just enhancing an existing human draft, there’s a stronger argument for copyrightability. But when AI is generating from scratch based on vague prompts or without substantively leveraging human-generated content, it’s much harder to claim protection.

Fair use and the training data dilemma

If you can’t copyright what AI helps you make, surely others can’t train their AI on your original work — right? Not necessarily.

In a pair of cases this year — Bartz v. Anthropic and a separate suit, Kadrey v. Meta — federal judges dismissed authors’ claims that using copyrighted books to train LLMs was infringement. The courts suggested that the training process might qualify as fair use — particularly when it doesn’t replicate expressive elements or directly harm the market for the original works.

These decisions gave some comfort to GenAI developers. But they are far from definitive. 

In contrast, the Thomson Reuters v. Ross Intelligence decision reached a different outcome. There, the court ruled that Ross’s use of Westlaw summaries to train a competing AI legal product was not fair use, citing the “market substitution” test. Because the AI model was designed to compete with Westlaw directly, the use undermined the plaintiff’s market and failed the fair use analysis.

However, the court expressly limited its opinion to non-generative AI, noting that Ross’s system did not generate new expressive works, but rather used editorial content to build a competing research tool. As a result, the decision may not directly apply to GenAI models trained on expressive materials like images, music or literature.

The takeaway? Fair use is context-specific. Factors include:

  • Whether the use is transformative 
  • Whether the use is commercial 
  • The nature of the copyrighted work 
  • The effect on the market for the original 

While some recent rulings (like Kadrey v. Meta) have suggested that training GenAI models on copyrighted works may qualify as fair use, the Ross case serves as a warning. Courts may take a stricter view when an AI tool competes in the same market as the material it was trained on — especially when the material contains editorial or creative structure, such as Westlaw’s headnotes and classification system. Given the lack of a consistent legal standard, changes in law or further guidance from the courts may eventually be needed to resolve these issues.

Why this paradox matters to business strategy 

This creates a strategic gap. I’ve worked with clients who invest heavily in GenAI tools to create marketing, legal or even artistic content — only to discover that they may not be able to claim copyright over the final product. Meanwhile, competitors (or the model developers themselves) might have trained their AI using publicly available copyrighted works, potentially without consent.

This puts creators in a tough spot. There’s value in using GenAI tools to accelerate work, but less clarity around how to protect that value from being copied or reused. The Office highlights a policy contradiction: while training may sometimes be justified under fair use, outputs that are substantially similar to copyrighted works (or that compete in the same market) may not be protected by fair use and could constitute infringement. The lack of clear legal standards for outputs creates a gap between what is permissible in training and what is permissible in deployment.

In light of the ongoing uncertainty in copyright law surrounding generative AI, some LLM providers have proactively offered indemnification to certain users for third-party infringement claims arising from the outputs generated by their models. These indemnification programs are designed to instill confidence in enterprise customers who may be wary of legal exposure when incorporating GenAI into their workflows. However, these protections often come with significant caveats and are generally available only to enterprise customers.  

The paradox is especially urgent for companies building AI-native products. Can you license your GenAI output to partners? Can you stop others from copying it? How do you handle client expectations around ownership? What happens if the LLM has been trained on copyrighted material and the fair use exception doesn’t apply?

These are questions I’m increasingly helping clients to think through, and they’re forcing legal and product teams to rethink how intellectual property fits into GenAI-enabled workflows.

How companies can stay ahead

Until Congress or the courts establish clearer rules, companies need to take proactive steps. Based on my experience advising across industries, here are some risk mitigation strategies that work: 

  • Use GenAI as a co-creation tool, not a replacement: The more human direction and editing involved, the stronger your copyright claim to outputs. 
  • Document human contributions: Keep records of your input during GenAI-assisted content creation to help support any IP assertions (see the sketch after this list). 
  • Don’t rely solely on copyright: Consider contracts, trade secrets or trademarks to protect high-value assets. 
  • Audit how your training data is sourced: If you’re developing your own models, make sure you know what’s in the dataset and how it was obtained to mitigate the risk of copyright infringement liability. 
  • Monitor regulatory trends: Laws and regulations relating to GenAI are rapidly shifting. For example, Texas just recently passed the Texas Responsible AI Governance Act, making it the third US state to adopt a comprehensive AI law.  
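
To make the “document human contributions” point concrete, here is a minimal, hypothetical sketch of what such a provenance log could look like in Python. The names (ProvenanceLog, record_step) are illustrative assumptions rather than any real library; the idea is simply to timestamp and hash each prompt, model output and human edit so a company can later demonstrate how much human direction went into a work.

```python
# Hypothetical sketch of a provenance log for GenAI-assisted drafts.
# ProvenanceLog and record_step are illustrative names, not a real library.
import hashlib
import json
from datetime import datetime, timezone

class ProvenanceLog:
    """Append-only record of prompts, model outputs and human edits."""

    def __init__(self, work_id: str):
        self.work_id = work_id
        self.steps: list[dict] = []

    def record_step(self, actor: str, action: str, content: str) -> None:
        # actor: "human" or a model identifier; action: "prompt", "output" or "edit"
        self.steps.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            # Store a hash rather than the full text, so the log can prove
            # what existed and when without duplicating the content itself.
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        })

    def export(self) -> str:
        return json.dumps({"work_id": self.work_id, "steps": self.steps}, indent=2)

# Usage: log each stage of a GenAI-assisted draft.
log = ProvenanceLog("marketing-copy-001")
log.record_step("human", "prompt", "Draft a tagline for a children's book about space whales")
log.record_step("example-llm-v1", "output", "Whales among the stars")
log.record_step("human", "edit", "Whales among the stars, dreams without end")
print(log.export())
```

Hashing rather than storing full text keeps the audit trail lightweight while still letting a company tie each recorded step back to the underlying document if an IP assertion is ever challenged.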

A path forward 

The Office has discussed solutions like licensing frameworks, though it acknowledges this could entrench inequality by favoring large players with deep pockets. President Trump recently weighed in disfavoring the licensing approach, stating, “Of course, you can’t copy or plagiarize an article, but if you read an article and learn from it, we have to allow AI to use that pool of knowledge without going through the complexity of contract negotiations.”

The US could also potentially follow the lead of other jurisdictions like the EU, which has promulgated legal exceptions for text and data mining that are relevant to GenAI, but Congress has yet to act.

In the meantime, the mismatch between what can be used to train and what can be protected continues to frustrate both creators and developers. As I’ve seen firsthand, this paradox complicates how companies think about value capture and competitive advantage.

But awareness is a first step. By understanding how copyright law is evolving and adjusting internal practices accordingly, businesses can minimize risk and make smarter GenAI investments — even in an uncertain legal environment. 

This article is published as part of the Foundry Expert Contributor Network.

Charlyn Ho
Contributor

Charlyn Ho is a seasoned lawyer, coach, mentor, thought leader and Navy veteran. She has almost two decades of deep business and legal experience across multiple industries with a keen focus on emerging technology areas, including AI, blockchain, adtech and consumer health. She is the CEO, managing member and founder of Rikka, a multidisciplinary organization providing solutions for today’s complex legal and technology challenges.

Before founding Rikka, Charlyn was a partner in the technology transactions and privacy group at one of the largest law firms in the US. Her practice focused on drafting and negotiating complex technology and intellectual property agreements and counseling clients on privacy and cybersecurity, with a focus on AI/machine learning, healthcare tech and connected or “smart” devices. She served as trusted counsel to companies on the cutting edge of technology, ranging from Fortune 100 companies to startups, across a wide variety of industries.