Are You Infringing on Intellectual Property by Using Generative AI?
What should teams using Generative AI be aware of?
Understanding IP Risk When Integrating Generative AI
Any product team considering the integration of a generative model should be aware that doing so may propagate intellectual property (IP) infringement. Content produced by generative models may contain legally protected IP, putting both the product and its users at legal risk.
These legal risks also raise serious concerns about our collective ethical obligation to the creators and innovators whose work was used—often without consent—to train generative models.
Beyond potential infringement, there is a glaring legal gray area around the world regarding IP ownership of AI-generated content. This ambiguity raises more questions than answers, such as:
Does the generative AI provider or the user own the IP in generated content? The law on this question is weak, and many AI providers have stepped in with their own legal interpretations in user agreements such as End User License Agreements.
As a user of an AI tool, do you own the copyright to generated content? Since many models use partially deterministic mechanisms, similar or even identical content may be generated for different users. Early legal indicators, such as the U.S. Copyright Office’s decision on Zarya of the Dawn, a work containing Midjourney-generated images, suggest that you don’t.
Product teams leveraging LLMs will need to consider the ethical aspects of this issue and ensure their legal agreements cover both infringement and rights issues.
The Current Situation
Since the explosion of generative AI in 2022, serious controversy has emerged around the data used to train these models.
A wide range of professionals—journalists, authors, screenwriters, musicians, photographers, and artists, as well as engineers, developers, and inventors in patent-heavy industries—have voiced concerns about how their content is being used. Content providers, from news organizations to publishers, were among the first to take legal action against companies such as OpenAI.
In its March 2025 update, Sustainable Tech Partner News reported that of the more than 100 active AI-related lawsuits against providers such as OpenAI, Microsoft, Nvidia, and Perplexity, most involve IP infringement claims.
Citing a lack of federal intervention, states like California have begun introducing their own regulations to protect copyright holders and establish clearer boundaries. In September 2024, Governor Gavin Newsom signed Bill AB-2013: Generative Artificial Intelligence—Training Data Transparency, which requires AI developers to disclose training data sources, helping IP owners determine if their content was used. Generative AI companies must comply by January 2026.
In response, AI companies are actively lobbying the federal government to classify the use of copyrighted material in AI training as “fair use”—a move that could significantly reduce or eliminate their legal obligations to intellectual property holders. For example, both OpenAI and Microsoft submitted public responses to the U.S. government’s public comment invitation on its Artificial Intelligence Action Plan, each advocating for the weakening of existing IP laws to allow broader use of copyrighted content in training AI models.
In short, the situation is evolving rapidly, and it remains unclear how these issues will be resolved in the U.S. or globally.
Why Are You at Risk?
Generative AI models, by design, can replicate or remix content used in their training data. If you use AI-generated content that harms original IP holders, you may be liable for damages. If you’re a product company integrating generative AI, you could face additional legal exposure for content generated by your platform.
Furthermore, if you use AI to generate materials—from written content to design patterns, images, or computer code—you may not actually have the legal right to claim ownership. Copyright and trademark protections (including “first use” rights) may not apply to AI-generated content, and legal clarity on this issue remains unresolved. If your product helps customers generate content, consult legal counsel to understand the risks and how you should protect your organization.
Why Hasn’t This Slowed Adoption?
Given that I’m in AI conversations daily, it’s surprising how rarely this issue comes up. It’s unclear whether that’s due to a lack of awareness or a low estimation of the risk.
Early indemnity offerings from major AI providers—Google, OpenAI, Microsoft, Adobe—likely played a role in easing the perception of legal risks related to IP infringement. Indemnity is a legal mechanism that protects end users from liability, shifting the burden to the AI provider in cases of infringement. Through a series of changes to their own legal contracts, these companies have reduced the risk of using AI-generated content, even if it contains legally protected IP.
Offering content use indemnity should be a consideration for any company incorporating generative AI into their products.
The Backstory of Indemnity Agreements
IP infringement risks were well understood by AI companies early on—and they recognized these concerns could slow adoption.
As far as I can tell, GitHub was the first to act on this concern. In June 2022, GitHub began offering indemnification for users of its AI-powered tool, Copilot—but only if they used Copilot’s “duplication detection” filter.
Nearly a year later, other providers followed suit:
In June 2023, Adobe began offering IP indemnification for commercial users of Firefly, its generative image tool.
In July 2023, Shutterstock introduced indemnification for enterprise customers licensing its generative AI images.
In September 2023, Microsoft launched Copilot, then rolled out its Copilot Copyright Commitment, indemnifying paying customers for copyright claims related to AI-generated content.
By November 2023, Microsoft extended this protection to Azure service users under a broader Customer Copyright Commitment—including those using Azure’s OpenAI services.
OpenAI responded with its own Copyright Shield for Enterprise and API customers in November 2023 (note: this protection did not extend to Free, Plus, or Pro users).
Today, indemnity clauses are a common tool to reassure customers and drive adoption. Product teams will need to consider whether indemnity protections covering the generative AI embedded in their products will also extend to their customers.
Where Is This All Headed?
Despite these concerns, momentum behind generative AI continues to accelerate. For now, indemnity appears to satisfy early adopters, especially in the absence of meaningful global regulation.
Assuming most AI use cases are pursued in good faith, and based on early court decisions, we may be heading toward the following outcomes:
Some use of protected IP for training AI may be considered fair use, especially if no direct harm can be proven.
More generative AI products will build in mechanisms to prevent or mitigate IP infringement, such as content filters and preventative tools.
Legislation may pave the way for content owners to request or require that generative AI providers remove their content from models or exclude it from further training.
In clear cases of copied or traceable IP, providers may be required to compensate the original creators.
Courts will continue to clarify ownership rights of content produced by AI, though outcomes may differ from country to country.
Considerations for Product Teams
If you're thinking about integrating generative AI into your product:
Educate your team on the current legal and ethical landscape.
Choose providers that offer indemnity and legal protections for generated content and understand whether this protection would extend to your customers.
Reduce risk by:
Evaluating your AI provider’s training data sources
Scoping and limiting types of content created in your own use of generative AI
Add guardrails to your own products, such as reviewing generated content for potential infringement (a minimal sketch follows this list).
Consider indemnifying your customers for content generated by your product.
Take a clear position on IP ownership of AI-generated content—even if the law remains unsettled.
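To make the guardrails point concrete, here is a minimal sketch of one such pre-release check: flagging generated text that shares long verbatim passages with a reference corpus of protected works. Everything in it is an illustrative assumption, including the function names, n-gram length, and threshold; it is not any provider’s actual filter. Production systems typically rely on more robust fingerprinting or provider-supplied tools such as Copilot’s duplication detection.

```python
# Hypothetical guardrail sketch: screen AI-generated text for long
# verbatim overlaps with known protected works before releasing it.
# The corpus, n-gram length, and threshold are illustrative
# assumptions, not any provider's real filter.

from typing import Iterable


def _ngrams(text: str, n: int) -> set:
    """Return the set of word n-grams in text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def flag_potential_infringement(
    generated: str,
    protected_corpus: Iterable[str],
    n: int = 8,               # long n-grams approximate verbatim copying
    threshold: float = 0.05,  # flag if >5% of the output's n-grams match
) -> bool:
    """True if the generated text shares an unusually high fraction of
    long n-grams with any work in the protected corpus."""
    gen_grams = _ngrams(generated, n)
    if not gen_grams:
        return False
    for work in protected_corpus:
        overlap = len(gen_grams & _ngrams(work, n))
        if overlap / len(gen_grams) > threshold:
            return True
    return False


if __name__ == "__main__":
    corpus = ["the quick brown fox jumps over the lazy dog and never looks back"]
    draft = "the quick brown fox jumps over the lazy dog and never looks back again"
    # Heavy overlap with a protected work: hold this output for human
    # or legal review instead of returning it to the user.
    print(flag_potential_infringement(draft, corpus))  # True
```

A simple lexical check like this misses paraphrase and won’t cover images or code reuse, so treat it as one layer alongside provider filters, scoped prompts, and human review.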
Final Thoughts: Do Your Homework
As exciting as generative AI is, it’s critical not to overlook the legal and ethical complexities—especially around intellectual property. While some companies argue for expanding the definition of "fair use" to accommodate AI training, many creators, rights holders, and legal experts warn that such changes could severely undermine IP protections that support innovation, creativity, and livelihoods.
If you're building with generative AI, don't rely solely on provider assurances or industry momentum. Take time to research the perspectives of those whose work may have been used without permission. Writers, artists, musicians, inventors, and journalists are raising serious concerns about the long-term implications of weakening IP laws in the name of technological progress.
Educate yourself. Read the public comments from both sides of the debate. Listen to the arguments of IP owners, not just AI companies. Understand what’s at stake—because building responsibly in this space means more than avoiding lawsuits. It means deciding what kind of innovation ecosystem we want to be part of.