
AI Can Create It. But Can You Own It?


As the world watches the latest battle over which AI model is best, a quiet courtroom fight in Washington, D.C., has just shifted the risk profile for every company using AI to generate content. This under-the-radar decision may shape future business decisions for nearly every organization, and it may also change the role of the CISO and their involvement in how AI is used within their organization.

 

The decision, Thaler v. Perlmutter, concerned the US Copyright Office's determination that materials created entirely by AI cannot be copyrighted. The Supreme Court recently let that decision stand, leaving in place a lower court's ruling that AI-generated output lacks the “human authorship” required for copyright protection. Copyright is an important protection that organizations and individuals can use to safeguard the intellectual property they create. Once something is copyrighted, it generally cannot be used, republished, or otherwise shared without the permission of the copyright holder. Copyright rests on the principle that creators of new content deserve the right to control their intellectual property. Because intellectual property, whether written word, art, or video, is an important asset, copyright holders enjoy and defend this privilege. But if you use AI alone to develop that asset, you may not be able to copyright the material, according to the US Copyright Office.

 

In making this decision, the Copyright Office considered a few variables. First, AI is not currently considered a “person.” The Copyright Clause of the US Constitution, the source of the authority behind copyright law, speaks of “secur[ing] for limited Times to ... Authors the exclusive Right to their ... Writings,” which the Supreme Court has further refined: “[T]he author [of a copyrighted work] is ... the person who translates an idea into a fixed, tangible expression entitled to copyright protection.” Because AI is not a legal person, it cannot receive copyright protection for works it creates on its own.

 

The second variable is how much AI a person can use. For part of this determination, the Office drew on earlier rulings about joint ownership, and even about the use of a camera. On joint ownership, it noted historical disputes over copyright claims between the writers of material and those who printed it. When the printed content is almost entirely unchanged except for the medium (going from handwritten to print, for example), the copyright goes to the author, not the printer. By analogy, if a user takes an AI's output and publishes it without any further effort, the AI is effectively the author, and AI cannot claim copyright. As for mechanical tools, it was once argued that in the act of taking a photograph, the camera is the creator, not the person who chose to take a picture rather than paint. The determination was, and still is, that using a camera is one part of the expression of intellectual property: determining lighting, angle, pose, and so on is under the artist's control, so the photo is a human-driven expression even when aided by a machine. Applied to AI, if the AI's output requires significant human adjustment, then the human is the ultimate author. Taking an article drafted by an AI and making significant changes to wording, style, and structure could qualify it as a human-created article, even if the AI played a role in its initial development.

 

The US Copyright Office also addressed prompts. Many good arguments were submitted to the Office that complex prompts could constitute sufficient input and control to meet the definition of authorship. The Office considered this but determined that two factors make prompting insufficient: input granularity is limited, and output is inconsistent. A user can only supply so many details for the AI to process, and it will fill any gaps at its own discretion. For example, you can ask for a picture of a cat on a surfboard, but the color of the cat's eyes will be determined by the AI unless you make it explicit in the prompt, and there are limits to how much you can specify. Moreover, you can supply as many details as you want and still get a different response each time, because this technology is probabilistic, not deterministic, meaning there is always an element of chance in its outputs. This may change one day, but for now, writing good prompts alone is not enough to establish authorship.

The "Human Authorship" Test:

  • Non-Copyrightable: Pure AI output from a simple prompt (e.g., "Write a blog post about cybersecurity"). The AI is the "author," and AI has no legal standing.
  • Copyrightable: AI-assisted work where a human exerts "creative control" (e.g., significant manual rewriting, specific structural editing, or combining AI elements into a larger, human-curated whole).

So, what does this mean for CISOs and other security leaders?

One of the key roles of security leaders is protecting their organization's intellectual property, which may include copyrighted (or potentially copyrighted) material. Theft and redistribution of copyrighted material are huge issues for media companies, design companies, and anyone in the business of licensing creative thought. These assets are targets of competitors seeking an edge or syndicates looking to distribute copyrighted material without paying royalties to authors.

 

But if something isn't protected by copyright due to excessive AI development, should the organization defend it? And what might happen if the organization takes legal action, such as a cease-and-desist letter or a takedown request? Savvy adversaries can use tools to demonstrate the extent of AI involvement in the creation, putting authorities in the awkward position of declining to support the legal position of the original “author.” Further, users engaging in “Shadow AI” may inadvertently weaken intellectual property protection by using AI in a non-copyrightable way.

 

Understanding the nature and use of AI in intellectual property development helps the CISO determine which actions they should and should not take. This gives the CISO a clear view into the development process, which may uncover other risks, such as integration issues and permissions, and even catch inappropriate use of copyrighted material. What was previously seen as a potential intrusion by security into the creative development process can be reframed as an effort to protect legal rights and privileges.

 

Companies will use AI to create material for which they seek copyright protection. At this time, there are certain requirements that organizations must meet to achieve this protection, and it is imperative that CISOs are involved in this process. This is a chance for them to showcase that they are not just here to stop attacks but are here to ensure that value is properly created, distributed, and protected.

 

Sources:

Baker Donelson, Supreme Court Denies Certiorari in Thaler v. Perlmutter: AI Cannot Be an Author Under the Copyright Act

United States Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability
