Who Owns the Code When AI Writes It?

In short: AI-assisted coding tools like Claude Code, Cursor, and GitHub Copilot can generate production-ready code in minutes. But the legal framework around who owns that code, whether it's copyrightable, and what happens when it inadvertently copies someone else's work hasn't caught up. If you're building products with AI, you need to understand the risks before you ship.
The Speed Is Real. So Are the Questions.
We use AI-assisted coding every day at SmplCo. It's how we build products in five days instead of five months. It's how we shipped our own internal tools in weeks. We're not sceptical about AI coding — we're practitioners.

But we've also had to think carefully about what it means when a significant chunk of your codebase was generated by a machine. Not because we're worried about quality (the code is often excellent), but because we're building products for clients who need to own what they've paid for. And right now, the law isn't entirely clear on that.
Can You Copyright AI-Generated Code?
This is the big one. In most jurisdictions, copyright requires a human author. The US Copyright Office has been explicit: works created entirely by AI, with no meaningful human creative input, cannot be copyrighted.
But here's where it gets interesting. Most AI-assisted coding isn't fully autonomous. You're prompting it, reviewing it, editing it, directing it. You're making creative decisions about architecture, naming, structure, flow. The AI is more like a very fast junior developer than a replacement for human thought.
The practical reality:
- Fully AI-generated code (zero human editing) — likely not copyrightable
- AI-assisted code (human-directed, reviewed, modified) — probably copyrightable, but untested in most courts
- AI-generated code that you substantially rework — almost certainly copyrightable, at least for the human-authored portions
The problem is that "substantial" is doing a lot of heavy lifting in that sentence. There's no bright line. Nobody's tested this properly in court yet, certainly not for code.
What About the Training Data Problem?
AI coding tools are trained on billions of lines of existing code — much of it open source, some of it proprietary. This creates two risks:
1. Accidental licence contamination. If the AI generates code that's substantially similar to GPL-licensed code, your project might inherit those licence obligations. That could mean you're legally required to open-source your own code. For a commercial product, that's a nightmare.
2. Reproducing proprietary code. There have been documented cases of Copilot generating near-verbatim snippets from its training data. If that snippet belongs to someone who didn't consent to it being used for training, you could have a copyright infringement problem.
What to do about it:
- Run licence scanning tools on AI-generated code (like FOSSA or Snyk)
- Treat AI output as you would code from an untrusted source — review it
- For critical or novel logic, write it yourself or heavily rework the AI output
- Keep records of your prompts and editing process (this helps establish human authorship)
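To make the licence-scanning point concrete, here's a minimal sketch of what a naive marker scan might look like. The marker strings and file extensions are our own illustrative choices, and this is no substitute for a proper scanner like FOSSA or Snyk, which match against actual licence databases and code fingerprints:

```python
import os

# Illustrative licence marker strings to flag (not exhaustive)
MARKERS = [
    "GNU General Public License",
    "GPL-2.0",
    "GPL-3.0",
    "GNU Affero",
    "Copyright (c)",
]

def scan_for_licence_markers(root: str) -> list[tuple[str, int, str]]:
    """Walk a source tree and flag lines containing known licence markers."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Only look at common source-file extensions (an assumption)
            if not name.endswith((".py", ".js", ".ts", ".go", ".java", ".c", ".cpp")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        for marker in MARKERS:
                            if marker in line:
                                hits.append((path, lineno, marker))
            except OSError:
                continue  # Skip unreadable files
    return hits
```

A flagged line doesn't prove contamination, but it tells you where to look before a dedicated tool, or a lawyer, gets involved.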

The EU AI Act and What It Means for Developers
The EU AI Act entered into force in 2024, with obligations phasing in through 2025 and beyond, and introduces requirements around transparency and risk. While coding tools are generally classified as limited-risk, there are requirements worth knowing:
- Transparency obligations. If you're deploying AI-generated content (including code that powers user-facing features), you may need to disclose that AI was involved in its creation.
- High-risk classifications. If your AI-assisted product touches healthcare, finance, or critical infrastructure, stricter rules apply — including documentation, human oversight, and conformity assessments.
- General-purpose AI models. The providers of tools like Claude and GPT-4 have their own compliance obligations. But downstream users (that's you) share responsibility for how the outputs are used.
In Norway, Datatilsynet (the Norwegian Data Protection Authority) is actively engaging with AI governance. If you're building products for the Norwegian or European market, this isn't theoretical — it's compliance you need to plan for.
Liability: When AI Code Breaks Things
Here's a scenario we think about: you ship a product. The AI wrote a significant piece of the backend logic. That logic has a bug that causes a data breach. Who's liable?
Currently, the answer is: you are. The company that ships the product bears the liability, regardless of whether a human or an AI wrote the problematic code. AI tool providers generally disclaim liability for the outputs in their terms of service.
This isn't actually that different from using Stack Overflow or open-source libraries — you're responsible for what you ship. But it does mean that AI-generated code needs the same (or more) rigorous review as human-written code. The speed gain from AI is real, but it can't come at the cost of testing and review.
What We Actually Do at SmplCo
We've landed on a practical approach that balances speed with responsibility:
- We treat AI as a collaborator, not an autopilot. Every piece of AI-generated code is reviewed, tested, and often substantially modified by a human developer.
- We keep prompt logs. When building client products, we document the creative direction and decisions that shaped the code. This strengthens the case for human authorship.
- We run licence checks. Automated scanning catches potential contamination before it reaches production.
- We're transparent with clients. We explain that we use AI-assisted development and what that means for their IP ownership. No surprises.
- We don't ship what we don't understand. If the AI generates something clever but opaque, we either rewrite it or learn it before it goes live.
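Prompt logs don't need heavy tooling. A sketch of the idea, using append-only JSON lines (the field names here are our assumptions, not any standard):

```python
import json
from datetime import datetime, timezone

def log_prompt(logfile: str, prompt: str, tool: str, human_changes: str) -> None:
    """Append one record of an AI interaction to a JSON-lines log.

    Capturing the prompt, the tool used, and a note on how a human
    modified the output documents creative direction over time.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "human_changes": human_changes,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage (hypothetical values):
# log_prompt("prompt_log.jsonl",
#            prompt="Generate a pagination helper for the orders API",
#            tool="Claude Code",
#            human_changes="Renamed functions, rewrote edge cases, added tests")
```

The point isn't the format. It's having a contemporaneous record of human creative input if authorship is ever challenged.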
Five Things You Should Do Right Now
If you're building with AI coding tools, here's a practical checklist:
- Establish an AI usage policy. Even a one-pager. What tools are approved? How should AI-generated code be reviewed? Who signs off?
- Document your process. Keep records of prompts, creative decisions, and human modifications. This is your evidence of authorship if it's ever challenged.
- Run licence scanning. Tools like FOSSA, Snyk, or even basic grep searches can flag potential GPL contamination in AI-generated code.
- Update your client contracts. Make sure your IP assignment clauses account for AI-assisted development. Be explicit about it.
- Stay informed. The law is evolving fast. The EU AI Act, US copyright rulings, and UK IP guidance are all in flux. What's grey today might be black or white next year.
The Bottom Line
AI-assisted coding is here to stay. We use it. We love it. We've built 125+ products with it. But the legal landscape is genuinely uncertain, and pretending otherwise is irresponsible.
The good news is that the practical mitigations are straightforward: review what AI generates, document your creative input, scan for licence issues, and be transparent with clients. None of this slows you down meaningfully. It just means you're building responsibly.
If you want to talk about how to adopt AI-assisted development without the legal headaches, get in touch. It's the kind of thing we help with.
Frequently Asked Questions
Can I own code that was written by AI?
It depends on how much human creative input was involved. Fully autonomous AI output is likely not copyrightable. But AI-assisted code where a human directed, reviewed, and modified the output is probably protectable — though this hasn't been definitively tested in court.
Does using GitHub Copilot or Claude Code create licence risks?
Potentially, yes. These tools are trained on open-source code, and there's a small risk of generating output that matches GPL or other copyleft-licensed code. Running licence scanning tools on AI-generated code is a sensible precaution.
Do I need to tell my clients I used AI to write their code?
There's no universal legal requirement yet, but transparency builds trust and protects you. Under the EU AI Act, certain transparency obligations may apply depending on your product's risk classification. We recommend being upfront about it.

About the author
Andreas Melvær
Managing Director & Co-founder, SmplCo
Andreas is the MD and co-founder of SmplCo. A product nerd at heart, he leads the company's 5-Day Prototype service and has helped 125+ startups and enterprises turn ideas into working digital products. He builds with AI, ships with speed, and occasionally wins marketing awards.