AI: Evolution of Creative Tools, Not Theft of Creative Rights
All art/tech is derivative. Every creator stands on the shoulders of those who came before them. Every innovation builds upon previous knowledge. This fundamental truth forms the basis of my perspective on AI ethics, which I’ll share below, along with further thoughts that came out of a spirited debate in a private chat recently.
The Evolution of Creative Tools
Looking at human creativity from a historical perspective, we can see that tools have consistently evolved to compress the time it takes to master creative expression. In the Middle Ages, you might have studied and practiced with rudimentary tools for decades to build competence in any art form. As tools improved, this timeline shortened dramatically.
AI represents perhaps the ultimate culmination of this compression - which, I acknowledge, can be both exciting and frightening. It has profound implications for economics and society that we must thoughtfully address.
I would argue, though, that barring malicious actors on both the training and usage ends, AI itself is not at fault. The humans doing the training can certainly run afoul of what is legal or ethical. And no matter how ethically trained, a model can likely still be used for bad purposes, just like any other tool.
The Myth of “Original” Creation
When critics claim AI is “copying” past work, they overlook two crucial facts: human creators do the same, and the models aren’t really stealing anything. Little to nothing that any of us produce is truly unique or exclusively “ours.” At best, we contribute very small bricks to a very tall wall that generations before us have built.
Look at Hollywood and the entertainment industry, often considered the pinnacle of creative storytelling in our culture. The reality? Nearly every blockbuster film recycles narrative patterns that have existed for millennia. Joseph Campbell’s “Hero’s Journey” framework reveals how Star Wars, The Matrix, and countless other beloved films follow essentially the same mythological structure that has been told and retold since ancient Greece and beyond. [1] Marvel superhero narratives are modern retellings of god myths. Romance films follow tropes established centuries ago.
Even the most “original” auteur directors and screenwriters are building upon existing filmmaking language, camera techniques, editing approaches, and storytelling conventions. Studios invest billions in these recycled narratives while zealously defending their copyright claims as if they’d invented storytelling itself.
The same principle applies when I study a photographer’s portfolio and learn to emulate their distinctive use of lighting or composition. No one would claim I’ve “stolen” from the photographer - I’ve simply learned from their work and applied that knowledge to create something new. AI does this at scale, analyzing thousands of examples to identify patterns that define styles or techniques, then applying those patterns to generate new content. The scale is different, but the fundamental process is the same. If you wish to steal my photos, go for it - I didn’t make the mountains or the sky, I didn’t build the camera or invent the art of composition; I just like to take pictures.
This pattern extends across all creative fields. What we celebrate isn’t pure originality - which may not truly exist - but rather skillful recombination, recontextualization, and iteration upon existing ideas. As Austin Kleon argues in his influential book “Steal Like an Artist” (2012), all creative work builds on what came before, and the best artists openly acknowledge their influences rather than pretending to create from nothing. [2]
This is not to diminish human creativity but to place it in proper context. We all benefit from the work of our ancestors. AI simply extends that benefit further, faster, and more democratically than previous tools.
Furthermore, to “steal” in the eyes of copyright law, some harm generally needs to be done and/or the result must be a substantially similar replication of the same expression. You can’t copyright style, ideas, or factual procedures. AI models typically are not trained in a way that lets them easily reproduce a specific work - they emulate the style. A user can try to tease out an exact copy, or little snippets of one, but it’s quite hard to produce a clear violation of copyright law. I’d argue that if and when that is possible, it’s bad and should be treated as a bug and fixed. There is the matter of the scale at which similar works can be reproduced while an artist is still alive - but nothing stops the artist, writer, or programmer themselves from employing the same AI/ML techniques to achieve the same scale on the same timeline as a bad actor. Complaining that someone else beat you to monetizing variations on your “style” is bunk.
Democratizing Creation
Why should creative expression remain exclusive to those who’ve mastered traditional techniques? This feels like unnecessary gatekeeping. If someone has an idea they want to express in a medium where they lack technical expertise, why shouldn’t they be able to use AI as a tool to help manifest that vision?
Consider the evolution of photography and image editing. The techniques of “dodge” and “burn” once required days in a darkroom and were exclusively available to skilled technicians. Photoshop democratized these processes. Today, we don’t consider using digital tools to be “putting darkroom technicians out of business” - rather, we recognize that the field evolved and expanded as tools became more accessible.
Copyright: A System Ready for Reformation
The entire concept of copyright has strayed far from its original purpose. What began as a limited protection (14 years, renewable once, under the first copyright statutes) has ballooned to the author’s life plus 70 years - often well over a century - with corporations employing various tactics to extend it even further. This system primarily benefits large companies rather than individual creators.
Modern copyright law was not designed for a world of digital collaboration and iterative creation. I’m not arguing for the elimination of all creative rights, but rather for a system that better reflects how humans actually create - by building upon what came before.
Legal scholars like Lawrence Lessig have long argued that our current copyright system has become dysfunctional. In his landmark book “Free Culture” (2004), Lessig demonstrates how extended copyright terms and aggressive enforcement have created a “permission culture” that stifles creativity and innovation rather than promoting it, directly contradicting copyright’s constitutional purpose “to promote the progress of science and useful arts.” [3]
Fair Use vs. Theft: A Crucial Distinction
There’s a significant difference between “stealing” and what constitutes fair use. What AI models do is much more akin to how humans learn than to how photocopiers duplicate. AI systems learn patterns and concepts rather than memorizing specific works.
Consider this practical example: If I, as a human, study a director’s films and create a color filter/profile to emulate that cinematic style, nothing has been stolen, and nothing has been lost. I’ve simply enabled others to create works with a similar aesthetic quality. Similarly, if I analyze the distinct drawing style of an artist who specializes in quirky characters and then write code to generate new characters in that style, this wouldn’t be considered copyright infringement. AI is fundamentally doing the same thing—just at a scale that feels unfamiliar and uncomfortable for many of us.
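To make that concrete, here is a minimal sketch of what such a color profile could look like - my own illustration, with names and approach that are assumptions rather than any real grading pipeline. It distills reference frames into per-channel color statistics and applies those statistics to new footage; notably, only the statistics survive, not the frames themselves:

```python
# A toy "learned look": per-channel color statistics as a crude stand-in for
# a cinematic style. This is simplified color transfer in RGB; real pipelines
# are far more sophisticated, but the principle - keeping statistics, not
# frames - is the same.
import numpy as np

def learn_look(reference_frames):
    """Summarize reference frames as per-channel mean/std - the 'style'."""
    pixels = np.concatenate([f.reshape(-1, 3) for f in reference_frames])
    return pixels.mean(axis=0), pixels.std(axis=0)

def apply_look(frame, look):
    """Shift a new frame's color statistics toward the learned look."""
    target_mean, target_std = look
    f = frame.reshape(-1, 3).astype(np.float64)
    normalized = (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-8)
    graded = normalized * target_std + target_mean
    return graded.clip(0, 255).reshape(frame.shape).astype(np.uint8)
```

The learned “look” here is just six numbers per reference set; none of the director’s frames can be recovered from it. The argument above is that model training retains a much richer, but conceptually similar, statistical summary rather than the works themselves.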
Copyright doesn’t protect a “style” or “theme” - it protects specific expressions. It’s actually quite difficult to get an AI to exactly reproduce an existing copyrighted work. What AI produces is similar in concept but unique in execution - precisely how human creators have always operated.
While AI is primarily a tool, it occupies a unique middle ground that we haven’t fully addressed in our legal and ethical frameworks. These systems assimilate and apply data in ways that mimic human learning processes, even if they lack consciousness. To dismiss this similarity entirely is to misunderstand their fundamental nature. This doesn’t mean AIs deserve full “personhood,” but rather that we might need to develop nuanced frameworks that acknowledge their capacity to learn and transform information in ways traditional tools cannot.
Humans have a right to read and learn from existing works. I believe AI models should have a similar limited right - as long as they’re not regurgitating copyright-violating works during normal, proper usage. Most commercial models are already designed to avoid verbatim reproduction and operate within the spirit of fair use while creating useful tools for creators. I’d therefore argue that most, if not all, of the content used to train them was fair use, just as with a search engine - even content that may have been scraped without permission from YouTube and the like.
Ultimately, humans must remain responsible for both the creation and use of these systems. The person who deploys AI should be accountable for its outputs, just as we hold authors accountable for their writings. But this responsibility doesn’t negate the fact that we’re dealing with something that processes information more like a mind than a hammer.
The Opt-In Challenge
Some argue that creators should explicitly opt-in before their work is included in datasets. I understand this concern, but our digital infrastructure hasn’t historically been built with this level of nuance.
This is changing, however. Thom Vaughan, a colleague of mine at Common Crawl, has published a draft [4] proposing a vocabulary for expressing an author’s wishes. Tools like this, and many others, will take time to implement widely, but they represent important progress toward a more consent-based approach.
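As a purely hypothetical sketch of the idea (the header name and token below are my inventions, not the draft’s actual vocabulary or mechanism), the end state might let a crawler check a machine-readable preference before adding a page to a training corpus:

```python
# Hypothetical illustration only - NOT the vocabulary or mechanism defined in
# the actual IETF draft. The idea: content owners publish a machine-readable
# preference, and well-behaved crawlers honor it before using a page for
# training.
import urllib.request

def may_train_on(url: str) -> bool:
    """Honor a hypothetical 'X-Content-Usage' opt-out header, if present."""
    with urllib.request.urlopen(url) as resp:
        preference = resp.headers.get("X-Content-Usage", "")
    # 'no-ai-training' is a made-up token standing in for the real vocabulary.
    return "no-ai-training" not in preference.lower()
```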
At a gathering in New York titled “AI’s Right to Learn” hosted by media expert Jeff Jarvis, representatives from major media companies, tech firms, and legal organizations discussed these very issues. Despite differing perspectives, many participants ultimately concluded that AI training on publicly available content more closely resembles fair use than infringement. [5] This doesn’t mean creator concerns should be dismissed, but rather that we need balanced approaches that respect both creative rights and technological progress.
AI: Beyond Simple Tools, Yet Not Persons
While categorizing AI purely as a “tool” offers legal clarity, this simplification glosses over its unique nature. Unlike traditional tools, modern AI systems process information through complex networks that mirror aspects of human cognition. They don’t merely apply force or transform materials in predetermined ways—they learn, adapt, and generate novel outputs based on patterns they’ve observed.
This doesn’t mean AI deserves personhood or legal rights equivalent to humans. Rather, it suggests we need more sophisticated frameworks that acknowledge this middle ground. The responsibility for how AI outputs are used must ultimately rest with the human operators who deploy these systems. When an AI generates content that potentially infringes copyright, the liability should fall on the person who requested and uses that content, not on the AI itself.
However, recognizing AI’s unique capacity to process and transform information in human-like ways might inform how we think about training data and fair use. If we value human exposure to diverse creative works as essential for cultural development, we might consider whether similar principles could apply to systems that learn in analogous (if simplified) ways.
This isn’t to dismiss ethical concerns about data sourcing, but to place them in context. Few of our tools—from smartphones to vehicles to computers—are made with materials sourced through perfectly ethical supply chains. We can work to improve these systems while still utilizing the tools themselves.
I suggest reading or watching anything from Andrej Karpathy to get a better understanding of how this stuff actually works.
The Broader Economic Implications
The industrial revolution dramatically reduced agricultural labor needs, yet we developed new types of work. Today, AI is beginning to do the same with intellectual and creative labor, which raises profound questions about our economic future.
If industrial automation has dramatically reduced blue-collar labor requirements, and AI begins to do the same with white-collar work, what remains for humans? Especially when the pace of robotics is considered, or the predictions that superintelligence is only a couple of years away. [6] This transition will cause significant disruption for a great many people. Will we evolve toward a post-scarcity society where work becomes optional, like Star Trek? Or will we end up in a dystopian scenario where most people struggle as gig workers with minimal security? I hope we can collectively push for the former.
Moving Forward Constructively
Rather than resisting technological evolution, I believe creators should embrace these new tools. Consider how you might use AI to enhance your own creative process or scale your production beyond what was previously possible. These technologies enable individuals to create at a scale that once required entire companies.
Think about it this way: if I write code to generate characters in the style of a particular artist, I’m creating a tool that enables artistic expression inspired by that artist’s aesthetic—not stealing their work. The artist still retains their original drawings, their reputation, and their ability to create. In fact, such tools might even increase appreciation for the original artist by introducing their style to new audiences. AI operates on this same principle, just with more sophisticated pattern recognition capabilities and at a much larger scale that might initially feel uncomfortable.
Innovation has always prompted resistance. The printing press was once feared as a threat to scribes and oral tradition. Photography was initially dismissed as “not real art” by painters. Yet each new tool ultimately expanded rather than contracted creative possibilities.
Instead of focusing on limitation, let’s consider how we can use AI to democratize creation and push the boundaries of human creativity forward. The past shows us that our creative capacity doesn’t diminish with new tools - it transforms and grows in unexpected ways.
Final Thoughts
The conversation around AI ethics will and should continue. These are complex issues without simple answers. What’s important is that we approach these discussions with nuance, avoiding absolutist positions that might stifle innovation while still being mindful of legitimate ethical concerns.
As AI systems continue to evolve, we’ll need to refine our understanding of them—not as simple tools like hammers, nor as conscious beings with full rights, but as something uniquely in between. This requires acknowledging their human-like information processing capabilities while maintaining clear lines of human accountability for their use and misuse.
I believe the most productive path forward is one that recognizes AI as something beyond traditional creative tools while holding humans responsible for how these systems are deployed. The fact that AI processes information in ways that mirror human cognition doesn’t absolve us of responsibility—it increases it. We must be thoughtful stewards of these powerful systems, ensuring they expand creative possibilities rather than restrict them.
Ultimately, we need frameworks that embrace AI’s potential while establishing clear accountability. The person who uses AI to generate potentially infringing content should bear responsibility, not the AI itself. Yet we should also recognize that these systems learn and transform information in ways fundamentally different from traditional tools—and our understanding of copyright and fair use may need to evolve accordingly.
Note: The aggregation of these ideas, and the general thoughts and structure, were human. An AI was used as an editor and writing coach.
References
1. Campbell, J. (1949). The Hero with a Thousand Faces. Pantheon Books.
2. Kleon, A. (2012). Steal Like an Artist: 10 Things Nobody Told You About Being Creative. Workman Publishing.
3. Lessig, L. (2004). Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. Penguin Press.
4. Vaughan, T. (2025). Vocabulary for Expressing Content Preferences for AI Training. https://datatracker.ietf.org/doc/draft-vaughan-aipref-vocab/
5. Jarvis, J. (2023). “Should AI Have the Right to Read?” The Newsroom Robots. https://www.newsroomrobots.com/p/should-ai-have-the-right-to-read and https://www.jason-grey.com/posts/2024/right-to-learn-conference/
6. Sam Altman (OpenAI) and Dario Amodei (Anthropic) have predicted general intelligence and/or superintelligence arriving between 2025 and 2027. Nick Bostrom was predicting a similar timeline - roughly now - as far back as 1998. https://nickbostrom.com/superintelligence