The Evolution of Music in the Age of AI
Back in 1995, when the internet was just starting to find its feet as a mass medium, I had the privilege of co-founding Apple Music Group. My team and I were at the forefront of a digital revolution that spurred the creation of groundbreaking platforms like iTunes and the iTunes Music Store (now known as Apple Music). Over the following years, alongside my ventures with MyPlay and as CEO of eMusic, I was driven by a singular goal: to forge new ways for artists to connect with their fans. However, I also witnessed firsthand how transformations in technology—such as the rise of MP3s, music streaming, and platforms like Napster—were reshaping copyright norms and the music distribution landscape.
During this transformative period, the music industry was locked in heated battles with technology companies over piracy and copyright. Many artists found themselves on the sidelines, looking to industry giants like Metallica and Dr. Dre to lead the fight. Fast forward to today, and the landscape has changed dramatically: technology is far more accessible, and artists have become remarkably tech-savvy, carving their own paths through the music world.
While the rise of digital music distribution has opened a treasure trove of opportunities for new artists, creating careers where many once found barriers, it has also produced an unequal market. The multitude of legacy music retailers that once filled the space has narrowed to a handful of dominant streaming services: Apple, Spotify, and Amazon. And although icons like Taylor Swift continue to stand up for artists' rights (she famously pulled her catalog from Spotify in protest of its royalty practices), the emergence of generative AI brings a fresh wave of challenges.
The debate has evolved beyond how music is paid for or distributed; now it is about how music is created. There is a looming concern that human artistry could be reduced to mere fodder for synthetic content generated by large language models (LLMs), models that may have been trained on those very artists' work without their approval.
Is Unfair Also Illegal?
As we draw closer to a new digital frontier, it’s important to recognize the broad spectrum of creators—writers, bloggers, vloggers, musicians, and journalists—whose work fuels these AI models. It’s evident that generating synthetic content without compensating the original creators is ethically questionable. Yet, the legal ramifications are still murky.
Developers of LLMs argue that scraping data from the internet constitutes “fair use.” However, many creators dispute this, leading to federal lawsuits that could define the future of content creation and copyrights. For instance, in New York, The New York Times has filed a lawsuit against Microsoft and OpenAI for alleged copyright infringement, while in Massachusetts, major record labels are pursuing legal action against Suno AI, an innovative generative music tool capable of producing compositions on demand.
Research from Epoch AI suggests that the stock of human-generated text available for training LLMs could be exhausted within a few years, and that estimate does not even account for music and video. Tech giants like OpenAI and Google are reportedly racing to amass whatever data they can, often without permission. As internet users, we may feel our data has been commodified; for artists, the situation is particularly concerning, because their unique creative work is what is at stake.
Where Do We Go From Here?
If courts decide that training LLMs on publicly available data is not fair use, companies will need to seek permission—and likely pay creators—to use their work. It’s feasible to envision a marketplace where creators could opt in to share their work in exchange for a share of the profits generated by the AI models.
If, on the other hand, courts rule that LLMs may use creators' works without restriction, artists will face difficult choices. Protecting their creations may mean putting new content behind paywalls, reducing its discoverability and limiting their ability to reach wider audiences.
This situation invites further exploration of how emerging technologies, such as blockchain, could provide a buffer against AI encroachment on creative work. Blockchain, best known for its role in cryptocurrency, underpins a vision of a decentralized internet often called web3. By decentralizing control over data and transactions, it shifts power back toward individuals, whether consumers or creators, and suggests a new social contract for the digital realm.
With blockchain technology, artists can securely timestamp their creations, ensuring proper attribution of ownership while granting them control over how their work is utilized. This could protect against unauthorized usage and enable transparent revenue-sharing for new creations.
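To make the timestamping idea concrete, here is a minimal Python sketch of the fingerprint-and-claim pattern that on-chain registration schemes generally follow: hash the work, record the hash with an artist name and timestamp, and later verify any copy against that claim. The function names and claim fields here are hypothetical illustrations; a real service would anchor the claim in a blockchain transaction rather than hold it in a local dictionary.

```python
import hashlib
import time

def fingerprint(work_bytes: bytes) -> str:
    """Return a SHA-256 digest that uniquely identifies the work."""
    return hashlib.sha256(work_bytes).hexdigest()

def make_claim(work_bytes: bytes, artist: str, timestamp: float = None) -> dict:
    """Build an attribution claim: the hash (not the work itself),
    the artist's name, and a timestamp. In a real system this record
    would be anchored on a public ledger, proving the work existed
    in this form at this time."""
    return {
        "content_hash": fingerprint(work_bytes),
        "artist": artist,
        "timestamp": timestamp if timestamp is not None else time.time(),
    }

def verify(work_bytes: bytes, claim: dict) -> bool:
    """Anyone holding a copy of the work can check it against the claim."""
    return fingerprint(work_bytes) == claim["content_hash"]

song = b"demo recording bytes"
claim = make_claim(song, artist="Example Artist", timestamp=1700000000.0)
print(verify(song, claim))         # True: the file matches the anchored claim
print(verify(song + b"!", claim))  # False: any alteration breaks the match
```

Because only the hash is published, the artist proves priority and integrity without exposing the work itself; revenue-sharing logic would sit on top of claims like these.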
Generative AI doesn’t have to be viewed as a threat; when used innovatively within the blockchain framework, it can become a compelling tool for artists. The visionary artist Grimes has exemplified this by creating Elf.Tech, an AI platform that enables users to transform their vocals into her signature sound, allowing them to sell songs while splitting royalties.
This synergy between AI and blockchain could herald a new era of creativity, where artists harness technology to enhance their craft while safeguarding their intellectual property. The current landscape echoes the challenges music creators faced 20 years ago, representing an opportunity for adaptation and growth in the face of change.
As we navigate the ongoing debate around AI and copyright, blockchain holds promise for empowering creators, ensuring that they can shape a future that duly rewards their contributions.