© 2025 88.9 KETR
Public Radio for Northeast Texas

AI-generated music is here to stay. Will streaming services like Spotify label it?

Unlike other tech giants — including YouTube, Meta and TikTok — Spotify is not currently taking steps to label AI-generated content. (Jakub Porzycki / NurPhoto via Getty Images)

It sounds like a joke, or a bad episode of Black Mirror.

A band of four guys with shaggy hair released two albums' worth of generic psych-rock songs back-to-back. The songs ended up on Spotify users' Discover Weekly feeds, as well as on third-party playlists boasting hundreds of thousands of followers. Within a few weeks, the band's music had garnered millions of streams — except the band wasn't real. It was a "synthetic music project" created using artificial intelligence.

The controversy surrounding The Velvet Sundown spun out almost as quickly as it gained traction. A person falsely claiming to be part of the band spoke to media outlets, including Rolling Stone, about the AI usage — and then admitted to lying about the whole thing in an attempt to troll journalists. Later, the official Velvet Sundown page updated its Spotify biography to acknowledge that all the music was composed and voiced with AI.

"This isn't a trick – it's a mirror," reads the statement. "An ongoing artistic provocation designed to challenge the boundaries of authorship, identity, and the future of music itself in the age of AI."

Like every other technological advancement that has preceded it, artificial intelligence has caused some panic — and fascination — over how it might transform the music industry. Its practical uses run the gamut from helping human artists restore audio quality (like the surviving members of The Beatles did with John Lennon's old vocal demos on the Grammy-winning track "Now and Then") to full-blown deception à la The Velvet Sundown.

Spotify is the most popular streaming service globally, with 696 million users in more than 180 markets. In podcasts and interviews, Spotify CEO Daniel Ek has spoken about his optimism that AI will allow Spotify's algorithm to better match listeners with what they're looking for, ideally delivering "that magical thing that you didn't even know that you liked — better than you can do yourself," as he told The New York Post in May. (In 2023, Spotify rolled out an AI DJ that provides a mix of recommendations and commentary. The platform also has an AI tool for translating podcasts into different languages.)

Ek has also made it clear that AI should help human creators, not replace them. But unlike other tech giants — including YouTube, Meta and TikTok — Spotify is not currently taking steps to label AI-generated content. So why doesn't the world's largest streaming service alert users if what they're listening to was generated through AI? And what issues does that raise for both artists and their fans?

In response to questions about whether Spotify has considered implementing a detection or labeling system for music created with AI — as well as what challenges might arise from doing that — a Spotify spokesperson did not confirm or deny the possibility.

"Spotify doesn't police the tools artists use in their creative process. We believe artists and producers should be in control," a Spotify spokesperson told NPR in a written statement. "Our platform policies focus on how music is presented to listeners, and we actively work to protect against deception, impersonation, and spam. Content that misleads listeners, infringes on artists' rights, or abuses the platform will be penalized or taken down."

Generative AI and ghost artists

In 2023, Spotify and other platforms removed a song that used AI to clone the voices of Drake and The Weeknd without the artists' permission after Universal Music Group invoked copyright violations. But The Velvet Sundown's profile is still active; a new album was uploaded on July 14. Because the page is not pretending to be an existing artist, it's not technically violating any rules. But if one of its songs came up on a user's Discover Weekly — Spotify's automated playlists that rack up millions of streams every week — there would also be no warning that the voice they're listening to doesn't belong to a real person.

Liz Pelly, a journalist and the author of Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, says that transparency has been a major problem on streaming services for nearly a decade — and that users should get a clearer understanding of what they're consuming and where it's coming from.

"In order for users of these services to make informed decisions and in order to encourage a greater sense of media literacy on streaming, I do think that it's really important that services are doing everything they can to accurately label this material," Pelly says. "Whether it's a track that is on a streaming service that is fully made using generative AI, or it's a track that is being recommended to a user because of some sort of preexisting commercial deal that allows the streaming service to pay a lower royalty rate. "

Ek, Spotify's CEO, has commended AI for simplifying music production and lowering the barrier to entry for creators — but AI-generated music could also reduce licensing fees and overall payout costs for streaming services. Pelly says there's already a precedent of Spotify seeking out the cheapest content to serve users: in her reporting, she found that Spotify relies on background music created en masse by production companies to pad out its playlists. The rise of AI-generated music, she says, is a slippery slope for tech companies looking to boost streams and cut costs.

In response to questions about this practice and the financial implications, a Spotify spokesperson told NPR: "Spotify prioritizes listener satisfaction, and there is a demand for music to suit certain occasions or activities, including mood or background music. This kind of content represents a very small portion of the music available on our platform. Like all other music on Spotify, this music is licensed by rightsholders, and the terms of each agreement vary. Spotify doesn't dictate how artists present their work, including whether they publish their songs with real names, under a band name, or a pseudonym."

One platform is already doing it

In June, Deezer rolled out the first AI detection and tagging system to be used by a major music-streaming company. The platform, which was founded in Paris in 2007, had been closely following technological developments that have allowed AI models to produce more and more realistic-sounding songs.

Manuel Moussallam, head of research at Deezer, says his team spent two and a half years developing the tool. They also published a report acknowledging that the tool focuses primarily on waveform-based generators and can only detect songs created by certain tools, meaning detection can be bypassed.

"We started seeing [AI] content on the platform, and we were wondering if it corresponds to some kind of new musical scene, like a niche genre," Moussallam explains. "Or if there were also some kind of generational effect — like are young people going to switch to this kind of music?"

So far, he says, that hasn't been the case. The tool has identified that approximately 20% of the songs uploaded to Deezer are AI-generated — nearly 30,000 tracks a day. But much of it, Moussallam says, is essentially spam. Upon detection, Deezer removed AI-generated songs from automated and editorially curated playlists in order to gauge how many people were organically streaming this content. The company found that approximately 70% of the streams were fraudulent, meaning people created fake artists and used bots to generate fake streams in order to receive payouts; Deezer excludes those fraudulent streams from royalty payments. The company estimates that revenue dilution linked to AI-generated music — meaning legitimate streams of real people listening to this content — is less than 1%.
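The figures Moussallam cites imply a few numbers the article doesn't state outright. A back-of-the-envelope sketch, assuming the reported percentages are exact (they are approximations in the article):

```python
# Figures quoted in the article (approximate):
ai_share = 0.20            # ~20% of daily uploads flagged as AI-generated
ai_tracks_per_day = 30_000 # ~30,000 AI-generated tracks per day
fraud_share = 0.70         # ~70% of streams on AI tracks judged fraudulent

# Implied total daily uploads to the platform:
total_uploads = ai_tracks_per_day / ai_share
print(f"Implied daily uploads: {total_uploads:,.0f}")  # → 150,000

# Of every 1,000 streams on AI-generated tracks, how many would
# survive Deezer's fraud exclusion and count toward royalties?
legit_per_1000 = 1_000 * (1 - fraud_share)
print(f"Royalty-eligible streams per 1,000: {legit_per_1000:.0f}")  # → 300
```

In other words, the ~30,000 daily AI tracks imply roughly 150,000 total daily uploads, and only about 3 in 10 streams on that content would count toward payouts.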

"The only thing that we didn't really find is some kind of emergence of organic, consensual consumption of this content," Moussallam says. "It's quite striking. We have a huge increase in the amount of tracks that are AI-generated, and there is no increase in real people streaming this content."

Instead, he says, AI-generated content like The Velvet Sundown sees a spike in listenership when there's media attention, but it quickly subsides when listeners move on from the novelty.

Who's responsible?

Hany Farid, a professor at the University of California, Berkeley who studies digital forensics, says it's important to note that not all AI usage is explicitly bad. There are many instances in which artists can use artificial intelligence to boost or enhance their work — but both in and out of the music industry, transparency is key to AI usage.

"When I go to the grocery store, I can buy all kinds of food. Some of it is healthy for me; some of it is unhealthy. What the government has said is we are going to label food to tell you how healthy and unhealthy it is, how much sugar, how much sodium, how much fat," Farid says. "It's not a value judgment. We're not saying what you can and can't buy. We're simply informing you."

Sticking with the grocery analogy, Farid says the responsibility for those labels doesn't fall on the store — it falls on whoever is manufacturing the products. Similarly, on social media platforms, he says the burden to disclose AI usage should ideally be on the shoulders of whoever uploads a song or image. But because tech companies rely on user-generated content to sell ads against — and because more content equals more ad money — there aren't many incentives to enforce that disclosure from users or for the industry to self-police. Like with cigarette warnings or food labels, Farid says, the solution may come down to government regulation.

"There's responsibility from the government to the platforms, to the creators, to the consumers, to the tech industry," Farid says. "For example, you could say, somebody created music, but they used [an AI software tool]. Why isn't that tool adding a watermark in there? There's responsibility up and down the staff here."

AI models evolve at such a fast pace, Farid says, that it's difficult to give people guidance on how to identify deepfakes or other AI-generated content. But when it comes to listening to music, he and Pelly suggest going back to the basics.

"If music listeners are concerned with not accidentally finding themselves in a situation where they are listening to or supporting generative AI music, I would say the most direct thing to do is go straight to the source," Pelly says, "whether that be buying music directly from independent artists and independent record labels, getting recommendations not through these anonymous algorithmic news feeds, and investing in the networks of music culture that exist outside of the centers of power and the tech industry."

Copyright 2025 NPR

Isabella Gomez Sarmiento is a production assistant with Weekend Edition.