MUSIC
Here’s Everything Going On in AI and Music Right Now
A lot has been going on since we wrote about Stromae’s “Papaoutai” back in January. Here’s a brief rundown.
by Oluwatobi Afolabi for OkayAfrica
DJ Shimza recently flagged a “fake artist” who trained an AI model on snippets from his Instagram and rushed out a track before Shimza could even release his own. “I don’t know how we’re going to protect the music from such — interesting times ahead,” he wrote.
These incidents aren’t entirely new. Content ID fraud has existed for over a decade, but new tools are accelerating it.
Elsewhere, a young Rwandan artist played around with using AI to create music; it blew up and became his best-performing song. OkayAfrica talked to him about his conflicted feelings and the nuances of having artificial intelligence create his most famous work.
At the center of many of these disputes is YouTube’s Content ID system, introduced in the late 2000s to help rights holders identify and manage copyrighted material. By matching uploaded content against a database of registered audio and visual fingerprints, it allows owners to block, track, or monetize matches. In practice, however, it has long been criticized for enabling false or abusive claims.
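The matching step at the heart of systems like Content ID can be illustrated with a toy sketch. This is not YouTube's actual algorithm, only an assumption-laden simplification: each track is reduced to a set of hashed overlapping windows of its audio samples, and a claim fires when the share of an upload's hashes found in a registered reference crosses a threshold. (Real systems fingerprint spectral features and are far more robust.)

```python
# Toy sketch of fingerprint-style matching, loosely inspired by
# systems like Content ID. Real systems fingerprint audio spectra;
# here each "track" is just a list of sample values, hashed in
# overlapping windows, and a claim fires when the overlap ratio
# against a registered reference exceeds a threshold.
from hashlib import blake2b

def fingerprint(samples, window=5):
    """Hash every overlapping window of samples into a set."""
    prints = set()
    for i in range(len(samples) - window + 1):
        chunk = ",".join(f"{s:.3f}" for s in samples[i:i + window])
        prints.add(blake2b(chunk.encode(), digest_size=8).hexdigest())
    return prints

def match_score(upload, reference):
    """Fraction of the upload's fingerprints found in the reference."""
    up, ref = fingerprint(upload), fingerprint(reference)
    return len(up & ref) / len(up) if up else 0.0

original = [0.1, 0.4, -0.2, 0.7, 0.3, -0.5, 0.6, 0.2, -0.1, 0.8]
copy_with_intro = [0.0, 0.0] + original      # re-upload with padding added
unrelated = [0.9, -0.9, 0.5, -0.5, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]

print(match_score(copy_with_intro, original) > 0.5)   # True
print(match_score(unrelated, original) > 0.5)         # False
```

The sketch also shows why abuse is possible: whoever registers a fingerprint first is treated as the owner, so a bad actor who registers someone else's audio can trigger matches against the legitimate uploader.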
What’s changed is scale. AI-driven audio generation, scraping systems, and automated distribution pipelines now make it possible to mass-produce derivative content, register it, and stake claims with minimal effort. Where this once required industry access or manual coordination, it can now be done quickly and cheaply. The result is an industrialized form of fraud that exploits longstanding weaknesses in the system, leaving independent artists to clean up the mess.
In South Africa, the effects are already visible. AI-generated songs are making their way onto mainstream radio, quietly normalizing the technology. Figures like Rea Gopane have been open — if defiant — about using AI in their music, while others are experimenting more strategically, like William Lehong, who is using an AI artist persona, Olga, as part of his re-entry into the mainstream after nearly two decades away.
Outside the continent, similar disputes are playing out in the space between AI and music. A video surfaced earlier this month of American folk artist Murphy Campbell detailing an alarming ordeal: a company used her YouTube videos to train a machine learning model, uploaded the outputs to streaming platforms, and then issued copyright strikes against her original recordings.
“It’s the Wild West, and it can happen to anyone. I’m no longer making money on YouTube — Vydia is […] off of my own videos of me playing my own banjo, in my own backyard,” she says.
Campbell isn’t alone. In March, jazz musician Jason Moran was congratulated on a new EP he had never released. He doesn’t even upload his music to Spotify — “’cause it’s the worst,” he says in an Instagram reel. After a takedown was filed, the release was eventually removed from Spotify, but it remains on other platforms, including Amazon Music.
“What do you do when a malicious company acts like it’s giving music to your people… but doesn’t mind having possibly millions of fake artists out there siphoning off your style?” Moran asks.
Negotiations between Suno and the major labels, Universal Music Group and Sony Music Entertainment, have reportedly broken down, with insiders telling the Financial Times that there is “no path forward” under current terms. The labels want AI-generated music contained within closed platforms; Suno wants it to circulate freely across services like Spotify, TikTok, and YouTube.
The stakes are structural. If AI music floods open platforms, royalty pools thin out, catalog values drop, and labels lose control over supply — control they’ve spent decades consolidating.
This impasse follows a broader shift in strategy. In June 2024, the RIAA, on behalf of Universal, Sony, and Warner, sued Suno for allegedly training on copyrighted material without permission. By 2025, that hardline stance softened into licensing talks, suggesting the industry saw more value in monetization than litigation. That approach now appears to have hit a wall.
The breakdown underscores a larger issue: the AI music economy is still operating without a stable legal or ethical framework. Artist opposition is growing in parallel, including campaigns like “Say No to Suno,” which accuse AI companies of building their models on unlicensed creative work.
Meanwhile, platforms and tech companies are experimenting with partial fixes.
Sony is developing tools to trace original material within AI-generated tracks, potentially enabling revenue-sharing models. Spotify has partnered with major rights organizations to explore “responsible” AI tools for creators. Apple Music has introduced “Transparency Tags,” a metadata system that allows distributors to disclose whether AI was used in a song’s creation, from composition to artwork. For now, however, the system is optional and largely unverifiable.
Deezer has taken a different approach, investing in detection. The platform now receives over 75,000 fully AI-generated tracks, with synthetic content making up a significant share of daily uploads. Most of it is never meaningfully consumed, and a large portion is flagged as fraudulent, suggesting that the primary use case isn’t creativity, but royalty extraction.
Together, these approaches reflect two opposing standpoints: one assumes the industry will self-report honestly, while the other assumes it won’t.
Governments, by contrast, have moved cautiously.
In the UK, an initial proposal to allow AI training on copyrighted works through an opt-out system was met with overwhelming resistance from the creative sector. A March 2026 report ultimately declined to introduce new regulations, leaving key questions unresolved. The House of Lords was more direct, warning that AI poses a “clear and present danger” to creative industries.
Elsewhere, enforcement has been more decisive. In Sweden, an AI-generated track briefly topped Spotify’s charts before IFPI Sweden removed it after its origins were uncovered. The decision signaled a clear boundary: if a song is primarily AI-generated, it may not qualify for chart recognition.
Bandcamp has taken the firmest stance so far. In January 2026, it banned music generated wholly or substantially by AI, prohibited scraping from its catalog, and asked users to report violations. While governments and labels continue to debate frameworks, Bandcamp has acted unilaterally.
At the recent AI & African Music showcase at the University of the Witwatersrand, artists, engineers, and members of the media gathered to present prototypes exploring how AI might intersect with African musical traditions. The event, held on 16 April 2026, was the culmination of a six-month program pairing musicians with machine learning practitioners to collaboratively build instruments and archives. Projects ranged from AI-powered digital twins that enable real-time vocal interaction to instruments co-created with communities to preserve endangered musical forms to archival systems designed around consent and contributor royalties.
In contrast to Big Tech's direction, the showcase placed a clear emphasis on ethics: ownership and cultural context were treated as design principles that guided the entire development process.
We are witnessing an unprecedented restructuring of how music is made and distributed, often at the expense of the artists whose labor those systems were built to protect. Across all of this, it’s apparent that the technology is moving faster than the systems meant to govern it.