MUSIC

AI Can Make a Song, But Can It Make the Process?

Artificial Intelligence can generate songs in seconds, but listening closely reveals a growing gap between making music and describing it.

A graphic of an open AI hand with a waveform floating above it against a red background.

South African artist Credo V Daniels' album Still We Were Here has been gathering momentum since its release in early April, propelled by viral social media clips and a steady drumbeat of name-drops across the internet. It has accumulated over ten million streams on Spotify alone — a number that will only climb as more people discover his music. Songs like "Njalo Njalo," "Ngafa," and "Sedilaka" have listeners in a chokehold. They offer comfort, reassurance, a warm-and-fuzzy feeling that functions more as an escape than as a reckoning with the present. The songs sound fluent, well-produced, even.

I could tell that "Njalo Njalo" was made using an AI tool the moment I heard those drums. It comes from listening to too much lo-fi music over the past few years — most of it inundated with AI slop — and noticing the recurring motifs: the drums always have these perfectly timed fills at the end of every four-bar cycle; the guitars are neatly tucked in; the horns carry a synthetic uniformity, like they don't want to disturb the peace. It also comes from my own experiments with Suno, where I've been uploading some of my music to see what comes out on the other end (see an example here).

Credo V Daniels vehemently denies any extensive use of AI-generated music on the album. In a recent interview with Kaya FM, he said, "I think it's a tool. If you can use it, use it. But not fully [...] I only use it for backing vocals, the choirs" — a clear denial that the technology played any larger role. In a separate article, he was quoted as saying that he used AI to enhance what was already there. "I didn't prompt it to write a song [...] As a composer and producer, I enjoy having tools in my arsenal."

The internet was quick to arrive with receipts. One user pointed out that his voice sounds like Sjava's — if Sjava were a robot — neither inspired nor lived-in. In his response, the artist said he is a "classically trained pianist" who studied music production between 2008 and 2010. 

But say his claims are correct — that he did use certain elements to enhance his overall production. There is no other music from him on streaming services besides this album, which makes it impossible to track any progress or growth over time. The mostly hip-hop songs on his SoundCloud, recorded in 2015 and uploaded in 2023, bear no resemblance to what he is making now. Perhaps he spent the intervening period developing his sound, but the nature of the internet almost demands that artists share at least a glimpse of their process. What we have instead are videos of him miming songs in the studio, and a dismal showing at the recent Mzansi Soul event, where people further questioned his vocal abilities — one podcast host calling the performance "very questionable."

I reached out to Sfiso Atomza for a conversation. As guitarist and vocalist for The Muffinz and its de facto spokesperson, he is a knowledgeable sounding board on all things live music. As a producer, he began releasing his sonic blend, Gozonko, in 2023 and is currently working on a new iteration of the project that incorporates AI-generated elements. "I'm okay with AI, if there is some proof of this person making some music before this time," he says.

"The first premise is: are you able to make music? That sits outside of 'are you able to describe music.' Because then I feel like we go into the realm of critics, and I don't like critics much — the people who describe music in terms of what's good and what's bad. Without the hard work involved in creativity itself, we risk trivializing the creation process into hit-making."

"The question 'what is art' is far more interesting, because it's a lot harder to classify. The process is the art — the thing that you're going through, the uncertainty, the figuring out, the not being able to achieve something but believing it's possible, and still pushing through. I think that is the art. The things we produce after we have engaged in art are just the evidence — whether it's an album or an artwork. If you can modify your product and monetize it, then great. But the art is personal. You have to do that yourself."

"What AI risks doing — and I love it, and I think everyone should approach it with an open mind — is flooding the industry with products that bypass the art entirely. And I'll put it that way because the industry is a money-making space. You can get [Credo V Daniels'] music from an AI machine using a few words. We don't have to do much," he says.

AI's Digital Colonization and the Data Gap

Before Credo V Daniels was doing cartwheels at the boards with little to show in the way of translating his songs to a live setting, a little-known project called Nguni Machina arrived on streaming services in 2021. It was a proto-generative AI inquiry into the limits of the technology at the time, released by the multi-hyphenate creator Vulane Mthembu, of Big Fkn Gun acclaim.

Nguni Machina was exactly what Sfiso Atomza describes above: a vehicle Mthembu used to ask pertinent questions about artificial intelligence — from the practicalities of technologies that employ it, to the ethical concerns AI-related innovations often raise.

He said: "It's been an idea I've had for a long time. But the technology was way out of reach for people. The processing power required was reserved for research institutions such as MIT." The undertaking served as a device to test the waters and to show what's possible. The results, snippets of mostly classical music instruments, sound nothing like the detailed and uniform textures we hear today. In many ways, it was groundbreaking. 

Mthembu wanted to ask questions around accessibility and ethics, a field fraught with friction at the present moment. "Will artists, graphic designers, and photographers be replaced? There's definitely a digital colonization of AI happening at the moment. It's being forced upon us by the Western World, and it's being used by all of us when asking Siri questions, but it's not representing us. Black people are not in that space," he said.

By and large, Black people still aren't in the space in 2026. We train the models, sure, and engage with their dark underbelly to facilitate a smooth user experience. Recent efforts to fill the gap are modest in scope. UCT researchers have built MzansiLM, an AI language model covering all eleven of South Africa's official languages. The project was born from the recognition that, for speakers of most South African languages, popular AI assistants remain unreliable or simply wrong, largely because the textual datasets available in these languages for training are far fewer and smaller. On the continent's western edge, Google released WAXAL, a large-scale open speech dataset covering 21 Sub-Saharan African languages, developed in partnership with African institutions under a framework that ensures those partners retain ownership of the data they collected.

While meaningful, these gestures barely scratch the surface, and the lack of homegrown datasets shows in the work of artists such as Credo V Daniels, who, as much as they might try, can't localize the machine and therefore have to rely on the catalog it's trained on — most of which excludes the African continent entirely.