
How I Used AI To Transform Myself From A Female Dance Artist To An All-Male Post-Punk Band And What That Means For Other Musicians


When you click on the Spotify profile of Intelligent Band Machine you will see an image of three young men staring moodily back into the camera. Their profile confirms that they are a “British band”, “influenced by the post-punk scene” and trying to capture the spirit of bands like The Cure “while carving out their own unique sound”. When you listen to their music you might be reminded of Joy Division's Ian Curtis.

If you dig a little deeper and read about them on their record label's page you will find that Cameron is the lead singer and his musical tastes were shaped by the concerts he attended at Nottingham's Rock City nightclub. Tyler, the drummer, was indeed inspired by The Cure, as well as U2, and The Smiths, while guitarist, Antonio, blends his Italian mother's love of classic Italian folk songs with his British father's passion for The Beatles and The Rolling Stones.

What these profiles don't say is that Intelligent Band Machine is not real, at least not in the human sense. And I should know, because I created them.

I used a range of generative artificial intelligence (GenAI) tools, as well as my skills as a professional songwriter and sound engineer, to make their debut album, Welcome to NTU, and released it on my dedicated AI record label, XRMeta Records, in May 2025.

You might ask why a self-releasing singer-songwriter and music producer like me would create an artificial band. As well as being a musician, I'm an academic with a background in computer science, researching how GenAI can be used for music.

I had reservations about these tools and how they might affect me as a musician. I had heard about various AI controversies, like “fake” Drake, and about artists like Grimes embracing GenAI in 2023. But I was also intrigued by the possibilities.

Over 100 million people have tried Suno, an AI music generation platform that can create songs with vocals and instrumentation from simple text prompts. More than 100 million tracks have been created using the Mubert API, which allows streaming to platforms like YouTube, TikTok, Twitch and Instagram. And according to Deezer, 28% of the music uploaded to its platform is fully AI-generated.

It was time for me to investigate what these tools could do. This is the story of how I experimented with GenAI and was transformed from a dance artist to a post-punk soft rock band.

GenAI has changed everything

In my early days of songwriting, one of the first pieces of equipment I bought was a Panasonic RQ-2745, a small, slim portable cassette recorder that allowed me to record rough drafts of vocals.

When cheap products like the Sony CFS-W30 boombox began to incorporate double cassette decks, I could overdub songs and add choruses or instruments like flute or guitar at home. If I wanted a quality recording, I had to book a recording studio. I became an expert at splicing tape to remove vocal parts from a recording or to fix tape jams.

Cutting and taping became cutting and pasting as I experimented with the very early free digital music sequencers that were included on a disk I found on the cover of a PC magazine. I felt liberated when sequencers like Cubase, Pro Tools, and Logic allowed high quality recordings to be produced at home. This, along with the significant reduction in the cost of studio equipment, led to the emergence of the bedroom producer and the proliferation of the 808 sound. This deep, booming bassline can be heard in hits like It's Tricky by Run-DMC, Emergency Room by Rihanna, and Drunk in Love by Beyoncé.

Digital distribution and social media then paved the way for self-releasing independent artists like me to communicate directly with fans, sell music, and bypass record labels.

Yet during all of these changes musicians still needed the skills and knowledge to create their songs. Like many musicians I honed my skills over several years, learning to play the guitar, flute and piano, and developing sound engineering skills. Even when AI-powered tools began to be incorporated into digital audio workstations, a musician's skill and knowledge were still needed to use these tools effectively.

Being able to create music from text prompts changed this.

Not since the introduction of music streaming services in the late 1990s has there been such a dramatic shift in music composition and listening technologies. Now non-musicians can create studio quality music in minutes without the extensive training that I had, and without having to buy instruments or studio equipment.

Now anyone can do this, and I wanted to see what that meant for my own music.

I typically produce RnB/neo soul, nu-jazz and dance music, although I can write songs for multiple genres of music. For the experiment, I wanted to try a genre that I do not usually produce music for.


I tested about 60 different GenAI tools and platforms. These included standalone tools that focus on one task, like MIDI generation (musical data that can be played back on a keyboard or music sequencer). I also tried AI music studios: platforms with user-friendly interfaces that combine a range of AI tools to support lyric, music, image and video creation.

Suno and Udio were two of the best platforms. They can generate songs with complex vocal melodies and harmonies across a range of genres, with the best outputs being difficult to distinguish from what human musicians can create. Both Telisha “Nikki” Jones and music mogul Timbaland are said to have used Suno to create music for their AI-generated artists.

In June 2025, Timbaland announced the signing of his AI artist TaTa to his dedicated AI record label, Stage Zero. In September 2025 Jones was reported to have signed a US$3 million (about £2.3 million) deal with Hallwood Media for her AI-generated artist Xania Monet.

At the time of my experiment in March/April 2025, both Suno and Udio had issues, such as silence gaps, tempo changes, inconsistent vocal quality, and variations in genre. Sometimes the voice might change within the song. There was limited control in terms of editing, and the audio quality could vary within a single track or across a series of songs.

After trying several GenAI music platforms I decided to use Udio due to the quality of its output and its favourable terms and conditions at that time. Taking inspiration from pop-rock and post-punk bands like Joy Division and The Cure, I started the journey towards creating a new persona.

Using GenAI to produce one or two good songs was quite simple. Producing an album of 14 songs that sounded as if they were played by the same band was more challenging, particularly generating the same male voice and musical style for each song.

The songs were either far too similar to each other or had other issues such as the voice changing, or the instruments sounding too different. A careful listen to the songs in Unfolded by the AI artist Xania Monet will reveal similar inconsistencies. For example, you can hear a difference in the voice that is generated for the first song, This Aint No Tryout, compared to Back When Love Was Real.

GenAI can't write (decent) lyrics

My first task was to create the lyrics. I generated about 1,000 songs using Udio and found repeated words and phrases in the lyrics like “neon”, “whisper”, and “we are, we are, we are”, appearing both within and across the two user accounts I created. Themes like darkness, shadows, and light were also repeated within the lyrics for a significant number of songs.

GenAI just couldn't write lyrics with the complexity or playfulness I needed, so I wrote the lyrics for the album myself, using a semi-autobiographical narrative. This allowed me to maintain a story across the album: from arriving at Nottingham Trent University and settling into student accommodation, to experiencing university life, graduating and leaving.

I could interweave current affairs, like the closure of Nottingham's Victoria Centre Market in the song Goodbye Vicky Market. I included lines that referenced Nottingham's historical figures, like Alan Sillitoe, who wrote The Loneliness of the Long-Distance Runner, and the author D.H. Lawrence, in the song Books.

After writing the lyrics I generated the music. There were issues with prompt adherence: I tested prompts of different lengths, and in some cases they were partly or wholly ignored. I might write a prompt asking for one genre and a different genre would be produced.

There were also issues with the synthetic voice pronouncing some of the lyrics. For example, it could not pronounce “NTU” or “Sillitoe” and I had to rewrite some of the lyrics phonetically or edit the audio to get the correct pronunciation for certain words.

I relied on my sound engineering skills: extending the outputs, editing, mixing, remixing, and manually recording vocals in Cubase to achieve a coherent final mix. This took a significant amount of time. In fact, editing the Udio outputs took so much time that it would have been easier to recreate the music myself. I can write a song in ten minutes, and I sometimes record myself freestyling lyrics for an entire song directly in Cubase, so this was frustrating.

I encountered similar issues with prompt adherence when generating images and video. When using Kling AI to create images of the band members, I followed its prompt engineering guide. However, I had to generate hundreds of images and edit them with external tools to achieve the final band photos.

Generating video was equally tricky. One way to create a video is to upload a photo, which becomes the first frame; the rest of the video is generated from the prompt. When I uploaded Cameron's profile image to Kling AI, the initial frames of the ten-second video resembled him, but by the end he had often morphed into someone else. This happened frequently when generating video.

Prompts for camera instructions, such as zoom and pan, were frequently ignored. I also had to edit out scenes with other problems, such as the appearance of extra fingers or an additional leg on the band members.

All this wasn't cheap either. With 8,000 Kling AI credits at a cost of US$64.99 (about £50), I could generate about 40 ten-second videos, but many were unusable.

Music generation is cheaper. Paying between US$24 and US$30 (roughly £18-£24) for a monthly subscription might allow a user to create between 2,000 and 3,000 songs, depending on how the “credits” are used. I was very surprised to discover how quickly these song credits can be consumed. Every error or song that didn't suit my taste still cost credits.
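As a rough guide, the figures above work out as follows (a back-of-the-envelope sketch in Python, using only the numbers quoted in this article; actual pricing, credit consumption and exchange rates vary by platform and over time):

```python
# Back-of-the-envelope costs based on the figures quoted above.
# Real pricing and credit consumption vary by platform and over time.

KLING_COST_USD = 64.99        # price of 8,000 Kling AI credits
KLING_CREDITS = 8_000
VIDEOS_PER_BUNDLE = 40        # roughly 40 ten-second videos per bundle

credits_per_video = KLING_CREDITS / VIDEOS_PER_BUNDLE    # ~200 credits
cost_per_video = KLING_COST_USD / VIDEOS_PER_BUNDLE      # ~US$1.62

# Music: US$24-$30 a month for roughly 2,000-3,000 songs.
cost_per_song_best = 24 / 3_000      # ~US$0.008 per song
cost_per_song_worst = 30 / 2_000     # ~US$0.015 per song

print(f"Video: ~{credits_per_video:.0f} credits, ~${cost_per_video:.2f} each")
print(f"Song: ~${cost_per_song_best:.3f} to ~${cost_per_song_worst:.3f} each")
```

In other words, a usable video costs two orders of magnitude more than a generated song, before counting the many outputs that have to be discarded.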

Eventually, after generating thousands of songs and hundreds of images and videos, using tools like Duck to create the band's biographies, and spending many hours editing the outputs, Cameron, Tyler and Antonio began to emerge as the band.

Something unexpected happened

I have always been passionate about creating my own music. As much as I love writing songs, the poor royalty payouts I was receiving had become disheartening. A song I recorded in 2001 and released in 2011 called Only Heaven Can Compare was streamed about 1 million times in France during 2024 but I only received about £21 in royalties.

Prior to streaming, had my song been downloaded by just 10,000 people, I would have been paid about £6,900 (69p per download). Artists like Kate Nash have raised concerns about the poor royalty payouts to musicians, citing her £500,000 payout for over 100 million plays of her song “Foundations”.
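Putting those payouts side by side makes the gap clear (again a rough sketch, using only the approximate figures quoted above rather than official royalty rates):

```python
# Comparing the per-unit payouts quoted above. These are the article's
# approximate figures, not official royalty rates.

per_stream = 21.0 / 1_000_000        # ~£0.000021 per stream
per_download = 0.69                  # 69p per download

download_income = 10_000 * per_download      # £6,900 from 10,000 downloads

# How many streams would it take to earn the same £6,900?
equivalent_streams = download_income / per_stream

print(f"Per stream: £{per_stream:.6f}")
print(f"Streams to match 10,000 downloads: {equivalent_streams:,.0f}")
# ~329 million streams
```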

But as I created the band's album something unexpected started to happen. I began to enjoy creating music again. The frustrations of using GenAI were balanced by wonder and curiosity.

At times Udio was able to generate vocals that were so realistic I could hardly believe they were created by an AI model. There were moments when I laughed, when I was really moved, and even had chills when I heard some of the songs.

Lyrics that once lay dormant in multiple lever arch files on my bookshelf began to find new life through these generative tools, allowing me to rapidly test them across multiple genres.

I decided to take this experiment further.

After carefully selecting a set of songs I had written many years ago, I created a new persona, Jake Davy Smith. For his 14-track album, I'll Be Right Here, released on November 22, 2025, I used Suno's v5 model to generate studio quality music that matched my original vision.

Suno's extensive editing tools allow users to upload vocals, create a cover song, and edit the music, lyrics, or voice with greater precision than its earlier models. This helped me recreate my original songs almost exactly. The track Calling is an example of a rock ballad I wrote years ago, recorded, but never released.

Conflicting emotions

Reflecting on this experiment, I found myself with conflicting views about using GenAI. These tools are fast and affordable (in some cases, completely free). They can produce instant results. I now have tools that I can use to quickly reimagine my old songs.

I can use multiple personas to bring my lyrics to life. I am Priscilla Angelique. I am Intelligent Band Machine. I am Jake Davy Smith. I am Moombahtman 25, a male African American moombahton artist who combines hip hop with Latin American beats, and I have many more personas.

I am a “multiple persona musician”, or MPM, a term I've created to define my new musical identity. Musicians having alter egos isn't new, but GenAI has completely changed how this is done.

However, there's another side to this. Human musicians are now having to compete with algorithms capable of producing high quality music at scale – as well as with each other.

These tools are improving rapidly, and the issues I experienced when using Udio to create the album for Intelligent Band Machine in March/April 2025 have already been addressed in Suno's v5 model. It is now easier to create a persona with a consistent voice. Users can upload their own songs and also create cover versions of their songs.

Creating the album for Intelligent Band Machine took about one month, and there were multiple issues with trying to create consistent-sounding, high quality AI-generated songs. I spent hours reviewing thousands of outputs and then more time editing the final set of curated songs in Cubase.

My experience was very different when I created the album for Jake Davy Smith. I used lyrics I had already written, generated between five and 20 versions of each song, and spent far less time editing them. The process was faster; however, there were still some issues. Changes in Jake's voice occurred, though they were less frequent and easier to correct. There were also problems with pronunciation, but I could now quickly regenerate the audio. In essence, what had previously taken a month now took only a week.

Ethical issues and data collection

Yet beneath this lies a further internal conflict, related to the data used to train these AI models or, as music journalist Richard Smirke describes it, “the largest IP theft in human history”. It is this issue that has made a technology that ought to have been celebrated as one of the biggest achievements in decades one of the most contested instead.

Chatbots like ChatGPT, estimated to have 1 billion users worldwide, have been described by the linguist and activist Noam Chomsky as both “marvels of machine learning” and the “banality of evil”. Image generators like OpenAI's DALL-E have also come under fire. Critics like Ted Chiang challenge whether AI can make art, and other commentators have criticised the lack of cultural diversity in image generation.

In addition to this, in 2024 the UK government announced it was considering an exception to copyright law that would allow industry to use copyrighted works for AI training without compensating the creators. This led to protests. More than 1,000 musicians, including Kate Bush, Annie Lennox, Damon Albarn and The Clash, released a silent album called Is This What We Want? in protest against unauthorised AI training.

Elton John and Paul McCartney also voiced their opposition to changes in copyright law that would benefit AI companies. The mystery about whether a band called The Velvet Sundown was AI-generated added fuel to the fire and sparked further debate during the summer of 2025.

Yet AI companies have been winning, or at least partially winning, court cases. In November 2025 Getty Images “lost its claim for secondary infringement of copyright” against Stability AI. Other AI companies are making deals, including Udio and Suno's recent deals with music companies. However, more alternative platforms are emerging: Klay is negotiating with the big labels prior to launching, and Soundraw only uses music created in-house for AI training.

So GenAI is here to stay, and musicians will need to adapt. Library music, background music, and music for social media or film can easily be created with AI. However, there are risks: that similar music may be generated for other users, that any uploaded songs may be used as training data, and that these tools may inadvertently generate something that breaches someone else's IP.

One way for musicians to safely use GenAI is by training models using their own data, as YACHT did when they used their back catalogue of songs as training data for a new album. In this way musicians can have full control over the outputs. This is something I will be exploring for the next stage of my research.
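What that looks like in practice depends entirely on the model being trained, but the first step is always turning your own recordings into a training-ready dataset. Below is a minimal, hypothetical sketch of that preparation step, assuming a local folder of WAV files and using PyTorch's torchaudio library; it is not YACHT's method or any particular platform's pipeline, and the folder name and settings are assumptions for illustration.

```python
# Hypothetical data-preparation step for training a model on your own
# back catalogue: load each WAV, resample to a common rate, mix to mono,
# and save mel spectrograms. Folder name and settings are assumptions.
from pathlib import Path

import torch
import torchaudio

TARGET_SR = 22_050   # a common rate for music-model training (assumption)
N_MELS = 128

mel = torchaudio.transforms.MelSpectrogram(sample_rate=TARGET_SR, n_mels=N_MELS)

for wav_path in sorted(Path("my_back_catalogue").glob("*.wav")):
    waveform, sr = torchaudio.load(wav_path)          # (channels, samples)
    if sr != TARGET_SR:
        waveform = torchaudio.transforms.Resample(sr, TARGET_SR)(waveform)
    mono = waveform.mean(dim=0, keepdim=True)          # mix down to mono
    features = mel(mono)                               # (1, n_mels, frames)
    torch.save(features, wav_path.with_suffix(".pt"))
    print(f"{wav_path.name}: {features.shape[-1]} frames")
```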

What AI can't do

My transformation has been anything but straightforward. It has been marked by the deep frustration I encountered when initially using these tools, an ongoing conflict about how these tools are trained, and moments of genuine amazement. The albums I created may be imperfect, but they are a clear departure from my usual style and show how GenAI can support musical creativity.

Financially, the albums are unlikely to recoup the cost of creating them, as independent musicians may need hundreds of millions of streams to earn a decent income from music. Even a few million streams of the songs will barely cover the various fees for music, image and video generation of around £140. Merchandise, licensing, sync deals and other revenue streams will likely remain important sources of income for musicians, whether they are human or AI-generated.
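That claim is easy to sanity-check against the per-stream rate I observed earlier (a rough calculation; actual rates will differ by service and territory):

```python
# Sanity check: at the rate I observed (about £21 per 1 million streams),
# how many streams would it take to recoup roughly £140 in tool fees?

rate_per_million = 21.0     # £ per million streams (observed above)
fees = 140.0                # approximate generation fees quoted above

streams_needed = fees / rate_per_million * 1_000_000
print(f"~{streams_needed:,.0f} streams to cover £{fees:.0f}")   # ~6.7 million
```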

On the legal side, one possible way forward is for AI companies to make open-source versions of their models freely available for offline use. Some already have, but for those that haven't, it seems fair that if they have used our data to build these systems, they should allow broader access to the models themselves.

New technologies might change how music is produced. We have gone from clapping to drumming, and from using drum machines in recording studios to generating “new” sounds with AI. Yet now that I have completed these experiments, I realise that one thing remains the same.

Whether I am cutting tape using scissors, cutting and pasting in a sequencer like Cubase, or regenerating parts in an AI music studio like Suno, human creativity is still an essential part of the process. Using GenAI was transformative, yet it was my creative decisions that shaped the songs, the albums, the avatars for my personas, their biographies, and the overall vision. This is something that AI cannot do – at least, not for now.

The Conversation

Institution: Nottingham Trent University