While social media and cryptocurrency dominated headlines of the recent past, Artificial Intelligence (AI) has taken center stage in 2023 and holds court for the world to consider its seemingly infinite possibilities.
There are more than a few practical uses of AI. The music industry has become increasingly aware of how AI helps with audio and music creation. Examples include:
- iZotope’s RX 9, which uses machine learning to repair and restore audio
- Google’s Magenta tools, which employ machine learning for music and melody generation
- AudioShake, which uses AI for audio stem separation
Most AI music algorithms are based on conventional machine learning models trained on openly licensed datasets of Musical Instrument Digital Interface (MIDI) music.
The generated music data is then synthesized into audio files that can be listened to almost instantly. Examples include:
- Jukebox by OpenAI
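As a toy illustration of the "train on MIDI data, then generate" pipeline described above, here is a minimal sketch of a first-order Markov chain over MIDI note numbers. This is a hypothetical teaching example, not the algorithm any of the products above actually use:

```python
import random
from collections import defaultdict

def train_markov(sequences):
    """Count note-to-note transitions in sequences of MIDI note numbers."""
    transitions = defaultdict(list)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:
            break  # dead end: no transition was ever observed from this note
        melody.append(rng.choice(choices))
    return melody

# Tiny "dataset": two melodies as MIDI note numbers (60 = middle C).
dataset = [[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]]
model = train_markov(dataset)
print(generate(model, start=60, length=8))
```

A real system would train a far larger model on thousands of MIDI files and render the result through software instruments, but the core idea is the same: learn statistical patterns from existing music, then sample new sequences from them.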
Six years ago, the AI-composed song Daddy’s Car amassed over 2.9 million YouTube views.
Drawbacks of AI Music
Artificial intelligence (AI) music algorithms still produce relatively low-quality music. In particular, AI struggles with two objective criteria:
Cannot Perfectly Replicate Actual Instruments
The technology used to create the music concepts is fairly advanced, but there is still a long way to go before software can perfectly replicate actual instruments.
The majority of AI music projects employ self-developed or open-source instrument sounds, yet these fall short of even the best software instruments, much less real instruments captured in a studio.
A professionally-produced song stands out from the competition due to its superior sonic quality, which has an impact on how the music sounds and feels.
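To see why synthesized instruments fall short, consider the simplest form of software instrument: additive synthesis, which sums a handful of sine-wave harmonics at fixed amplitudes. The sketch below (a hypothetical illustration, with assumed amplitude values) generates such a tone; real instruments have time-varying, slightly inharmonic partials that this approach cannot capture:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def additive_tone(freq, seconds, harmonics):
    """Sum a few harmonics at fixed amplitudes -- a crude 'software instrument'.

    Real instruments have time-varying, slightly inharmonic partials;
    that missing detail is exactly what makes synthesis sound artificial.
    """
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = sum(amp * math.sin(2 * math.pi * freq * k * t)
                for k, amp in harmonics.items())
        samples.append(s)
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]  # normalize to [-1, 1]

# Harmonic amplitudes loosely imitating a plucked string (assumed values).
tone = additive_tone(440.0, 0.5, {1: 1.0, 2: 0.5, 3: 0.25, 4: 0.125})

# Write a mono 16-bit WAV file so the tone can be auditioned.
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in tone))
```

Listening to the result makes the gap obvious: the tone is recognizably pitched but static and lifeless compared with a studio recording of the same note.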
Lacks Human Expression
Throughout the songwriting process, musicians use their instruments to express their feelings by choosing the melody and chord arrangements that best capture how they are feeling. The result is music that appeals to listeners on a human level.
AI generates melodies and chords that follow the rules of music theory, but it cannot match a desired mood without a substantial level of human involvement.
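That rule-following approach can be made concrete with a small sketch: the code below builds diatonic triads from a major scale, generating a chord progression purely "by the laws of music theory," with no expressive intent behind it. It is a hypothetical illustration, not any product's actual method:

```python
# Notes of the chromatic scale, starting from C.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone pattern of a major scale

def major_scale(root):
    """Return the seven notes of the major scale on the given root."""
    start = NOTES.index(root)
    return [NOTES[(start + step) % 12] for step in MAJOR_STEPS]

def diatonic_triad(scale, degree):
    """Build the triad on a scale degree (0-based) by stacking thirds."""
    return [scale[(degree + i) % 7] for i in (0, 2, 4)]

scale = major_scale("C")
# A I-V-vi-IV progression, chosen by rule rather than by feeling.
progression = [diatonic_triad(scale, d) for d in (0, 4, 5, 3)]
print(progression)
# -> [['C', 'E', 'G'], ['G', 'B', 'D'], ['A', 'C', 'E'], ['F', 'A', 'C']]
```

Every chord here is theoretically "correct," which is precisely the point: correctness alone says nothing about whether the progression conveys the mood a human songwriter is after.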
David Guetta Faked Eminem’s Voice With AI
AI music is not just about making instrumental music. Using machine learning to replicate and even duplicate human voices may one day bypass the need for a real singer. But not yet.
Let’s look at what French DJ and music producer David Guetta achieved when he claimed to have ‘cloned’ Eminem’s iconic voice:
Supertone, Tencent & HYBE All Use AI
Korea-based AI company Supertone also boasts that its AI technology can produce ‘a hyper-realistic and expressive voice [not] distinguishable from real humans.’
Things have advanced to a new level in China, where Tencent Music Entertainment (TME) claims to have produced and released more than 1,000 songs featuring artificial intelligence (AI) voices that closely resemble human singers.
In the three months leading up to the end of September, Tencent released the Lingyin Engine, which it describes as ‘patented voice synthesis technology.’ According to TME, this technology can ‘quickly and vividly mimic vocalists’ voices to make creative songs of any style and language.’
Meanwhile, K-pop company HYBE, home to popular artists like BTS, can use AI to capture an artist’s voice without live recordings and use it in games, audiobooks, or animation dubbing.
With its acquisition of Supertone in October for $32 million, HYBE has sharpened the focus of its AI-generated voice ambitions.
In fact, HYBE CEO Jiwon Park stated that, “HYBE plans to unveil new content and services to our fans by combining our content-creation capabilities with Supertone’s AI-based speaking and singing vocal synthesis technology.”
Watch how these TikTok users used a generative music program to create a catchy AI song in three hours.
Spotify could generate its own music using lawsuit-proof AI-generated samples, or sell them to producers, record labels, and other parties.
The potential for AI-generated hits is infinite given the wealth of information Spotify has on its customers and how they react to its music catalog.
Spotify submitted a patent application earlier this year for a plagiarism interface, intended to help produce content using an AI model.
If generated material successfully passes the plagiarism interface, Spotify could train its model on clones of existing material instead of directly using the original content. It could then compile more playlists to bolster the growing category of mood-related music, such as lo-fi beats, without licensing artist-created music.
Is AI now capable of independently generating full songs that stand up to the real thing? At this point, with human expression and live instrumentation being just two of many limitations, there is much more work to be done.
While AI is currently being used to produce stock music, it is not yet capable of independently producing and performing complete hit songs.