Sunday, December 22, 2024

AI starts a music-making revolution and plenty of noise about ethics and royalties

The Daily Observer London Desk: Reporter John Furner

Tchaikovsky has been dead for more than a century. Yet his influence shows up three minutes into “Vertigo,” a song that uses artificial intelligence to fuse a melody from singer-songwriter Kemi Sulola with sounds generated by NoiseBandNet, a computer model from doctoral student Adrian Barahona-Ríos at the University of York in the U.K. and associate professor of music engineering technology Tom Collins at the University of Miami’s Frost School of Music.

The model used Tchaikovsky’s “Souvenir de Florence” string sextet, ambient noises and other “training” clips to generate audio samples based on musical ideas from Ms. Sulola, resulting in a unique sonic landscape.

The song won third place at the 2023 international AI Song Contest.

“It’s a good example of technology, or innovation in technology, just kind of being a catalyst for creativity,” Mr. Collins said. “We would not have written a song with Kemi Sulola — and she wouldn’t have written one with us — were it not for this interest around AI.”

Artificial intelligence, which allows machines to receive inputs, learn and perform human-like tasks, is making a splash in health care, education and multiple economic sectors. It also has seismic implications for music-making and record labels, posing existential questions about the meaning of creativity and whether machines are enhancing or replacing human inspiration.

Due to rapid advancements in AI technology, the web is chock full of programs that can clone the Beatles’ John Lennon, Nirvana’s Kurt Cobain or other well-known voices or spit out completed songs with a few text prompts, challenging the copyright landscape and sparking mixed emotions in listeners who are amused by new possibilities but skittish about what comes next.

“Music’s important. AI is changing that relationship. We need to navigate that carefully,” said Martin Clancy, an Ireland-based expert who’s worked on chart-topping songs and is the founding chairman of the IEEE Global AI Ethics Arts Committee.

Online generators that can produce fully baked songs on their own are one aspect of AI in music that has exploded in the last year or two, alongside the buzz about ChatGPT, a popular chatbot that allows users to generate written pieces.

Other AI and machine-learning programs in music include “tone transfer” apps that allow you to sing a melody and have it come back in the form of, say, a trumpet instead of your voice.
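
For the technically curious, the basic idea behind tone transfer can be sketched in a few lines of Python with the open-source librosa library: track the pitch and loudness of a sung recording, then use those contours to drive a simple synthesizer. Commercial tone-transfer apps rely on trained neural synthesizers rather than anything this crude, and the file names below are made up for the example.

import numpy as np
import librosa
import soundfile as sf

# Load a sung melody (hypothetical file name).
y, sr = librosa.load("sung_melody.wav", sr=22050)

# Frame-wise fundamental frequency (pYIN tracker) and a rough loudness contour.
hop = 512
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"),
    sr=sr, hop_length=hop,
)
rms = librosa.feature.rms(y=y, hop_length=hop)[0]

n = min(len(f0), len(rms))
f0 = np.nan_to_num(f0[:n])   # unvoiced frames become 0 Hz (silence)
rms = rms[:n]

# Upsample the contours to audio rate and additively synthesize a handful
# of harmonics, giving a crude, horn-like rendition of the same melody.
t_frames = np.arange(n) * hop / sr
t_audio = np.arange(len(y)) / sr
f0_a = np.interp(t_audio, t_frames, f0)
amp_a = np.interp(t_audio, t_frames, rms)
phase = 2 * np.pi * np.cumsum(f0_a) / sr
out = sum(0.5 ** k * np.sin(k * phase) for k in range(1, 6)) * amp_a

sf.write("melody_as_horn.wav", out / (np.abs(out).max() + 1e-9), sr)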

Additional programs help you mix and master demo tapes by relying on machines to scan them and let you know if there should be a bit more vocals here or a little less drums there.

Even those steeped in the AI music phenomenon find it hard to keep up.

“There’s a point in each semester where I say something isn’t possible yet and then some student finds that exact thing has been released to the public,” said Jason Palamara, an assistant professor of music technology at Indiana University, Indianapolis.

Some AI programs can fill a so-called skills gap by allowing creators with a musical idea to express it fully. It’s one thing to have a rough melody or harmonic idea, yet it’s another to execute it if you don’t have the instrumental skills, studio time or ability to enlist an ensemble.

“That’s where I think the really exciting stuff is already happening,” Mr. Collins said, using the example of someone who wants to add a bossa nova beat to a song but needs a program to tell them how because it’s not part of their musical palette. “That’s what I can do with the generative AI that I couldn’t do before.”

Other AI advances in music are geared toward having fun. Suno AI’s “Chirp” app, for instance, can spit out a song within minutes after you type in a few instructions.

“If you did all of the 10 sales points for re-introducing the ukulele to market now in North America, we’d see a correlation between the sales pitch for that and for AI music,” said Mr. Clancy, referring to the four-string instrument that gives many people an entry point to instrument-playing. “It’s affordable. It’s fun. That’s the important part about these tools. Like they’re really, really good fun, and they’re really easy to use.”

To underscore this point, Mr. Clancy asked Suno AI to write a song about the drafting of this article. You can listen to it here.

Creators in the fast-growing field of music generators tend to emphasize the need to democratize the music-making process in explaining why they’re in the field. One generator, Loudly, says its growing team is “made up of musicians, creatives and techies who deeply believe that the magic of music creation should be accessible to everyone.”

Voice cloning is another popular front in AI music production. For instance, there is a popular clip on the internet of Soundgarden’s “Black Hole Sun” sung by Cobain instead of fellow grunge icon Chris Cornell, who recorded the original. The Beatles broke up decades ago but released a new song, “Now and Then,” using an old demo and AI to produce a clearer version of the late John Lennon’s voice.

Voice cloning is a fun, if somewhat eerie, experiment for listeners, yet it poses serious questions for the music industry. One record label faced a major test case earlier this year when a user named “ghostwriter” uploaded a duet from rapper Drake and pop star the Weeknd titled “Heart on My Sleeve.” The issue, of course, is that neither artist was involved in the song. It was crafted with voice-cloning AI.

Universal Music Group sprang into action and got the track pulled from streaming services, saying it violated copyright law. Yet the episode raised questions about which aspects of a song are controlled by the labels, by the artists themselves or by the creators of AI content.

“Does Drake own the sound of his voice, or [does] just the record label he’s signed to, UMG, own the sound of his voice? Or is this an original composition that is fair use?” Rick Beato, an instrumentalist and producer, said in an AI segment on his popular YouTube channel about music. “People are not going to stop using AI. They’re going to use it more and more and more. The only question is: What are the labels going to do about it, what are the artists going to do about it and what are the fans going to do about it?”

In the Drake-Weeknd case, Universal said the “training of generative AI using our artists’ music” is a breach of copyright. Yet some artists are embracing AI, so long as they get a cut of proceeds.

“I’ll split 50% royalties on any successful AI-generated song that uses my voice,” electronic music producer Grimes tweeted earlier this year.

The U.S. Copyright Office offered some clarity in March about works produced primarily by a machine. It said it would not register those works.

“When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship,” the guidance said. “As a result, that material is not protected by copyright and must be disclaimed in a registration application.”

The Biden administration, the European Union and other governments are rushing to catch up with AI and harness its benefits while controlling its potentially adverse impacts on society. They are also wading through copyright and other matters of law.

Even if they devise new legislation now, the rules likely will not go into effect for years. The EU, for instance, recently passed a sweeping AI law, but it won’t take effect until 2025.

“That’s forever in this space, which means that all we’re left with is our ethical decision-making,” Mr. Clancy said.

For now, the AI-generated music landscape is a bit like the Wild West. Many AI-generated songs are hokey or just not very good. If there is a glut of AI-generated music, then listeners might require a curator to filter through it all and find what’s worth their time.

There are also thorny questions about whether using the voices of artists like Cobain, who killed himself in 1994, is in good taste, or what is gained by attempting to generate AI music in his voice.

“If we train a model on Nirvana, and then we say, ‘Give me a new track by Nirvana,’ we’re not going to get a new track from Nirvana, we’re going to get a rehash of ‘Nevermind,’ ‘In Utero’ and ‘Bleach,’” Mr. Palamara said, referring to the albums the band released between 1989 and 1993. “It’s not the same thing as, like, if Kurt Cobain was alive today. What would he do? Who knows what he would do?”

At a Senate hearing in November, Mr. Beato testified that there should be an “AI Music dataset license” so that listeners know what kind of music an AI platform was trained on, and so that copyright holders and artists can be compensated fairly when their work contributes to what it generates.

Mr. Palamara worries that as AI tools get more straightforward to use, musicians might generally lose the ability to make music at a virtuosic level. Already, some singers rely on pitch-correction technologies such as Auto-Tune.

“The new students coming in the door know how to use these technologies and never really have to strive to sing in tune, so it makes it harder to justify that they should learn how,” he said. “Some might argue that maybe this just means the ability to sing in tune is less important in today’s world, which might be true. But you can’t argue that humankind is being improved by the erosion of certain abilities we’ve been honing for centuries.”
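
For a sense of what pitch correction actually does, here is a deliberately simple offline sketch in Python using librosa: estimate the sung pitch, measure how far it drifts from the nearest equal-tempered semitone, and shift the whole take by the median error. Real-time tools such as Auto-Tune correct each note separately; this global version, and the file names, are assumptions made only for illustration.

import numpy as np
import librosa
import soundfile as sf

# Load a vocal take (hypothetical file name).
y, sr = librosa.load("vocal_take.wav", sr=None)

# Estimate the sung pitch frame by frame with the pYIN tracker.
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Deviation of each voiced frame from the nearest semitone, in semitones.
midi = librosa.hz_to_midi(f0[voiced])
error = midi - np.round(midi)

# Apply one global correction (real pitch correctors work note by note).
correction = float(-np.median(error))
y_tuned = librosa.effects.pitch_shift(y, sr=sr, n_steps=correction)

sf.write("vocal_take_tuned.wav", y_tuned, sr)
print(f"Applied a {correction:+.2f}-semitone correction")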

There is also concern that machines could replace jingle writers or other jobs that musicians — some of whom are scrapping for gigs already — rely on for income.

At the same time, AI is opening new opportunities for musicians and arts organizations.

Lithuanian composer Mantautas Krukauskas and Latvian composer Māris Kupčs produced for the city of Vilnius in September what is being called the first AI-generated opera.

Only the words for the 17th-century piece, “Andromeda,” survived, but the modern-day composers restored the opera using an AI system called Composer’s Assistant.

The model was developed by Martin Malandro, an associate professor of mathematics at Sam Houston State University, and can fill in melody, harmony, and percussion that fit certain prompts. The European composers trained the model on the opera’s libretto and surviving music from the Baroque-era composer, Marco Scacchi, and his contemporaries to produce an opera that might have sounded like the original, even if it wasn’t the exact score.
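
Composer’s Assistant itself is a trained neural network, but the underlying “infilling” idea can be illustrated with a deliberately naive toy in Python: given a melody with missing notes, fill each gap by sampling from the note-to-note transitions observed in surviving material. The pitches below are invented for the example and have nothing to do with the actual “Andromeda” score.

import random
from collections import defaultdict

# Surviving material and a melody with gaps, as MIDI note numbers.
# (Invented values -- purely illustrative.)
surviving = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]
melody_with_gaps = [60, 62, None, 65, None, 65, 64, None, 60]

# Learn first-order note-to-note transitions from the surviving music.
transitions = defaultdict(list)
for a, b in zip(surviving, surviving[1:]):
    transitions[a].append(b)

def infill(melody):
    """Replace each None with a note sampled from the learned transitions."""
    out = list(melody)
    for i, note in enumerate(out):
        if note is None:
            prev = out[i - 1] if i > 0 else random.choice(surviving)
            out[i] = random.choice(transitions.get(prev) or surviving)
    return out

print(infill(melody_with_gaps))  # e.g. [60, 62, 64, 65, 67, 65, 64, 62, 60]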

Mr. Malandro said he wasn’t directly involved in the restoration, though he is credited as the contributor of the AI model. “My understanding is that the opera was sold out and received well at its premiere,” he said.

A British arts nonprofit, Youth Music, conducted a survey and found that 63% of people ages 16 to 24 say they are embracing AI to assist in their creative process. Interest wanes with age: only 19% of those 55 and older said they would be likely to use it.

Mr. Palamara said mixing and mastering are areas ripe for AI use. He took some of the “awful” demos his high school band made in the 1990s and ran them through a program from iZotope that analyzed the recordings and found ways to make them better.
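
A stripped-down version of that kind of analysis can be sketched in Python: measure how a demo’s energy is spread across low, mid and high frequency bands and compare it with a rough target balance. Tools like iZotope’s do far more (loudness, dynamics, stereo image, genre-aware targets), and the band edges and target shares below are assumptions invented for the example.

import numpy as np
import librosa

# Load a demo mix (hypothetical file name).
y, sr = librosa.load("garage_demo.wav", sr=None, mono=True)

# Average magnitude spectrum of the whole track.
S = np.abs(librosa.stft(y))
freqs = librosa.fft_frequencies(sr=sr)   # matches stft's default n_fft=2048
spectrum = S.mean(axis=1)

# Share of the spectral energy in three coarse bands (edges chosen arbitrarily).
bands = {
    "low (<250 Hz)": (0, 250),
    "mid (250-4000 Hz)": (250, 4000),
    "high (>4000 Hz)": (4000, np.inf),
}
total = spectrum.sum()
shares = {
    name: spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
    for name, (lo, hi) in bands.items()
}

# Compare against a made-up "target" balance and print rough advice.
targets = {"low (<250 Hz)": 0.35, "mid (250-4000 Hz)": 0.50, "high (>4000 Hz)": 0.15}
for name, share in shares.items():
    diff = share - targets[name]
    verdict = "boost" if diff < -0.05 else "cut" if diff > 0.05 else "about right"
    print(f"{name}: {share:.0%} of energy (target ~{targets[name]:.0%}) -> {verdict}")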

Experts say such programs can also take over some of the grunt work for music professionals, letting them focus on one project while AI assists with the assignments they need to pay the bills and meet tight deadlines.

AI is “definitely going to change our musicianship,” said Mr. Collins. “But I think change in musicianship has been happening for centuries.”
