TECHNOLOGY, INTERNET TRENDS, GAMING

Meta Creates AI-Driven Music Generator

By auroraoddi

In an effort to keep up with Google, Meta has unveiled its own artificial intelligence (AI)-powered music generation platform. Even more remarkably, it has launched it as an open-source project. Aware of the importance of collaborative innovation and open access to knowledge, Meta has decided to share its AI music generator with the community at large.

Meta Music Generator

This move seeks to encourage participation in, and co-development of, advanced music technologies by enabling artists, developers and music enthusiasts to take full advantage of this powerful music-creation tool.

With this bold step, Meta demonstrates its commitment to the advancement of artificial intelligence applied to music and its desire to foster collaboration and progress in the industry.

MusicGen

Known as MusicGen, Meta’s music generation tool has captured attention. It can transform text descriptions, such as “An ’80s pop song with energetic drums and subtle synth pads in the background,” into roughly 12-second audio snippets in a matter of seconds.

In addition, MusicGen can be “guided” by reference audio, such as an existing song, allowing it to follow both a textual description and a melody.
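As a hedged sketch of what melody-guided generation looks like in practice, the snippet below uses Meta’s open-source `audiocraft` Python library and its melody-capable `facebook/musicgen-melody` checkpoint; the `generate_with_chroma` call follows audiocraft’s published interface, but treat the exact argument names as assumptions that may vary between versions.

```python
def seconds_to_samples(duration_s, sample_rate_hz):
    """MusicGen renders audio at 32 kHz, so a 12 s clip is 384,000 samples."""
    return int(duration_s * sample_rate_hz)


def generate_from_melody(description, melody_path, duration=12):
    """Render a clip that follows both a text prompt and a reference melody.

    Imports are kept local: audiocraft pulls in torch and, as noted below,
    really wants a GPU with around 16 GB of memory.
    """
    import torchaudio
    from audiocraft.models import MusicGen

    melody, sr = torchaudio.load(melody_path)       # [channels, samples]
    model = MusicGen.get_pretrained("facebook/musicgen-melody")
    model.set_generation_params(duration=duration)  # seconds of audio per clip
    # Condition on the reference melody's chromagram plus the text description.
    wav = model.generate_with_chroma([description], melody[None], sr)
    return wav                                      # [batch, channels, samples]
```

The model does not copy the reference audio; it extracts a chroma (pitch-class) representation and uses it as a loose harmonic guide alongside the text.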

According to Meta, MusicGen was trained on a staggering 20,000 hours of music, including 10,000 high-quality licensed tracks and 390,000 instrument-only tracks drawn from ShutterStock and Pond5, two well-known stock media libraries.

Although Meta has not shared the code used to train the model, it has made pre-trained models available so that anyone with the right hardware (primarily a GPU with around 16 GB of memory) can run them and enjoy the experience of generating music with MusicGen.
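Running the released checkpoints is straightforward with the `audiocraft` package (`pip install audiocraft`). The following is a minimal sketch, assuming the `facebook/musicgen-small` checkpoint name and audiocraft’s documented `MusicGen.get_pretrained` / `generate` / `audio_write` interface; exact calls may differ in your installed version.

```python
def build_prompts(*descriptions):
    """Normalize one or more text descriptions into the list MusicGen expects."""
    return [d.strip() for d in descriptions if d and d.strip()]


def generate_clips(prompts, duration=12, model_name="facebook/musicgen-small"):
    """Download a pre-trained checkpoint and render one clip per prompt.

    Heavy imports are kept local: audiocraft pulls in torch and benefits
    greatly from a GPU with ~16 GB of memory.
    """
    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    model = MusicGen.get_pretrained(model_name)
    model.set_generation_params(duration=duration)  # seconds of audio per clip
    wavs = model.generate(prompts)                  # [batch, channels, samples]
    for prompt, wav in zip(prompts, wavs):
        # Write one loudness-normalized WAV per prompt, named after the prompt.
        audio_write(prompt[:40].replace(" ", "_"), wav.cpu(),
                    model.sample_rate, strategy="loudness")


if __name__ == "__main__":
    generate_clips(build_prompts(
        "An 80s pop song with energetic drums and subtle synth pads"
    ))
```

Larger checkpoints (medium, large) trade generation speed and memory for quality, which is where the ~16 GB GPU figure becomes relevant.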

How Does It Work?

So how well does MusicGen actually work?

The generated songs are reasonably melodic, at least when given basic cues such as “ambient chiptunes.” To my mind, they are on par with, and may even slightly surpass, the results of Google’s AI-based music generator, MusicLM. While these creations won’t win any awards, they do manage to produce coherent and pleasing melodies.

To further test MusicGen’s limits, it was given a more complex prompt in an attempt to challenge its capabilities: “Lo-fi slow BPM electro chill with organic samples.” Surprisingly, MusicGen outperformed MusicLM in terms of musical coherence, generating something that would fit easily into the musical environment of Lofi Girl, known for its relaxing, atmospheric beats. This shows that MusicGen can adapt to more detailed descriptions and produce impressive results within those specific parameters.

Conclusions

It is clear that generative music is improving rapidly (as can be seen in tools such as Riffusion, Dance Diffusion and OpenAI’s Jukebox). However, significant unresolved ethical and legal issues remain. Platforms such as MusicGen learn from existing music to produce similar results, raising concerns among some artists and users of generative artificial intelligence.

Increasingly, AI-generated tracks that mimic familiar sounds with surprising, or at least passable, authenticity have gone viral. Record labels have taken swift action to flag these tracks to streaming services, citing intellectual property concerns, and have generally obtained favorable results. However, it remains unclear whether deepfake music infringes the copyrights of artists, labels and other rights holders.

Legal guidance on this issue is likely to arrive soon: several lawsuits related to music-generating AI are making their way through the courts, including cases over the rights of artists whose work was used to train AI systems without their knowledge or consent. These cases could help set precedents and establish guidelines regarding liability and rights in the field of AI-generated music.