Meta’s new AI music model can take melodies or beats and turn them into songs
Meta Platforms, the owner of Facebook and Instagram, is among a growing number of contenders in the field of AI music generation, and on Tuesday (June 18), the company’s AI research division unveiled its latest step forward in that effort.
Meta’s Fundamental AI Research (FAIR) team gave the world its first glimpse of JASCO, a tool that can take melodies or beats and turn them into complete music tracks.
Meta says this functionality will give creators more control over the output of AI music tools.
JASCO – which stands for “Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation” – is comparable in quality to other AI tools, “while allowing finer and more flexible controls over the generated music,” Meta FAIR said in a blog post.
To demonstrate JASCO’s capabilities, Meta published a music clips page, where simple public domain songs are converted into music tracks.
For example, a melody from Maurice Ravel’s Boléro is transformed into a “driving ’80s pop song” and a “folk song with accordion and acoustic guitar.” A passage from Tchaikovsky’s Swan Lake becomes “a traditional Chinese song with guzheng, percussion, and bamboo flute,” and “an R&B song with deep bass, electric drums, and lead trumpet.”
Meta has long made a fair amount of its AI research available to the public. With JASCO, the company has released a research paper describing the work, and later this month it plans to release the inference code under the MIT license and the pre-trained JASCO model under a Creative Commons license. This means other AI developers will be able to use the model to build their own AI tools.
“As innovation in the field continues to move at a rapid pace, we believe that collaboration with the global AI community is more important than ever,” Meta FAIR said.
The latest update comes a year after the release of Meta’s MusicGen, a text-to-audio generator that can create 12-second tracks from simple text prompts.
That tool was trained on 20,000 hours of music licensed by Meta for AI training purposes, as well as 390,000 instrumental tracks from Shutterstock and Pond5.
MusicGen can also take music as an input, which, according to some, made it the first AI music tool capable of turning a piece of music into a fully developed song.
Meta’s JASCO comes on the heels of several other innovations in the AI music space that have been unveiled in recent days.
On the same day Meta unveiled JASCO, Google’s AI lab, DeepMind, unveiled a new video-to-audio (V2A) tool that can create soundtracks for videos. Users can enter text prompts to tell the tool what kind of sound they want in a video – or the tool can simply generate the sounds itself, based on what the video shows.
DeepMind described this as an important step toward creating video content using only AI tools, noting that most AI video generators can produce only silent video.
Last week, Stability AI – the company behind the popular AI art generator Stable Diffusion – released Stable Audio Open, a free, open-source model for creating audio clips up to 47 seconds long.
The tool – which is intended not to create songs, but to create sounds that can be used in songs or other applications – enables users to fine-tune the model with their own custom audio data.
For example, a drummer could train the model on recordings of their own drumming to produce new and unique beats in their own style.
These types of AI tools stand in contrast to AI music platforms like Udio and Suno, which create entire tracks from nothing more than text prompts.
Such tools are often trained on large amounts of data, and they have become a source of concern in the music industry due to suspicions that they were trained on copyrighted music without authorization.

Music Business Worldwide