Nvidia has unveiled a new experimental generative AI model that the company describes as a “Swiss army knife for audio.” Fugatto (Foundational Generative Audio Transformer Opus 1) uses text prompts to generate new audio or modify existing music, voice, and sound files. Developers from around the world took part in creating the model, which Nvidia says strengthened its “multi-accent and multilingual capabilities.”

Image source: NVIDIA

“We wanted to create a model that understands and generates sound the way humans do,” said Rafael Valle, a project participant and manager of applied audio research at Nvidia. The company has proposed several scenarios in which Fugatto could prove useful:

  • Music producers can quickly prototype a song and then refine it easily, trying out different styles, voices and instruments.
  • Fugatto can be used to build language-learning tools that let users choose the voice best suited to them.
  • Video game developers can use it to create variations of pre-recorded assets, adapting sounds to player choices and in-game events.

The researchers claim that the model, with some additional fine-tuning, can also perform tasks that were not part of its prior training. The model can combine separate instructions, for example, generating speech with a certain intonation and accent, or the sound of birds singing during a thunderstorm. The model can also generate sounds that change over time, such as the sound of an approaching rainstorm or a departing train.

Fugatto is not the first generative AI technology that can create sounds from text prompts. Meta previously released a similar open-source AI model. Google offers its own AI text-to-music tool, MusicLM, which can be accessed through the company’s AI Test Kitchen website.
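Fugatto itself is not publicly available, so for readers curious what text-prompted audio generation looks like in practice, here is a minimal sketch using Meta’s open-source AudioCraft library (the MusicGen model), one of the comparable tools mentioned above. The prompt text and output file name are illustrative only, and this is not Nvidia’s API.

```python
# Minimal text-to-music sketch using Meta's open-source AudioCraft (MusicGen).
# Requires: pip install audiocraft (plus a working PyTorch install).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a small pretrained checkpoint; larger variants trade speed for quality.
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # generate 8 seconds of audio

# A free-form text prompt, similar in spirit to the prompts Fugatto accepts.
prompts = ["lo-fi hip hop beat with soft piano and vinyl crackle"]  # illustrative
wav = model.generate(prompts)  # tensor of shape [batch, channels, samples]

# Write the result to disk as a loudness-normalized WAV file.
audio_write("demo_track", wav[0].cpu(), model.sample_rate, strategy="loudness")
```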

Nvidia has not yet provided public access to Fugatto and has declined to comment on its release plans.
