It may be hard to imagine, but Google’s next venture could be in the music industry. Using the latest AI and machine learning technologies, the company has built a music production tool called Magenta Studio. Best of all, it is available to the public for free.
In this article, I will examine Google's Magenta in detail.
Summary of the Tool
Magenta is “an open source research project exploring the role of machine learning in the creative process.” Their website provides the Magenta Studio product, code, and research papers that describe the technology.
Magenta Studio comes in two forms: as an Ableton Live plugin or as a standalone application. For those with access to Ableton Live, the plugin is the better choice, as it offers more flexibility than the standalone application.
All the tools offered by the Google AI team are free to use.
Try out Magenta Studio before venturing into Google's other offerings, since those other projects are experimental and may not be fully functional.
Both the Ableton Live plugin and the standalone application are easy to download and install.
The easiest way to get started is with the Generate application. Generate creates new melodies or drum patterns that are 4 bars long. Each pattern is called a "clip."
After creating the new clips, you can use the Continue application to extend a clip. Continue learns the existing pattern of a clip, and extends the clip in novel ways.
The Interpolate application can be used to merge two clips together.
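Under the hood, a VAE-based model like MusicVAE merges clips by encoding each one into a latent vector and then decoding points along the path between the two vectors. The sketch below illustrates only that interpolation step, with plain Python lists standing in for real latent vectors; the function name and toy vectors are assumptions for illustration, not Magenta's actual API.

```python
def interpolate_latents(z_a, z_b, num_steps):
    """Linearly interpolate between two latent vectors.

    Hypothetical sketch of VAE-style clip merging: each returned
    vector is a weighted blend of the two endpoints, which a real
    model would decode back into an intermediate clip.
    """
    out = []
    for i in range(num_steps):
        alpha = i / (num_steps - 1)  # 0.0 at z_a, 1.0 at z_b
        out.append([(1 - alpha) * a + alpha * b
                    for a, b in zip(z_a, z_b)])
    return out

# Toy latent vectors standing in for two encoded clips.
z_clip_a = [0.0, 0.0, 0.0, 0.0]
z_clip_b = [1.0, 1.0, 1.0, 1.0]
steps = interpolate_latents(z_clip_a, z_clip_b, 5)
# The middle step lies halfway between the two clips in latent space.
```

Decoding the midpoint typically yields a clip that shares characteristics of both inputs, which is what makes this approach musically interesting.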
The Groove application can be used to add a groove to an existing drum clip. The Drumify application adds a drum accompaniment to an existing melody.
Overall, Magenta Studio is easy to understand and use. Magenta is not intended to be a complete music production solution such as AIVA or Amper Music; rather, Magenta can be used as a tool during songwriting or music production.
For example, you can use Magenta during songwriting to generate new melodic ideas. You can also use the tool to get you started on the drum tracks.
The website contains detailed explanations of the underlying technology and training datasets used for the tools. The main technology, MusicVAE, is described in the Magenta team's research paper.
For the Generate, Interpolate, and Continue engines, the training dataset is 1.5 million MIDI files. The MIDI files are preprocessed into monophonic melodic lines or polyphonic drum patterns. There is no harmony element in the training dataset.
For the Groove and Drumify engines, the training dataset is 15 hours of a human drummer performing on a MIDI drum kit.
The notes in the dataset are quantized to 16th-note intervals.
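Quantizing to a 16th-note grid means snapping each note's onset to the nearest grid step, whose duration depends on the tempo. The helper below is a minimal sketch of that idea; the function name and interface are assumptions for illustration, not part of Magenta's preprocessing code.

```python
def quantize_to_sixteenths(onset_seconds, bpm):
    """Snap note onset times (in seconds) to a 16th-note grid.

    One 16th note lasts a quarter of a beat, and one beat lasts
    60 / bpm seconds. Returns the grid index of each onset.
    """
    sixteenth = 60.0 / bpm / 4.0  # duration of one 16th note
    return [round(t / sixteenth) for t in onset_seconds]

# At 120 BPM a 16th note lasts 0.125 s, so an onset played
# slightly late at 0.38 s snaps to grid step 3.
```

This kind of quantization discards the subtle timing deviations of a human performance, which is exactly why the Groove and Drumify models are trained on real drummer recordings instead.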
Machine Learning Architecture
MusicVAE is a recurrent variational autoencoder. The recurrent network (RNN) is made up of bidirectional LSTM cells. The variational autoencoder (VAE) pairs a standard encoder with a hierarchical decoder, which is composed of many smaller decoders, each generating a part of the output sequence.
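The data flow of a hierarchical decoder can be sketched in a few lines: a "conductor" network turns the single latent vector into one embedding per bar, and a smaller low-level decoder expands each bar embedding into per-step outputs. The toy functions below mirror only that structure; real MusicVAE uses LSTM networks at both levels, and all names here are illustrative assumptions.

```python
def conductor(z, num_bars):
    """Toy 'conductor': maps one latent vector to one embedding
    per bar (here, just the latent vector tagged with a bar index)."""
    return [[bar] + list(z) for bar in range(num_bars)]

def bar_decoder(embedding, steps_per_bar):
    """Toy low-level decoder: expands one bar embedding into a
    sequence of (bar, step) outputs."""
    return [(embedding[0], step) for step in range(steps_per_bar)]

def hierarchical_decode(z, num_bars=4, steps_per_bar=16):
    """MusicVAE-style hierarchical decoding, shape only: each bar
    is realized by its own decoder call, so no single RNN has to
    carry information across the full sequence."""
    output = []
    for emb in conductor(z, num_bars):
        output.extend(bar_decoder(emb, steps_per_bar))
    return output

seq = hierarchical_decode([0.1, 0.2])
# 4 bars x 16 sixteenth-note steps = 64 output steps
```

The key design point is visible in the loop: each bar decoder starts fresh from its own embedding, rather than inheriting a long, diluted hidden state.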
RNN-based designs suffer from data dilution over long sequences. The hierarchical decoder is one solution to this problem, but it may be less efficient than using attention mechanisms, or even replacing the entire design with a Transformer. Note that Google does have an updated Transformer-based design in the Piano Transformer, but this technology has not been integrated into Magenta Studio.
Google’s Magenta Studio is an impressive and user-friendly demonstration of AI music technology. The tool encapsulates the possibilities and limitations of using AI for music production. Magenta is not intended to be used as a complete music production solution, but rather, as a tool to aid in the process of songwriting and music production.
Future versions of Magenta Studio should be even more impressive, especially after they integrate the Transformer architecture into the product. In addition, the tool already has an Ableton Live plugin, and it is only a matter of time before Google creates a VST plugin that is compatible across all DAWs.
Google’s Magenta demonstrates that AI music technology may one day become an indispensable tool in a music producer’s toolkit.
About the Author
Wayne Cheng is the founder and AI mobile app developer at Audoir. His focus is on the use of generative deep learning to build songwriting tools. Prior to starting Audoir, he worked as a hardware design engineer for Silicon Valley startups and as an audio engineer for creative organizations. He has an MSEE from UC Davis and a Music Technology degree from Foothill College.