The Magenta Project, part of the Google Brain group, was unveiled last month. The project uses Google's open source artificial intelligence platform, TensorFlow, to create algorithms that can generate art and music.
Magenta has released its first piece in the form of a 90-second piano melody. The song, the project's first tangible product, was generated by a trained neural network primed with only four notes.
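The generation process described above — seed the model with a few notes, then let it predict one note at a time — can be sketched in miniature. This is a hypothetical illustration, not Magenta's actual code: a toy `toy_next_note` function stands in for the trained TensorFlow network, and the MIDI pitches in `PRIMER` are made up for the example.

```python
import random

# Four seed notes (MIDI pitches) standing in for the primer mentioned
# in the article; the real primer notes are not specified.
PRIMER = [60, 62, 64, 65]

def toy_next_note(history, rng):
    # Stand-in for a trained model's prediction: step a small interval
    # away from the most recent note. A real network would condition on
    # the whole history to score candidate notes.
    return history[-1] + rng.choice([-2, -1, 0, 1, 2])

def generate_melody(primer, length, seed=0):
    # Autoregressive loop: start from the primer, then repeatedly append
    # the model's next-note prediction until the melody is long enough.
    rng = random.Random(seed)
    melody = list(primer)
    while len(melody) < length:
        melody.append(toy_next_note(melody, rng))
    return melody

melody = generate_melody(PRIMER, length=32)
print(melody[:4], len(melody))
```

The key point the sketch captures is that the network is only given a starting prompt; every subsequent note is its own output fed back in as context.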
While the question of whether neural networks can write songs has been answered with a yes, another question remains: are the songs any good?
Well, the answer depends on the listener. The song might sound great to people who enjoy hearing a toddler smash the keys of an electronic keyboard; to everyone else, it sounds pretty bad.
The song generated by Google's Magenta may not be any good, but the breakthrough matters because it is one of the first steps in teaching machines how to create art.
In a blog post, Google research scientist Douglas Eck explained that Magenta has two goals. The first is to advance the development of algorithms that allow machines to learn how to create art and music. The second, longer-term goal is to build a community of machine learning researchers, coders and artists who support one another in developing machine-generated content.
As with any creative project, Magenta will release its work to gather feedback, and Google is inviting researchers and artists alike to join the project community by checking out its code and feeding data to Magenta, among other things.
This is not the first time Google has dived into a project that aims to generate art from machines. Before Magenta, there was DeepDream, a free visualization tool that lets users create their own images inspired by neural networks.