Visualizing and sonifying how an artificial ear hears music
Immersions is a system that lets us interact with and explore an audio-processing neural network, or what I call an “artificial ear”. There are two main aspects to this project: one is visual, the other is sonic. For the visualization, the neurons of the network are first laid out in 2D; then their activations are shown at every moment, depending on the input. To make audible how music sounds to the artificial ear, an optimization procedure generates sounds that specifically activate certain neurons in the network. For more information, please see the paper or the poster, as well as the visualization code (which you could use to visualize your own networks)!
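The sonic side follows the general idea of activation maximization: start from some input signal and follow the gradient that makes a chosen neuron respond more strongly. As a minimal sketch of that idea (a hypothetical toy one-layer network, not the actual Immersions model or its optimization procedure):

```python
import numpy as np

# Toy one-layer "ear": hidden = relu(W @ x). The weights, sizes and
# learning rate here are illustrative assumptions, not from the project.
rng = np.random.default_rng(0)
n_in, n_hidden = 64, 8
W = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)

def activation(x, unit):
    """Activation of one hidden neuron for input signal x."""
    return max(W[unit] @ x, 0.0)

def grad_wrt_input(x, unit):
    # d relu(w . x) / dx = w where the neuron is active, else 0
    return W[unit] if W[unit] @ x > 0 else np.zeros(n_in)

unit = 3                      # the neuron we want to "hear"
x0 = 0.05 * W[unit]           # tiny nudge so the neuron is not silent at the start
x = x0.copy()
lr = 0.1
for _ in range(100):
    x = x + lr * grad_wrt_input(x, unit)   # gradient ascent on the input
    x = np.clip(x, -1.0, 1.0)              # keep the "signal" in a valid range

# After optimization, the chosen neuron responds much more strongly to x
# than to the starting signal x0.
```

In the real system the input is a waveform or spectrogram and the gradients come from backpropagation through a deep network, but the principle is the same: the generated sound is whatever input drives the selected neurons hardest.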