I was immediately drawn to Paul Chan’s Untitled #11. This piece, and others of his, seem to echo forms of alternative musical notation, as if each were a distillation of, or instructions for, some time-based work. One naturally attempts to “read” the piece from left to right and top to bottom, even though, as a 2D image, it is really more like a map than a narrative or a score.
I began attempting to “solve” the musical puzzle by treating the image as actual notation of sorts: reading its pixels and shapes and having the computer generate noises from what it found.
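As a rough illustration of what “reading pixels as sound” might look like (this is a minimal sketch, not the actual code behind the piece), one simple mapping takes a scanline of grayscale pixel values and turns brightness into pitch, rendering each pixel as a short sine tone:

```python
import numpy as np

def pixels_to_frequencies(row, f_min=110.0, f_max=880.0):
    """Map 8-bit grayscale pixel values to frequencies in Hz.

    Brightness 0 maps to f_min, 255 to f_max, on a logarithmic
    (perceptually even) scale.
    """
    norm = np.asarray(row, dtype=float) / 255.0
    return f_min * (f_max / f_min) ** norm

def synthesize(freqs, sr=44100, note_dur=0.1):
    """Render each frequency as a short sine tone, concatenated in time."""
    t = np.linspace(0.0, note_dur, int(sr * note_dur), endpoint=False)
    tones = [np.sin(2 * np.pi * f * t) for f in freqs]
    return np.concatenate(tones)

# A hypothetical "scanline" of pixels, dark to bright.
row = [0, 64, 128, 192, 255]
audio = synthesize(pixels_to_frequencies(row))
```

Even a mapping this naive already produces the kind of “almost musical” output described below: the image’s structure audibly shapes the result, without the result being a composition in any conventional sense.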
As the generated sounds became “almost musical”, I tried different approaches: sampling the image on a grid from left to right, as you might read it, or finding “significant” visual structures within it and reinterpreting them as MIDI information, coercing the 2D image into data that could play out in time. Components and videos of the Self Justification Engine can be seen below: reading the image from left to right, using Hough line detection, or pasting the image back onto itself based on itself.
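To give a sense of the “visual structures to MIDI” idea, here is a hedged sketch (not the Self Justification Engine itself) of how detected line segments, such as those returned by OpenCV’s `cv2.HoughLinesP`, could become note events: horizontal position becomes onset time, segment length becomes duration, and vertical position becomes pitch:

```python
def lines_to_midi_events(lines, img_width, img_height,
                         duration_s=10.0, pitch_lo=36, pitch_hi=84):
    """Turn line segments into (onset_s, duration_s, midi_pitch) events.

    `lines` is a list of (x1, y1, x2, y2) segments, e.g. as detected by
    cv2.HoughLinesP. Left-to-right position maps to time; height maps
    to pitch, with the top of the image giving the highest notes.
    """
    events = []
    for x1, y1, x2, y2 in lines:
        (xa, ya), (xb, yb) = sorted([(x1, y1), (x2, y2)])
        onset = duration_s * xa / img_width
        # A segment's horizontal extent sets how long the note sustains.
        dur = max(duration_s * (xb - xa) / img_width, 0.05)
        # The segment's average height chooses the pitch.
        y_mid = (ya + yb) / 2
        pitch = round(pitch_hi - (pitch_hi - pitch_lo) * y_mid / img_height)
        events.append((onset, dur, pitch))
    return sorted(events)
```

A long horizontal line near the top of the image becomes a sustained high note starting at a time proportional to its left edge; the particular ranges and pitch mapping here are illustrative assumptions, not the parameters used in the piece.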
It is "in two movements". The first movement is ten minutes long, and is the result of throwing seven different images at tools, software and approaches that attempt to turn image into sound. The second movement is the result of feeding the output of that first process back into itself, limiting itself to what was already created, a sort of compositional compression. The almost totally hare-brained and arbitrary parameters that were produced sort of average themselves out, according to themselves.