For the exhibition Time After Time I am planning to create several displays that re-imagine an exhibition history. This display is an example of the Display module, a combination of software and hardware. It forms part of the long-term project UGLYD, which I described in an earlier post.
One way to explore a gallery’s exhibition history is to create a simulation of all the works exhibited, in the order of exhibition. I am focusing on the textual element of an artwork: the artwork title and other metadata. To create this simulation I wrote software that copies the movement of a gallery visitor who reads an artwork label, then walks to the next label, and so on. The display shows the artwork label, set in the Being Human font, while a voice reads the title. As it is all scripted, I imagine a robot following software code and learning the human language of artwork titles.
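The visitor simulation described above can be sketched as a simple loop. The actual work was scripted with Script Editor (AppleScript) and drives a real display and voice; this is only an illustrative Python sketch, with a hypothetical sample of metadata and a print statement standing in for the text-to-speech voice.

```python
import time

# Hypothetical sample records; the real work draws on the gallery's
# exhibition database of 5000+ works.
artworks = [
    {"title": "Untitled", "artist": "Unknown", "year": 1980},
    {"title": "Composition in Red", "artist": "A. Painter", "year": 1985},
]

def display_label(work):
    """Render an artwork label as text (the installation sets this
    in the Being Human font)."""
    return f'{work["title"]}\n{work["artist"]}, {work["year"]}'

def visit(works, pause=2.0):
    """Walk from label to label like a gallery visitor: show each
    label, 'read' the title aloud, then pause as if walking on."""
    for work in works:
        print(display_label(work))            # visualise the label
        print(f'[voice] "{work["title"]}"')   # stand-in for the voice
        time.sleep(pause)                     # the walk to the next label

visit(artworks, pause=0)
```

Spreading the pauses over 5000+ records is what stretches the full sequence to several hours.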
In one display, the simulation of the robotic gallery visitor looks up all of the 5000+ works shown at the John Hansard Gallery between 1980 and 2016. The full sequence takes about 8 hours. In another display the robot makes a selection of 185 titles based on keywords from the human rights declaration, then displays these titles and reads them.
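The keyword-based selection could work as a simple case-insensitive filter over the titles. A minimal sketch, with hypothetical keywords and titles (the real selection uses the declaration's vocabulary against the gallery's full list):

```python
def select_titles(titles, keywords):
    """Keep the titles that contain at least one keyword,
    ignoring case."""
    lowered = [k.lower() for k in keywords]
    return [t for t in titles if any(k in t.lower() for k in lowered)]

# Hypothetical keywords drawn from the human rights declaration.
keywords = ["freedom", "dignity", "rights", "equal"]
titles = ["Freedom of Movement", "Still Life", "Equal Measures", "Landscape"]

print(select_titles(titles, keywords))
# → ['Freedom of Movement', 'Equal Measures']
```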
Top Image: UGLyD_Display1_Image15 (2017) screenshot
Screen recording of the script and generated Audio Visual
Video: Unconsumable Global Luxury Dispersion @ John Hansard Gallery, Display1 from Walter van Rÿn on Vimeo.
Display 1 is part of the installation UGLyD by Walter van Rijn. In the installation it looks like a video (the visual part is on the right side of this video), but it is a live-generated Audio Visual driven by a script. The script creates a robotic procedure for looking at art: it goes to a database of the gallery’s exhibition history, reads the metadata of artwork 1, visualises it, reads artwork 2, and so on.
The script was created with Script Editor and UI Browser.