Here are the different technologies involved in Open Air:

An iOS app was programmed by Antimodular Research to record and listen to voice messages. The app works best when connected to a local Wi-Fi network called “Open Air”, which will be available for free in the parkway. The app asks for your location so that the system can use GPS to direct all the lights at you during playback; the app also notifies you when your message is about to go live.
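To aim the lights at a participant, the reported GPS fix has to be expressed in the control room's local coordinate frame. A minimal sketch of that conversion, using an equirectangular approximation that is accurate over a parkway-sized area (the origin and sample coordinates below are hypothetical, not the project's actual calibration values):

```python
import math

def gps_to_local(lat, lon, origin_lat, origin_lon):
    """Convert a GPS fix (degrees) to east/north offsets in meters
    from a local origin, via an equirectangular approximation."""
    r = 6371000.0  # mean Earth radius in meters
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    east = r * d_lon * math.cos(math.radians(origin_lat))
    north = r * d_lat
    return east, north

# Hypothetical origin and participant position, a few hundred meters apart.
east, north = gps_to_local(39.9656, -75.1810, 39.9626, -75.1652)
print(f"east={east:.0f} m, north={north:.0f} m")
```

Over distances this short, the error of the flat-Earth approximation is far smaller than the beam width of a searchlight, so a full geodetic transform is unnecessary.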

Users reach the Open Air servers, which run in the cloud and are linked to the project control room. The web servers in the Open Air infrastructure use a Linux/Nginx/Apache stack and are load-balanced by the excellent HAProxy system. The servers themselves run the HEAP web platform from Turbulent to manage message uploads, audio conversion and storage. The queue-scheduler system, built on ZeroMQ, then communicates program updates and scheduling to mobile clients in real time.
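The message lifecycle behind that queue can be sketched as a simple state machine: uploaded, converted, scheduled, then live, with subscribed clients notified at each transition. The toy in-process scheduler below stands in for the real ZeroMQ-based system (callbacks replace ZeroMQ sockets, and the state names are assumptions):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Message:
    user: str
    audio_file: str
    status: str = "uploaded"   # uploaded -> converted -> scheduled -> live

class Scheduler:
    """Toy stand-in for the ZeroMQ-based queue scheduler: messages are
    converted, queued, and announced to subscribers in order."""
    def __init__(self):
        self.queue = deque()
        self.subscribers = []          # callbacks standing in for mobile clients

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def submit(self, msg):
        msg.status = "converted"       # audio conversion happens on upload
        self.queue.append(msg)
        msg.status = "scheduled"
        self._broadcast(f"scheduled:{msg.user}")

    def play_next(self):
        msg = self.queue.popleft()
        msg.status = "live"            # the app can now notify the participant
        self._broadcast(f"live:{msg.user}")
        return msg

    def _broadcast(self, event):
        for cb in self.subscribers:
            cb(event)

events = []
s = Scheduler()
s.subscribe(events.append)
s.submit(Message("alice", "alice.wav"))
played = s.play_next()
print(events)  # ['scheduled:alice', 'live:alice']
```

In the real deployment, the broadcast step would be a ZeroMQ publish so that every connected phone learns the schedule at the same moment.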

To participate online, the user needs the latest Flash plugin installed, which allows the system to record audio from his or her computer's microphone. The Google Earth plugin is also used to visualize the lights in 3D over the parkway.

At the project control room, custom software analyzes the voice messages for frequency, volume and intonation. The resulting data is mapped to 24 virtual searchlights in an OpenGL 3D environment, which converts the mapping into DMX signals. This application knows the exact 3D position of each searchlight in the parkway thanks to a calibration performed with GPS or traditional surveying. The DMX application sends the appropriate DMX commands over a wireless link to the 24 Syncrolite SX10K searchlights to direct them to the desired location. The SX10Ks are new-generation 10 kW xenon robotic fixtures that produce a very collimated light beam.
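The two halves of that pipeline can be sketched briefly: a crude audio analysis (RMS for volume, zero crossings for an approximate pitch), and the packing of per-fixture pan/tilt values into a 512-channel DMX frame. The two-channels-per-fixture layout below is an assumption for illustration, not the actual SX10K channel map, and the real analysis software is certainly more sophisticated:

```python
import math

def analyze(samples, sample_rate):
    """Estimate volume (RMS) and a rough pitch via zero crossings."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    freq = crossings * sample_rate / (2.0 * len(samples))
    return rms, freq

def dmx_frame(fixtures):
    """Pack (pan, tilt) byte pairs (0-255 each) into a 512-channel
    DMX frame, one pair per fixture (hypothetical channel layout)."""
    frame = bytearray(512)
    for i, (pan, tilt) in enumerate(fixtures):
        frame[2 * i] = pan
        frame[2 * i + 1] = tilt
    return bytes(frame)

# A 440 Hz sine at 44.1 kHz should analyze to a frequency near 440 Hz.
sr = 44100
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 10)]
rms, freq = analyze(tone, sr)

# One frame aiming all 24 searchlights at the same pan/tilt.
frame = dmx_frame([(128, 64)] * 24)
```

Zero-crossing counting is only a rough pitch estimator, but it illustrates how a scalar per message can be derived and then mapped onto light behavior.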

When the searchlights produce the participant’s design, four Axis webcams take digital pictures from four vantage points in the city. As soon as the pictures are taken, an email is sent to the participant with the URL of a personal web page built for him or her.
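Building that notification email is straightforward with Python's standard library; the sender address, domain and URL pattern below are placeholders, since the project's actual template is not described here:

```python
from email.message import EmailMessage

def build_notification(recipient, code):
    """Build the 'your photos are ready' email. The addresses and
    URL scheme are illustrative, not the project's actual ones."""
    url = f"http://openair.example.org/page/{code}"  # hypothetical URL pattern
    msg = EmailMessage()
    msg["From"] = "noreply@example.org"              # placeholder sender
    msg["To"] = recipient
    msg["Subject"] = "Your Open Air light design"
    msg.set_content(
        "Your message was just performed by the searchlights.\n"
        f"See the photos taken from four vantage points: {url}\n"
    )
    return msg

mail = build_notification("participant@example.com", "A1B2C3")
```

The message would then be handed to an SMTP relay (e.g. via `smtplib`) once the four webcam captures are stored.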

A user may view his or her web page by entering the personal code or by typing the URL into a web browser. The personal page contains the real and virtual images of the user’s design, as well as his or her name, location, date, time and comments.
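The code-based retrieval amounts to a short random identifier keyed to the page record. A minimal sketch, with an in-memory dictionary standing in for the project's database and entirely hypothetical field names and sample data:

```python
import secrets
import string

_pages = {}  # in-memory stand-in for the project's page database

def create_page(name, location, images, comments=""):
    """Store a participant's page under a short random personal code."""
    alphabet = string.ascii_uppercase + string.digits
    code = "".join(secrets.choice(alphabet) for _ in range(6))
    _pages[code] = {"name": name, "location": location,
                    "images": images, "comments": comments}
    return code

def lookup(code):
    """Resolve a personal code to the stored page data, or None."""
    return _pages.get(code)

code = create_page("Ada", "Philadelphia", ["real.jpg", "virtual.jpg"])
page = lookup(code)
```

A six-character uppercase-plus-digit code gives about 2.2 billion combinations, short enough to type from an email yet hard to guess.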