My name is Eamonn Bell. I’m a third-year Ph.D. student (GSAS) in Music Theory at Columbia University, and I have a particular interest in the application of mathematics and computational methods in music research.
I’m delighted to be working with Jennifer Brown at the Digital Science Center in the Science and Engineering Library for the 2015/16 academic year. This semester, I will be developing a virtual reality data visualization for the Center’s VR equipment based on the music listening habits of library users. This first blog post is about the use of the Processing programming language as a tool for the rapid prototyping of VR experiences. More information about the project can be found at my website, http://www.columbia.edu/~epb2125/#!listening.md.
Press coverage of the latest virtual reality (VR) technologies is becoming unavoidable. Oculus, bolstered by the public profile of John Carmack (the legendary programmer behind the 90s shooter Doom), is beginning production of its consumer head-mounted VR headset.
In 2014, at an industry conference, Google introduced a low-cost, self-assembled VR headset that leverages the sensors and high-resolution displays of the now-ubiquitous smartphone, named after its construction medium: Cardboard. Only yesterday, the New York Times announced that it was partnering with Google to deliver one million Cardboard headsets to its subscribers in early November, with the intention of co-distributing immersive 3D video content to be consumed using the device (October 20, 2015).
In the space of just over a year, Cardboard will have made the jump from dorky tech conference swag to middlebrow info-/edutainment content delivery system.
In short, for better or worse, VR is hot right now. But the technical barrier to entry in terms of development is quite high.
So, what’s a graduate student with a basic grasp of a scripting language traditionally underutilized in graphics development (Python), a data visualization project, and an Oculus Rift DK2 at hand to do?
Fortunately, the Processing programming language provides a great platform for getting started with programming in 3D. Created in 2001 by Casey Reas and Benjamin Fry, Processing provides a simplified, Java-like interface to a slew of static and animated 2D and 3D graphics functions, wrapped in a lightweight graphical IDE.
One advantage of Processing is its minimal syntax and built-in draw routines. Most Processing sketches contain two blocks of code, the contents of two functions:
setup(), which is called once when the Processing “sketch” (project) starts; and
draw(), which is called once per frame. Changes made to the canvas in draw() are rendered on every frame, which makes simple animation straightforward.
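To make the two-function structure concrete, here is a minimal sketch of my own (not taken from the project code) that draws a trail of circles following the mouse:

```processing
void setup() {
  size(400, 400);     // create a 400x400-pixel canvas, once at startup
  background(255);    // start with a white background
  noStroke();         // draw shapes without outlines
}

void draw() {
  // called once per frame (roughly 60 times per second by default);
  // because background() is never called here, each frame's circle
  // persists on the canvas, leaving a trail
  fill(0, 102, 153);
  ellipse(mouseX, mouseY, 20, 20);
}
```

Pasting this into the Processing IDE and pressing Run is all that is required; there is no boilerplate for window creation or event loops.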
Processing has a large community of users and developers, and a number of introductory texts have been written which step the reader through the most common features of the language, sometimes with clear applications in mind. I can recommend Fry’s now-dated Visualizing Data (http://shop.oreilly.com/product/9780596514556.do), or the excellent The Nature of Code (http://natureofcode.com/), which focuses on physics simulation in Processing.
Processing also has extensive support for 3D rendering, which I will use to prototype the immersive data visualization that is the goal of this project. Since Processing is in effect a subset of the Java language, the Java standard libraries and third-party classes can be used in Processing sketches. This opens the door to a vast number of applications beyond simple drawing and visualization, even VR. The OculusRiftP5 library (https://github.com/kougaku/OculusRiftP5) exposes the Oculus Rift in a very straightforward way and can be used to rapidly prototype VR experiences in Processing. The example code included with the library runs well, and though it has not been updated recently, the library seems stable enough. The class implemented by this library will be the basis for my first experiments with the Oculus Rift. More to come!
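To give a flavor of Processing’s 3D mode before bringing the Rift into the picture, here is a minimal sketch (my own illustration, not code from the OculusRiftP5 library) that uses the built-in P3D renderer to spin a lit cube:

```processing
float angle = 0;  // current rotation, in radians

void setup() {
  size(640, 360, P3D);  // third argument requests the 3D renderer
}

void draw() {
  background(0);                      // clear to black each frame
  lights();                           // enable default scene lighting
  translate(width / 2, height / 2, 0); // move the origin to the canvas center
  rotateY(angle);                     // rotate the coordinate system
  box(100);                           // draw a cube 100 units on a side
  angle += 0.01;                      // advance the rotation for the next frame
}
```

The same drawing calls (translate, rotate, box, and friends) carry over when rendering to a head-mounted display, which is what makes Processing attractive for quick VR prototyping.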