
Researching optic flow and interesting features in a series of images requires recording videos to analyze. To accomplish this, William designed a headset based around an Oculus DK2 with a Raspberry Pi (a small single-board Linux computer), camera, and Wii Nunchuck attached to the side. Some features of the device are:

  • The whole device is battery powered, making it untethered and completely portable
  • Python script displays camera output right in front of the user’s eyes
  • Camera captures video at 1920×1080, 15fps
  • Wii Nunchuck allows for easy menu navigation
  • Menus allow the user to modify the occluded area during use

A mask occludes a large portion of the image, leaving only an ellipse visible in the middle. This forces the user to turn their head to capture the entire scene and places the most interesting features of the scene in the center of the camera's field of view. The overlay does not appear in the recorded video file, however, leaving clean data for analysis.

[Image: What the user sees while wearing the headset.]
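
Because picamera composites preview overlays in the display pipeline rather than the camera pipeline, an overlay of this kind never reaches the encoder. Below is a minimal sketch of one way such a mask could be built, assuming Pillow for drawing and picamera 1.13+ for the explicit format argument; the ellipse geometry and alpha value are illustrative placeholders, not the project's actual settings:

    import picamera
    from PIL import Image, ImageDraw

    camera = picamera.PiCamera(resolution=(1920, 1080), framerate=15)
    camera.start_preview()

    # Overlay buffers must be padded to a multiple of 32 (width) and
    # 16 (height), so a 1920x1080 overlay is drawn on a 1920x1088 canvas.
    mask = Image.new('RGBA', (1920, 1088), (0, 0, 0, 200))  # semi-opaque surround
    draw = ImageDraw.Draw(mask)
    draw.ellipse((660, 240, 1260, 840), fill=(0, 0, 0, 0))  # transparent window

    # Layer 3 renders above the preview (layer 2); the H.264 encoder never
    # sees overlays, so the recording stays clean.
    camera.add_overlay(mask.tobytes(), size=(1920, 1080), format='rgba', layer=3)
    # ... menu loop runs here, redrawing and re-adding the overlay as needed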

The size of the ellipse and the opacity of the surrounding occlusion can be changed by the user on the fly by navigating an on-screen menu with the Nunchuck. From this same menu, the user can begin recording a video of the scene. A small red circle appears in the corner of the user's field of view to indicate that recording is in progress, and the camera's output is continuously saved to an H.264 video file at a resolution of 1920×1080, at 15 frames per second. Once recording is complete, the video is automatically converted to an MP4 file and saved locally on the Pi. In addition to the video file, the camera's H.264 encoder tracks motion between frames and can save this motion data to a file in the following format (taken from the picamera Python library documentation):

Motion data is calculated at the macro-block level (an MPEG macro-block represents a 16×16 pixel region of the frame), and includes one extra column of data. Hence, if the camera's resolution is 640×480, there will be 41 columns of motion data ((640 / 16) + 1), in 30 rows (480 / 16).

Motion data values are 4-bytes long, consisting of a signed 1-byte x vector, a signed 1-byte y vector, and an unsigned 2-byte SAD (Sum of Absolute Differences) value for each macro-block.
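
Based on the picamera API, the recording and motion capture described above might look like the following sketch; the file names and recording duration are placeholders, and MP4Box is assumed as the container converter:

    import subprocess
    import numpy as np
    import picamera

    with picamera.PiCamera(resolution=(1920, 1080), framerate=15) as camera:
        # Write the raw H.264 stream and the per-macro-block motion vectors
        # side by side; 'scene.h264' and 'scene.motion' are placeholder names.
        camera.start_recording('scene.h264', motion_output='scene.motion')
        camera.wait_recording(30)  # placeholder duration in seconds
        camera.stop_recording()

    # Wrap the raw stream in an MP4 container (assumes MP4Box is installed).
    subprocess.check_call(['MP4Box', '-fps', '15', '-add', 'scene.h264', 'scene.mp4'])

    # One 4-byte record per macro-block: signed x/y vectors, unsigned SAD.
    motion_dtype = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])

    cols = (1920 // 16) + 1    # 121 columns, including the extra one
    rows = (1080 + 15) // 16   # 68 rows; 1080 is padded up to a multiple of 16
    frames = np.fromfile('scene.motion', dtype=motion_dtype).reshape(-1, rows, cols)

Each frames[i] is then a rows × cols grid of motion vectors for one encoded frame, ready for optic flow analysis.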

Multiple files can be recorded and stored on the Pi, and when it is time to collect the data, the user can copy them to a computer either over SCP or by turning off the headset, removing the micro SD card, and mounting it directly.

[Image: A view of the entire system.]

The headset is powered by a 10,200 mAh dual-output USB power pack that can easily be kept in the user's pocket during normal operation. The Raspberry Pi and camera are secured to the headset with a 3D-printed bracket, and the Nunchuck connects to the Pi directly via the GPIO header.
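
The Nunchuck speaks I2C, so connecting it to the GPIO header means wiring it to the Pi's I2C pins. Here is a minimal sketch of polling it from Python, assuming the standard unencrypted initialization sequence on I2C bus 1; the register values and timing come from the widely documented Nunchuck protocol, not from this project's code:

    import time
    import smbus

    NUNCHUK_ADDR = 0x52            # fixed I2C address of the Nunchuck

    bus = smbus.SMBus(1)           # bus 1 = GPIO pins 3 (SDA) and 5 (SCL)
    # "Unencrypted" init; avoids the XOR obfuscation of the older 0x40/0x00 init.
    bus.write_byte_data(NUNCHUK_ADDR, 0xF0, 0x55)
    bus.write_byte_data(NUNCHUK_ADDR, 0xFB, 0x00)

    def read_nunchuk():
        bus.write_byte(NUNCHUK_ADDR, 0x00)   # request a fresh 6-byte report
        time.sleep(0.002)
        data = [bus.read_byte(NUNCHUK_ADDR) for _ in range(6)]
        joy_x, joy_y = data[0], data[1]      # 8-bit joystick axes, centered ~128
        c_pressed = not (data[5] & 0x02)     # buttons are active-low
        z_pressed = not (data[5] & 0x01)
        return joy_x, joy_y, c_pressed, z_pressed

Polling this in the menu loop gives the joystick position for navigation and the C and Z buttons for selection.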

This headset makes it easy for any user to collect video while moving through a scene. The Computational Vision Laboratory will use this data to analyze optic flow and to track which objects in the scene users determine to be interesting.