The Computational Vision Laboratory conducts psychophysical and computational modeling studies of many aspects of visual perception. Below are brief descriptions of investigations carried out by our lab and by close collaborators from the CNS Vision Lab at Boston University, where Professor Ennio Mingolla and Harald Ruda previously worked.
This website is currently under construction; please contact Harald Ruda <email@example.com> if you encounter any issues or would like to provide feedback. Thank you!
Researching optic flow and interesting features in image sequences requires recording videos for later analysis. To accomplish this, William designed a headset based on an Oculus DK2, with a Raspberry Pi (a small single-board Linux computer), a camera, and a Wii Nunchuk attached to the side.
To improve video capture for object-tracking analysis, the robot records stable video for later analysis while also providing a platform for testing tracking algorithms with its attached pan-tilt-zoom (PTZ) camera.
Surfaces can appear to lie at different depths even when the only information available to the visual system comes from motion. What mechanisms contribute to these percepts? Please see the published article in JoV. The FMO model source code is also available.
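One cue that motion alone can supply is motion parallax: under lateral observer translation, the image speed of a surface point is inversely proportional to its depth. The toy sketch below (an illustration of this geometric relationship only, not the FMO model) recovers a depth ordering from image speeds alone; the function names and parameter values are our own assumptions.

```python
def image_speed(depth, translation=1.0, focal=1.0):
    """Image speed (arbitrary units) of a point at `depth` for a laterally
    translating observer: v = focal * translation / depth."""
    return focal * translation / depth

def depth_order(speeds):
    """Rank surfaces from nearest to farthest given their image speeds:
    faster image motion implies a nearer surface."""
    return sorted(range(len(speeds)), key=lambda i: -speeds[i])

# Three surfaces at depths 2, 8, and 4: the nearest moves fastest.
speeds = [image_speed(z) for z in (2.0, 8.0, 4.0)]
print(depth_order(speeds))  # → [0, 2, 1]
```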
When heading information is combined with goal and obstacle locations, the ViSTARS model steers around obstacles toward a goal. (Work by Andrew Browning.)
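The flavor of this combination can be conveyed with a steering-dynamics sketch: attraction toward the goal plus a repulsion from the obstacle that decays with angular offset and distance. This is a minimal illustration in the spirit of such models, not the ViSTARS implementation; all parameter values and function names here are illustrative assumptions.

```python
import math

def steering_rate(heading, goal_angle, obstacle_angle, obstacle_dist,
                  k_goal=1.0, k_obs=2.0, c1=5.0, c2=0.5):
    """Turn rate (rad/s, positive = rightward): attraction toward the goal
    plus repulsion from the obstacle. Repulsion falls off exponentially
    with angular offset from the obstacle and with obstacle distance."""
    attract = -k_goal * (heading - goal_angle)
    repel = (k_obs * (heading - obstacle_angle)
             * math.exp(-c1 * abs(heading - obstacle_angle))
             * math.exp(-c2 * obstacle_dist))
    return attract + repel

# Obstacle slightly to the left of the current heading: steer right (> 0).
print(steering_rate(0.0, 0.0, -0.1, 2.0))
```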
Simple hyperacuity phenomena can be modeled with a small number of oriented filters. Such models break down when the stimuli become more complex: when stimulus elements have opposite polarity with respect to the background, are masked by sinusoidal gratings, or are separated by a larger gap. How can these stimuli be modeled?
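The basic idea behind such filter models can be sketched in one dimension: an odd-symmetric filter responds with opposite signs to stimulus offsets in opposite directions, so its signed response discriminates offsets far smaller than the filter's own width. This toy (a derivative-of-Gaussian filter, with names and parameters of our choosing) is an illustration of the principle, not the lab's model.

```python
import math

def odd_filter_response(stimulus_pos, center=0.0, sigma=1.0):
    """Response of an odd-symmetric filter (a derivative of a Gaussian)
    to a point stimulus at `stimulus_pos`. The response is zero for a
    centered stimulus and changes sign with the direction of offset."""
    x = stimulus_pos - center
    return (-x / sigma**2) * math.exp(-x**2 / (2 * sigma**2))

# Offsets of +/-0.05 -- far smaller than the filter width (sigma = 1) --
# yield responses of opposite sign, so comparing the signed response
# discriminates sub-filter-width offsets: the essence of hyperacuity.
print(odd_filter_response(0.05), odd_filter_response(-0.05))
```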
Finding objects in natural environments is challenging even for simple objects with known properties. Because of occlusion and distance, the number and extent (visual angle) of visible surfaces do not directly convey the number of objects seen. Are those three reddish surfaces one nearby, partially occluded apple or three distant apples? Resolving such ambiguities seems effortless for human observers. How is it done?
Color perception is a complex process mediated by opponent color channels. What determines the color of objects that you see?
Recently, cells with only local receptive fields have been found to be selective for border-ownership. Can we model the operations such cells may be performing in order to achieve this selectivity?
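One candidate local operation is to compare activity on the two sides of an edge and assign ownership to the side with greater "figural" evidence. The one-dimensional toy below is a deliberately crude stand-in for such evidence (a local sum of activity), intended only to illustrate the kind of local computation in question; it is not a proposed model.

```python
def border_ownership_1d(row, edge, window=3):
    """Assign ownership of the border between positions edge-1 and edge
    to the side with greater summed activity within a small window,
    a crude proxy for 'figure' evidence. Returns 'left' or 'right'."""
    left = sum(row[max(0, edge - window):edge])
    right = sum(row[edge:edge + window])
    return 'left' if left > right else 'right'

row = [0, 0, 1, 1, 1, 0, 0]  # a bright bar on a dark ground
# Both of the bar's borders are assigned to the bar (the figure).
print(border_ownership_1d(row, 2), border_ownership_1d(row, 5))
```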