Below you will find brief descriptions of investigations being carried out by our lab and by some of our close collaborators from the CNS Vision Lab at Boston University, where Professor Ennio Mingolla and Harald Ruda previously worked. Please click on the headings for more information.
Surfaces can appear to have different depths even when the only information available to the visual system comes from motion. What are the mechanisms contributing to these perceptions?
Simple hyperacuity phenomena can be modeled with a small number of oriented filters. Such models break down when the stimuli become more complex: when stimulus elements have opposite contrast polarity with respect to the background, are masked by sinusoidal gratings, or are separated by larger gaps. How can these stimuli be modeled?
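The basic oriented-filter account can be sketched in a few lines: a bank of Gabor-like filters responds differently to an aligned versus an offset (vernier) pair of bars, so the offset is signaled by the asymmetry between filters tilted in opposite directions. This is a minimal illustrative sketch, not the lab's actual model; the filter parameters (wavelength, sigma) and stimulus geometry here are assumptions chosen for clarity.

```python
import numpy as np

def gabor(size, theta, wavelength=8.0, sigma=5.0):
    """Even-symmetric oriented Gabor filter; theta in radians, 0 = vertical stripe."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # distance across the stripe
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # isotropic Gaussian envelope
    g = env * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()                              # zero-mean: no response to uniform fields

def vernier(offset, size=33):
    """Two vertical bars; the lower bar is shifted right by `offset` pixels."""
    img = np.zeros((size, size))
    mid = size // 2
    img[:mid, mid] = 1.0            # upper bar
    img[mid + 1:, mid + offset] = 1.0  # lower bar
    return img

def response(img, theta):
    """Linear response of one filter centered on the stimulus."""
    return float(np.sum(img * gabor(img.shape[0], theta)))
```

For an aligned pair (`offset=0`) the responses of filters tilted at +theta and -theta are equal by mirror symmetry; a small vernier offset breaks that symmetry, so the signed difference between the two tilted responses carries the offset. The breakdown cases in the paragraph above (reversed polarity, grating masks, larger gaps) are precisely those where this simple difference signal no longer predicts human thresholds.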
Finding objects in natural environments is challenging even for simple objects with known properties. Because of occlusion and distance, the number and extent (visual angle) of visible surfaces do not directly convey the number of objects seen. Are those three reddish surfaces one nearby partially occluded apple or three distant apples? Resolving such ambiguities seems effortless for human observers. How is it done?
Humans can rapidly and accurately enumerate up to about four targets, an ability called subitizing. How is this accomplished? How do partial occlusions affect the human ability to subitize?
There are multiple independent sources of perceived chromaticity, including real light, negative afterimages, positive afterimages, and spatial chromatic induction. How are these sources combined perceptually?
Recently, cells with only local receptive fields have been found to be selective for border ownership. Can we model the operations such cells may perform to achieve this selectivity?