When humans view objects in a visual scene, they tend to perceive object borders as “belonging to” the object that occludes, or appears in front of, the others. In this context, the figure is the object that appears in front, and the remaining regions form the ground. Recently, single cells with border-ownership selectivity have been identified in primate visual areas V1, V2, and V4. Interestingly, these cells typically have receptive fields that are very small relative to the objects whose figure-ground relations they signal. Figure-ground perception in humans generally depends on global information about the scene context, yet border-ownership cells have access only to local information.

Image from Peterson, M. A., & Salvagio, E. (2008). Inhibitory competition in figure-ground perception: Context and convexity. Journal of Vision, 8(16), 4.

Convexity display from Peterson & Salvagio (2008). Human observers were more likely to report the convex region (on the right) as the figure than the concave region (on the left). The arrows indicate the model border-ownership results: each arrow shows the direction of border-ownership signaled by the model border-ownership cells at that location. Consistent with the human psychophysics data, the model border-ownership cells assign the convex region as the figure.

Our model of primate visual areas LGN, V1, V2, and V4 predicts that border-ownership cells acquire information about global scene context through fast inter-areal feedback from cells with large receptive fields. Units in model area V4, which are sensitive to curvature and have annular receptive fields of different sizes, compete to determine the locations of potential figures in the visual scene. This information is dynamically fed back to border-ownership cells sampling the object borders, which in turn locally signal the side of the figure. Junctions, local image features where borders between regions of different luminance meet, are thought to convey information about figure-ground segregation. The model yields results in scenes containing T-, X-, and L-junctions that are consistent with human percepts, without specialized junction detectors. Attention interfaces with units in model area V4 to multiplicatively enhance one or another potential figure in bistable visual displays.
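To make the grouping-and-feedback scheme above concrete, the following minimal Python sketch illustrates the general idea. It is not the published model (see Layton, Mingolla, & Yazdanbakhsh, 2012, for the full dynamical system): the annular-filter construction, the competition rule across scales and positions, the distance-weighted feedback, and the toy square stimulus are all simplifying assumptions made only for illustration.

    # Minimal sketch, not the published model: annular grouping units pool boundary
    # activity at several scales, compete, and the winners feed back to assign a
    # local border-ownership direction at each border cell.
    import numpy as np
    from scipy.ndimage import convolve

    def annular_filter(radius, width=1.5):
        """Ring-shaped kernel that pools activity at a fixed distance from its center."""
        size = int(2 * radius + 4 * width) | 1          # odd kernel size
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        ring = np.exp(-((np.hypot(x, y) - radius) ** 2) / (2.0 * width ** 2))
        return ring / ring.sum()

    def border_ownership_sketch(edge_map, radii=(4, 6, 8)):
        """Return, per border pixel, a unit vector pointing toward the inferred figure side."""
        # 1. Grouping stage: annular units of several sizes pool boundary activity around them.
        responses = [convolve(edge_map, annular_filter(r), mode='constant') for r in radii]
        grouped = np.max(responses, axis=0)                      # competition across scales
        # 2. Competition across positions: keep only the strongest candidate figure centers.
        winners = np.where(grouped > 0.8 * grouped.max(), grouped, 0.0)
        wy, wx = np.nonzero(winners)
        # 3. Feedback stage: each border cell's ownership points toward nearby winners,
        #    weighted by winner strength and discounted with distance.
        ownership = {}
        for ey, ex in zip(*np.nonzero(edge_map)):
            w = winners[wy, wx] / (1.0 + np.hypot(wy - ey, wx - ex))
            vec = np.array([np.sum(w * (wy - ey)), np.sum(w * (wx - ex))])
            norm = np.linalg.norm(vec)
            ownership[(ey, ex)] = vec / norm if norm > 0 else vec
        return ownership

    # Toy stimulus: the outline of a filled square; the intent is that ownership
    # vectors along the outline point toward the square's interior (the figure).
    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0
    gy, gx = np.gradient(img)
    edges = ((np.abs(gy) + np.abs(gx)) > 0).astype(float)
    vectors = border_ownership_sketch(edges)

The division of labor mirrors the paragraph above: the competition step selects the ring centers with the greatest pooled boundary support (candidate figures), and the feedback step converts those global winners into purely local ownership signals at the borders.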

We are currently extending the model to determine border-ownership signals in motion displays containing independently moving objects.

Selected publications

Layton, O. W., Mingolla, E., & Yazdanbakhsh, A. (2012). Dynamic coding of border-ownership in visual cortex. Journal of Vision, 12(13), 8.

Yazdanbakhsh, A., Layton, O. W., & Mingolla, E. (2011). A neural model of figure-ground segregation explains occlusion without junction detectors. VSS 2011.