A Virtual Environment for Visual Cognition Experiments
The FAVE software platform lets researchers design 3D virtual environments for investigating how visual tasks, such as navigating through the world and searching for objects, are performed by humans and by computational agents.
Although people live and act in a rich 3D world, most visual cognition experiments use only static 2D images, which are easy to generate in a controlled manner. Rich artificial 3D worlds do exist, for example in video games, but these are not designed to serve as platforms for carefully parameterized experiments. FAVE allows researchers to move their studies into a richer 3D world without sacrificing experimental control. It also makes it easy to run cognitive model simulations in the same environments explored by human participants, facilitating collaborations between researchers in visual cognition, computer vision, and computational neuroscience.
The software gives the researcher control over many aspects of the environment, which can be generated stochastically (with objects drawn randomly from a distribution specified by the experimenter) or deterministically (with particular objects specified by the experimenter). The experimenter can also assign rewards to actions in the environment: rewarding the human participant (or computer simulation) for picking up a particular type of object, for example. The configuration interface uses a simple syntax that allows even non-programmers to quickly design a variety of experiments. A complete history of navigation and interaction is kept for each trial, allowing later analysis and recreation.
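The two generation modes, reward assignment, and trial logging described above can be sketched as follows. This is a minimal illustrative sketch, not FAVE's actual API or configuration syntax: all names here (`Trial`, `stochastic_trial`, `deterministic_trial`, the `"pick_up"` action) are hypothetical.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Trial:
    """One experimental trial: the objects present, the reward scheme,
    and a complete log of interactions for later analysis/recreation."""
    objects: list
    rewards: dict                                 # action name -> reward value
    history: list = field(default_factory=list)   # (action, target, reward) events

    def record(self, action, target):
        # Log every navigation/interaction event with the reward it earned.
        self.history.append((action, target, self.rewards.get(action, 0.0)))

def deterministic_trial(objects, rewards):
    """Deterministic mode: the experimenter lists exactly which objects appear."""
    return Trial(list(objects), dict(rewards))

def stochastic_trial(distribution, n, rewards, seed=None):
    """Stochastic mode: n objects drawn from an experimenter-specified
    distribution (object name -> probability weight). A fixed seed makes
    the generated environment reproducible."""
    rng = random.Random(seed)
    names = list(distribution)
    weights = [distribution[name] for name in names]
    return Trial(rng.choices(names, weights=weights, k=n), dict(rewards))

# Example: an environment that is mostly apples, with picking up rewarded.
trial = stochastic_trial({"apple": 0.7, "rock": 0.3}, n=10,
                         rewards={"pick_up": 1.0}, seed=42)
trial.record("pick_up", trial.objects[0])
```

Seeding the stochastic mode lets the same randomly generated environment be replayed for both human participants and simulated agents.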
The software is currently being used by the Visual Attention Lab at Brigham and Women's Hospital, Harvard Medical School, to study human foraging behavior. In these studies, human observers forage for apples in a virtual orchard. Human behavior is compared to computer simulations in order to determine how the observer decides which tree to visit next, and how the environment is represented in memory in order to make this decision.
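One simple candidate decision rule for such a comparison is a greedy nearest-tree policy. The sketch below is purely illustrative: the source does not specify the lab's actual models, so this policy and the function names are assumptions.

```python
import math

def nearest_tree_policy(current, unvisited):
    """Hypothetical decision rule: always visit the closest unvisited tree."""
    return min(unvisited, key=lambda tree: math.dist(current, tree))

def simulate_forager(trees, start=(0.0, 0.0)):
    """Return the order in which a greedy forager visits the given trees.
    The resulting visit sequence can be compared against human trajectories."""
    pos, remaining, path = start, set(trees), []
    while remaining:
        nxt = nearest_tree_policy(pos, remaining)
        remaining.remove(nxt)
        path.append(nxt)
        pos = nxt
    return path

orchard = [(1.0, 0.0), (5.0, 5.0), (2.0, 1.0)]
print(simulate_forager(orchard))  # -> [(1.0, 0.0), (2.0, 1.0), (5.0, 5.0)]
```

Comparing the visit orders produced by rules like this against recorded human trial histories is one way such simulations can probe how observers choose the next tree.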
A video of the software in action: orchardYellow.
FAVE code will be made available soon for use by other researchers.