A Bayesian theory for intercepting objects moving in 3D
In order to intercept an object moving in a scene, the future position of that object must be estimated. We developed an ideal observer model that uses monocular visual information to estimate the optimal interception point—the point at which a moving object is most likely to cross an arbitrary line through the observer’s viewpoint. We compared optimal interception points with human reaches to intercept an object in a virtual 3D environment. Specifically, we explored how prior knowledge of the object’s size influences optimal and human interception performance. The ideal observer uses Bayes’ rule to combine available visual image information with prior statistical knowledge about the object’s size, starting position, and velocity to compute a probability distribution over intersection points along a specified line through the viewpoint. The optimal interception point was defined as the crossing point with the highest posterior probability given the image data and prior knowledge. Under monocular viewing, the visual information for estimating the crossing point is ambiguous: for a given image size, an object that is small and near must cross nearer than an object that is large and far. Thus, prior knowledge about the object’s 3D size can be used to disambiguate the interception point. We asked human participants to intercept two different moving objects. Participants first performed an interception task with no information about the objects’ sizes. They were then taught distinct object sizes, through visual and haptic feedback, so that they could use size as prior knowledge in subsequent interception tasks. Differences in interception performance before and after training should therefore reflect the participants’ use of the learned size information. Participants learned the sizes of the objects in the experiment and used this information to improve their interception performance. This use of prior knowledge supports a Bayesian model for object trajectory inference.
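The size/distance disambiguation described above can be illustrated with a minimal numerical sketch. This is not the paper's model, just a toy Bayesian inference under assumed values: a small-angle projection (image size ≈ physical size / distance), a Gaussian likelihood for the observed image size, a Gaussian prior over physical size, and a uniform prior over distance on a grid. All parameter values (`theta_obs`, `size_mu`, `size_sigma`, `theta_sigma`, grid ranges) are hypothetical.

```python
import numpy as np

def map_distance(theta_obs, size_mu, size_sigma,
                 theta_sigma=0.002, d_grid=None, s_grid=None):
    """MAP estimate of object distance given an observed angular size.

    theta_obs            : observed image (angular) size, ~ s / d (small-angle).
    size_mu, size_sigma  : Gaussian prior over physical size s (assumed values).
    """
    if d_grid is None:
        d_grid = np.linspace(0.5, 20.0, 400)   # candidate distances (m)
    if s_grid is None:
        s_grid = np.linspace(0.01, 1.0, 400)   # candidate physical sizes (m)
    D, S = np.meshgrid(d_grid, s_grid, indexing="ij")
    # Likelihood: Gaussian measurement noise on the projected size s / d
    lik = np.exp(-0.5 * ((theta_obs - S / D) / theta_sigma) ** 2)
    # Gaussian prior over physical size; distance prior uniform over the grid
    prior_s = np.exp(-0.5 * ((S - size_mu) / size_sigma) ** 2)
    # Marginalize over size to get an (unnormalized) posterior over distance
    post_d = (lik * prior_s).sum(axis=1)
    return d_grid[np.argmax(post_d)]

# Identical image size, two different size priors -> different inferred distances
d_small = map_distance(theta_obs=0.05, size_mu=0.1, size_sigma=0.02)
d_large = map_distance(theta_obs=0.05, size_mu=0.4, size_sigma=0.02)
```

With the same retinal size, the "believed-small" object is inferred to be near (here roughly 2 m, since 0.1/0.05 = 2) and the "believed-large" object far (roughly 0.4/0.05 = 8 m), which is the ambiguity-resolving role the size prior plays in the model.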