Interactive Vision: How Active Perceivers Sample Information Over Time

Authors

David Melcher, New York University Abu Dhabi

DOI:

https://doi.org/10.30682/diid7421l

Keywords:

Interactive vision, Temporal processing, Active vision, Aging, Advanced Driver Assistance Systems

Abstract

Human-object and human-computer interactions take place over time, during which we take in sensory information, make predictions about the impact of our actions based on our goals, and then integrate the new sensory information in order to update our internal models and guide new actions. Here, I focus on two key aspects of interaction that unfold over time: (1) active vision using eye and body movements and (2) temporal windows and rhythms. Recent scientific research provides new insights into how we integrate sensory input over time and how information processing speed varies over time and between individuals. Understanding these temporal parameters of how we perceive and act, and tailoring the experience to match individual differences in temporal processing, may dramatically improve the design of efficient and usable objects and interfaces. Failing to take these temporal factors into account can leave the user overwhelmed, confused, or liable to miss important information. These issues are illustrated by the challenge of adapting automatic driver assistance technologies to older drivers.
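
To make the perceive-predict-integrate-update cycle described in the abstract concrete, the sketch below simulates an active perceiver that blends noisy sensory samples with its own predictions and issues a driver-assistance alert with a lead time scaled to the individual's processing speed. This is a minimal illustrative sketch in Python, not a model from the article: the loop structure, the integration_gain and reaction_window_s parameters, and the alert rule are hypothetical simplifications chosen only to show the idea of matching alert timing to individual temporal processing.

```python
import random

def simulate_perceiver(true_distance_m=60.0, closing_speed_mps=15.0,
                       integration_gain=0.3, reaction_window_s=1.5,
                       sensory_noise_m=3.0, dt=0.1):
    """Toy perceive-predict-update loop for a driver approaching an obstacle.

    integration_gain  -- how strongly each new sample updates the internal model
    reaction_window_s -- temporal window the individual needs in order to respond
                         (larger for slower processing, e.g. some older drivers)
    """
    estimate = true_distance_m          # internal model of distance to obstacle
    t = 0.0
    while true_distance_m > 0:
        # World changes: the obstacle gets closer.
        true_distance_m -= closing_speed_mps * dt

        # Predict: the internal model extrapolates its own estimate forward.
        prediction = estimate - closing_speed_mps * dt

        # Sense: take a noisy sample of the current distance.
        sample = true_distance_m + random.gauss(0.0, sensory_noise_m)

        # Integrate: blend prediction and new evidence to update the model.
        estimate = (1 - integration_gain) * prediction + integration_gain * sample

        # Act: warn early enough for this individual's processing speed.
        time_to_contact = estimate / closing_speed_mps
        if time_to_contact <= reaction_window_s:
            return t  # time at which the assistance system should alert
        t += dt
    return t

# The same system must alert earlier for a user with slower temporal processing.
print("alert at t =", round(simulate_perceiver(reaction_window_s=1.0), 2), "s")
print("alert at t =", round(simulate_perceiver(reaction_window_s=2.5), 2), "s")
```

The two calls at the end illustrate the design point made in the abstract: an interface tuned to one temporal profile (a short reaction window) will warn too late for a user whose temporal processing is slower, so the alert threshold should be a per-user parameter rather than a fixed constant.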

Author Biography

David Melcher, New York University Abu Dhabi

David Melcher (Ph.D., Rutgers University, 2001) is Professor of Psychology at New York University Abu Dhabi. A cognitive neuroscientist, he is the author of over 100 scientific journal articles and book chapters and received the 2011 American Psychological Association Distinguished Scientific Award for Early Career Contributions.

Published

2021-11-18

How to Cite

Melcher, D. (2021). Interactive Vision: How Active Perceivers Sample Information Over Time. Diid — Disegno Industriale Industrial Design, (74), 8. https://doi.org/10.30682/diid7421l