Grasping in a Cluttered Environment: Avoiding Obstacles Under Memory Guidance

Date
2019
Authors
Abbas, Hana H
Marotta, Jonathan J
Abstract
Humans often reach to remembered objects, such as when picking up a coffee cup from behind our morning paper. When reaching to previously seen, now out-of-view objects, we rely on our perceptual memory of the scene to guide our actions (Milner & Goodale, 1995). Because they are encoded in relative coordinates, these perceptual representations are likely to exaggerate the risk associated with nearby obstacles. For instance, a cereal bowl next to our coffee cup may be judged as larger than it really is under memory-guided conditions, resulting in a more cautious obstacle avoidance approach to best prevent a messy collision. In contrast, when visual information is available up to the point when a reach is initiated, the precise positions of objects relative to the self are likely to be computed and incorporated into a motor plan, allowing for finely tuned eye-hand maneuvers around positioned obstacles. The objective of this study was to examine obstacle avoidance during memory-guided grasping. Eye-hand coordination was monitored as subjects reached through a pair of obstacles in order to grasp a 3D target. The availability of visual information was manipulated between subjects, such that reaches occurred either with continuous visual information (visually-guided condition), immediately in the absence of visual feedback (memory-guided no-delay condition), or after a 2-s delay in the absence of visual feedback (memory-guided delay condition). The positions and widths of the obstacles were manipulated, though their inner edges remained a constant distance apart. We expected the memory-guided delay group to exhibit exaggerated avoidance strategies, particularly around wider obstacles. Results revealed that subjects were able to effectively avoid obstacles in the visually-guided and memory-guided no-delay conditions, though overall performance was poorer in the no-delay group, owing to the inability to use visual information for the online control of action.
Still, subjects in these groups consistently altered the paths of the index finger and wrist and adjusted the index finger position on the target object to accommodate obstacles that obstructed the reach path to different degrees. Contrary to expectation, the memory-guided delay group resorted to a more moderate strategy, with fewer instances of altered index finger and wrist paths or adjusted index finger positions on the target object in response to positioned obstacles, though successful grasps were still seen. In other words, subjects reaching to remembered objects tended to use a "good enough" approach for avoiding obstacles. In conclusion, obstacle avoidance behaviour, driven by our stored perceptual representations of a scene, appears to adopt a more moderate, rather than exaggerated, strategy. This work was funded by Research Manitoba, an NSERC CGS M, and an NSERC Discovery Grant.
Keywords
Perception and Action, Vision, Grasping, Memory