Abstract
Purpose: To visually perceive our surroundings, we constantly move our eyes to focus on particular details and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (e.g., bionic eyes) and non-invasive (e.g., Sensory Substitution Devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in on sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via a different sensory modality, such as audition? Can they integrate visual information perceived in parts into larger percepts despite never having had any visual experience? Methods: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components, recognized via the EyeMusic's zooming mechanism. Results: After just 6-10 hours of specialized training, blind participants successfully and actively integrated facial features into cartoon identities in 79±18% of trials, well above the 10% chance level (rank-sum P < 1.55E-04). Conclusions: These findings show that even users who lacked any previous visual experience whatsoever can indeed integrate visual information perceived at increased resolution. This has potentially important practical implications for visual rehabilitation, both invasive and non-invasive.
| Original language | English |
|---|---|
| Pages (from-to) | 97-105 |
| Number of pages | 9 |
| Journal | Restorative Neurology and Neuroscience |
| Volume | 34 |
| Issue number | 1 |
| DOIs | |
| State | Published - 30 Oct 2015 |
Keywords
- Action-perception
- Active sensing
- Motor control
- Sensory substitution
- Vision rehabilitation
ASJC Scopus subject areas
- Neurology
- Developmental Neuroscience
- Clinical Neurology