Depth-to-audio sensory substitution for increasing the accessibility of virtual environments

Shachar Maidenbaum, Daniel Robert Chebat, Shelly Levy-Tzedek, Amir Amedi

Research output: Contribution to journal › Conference article › peer-review


Abstract

As most computerized information is visual, it is not directly accessible to blind and visually impaired users. This challenge is especially acute for graphical virtual environments, which is unfortunate because such environments hold great potential for the blind community for uses such as social interaction, online education and, in particular, safe mobility training from the comfort of one's home. While several previous attempts have increased the accessibility of these environments, current tools are still far from making them properly accessible. We suggest the use of Sensory Substitution Devices (SSDs) as a further step in increasing the accessibility of such environments, by offering the user more raw "visual" information about the scene via other senses. Specifically, we explore here the use of a minimal SSD based upon the EyeCane, which conveys the point-depth distance of a single pixel, for tasks such as virtual shape recognition and virtual navigation. We show that our users succeeded in these tasks and learned the transformation quickly, demonstrating the potential of this approach, and we end with a call for its addition to accessibility toolboxes.
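To make the described transformation concrete, the following is a minimal sketch of a single-pixel depth-to-audio mapping in the spirit of the EyeCane-style encoding the abstract refers to. The sensing range, frequency range, and the specific pitch mapping are illustrative assumptions, not the authors' actual parameters (the real device also modulates beep rate and vibration, which is omitted here).

```python
def depth_to_tone(distance_m, min_d=0.3, max_d=5.0,
                  f_near=1000.0, f_far=200.0):
    """Map a single point-depth reading (meters) to a tone frequency (Hz).

    Assumed convention: closer obstacles produce a higher pitch.
    All numeric ranges are hypothetical defaults for illustration.
    """
    # Clamp the reading to the assumed sensing range
    d = max(min_d, min(max_d, distance_m))
    # Linear interpolation: near -> f_near, far -> f_far
    t = (d - min_d) / (max_d - min_d)
    return f_near + t * (f_far - f_near)


# A nearby virtual wall yields a higher pitch than a distant one
near_tone = depth_to_tone(0.5)   # close obstacle
far_tone = depth_to_tone(4.0)    # distant obstacle
```

In a virtual environment, `distance_m` would come from a depth ray cast from the user's avatar, and the resulting frequency would drive a continuously updated tone, letting the user scan the scene one point at a time.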

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
