Earpod: Eyes-free menu selection using touch input and reactive audio feedback

Shengdong Zhao, Pierre Dragicevic, Mark Chignell, Ravin Balakrishnan, Patrick Baudisch

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

We present the design and evaluation of earPod: an eyes-free menu technique using touch input and reactive auditory feedback. Studies comparing earPod with an iPod-like visual menu technique on reasonably-sized static menus indicate that they are comparable in accuracy. In terms of efficiency (speed), earPod is initially slower, but outperforms the visual technique within 30 minutes of practice. Our results indicate that earPod is potentially a reasonable eyes-free menu technique for general use, and is a particularly exciting technique for use in mobile device interfaces. Copyright 2007 ACM.
Original language: English
Title of host publication: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2007, CHI 2007
Publisher: Association for Computing Machinery
Pages: 1395-1404
ISBN (Print): 1595935932, 9781595935939
DOIs
Publication status: Published - 2007
Externally published: Yes
Event: 25th SIGCHI Conference on Human Factors in Computing Systems 2007, CHI 2007 - San Jose, CA, United States
Duration: 28 Apr 2007 – 3 May 2007

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 25th SIGCHI Conference on Human Factors in Computing Systems 2007, CHI 2007
Place: United States
City: San Jose, CA
Period: 28/04/07 – 3/05/07

Bibliographical note

Publication details (e.g. title, author(s), publication statuses and dates) are captured on an “AS IS” and “AS AVAILABLE” basis at the time of record harvesting from the data source. Suggestions for further amendments or supplementary information can be sent to [email protected].

Research Keywords

  • Auditory menu
  • Gestural interaction
