
Screen reader

A screen reader is a form of assistive technology (AT) that is essential to people who are blind and useful to people who are visually impaired, illiterate, or have a learning disability. Screen readers are software applications that attempt to convey what sighted users see on a display through non-visual means such as text-to-speech, sound icons, or a Braille device. They do this through a variety of techniques, for example interacting with dedicated accessibility APIs, using operating system features (such as inter-process communication and querying of user interface properties), and employing hooking techniques.

Microsoft Windows operating systems have included the Microsoft Narrator screen reader since Windows 2000. Apple Inc.'s macOS, iOS, and tvOS include VoiceOver as a built-in screen reader, while Google's Android has provided the TalkBack screen reader since 2009. Similarly, Android-based devices from Amazon provide the VoiceView screen reader. BlackBerry 10 devices such as the BlackBerry Z30 also include a built-in screen reader, and a free screen reader application is available for older BlackBerry (BBOS7 and earlier) devices. There are also popular free and open-source screen readers, such as Speakup and Orca for Linux and Unix-like systems and NonVisual Desktop Access (NVDA) for Windows. The most widely used screen readers are often distributed as separate products: JAWS from Freedom Scientific, NVDA from NV Access, Window-Eyes from GW Micro, Dolphin Supernova by Dolphin, System Access from Serotek, and ZoomText Magnifier/Reader from Ai Squared are prominent examples.

In early operating systems such as MS-DOS, which employed command-line interfaces (CLIs), the screen display consisted of characters mapped directly to a screen buffer in memory, plus a cursor position; input was by keyboard. All of this information could therefore be obtained from the system, either by hooking the flow of information around the system and reading the screen buffer, or by using a standard hardware output socket and communicating the results to the user. In the 1980s, the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham developed Screen Reader for the BBC Micro and NEC Portable.

With the arrival of graphical user interfaces (GUIs), the situation became more complicated. A GUI has characters and graphics drawn on the screen at particular positions, so there is no purely textual representation of the graphical contents of the display. Screen readers were therefore forced to employ new low-level techniques: gathering messages from the operating system and using them to build up an "off-screen model", a representation of the display in which the required text content is stored. For example, the operating system might send messages to draw a command button and its caption; these messages are intercepted and used to construct the off-screen model. The user can switch between the controls (such as buttons) available on the screen, and their captions and contents are read aloud and/or shown on a refreshable Braille display.
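The off-screen model can be illustrated with a short sketch. The message format used here, (row, column, text), is hypothetical rather than any real operating system's API; it simply shows how intercepted draw messages accumulate into a queryable representation of the display:

    # Toy off-screen model: intercepted "draw text" messages are stored
    # by screen position so the text content can be queried later.
    class OffScreenModel:
        def __init__(self):
            self.items = {}  # maps (row, col) of each drawn string to its text

        def on_draw_text(self, row, col, text):
            """Record one intercepted text-drawing message."""
            self.items[(row, col)] = text

        def read_row(self, row):
            """Reconstruct one line of the display in left-to-right order."""
            cells = sorted((c, t) for (r, c), t in self.items.items() if r == row)
            return " ".join(t for _, t in cells)

    model = OffScreenModel()
    # Simulated messages drawing a command button and its caption:
    model.on_draw_text(10, 5, "[ OK ]")
    model.on_draw_text(10, 20, "[ Cancel ]")
    print(model.read_row(10))  # -> "[ OK ] [ Cancel ]"

A real off-screen model also tracks control types, focus, and invalidated screen regions; the sketch keeps only the position-to-text mapping that makes reading aloud possible.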
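For comparison, here is a sketch of the far simpler text-mode case described earlier, assuming the classic PC layout of two bytes per cell (a character byte followed by a colour-attribute byte) in an 80x25 grid; the buffer is simulated in memory rather than read from video RAM:

    # Toy text-mode screen buffer: 80x25 cells, two bytes per cell.
    COLS, ROWS = 80, 25

    def make_buffer(lines):
        """Build a simulated screen buffer from lines of text."""
        buf = bytearray(b"\x20\x07" * COLS * ROWS)  # blank, grey-on-black
        for row, line in enumerate(lines):
            for col, ch in enumerate(line[:COLS]):
                cell = 2 * (row * COLS + col)
                buf[cell] = ord(ch)  # character byte; attribute byte unchanged
        return bytes(buf)

    def read_line(buf, row):
        """Recover the text of one screen row by taking every other byte."""
        start = 2 * row * COLS
        chars = buf[start:start + 2 * COLS:2]  # skip the attribute bytes
        return chars.decode("cp437").rstrip()

    screen = make_buffer(["C:\\> dir", "README.TXT"])
    print(read_line(screen, 0))  # -> "C:\> dir"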
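However the text is recovered, it still has to be presented non-visually. A minimal sketch of the text-to-speech channel mentioned in the opening paragraph, using the cross-platform pyttsx3 library (one option among many, named here as an assumption):

    import pyttsx3

    engine = pyttsx3.init()  # selects the platform's speech driver
    engine.say("OK button")  # queue the focused control's caption
    engine.runAndWait()      # block until the utterance has been spoken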

[ "Multimedia", "Human–computer interaction", "World Wide Web", "visually impaired" ]
Parent Topic
Child Topic
    No Parent Topic