
Multi-touch

In computing, multi-touch is technology that enables a surface (a trackpad or touchscreen) to recognize the presence of more than one point of contact with the surface at the same time. Its origins lie in work at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s, and multi-touch systems were in use as early as 1985; Apple popularized the term "multi-touch" in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or the activation of certain subroutines attached to predefined gestures (a simple sketch of how such a gesture can be interpreted appears after the history below).

The term has come to be used in two different ways as a result of the field's rapid development, with many companies applying it to market older technology that other companies and researchers call gesture-enhanced single-touch, among several other names. Several similar or related terms attempt to distinguish whether a device can exactly determine, or only approximate, the location of different points of contact, but in marketing they are often used as synonyms.

The use of touchscreen technology predates both multi-touch technology and the personal computer. Early synthesizer and electronic-instrument builders such as Hugh Le Caine and Robert Moog experimented with touch-sensitive capacitance sensors to control the sounds made by their instruments. IBM began building the first touch screens in the late 1960s, and in 1972 Control Data released the PLATO IV computer, a terminal used for educational purposes that employed single-touch points in a 16×16 array user interface. These early touchscreens registered only one point of touch at a time, so on-screen keyboards (a well-known feature today) were awkward to use: key rollover and holding down a shift key while typing another character were not possible. An exception was a multi-touch reconfigurable touchscreen keyboard/display developed at the Massachusetts Institute of Technology in the early 1970s.
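At its core, the pinch-to-zoom gesture mentioned above compares the distance between two tracked contact points across successive frames. The minimal sketch below illustrates that idea in Python; the function name, coordinate convention and frame pairing are illustrative assumptions, not any vendor's actual implementation.

    import math

    def pinch_zoom_scale(prev_points, curr_points):
        """Return a zoom scale factor for a two-finger pinch gesture.

        prev_points and curr_points each hold the (x, y) positions of the
        same two fingers in consecutive frames. A ratio greater than 1 means
        the fingers moved apart (zoom in); less than 1 means they moved
        together (zoom out).
        """
        (ax, ay), (bx, by) = prev_points
        (cx, cy), (dx, dy) = curr_points
        prev_dist = math.hypot(bx - ax, by - ay)
        curr_dist = math.hypot(dx - cx, dy - cy)
        if prev_dist == 0:
            return 1.0  # coincident points: no meaningful scale change
        return curr_dist / prev_dist

    # Two fingers spread from 100 px apart to 150 px apart -> scale 1.5
    print(pinch_zoom_scale([(10, 0), (110, 0)], [(0, 0), (150, 0)]))

An application would typically multiply its current zoom level by this factor on every frame in which both contacts remain on the surface.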
In 1977, one of the early implementations of mutual-capacitance touchscreen technology was developed at CERN, building on the capacitive touch screens created there in 1972 by Danish electronics engineer Bent Stumpe. The technology was used to develop a new type of human-machine interface (HMI) for the control room of the Super Proton Synchrotron particle accelerator. In a handwritten note dated 11 March 1972, Stumpe presented his proposed solution: a capacitive touch screen with a fixed number of programmable buttons presented on a display. The screen was to consist of a set of capacitors etched into a film of copper on a sheet of glass, each capacitor constructed so that a nearby flat conductor, such as the surface of a finger, would increase its capacitance by a significant amount. The capacitors were formed from fine lines etched in copper on the glass, fine enough (80 μm) and sufficiently far apart (80 μm) to be invisible (CERN Courier, April 1974, p. 117). In the final device, a simple lacquer coating prevented the fingers from actually touching the capacitors.

In 1976, MIT described a keyboard with variable graphics capable of multi-touch detection, very likely the first multi-touch screen. In the early 1980s, the University of Toronto's Input Research Group was among the earliest to explore the software side of multi-touch input systems. A 1982 system at the University of Toronto used a frosted-glass panel with a camera placed behind the glass: when one or more fingers pressed on the glass, the camera detected the action as one or more black spots on an otherwise white background, allowing it to be registered as input. Because the size of a spot depended on how hard the person was pressing on the glass, the system was also somewhat pressure-sensitive. Notably, it was input-only and could not display graphics. In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen-based interfaces, though it made no mention of multiple fingers. In the same year, Myron Krueger's video-based Video Place/Video Desk system was influential in the development of multi-touch gestures such as pinch-to-zoom, though the system itself had no touch interaction.
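Whether the raw data comes from a capacitive grid, as in Stumpe's design, or from a camera watching for dark spots, as in the Toronto system, multi-touch sensing ultimately reduces to grouping the activated cells of a 2D frame into distinct contact points. The sketch below, assuming a small grid of capacitance-change readings and a hypothetical threshold, shows one simple way to do that grouping; it is illustrative only and does not reproduce any of the historical systems.

    def find_touches(frame, threshold=5):
        """Locate contact points in a 2D grid of capacitance deltas.

        frame is a list of rows of numeric readings (change in capacitance
        relative to the untouched baseline). Cells above `threshold` are
        grouped into 4-connected blobs, and the centroid of each blob is
        reported as one touch point, so several simultaneous fingers yield
        several points.
        """
        rows, cols = len(frame), len(frame[0])
        seen = [[False] * cols for _ in range(rows)]
        touches = []
        for r in range(rows):
            for c in range(cols):
                if frame[r][c] > threshold and not seen[r][c]:
                    # flood-fill one blob of activated cells
                    stack, cells = [(r, c)], []
                    seen[r][c] = True
                    while stack:
                        y, x = stack.pop()
                        cells.append((y, x))
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and not seen[ny][nx] and frame[ny][nx] > threshold):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                    cy = sum(y for y, _ in cells) / len(cells)
                    cx = sum(x for _, x in cells) / len(cells)
                    touches.append((cx, cy))
        return touches

    # Two fingers on a tiny 5x5 sensor grid produce two separate touch points.
    frame = [
        [0, 0, 0, 0, 0],
        [0, 9, 8, 0, 0],
        [0, 7, 9, 0, 0],
        [0, 0, 0, 0, 8],
        [0, 0, 0, 9, 9],
    ]
    print(find_touches(frame))  # two centroids, one per finger

Each reported centroid corresponds to one contact, which is what distinguishes a multi-touch sensor from the single-touch screens described earlier.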

[ "Computer hardware", "Computer vision", "Multimedia", "Human–computer interaction", "Artificial intelligence" ]
Parent Topic
Child Topic
    No Parent Topic