Observer Calibrator for Color Vision Research
Abstract:
The variability of human observers and differences in cone photoreceptor sensitivities are important to understand and quantify in Color Science research. Differences in human cone sensitivity may cause two observers to see different colors on the same display. Technicolor SA built a prototype instrument that classifies an observer with normal color vision into one of a small number of color vision categories; the instrument is used in color-critical applications for displaying colors to human observers. To facilitate Color Science research, an Observer Calibrator is being designed and built. This instrument is modeled on the one developed at Technicolor, with improvements including higher luminance levels for observers, a more robust MATLAB computer interface, two sets of individually controlled LED primaries, and the potential for interchangeable optical front ends to present the color stimuli to observers. The new prototype is lightweight, inexpensive, stable, and easy to calibrate and use. Human observers can view the difference between two displayed colors, or match an existing color by adjusting one LED primary set. The new prototype will create opportunities for further color science research and will provide an improved experimental experience for participating observers.
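In a linear model, the match an observer sets on such an instrument amounts to equating the cone excitations produced by two LED mixtures, and inter-observer differences in the fundamentals are what make matches disagree. The sketch below illustrates that computation; the Gaussian spectra standing in for the LED primaries and for the L, M, S cone fundamentals are invented for illustration, not measured data.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)          # nm, coarse grid for illustration

def gaussian_spd(peak_nm, width_nm):
    """Idealized spectral curve (assumption, not measured data)."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

# One set of R, G, B LED primaries (peaks and widths are made up for the sketch).
primaries = np.stack([gaussian_spd(620, 15),
                      gaussian_spd(530, 20),
                      gaussian_spd(460, 12)])   # shape (3, n_wavelengths)

# Placeholder cone fundamentals; real work would use, e.g., CIE 2006 L, M, S
# sensitivities sampled on the same wavelength grid.
cones = np.stack([gaussian_spd(565, 40),
                  gaussian_spd(540, 40),
                  gaussian_spd(445, 30)])       # shape (3, n_wavelengths)

def cone_excitations(led_weights):
    """L, M, S excitations for a vector of LED drive weights (linear model)."""
    spd = led_weights @ primaries               # spectrum of the mixture
    return cones @ spd                          # integrate against fundamentals

# Two observers with different fundamentals can disagree on whether two LED
# mixtures match; here we simply evaluate one mixture's excitations.
print(cone_excitations(np.array([0.8, 0.5, 0.3])))
```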
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes.
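As a rough illustration of depth rendition, the sketch below (with assumed geometry, not the paper's) pushes a point through a simplified parallel-camera model: screen disparity p = m f b / Z, and the perceived distance is where the viewer's two lines of sight intersect, V e / (e + p). The rendition ratio then compares perceived and true depth intervals.

```python
import numpy as np

f, b = 0.035, 0.065      # camera focal length and baseline (m), assumed
m    = 300.0             # image-to-screen magnification (assumed)
V, e = 2.0, 0.065        # viewing distance and interocular separation (m)

def perceived_depth(Z):
    """Perceived distance of a point at true distance Z (crossed-disparity model)."""
    p = m * f * b / Z                  # screen disparity, crossed for all Z
    return V * e / (e + p)             # intersection of the two lines of sight

for Z in (5.0, 10.0, 20.0):
    print(f"true {Z:5.1f} m -> perceived {perceived_depth(Z):5.2f} m")

# Depth rendition: ratio of a perceived depth interval to the true interval.
Z1, Z2 = 10.0, 11.0
print("rendition:", (perceived_depth(Z2) - perceived_depth(Z1)) / (Z2 - Z1))
```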
Abstract: An observer extracts local and global information from a natural scene to form a visual perception. Neisser and Treisman demonstrated that a natural scene contains different types of features, i.e., color, edges, luminance, and orientation, to aid visual search. Infrared and visible sensors present nighttime images to an observer to aid target detection. These sensors present the observer an adequate representation of a nighttime scene, but sometimes fail to provide quality features for accurate visual perception. The purpose of this thesis is to investigate whether color features (combining an infrared and a visible sensor image) improve visual scene comprehension compared to single-band grayscale features during a signal detection task. Twenty-three scenes were briefly presented in four different sensor formats (infrared, visible, fused monochrome, and fused color) to measure subjects' global visual ability to detect whether a natural scene was right side up or upside down. Subjects were significantly more accurate at detecting scene orientation for infrared and fused-color scenes than for fused-monochrome and visible scenes. Both the infrared and fused-color sensor formats provide enough essential features to allow an observer to perceptually organize a complex nighttime scene.
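Since the orientation judgment is a two-alternative detection task, sensitivity is conventionally summarized by d′ computed from hit and false-alarm rates. A minimal sketch with invented counts; the log-linear correction is one common way to keep the rates away from 0 and 1:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    h  = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(h) - z(fa)

# e.g. one observer's counts in a fused-color condition (hypothetical numbers):
print(d_prime(hits=20, misses=3, false_alarms=5, correct_rejections=18))
```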
Shading (variations of image intensity) provides an important cue for understanding the shape of three-dimensional surfaces from monocular views. On the other hand, texture (the distribution of discontinuities on the surface) is a strong cue for recovering surface orientation from monocular images. But given the image of an object or scene, what technique should we use to recover the shape of what is imaged? Resolving shape from shading requires knowledge of the reflectance of the imaged surface and, usually, the fact that it is smooth (i.e., it shows no discontinuities). Determining shape from texture requires knowledge of the distribution of surface markings (i.e., discontinuities). One might expect that one method would work when the other does not. I present a theory of how an active observer can determine shape from the image of an object or scene regardless of whether the image is shaded, textured, or both, and without any knowledge of reflectance maps or the distribution of surface markings. The approach succeeds because the active observer can manipulate the constraints behind the perceptual phenomenon at hand and thus derive a simple solution. Several experimental results are presented with real and synthetic images.
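The shading cue referred to here is the Lambertian image-irradiance relation I = ρ max(0, n · l), which ties image intensity to surface orientation. A minimal sketch rendering the shading of a unit sphere under one distant light; the albedo, light direction, and resolution are all assumptions.

```python
import numpy as np

n_pix = 128
ys, xs = np.mgrid[-1:1:n_pix * 1j, -1:1:n_pix * 1j]
mask = xs**2 + ys**2 <= 1.0                     # pixels covered by the sphere

# On a unit sphere the surface normal at (x, y, z) is just (x, y, z).
zs = np.zeros_like(xs)
zs[mask] = np.sqrt(1.0 - xs[mask]**2 - ys[mask]**2)
normals = np.dstack([xs, ys, zs])

light = np.array([0.3, 0.3, 0.9])
light = light / np.linalg.norm(light)           # unit light direction (assumed)

rho = 0.8                                       # albedo (assumed)
image = rho * np.clip(normals @ light, 0.0, None) * mask
print(image.shape, image.max())                 # shading image of the sphere
```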
Our portable video-based monocular eye tracker contains a headgear with two cameras that capture videos of the observer's right eye and the scene from the observer's perspective (Figure 1a). With this eye tracker, we typically obtain a position -- that represents the observer's point of regard (POR) -- in each frame of the scene video (Figure 1b without bottom left box). These POR positions are in the image coordinate system of the scene camera, which moves with the observer's head. Therefore, these POR positions do not tell us where the person is looking in an exocentric reference frame. Currently, the videos are analyzed manually by examining each frame. In short, we aim to automatically determine how long the observer spends fixating specific objects in the scene and in what order these objects are fixated.
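Once each scene-video frame's POR has been labeled with the object it lands on (the step the authors want to automate), fixation durations and order reduce to run-length encoding of the label sequence. A minimal sketch with hypothetical labels and an assumed frame rate:

```python
from itertools import groupby

FPS = 30  # scene-camera frame rate (assumption)

# One object label per frame; None marks frames where the POR hit no object.
por_labels = ["mug", "mug", "mug", None, "door", "door",
              "door", "door", "mug", "mug"]

# Collapse consecutive identical labels into (object, duration) fixations.
fixations = [(obj, sum(1 for _ in run) / FPS)
             for obj, run in groupby(por_labels)
             if obj is not None]

for order, (obj, seconds) in enumerate(fixations, start=1):
    print(f"{order}. {obj}: {seconds:.2f} s")
```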
Optical camouflage is the process of concealing objects in the visual spectrum. This paper proposes a system that can conceal any 2D object in front of an observer using an RGB-D sensor and an LCD display. The sensor is a Kinect v2, used both for depth sensing of the background scene behind the object and for 3D tracking of the observer's eyes. The LCD display covers the object to be concealed. The images output on the display are real-time video frames of the background region that is occluded by the object and hidden from the observer, rendered from the observer's viewpoint rather than the camera's.
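The geometric core of such a system is finding, for each background point, the display pixel at which it must appear: the intersection of the display plane with the ray from the tracked eye to that point. A minimal sketch, assuming the eye, depth data, and display share one world frame with the display lying in the plane z = 0:

```python
import numpy as np

def display_intersection(eye, background_point):
    """Point where the eye->background ray crosses the display plane z = 0."""
    eye = np.asarray(eye, dtype=float)
    bg = np.asarray(background_point, dtype=float)
    t = -eye[2] / (bg[2] - eye[2])      # ray parameter where z reaches 0
    return eye + t * (bg - eye)

eye = [0.1, 0.0, -0.8]                  # tracked eye, in front of the display
bg  = [0.3, 0.2,  1.5]                  # depth-sensed background point behind it
print(display_intersection(eye, bg))    # (x, y, 0): where to draw bg's color
```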
To study an observer's eye movements during realistic tasks, the observer should be free to move naturally throughout our three-dimensional world. Therefore, a technique is proposed to determine an observer's point-of-regard (POR) as well as his or her motion throughout a scene in three dimensions with minimal user input. This requires robust feature tracking and calibration of the scene camera in order to determine the 3D location and orientation of the scene camera in the world. With this information, calibrated 2D PORs can be triangulated to 3D positions in the world; the scale of the world coordinate system can be obtained by entering the distance between two known points in the scene. Information about scene-camera movement and tracked features can also be used to obtain observer position and head orientation for all video frames. The final observer motion, comprising the observer's positions and head orientations, and the PORs are expressed in 3D world coordinates. The result is knowledge not only of eye movements but of head movements as well, allowing evaluation of how an observer combines head and eye movements to perform a visual task. Additionally, knowledge of 3D information opens the door to many more options for visualizing eye-tracking results.
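In its simplest linear form, the triangulation step is the direct linear transform (DLT): stack two constraint rows per view from the projection matrices and take the right singular vector for the smallest singular value. A sketch with invented camera poses and identity intrinsics:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """3D point from two 3x4 projection matrices and one 2D match per view."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                          # null vector of the constraint stack
    return X[:3] / X[3]                 # dehomogenize

# Two hypothetical camera poses (K = I for simplicity), shifted along x:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0, 1.0])         # homogeneous ground truth
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]       # projected PORs in each view
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))              # recovers ~[0.5, 0.2, 4.0]
```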
Previous methods for estimating the motion of an observer through a static scene require that image velocities can be measured. For the case of motion through a cluttered 3D scene, however, measuring optical flow is problematic because of the high density of depth discontinuities. This paper introduces a method for estimating motion through a cluttered 3D scene that does not measure velocities at individual points. Instead the method measures a distribution of velocities over local image regions. We show that motion through a cluttered scene produces a bowtie pattern in the power spectra of local image regions. We show how to estimate the parameters of the bowtie for different image regions and how to use these parameters to estimate observer motion. We demonstrate our method on synthetic and real data sequences.
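The bowtie pattern reflects a basic fact: a region translating at velocity v concentrates its spatiotemporal power on the plane w_t + v w_x = 0, and a cluttered region mixes several such planes. A synthetic check of the single-velocity case, using one spatial dimension plus time (sizes and velocity are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, v = 64, 2                           # frames and pixels; velocity in px/frame

# A random texture translating rigidly: patch[t, x] = texture[(x - v*t) mod n].
texture = rng.standard_normal(n)
patch = np.stack([np.roll(texture, v * t) for t in range(n)])

power = np.abs(np.fft.fft2(patch)) ** 2
freqs = np.fft.fftfreq(n)
wt, wx = np.meshgrid(freqs, freqs, indexing="ij")   # temporal, spatial freq

# Energy should lie on the motion plane w_t + v*w_x = 0 (mod 1, from sampling).
on_plane = np.isclose(np.mod(wt + v * wx, 1.0), 0.0)
print("fraction of power on the motion plane:",
      power[on_plane].sum() / power.sum())          # ~1.0
```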