    Investigation on a modular high speed multispectral camera
    Abstract:
Multispectral imaging is becoming important in a growing range of image processing applications because it provides substantially more spectral information than, for example, RGB (red-green-blue) colour cameras. For this reason, a multispectral camera is being designed and developed at the Department of Quality Assurance of the Ilmenau University of Technology. Based on well-known approaches for filter wheel cameras, a new modular concept was developed, particularly for high-speed applications. A field programmable gate array (FPGA) is used for the complex control and correction computations.
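The abstract does not state which correction computations the FPGA performs; as a hedged illustration only, the sketch below shows a conventional two-point (dark frame and flat field) correction that a filter-wheel camera could apply to each spectral band. The function name, parameters, and the correction model itself are assumptions, not taken from the paper.

```python
# Hypothetical per-band correction a filter-wheel camera's processing could apply.
# The two-point (dark/flat) model is an assumption for illustration only.
import numpy as np

def correct_band(raw: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """Two-point radiometric correction of one spectral band."""
    # Subtract the dark frame, then normalise by the flat-field response.
    response = np.clip(flat - dark, 1e-6, None)
    gain = np.mean(flat - dark) / response
    return np.clip((raw - dark) * gain, 0, None)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark = rng.normal(10, 1, (480, 640))
    flat = dark + rng.normal(200, 5, (480, 640))
    raw = dark + 150 * (flat - dark) / 200          # synthetic band image
    print(correct_band(raw, dark, flat).mean())     # ~150 after correction
```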
    Keywords:
    RGB color model
    Gate array
Image quality is one of the important aspects of nanosatellite remote sensing missions. Image quality parameters, such as image detail and coverage area, have to be taken into consideration when designing a nanosatellite remote sensing payload, while the limited mass, dimensions, and power consumption of a nanosatellite add further constraints. Lengthening the focal length of the camera lens can increase image detail, but this reduces the coverage area of the image. To maintain a detailed image with a wide coverage area, the concept of synthetic aperture optical imaging is used in this research. Synthetic aperture optical imaging combines images from an array of cameras capturing the same object from various angles. An FPGA XuLA2 LX9 is used as the On Board Data Handling (OBDH) unit in this research to increase performance. The system built in this research is a synthetic aperture optical imaging system with 2 × 2 cameras, using LS Y201 camera modules that produce JPEG images at VGA resolution (640 × 480 pixels). The result achieved in this research is an image with a resolution of 1280 × 960 pixels produced by four cameras of 640 × 480 pixels each, with an average image data fetching time of 27.58498 s for low-compressed images and 11.1972 s for high-compressed images.
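As a minimal sketch of the final assembly step described above, the snippet below combines four already-decoded 640 × 480 tiles from the 2 × 2 camera array into one 1280 × 960 frame. It assumes the views are perfectly aligned and ordered row by row; the registration of overlapping views that real synthetic aperture imaging requires is omitted.

```python
# Minimal sketch: combine four 640x480 tiles from a 2x2 camera array into one
# 1280x960 frame. Assumes decoded, aligned grayscale arrays in row-major order.
import numpy as np

def assemble_2x2(tiles: list[np.ndarray]) -> np.ndarray:
    """tiles: [top-left, top-right, bottom-left, bottom-right], each 480x640."""
    top = np.hstack(tiles[:2])        # 480 x 1280
    bottom = np.hstack(tiles[2:])     # 480 x 1280
    return np.vstack([top, bottom])   # 960 x 1280

if __name__ == "__main__":
    tiles = [np.full((480, 640), i, dtype=np.uint8) for i in range(4)]
    print(assemble_2x2(tiles).shape)  # (960, 1280)
```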
    Payload (computing)
    Aperture (computer memory)
    Video Graphics Array
    Camera interface
X-ray computed tomography is a powerful method for nondestructive investigations in many fields. Three-dimensional images of internal structure are reconstructed from a sequence of two-dimensional projections. The polychromatic, high-density photon flux of modern synchrotron light sources offers hard X-ray imaging with spatio-temporal resolution down to the micrometer and microsecond range. Existing indirect X-ray image detection systems can be adapted for fast image acquisition by high-speed visible-light cameras. In this paper, we present a platform for custom high-speed CMOS cameras with embedded field-programmable gate array (FPGA) processing. This modular system is characterized by a high-throughput PCI Express (PCIe) interface and efficient communication blocks. It has been used to develop a novel architecture for a self-event trigger that increases the effective image frame rate and reduces the amount of received data. Thanks to a low-noise design, high frame rates in the kilohertz range, and high-throughput data transfer, this camera is well suited for ultrafast synchrotron-based X-ray radiography and tomography. The camera setup is complemented by high-throughput Linux drivers and seamless integration in our GPU computing framework.
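The self-event trigger is realised in FPGA logic in the paper; the sketch below is only a behavioural illustration of the underlying idea of keeping a frame only when its content changes, which reduces the amount of transferred data. The difference-threshold criterion and all names are assumptions for illustration.

```python
# Behavioural sketch of a self-event trigger: keep only frames whose content
# changes beyond a threshold, so less data has to be transferred. The
# difference-threshold criterion is an assumption; the actual trigger in the
# paper is realised in FPGA hardware.
import numpy as np

def triggered_frames(frames, threshold=5.0):
    """Yield (index, frame) only when the mean absolute change exceeds threshold."""
    previous = None
    for i, frame in enumerate(frames):
        current = frame.astype(float)
        if previous is None or np.mean(np.abs(current - previous)) > threshold:
            previous = current
            yield i, frame

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    static = rng.integers(0, 255, (64, 64), dtype=np.uint8)
    frames = [static] * 5 + [rng.integers(0, 255, (64, 64), dtype=np.uint8)]
    print([i for i, _ in triggered_frames(frames)])  # e.g. [0, 5]
```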
    Frame rate
    Citations (17)
Range imaging is often used for the classification of objects in the process industry. The speed of inspection needs to be high so that it does not become the bottleneck in the process. This paper presents an FPGA-based architecture for range imaging. Using centre of gravity, it calculates the range positions from 2D images. The results show that the proposed architecture can process range values at up to 150 Msamples per second. Thus, using cheap standard technology, we can achieve up to 3 times higher performance than expensive state-of-the-art high-performance smart cameras.
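A minimal sketch of the centre-of-gravity computation mentioned above: for every sensor column, the intensity-weighted row position of the laser line is computed, yielding one sub-pixel range sample per column (the triangulation calibration that maps line position to range is not shown). This is a generic reference formulation, not the paper's FPGA implementation.

```python
# Centre-of-gravity line extraction for sheet-of-light range imaging:
# per column, the intensity-weighted row position of the laser line.
import numpy as np

def centre_of_gravity(image: np.ndarray) -> np.ndarray:
    """Return the sub-pixel row position of the intensity peak in every column."""
    rows = np.arange(image.shape[0], dtype=float)[:, None]
    weights = image.astype(float)
    total = weights.sum(axis=0)
    total[total == 0] = 1.0                      # avoid division by zero
    return (rows * weights).sum(axis=0) / total

if __name__ == "__main__":
    img = np.zeros((100, 8))
    img[40:43, :] = [[1.0], [2.0], [1.0]]        # synthetic laser line near row 41
    print(centre_of_gravity(img))                # ~41 for every column
```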
Traditional spectral imaging cameras typically operate as pushbroom cameras by scanning a scene. This approach makes such cameras well-suited for high spatial and spectral resolution scanning applications, such as remote sensing and machine vision, but ill-suited for 2D scenes with free movement. This limitation can be overcome by single frame, multispectral (here called snapshot) acquisition, where an entire three-dimensional multispectral data cube is sensed at one discrete point in time and multiplexed on a 2D sensor. Our snapshot multispectral imager is based on optical filters monolithically integrated on CMOS image sensors with large layout flexibility. Using this flexibility, the filters are positioned on the sensor in a tiled layout, allowing trade-offs between spatial and spectral resolution. At system level, the filter layout is complemented by an optical sub-system which duplicates the scene onto each filter tile. This optical sub-system and the tiled filter layout lead to a simple mapping of 3D spectral cube data on the sensor, facilitating simple cube assembly. Therefore, the required image processing consists of simple and highly parallelizable algorithms for reflectance and cube assembly, enabling real-time acquisition of dynamic 2D scenes at low latencies. Moreover, through the use of monolithically integrated optical filters the multispectral imager achieves the qualities of compactness, low cost and high acquisition speed, further differentiating it from other snapshot spectral cameras. Our prototype camera can acquire multispectral image cubes of 256 × 256 pixels over 32 bands in the spectral range of 600-1000 nm at 340 cubes per second for normal illumination levels.
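Because each filter tile sees a duplicate of the scene, cube assembly reduces to cutting the sensor frame into its tiles and stacking them along the band axis. The sketch below illustrates this with a 4 × 8 layout of 256 × 256-pixel tiles (32 bands), matching the figures quoted in the abstract; the row-major tile order and tile geometry are assumptions.

```python
# Sketch of snapshot cube assembly: cut the tiled sensor frame into its filter
# tiles and stack them into a (rows, cols, bands) spectral cube.
import numpy as np

def assemble_cube(frame: np.ndarray, tile_rows: int = 4, tile_cols: int = 8) -> np.ndarray:
    h, w = frame.shape
    th, tw = h // tile_rows, w // tile_cols
    tiles = [frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(tile_rows) for c in range(tile_cols)]
    return np.stack(tiles, axis=-1)              # (th, tw, bands)

if __name__ == "__main__":
    sensor = np.zeros((4 * 256, 8 * 256), dtype=np.uint16)
    print(assemble_cube(sensor).shape)           # (256, 256, 32)
```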
    Data cube
    Snapshot (computer storage)
    Frame rate
    Citations (8)
We present the Light Field Video Camera, an array of CMOS image sensors for video image-based rendering applications. The device is designed to record a synchronized video dataset from over one hundred cameras to a hard disk array using as few as one PC per fifty image sensors. It is intended to be flexible, modular and scalable, with much visibility and control over the cameras. The Light Field Video Camera is a modular embedded design based on the IEEE 1394 High Speed Serial Bus, with an image sensor and MPEG2 compression at each node. We show both the flexibility and scalability of the design with a six-camera prototype.
    Video camera
    Light Field
    Three-CCD camera
    Citations (121)
This paper investigates digital high-definition imaging technology and presents the design of a digital high-definition camera system. The system uses a large-array CCD (KAI-2093CM) conforming to the SMPTE 274M standard as the photoelectric transfer device, an FPGA + AFE as the framework, and HD-SDI as the transfer interface, combined with the current advanced digital high-definition video standard. The imaging results show that the high-definition camera can realize high-definition shooting. The pictures are clear and can be displayed in real time with no stagnation. Moreover, the small camera with high resolution can be applied for high-definition shooting in aerospace and other fields.
    High definition
    Interface (matter)
    Digital camera
    Citations (1)
We design a camera by combining a micromirror array with a single optical sensor and exploiting compressed sensing based on projections onto a white-noise basis. A practical image/video camera is developed based on this concept and realized.
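A hedged sketch of the compressed-sensing measurement model behind such a camera: each micromirror pattern realises one row of a random projection matrix, and far fewer projections than pixels are recorded. The matrix sizes are illustrative, a dense Gaussian matrix stands in for the white-noise patterns, and the sparse reconstruction step is not shown.

```python
# Compressed-sensing measurement sketch: a few hundred random projections of a
# scene with thousands of pixels. Sizes and the Gaussian pattern matrix are
# illustrative stand-ins for the micromirror patterns.
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_measurements = 64 * 64, 512                       # undersampled ~8x

scene = rng.random(n_pixels)                                  # stand-in scene
patterns = rng.standard_normal((n_measurements, n_pixels))    # white-noise basis

measurements = patterns @ scene                               # one reading per pattern
print(measurements.shape)                                     # (512,)
```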
    Citations (0)
The first prototype of a high-speed camera with embedded image processing has been developed. Besides a high frame rate and high throughput, the camera introduces a novel self-triggering architecture to increase the frame rate and reduce the amount of received data. The camera is intended for synchrotron ultrafast X-ray radiography and tomography, but its concept is also suitable for other fields.
    Frame rate
    Smart camera
    High-speed camera
    Citations (1)
The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
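To illustrate the de-mosaicing step for a 2 × 2 RGB + NIR mosaic, the sketch below extracts the four sub-sampled channel images from a raw frame. The exact filter arrangement is an assumption for illustration; a real pipeline would also interpolate the missing positions of each channel.

```python
# Sketch of de-mosaicing a 2x2 RGB+NIR filter mosaic into four channel images.
# The assumed arrangement is R, G on even rows and B, NIR on odd rows.
import numpy as np

def split_mosaic(raw: np.ndarray) -> dict[str, np.ndarray]:
    """Extract the four sub-sampled channel images from a 2x2 mosaic."""
    return {
        "R":   raw[0::2, 0::2],
        "G":   raw[0::2, 1::2],
        "B":   raw[1::2, 0::2],
        "NIR": raw[1::2, 1::2],
    }

if __name__ == "__main__":
    raw = np.arange(8 * 8).reshape(8, 8)
    channels = split_mosaic(raw)
    print({name: img.shape for name, img in channels.items()})  # each (4, 4)
```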
    Dichroic glass
    Color filter array
    RGB color model
    Citations (2)
A modular image capture system with close integration to CCD cameras has been developed. The aim is to produce a system capable of integrating the CCD sensor, image capture, and image processing into a single compact unit. This close integration provides a direct mapping between CCD pixels and digital image pixels. The system has been interfaced to a digital signal processor board for the development and control of image processing tasks. These have included characterization and enhancement of noisy images from an intensified camera and measurement to subpixel resolutions. A highly compact form of the image capture system is in an advanced stage of development. It consists of a single FPGA device and a single VRAM, providing a two-chip image capturing system capable of being integrated into a CCD camera. A miniature compact PC has been developed using a novel modular interconnection technique, providing a processing unit in a three-dimensional format highly suited to integration into a CCD camera unit. Work is under way to interface the compact capture system to the PC using this interconnection technique, combining the CCD sensor, image capture, and image processing into a single compact unit.
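The abstract does not say how measurement to subpixel resolution is performed; the sketch below shows one common approach, parabolic interpolation around the brightest pixel of a 1D intensity profile, purely for illustration under that assumption.

```python
# Sub-pixel peak localisation by fitting a parabola through the brightest pixel
# and its two neighbours. A common technique, shown here only as an example of
# sub-pixel measurement; not taken from the paper.
import numpy as np

def subpixel_peak(profile: np.ndarray) -> float:
    """Sub-pixel position of the maximum of a 1D intensity profile."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                       # no neighbours to interpolate with
    left, centre, right = profile[i - 1:i + 2].astype(float)
    offset = 0.5 * (left - right) / (left - 2 * centre + right)
    return i + offset

if __name__ == "__main__":
    x = np.arange(20, dtype=float)
    profile = np.exp(-0.5 * ((x - 7.3) / 1.5) ** 2)   # peak truly at 7.3
    print(round(subpixel_peak(profile), 2))           # close to 7.3
```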
    Subpixel rendering
    Camera interface
    Interface (matter)
    Charge-coupled device
    Citations (0)