Abstract To monitor and optimize flotation production effectively, computer vision and image analysis systems are used. The first critical step in such a system is acquiring high-quality froth surface images. Froth imaging quality is difficult to control: industrial field noise, the 3D structure of the froth, complex textures, and mixed colors all make flotation images hard to segment and process. To acquire high-quality images, a new image acquisition system for lead flotation is studied. The system employs a free-form surface lens designed on the basis of non-imaging optics theory, which improves the optical efficiency of the lens and the uniformity of the light sources and reduces flare effects. As a complementary step, an improved adaptive Multi-Scale Retinex (MSR) algorithm is proposed to increase the brightness and intensity contrast of small bubbles and to enhance texture details and weak froth edges, by analyzing the Retinex output characteristics of shaded areas and improving the gain function. Under optimal parameters, the image acquisition system obtains uniform illumination and suppresses various noise sources. Experiments show that the new froth image acquisition system increases the signal-to-noise ratio by 14%, contrast by 21%, and image segmentation accuracy by 26%.
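The MSR step described above can be illustrated with a minimal sketch. This is the classic Multi-Scale Retinex (log of the image minus log of a Gaussian-blurred surround, averaged over several scales), followed by a simple linear gain/offset mapping back to 8-bit range. The scale set, the epsilon offset, and the linear rescaling are illustrative assumptions; the paper's improved adaptive gain function is not specified in the abstract and is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(5, 25, 60), eps=1.0):
    """Classic MSR: average single-scale Retinex outputs (log image minus
    log Gaussian-blurred surround) over several scales. The scale set and
    eps are illustrative defaults, not values from the paper."""
    img = image.astype(np.float64) + eps  # offset keeps the log well-defined
    log_img = np.log(img)
    out = np.zeros_like(img)
    for sigma in sigmas:
        out += log_img - np.log(gaussian_filter(img, sigma))
    return out / len(sigmas)

def rescale_to_uint8(msr):
    """Linear gain/offset back to a displayable 8-bit range -- a simple
    stand-in for the paper's improved adaptive gain function."""
    lo, hi = msr.min(), msr.max()
    if hi - lo < 1e-12:
        return np.zeros_like(msr, dtype=np.uint8)
    return ((msr - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Because the Gaussian surround estimates local illumination, subtracting it in log space lifts shadowed small bubbles and weak edges relative to the bright, well-lit froth regions.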
Abstract The ability to image at high speeds is necessary in biological imaging to capture fast-moving or transient events, or to image large samples efficiently. However, because biological specimens lack rigidity, fast, high-resolution volumetric imaging without moving and agitating the sample has been a challenging problem. Pupil-matched remote focusing has shown promise for high-NA imaging systems, owing to its low aberrations and wavelength independence, which make it suitable for multicolor imaging. However, owing to the incoherent and unpolarized nature of the fluorescence signal, manipulating this emission light through remote focusing is challenging. Therefore, remote focusing has been primarily limited to the illumination arm, using polarized laser light to facilitate coupling in and out of the remote focusing optics. Here, we introduce a novel optical design that can de-scan the axial focus movement in the detection arm of a microscope. Our method splits the fluorescence signal into S- and P-polarized light, passes each through the remote focusing module separately, and recombines them at the camera. This allows us to use a single focusing element to perform aberration-free, multicolor, volumetric imaging without (a) compromising the fluorescence signal or (b) needing to translate the sample or detection objective. We demonstrate the capabilities of this scheme by acquiring fast dual-color 4D (3D space + time) image stacks with an axial range of 70 μm at camera-limited acquisition speed. Owing to its general nature, we believe this technique will find application in many other microscopy techniques that currently use an adjustable Z-stage to carry out volumetric imaging, such as confocal, 2-photon, and light sheet variants.
Abstract Single‐molecule localization‐based superresolution imaging is complicated by emission from multiple emitters overlapping at the detector. The potential for overlapping emitters is even greater for 3D imaging than for 2D imaging due to the large effective “volume” of the 3D point spread function. Overlapping emission can be accounted for in the estimation model, recovering the ability to localize the emitters, but with the caveat that the localization precision depends on the amount of overlap from other emitters. Whether a particular 3D imaging modality has a significant advantage in facilitating the position estimation of overlapping emitters is investigated. Variants of two commonly used and easily implemented 3D single‐molecule imaging modalities are compared: astigmatic imaging, dual focal plane imaging, and their combination, dual focal plane imaging with astigmatism. The Cramér–Rao lower bound is used to quantify multiemitter estimation performance by calculating the theoretical best localization precision under a multiemitter estimation model. The performance of these 3D modalities is investigated under a wide range of conditions, including various distributions of collected photons per emitter, background counts, pixel sizes, and camera readout noise values. Differences between modalities are small, and it is therefore concluded that multiemitter fitting performance should not be a primary factor in selecting between these modalities.
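The Cramér–Rao lower bound calculation underlying this comparison can be sketched for the simplest case: a single emitter, a 2D pixelated Gaussian PSF, and Poisson noise. (The paper evaluates full 3D multiemitter models; here the PSF width, pixel size, grid size, and background level are all illustrative assumptions.) The Fisher information is built from derivatives of the expected pixel counts, and the CRLB is the diagonal of its inverse.

```python
import numpy as np
from scipy.special import erf

def pixel_means(x0, y0, n_photons, bg, size=15, pixel=100.0, sigma_psf=130.0):
    """Expected photons per pixel: a 2D Gaussian PSF integrated over the
    pixel grid, plus a uniform background. All lengths in nm; the PSF width,
    pixel size, and grid size are illustrative, not the paper's values."""
    edges = (np.arange(size + 1) - size / 2.0) * pixel
    cdf = lambda mu: 0.5 * (1.0 + erf((edges - mu) / (np.sqrt(2.0) * sigma_psf)))
    px = np.diff(cdf(x0))  # fraction of photons landing in each column
    py = np.diff(cdf(y0))  # fraction of photons landing in each row
    return n_photons * np.outer(py, px) + bg

def crlb_xy(x0, y0, n_photons, bg, h=0.1, **kw):
    """Best-case (x, y) localization precision under a Poisson noise model:
    Fisher information via numerical derivatives of the pixel means, then
    CRLB = sqrt(diagonal of the inverse Fisher matrix)."""
    mu = pixel_means(x0, y0, n_photons, bg, **kw)
    d_dx = (pixel_means(x0 + h, y0, n_photons, bg, **kw)
            - pixel_means(x0 - h, y0, n_photons, bg, **kw)) / (2 * h)
    d_dy = (pixel_means(x0, y0 + h, n_photons, bg, **kw)
            - pixel_means(x0, y0 - h, n_photons, bg, **kw)) / (2 * h)
    grads = (d_dx, d_dy)
    fisher = np.array([[np.sum(gi * gj / mu) for gj in grads] for gi in grads])
    return np.sqrt(np.diag(np.linalg.inv(fisher)))
```

Extending this to the paper's setting means adding z (via an astigmatic or biplane PSF model) and the positions and intensities of all overlapping emitters as parameters, which enlarges the Fisher matrix but leaves the recipe unchanged.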