X-ray nanotomography is a powerful tool for the characterization of nanoscale materials and structures, but it is difficult to implement due to the competing requirements of X-ray flux and spot size. Due to this constraint, state-of-the-art nanotomography is predominantly performed at large synchrotron facilities. We present a laboratory-scale nanotomography instrument that achieves nanoscale spatial resolution while addressing the limitations of conventional tomography tools. The instrument combines the electron beam of a scanning electron microscope (SEM) with the precise, broadband X-ray detection of a superconducting transition-edge sensor (TES) microcalorimeter. The electron beam generates a highly focused X-ray spot on a metal target held micrometers away from the sample of interest, while the TES spectrometer isolates target photons with a high signal-to-noise ratio. This combination of a focused X-ray spot, energy-resolved X-ray detection, and unique system geometry enables nanoscale, element-specific X-ray imaging in a compact footprint. The proof of concept for this approach to X-ray nanotomography is demonstrated by imaging 160 nm features in three dimensions in six layers of a Cu-SiO2 integrated circuit, and a path toward finer resolution and enhanced imaging capabilities is discussed.
A class of vector-space bases is introduced for the sparse representation of discretizations of integral operators. An operator with a smooth, non-oscillatory kernel possessing a finite number of singularities in each row or column is represented in these bases as a sparse matrix, to high precision. A method is presented that employs these bases for the numerical solution of second-kind integral equations in time bounded by O(n log² n), where n is the number of points in the discretization. Numerical results are given which demonstrate the effectiveness of the approach, and several generalizations and applications of the method are discussed.
Limited-angle X-ray tomography reconstruction is, in general, an ill-conditioned inverse problem. When the projection angles are limited and the measurements are taken under photon-limited conditions, reconstructions from classical algorithms such as filtered backprojection may lose fidelity and acquire artifacts due to the missing-cone problem. To obtain satisfactory reconstruction results, prior assumptions, such as total variation minimization and nonlocal image similarity, are usually incorporated within the reconstruction algorithm. In this work, we introduce deep neural networks to determine and apply a prior distribution in the reconstruction process. The networks learn the prior directly from synthetic training samples and thus obtain a prior distribution specific to the class of objects we are interested in reconstructing. In particular, we use deep generative models with 3D convolutional layers and 3D attention layers, trained on synthetic 3D integrated-circuit (IC) data from a model dubbed CircuitFaker. We demonstrate that, when the projection angles and photon budgets are limited, the priors from our deep generative models can dramatically improve IC reconstruction quality on synthetic data compared with maximum likelihood estimation. Training the deep generative models with synthetic IC data from CircuitFaker illustrates the capabilities of a learned prior, and we expect the advantage of machine learning to persist if the process were reproduced with experimental data. The advantages of machine learning in limited-angle X-ray tomography may further enable applications in low-photon nanoscale imaging.
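The missing-cone artifact can be illustrated with a toy experiment: compare unfiltered backprojection from a full set of angles against a narrow ±30° wedge. A minimal numpy sketch follows; the phantom, geometry, and nearest-neighbor rotation are illustrative simplifications, and no learned prior is involved.

```python
import numpy as np

def rotate_nn(img, theta):
    # Nearest-neighbor rotation about the image center (inverse mapping).
    n = img.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    ys = (yy - c) * np.cos(theta) - (xx - c) * np.sin(theta) + c
    xs = (yy - c) * np.sin(theta) + (xx - c) * np.cos(theta) + c
    ys = np.clip(np.rint(ys).astype(int), 0, n - 1)
    xs = np.clip(np.rint(xs).astype(int), 0, n - 1)
    return img[ys, xs]

def backproject(angles, sinogram, n):
    # Unfiltered backprojection: smear each projection and rotate back.
    recon = np.zeros((n, n))
    for theta, proj in zip(angles, sinogram):
        recon += rotate_nn(np.tile(proj, (n, 1)), -theta)
    return recon / len(angles)

n = 64
phantom = np.zeros((n, n))
phantom[24:40, 28:36] = 1.0                         # a small rectangular feature

full = np.linspace(0, np.pi, 36, endpoint=False)     # full angular coverage
limited = np.linspace(-np.pi / 6, np.pi / 6, 12)     # only a +/-30 degree wedge

sino_full = [rotate_nn(phantom, t).sum(axis=0) for t in full]
sino_lim = [rotate_nn(phantom, t).sum(axis=0) for t in limited]

r_full = backproject(full, sino_full, n)
r_lim = backproject(limited, sino_lim, n)

corr = lambda a, b: np.corrcoef(a.ravel(), b.ravel())[0, 1]
print(f"full-angle corr:    {corr(r_full, phantom):.3f}")
print(f"limited-angle corr: {corr(r_lim, phantom):.3f}")
```

The wedge reconstruction smears the feature along the unmeasured directions, which is the kind of artifact a learned or hand-crafted prior is meant to suppress.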
Article data: published online 12 July 2006. ISSN (print): 0036-1445; ISSN (online): 1095-7200. Publisher: Society for Industrial and Applied Mathematics. CODEN: SIREAD.
We describe a series of microcalorimeter X-ray spectrometers designed for a broad suite of measurement applications. The chief advantage of this type of spectrometer is that it can be orders of magnitude more efficient at collecting X-rays than more traditional high-resolution spectrometers that rely on wavelength-dispersive techniques. This advantage is most useful in applications that are traditionally photon-starved and/or involve radiation-sensitive samples. Each energy-dispersive spectrometer is built around an array of several hundred transition-edge sensors (TESs). TESs are superconducting thin films that are biased into their superconducting-to-normal-metal transitions. The spectrometers share a common readout architecture and many design elements, such as a compact, 65 mK detector package, 8-column time-division-multiplexed superconducting quantum-interference device readout, and a liquid-cryogen-free cryogenic system that is a two-stage adiabatic-demagnetization refrigerator backed by a pulse-tube cryocooler. We have adapted this flexible architecture to mate to a variety of sample chambers and measurement systems that encompass a range of observing geometries. There are two different types of TES pixels employed. The first, designed for X-ray energies below 10 keV, has a best demonstrated energy resolution of 2.1 eV (full-width-at-half-maximum or FWHM) at 5.9 keV. The second, designed for X-ray energies below 2 keV, has a best demonstrated resolution of 1.0 eV (FWHM) at 500 eV. Our team has now deployed seven of these X-ray spectrometers to a variety of light sources, accelerator facilities, and laboratory-scale experiments; these seven spectrometers have already performed measurements related to their applications. Another five of these spectrometers will come online in the near future. 
We have applied our TES spectrometers to the following measurement applications: synchrotron-based absorption and emission spectroscopy and energy-resolved scattering; accelerator-based spectroscopy of hadronic atoms and particle-induced-emission spectroscopy; laboratory-based time-resolved absorption and emission spectroscopy with a tabletop, broadband source; and laboratory-based metrology of X-ray-emission lines. Here, we discuss the design, construction, and operation of our TES spectrometers and show first-light measurements from the various systems. Finally, because X-ray-TES technology continues to mature, we discuss improvements to array size, energy resolution, and counting speed that we anticipate in our next generation of TES-X-ray spectrometers and beyond.
We introduce a near-field to far-field transformation method that relaxes the usual restriction that data points be located on a plane-rectangular grid. It is not always practical or desirable to make uniformly spaced measurements; for example, the maintenance of positioning tolerances becomes more difficult as frequency is increased. Our method can (1) extend the frequency ranges of existing scanners, (2) make practical the use of portable scanners for on-site measurements, and (3) support schemes, such as plane-polar scanning, where data are collected on a nonrectangular grid. Although "ideal" locations are not required, we assume that probe positions are known. (In practice, laser interferometry is often used for this purpose.) Our approach is based on a linear model of the form Aξ = b. The conjugate gradient method is used to find the "unknown" ξ in terms of the "data" b. The operator A must be applied once per conjugate gradient iteration, and this is done efficiently using the recently developed unequally spaced fast Fourier transform and local interpolation. As implemented, each iteration requires O(N log N) operations, where N is the number of measurements. The required number of iterations depends on the desired computational accuracy and on conditioning. We present a simulation that is based on actual near-field antenna data.
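A minimal sketch of the conjugate gradient solve for Aξ = b, assuming jittered (non-uniformly perturbed) probe positions of the kind such a scanner might produce. For clarity the nonuniform Fourier operator is applied here as a dense matrix; the O(N log N) per-iteration cost described above comes from replacing this dense product with the unequally spaced FFT. All sizes and positions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64          # number of unknown coefficients (illustrative)
N = 256         # number of measurements

# Jittered probe positions: nearly uniform, but perturbed within each cell,
# modeling a scanner that cannot hold ideal grid positions.
pos = (np.arange(N) + rng.uniform(0, 1, N)) * n / N

# Nonuniform Fourier operator A[m, k] = exp(2*pi*i * pos[m] * k / n).
# (The unequally spaced FFT applies this same operator in O(N log N);
#  a dense matrix is used here purely for clarity.)
k = np.arange(n)
A = np.exp(2j * np.pi * pos[:, None] * k[None, :] / n)

xi_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = A @ xi_true                       # simulated "data"

def cg_normal(A, b, iters=200):
    # Conjugate gradient on the normal equations A^H A xi = A^H b.
    x = np.zeros(A.shape[1], dtype=complex)
    r = A.conj().T @ (b - A @ x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = A.conj().T @ (A @ p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-24:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

xi = cg_normal(A, b)
rel_err = np.linalg.norm(xi - xi_true) / np.linalg.norm(xi_true)
print(f"relative error: {rel_err:.2e}")
```

With known (if irregular) positions and modest oversampling, the normal equations are well conditioned and CG recovers the coefficients accurately.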
X-ray tomography is a non-destructive imaging technique that reveals the interior of an object from its projections at different angles. Under sparse-view and low-photon sampling, regularization priors are required to retrieve a high-fidelity reconstruction. Recently, deep learning has been used in X-ray tomography. The prior learned from training data replaces the general-purpose priors in iterative algorithms, achieving high-quality reconstructions with a neural network. Previous studies typically assume that the noise statistics of the testing data are known a priori from the training data, leaving the network susceptible to a change in the noise characteristics under practical imaging conditions. In this work, we propose a noise-resilient deep-reconstruction algorithm for X-ray tomography. By training the network with regularized reconstructions from a conventional algorithm, the learned prior shows strong noise resilience without the need for additional training with noisy examples, and allows us to obtain acceptable reconstructions with fewer photons in the testing data. The advantages of our framework may further enable low-photon tomographic imaging where long acquisition times limit the ability to acquire a large training set.
Probing electronic states on ultrafast timescales is critical for studies of chemical reactions. A tabletop method for performing time-resolved X-ray emission spectroscopy is presented and tested using a polypyridyl iron complex.