Discussion on the meeting on 'Statistical approaches to inverse problems'

2004 
Johnstone, Kerkyacharian, Picard and Raimondo are interested in the inverse problem of estimating f where f has been convolved with g and then contaminated with white noise. This popular problem has been tackled by a wide variety of procedures, and wavelet methods have recently generated considerable interest. Donoho's (1995) seminal wavelet–vaguelette paper introduced the notion that wavelets would be a good choice for the representation of f, since real-life objects, such as images, are more likely to be efficiently represented by wavelets than by, for example, Fourier representations.

Johnstone and his colleagues have moved the field on significantly. In particular, their procedure is more direct than wavelet–vaguelette or Abramovich and Silverman's (1998) vaguelette–wavelet method; it can handle boxcar blur both theoretically and practically; they obtain rates of convergence for p ≠ 2 (where p defines the type of loss); and the paper innovates through use of the new maxiset approach. For me, the most appealing of these innovations is the treatment of boxcar blur, which is one of the most common types of inverse problem. However, is it really the case that for rational a nothing can be done? Formula (4) compels us to say that nothing can, but naively it still feels wrong.

Formula (19) is the popular 'signal-plus-noise' model, but here it is a little different from what normally appears in the literature because the quantities are complex-valued random variables. More specifically, the z_l are zero-mean complex-valued Gaussian variables satisfying E(z_l z̄_k) = δ_{lk}. One question is why threshold the β_k and not the y_l directly? The covariance of the β_k is given by cov(β_k, β_l) = n Σ_m Ψ_{km} Ψ̄_{lm}.
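The covariance structure of coefficients built from complex-valued noise can be checked by simulation. The sketch below is not the authors' WaveD code: it draws complex white noise z with E(z_l z̄_k) = δ_{lk} (independent real and imaginary parts of variance 1/2), applies a hypothetical random matrix Ψ standing in for the map that produces the β_k, and compares the empirical covariance of β = Ψz with Σ_m Ψ_{km} Ψ̄_{lm} = (ΨΨ^H)_{kl} (the factor n in the displayed formula is set to 1 here).

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 32, 20000

# Complex white noise: real and imaginary parts N(0, 1/2), so that
# E(z_l * conj(z_k)) = delta_{lk}.
Z = (rng.standard_normal((reps, n))
     + 1j * rng.standard_normal((reps, n))) / np.sqrt(2)

# A hypothetical complex transform matrix Psi (an assumption for
# illustration, not the paper's wavelet-domain map).
Psi = (rng.standard_normal((n, n))
       + 1j * rng.standard_normal((n, n))) / np.sqrt(n)

# beta_k = sum_m Psi_{km} z_m, computed for every noise replicate at once.
B = Z @ Psi.T  # shape (reps, n)

# Empirical covariance E(beta_k * conj(beta_l)) versus the theoretical
# value sum_m Psi_{km} * conj(Psi_{lm}) = (Psi Psi^H)_{kl}.
emp = (B.T @ B.conj()) / reps
theory = Psi @ Psi.conj().T

max_dev = np.max(np.abs(emp - theory))
print(max_dev / np.max(np.abs(theory)))  # small Monte Carlo deviation
```

Because ΨΨ^H is generally not diagonal, the β_k are correlated even though the z_l are not, which is one reason the question of thresholding the β_k rather than the y_l is worth asking.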