Jean-Marc Freyermuth (KUL ORSTAT), Rainer von Sachs (UCL)
First of all, we would like to congratulate the author for providing a nice addition to the literature on wavelet thresholding, one that is a little different from offering "another more refined choice of the threshold value". As he says himself in the Introduction, existing papers on "threshold selection procedures in the function estimation problem advocate the choice of one single threshold value for each wavelet coefficient, ...". This approach is also referred to as "separable" or "diagonal" thresholding in the literature.

What we can learn from his paper is that optimizing this threshold value is perhaps not the only route to go down when it comes to denoising statistical signals with sufficiently interesting local structure. As soon as our interest goes beyond an overall satisfying and uniformly not-too-bad reconstruction of the signal (i.e. estimation of an underlying somewhat "smooth" function), it is certainly a good idea to broaden the view on thresholding schemes. This applies to situations where we want to simultaneously estimate the overall shape of the function and the precise location of some pronounced discontinuities, or to cases where the denoised reconstruction is supposed to give us an idea about some (spatial or temporal) segmentation behind the data-generating process.

What we address in our discussion follows this line of thought: while the author contributes a vector-valued view, namely a whole (visual) map of reconstructions using different (but still "diagonal") threshold values, a complementary approach could be to move away from "separable" thresholding and use rules built on vectors or blocks of coefficients that share some notion of closeness.
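To make the contrast with "separable" or "diagonal" thresholding concrete, here is a minimal sketch of our own (not code from the paper under discussion): each empirical wavelet coefficient is compared, on its own, against one threshold value, here the classical universal threshold sigma * sqrt(2 log n).

```python
import numpy as np

def hard_threshold(coeffs, sigma, n):
    """Separable ("diagonal") hard thresholding: every coefficient is kept
    or killed on its own, via the universal threshold sigma * sqrt(2 log n)."""
    lam = sigma * np.sqrt(2.0 * np.log(n))
    return np.where(np.abs(coeffs) > lam, coeffs, 0.0)

# toy example: detail coefficients at one scale, two strong features in noise
noisy = np.array([0.3, -0.8, 5.2, 0.4, -3.9, 0.1, -0.2, 0.6])
den = hard_threshold(noisy, sigma=0.5, n=noisy.size)
# with sigma = 0.5 and n = 8 the threshold is about 1.02, so only the two
# large coefficients (5.2 and -3.9) survive
```

The point of the paper under discussion is precisely that one need not commit to a single such lam; this sketch only fixes what the "diagonal" baseline does for one given threshold value.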
The size (or structural "geometry") of those blocks could then serve as an index to compare the performance of such a "family" of threshold rules (rather than threshold values, as in the present paper) with one another; and why not again use a mapping over time to visualize this performance and try to learn from the information it carries?

We structure the sequel of our contribution as follows. In a first part, without going into too much detail, we put a few questions to the author and build bridges to some of the applications he gives towards the end of his paper. In the second, larger part, we give a small summary of recent work on "block thresholding" rules. These rules have been advocated not primarily to minimize "uniform" estimation criteria such as the mean squared error or certain minimax risks (as, e.g., in the seminal work by Cai (1999) on horizontal block thresholding, which uses blocks of spatially neighboring coefficients within one scale). Rather, they were developed to improve function estimation in the vicinity of a discontinuity such as a jump, and this also visually (cf. Figs. 3.1-3.2). They do so by using "vertical blocks" of coefficients following a certain geometric structure across wavelet scales, i.e. a tree structure.
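For concreteness, a simplified sketch of the horizontal block rule in the spirit of Cai (1999) follows; this is our own illustration under stated assumptions (block length L of order log n, the BlockJS constant lambda* ≈ 4.50524, a single scale, no boundary refinements), not a faithful implementation of that paper. The essential difference from the diagonal rule is that a whole block of neighboring coefficients within one scale is kept, shrunk, or killed together via a James-Stein-type rule on the block energy.

```python
import numpy as np

LAMBDA_STAR = 4.50524  # constant used in Cai's (1999) BlockJS proposal

def block_js_threshold(coeffs, sigma, n, block_len=None):
    """Horizontal block thresholding (simplified BlockJS-style sketch):
    coefficients within one scale are grouped into non-overlapping blocks;
    each block is shrunk as a whole by the James-Stein factor
    max(0, 1 - lambda* L sigma^2 / S^2), where S^2 is the block energy."""
    L = block_len or max(1, int(np.floor(np.log(n))))
    out = np.zeros_like(coeffs, dtype=float)
    cutoff = LAMBDA_STAR * L * sigma**2
    for start in range(0, len(coeffs), L):
        block = coeffs[start:start + L]
        s2 = np.sum(block**2)  # block energy S^2
        shrink = max(0.0, 1.0 - cutoff / s2) if s2 > 0 else 0.0
        out[start:start + L] = shrink * block
    return out

# toy example: the middle block carries a feature, the outer blocks are noise
coeffs = np.array([0.2, -0.1, 0.3, 6.0, 5.5, -4.8, 0.1, 0.0, -0.2])
den2 = block_js_threshold(coeffs, sigma=1.0, n=coeffs.size, block_len=3)
# the low-energy blocks are killed entirely; the strong block is kept,
# mildly shrunk toward zero as one unit
```

A coefficient that would be borderline on its own is thus rescued (or removed) by the energy of its neighbors, which is exactly the mechanism that helps reconstruction near a jump.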
Bibliographic reference: Freyermuth, Jean-Marc; von Sachs, Rainer. Discussion: Time-threshold maps: Using information from wavelet reconstructions with all threshold values simultaneously. Journal of the Korean Statistical Society, Vol. 41, no. 2, pp. 161-164 (2012).
Permanent URL: http://hdl.handle.net/2078.1/127294