(Warning: These materials may be subject to lots of typos and errors. We are grateful if you could spot errors and leave suggestions in the comments, or contact the author at yjhan@stanford.edu.)
In the last three lectures we systematically introduced Le Cam’s two-point method with generalizations and various examples. The main idea of the two-point method is to reduce the problem at hand to a binary hypothesis testing problem, where the hypotheses may be either single or composite. This approach works when the target is to estimate a scalar (even when the underlying statistical model may involve high-dimensional parameters), while it typically fails when the target is to recover a high-dimensional vector of parameters. For example, even in the simplest Gaussian location model where the target is to estimate
under the mean squared loss, the best two-point approach gives a lower bound
while we have known from Lecture 4 that the minimax risk is
. To overcome this difficulty, recall from the minimax theorem that a suitable prior should always work, which by discretization motivates the idea of multiple hypothesis testing.
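To make the dimension gap concrete, here is the standard calculation in the notation of this aside (which may differ from the lecture’s displays): with $n$ i.i.d. observations from $\mathcal{N}(\theta, \sigma^2 I_p)$, $\theta \in \mathbb{R}^p$, and the squared $\ell_2$ loss, any two hypotheses that remain statistically indistinguishable must satisfy $\|\theta_0 - \theta_1\|_2 \lesssim \sigma/\sqrt{n}$, so the two-point method can certify at most
\[
\inf_{\hat\theta} \sup_{\theta} \mathbb{E}_\theta \|\hat\theta - \theta\|_2^2 \;\gtrsim\; \frac{\sigma^2}{n},
\]
whereas the minimax risk of this model scales as $\sigma^2 p / n$: the two-point bound loses a full factor of the dimension $p$.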
In this lecture, we develop the general theory of reducing problems into multiple hypothesis testing, and present several tools including tree-based methods, Assouad’s lemma and Fano’s inequality. In the next two lectures we will enumerate more examples (possibly beyond the minimax estimation in statistics) and variations of the above tools.
1. General Tools of Multiple Hypothesis Testing
This section presents the general theory of applying multiple hypothesis testing to estimation problems, as well as some important tools such as tree, Fano and Assouad. Recall from the two-point method that the separability and indistinguishability conditions are of utmost importance in applying testing arguments; the above tools require different separability conditions and represent the indistinguishability condition in terms of different divergence functions.
1.1. Tree-based method and Fano’s inequality
We start with possibly the simplest separability condition, i.e., the chosen parameters (hypotheses) are pairwise separated. The following lemma shows that the minimax estimation error is lower bounded in terms of the average test error.
Lemma 1 (From estimation to testing) Let
be any loss function, and there exist
such that
Then
where the infimum is taken over all measurable tests
.
Proof: For any given estimator , we construct a test
as follows:
Then the separability condition implies that
The rest follows from the fact that the maximum is lower bounded by the average.
Remark 1 The following weaker separability condition also suffices for Lemma 1 to hold: for any
and
, the inequality
always implies that
.
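For reference, one standard way to write this reduction (with constants and conditions following a common convention that may differ slightly from the displays above) is the following: if the hypotheses $\theta_1,\dots,\theta_N$ satisfy $L(\theta_i, a) + L(\theta_j, a) \ge \Delta$ for every action $a$ and every $i \neq j$, then
\[
\inf_{\hat\theta}\sup_{\theta}\,\mathbb{E}_\theta\, L(\theta,\hat\theta)
\;\ge\; \frac{\Delta}{2}\,\inf_{\Psi}\,\frac{1}{N}\sum_{i=1}^N \mathbb{P}_{\theta_i}(\Psi \neq i).
\]
Indeed, given any estimator $\hat\theta$ one may take the test $\Psi := \arg\min_{i} L(\theta_i, \hat\theta)$; whenever $\Psi \neq i$ under the $i$-th hypothesis, the separation condition forces $L(\theta_i, \hat\theta) \ge \Delta/2$, so each testing error is charged at least $\Delta/2$ of estimation loss.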
The remaining quantity in Lemma 1 is the optimal average test error , for which we are looking for a lower bound. Recall that when
, this quantity equals
by Le Cam’s first lemma. For general
, we have the following two different lower bounds.
Lemma 2 (Tree-based inequality) Let
be any undirected tree on vertex set
with edge set
. Then
Proof: It is straightforward to see that
It is also easy (left as an exercise) to establish the following elementary inequality: for any reals and any tree
, we have
Now using gives the desired inequality.
Lemma 3 (Fano’s inequality) Let
. Then
Remark 2 If we introduce auxiliary random variables
and
with
, then
where
denotes the mutual information between
and
.
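In its most frequently used form (with $V$ uniform on $\{1,\dots,N\}$ and $X \mid V = i \sim P_i$ as in Remark 2), Fano’s inequality reads
\[
\inf_{\Psi}\,\frac{1}{N}\sum_{i=1}^N P_i(\Psi \neq i)
\;\ge\; 1 - \frac{I(V; X) + \log 2}{\log N},
\qquad
I(V;X) = \frac{1}{N}\sum_{i=1}^N D_{\mathrm{KL}}\Bigl(P_i \,\Big\|\, \tfrac{1}{N}\textstyle\sum_{j=1}^N P_j\Bigr),
\]
so the average test error stays bounded away from zero whenever the mutual information is small compared with $\log N$.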
Proof: We present two proofs of Lemma 3. The first proof builds upon the representation (1) and is more analytical, while the second proof makes use of the data-processing inequality, which is essentially the classical information-theoretic proof of Fano’s inequality.
(First Proof) By (1), it suffices to prove that
By the linearity of expectation, it further suffices to prove that for any non-negative reals with
, we have
To establish this inequality, let . Then by the convexity of
, Jensen’s inequality gives
Plugging in the above inequality completes the proof of Lemma 3.
(Second Proof) Introduce the auxiliary random variables and
as in Remark 2. For any fixed
, consider the kernel
which sends
to
, then
is the Bernoulli distribution with parameter
, and
is the Bernoulli distribution with parameter
. By the data-processing inequality of KL divergence,
Now taking the expectation over on both sides gives
and rearranging completes the proof.
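One convenient way to carry out this final rearrangement (and the analogous step in the proof of Theorem 8 below) is the elementary bound on the binary KL divergence: for $q, q' \in (0,1)$,
\[
d_{\mathrm{KL}}\bigl(\mathrm{Bern}(q)\,\|\,\mathrm{Bern}(q')\bigr)
= q\log\frac{q}{q'} + (1-q)\log\frac{1-q}{1-q'}
\;\ge\; q\log\frac{1}{q'} - \log 2,
\]
which holds because the binary entropy $-q\log q - (1-q)\log(1-q)$ never exceeds $\log 2$ and the term $(1-q)\log\frac{1}{1-q'}$ is non-negative.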
Lemma 2 decomposes a multiple hypothesis testing problem into several binary testing problems, which cannot outperform the best two-point methods in typical scenarios but can be useful when there is some external randomness associated with each (see the bandit example later). Lemma 3 is the well-known Fano’s inequality involving the mutual information, and the additional
term is the key difference in multiple hypothesis testing. Hence, the typical lower bound argument is to apply Lemma 1 together with Lemma 3 (or sometimes Lemma 2), aided by the following lemma, which helps to upper bound the mutual information.
Lemma 4 (Variational representation of mutual information)
Proof: Simply verify that
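In standard notation (which may differ from the lecture’s display), the variational representation states that
\[
I(V; X) \;=\; \min_{Q}\ \mathbb{E}_{V}\bigl[D_{\mathrm{KL}}\bigl(P_{X\mid V}\,\|\,Q\bigr)\bigr],
\]
with the minimum attained at the marginal $Q = P_X$. In particular, for $V$ uniform on $\{1,\dots,N\}$ and any fixed reference distribution $Q$,
\[
I(V; X) \;\le\; \frac{1}{N}\sum_{i=1}^N D_{\mathrm{KL}}(P_i \,\|\, Q),
\]
so the mutual information can be upper bounded by evaluating KL divergences against any convenient choice of $Q$; this is how Lemma 4 is used in the examples of Section 2.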
1.2. Assouad’s lemma
Instead of the previous pairwise separation condition, Assouad’s lemma builds upon a different one where the hypotheses are essentially the vertices of an -dimensional hypercube. Specifically, we shall require that the distance between parameters
and
is proportional to their Hamming distance
, which becomes quite natural when the parameter of the statistical model lies in an
-dimensional space.
Theorem 5 (Assouad’s Lemma) Let
be any function. If there exist parameters
indexed by
such that
then
where
Proof: For a given estimator , define
Then it is straightforward to see that
The rest of the proof follows from Le Cam’s first lemma and the fact that the maximum is no smaller than the average.
In most scenarios, the total variation distance between the mixtures and
is hard to compute directly, and the following corollaries (based on the joint convexity of the total variation distance and the Cauchy–Schwarz inequality) are the versions of Assouad’s lemma most frequently presented.
Corollary 6 Under the conditions of Theorem 5, we have
Corollary 7 Under the conditions of Theorem 5, we have
where
is the resulting binary vector after flipping the
-th coordinate of
.
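To fix notation, here is one common way to write these bounds (the exact constant depends on the normalization of the separation condition and may differ from the lecture’s): assume that $L(\theta_v, a) + L(\theta_{v'}, a) \ge \Delta\, d_{\mathrm{H}}(v, v')$ for every action $a$ and all $v, v' \in \{0,1\}^p$, and write $P_{j,b}$ for the uniform mixture of $P_{\theta_v}$ over $\{v : v_j = b\}$. Then
\[
\inf_{\hat\theta}\,\max_{v \in \{0,1\}^p}\,\mathbb{E}_v\, L(\theta_v, \hat\theta)
\;\ge\; \frac{\Delta}{4}\sum_{j=1}^p \Bigl(1 - \bigl\|P_{j,0} - P_{j,1}\bigr\|_{\mathrm{TV}}\Bigr)
\;\ge\; \frac{p\,\Delta}{4}\Bigl(1 - \max_{d_{\mathrm{H}}(v, v') = 1}\bigl\|P_{\theta_v} - P_{\theta_{v'}}\bigr\|_{\mathrm{TV}}\Bigr),
\]
where the second inequality replaces each mixture pair by the worst pair of neighboring vertices via the joint convexity of the total variation distance.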
The idea behind Assouad’s lemma is to apply a two-point argument to each coordinate of the parameter and then sum over all coordinates. Hence, Assouad’s lemma typically improves on the best two-point argument by a factor of the dimension , given that the hypercube-type separation condition in Theorem 5 holds.
1.3. Generalized Fano’s inequality
In Fano’s inequality and Assouad’s lemma above, we introduce a random variable which is uniformly distributed on the set
or the hypercube
. Moreover, the separation condition must hold for each pair of the realizations of
. In general, we may wish to consider some non-uniform random variable
, and only require that the separation condition holds for most pairs. This is the focus of the following generalized Fano’s inequality.
Theorem 8 (Generalized Fano’s Inequality) Let
be any loss function, and
be any probability distribution on
. For any
, define
Then for
, we have
Proof: By Markov’s inequality, it suffices to show that for any estimator ,
We again use the second proof of Lemma 3. For any fixed , consider the deterministic kernel
which sends
to
. Then the composition
is a Bernoulli distribution with parameter
, and the composition
is a Bernoulli distribution with parameter
. Then by the data processing inequality of KL divergence,
Since , taking the expectation over
on both sides gives the desired inequality.
To see that Theorem 8 is indeed a generalization of the original Fano’s inequality, simply note that when and the separation condition in Lemma 1 holds, we have
and therefore the denominator
becomes the log-cardinality. Hence, in the generalized Fano’s inequality, the verification of the separation condition becomes upper bounding the probability
.
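Written out in one common formulation (notation of this aside): let $V \sim \pi$, let $X \mid V = v \sim P_{\theta_v}$, and for $\Delta > 0$ set $p_\Delta := \sup_{a}\, \pi\bigl(\{v : L(\theta_v, a) < \Delta\}\bigr)$. Then for every estimator $\hat\theta$,
\[
\mathbb{P}\bigl(L(\theta_V, \hat\theta) \ge \Delta\bigr) \;\ge\; 1 - \frac{I(V; X) + \log 2}{\log(1/p_\Delta)},
\qquad\text{and hence}\qquad
\inf_{\hat\theta}\sup_{\theta}\,\mathbb{E}_\theta\, L(\theta, \hat\theta) \;\ge\; \Delta\Bigl(1 - \frac{I(V; X) + \log 2}{\log(1/p_\Delta)}\Bigr).
\]
When $\pi$ is uniform and the hypotheses are separated as in Lemma 1 (up to a rescaling of $\Delta$), at most one hypothesis can be within distance $\Delta$ of any fixed action, so $p_\Delta = 1/N$ and the denominator reduces to the log-cardinality $\log N$, recovering the original Fano’s inequality.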
2. Applications
In this section we present some applications of the above tools to statistical examples. Here the applications are mostly simple and straightforward, and we defer some more sophisticated examples in other domains to the next lecture.
2.1. Example I: Gaussian mean estimation
We start with possibly the simplest example of Gaussian mean estimation. Consider with known
, and the target is to estimate the vector
under the squared
loss. Let
be the corresponding minimax risk.
We first show that any two-point argument fails. In fact, if the two points are chosen to be , the two-point argument gives
where is the normal CDF. Optimization only gives
.
Now we show that Assouad’s lemma can give us the rate-optimal lower bound . Let
for
, where
is a parameter to be chosen later. Since
the separation condition in Theorem 5 holds with . Consequently, Corollary 6 gives
and choosing gives
.
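As a quick numerical sanity check of this calculation, the following sketch (assuming a single observation $X \sim \mathcal{N}(\theta, \sigma^2 I_p)$; the constant $p\delta^2/8$ comes from a conservative derivation and need not match the normalization of Theorem 5) maximizes the resulting Assouad bound over $\delta$ and compares it with the risk $p\sigma^2$ of the naive estimator $\hat\theta = X$:

```python
import numpy as np
from scipy.stats import norm

def assouad_gaussian_bound(p, sigma, delta):
    """Assouad-type lower bound for estimating theta from one observation
    X ~ N(theta, sigma^2 I_p), using the hypercube theta_v = delta * v."""
    # Neighbouring hypotheses differ in a single coordinate of the mean by delta,
    # so their total variation distance is 2 * Phi(delta / (2 * sigma)) - 1.
    tv = 2 * norm.cdf(delta / (2 * sigma)) - 1
    # Separation: ||theta_v - theta_v'||_2^2 = delta^2 * d_H(v, v'), which yields
    # a lower bound of the form (p * delta^2 / 8) * (1 - max neighbouring TV).
    return p * delta ** 2 / 8 * (1 - tv)

p, sigma = 100, 1.0
deltas = np.linspace(0.1, 5.0, 200)
bounds = [assouad_gaussian_bound(p, sigma, d) for d in deltas]
best = int(np.argmax(bounds))
print(f"best delta ~ {deltas[best]:.2f}, Assouad bound ~ {bounds[best]:.1f}, "
      f"risk of the naive estimator X: {p * sigma ** 2:.1f}")
```

The maximizing $\delta$ is of order $\sigma$ and the bound is a constant fraction of $p\sigma^2$, matching the claimed rate; with $n$ observations one simply replaces $\sigma^2$ by $\sigma^2/n$.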
We can also establish the same lower bound using the generalized Fano’s inequality. Let , then Lemma 4 gives
Since for any with
(break ties arbitrarily), we have
choosing gives
for some numerical constant , where the last inequality is given by sub-Gaussian concentration. Consequently,
, and Theorem 8 gives
Again, choosing with small
gives
.
Remark 3 The original version of Fano’s inequality can also be applied here, where
is a maximal packing of
with
packing distance
.
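The packing alluded to in Remark 3 can be constructed inside the same hypercube. By the Gilbert–Varshamov bound (a standard counting argument, stated here in a convenient but not necessarily the lecture’s normalization), there exists a subset $\mathcal{V} \subseteq \{0,1\}^p$ with
\[
|\mathcal{V}| \ge 2^{p/8}
\qquad\text{and}\qquad
d_{\mathrm{H}}(v, v') \ge \frac{p}{4}\quad\text{for all } v \neq v' \in \mathcal{V},
\]
so the points $\theta_v = \delta v$, $v \in \mathcal{V}$, are pairwise separated by $\|\theta_v - \theta_{v'}\|_2^2 \ge p\delta^2/4$ while $\log|\mathcal{V}| \gtrsim p$, which is exactly what the original Fano’s inequality requires here.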
2.2. Example II: Sparse linear regression
Consider the following sparse linear regression , where
is a fixed design matrix, and
is a sparse vector with
. Here
is a known sparsity parameter, and we are interested in the minimax risk
of estimating
under the squared
loss. Note that when
, this problem reduces to sparse Gaussian mean estimation.
We apply the generalized Fano’s inequality to this problem. Since a natural difficulty of this problem is to recover the support of , we introduce the random vector
, where
is a parameter to be specified later,
,
denotes the restriction of
onto
, and
is uniformly chosen from all size-
subsets of
. Clearly,
By Lemma 4,
where is the Frobenius norm of
. Now it remains to upper bound
for
. Fix any
such that
holds with non-zero probability (otherwise the upper bound is trivial), and by symmetry we may assume that
. Now by triangle inequality,
implies that
. Hence,
for a numerical constant after some algebra. Consequently, Theorem 8 gives
Finally, choosing for some small constant
gives
In the special cases where or
consists of i.i.d.
entries, we arrive at the tight lower bound
for both sparse Gaussian mean estimation and compressed sensing.
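A side note on the mutual information bound used above: writing the noise level as $\sigma^2$ (an assumption of this aside), the Gaussian linear model has the closed-form KL divergence
\[
D_{\mathrm{KL}}\bigl(\mathcal{N}(X\theta, \sigma^2 I_n)\,\big\|\,\mathcal{N}(X\theta', \sigma^2 I_n)\bigr)
= \frac{\|X(\theta - \theta')\|_2^2}{2\sigma^2},
\]
so applying Lemma 4 with the reference measure $Q = \mathcal{N}(0, \sigma^2 I_n)$ gives $I(V; y) \le \mathbb{E}\,\|X\theta_V\|_2^2 / (2\sigma^2)$; evaluating this expectation for the randomly chosen support is what brings the Frobenius norm of the design matrix into the bound.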
2.3. Example III: Multi-armed bandit
Next we revisit the example of the multi-armed bandit in Lecture 5. Let be the time horizon,
be the total number of arms. For each
, the reward of pulling arm
at time
is an independent random variable
. The target of the learner is to devise a policy
where
is the arm to pull at time
, and
may depend on the entire observed history
and
. The learner would like to minimize the following worst-case regret:
where is a technical condition. As in Lecture 5, our target is to show that
via multiple hypothesis testing.
To prove the lower bound, a natural idea is to construct hypotheses where the
-th arm is the optimal arm under the
-th hypothesis. Specifically, we set
where is some parameter to be specified later. The construction is not entirely symmetric, and the reward distribution under the first arm is always
.
For a mean vector and policy
, let
be the non-negative loss function for this online learning problem. Then clearly the separation condition in Lemma 1 is fulfilled with
. Next, applying Lemma 2 to a star graph
with center
gives
where in step (a) we denote by the distribution of the observed rewards under mean vector
, step (b) follows from the inequality
presented in Lecture 5, step (c) is the evaluation of the KL divergence where
denotes the expected number of pulls of arm
under the mean vector
, step (d) follows from Jensen’s inequality, and step (e) is due to the deterministic inequality
. Now choosing
gives the desired lower bound.
The main reason why we apply Lemma 2 instead of Fano’s inequality and choose the same reward distribution for arm at all times is to deal with the random number of pulls of different arms under different reward distributions. In the above way, we may stick to the expectation
and apply the deterministic inequality
to handle the randomness.
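The evaluation in step (c) is an instance of the divergence decomposition for bandit trajectories, which may be worth recording explicitly (stated here for unit-variance Gaussian rewards, an assumption of this aside): if $P_\mu$ and $P_{\mu'}$ denote the laws of the observed history under mean vectors $\mu$ and $\mu'$ and the same policy, then
\[
D_{\mathrm{KL}}\bigl(P_\mu \,\|\, P_{\mu'}\bigr) = \sum_{j} \mathbb{E}_\mu[T_j]\,\frac{(\mu_j - \mu'_j)^2}{2},
\]
where the sum runs over the arms and $T_j$ denotes the number of pulls of arm $j$ within the horizon. Only the expectation $\mathbb{E}_\mu[T_j]$ enters, which is precisely why the randomness of the pull counts causes no trouble in the argument above.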
2.4. Example IV: Gaussian mixture estimation
Finally we look at a more involved example in nonparametric statistics. Let be a density on
which is a Gaussian mixture, i.e.,
for some density
, where
denotes the convolution. We consider the estimation of the density
given
i.i.d. observations
from
, and we denote by
the minimax risk under the squared
loss between real-valued functions. The central claim is that
which is slightly larger than the (squared) parametric rate .
Before proving this lower bound, we first gain some insights from the upper bound. In the problem formulation, the only restriction on the density is that
must be a Gaussian mixture. Since convolution makes functions smoother,
should be fairly smooth, and harmonic analysis suggests that the extent of smoothness is reflected in the speed of decay of the Fourier transform of
. Let
be the Fourier transform of
; then we have
and . Hence, if we truncate
to zero for all
, the
approximation error would be smaller than
. This suggests considering the kernel
on
with
Then the density estimator has mean
, which by Parseval’s identity has squared bias
Moreover, by Parseval again,
and a triangle inequality leads to the desired result.
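Written out under the Fourier convention $\widehat{f}(\omega) = \int f(x) e^{-i\omega x}\,dx$ (an assumption of this aside, with the Gaussian component taken to be standard normal as in the construction below), the tradeoff behind this estimator is as follows. Since $f$ is the convolution of a density with the standard normal density, $|\widehat{f}(\omega)| \le e^{-\omega^2/2}$, so truncating all frequencies $|\omega| > t$ incurs a squared $L_2$ bias of at most
\[
\frac{1}{2\pi}\int_{|\omega| > t} |\widehat{f}(\omega)|^2\, d\omega \;\lesssim\; e^{-t^2},
\]
while the integrated variance of the corresponding kernel density estimator based on $n$ samples is of order $t/n$. Balancing the two terms with $t \asymp \sqrt{\log n}$ gives an upper bound of order $\sqrt{\log n}/n$, consistent with the claimed minimax rate.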
The upper bound analysis motivates us to apply Fourier-type arguments in the lower bound. For (with integer
to be specified later), consider the density
where is some centered probability density,
is some parameter to be specified later,
are perturbation functions with
, and
is the PDF of
. To apply Assouad’s lemma to this hypercube structure, we require the following orthogonality condition
to ensure the separation condition in Theorem 5 with . By Plancherel’s identity, the above condition is equivalent to
Recall from Lecture 7 that the Hermite polynomials are orthogonal under the normal distribution; hence a natural candidate for is
, where we have used that
, and we restrict to odd degrees to ensure that
. This choice enjoys another nice property: the inverse Fourier transform of
has the following closed-form expression for
:
Moreover, since for large
, we have
. Therefore, we must have
to ensure the non-negativity of the density
.
Now the only remaining condition is to check that the -divergence between neighboring vertices is at most
, which is essentially equivalent to
A natural choice of is the PDF of
, with
to be specified. Splitting the above integral over
into
and
for some large constant
, the orthogonality relation of
gives
To evaluate the integral over , recall that
behaves as
for large
. Hence, the natural requirement
leads to the same upper bound
for the second integral, and therefore the largest choice of
is
.
To sum up, we have arrived at the lower bound
subject to the constraints and
. Hence, the optimal choices for the auxiliary parameters are
, leading to the desired lower bound.
3. Bibliographic Notes
The lower bound technique based on hypothesis testing was pioneered by Ibragimov and Khas’minskii (1977), who also applied Fano’s lemma (Fano (1952)) to a statistical setting. Assouad’s lemma is due to Assouad (1983), and we also refer to the survey paper Yu (1997). The tree-based technique (Lemma 2 and the analysis of the multi-armed bandit) is due to Gao et al. (2019), and the generalized Fano’s inequality is motivated by the distance-based Fano’s inequality of Duchi and Wainwright (2013); the current form is presented in the lecture notes of Duchi (2019).
For the examples, the lower bound for sparse linear regression is taken from Candes and Davenport (2013), and we also refer to Donoho and Johnstone (1994), Raskutti, Wainwright and Yu (2011), and Zhang, Wainwright and Jordan (2017) for related results. For the full proof of the Gaussian mixture example we refer to Kim (2014).
- Il’dar Abdullovich Ibragimov and Rafail Zalmanovich Khas’minskii. On the estimation of an infinite-dimensional parameter in Gaussian white noise. Doklady Akademii Nauk. Vol. 236. No. 5. Russian Academy of Sciences, 1977.
- Robert M. Fano, Class notes for transmission of information. Massachusetts Institute of Technology, Tech. Rep. (1952).
- Patrick Assouad, Densité et dimension. Annales de l’Institut Fourier. Vol. 33. No. 3. 1983.
- Bin Yu, Assouad, Fano, and Le Cam. Festschrift for Lucien Le Cam. Springer, New York, NY, 1997. 423–435.
- Zijun Gao, Yanjun Han, Zhimei Ren, and Zhengqing Zhou, Batched multi-armed bandit problem. Advances in Neural Information Processing Systems, 2019.
- John C. Duchi, and Martin J. Wainwright. Distance-based and continuum Fano inequalities with applications to statistical estimation. arXiv preprint arXiv:1311.2669 (2013).
- John C. Duchi, Lecture notes for statistics 311/electrical engineering 377. 2019.
- Emmanuel J. Candes, and Mark A. Davenport. How well can we estimate a sparse vector? Applied and Computational Harmonic Analysis 34.2 (2013): 317–323.
- David L. Donoho, and Iain M. Johnstone. Minimax risk over
-balls for
-error. Probability Theory and Related Fields 99.2 (1994): 277–303.
- Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax rates of estimation for high-dimensional linear regression over
-balls. IEEE Transactions on Information Theory 57.10 (2011): 6976–6994.
- Yuchen Zhang, Martin J. Wainwright, and Michael I. Jordan. Optimal prediction for sparse linear models? Lower bounds for coordinate-separable M-estimators. Electronic Journal of Statistics 11.1 (2017): 752–799.
- Arlene K. H. Kim. Minimax bounds for estimation of normal mixtures. Bernoulli 20.4 (2014): 1802–1818.