(Warning: These materials may be subject to lots of typos and errors. We are grateful if you could spot errors and leave suggestions in the comments, or contact the author at yjhan@stanford.edu.)
In the last lecture we saw general tools and concrete examples for reducing a statistical estimation problem to multiple hypothesis testing. In those examples the loss function was typically straightforward and the construction of the hypotheses was natural. However, beyond statistical estimation the loss function may become more complicated (e.g., the excess risk in learning theory), and the hypothesis construction may be implicit. To better illustrate the power of multiple hypothesis testing, this lecture is devoted exclusively to further examples, including potentially non-statistical problems.
1. Example I: Density Estimation
This section considers one of the most fundamental problems in nonparametric statistics, namely density estimation. It is well known in the nonparametric statistics literature that if a density on the real line has Hölder smoothness parameter $s>0$, then it can be estimated within accuracy $\Theta(n^{-s/(2s+1)})$ (say, under the $L_2$ loss) based on $n$ samples. However, in this section we will show that this is no longer the correct minimax rate when we generalize to other notions of smoothness and other natural loss functions.
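As a point of reference for this classical rate, here is a minimal numpy sketch (not part of the notes) of a kernel density estimate with the bandwidth $h \asymp n^{-1/(2s+1)}$ matched to the smoothness; the Gaussian kernel and the standard normal test density are illustrative choices.

```python
import numpy as np

def kde(samples, grid, s=2.0):
    """Kernel density estimate with the classical bandwidth h ~ n^{-1/(2s+1)}
    matched to smoothness s (the Gaussian kernel is a valid choice for s <= 2)."""
    n = len(samples)
    h = n ** (-1.0 / (2 * s + 1))
    diffs = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

# toy check on a standard normal density
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
grid = np.linspace(-3, 3, 61)
f_hat = kde(x, grid)
f_true = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
print("max deviation on the grid:", np.abs(f_hat - f_true).max())
```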
Fix an integer and some norm parameter
, the Sobolev space
on
is defined as
where the derivative is defined in terms of distributions (
not necessarily belongs to
). Our target is to determine the minimax rate
of estimating the density
based on
i.i.d. observations from
, under the general
loss with
. For simplicity we suppress the dependence on
and assume that
is a large positive constant.
The main result of this section is as follows:
This rate is also tight. We see that as are fixed and
increases from
to
, the minimax rate for density estimation increases from
to
. We will show that these rates correspond to the “dense” and the “sparse” cases, respectively.
1.1. Dense case:
We first look at the case with , which always holds for Hölder smoothness where the norm parameter is
. The reason why it is called the “dense” case is that in the hypothesis construction, the density is supported everywhere on
, and the difficulty is to classify the directions of the bumps around some common density. Specifically, let
be a bandwidth parameter to be specified later, we consider
for , where
is a fixed smooth function supported on
with
, and
is some tuning parameter to ensure that
. Since
we conclude that the choice is sufficient. To invoke Assouad's lemma, first note that for
the separation condition is fulfilled with
. Moreover, for any
with
, we have
Hence, the largest to ensure that
is
, which for
gives the minimax lower bound
This lower bound is also valid for the general case due to the monotonicity of
norms.
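The flavor of this construction can be seen in a short numerical sketch: each hypothesis perturbs the uniform density on $[0,1]$ by signed bumps of width $h$ whose amplitude is a power of $h$, while remaining a valid density. The bump function, the constant, and the exponent below are my own illustrative choices, not the exact ones in the notes.

```python
import numpy as np

def bump(u):
    # a smooth function supported on (0, 1) with zero integral (illustrative
    # stand-in for the fixed function used in the construction)
    core = np.where((u > 0) & (u < 1),
                    np.exp(-1.0 / np.clip(u * (1 - u), 1e-12, None)), 0.0)
    return core * np.sin(2 * np.pi * u)   # antisymmetric about 1/2, so it integrates to 0

def dense_hypothesis(v, s, grid):
    """f_v = 1 + c * h^s * sum_j v_j * bump((x - j*h)/h) on [0,1]:
    one signed bump per cell of width h = 1/len(v)."""
    h = 1.0 / len(v)
    c = 0.5 / np.abs(bump(np.linspace(0, 1, 1001))).max()   # keeps f_v >= 1/2
    f = np.ones_like(grid)
    for j, vj in enumerate(v):
        f += c * vj * h ** s * bump((grid - j * h) / h)
    return f

grid = np.linspace(0, 1, 20001)
rng = np.random.default_rng(0)
v = rng.choice([-1, 1], size=16)
f_v = dense_hypothesis(v, s=1, grid=grid)
print("min of f_v:", f_v.min(), " integral ~", f_v.mean())   # uniform grid on [0,1]
```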
1.2. Sparse case:
Next we turn to the sparse case , where the name "sparse" comes from the fact that the hypotheses we construct are densities with only one bump on a small interval, and the main difficulty is to locate that interval. Specifically, let
be bandwidth and smoothness parameters to be specified later, and we consider
where , and
is a fixed smooth density on
. Clearly
is a density on
for all
, and
Consequently, , and the Sobolev ball requirement leads to the choice
.
Next we check the conditions of Fano’s inequality. Since
the separation condition is fulfilled with . Moreover, since
we conclude that . Consequently, Fano’s inequality gives
and choosing gives the desired result.
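To make the "single bump among many possible locations" picture concrete, here is a small sketch: one bump of width $h$ placed at one of $1/h$ candidate slots, with the flat part lowered so that the total mass stays one. The bump density and the amplitude are my own illustrative choices rather than the notes' exact construction.

```python
import numpy as np

def g(u):
    # a fixed probability density supported on [0,1] (Beta(3,3); an illustrative
    # stand-in for the smooth density used in the notes)
    return np.where((u >= 0) & (u <= 1), 30 * u ** 2 * (1 - u) ** 2, 0.0)

def sparse_hypothesis(j, m, amp, grid):
    """A density with a single bump of width h = 1/m located at slot j and a
    slightly lowered flat part elsewhere; it integrates to one whenever amp*h <= 1."""
    h = 1.0 / m
    return (1.0 - amp * h) + amp * g((grid - j * h) / h)

grid = np.linspace(0, 1, 20001)
f_3 = sparse_hypothesis(j=3, m=16, amp=4.0, grid=grid)
print("min:", f_3.min(), " integral ~", f_3.mean())   # uniform grid on [0,1]
```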
2. Example II: Aggregation
In estimation or learning problems, sometimes the learner is given a set of candidate estimators or predictors, and she aims to aggregate them into a new estimate based on the observed data. In scenarios where the candidates are not explicit, aggregation procedures can still be employed based on sample splitting, where the learner splits the data into independent parts, uses the first part to construct the candidates and the second part to aggregate them.
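A minimal sketch of the sample-splitting idea is given below (illustrative only: the candidate-fitting routine, the squared loss, and the choice of simply selecting the best candidate on the held-out half are my own).

```python
import numpy as np

def split_and_aggregate(X, y, fit_candidates, rng):
    """Sample-splitting aggregation sketch: fit candidates on one half of the
    data, then pick the candidate with the smallest empirical risk on the
    other half (a model-selection-type aggregation step)."""
    n = len(y)
    perm = rng.permutation(n)
    idx1, idx2 = perm[: n // 2], perm[n // 2:]
    candidates = fit_candidates(X[idx1], y[idx1])        # built on the first half
    risks = [np.mean((y[idx2] - f(X[idx2])) ** 2) for f in candidates]
    return candidates[int(np.argmin(risks))]

# toy usage: candidates are polynomial fits of different degrees
def fit_candidates(X, y):
    return [
        (lambda c: (lambda x: np.polyval(c, x)))(np.polyfit(X, y, deg))
        for deg in (1, 2, 3, 5)
    ]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.sin(3 * X) + 0.3 * rng.standard_normal(200)
f_hat = split_and_aggregate(X, y, fit_candidates, rng)
print("aggregate prediction at 0.5:", f_hat(0.5))
```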
In this section we restrict ourselves to the regression setting where there are i.i.d. observations
, and
where is the independent noise, and
is an unknown regression function with
. There is also a set of candidates
, and the target of aggregation is to find some
(not necessarily in
) to minimize
where the expectation is taken over the random observations ,
for any
, and
is a suitable subset corresponding to different types of aggregation. For a fixed data distribution
, the minimax rate of aggregation is defined as the minimum worst-case excess risk over all bounded functions
and candidate functions
:
Some special cases are in order:
- When
, the estimate
is compared with the best linear aggregates and this is called linear aggregation, with optimal rate denoted as
;
- When
, the estimate
is compared with the best convex combination of candidates (and zero) and this is called convex aggregation, with optimal rate denoted as
;
- When
, the set of canonical vectors, the estimate
is compared with the best candidate in
and this is called model selection aggregation, with optimal rate denoted as
.
The main result in this section is summarized in the following theorem.
Theorem 1 If there is a cube
such that
admits a density bounded from below w.r.t. the Lebesgue measure on
, then
We remark that the rates in Theorem 1 are all tight. In the upcoming subsections we will show that although the loss function of aggregation becomes more complicated, the idea of multiple hypothesis testing can still lead to tight lower bounds.
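Before turning to the lower bounds, the three oracles appearing in Theorem 1 can be made concrete on a finite sample. The rough sketch below computes, for candidate predictions stacked in a matrix, the best linear aggregate, a best convex aggregate over nonnegative weights summing to at most one (via a short Frank–Wolfe loop, my own choice of solver), and the best single candidate.

```python
import numpy as np

def aggregate(F, y, steps=200):
    """Given candidate predictions F (n x M) and responses y, compute the
    empirical versions of the three oracles: best linear combination, best
    convex combination (weights >= 0, sum <= 1), and best single candidate."""
    n, M = F.shape
    # linear aggregation: unconstrained least squares
    w_lin, *_ = np.linalg.lstsq(F, y, rcond=None)
    # convex aggregation: Frank-Wolfe over {w >= 0, sum(w) <= 1}
    w_cvx = np.zeros(M)
    for t in range(steps):
        grad = F.T @ (F @ w_cvx - y) / n
        j = int(np.argmin(grad))
        vertex = np.zeros(M)
        if grad[j] < 0:            # otherwise the vertex 0 is the linear minimizer
            vertex[j] = 1.0
        w_cvx += 2.0 / (t + 2) * (vertex - w_cvx)
    # model selection aggregation: best single candidate
    j_star = int(np.argmin(np.mean((F - y[:, None]) ** 2, axis=0)))
    return w_lin, w_cvx, j_star

# toy usage with random candidates
rng = np.random.default_rng(0)
F = rng.standard_normal((500, 6))
y = 0.5 * F[:, 0] + 0.3 * F[:, 2] + 0.1 * rng.standard_normal(500)
w_lin, w_cvx, j_star = aggregate(F, y)
print(np.round(w_lin, 2), np.round(w_cvx, 2), j_star)
```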
2.1. Linear aggregation
Since the oracle term is hard to deal with, a natural idea would be to consider a well-specified model such that this term is zero. Since
admits a density, we may find a partition
such that
for all
. Consider the candidate functions
, and for
, let
where is to be specified.
To apply Assouad's lemma, note that for the loss function
the orthogonality of the candidates implies that the separability condition holds for
. Moreover,
Therefore, Assouad's lemma (combined with Pinsker's inequality) gives
Choosing completes the proof.
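For concreteness, the well-specified regression functions used in this hypercube argument can be instantiated as follows; the indicator candidates over an equal-probability partition of $[0,1]$ are my own illustrative choice and may differ from the notes' exact construction.

```python
import numpy as np

def hypercube_regression_function(v, eps):
    """A well-specified regression function of the form
    f_v(x) = eps * sum_j v_j * 1(x in A_j), with A_j an equal-probability
    partition of [0,1] into len(v) cells (illustrative instantiation)."""
    M = len(v)
    def f(x):
        j = np.minimum((np.asarray(x) * M).astype(int), M - 1)   # index of the cell A_j
        return eps * np.asarray(v)[j]
    return f

v = np.array([1, -1, 1, 1, -1])
f_v = hypercube_regression_function(v, eps=0.2)
print(f_v([0.05, 0.45, 0.95]))     # eps * (v_0, v_2, v_4)
```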
2.2. Convex aggregation
Convex aggregation differs from linear aggregation only in that the linear coefficients must be non-negative and sum to at most one. Note that the only requirement in the previous arguments is the orthogonality of under
, we may choose any orthonormal functions
under
with
(existence follows from the cube assumption) and use the density lower bound assumption to conclude that
(to ensure the desired separation). Then the choice of
above becomes
, and we see that the previous arguments still hold for convex aggregation if
. Hence, it remains to prove that when
,
Again we consider the well-specified case where
with the above orthonormal functions , a constant scaling factor
, and a uniform size-
subset of
being
and zero otherwise (
is to be specified). Since the vector
is no longer a vertex of a hypercube, we apply the generalized Fano's inequality instead of Assouad's lemma. First, applying Lemma 4 in Lecture 8 gives
Second, as long as , using arguments similar to those in the sparse linear regression example in Lecture 8, for
with a small constant
we have
with . Therefore, the generalized Fano’s inequality gives
and choosing completes the proof.
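The coefficient vectors underlying the second part of this argument, uniform weights on a random size-$m$ subset of the $M$ coordinates scaled to lie in the convex constraint set, are easy to write down explicitly (a small illustrative sketch).

```python
import numpy as np

def sparse_convex_vertex(M, m, scale, rng):
    """A coefficient vector of the type used in the convex-aggregation lower
    bound sketch: a uniformly random size-m subset of the M coordinates gets
    weight scale/m, the rest are zero, so the weights are nonnegative and sum
    to scale <= 1."""
    theta = np.zeros(M)
    S = rng.choice(M, size=m, replace=False)
    theta[S] = scale / m
    return theta

rng = np.random.default_rng(0)
print(sparse_convex_vertex(M=10, m=3, scale=1.0, rng=rng))
```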
2.3. Model selection aggregation
As before, we consider the well-specified case where we select orthonormal functions on
, and let
,
. The orthonormality of
gives
for the separation condition of the original Fano’s inequality, and
Hence, Fano’s inequality gives
and choosing completes the proof.
We may further show that no model selector can attain the above optimal rate of model selection aggregation. Hence, even to compare with the best candidate function in , the optimal aggregate
should not be restricted to
. Specifically, we have the following result, whose proof is left as an exercise.
Exercise 1 Under the same assumptions of Theorem 1, show that
3. Example III: Learning Theory
Consider the classification problem where there are training data
i.i.d. drawn from an unknown distribution
, with
. There is a given collection of classifiers
consisting of functions
, and given the training data, the target is to find some classifier
with a small excess risk
which is the difference in performance between the chosen classifier and the best classifier in the function class . In the definition of the excess risk, the expectation is taken with respect to the randomness in the training data. The main focus of this section is to characterize the minimax excess risk of a given function class
, i.e.,
The subscript “pes” here stands for “pessimistic”, where can be any distribution over
and there may not be a good classifier in
, i.e.,
may be large. We also consider the optimistic scenario where there exists a perfect (error-free) classifier in
. Mathematically, denoting by
the collection of all probability distributions
on
such that
, the minimax excess risk of a given function class
in the optimistic case is defined as
The central claim of this section is the following:
Theorem 2 Let the VC dimension of
be
. Then
Recall that the definition of VC dimension is as follows:
Definition 3 For a given function class
consisting of mappings from
to
, the VC dimension of
is the largest integer
such that there exist
points from
which can be shattered by
. Mathematically, it is the largest
such that there exist
, and for all
, there exists a function
such that
for all
.
VC dimension plays a significant role in statistical learning theory. For example, it is well-known that for the empirical risk minimization (ERM) classifier
we have
Hence, Theorem 2 shows that the ERM classifier attains the minimax excess risk for all function classes, and the VC dimension exactly characterizes the difficulty of the learning problem.
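A minimal sketch of the ERM classifier over a finite class is given below; the threshold class and the noisy-label toy data are my own illustrative choices.

```python
import numpy as np

def erm(classifiers, X, y):
    """Empirical risk minimization over a finite class: return the classifier
    with the smallest training error."""
    errors = [np.mean([f(x) != yi for x, yi in zip(X, y)]) for f in classifiers]
    return classifiers[int(np.argmin(errors))]

# toy usage: noisy threshold labels, class of threshold classifiers
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0, 1, n)
y = ((X >= 0.4).astype(int) ^ (rng.uniform(size=n) < 0.1)).astype(int)  # 10% label noise
classifiers = [(lambda t: (lambda x: int(x >= t)))(t) for t in np.linspace(0, 1, 51)]
f_hat = erm(classifiers, X, y)

X_test = rng.uniform(0, 1, 10000)
y_test = ((X_test >= 0.4).astype(int) ^ (rng.uniform(size=10000) < 0.1)).astype(int)
print("test error of ERM:", np.mean([f_hat(x) != yi for x, yi in zip(X_test, y_test)]))
```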
3.1. Optimistic case
We first apply Assouad's lemma to the optimistic scenario. By the definition of VC dimension, there exist points and functions
such that for all
and
, we have
. Consider
hypotheses indexed by
: the distribution
is always
where is to be specified later. For the conditional distribution
, let
hold almost surely under the joint distribution
. Clearly this is the optimistic case, for there always exists a perfect classifier in
.
We first examine the separation condition in Assouad's lemma, where the loss function here is
For any and any
, we have
and therefore the separation condition holds for . Moreover, if
and
only differ in the
-th component, then
and
are completely indistinguishable if
does not appear in the training data. Hence, by coupling,
Therefore, Assouad's lemma gives
and choosing yields the desired lower bound.
3.2. Pessimistic case
The analysis of the pessimistic case differs only in the construction of the hypotheses. As before, fix and
such that
for all
and
. Now let
be the uniform distribution on
, and under
, the conditional probability
is
and is to be specified later. In other words, the classifier
only outperforms the random guess by a margin of
under
.
Again we apply Assouad's lemma for the lower bound of . First note that for all
,
Hence, for any and any
,
By triangle inequality, the separation condition is fulfilled with . Moreover, direct computation yields
and therefore tensorization gives
Finally, choosing gives the desired result.
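For intuition, here is one way to sample from a hard distribution of this pessimistic flavor; it is an illustrative instantiation rather than the notes' exact construction, and taking delta = 1/2 removes the label noise and gives an optimistic-style instance.

```python
import numpy as np

def sample_hard_distribution(v, delta, n, rng):
    """Sample n pairs (X, Y): X uniform on d shattered points {0,...,d-1},
    and Y in {0,1} equals the 'preferred' label v_j of its point with
    probability 1/2 + delta (one possible instantiation)."""
    d = len(v)
    X = rng.integers(0, d, size=n)
    keep = rng.uniform(size=n) < 0.5 + delta
    Y = np.where(keep, v[X], 1 - v[X])
    return X, Y

rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=8)          # a vertex of the hypercube {0,1}^d
X, Y = sample_hard_distribution(v, delta=0.1, n=1000, rng=rng)
print("empirical P(Y = v_X):", np.mean(Y == v[X]))   # roughly 0.6
```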
3.3. General case
We may interpolate between the pessimistic and optimistic cases as follows: for any given , we restrict to the set
of joint distributions with
Then we may define the minimax excess risk over as
Clearly the optimistic case corresponds to , and the pessimistic case corresponds to
. Similar to the above arguments, we have the following lower bound on
, whose proof is left as an exercise.
Exercise 2 Show that when the VC dimension of
is
, then
4. Example IV: Stochastic Optimization
Consider the following oracle formulation of convex optimization: let be the convex objective function, and we aim to find the minimum value
. To do so, the learner can query some first-order oracle adaptively, where given the query
the oracle outputs a pair
consisting of the objective value at
(zeroth-order information) and a sub-gradient of
at
(first-order information). The queries can be made in an adaptive manner where
can depend on all previous outputs of the oracle. The target of convex optimization is to determine the minimax optimization gap after
queries defined as
where is a given class of convex functions, and the final query
can only depend on the past outputs of the oracle.
Since there is no randomness involved in the above problem, the idea of multiple hypothesis testing cannot be directly applied here. Therefore, in this section we consider the simpler problem of stochastic optimization and postpone the general case to later lectures. Specifically, suppose that the above first-order oracle is stochastic, i.e., it only outputs
such that
, and
. Let
be the set of all convex
-Lipschitz functions (in
norm),
, and assume that the subgradient estimate returned by the oracle
always satisfies
as well. Now the target is to determine the quantity
where the expectation is taken over the randomness in the oracle output. The main result in this section is summarized in the following theorem:
Theorem 4
Since it is well known that stochastic gradient descent attains the optimization gap for convex Lipschitz functions, the lower bound in Theorem 4 is tight.
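A minimal sketch of this matching upper bound, projected stochastic subgradient descent with the standard step size on a Euclidean ball, is given below; the $\ell_1$-type test function and the noise model are my own choices.

```python
import numpy as np

def projected_sgd(grad_oracle, x0, radius, L, T, rng):
    """Projected stochastic subgradient descent on the Euclidean ball of the
    given radius, step size ~ radius/(L*sqrt(T)); returns the average iterate."""
    x = np.array(x0, dtype=float)
    avg = np.zeros_like(x)
    eta = radius / (L * np.sqrt(T))
    for _ in range(T):
        g = grad_oracle(x, rng)                  # stochastic subgradient at x
        x = x - eta * g
        nx = np.linalg.norm(x)
        if nx > radius:                          # project back onto the ball
            x *= radius / nx
        avg += x / T
    return avg

# toy usage: f(x) = ||x - c||_1 with noisy subgradients, minimum value 0
c = np.array([0.3, -0.2, 0.1])
def grad_oracle(x, rng):
    return np.sign(x - c) + 0.5 * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x_bar = projected_sgd(grad_oracle, x0=np.zeros(3), radius=1.0, L=np.sqrt(3), T=5000, rng=rng)
print("optimization gap:", np.sum(np.abs(x_bar - c)))
```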
Now we apply the multiple hypothesis testing idea to prove Theorem 4. Since the randomness only comes from the oracle, we should design the stochastic oracle carefully to reduce the optimization problem to testing. A natural way is to choose some function such that
is convex
-Lipschitz for all
, and
. In this way, the oracle can simply generate i.i.d.
, and reveal the random vector
to the learner at the
-th query. Hence, the proof boils down to finding a collection of probability distributions
such that they are well separated while being hard to distinguish based on
observations.
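One concrete way to implement such a stochastic first-order oracle, loosely following Agarwal et al. (2009), is to average two shifted absolute-value pieces per coordinate with a Bernoulli selector drawn fresh at every query; the constants and the exact functional form below are illustrative only.

```python
import numpy as np

def make_bernoulli_oracle(v, delta, rng):
    """Stochastic first-order oracle for the function
    f_v(x) = (1/d) * sum_i E_z[ |x_i + 1/2| if z_i = 1 else |x_i - 1/2| ],
    where z_i ~ Bernoulli(1/2 + v_i * delta) is drawn at every query; the
    returned value and subgradient are unbiased for f_v and its subgradient."""
    d = len(v)
    p = 0.5 + delta * np.asarray(v, dtype=float)    # per-coordinate bias
    def oracle(x):
        z = rng.uniform(size=d) < p
        shift = np.where(z, 0.5, -0.5)
        value = np.sum(np.abs(x + shift)) / d
        grad = np.sign(x + shift) / d               # subgradient of the sampled piece
        return value, grad
    return oracle

rng = np.random.default_rng(0)
v = rng.choice([-1, 1], size=4)
oracle = make_bernoulli_oracle(v, delta=0.1, rng=rng)
val, grad = oracle(np.zeros(4))
print(val, grad)
```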
We first look at the separation condition. Since the loss function of this problem is , we see that
which is known as the optimization distance . Now consider
and for , let
with
defined as
Clearly is convex and
-Lipschitz, and for
we have
Hence, it is straightforward to verify that , and
In other words, the separation condition in Assouad's lemma is fulfilled with . Moreover, simple algebra gives
and therefore Assouad's lemma gives
and choosing completes the proof of Theorem 4. In fact, the above arguments also show that if the
norm is replaced by the
norm, then
5. Bibliographic Notes
The example of nonparametric density estimation over Sobolev balls is taken from Nemirovski (2000), which also contains the tight minimax linear risk among all linear estimators. For the upper bound, linear procedures fail to achieve the minimax rate whenever , and some non-linearities are necessary for the minimax estimators either in the function domain (Lepski, Mammen and Spokoiny (1997)) or the wavelet domain (Donoho et al. (1996)).
Linear and convex aggregation were proposed in Nemirovski (2000) for adaptive nonparametric thresholding estimators, and the concept of model selection aggregation is due to Yang (2000). For the optimal rates of the different types of aggregation (together with upper bounds), we refer to Tsybakov (2003) and Leung and Barron (2006).
The examples from statistical learning theory and stochastic optimization are similar in nature. The results of Theorem 2 and the corresponding upper bounds are taken from Vapnik (1998), albeit with a different proof language. For general results on the oracle complexity of convex optimization, we refer to the wonderful book by Nemirovski and Yudin (1983) and the lecture notes Nemirovski (1995). The current proof of Theorem 4 is due to Agarwal et al. (2009).
- Arkadi Nemirovski, Topics in non-parametric statistics. Ecole d’Eté de Probabilités de Saint-Flour 28 (2000): 85.
- Oleg V. Lepski, Enno Mammen, and Vladimir G. Spokoiny, Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. The Annals of Statistics 25.3 (1997): 929–947.
- David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard, Density estimation by wavelet thresholding. The Annals of Statistics (1996): 508–539.
- Yuhong Yang, Combining different procedures for adaptive regression. Journal of Multivariate Analysis 74.1 (2000): 135–161.
- Alexandre B. Tsybakov, Optimal rates of aggregation. Learning theory and kernel machines. Springer, Berlin, Heidelberg, 2003. 303–313.
- Gilbert Leung and Andrew R. Barron, Information theory and mixing least-squares regressions. IEEE Transactions on Information Theory 52.8 (2006): 3396–3410.
- Vladimir Vapnik, Statistical learning theory. Wiley, New York (1998): 156–160.
- Arkadi Nemirovski and David Borisovich Yudin, Problem complexity and method efficiency in optimization. 1983.
- Arkadi Nemirovski, Information-based complexity of convex programming. Lecture Notes, 1995.
- Alekh Agarwal, Martin J. Wainwright, Peter L. Bartlett, and Pradeep K. Ravikumar, Information-theoretic lower bounds on the oracle complexity of convex optimization. Advances in Neural Information Processing Systems, 2009.