(Warning: These materials may be subject to lots of typos and errors. We are grateful if you could spot errors and leave suggestions in the comments, or contact the author at yjhan@stanford.edu.)
In the last two lectures we have seen two specific categories of Le Cam's two-point method, i.e., testing between two single hypotheses, or between a single hypothesis and a mixture of hypotheses. Of course, the most powerful and natural generalization of the two-point method is to test between two mixtures of distributions, which by the minimax theorem is potentially the best possible approach to testing between two composite hypotheses. However, the least favorable prior (mixture) may be hard to find, and it may be theoretically difficult to upper bound the total variation distance between two mixtures of distributions. In this lecture, we show that both problems are closely related to moment matching, via examples in Gaussian and Poisson models.
1. Fuzzy Hypothesis Testing and Moment Matching
In this section, we present the general tools necessary for this lecture. First, we prove a generalized two-point method also known as fuzzy hypothesis testing. To upper bound the crucial divergence term between two mixtures, we introduce orthogonal polynomials under different distributions and show that the upper bound depends on the moment differences of the two mixing distributions.
1.1. Mixture vs. Mixture
We first state the generalized two-point theorem for two mixtures. As usual, we require that any two points in the respective mixtures be well separated while the mixtures remain indistinguishable based on samples. However, since many natural choices of mixtures may not satisfy the separation property in the worst case, the next theorem is slightly more flexible and only requires that the mixtures be separated with large probability.
Theorem 1 (Mixture vs. Mixture) Let
be any loss function, and there exist
and
such that

Then for any probability measures
supported on
, we have

Proof: For
, let
be the conditional probability measure of
conditioned on
, i.e.,
Simple algebra gives
. By coupling, we also have
Now the desired result follows from the standard two-point arguments and the triangle inequality of the total variation distance.
The central quantity in Theorem 1 is
, the total variation distance between mixture distributions with priors
and
. A general upper bound on this quantity is very hard to obtain, but the next two sections will show that it is small when the moments of
and
are close to each other if the model
is Gaussian or Poisson.
1.2. Hermite and Charlier Polynomials
This section reviews some preliminary results on orthogonal polynomials under a fixed probability distribution. Let
be a probability measure on
with
and all moments finite. Recall that functions
with
are called orthogonal under
iff
for
and all
. By orthogonal polynomials we mean that for each
,
is a polynomial with degree
.
The simplest way to construct orthogonal polynomials is via the Gram–Schmidt orthogonalization. Specifically, we may choose
, and
to be the orthogonal component of
projected onto
with the inner product structure
. This approach works for general distributions and is easy to implement in practice, but it gives little insight into the properties of
. We shall take a different approach to arrive at orthogonal functions, which turn out to be polynomials in the Gaussian and Poisson models.
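To make the Gram–Schmidt route concrete, here is a minimal numerical sketch (in Python; the helper names are ours and not from any library) that orthogonalizes the monomials under a discrete approximation of a given measure. Applied to a quadrature discretization of $\mathcal{N}(0,1)$, the monic outputs coincide with the probabilists' Hermite polynomials discussed below.

```python
import numpy as np

def gram_schmidt_polys(x, w, K):
    """Monomial-basis coefficients of monic polynomials p_0, ..., p_K that are
    orthogonal w.r.t. the discrete measure sum_i w[i] * delta_{x[i]}."""
    V = np.vander(x, K + 1, increasing=True)   # V[:, k] = x**k on the support
    coefs, vals = [], []                       # coefficients and values of p_0..p_{k-1}
    for k in range(K + 1):
        c = np.zeros(K + 1); c[k] = 1.0        # start from the monomial x^k
        v = V[:, k].astype(float)
        for j in range(k):                     # subtract projections onto p_0..p_{k-1}
            proj = np.sum(w * v * vals[j]) / np.sum(w * vals[j] ** 2)
            c -= proj * coefs[j]
            v -= proj * vals[j]
        coefs.append(c); vals.append(v)
    return coefs

# Example: the N(0,1) measure, discretized by Gauss-Hermite(E) quadrature.
nodes, weights = np.polynomial.hermite_e.hermegauss(40)   # probabilists' weight
weights = weights / weights.sum()
polys = gram_schmidt_polys(nodes, weights, 4)
# The monic outputs match the probabilists' Hermite polynomials:
# He_2(x) = x^2 - 1  ->  [-1, 0, 1, 0, 0],   He_3(x) = x^3 - 3x  ->  [0, -3, 0, 1, 0]
print(np.round(polys[2], 6), np.round(polys[3], 6))
```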
Let
be a family of distributions on
with
. Assume that
admits the following local expansion around
:
The next lemma claims that under specific conditions of
, the functions
are orthogonal under
.
Lemma 2 Under the above conditions, if for all
the quantity

depends only on their product
and
, then the functions
are orthogonal under
.
Remark 1 Recall that the quantity
plays an important role in the Ingster-Suslina method in Lecture 6. The upper bounds in the next section can be thought of as a generalization of the Ingster-Suslina method, with the help of proper orthogonality properties.
Proof: The local expansion of likelihood ratio gives
The condition of Lemma 2 implies that the coefficient of the monomial
on the RHS with
must be zero, as desired.
Exercise 1 Show that under the conditions in Lemma 2,
for some scalar
and any
. In other words,
is an unbiased estimator of
up to scaling in the location model
.
The condition of Lemma 2 is satisfied by various well-known probability distributions. For example,
and for any
,
In fact, the functions
given in the defining equation (1) in the Gaussian and Poisson models are both polynomials of degree
, known as the Hermite polynomial
and the Charlier polynomial
, respectively. The proof of Lemma 2 gives the following orthogonality relations:
Throughout this lecture, we won't need the specific forms of the Hermite and Charlier polynomials; we shall only need the defining property (1) and the above orthogonality relations.
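As a quick sanity check (a sketch, not part of the lecture's argument), the probabilists' Hermite polynomials shipped with numpy satisfy the orthogonality relation $\mathbb{E}[\mathrm{He}_j(Z)\mathrm{He}_k(Z)] = k!\,1\{j=k\}$ for $Z\sim\mathcal{N}(0,1)$, with the $m!$ normalization that appears in Theorem 3 below, as well as the exponential generating function $e^{uz-u^2/2}=\sum_k \mathrm{He}_k(z)u^k/k!$. A Charlier analogue would require coding its three-term recurrence, since numpy/scipy do not ship Charlier polynomials.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Gauss-Hermite(E) quadrature integrates polynomials exactly against the N(0,1)
# weight; normalizing the weights turns it into a discrete proxy for the N(0,1) law.
nodes, weights = He.hermegauss(60)
weights = weights / weights.sum()

def herm(k, x):
    """Probabilists' Hermite polynomial He_k evaluated at x."""
    c = np.zeros(k + 1); c[k] = 1.0
    return He.hermeval(x, c)

# Orthogonality: E[He_j(Z) He_k(Z)] = k! * 1{j = k}.
for j in range(5):
    for k in range(5):
        val = np.sum(weights * herm(j, nodes) * herm(k, nodes))
        assert abs(val - (factorial(k) if j == k else 0.0)) < 1e-8

# Generating function: exp(u*z - u^2/2) = sum_k He_k(z) u^k / k!.
u, z = 0.3, 1.7
lhs = np.exp(u * z - u ** 2 / 2)
rhs = sum(herm(k, z) * u ** k / factorial(k) for k in range(30))
print(abs(lhs - rhs))   # numerically zero
```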
1.3. Divergences between Mixtures
Now we are ready to present upper bounds on the total variation distance between two Gaussian mixtures and between two Poisson mixtures. We also provide upper bounds on the $\chi^2$-divergence due to its nice tensorization property.
We first deal with the Gaussian location model. Let $U$ and $U'$ be two random variables on $\mathbb{R}$, and let $\mathbb{E}\,\mathcal{N}(\mu+U,1)$ denote the Gaussian mixture with random mean $\mu+U$.
Theorem 3 (Divergence between Gaussian Mixtures) For any $\mu\in\mathbb{R}$, we have
$$\left\|\mathbb{E}\,\mathcal{N}(\mu+U,1) - \mathbb{E}\,\mathcal{N}(\mu+U',1)\right\|_{\rm TV} \le \frac{1}{2}\left(\sum_{m=0}^\infty \frac{\left|\mathbb{E}[U^m] - \mathbb{E}[(U')^m]\right|^2}{m!}\right)^{\frac{1}{2}}.$$
Moreover, if $|U|\le M$ and $|U'|\le M$ almost surely, then
$$\chi^2\left(\mathbb{E}\,\mathcal{N}(\mu+U,1),\ \mathbb{E}\,\mathcal{N}(\mu+U',1)\right) \le e^{M^2/2}\cdot \sum_{m=0}^\infty \frac{\left|\mathbb{E}[U^m] - \mathbb{E}[(U')^m]\right|^2}{m!}.$$
Proof: By translation we may assume that $\mu=0$. Let
be the pdf of
, and
. Then
where step (a) is due to the defining property (1), step (b) follows from the Cauchy–Schwarz inequality, and step (c) uses the orthogonality relation of
. Hence the upper bound on the total variation distance is proved.
For the $\chi^2$-divergence, first note that by Jensen's inequality,
As a result,
where again we have used the defining property (1) and the orthogonality in the last two identities.
In words, Theorem 3 shows that when the moments of $U$ and $U'$ are close, the corresponding Gaussian mixtures are statistically close. Similar results also hold for Poisson mixtures.
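As a numerical illustration of Theorem 3 (a toy sketch with a hand-picked pair, not the construction used later), take $U$ uniform on $\{\pm 1\}$ and $U'$ with law $\tfrac14\delta_{-\sqrt2}+\tfrac12\delta_0+\tfrac14\delta_{\sqrt2}$; they match moments up to degree three, and the moment bound indeed dominates the numerically computed total variation distance.

```python
import numpy as np
from math import factorial
from scipy.stats import norm
from scipy.integrate import quad

# Two mixing distributions with matched moments up to degree 3
# (both symmetric with variance 1; they first differ at the 4th moment).
U_vals,  U_probs  = np.array([-1.0, 1.0]),                    np.array([0.5, 0.5])
Up_vals, Up_probs = np.array([-np.sqrt(2), 0.0, np.sqrt(2)]), np.array([0.25, 0.5, 0.25])

def moment(vals, probs, m):
    return float(np.sum(probs * vals ** m))

# Theorem 3 bound: TV <= (1/2) * sqrt( sum_m |E U^m - E U'^m|^2 / m! )
bound = 0.5 * np.sqrt(sum(
    (moment(U_vals, U_probs, m) - moment(Up_vals, Up_probs, m)) ** 2 / factorial(m)
    for m in range(25)))

# True TV by numerical integration of the two mixture densities
def density(x, vals, probs):
    return np.sum(probs * norm.pdf(x - vals))

tv, _ = quad(lambda x: 0.5 * abs(density(x, U_vals, U_probs)
                                 - density(x, Up_vals, Up_probs)), -10, 10)
print(f"moment bound: {bound:.4f},  true TV: {tv:.4f}")
```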
Theorem 4 (Divergence between Poisson Mixtures) For any
and random variables
supported on
, we have
$$\left\|\mathbb{E}\,\mathsf{Poi}(\lambda+U) - \mathbb{E}\,\mathsf{Poi}(\lambda+U')\right\|_{\rm TV} \le \frac{1}{2}\left(\sum_{m=0}^\infty \frac{\left|\mathbb{E}[U^m] - \mathbb{E}[(U')^m]\right|^2}{m!\,\lambda^m}\right)^{\frac{1}{2}}.$$
Moreover, if
and
almost surely, then
$$\chi^2\left(\mathbb{E}\,\mathsf{Poi}(\lambda+U),\ \mathbb{E}\,\mathsf{Poi}(\lambda+U')\right) \le e^{M}\cdot \sum_{m=0}^\infty \frac{\left|\mathbb{E}[U^m] - \mathbb{E}[(U')^m]\right|^2}{m!\,\lambda^m}.$$
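Before the proof, here is an analogous numerical sanity check for the Poisson bound (again a toy sketch; we take the boundedness condition in the theorem to be $|U|,|U'|\le M$ with $\lambda+U,\lambda+U'\ge 0$).

```python
import numpy as np
from math import factorial, exp
from scipy.stats import poisson

lam, M = 5.0, 1.0
# Two mixing distributions on [0, M] with equal means (first moments match):
U_vals,  U_probs  = np.array([0.5]),      np.array([1.0])      # deterministic
Up_vals, Up_probs = np.array([0.0, 1.0]), np.array([0.5, 0.5])

def moment(vals, probs, m):
    return float(np.sum(probs * vals ** m))

# Theorem 4 bound on the chi-squared divergence (mixing variables bounded by M).
bound = exp(M) * sum(
    (moment(U_vals, U_probs, m) - moment(Up_vals, Up_probs, m)) ** 2
    / (factorial(m) * lam ** m) for m in range(30))

# Exact chi-squared divergence, truncating the (negligible) far tail.
x = np.arange(0, 100)
p = sum(pr * poisson.pmf(x, lam + v) for v, pr in zip(U_vals, U_probs))
q = sum(pr * poisson.pmf(x, lam + v) for v, pr in zip(Up_vals, Up_probs))
chi2 = np.sum((p - q) ** 2 / q)
print(f"Theorem 4 bound: {bound:.5f},  true chi2: {chi2:.5f}")
```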
Proof: The proofs of both inequalities essentially follow the same lines as those in the proof of Theorem 3, with the Hermite polynomial
replaced by the Charlier polynomial
. The only difference is that when
and
almost surely, for all
we have
2. Examples in Gaussian Models
In this section, we present examples in Gaussian location models where we need to test between two mixtures. In these examples, we match moments up to either a fixed finite degree, or a large and growing degree, so that the information divergences become extremely small.
2.1. Gaussian Mixture Models
Consider the following two-component Gaussian mixture model where
i.i.d. samples
are drawn from
. One possible task of proper learning is to estimate the parameters
within
-distance to the truth up to permutation. In other words, the goal is to recover the components of the Gaussian mixture, which in practice helps with downstream tasks such as clustering. The target is to determine the optimal sample complexity of this problem, assuming that
is unknown but bounded away from
and
(e.g.,
), and the overall variance of the mixture is at most
, where
is some prespecified parameter.
To derive a lower bound, the two-point method suggests finding two sets of parameters
which are
-separated, while the information divergence between the two mixtures is
. However, since Theorem 3 can only deal with mixtures with an identical variance in each component, we cannot simply take
to be a discrete random variable supported on two points
. To overcome this difficulty, note that
where
denotes convolution of probability measures. Hence, we may treat the overall mixture to be
, and choose
and similarly for
. Now the desired $\chi^2$-divergence becomes
To apply Theorem 3, we should construct random variables
and
with as many matched moments as possible. Since there are
free parameters in the two-component Gaussian mixtures, we expect that
and
can only have matched moments up to degree
. A specific choice can be as follows:
Clearly both
and
have overall variance
. Replacing
by their centered version, Theorem 3 gives
as long as
. Hence, by the additivity of the $\chi^2$-divergence, we conclude that
is a lower bound on the sample complexity.
The seemingly strange bound
is also tight for this problem, and the idea is to estimate the first
moments of the mixture and then show that close moments imply close parameters. We leave the details to the reference in the bibliographic notes.
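To see how Theorem 3 produces this kind of rate without spelling out the exact construction above, here is a generic sketch: scaled Gauss–Hermite quadrature rules with three and four nodes are two discrete distributions that match moments up to degree five and first differ at degree six by $\Theta(\epsilon^6)$, so the per-sample $\chi^2$ bound of Theorem 3 scales as $\epsilon^{12}$, consistent with the $\Theta(\epsilon^{-12})$ sample complexity of Hardt and Price (2015) cited in the bibliographic notes. (This pair is for illustration only and is not the construction used in the lecture.)

```python
import numpy as np
from math import factorial, exp

def chi2_bound(vals, probs, vals2, probs2, M, mmax=40):
    """Theorem 3 chi-squared bound between E N(U,1) and E N(U',1),
    assuming |U|, |U'| <= M almost surely."""
    s = 0.0
    for m in range(mmax):
        dm = np.sum(probs * vals ** m) - np.sum(probs2 * vals2 ** m)
        s += dm ** 2 / factorial(m)
    return exp(M ** 2 / 2) * s

def scaled_gh_rule(npts, eps):
    """npts-point Gauss-Hermite rule for N(0,1), scaled by eps: a discrete
    distribution matching the moments of N(0, eps^2) up to degree 2*npts - 1."""
    x, w = np.polynomial.hermite_e.hermegauss(npts)
    return eps * x, w / w.sum()

for eps in [0.2, 0.1, 0.05]:
    U  = scaled_gh_rule(3, eps)    # matches N(0, eps^2) moments up to degree 5
    Up = scaled_gh_rule(4, eps)    # matches them up to degree 7
    M = 3 * eps                     # both supported in [-M, M]: all nodes have |x| < 3
    print(f"eps={eps:5.2f}   chi2 bound = {chi2_bound(*U, *Up, M):.3e}")
# Halving eps shrinks the bound by roughly 2^12, i.e., the bound scales as eps^12,
# so Omega(eps^{-12}) samples are needed before the chi^2 divergence of the
# product distributions becomes a constant.
```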
2.2. $\ell_1$-norm Estimation of Bounded Gaussian Mean
Consider the Gaussian location model
with unit variance, where the mean vector
satisfies
The target here is to estimate the $\ell_1$ norm of the mean vector
, and let
be the minimax risk under the absolute value loss. The main result here is to prove the following tight lower bound:
2.2.1. Failure of Point vs. Mixture
Motivated by the point vs. mixture approach in the last lecture, one natural idea is to test between
and a composite hypothesis
, where
is a parameter to be specified later such that
and
are indistinguishable in the minimax sense. Consequently, this approach gives the lower bound
, and the target is to find some
as large as possible. We claim that the largest possible
is
, which is strictly smaller than the desired minimax risk.
Let
, and consider any prior distribution
supported on
. Then the Ingster-Suslina method gives
Using the Taylor expansion
and the inequality
for all
, we conclude that
Note that the above inequality holds for any
supported on
. Hence, when
, these hypotheses
and
become statistically distinguishable, and therefore the best possible lower bound from the point vs. mixture approach is
.
There is another way to show the desired failure: we may construct an explicit test which reliably distinguishes between
and
The idea is to apply the $\chi^2$ test, i.e., to compute the statistic
. Clearly under
we have
, and after some algebra we may show that
under
. Since
implies
, we conclude that comparing
with a suitable threshold
results in a reliable test.
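A small simulation illustrates this test (a sketch with an illustrative worst-case mean: the spread-out choice $\theta_i=\rho/d$ minimizes $\|\theta\|_2$ subject to $\|\theta\|_1=\rho$ and $\|\theta\|_\infty\le 1$).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100_000
rho = 3 * d ** 0.75           # l1 norm of the alternative mean (illustrative scale)
theta = np.full(d, rho / d)   # spread-out mean: smallest ||theta||_2 given ||theta||_1 = rho

def chi2_stat(x):
    # Centered chi-squared statistic: mean 0 under H0, mean ||theta||_2^2 under H1.
    return np.sum(x ** 2) - x.size

T0 = np.array([chi2_stat(rng.standard_normal(d))         for _ in range(200)])
T1 = np.array([chi2_stat(theta + rng.standard_normal(d)) for _ in range(200)])

# Under H0 the statistic has sd ~ sqrt(2d); under H1 its mean is ||theta||_2^2
# = rho^2 / d.  With rho ~ d^{3/4} these are of the same order (up to constants),
# so thresholding at a few multiples of sqrt(2d) separates the two hypotheses.
thr = 3 * np.sqrt(2 * d)
print("H0 rejection rate:", np.mean(T0 > thr))
print("H1 detection rate:", np.mean(T1 > thr))
```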
2.2.2. Moment Matching and Polynomial Approximation
The previous section shows that testing between a single distribution and a mixture does not work, since the knowledge of the single distribution can be used for the
test and may make the problem significantly easier. Hence, an improvement is to consider two composite hypotheses
and
, where
are parameters to be chosen later. For the priors
on
, Theorem 3 motivates us to use product priors
where
are probability measures on
with matched moments up to degree
(to be chosen later). The specific choices of
must fulfill the following requirements:
- Have matched moments up to degree
while keeping the quantity
as large as possible; - For
, the prior
is supported on
; - The $\chi^2$-divergence is upper bounded by
, or in other words,
.
We check the above requirements in the reverse order and specify the choices of
and
gradually. For the last requirement, Theorem 3 with
gives
As a result, to have
, it suffices to take
.
The second constraint that
be (almost) supported on
is also easy. Set
where
is a large enough numerical constant. The idea behind the above choices is that, under the product distribution
, the random variable
is the sum of
i.i.d. random variables taking value in
following distribution
. Then by sub-Gaussian concentration, the
norm is centered at
with fluctuation
. Hence, for large
both probabilities
and
are small, which fulfills the conditions in Theorem 1.
The most non-trivial requirement is the first one, which by our choice of
essentially aims to maximize the difference
subject to the constraint that the probability measures
are supported on
and have matching first
moments. The following lemma shows the duality between moment matching and best polynomial approximation.
Lemma 5 For any bounded interval
and real-valued function
on
, let
be the maximum difference
subject to the constraint that the probability measures
are supported on
and have matching first
moments. Then

where
denotes the best degree-
polynomial approximation error of
on the interval
:

Proof: It is an easy exercise to show that
. We present two proofs for the hard direction
. The first proof is abstract and holds for general basis functions other than monomials, while the construction of the measures
is implicit. The second proof gives an explicit construction, while some properties of polynomials are used in the proof.
(First Proof) Consider the following linear functional
, with
for
and
. Equipped with the
norm on functions, it is easy to show that the operator norm of
is
. By the Hahn-Banach theorem, the linear functional
can be extended to
without increasing the operator norm. Then by the Riesz representation theorem, there exists a signed Radon measure
on
such that
The fact
implies that the total variation of
is one. Write
by Jordan decomposition of signed measures, then
and
satisfy the desired properties.
(Second Proof) Let
be a degree-
polynomial with
on
. Since
is a Haar basis on
, Chebyshev’s alternation theorem shows that there exist
points
such that
with
or
. Consider the signed measure
supported on
with
where
is a normalizing constant such that
. Then by simple algebra,
and
has total variation
. Moreover, the following identity is given by Lagrange interpolation
and comparing the coefficient of
on both sides gives
for
. Now another Jordan decomposition of
gives the desired result.
By Lemma 5, the problem boils down to the best degree-
polynomial approximation error of
on
. This error is analyzed in approximation theory and summarized in the following lemma.
Lemma 6 There is a numerical constant (known as Bernstein's constant)
such that
$$E_K(|x|; [-1,1]) = (\beta_\star+o_K(1))K^{-1}.$$
Hence, by Lemmas 5 and 6, the condition of Theorem 1 is satisfied with
. As a result, we finally arrive at
.
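Lemma 6 can also be checked numerically. The sketch below (a linear program over a fine grid, a crude stand-in for the Remez algorithm; all helper names are ours) gives $K\cdot E_K(|x|;[-1,1])\approx 0.28$, in line with Bernstein's constant.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_error(f, a, b, K, ngrid=5001):
    """Sup-norm error of the best degree-K polynomial approximation of f on [a, b],
    computed by linear programming on a fine grid."""
    x = np.linspace(a, b, ngrid)
    V = np.polynomial.chebyshev.chebvander((2 * x - a - b) / (b - a), K)  # stable basis
    y = f(x)
    # variables (c_0, ..., c_K, t): minimize t subject to |V c - y| <= t on the grid
    c_obj = np.zeros(K + 2); c_obj[-1] = 1.0
    A = np.block([[V, -np.ones((ngrid, 1))], [-V, -np.ones((ngrid, 1))]])
    res = linprog(c_obj, A_ub=A, b_ub=np.concatenate([y, -y]),
                  bounds=[(None, None)] * (K + 1) + [(0, None)])
    return res.fun

for K in [8, 16, 32, 64]:
    E = minimax_error(np.abs, -1.0, 1.0, K)
    print(f"K={K:3d}   E_K(|x|) = {E:.5f}   K * E_K = {K * E:.4f}")
# K * E_K settles near 0.28, in line with Bernstein's constant beta_* ~ 0.2802.
```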
3. Examples in Poisson Models
In this section, we present examples of i.i.d. sampling from a discrete distribution with a large support. We show that a general Poissonization technique allows us to work in the simpler Poisson model, and we use moment matching up to either a finite or a growing degree to establish tight lower bounds.
3.1. Poissonization and Approximate Distribution
Throughout this section the statistical model is i.i.d. sampling from a discrete distribution
, where
denotes the sample size and
denotes the support size. It is well-known that the histogram
with
constitutes a sufficient statistic. Moreover,
for all
, and
are negatively dependent. To remove the dependence among different bins, recall the following Poissonized model:
Definition 7 (Poissonization) In the Poissonized model,
for all
and they are mutually independent.
In other words, in the Poissonized model we draw a random number
of samples from
and then compute the histogram. In Lecture 3 we have shown the asymptotic equivalence of the i.i.d. sampling model and the Poissonized model, but the arguments are highly asymptotic. The following lemma establishes a non-asymptotic relationship.
Lemma 8 For a given statistical task, let
and
be the minimax risk under the i.i.d. sampling model and the Poissonized model with design sample size
, respectively. Then

Proof: Let
be the Bayes risks under prior
in respective models. The desired inequality for Bayes risks follows from the identity
the Poisson tail bounds and the monotonicity of Bayes risks
. Now the minimax theorem gives the desired lemma.
To establish the first identity, simply note that under any prior distribution
, the Bayes estimator under the Poissonized model given the realization
is exactly the Bayes estimator under the i.i.d. sampling model with
samples.
Lemma 8 shows that the minimax risk essentially does not change after Poissonization. We sometimes also consider approximate distributions in the Poissonized model, where
may not necessarily sum to one (note that the distribution of the histogram is still well-defined). Approximate distributions are typically used in lower bounds where a product prior is assigned to
and thus cannot preserve the normalization constraint. For statistical problems where the objective function or hypothesis depends on the vector
in a nice way that changing
into
with
will not change the objective much, in the lower bound it typically suffices to consider the approximate distributions in the Poissonized model. The key idea is that, conditioning on
, the histogram
is exactly distributed as that in the i.i.d. sampling model from
with
samples. Then we may construct the estimator in the Poissonized model from the hypothetical optimal estimators (with different sample sizes) in the i.i.d. sampling model, and applying the same tail bounds as in the proof of Lemma 8 suffices. The details of the arguments may vary from example to example, but in most scenarios it will not hurt the lower bound.
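A tiny simulation illustrates Poissonization (a sketch with an arbitrary five-point distribution): in the Poissonized model the bin counts have Poisson marginals with mean $np_i$ and are uncorrelated across bins, in contrast to the negatively dependent multinomial histogram.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 5
P = np.array([0.4, 0.3, 0.15, 0.1, 0.05])

def poissonized_histogram():
    """Draw N ~ Poi(n) samples from P and return the histogram of counts."""
    N = rng.poisson(n)
    samples = rng.choice(k, size=N, p=P)
    return np.bincount(samples, minlength=k)

H = np.array([poissonized_histogram() for _ in range(5_000)])

# Marginals: each bin count should be Poi(n * p_i), so mean = variance = n * p_i.
print("n*p_i     :", n * P)
print("col means :", H.mean(axis=0).round(1))
print("col vars  :", H.var(axis=0).round(1))
# Independence across bins: the empirical correlation is ~0, whereas under
# fixed-n multinomial sampling the counts are negatively correlated.
print("corr(1,2) :", np.corrcoef(H[:, 0], H[:, 1])[0, 1].round(3))
```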
3.2. Generalized Uniformity Testing
Consider the following generalized uniformity testing problem: given
i.i.d. observations from some discrete distribution
supported on at most
elements, one would like to test whether the underlying distribution
is uniform on its support. Note the difference from the traditional uniformity testing problem: the distribution
may be supported on a subset of
while still being uniform on this subset. Specifically, the task is to determine the sample complexity of distinguishing between the hypothesis
uniform on its support and
is
-away from any uniform distribution with support
under
distance. We will show that the desired sample complexity is lower bounded by
Note that the first term
trivially follows from Paninski's construction in the traditional uniformity testing problem, so the goal is to prove the second term. It is expected that the second term captures the difficulty of recovering the support of the distribution, and therefore we should consider a mixture of uniform distributions with random support in
. Specifically, we consider the following mixture: let
be two random variables with
Assign the
-fold product distribution of
(or
) to the probability vector
, then
forms an approximate probability distribution since
. Moreover, under prior
the (normalized) distribution is always uniform, and under prior
the (normalized) distribution is
-far from any uniform distribution supported on a subset of
with high probability. Hence, neglecting the additional details for the approximate distribution, it suffices to show that
To establish the above bound for the $\chi^2$-divergence, note that the random variables
are chosen carefully with
for
. Moreover,
Consequently, Theorem 4 gives that
Since
is a stronger lower bound than
if and only if
, under this condition and
we will have
. Then the above inequality gives
which is the claimed upper bound on the $\chi^2$-divergence, establishing the lower bound.
We provide some discussion of the choice of
and
. Theorem 4 suggests that if
and
could match more moments, the
-divergence would be even smaller. However, the number of matched moments is in fact limited by the problem structure. In the traditional uniformity testing problem of the last lecture, we essentially chose
and
as above. In this case,
and
only match the first moment, which is the best possible since
must be a constant. In generalized uniformity testing,
must only be supported on two points, one of which is zero. Meanwhile, the support of
can potentially be arbitrarily large. The next lemma shows that no matter how we choose
, we can match at most the first two moments.
Lemma 9 Let
be a probability measure supported on
elements of
, one of which is zero, and
be any probability measure supported on
. Then if
and
match the first
moments, we must have
.
Proof: Let
. Let the support of
be
. Consider the polynomial
of degree
, the assumption gives
. Finally, since
is always non-negative, we have
.
Lemma 9 applied to
shows that moment matching up to degree
is the best we can hope for. In fact, the above lower bound is also tight (see bibliographic notes).
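The construction for generalized uniformity testing is in the display above, but the way Theorem 4 converts a moment-matching construction into a $\chi^2$ bound can be illustrated on the simpler Paninski-type construction from traditional uniformity testing mentioned above (a sketch; the parameters are illustrative): matching only the first moment already keeps the product $\chi^2$-divergence $o(1)$ until $n\gtrsim \sqrt{k}/\epsilon^2$, the familiar first term of the lower bound.

```python
import numpy as np
from math import factorial, exp

def theorem4_bound(lam, vals, probs, vals2, probs2, M, mmax=30):
    """Theorem 4 bound on chi^2( E Poi(lam+U), E Poi(lam+U') ), assuming |U|, |U'| <= M."""
    s = 0.0
    for m in range(mmax):
        dm = np.sum(probs * vals ** m) - np.sum(probs2 * vals2 ** m)
        s += dm ** 2 / (factorial(m) * lam ** m)
    return exp(M) * s

k, eps = 1000, 0.1
for n in [1000, 3000, 10000]:
    lam, a = n / k, n * eps / k
    # Paninski-type construction: null bins are Poi(lam); alternative bins are
    # Poi(lam + a) or Poi(lam - a) with probability 1/2 each (first moments match).
    U_vals,  U_probs  = np.array([0.0]),   np.array([1.0])
    Up_vals, Up_probs = np.array([-a, a]), np.array([0.5, 0.5])
    per_bin = theorem4_bound(lam, U_vals, U_probs, Up_vals, Up_probs, M=a)
    product = (1 + per_bin) ** k - 1       # chi^2 tensorizes over the k independent bins
    print(f"n={n:6d}   per-bin bound={per_bin:.2e}   product bound={product:.2e}")
# The product bound stays o(1) until n ~ sqrt(k)/eps^2, matching the first term
# in the sample-complexity lower bound above.
```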
3.3. Shannon Entropy Estimation
Finally we revisit the Shannon entropy estimation problem where the target is to estimate the Shannon entropy
. Let
be the minimax risk of estimating
under the mean squared error; our target is to show that
The lower bound
has already been shown via the two-point method in Lecture 5, and therefore the remaining target is to establish
.
Similar to the $\ell_1$ norm example above, the Shannon entropy
is a symmetric sum of individual functions of
, where the individual function is non-differentiable at zero. This suggests applying similar ideas based on moment matching and best polynomial approximation to establish the lower bound. Specifically, the target is to construct two priors
on the interval
(with parameter
to be chosen later) such that:
- The priors
have matched moments up to degree
(to be chosen later); - The difference between
and
is large; - With high probability, the Shannon entropy
under
and
is well-separated; - The common mean
is at most
.
Note that the first requirement (moment matching) ensures a small TV distance between the mixtures, the second and third requirements ensure the separation property (i.e., lower bound the mean difference and upper bound the fluctuations), and the last requirement ensures that
sums to a constant smaller than one (recall that
by assumption) and therefore setting
gives a valid probability vector
. We check these requirements one by one to find proper parameters
.
For the first requirement, recall that Theorem 4 gives
Since the individual TV distance should be at most
(for the future triangle inequality), we should choose
, where
is due to the assumption
and that
to make the first term
become dominant in the minimax risk.
For the second requirement, by the duality result in Lemma 5, the quantity of interest is essentially the best degree-
polynomial approximation error of
on
. The next lemma gives the best approximation error.
Lemma 10
$$E_K(-x\log x; [0,M]) = \Theta\left(\frac{M}{K^2}\right).$$
By Lemma 10, the target is to maximize
subject to the previous condition
. Simple algebra shows that the maximum is
, with the maximizer
.
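The $\Theta(M/K^2)$ rate in Lemma 10 can be checked numerically with the same grid-LP idea used for $|x|$ above (a sketch; here the grid is clustered near the endpoint singularity at zero).

```python
import numpy as np
from scipy.optimize import linprog

def minimax_error(f, a, b, K, ngrid=2000):
    """Sup-norm error of the best degree-K polynomial approximation of f on [a, b],
    via an LP on a grid clustered near the endpoints (Chebyshev-type spacing)."""
    x = a + (b - a) * (1 - np.cos(np.linspace(0, np.pi, ngrid))) / 2
    V = np.polynomial.chebyshev.chebvander((2 * x - a - b) / (b - a), K)
    y = f(x)
    c_obj = np.zeros(K + 2); c_obj[-1] = 1.0
    A = np.block([[V, -np.ones((ngrid, 1))], [-V, -np.ones((ngrid, 1))]])
    res = linprog(c_obj, A_ub=A, b_ub=np.concatenate([y, -y]),
                  bounds=[(None, None)] * (K + 1) + [(0, None)])
    return res.fun

# Substituting x -> M*x shows E_K(-x log x; [0, M]) = M * E_K(-x log x; [0, 1])
# for K >= 1 (the extra (M log M) x term is linear), so it suffices to check
# the K^{-2} rate on [0, 1].
f = lambda x: -x * np.log(np.clip(x, 1e-300, None))   # continuous extension f(0) = 0
for K in [4, 8, 16, 32]:
    E = minimax_error(f, 0.0, 1.0, K)
    print(f"K={K:3d}   E_K = {E:.6f}   K^2 * E_K = {K ** 2 * E:.4f}")
# K^2 * E_K stays roughly constant, consistent with the Theta(M/K^2) rate of Lemma 10.
```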
To resolve the third requirement, note that the mean difference of
is
by the above choice of
and
. Moreover, since
for some constant
if
, the sub-Gaussian concentration shows that the fluctuation of
under both
is at most
. Since
, the fluctuation is indeed negligible compared with the mean difference. Careful analysis also shows that the contribution of
to the entropy difference is negligible.
The last requirement calls for proper modifications of the priors
constructed in Lemma 5 to satisfy the mean constraint. This can be done via the following trick of change of measures: let
be the priors constructed in Lemma 5 which attain
, whose value is summarized in the following lemma.
Lemma 11
$$E_{\log n}\left(-\log x;\ \left[\frac{1}{n\log n}, \frac{\log n}{n}\right]\right) = \Theta(1).$$
Next we construct the priors
as follows: for
, set
Then it is easy to show that both
and
are probability measures, have matched moments up to degree
, and have mean
. Moreover,
Hence, the fourth requirement is fulfilled without hurting the previous ones.
In summary, Theorem 1 holds with
, and we arrive at the desired lower bound
.
4. Bibliographic Notes
The method of two fuzzy hypotheses (Theorem 1) is systematically used in Ingster and Suslina (2012) on nonparametric testing, and the current form of the theorem is taken from Theorem 2.14 of Tsybakov (2009). Statistical closeness of Gaussian mixture models via moment matching (Theorem 3) was established in Cai and Low (2011), Hardt and Price (2015), and Wu and Yang (2018) for the
-divergence, while the result for the TV distance is new here. For Theorem 4, the TV version was established in part by Jiao, Venkat, Han and Weissman (2015) and Jiao, Han and Weissman (2018), while the
version is new here. When
, a stronger bound on the TV distance without the square root is also available in Wu and Yang (2016). For more properties of Hermite and Charlier polynomials, we refer to Labelle and Yeh (1989).
For the Gaussian examples, the proper learning of two-component Gaussian mixtures was established in Hardt and Price (2015). The
norm estimation problem was taken from Cai and Low (2011), which was further motivated by Lepski, Nemirovski and Spokoiny (1999). The Gaussian mean testing example under
metric was taken from Ingster and Suslina (2012). For the technical lemmas, the two proofs of Lemma 5 are taken from Lepski, Nemirovski and Spokoiny (1999) and Wu and Yang (2016), respectively, and Lemma 6 is due to Bernstein (1912).
For the Poisson examples, the non-asymptotic equivalence between the i.i.d. sampling model and the Poissonized model is due to Jiao, Venkat, Han and Weissman (2015). The tight bounds for the generalized uniformity testing problem are due to Batu and Canonne (2017) for constant
, and Diakonikolas, Kane and Stewart (2018) for general
, where their proof is greatly simplified here thanks to Theorem 4. For Shannon entropy estimation, the optimal sample complexity was obtained in Valiant and Valiant (2011), and the minimax risk was obtained independently in Jiao, Venkat, Han and Weissman (2015) and Wu and Yang (2016). For tools from approximation theory used to establish Lemmas 10 and 11, we refer to the books by DeVore and Lorentz (1993) and Ditzian and Totik (2012).
- Yuri Ingster and Irina A. Suslina. Nonparametric goodness-of-fit testing under Gaussian models. Vol. 169. Springer Science & Business Media, 2012.
- Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
- T. Tony Cai, and Mark G. Low. Testing composite hypotheses, Hermite polynomials and optimal estimation of a nonsmooth functional. The Annals of Statistics 39.2 (2011): 1012–1041.
- Moritz Hardt and Eric Price. Tight bounds for learning a mixture of two Gaussians. Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing. ACM, 2015.
- Yihong Wu and Pengkun Yang. Optimal estimation of Gaussian mixtures via denoised method of moments. arXiv preprint arXiv:1807.07237 (2018).
- Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Minimax estimation of functionals of discrete distributions. IEEE Transactions on Information Theory 61.5 (2015): 2835–2885.
- Jiantao Jiao, Yanjun Han, and Tsachy Weissman. Minimax estimation of the $L_1$ distance. IEEE Transactions on Information Theory 64.10 (2018): 6672–6706.
- Yihong Wu and Pengkun Yang. Minimax rates of entropy estimation on large alphabets via best polynomial approximation. IEEE Transactions on Information Theory 62.6 (2016): 3702–3720.
- Jacques Labelle, and Yeong Nan Yeh. The combinatorics of Laguerre, Charlier, and Hermite polynomials. Studies in Applied Mathematics 80.1 (1989): 25–36.
- Oleg Lepski, Arkady Nemirovski, and Vladimir Spokoiny. On estimation of the $L_r$ norm of a regression function. Probability Theory and Related Fields 113.2 (1999): 221–253.
- Serge Bernstein. Sur l’ordre de la meilleure approximation des fonctions continues par des polynomes de degré donné. Vol. 4. Hayez, imprimeur des académies royales, 1912.
- Tugkan Batu, and Clément L. Canonne. Generalized uniformity testing. 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2017.
- Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart. Sharp bounds for generalized uniformity testing. Advances in Neural Information Processing Systems. 2018.
- Gregory Valiant and Paul Valiant. The power of linear estimators. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. IEEE, 2011.
- Ronald A. DeVore and George G. Lorentz. Constructive approximation. Vol. 303. Springer Science & Business Media, 1993.
- Zeev Ditzian and Vilmos Totik. Moduli of smoothness. Vol. 9. Springer Science & Business Media, 2012.