# Connected by Poincaré Inequality

While studying two seemingly unrelated subjects, probability theory and partial differential equations (PDEs), I ran into a somewhat surprising overlap: the Poincaré inequality. On one hand, it is not out of the ordinary for analysis-based subjects to share inequalities such as Cauchy-Schwarz and Hölder; on the other hand, the two forms of the Poincaré inequality have quite different applications.

In this blog post, I hope to put together some excellent content I studied recently, specifically from:

- *Concentration Inequalities* by Boucheron, Lugosi, and Massart (2013)
- *Partial Differential Equations* by Evans (2010)

## A Simple Inequality

We first state a very simple version of the inequality:

**Theorem (A Simple Poincaré Inequality)**
Let \( \Omega \subset \mathbb{R}^n \) be open and bounded,
and let \( f \in C_c^1(\Omega) \) (continuously differentiable
with compact support).
Then there exists a constant \( C \) that depends only
on \( \Omega \) such that:

\[ \left\lVert f \right\rVert_{L^2(\Omega)} \leq C \lVert \nabla f \rVert_{L^2(\Omega)} \]

Quick aside: we say a function \( f \) has **compact support**
if the set \( \{ x : f(x) \neq 0 \} \)
has compact closure.
This implies \( f = 0 \) near the boundary \( \partial \Omega \).

Observe that the inequality simply bounds the \( L^2 \)-norm of a function in terms of the \( L^2 \)-norm of its gradient. Note that compact support is an important assumption when we are integrating with respect to the Lebesgue measure: for a nonzero constant function (which does not vanish near the boundary), the inequality would fail since the gradient is zero. The reader may be comforted that the general form will require much weaker assumptions, and can be generalized to all \( L^p \)-norms.

The reason we start with this inequality is that the proof is quite straightforward:

*proof (of the Simple Poincaré Inequality):*

Without loss of generality, we let \( \Omega \subset [0, M]^n \) for some large \( M \). Writing \( f \) as the integral of its derivative (which is valid by compact support) and applying the Cauchy-Schwarz inequality, we have

\[ \vert f(x) \vert^2 = \left\vert \int_0^{x_1} \frac{\partial f}{\partial x_1}(y_1, x_2, \ldots, x_n) \, dy_1 \right\vert^2 \leq \left[ \int_0^M 1^2 \, dy_1 \right] \left[ \int_0^M \left\vert \frac{\partial f}{\partial x_1} \right\vert^2 dy_1 \right] \]

Repeating this argument for each coordinate, summing over \( i \), and integrating over \( \Omega \), we have

\[ n \int_\Omega \vert f(x) \vert^2 \, dx \leq \int_\Omega \sum_{i=1}^n M \int_0^M \left\vert \frac{\partial f}{\partial x_i} \right\vert^2 dy_i \, dx = \sum_{i=1}^n M^2 \left\lVert \frac{\partial f}{\partial x_i} \right\rVert^2_{L^2(\Omega)} \]

where in the last step we exchanged the order of integration, and used the fact that the inner integral does not depend on \( x_i \). Rewriting the above, we get the desired result

\[ \lVert f \rVert_{L^2(\Omega)} \leq \frac{M}{\sqrt{n}} \lVert \nabla f \rVert_{L^2(\Omega)} \]
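As a quick numerical sanity check of this bound (a minimal sketch assuming NumPy; the test function here is an arbitrary choice), take \( n = 1 \), \( \Omega = (0, 1) \) (so \( M = 1 \)), and \( f(x) = \sin(\pi x) \), which vanishes at the boundary:

```python
import numpy as np

# Check ||f|| <= (M / sqrt(n)) ||f'|| on Omega = (0, 1) with M = 1, n = 1,
# for the arbitrary test function f(x) = sin(pi x).
x = np.linspace(0.0, 1.0, 10001)
f = np.sin(np.pi * x)
df = np.pi * np.cos(np.pi * x)  # exact derivative

# Since |Omega| = 1, the L^2 norms are just root-mean-squares on the grid.
l2_f = np.sqrt(np.mean(f**2))    # exact value: 1/sqrt(2) ~ 0.707
l2_df = np.sqrt(np.mean(df**2))  # exact value: pi/sqrt(2) ~ 2.221

print(l2_f, l2_df, l2_f <= l2_df)
```

Here the inequality holds with plenty of slack; the constant \( M/\sqrt{n} \) coming out of the proof is generally far from optimal.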

## As a Concentration Inequality

We now state the inequality in the form most useful for probability theory; see Theorem 3.20 of Boucheron, Lugosi, and Massart (2013):

**Theorem (Gaussian-Poincaré Inequality)**
Let \( X = (X_1, \ldots, X_n) \) be a vector of i.i.d.
standard Gaussian random variables.
Let \( f : \mathbb{R}^n \to \mathbb{R} \)
be any continuously differentiable function.
Then

\[ \text{Var}[f(X)] \leq \mathbb{E}\left[ | \nabla f(X)|^2 \right] \]

Observe that the inequality is slightly different. Firstly, this time the left-hand side is centered; centering is not an issue here since subtracting a constant from \( f \) changes neither the variance nor the gradient. Secondly, because the Gaussian measure is a probability measure, we have a much smaller constant in the inequality, namely \( C = 1 \). In combination, we were also able to drop the compact support assumption.

An immediate consequence is to consider \( f \) Lipschitz with coefficient \( L = 1 \), i.e. \( \vert f(x) - f(y) \vert \leq \lVert x - y \rVert \); then \( \lVert \nabla f \rVert \leq 1 \) almost everywhere, and we have

\[ \text{Var}[f(X)] \leq 1 \]

In other words, we just found a constant bound on the variance for a huge class of functions of Gaussian random variables! In general, we can consider \( f \) to be a smooth estimator based on a dataset with noise \( X \). The Poincaré inequality then provides a very useful bound on the estimation error.
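As a concrete illustration (a Monte Carlo sketch assuming NumPy; the choice \( f(x) = \sin(x) \) is arbitrary), we can compare both sides of the inequality by simulation:

```python
import numpy as np

# Monte Carlo check of Var[f(X)] <= E[|f'(X)|^2] for f(x) = sin(x),
# where X is standard Gaussian. Exact values: Var[sin(X)] = (1 - e^-2)/2
# and E[cos(X)^2] = (1 + e^-2)/2.
rng = np.random.default_rng(0)
X = rng.standard_normal(1_000_000)

var_f = np.var(np.sin(X))           # ~ 0.432
grad_bound = np.mean(np.cos(X)**2)  # ~ 0.568

print(var_f, grad_bound, var_f <= grad_bound)
```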

To prove this inequality, we will use a famous result from 1981 (Theorem 3.1 in Boucheron, Lugosi, Massart (2013)):

**Theorem (Efron-Stein Inequality)**
Let \( X = (X_1, \ldots, X_n) \) be a vector of independent
random variables and let \( Z = f(X) \) be a square-integrable
function of \( X \).
Then

\[ \text{Var}(Z) \leq \sum_{i=1}^n \mathbb{E} \left[ \left( Z - \mathbb{E}^{(i)}Z \right)^2 \right] \]

where \( \mathbb{E}^{(i)} Z = \mathbb{E}[Z \mid X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n] \), i.e. the expectation over \( X_i \) only.

The Efron-Stein inequality can be proved by decomposing the variance as a sum of telescoping differences of conditional expectations, and applying Jensen’s inequality to the individual terms. While we omit the proof here, we should remark that the simple Efron-Stein inequality has wide ranging applications; we will only look at one such use for the proof of the Poincaré inequality, taken from Theorem 3.20 in Boucheron, Lugosi, Massart (2013):
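To make the statement concrete, here is a small Monte Carlo illustration (a sketch assuming NumPy; the choice \( Z = \max(X_1, X_2) \) with uniform variables is arbitrary). For this choice, \( \mathbb{E}^{(i)} Z \) has the closed form \( (1 + X_j^2)/2 \) for \( j \neq i \):

```python
import numpy as np

# Monte Carlo illustration of the Efron-Stein inequality for
# Z = max(X1, X2) with X1, X2 i.i.d. Uniform(0, 1).
# Integrating out X_i gives E^{(i)}Z = (1 + X_j^2) / 2 for j != i.
rng = np.random.default_rng(0)
X1, X2 = rng.random(1_000_000), rng.random(1_000_000)
Z = np.maximum(X1, X2)

var_Z = np.var(Z)  # exact value: 1/2 - (2/3)^2 = 1/18
es_bound = (np.mean((Z - (1 + X2**2) / 2) ** 2)
            + np.mean((Z - (1 + X1**2) / 2) ** 2))

print(var_Z, es_bound, var_Z <= es_bound)
```

Here the Efron-Stein bound (which works out to \( 1/15 \)) sits just above the true variance \( 1/18 \), so the inequality can be quite tight.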

*proof (of Gaussian-Poincaré Inequality):*

First we observe that a direct application of the Efron-Stein inequality reduces the problem to one coordinate at a time, i.e. it is sufficient to show for each \( i \)

\[ \mathbb{E}^{(i)} \left[ \left( Z - \mathbb{E}^{(i)}Z \right)^2 \right] \leq \mathbb{E}^{(i)} \frac{\partial f}{\partial x_i}(X)^2 \]

From here we assume without loss of generality \( n = 1 \). Then we notice that it is sufficient to prove the inequality for compactly supported, twice continuously differentiable functions, i.e. \( f \in C_c^2(\mathbb{R}) \), since otherwise we can approximate \( f \) by such functions and take a limit.

Here we let \( \epsilon_1, \ldots, \epsilon_n \) be i.i.d. Rademacher random variables, i.e. \( \mathbb{P}(\epsilon_j = 1) = \mathbb{P}(\epsilon_j = -1) = 1/2 \) (note this \( n \) counts Rademacher variables, not the dimension), and we define

\[ S_n = n^{-1/2} \sum_{j=1}^n \epsilon_j \]

Observe that, conditionally on \( \{\epsilon_j\}_{j \neq i} \), \( f(S_n) \) takes exactly two values with probability \( 1/2 \) each, so for every \( i \) we have

\[ \text{Var}^{(i)}[f(S_n)] = \frac{1}{4} \left[ f\left( S_n + \frac{1-\epsilon_i}{\sqrt{n}} \right) - f\left( S_n - \frac{1+\epsilon_i}{\sqrt{n}} \right) \right]^2 \]

Applying the Efron-Stein inequality, we get

\[ \text{Var}[f(S_n)] \leq \frac{1}{4} \sum_{i=1}^n \mathbb{E} \left[ \left( f\left( S_n + \frac{1-\epsilon_i}{\sqrt{n}} \right) - f\left( S_n - \frac{1+\epsilon_i}{\sqrt{n}} \right) \right)^2 \right] \]

Let \( K = \sup_x \vert f''(x) \vert \); then by a Taylor expansion around \( S_n \) we have

\[ \left|f\left( S_n + \frac{1-\epsilon_i}{\sqrt{n}} \right) - f\left( S_n - \frac{1+\epsilon_i}{\sqrt{n}} \right)\right| \leq \frac{2}{\sqrt{n}} |f'(S_n)| + \frac{2K}{n} \]

which implies

\[ \frac{n}{4} \left( f\left( S_n + \frac{1-\epsilon_i}{\sqrt{n}} \right) - f\left( S_n - \frac{1+\epsilon_i}{\sqrt{n}} \right) \right)^2 \leq f'(S_n)^2 + \frac{2K}{\sqrt{n}} | f'(S_n) | + \frac{K^2}{n} \]

Finally, since \( S_n \) converges in distribution to a standard Gaussian \( X \) by the central limit theorem, and \( f' \) and \( f'' \) are bounded and continuous, taking limits yields

\[ \limsup_{n\to\infty} \frac{1}{4} \sum_{i=1}^n \mathbb{E} \left[ \left( f\left( S_n + \frac{1-\epsilon_i}{\sqrt{n}} \right) - f\left( S_n - \frac{1+\epsilon_i}{\sqrt{n}} \right) \right)^2 \right] \leq \mathbb{E} \left[ f'(X)^2 \right] \]

and since \( \text{Var}[f(S_n)] \to \text{Var}[f(X)] \), this gives the desired result.

**Remark**
There are also Poincaré-type inequalities for
non-Gaussian random variables,
for example if \( X \sim \text{Poisson}(\mu) \):

\[ \text{Var}[f(X)] \leq \mu \mathbb{E}\left[ (f(X+1) - f(X))^2 \right] \]

Or if \( X \) is double exponential, i.e. with density \( \frac{1}{2} e^{-\vert x \vert} \), then we have:

\[ \text{Var}[f(X)] \leq 4 \mathbb{E}\left[ (f’(X))^2 \right] \]
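These variants can also be checked by simulation; here is a Monte Carlo sketch for the Poisson case (assuming NumPy; the choices \( f(x) = \sqrt{x} \) and \( \mu = 3 \) are arbitrary):

```python
import numpy as np

# Monte Carlo check of the Poisson Poincare inequality
# Var[f(X)] <= mu * E[(f(X+1) - f(X))^2] for f(x) = sqrt(x), mu = 3.
rng = np.random.default_rng(0)
mu = 3.0
X = rng.poisson(mu, size=1_000_000)

var_f = np.var(np.sqrt(X))
bound = mu * np.mean((np.sqrt(X + 1) - np.sqrt(X)) ** 2)

print(var_f, bound, var_f <= bound)
```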

## An Application to PDEs

To do proper justice to the theory of PDEs,
we would need significant background in functional analysis.
In this section, we will try to side-step the technical details
and focus on one single application,
that is showing the existence and uniqueness of
a weak solution for **Poisson’s equation**:

\[ -\Delta u = f \; \text{ in } \Omega \] \[ u = 0 \; \text{ on } \partial \Omega \]

where \( \Omega \subset \mathbb{R}^n \) is open and bounded with
smooth boundary \( \partial \Omega \).
By **weak solution**, we mean there exists a
\( u \in H_0^1(\Omega) \) (a Hilbert space defined in Step 1 below)
such that for all \( v \in H_0^1(\Omega) \) we have

\[ B[u,v] := \int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v \]

Note if \( u \) is a classical solution to the (original) Poisson’s equation, then we obtain the above weak equation by Green’s identity. The main tool we will use to prove existence and uniqueness is the following result:

**Theorem (Lax-Milgram)**
Let \( H \) be a Hilbert space, and let
\( B : H \times H \to \mathbb{R} \) be a continuous, coercive, bilinear form.
Then for every \( \varphi \in H^* \) (the dual space of \( H \)),
there exists a unique \( u \in H \) such that

\[ B(u,v) = \langle \varphi, v \rangle \quad \forall v \in H \]

where \( \langle \varphi, v \rangle \) is the linear functional \( \varphi \) applied to \( v \).

**Remark** Before going into the definitions and technical details,
we observe that the Lax-Milgram Theorem gives us exactly
what we want - the existence and uniqueness!
Now we just have to fill in the blanks:

- define a Hilbert space \( H \) where the solution lives
- show that \( B \) is continuous and coercive (both defined later)
- let \( \langle \varphi, v \rangle := \int_\Omega f v \)

**Step 1**
To define our Hilbert space,
we will consider the following
inner product on \( C_c^\infty(\Omega) \):

\[ (u,v) := \int_\Omega [uv + \nabla u \cdot \nabla v] \]

which corresponds to the following **Sobolev norm**:

\[ \lVert u \rVert_{H^{1}(\Omega)} := (u,u)^{1/2} = \left[ \lVert u \rVert_{L^2(\Omega)}^2 + \lVert \nabla u \rVert_{L^2(\Omega)}^2 \right]^{1/2} \]

By equipping the space \( C_c^\infty(\Omega) \) with the above inner product, we almost have a Hilbert space - we are only missing completeness! Here we will simply take the completion of \( C_c^\infty(\Omega) \) with respect to the Sobolev norm, i.e. add all the limit points to the space. We call this (completed) Hilbert space \( H_0^1(\Omega) \).

**Step 2**
We now turn our attention to
\( B[u,v] = \int_\Omega \nabla u \cdot \nabla v \), the bilinear form
(fancy term for separately linear in each input).
Then we say \( B \) is **continuous** if

\[ \exists C_1 > 0 : \forall u,v \in H, \vert B(u,v) \vert \leq C_1 \lVert u \rVert_{H^1(\Omega)} \lVert v \rVert_{H^1(\Omega)} \]

Note this is an immediate consequence of the Cauchy-Schwarz inequality:

\[ \vert B(u,v) \vert \leq \lVert \nabla u \rVert_{L^2(\Omega)} \lVert \nabla v \rVert_{L^2(\Omega)} \leq \lVert u \rVert_{H^1(\Omega)} \lVert v \rVert_{H^1(\Omega)} \]

We say \( B \) is **coercive** if

\[ \exists C_2 > 0 : \forall u \in H, B(u,u) \geq C_2 \lVert u \rVert_{H^1(\Omega)}^2 \]

We notice this is the only non-trivial condition left to check,
and to prove this we will finally use the *Poincaré inequality*!
Start by rewriting

\[ B(u,u) = \int_\Omega \nabla u \cdot \nabla u = \lVert \nabla u \rVert_{L^2(\Omega)}^2 \]

Applying the Poincaré inequality \( \lVert u \rVert_{L^2(\Omega)} \leq C \lVert \nabla u \rVert_{L^2(\Omega)} \) to half of the norm, we have

\[ \frac{1}{2} \lVert \nabla u \rVert_{L^2(\Omega)}^2 \geq \frac{1}{2C^2} \lVert u \rVert_{L^2(\Omega)}^2 \]

Therefore

\[ B(u,u) \geq \frac{1}{2} \lVert \nabla u \rVert_{L^2(\Omega)}^2 + \frac{1}{2C^2} \lVert u \rVert_{L^2(\Omega)}^2 \geq \min\left(\frac{1}{2}, \frac{1}{2C^2}\right) \lVert u \rVert_{H^1(\Omega)}^2 \]

And voilà, we have existence and uniqueness! A rigorous and careful reader may notice that \( u \in H_0^1(\Omega) \) does not necessarily have compact support - this is correct. However, every \( u \in H_0^1(\Omega) \) is a limit of compactly supported functions, so we just need to take a limit to get our result!
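To see a weak solution concretely, here is a minimal one-dimensional sketch (assuming NumPy; the grid size and right-hand side \( f = 1 \) are arbitrary choices) that discretizes \( -u'' = f \) on \( (0,1) \) with zero boundary values and compares against the exact solution \( u(x) = x(1-x)/2 \):

```python
import numpy as np

# Finite-difference discretization of -u'' = 1 on (0, 1), u(0) = u(1) = 0.
# The exact solution u(x) = x(1 - x)/2 lets us check the result.
n = 200                 # number of interior grid points (arbitrary)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Tridiagonal matrix for -u''(x_i) ~ (-u_{i-1} + 2 u_i - u_{i+1}) / h^2.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)

u = np.linalg.solve(A, f)
err = float(np.max(np.abs(u - x * (1.0 - x) / 2.0)))
print(err)  # tiny: this scheme is exact for quadratic solutions
```

Of course, this is only a discrete analogue; it is Lax-Milgram that guarantees the continuous problem is well posed in the first place.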

**Remark** In fact, we can use similar Lax-Milgram
based methods to show existence and uniqueness
for a large subset of
elliptic PDEs.
We should note that
the fact we can “convert” between \( \lVert u \rVert_{L^2(\Omega)} \) and \( \lVert \nabla u \rVert_{L^2(\Omega)} \)
is highly useful for studying Sobolev norms.
We refer curious readers to Evans (2010)
for an excellent chapter on
Sobolev spaces
and related inequalities.

## Final Words

I have a weak spot for connections between different fields, probably because they’re always surprising, and surprises are intriguing in math! I hope to have presented a readable introduction to the inequality and its applications in both topics, without drowning readers in technical details. On this note, I should remark that to study Sobolev spaces rigorously, the reader will need to go through all the details carefully!

As this is my first blog post, any constructive feedback or suggestions on future topics will be appreciated!

## References

- Boucheron, S., Lugosi, G., & Massart, P. (2013). *Concentration Inequalities: A Nonasymptotic Theory of Independence*. Oxford University Press.
- Evans, L. C. (2010). *Partial Differential Equations*. American Mathematical Society.