I picked these problems from Modern Fourier Analysis Vol I - I think that they serve as a good primer for the basic techniques and theorems in harmonic analysis (a subject that I have recently started looking back into in order to deal with some of the techniques that arise when working with Lévy processes in mathematical finance).
Problem I. Fix $d\geq1$ and suppose $\psi:(0,\infty)\to[0,\infty)$ is $C^{1}$, non-increasing, and $\int_{\mathbb{R}^{d}}\psi(|x|)\;dx\leq A<\infty.$ Define
$$[M_{\psi}f](x):=\sup_{0<r<\infty}\frac{1}{r^{d}}\int_{\mathbb{R}^{d}}|f(x-y)|\psi\left(\frac{|y|}{r}\right)\;dy$$
and show that $$[M_{\psi}f](x)\leq A[Mf](x)$$
where $M$ is the usual Hardy-Littlewood maximal function.

Solution. We first observe that the translation invariance of the claimed estimate means it suffices to prove the case $x=0$ (more explicitly, replace $f$ by $\tau_{x}f$, where $\tau_{x}$ is the translation by $x$ operator, and apply the case $x=0$ to conclude that the estimate holds for all $x$). For convenience let us define $\psi_{r}(|y|)=r^{-d}\psi(|y|/r)$. The radial structure of the terms in the estimate suggests that polar coordinates will be useful in handling the resulting integrals. Recall that the polar coordinate formula gives
$$\frac{d}{ds}\int_{B(0,s)}f(y)\;dy=\frac{d}{ds}\int_{0}^{s}dt\int_{\partial B(0,t)}f(\omega)\;dS(\omega)=\int_{\partial B(0,s)}f(\omega)\;dS(\omega)=s^{d-1}\int_{S^{d-1}}f(s\omega)\;dS(\omega).$$
In the last term we have used a change of variables in order to place the integration over the unit sphere (and in particular, to keep the domain fixed). In order to apply this formula in an integration by parts without causing notational chaos, let us define
$$\alpha(s)=\int_{S^{d-1}}|f(s\omega)|\;dS(\omega)$$
and
$$\beta(s)=\int_{0}^{s}\alpha(t)t^{d-1}\;dt.$$
Note that, by the polar coordinate formula again, $\beta(s)=\int_{B(0,s)}|f(y)|\;dy$, so $\beta(s)$ is majorized by $\omega(d)s^{d}[Mf](0)$, where $\omega(d)$ is the measure of the unit ball in $\mathbb{R}^{d}$.
Let us make the further assumption that $\psi$ is compactly supported in a ball of radius $\delta$, so that $\psi_{r}$ is compactly supported (and thus bounded) in a ball of radius $r\delta$. Invoking polar coordinates on the left-hand side, setting $x=0$, using the change of variables $y\mapsto-y$ together with $\psi(|y|)=\psi(|-y|)$, and integrating by parts along with the fact that $\beta(0)=0=\psi_{r}(r\delta)$, we estimate
$$\begin{align*}
\int_{\mathbb{R}^{d}}|f(-y)|\psi_{r}(|y|)\;dy&=\int_{\mathbb{R}^{d}}|f(y)|\psi_{r}(|y|)\;dy\\
&=\int_{0}^{\infty}\psi_{r}(s)s^{d-1}\;ds\int_{S^{d-1}}|f(s\omega)|\;dS(\omega)\\
&=\int_{0}^{r\delta}\psi_{r}(s)s^{d-1}\;ds\int_{S^{d-1}}|f(s\omega)|\;dS(\omega)\\
&=\int_{0}^{r\delta}\psi_{r}(s)s^{d-1}\alpha(s)\;ds\\
&=\beta(r\delta)\psi_{r}(r\delta)-\beta(0)\psi_{r}(0)-\int_{0}^{r\delta}\beta(s)d\psi_{r}(s)\\
&=\int_{0}^{r\delta}\beta(s)d(-\psi_{r}(s))\\
&\leq[Mf](0)\int_{0}^{\infty}\omega(d)s^{d}d(-\psi_{r}(s))\\
&=[Mf](0)\int_{0}^{\infty}d\omega(d)s^{d-1}\psi_{r}(s)\;ds\\
&=[Mf](0)\int_{0}^{\infty}\psi_{r}(s)\;ds\int_{\partial B(0,s)}\;dS(\omega)\\
&=[Mf](0)\int_{\mathbb{R}^{d}}\frac{1}{r^{d}}\psi\left(\frac{|y|}{r}\right)\;dy\\
&=[Mf](0)\int_{\mathbb{R}^{d}}\psi(|y|)\;dy\\
&\leq A[Mf](0),
\end{align*}$$
as desired. To complete the proof for general $\psi$, take an increasing sequence $\psi_{n}\nearrow\psi$ of compactly supported, non-increasing $C^{1}$ functions with $\int_{\mathbb{R}^{d}}\psi_{n}(|x|)\;dx\leq A$. Since the estimate holds for each $\psi_{n}$, the monotone convergence theorem shows that it holds for the limit function $\psi$.
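As a quick sanity check on the constant (not part of the exercise), take $\psi=\chi_{[0,1)}$; this $\psi$ is not $C^{1}$, but it is reached by exactly the approximation just described. In this case
$$[M_{\psi}f](x)=\sup_{0<r<\infty}\frac{1}{r^{d}}\int_{|y|<r}|f(x-y)|\;dy=\omega(d)[Mf](x)\qquad\text{and}\qquad\int_{\mathbb{R}^{d}}\chi_{[0,1)}(|x|)\;dx=\omega(d),$$
so taking $A=\omega(d)$ the estimate holds with equality, and the constant $A$ cannot be improved in general.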
Problem II. Consider the heat kernel $$G(x,t)=\frac{1}{(4\pi t)^{\frac{d}{2}}}e^{-\frac{|x|^{2}}{4t}}.$$
- Given $\alpha>0$, find constants $\beta$ and $C$ so that $$G(x+y,t)\leq CG(x,\beta t)$$ holds for every $x\in\mathbb{R}^{d}$, $t>0$, and $|y|\leq\alpha\sqrt{t}$.
- Deduce that for $f\in L^{1}$ and $u(x,t)=(G(\cdot,t)*f)(x)$ we have $$\mu\left(\left\{y:|u(x,t)|\geq\lambda\;\text{for some}\;t>0\;\text{and}\;x\in B(y,\alpha\sqrt{t})\right\}\right)\leq\frac{||f||_{L^{1}}}{\lambda}.$$

(1) Solution. We estimate the quadratic form
$$\begin{align*}
|x|^{2}
&=|x+y-y|^{2}\\
&=|x+y|^{2}-2(x+y)\cdot y+|y|^{2}\\
&\leq|x+y|^{2}+2|x+y||y|+|y|^{2}\\
&\leq|x+y|^{2}+|x+y|^{2}+|y|^{2}+|y|^{2}\\
&=2|x+y|^{2}+2|y|^{2}\\
&\leq2|x+y|^{2}+2\alpha^{2}t,
\end{align*}$$
where we have used the fact that $2ab\leq a^{2}+b^{2}$ for $a,b\in\mathbb{R}$ and also the restriction $|y|\leq\alpha\sqrt{t}.$ Consequently,
$$\frac{|x+y|^{2}}{4t}\geq\frac{\frac{1}{2}|x|^{2}-\alpha^{2}t}{4t}=\frac{|x|^{2}}{8t}-\frac{\alpha^{2}}{4}$$
and thus
$$\begin{align*}
G(x+y,t)
&=(4\pi t)^{-d/2}\exp\left\{\frac{-|x+y|^{2}}{4t}\right\}\\
&\leq(4\pi t)^{-d/2}\exp\left\{\frac{-|x|^{2}}{8t}+\frac{\alpha^{2}}{4}\right\}\\
&=e^{\alpha^{2}/4}2^{d/2}(8\pi t)^{-d/2}\exp\left\{\frac{-|x|^{2}}{8t}\right\}.
\end{align*}$$
This completes the proof, with the estimate holding for $\beta=2$ and $C=e^{\alpha^{2}/4}2^{d/2}.$
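The following is a quick numerical spot check of this bound (a sketch, not part of the solution); the dimension $d$, the value of $\alpha$, the sampling ranges, and the seed are arbitrary choices, and the comparison is done in log space to avoid underflow for large $|x|$.

```python
# Spot check of G(x + w, t) <= C G(x, 2t) for |w| <= alpha * sqrt(t),
# with beta = 2 and C = e^{alpha^2 / 4} * 2^{d/2} as derived above.
import numpy as np

rng = np.random.default_rng(0)
d, alpha = 3, 2.0
log_C = alpha**2 / 4.0 + (d / 2) * np.log(2.0)

def log_G(x, t):
    """Logarithm of the heat kernel G(x, t) = (4 pi t)^{-d/2} exp(-|x|^2 / (4t))."""
    return -(d / 2) * np.log(4.0 * np.pi * t) - np.dot(x, x) / (4.0 * t)

worst = -np.inf
for _ in range(100_000):
    t = rng.uniform(0.01, 10.0)
    x = rng.normal(scale=5.0, size=d)
    w = rng.normal(size=d)
    w *= rng.uniform(0.0, alpha * np.sqrt(t)) / np.linalg.norm(w)  # enforce |w| <= alpha sqrt(t)
    worst = max(worst, log_G(x + w, t) - (log_C + log_G(x, 2.0 * t)))
print("largest observed log ratio (should be <= 0):", worst)
```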
Remark. It is interesting that this result can be obtained as a special case of the Hardy-Moser-Trudinger inequality available here http://arxiv.org/abs/1012.5591. Indeed, generalizing a bit, we are asked to produce constants $a,\beta$ such that
$$\sup_{x\in\mathbb{R}^d,y\in B(0,a)}\frac{G(x+y,t)}{G(x,\beta t)}<\infty.$$
But the continuity and non-vanishing of $G$ imply that the above estimate will hold if
$$\int_{\mathbb{R}^d} \frac{G(x+y,t)}{G(x,\beta t)} dx$$
converges. Bounding the integral (here $D$ denotes a constant depending only on $\beta$ and $d$), we get
$$D\int_{\mathbb{R}^d} \exp\left(\frac{-\beta+1}{4t\beta} \left\{|x+y|^2-|x|^2\right\}\right) dx \leq \int_{\mathbb{R}^d} \exp\left(\frac{-\beta+1}{4t\beta} \left\{|y|^2-2x\cdot y\right\}\right) dx:=J.$$
Now, using the fact that $|y|\leq a$, we get
$$J\leq I(a)=\int_{\mathbb{R}^d} \exp\left(\frac{-\beta+1}{4t\beta} \left\{|a|^2-2x\cdot y\right\}\right) dx.$$
The Moser-Trudinger inequality states that $I(a)<\infty$ for values $$a^2\frac{-\beta+1}{4t\beta} \leq 4\pi,$$ which readily gives $a\leq C\sqrt{t}$, from which the claim follows.
(2) Solution. Observe that $G(x,t)=r^{-d}\psi(|x|/r)$ with $r=\sqrt{t}$ and $\psi(s)=(4\pi)^{-d/2}e^{-s^{2}/4}$, which is $C^{1}$, non-increasing, and satisfies $\int_{\mathbb{R}^{d}}\psi(|x|)\;dx=1$. From Problem I we therefore have
$$\sup_{t>0}|(G(\cdot,t)*f)(x)|\leq[Mf](x).$$
Now suppose $|u(x,t)|\geq\lambda$ for some $t>0$ and $x\in B(y,\alpha\sqrt{t})$. Writing $x=y+w$ with $|w|\leq\alpha\sqrt{t}$, part (1) gives
$$|u(x,t)|\leq\int_{\mathbb{R}^{d}}G((y-z)+w,t)|f(z)|\;dz\leq C\int_{\mathbb{R}^{d}}G(y-z,\beta t)|f(z)|\;dz\leq C[Mf](y),$$
so the set in question is contained in $\{y:[Mf](y)\geq\lambda/C\}$, and the claim follows from the weak $(1,1)$ inequality for the Hardy-Littlewood maximal function (up to a constant depending on $\alpha$ and $d$).
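For completeness, the normalization $\int_{\mathbb{R}^{d}}\psi(|x|)\;dx=1$ used above is just the standard Gaussian integral (recorded here only as a check of the constant):
$$\int_{\mathbb{R}^{d}}(4\pi)^{-d/2}e^{-|x|^{2}/4}\;dx=(4\pi)^{-d/2}\prod_{i=1}^{d}\int_{\mathbb{R}}e^{-x_{i}^{2}/4}\;dx_{i}=(4\pi)^{-d/2}\left(\sqrt{4\pi}\right)^{d}=1.$$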
Problem III. Let $f:\mathbb{R}^{d-1}\to\mathbb{R}$ belong to $L^{\infty}$ and let $u:\mathbb{R}^{d}_{+}\to\mathbb{R}$ be the Poisson integral of $f$.
- Show that $u(x,y)$ converges non-tangentially almost everywhere to $f(x)$ (i.e. from approach regions contained in any cone with vertex at $x$). Specifically, for fixed $x_{0}$ and $$\mathcal{C}_{\alpha}(x_{0}):=\{(x,y)\in\mathbb{R}^{d}_{+}\;:\;|x-x_{0}|<\alpha y\},$$ we have $$\lim_{(x,y)\to(x_{0},0),\;(x,y)\in \mathcal{C}_{\alpha}(x_{0})}u(x,y)=f(x_{0}).$$ The convention for the Poisson kernel is $$P_{y}(x)=\frac{c_{n}y}{(|x|^{2}+y^{2})^{d/2}},$$ where we regard $y>0$ and $x\in\mathbb{R}^{d-1}$, so that a typical point $z\in\mathbb{R}^{d}_{+}$ is $z=(x,y)$ ($y$ is the distance from $z$ to $\partial\mathbb{R}^{d}_{+}$). The Poisson integral of $f$ is then $$u(x,y)=\int_{\mathbb{R}^{d-1}}P_{y}(t)f(x-t)\;dt.$$ Since it will be needed below, we note that this is equivalent to the more usual representation $$P_{y}(x-t)=\frac{c_{n}y}{|(x,y)-(t,0)|^{d}}.$$ The expression $P_{y}(x-t)$ of course arises from the commutativity of the convolution above. This last form will be used to obtain the final estimate for $J$ in the proof below.
- Show that $u$ matches the boundary values in a distributional sense.
- Show that if $v\in L^{\infty}(\mathbb{R}^{d}_{+})$ is distributionally harmonic and matches the boundary values $f$ in a distributional sense, then $u=v$ as distributions.

(1) Solution. Fix $x_{0}\in\mathbb{R}^{d-1}=\partial\mathbb{R}^{d}_{+}.$ To obtain a.e. nontangential convergence, we need to show that $u(x_{0}-t,y)\to f(x_{0})$ for a.e. $x_{0}$, where $t\in\mathbb{R}^{d-1}$ and $|t|\leq\alpha y.$ We have as a first attempt
$$\begin{align*}
|u(x_{0}-t,y)-f(x_{0})|
&=\left|\int_{\mathbb{R}^{d-1}}P_{y}(x)f(x_{0}-t-x)\;dx-f(x_{0})\right|\\
&\leq\int_{\mathbb{R}^{d-1}}P_{y}(x-t)|f(x_{0}-x)-f(x_{0})|\;dx.
\end{align*}$$
where we have used $\int_{\mathbb{R}^{d-1}}P_{y}(x)\;dx=1$ and a change of variables. If we estimate this integral directly, we only obtain the crude bound $2||f||_{\infty}.$ To gain better control on $f$, we now suppose $x_{0}\in\mathcal{L}(f),$ the Lebesgue set of $f$. Since $L^{\infty}\subset L^{1}_{\text{loc}}$, we have $m(\mathcal{L}(f)^{c})=0$ by the corresponding well-known result for $L^{1}$. Now let $\epsilon>0$. Our choice of $x_{0}$ and the Lebesgue differentiation theorem imply the existence of a $\delta>0$ such that if $r<\delta$ we have
$$\frac{1}{m(B_{r})}\int_{|x-x_{0}|\leq r}|f(x)-f(x_{0})|\;dx=\frac{1}{m(B_{r})}\int_{|x|\leq r}|f(x_{0}-x)-f(x_{0})|\;dx<\epsilon.$$
With this $\delta$, we now estimate the last integral by splitting it over the complementary sets $B(0,\delta)$ and $(B(0,\delta))^{c}$; call the two pieces $I$ and $J$, respectively. Note that $I$ would be trivial if $f$ were continuous; however, we are saved by the pointwise majorization property of the maximal function. Indeed, set $g(x)=|f(x)-f(x_{0})|\chi_{\{|x-x_{0}|\leq\delta\}}$. The estimate above shows that the averages of $g$ over balls centered at $x_{0}$ of radius $r<\delta$ are less than $\epsilon$, and since $g$ is supported in $B(x_{0},\delta)$ the same bound holds for larger radii; hence $[Mg](x_{0})\leq\epsilon$. Using the comparison $P_{y}(x-t)\leq A_{\alpha}P_{y}(x)$ for $|t|\leq\alpha y$ (proved below in the course of estimating $J$) together with Problem I applied to the Poisson kernel (whose constant is $\int_{\mathbb{R}^{d-1}}P_{y}(x)\;dx=1$), we obtain
$$I=\int_{|x|\leq\delta}P_{y}(x-t)|f(x_{0}-x)-f(x_{0})|\;dx\leq A_{\alpha}\int_{\mathbb{R}^{d-1}}P_{y}(x)g(x_{0}-x)\;dx\leq A_{\alpha}[Mg](x_{0})\leq A_{\alpha}\epsilon.$$
We now proceed to estimate $J$. We first observe that if $y$ is fixed, $t\in\mathbb{R}^{d-1}$, and $|t|\leq\alpha y$, then
$$P_{y}(x-t)\leq A_{\alpha}P_{y}(x)$$
for some constant $A_{\alpha}$ independent of $f$ (this is very similar to the estimate in Problem II above, except that there the analog of $y$ is not fixed, and so a second constant $\beta$ is introduced). To prove this, note that for $\alpha,y>0$ and $|t|\leq\alpha y$, and for simplicity $d=2$,
$$\begin{align*}
x^{2}+y^{2}
&=((x-t)+t)^{2}+y^{2}\\
&=(x-t)^{2}+2(x-t)t+t^{2}+y^{2}\\
&\leq(x-t)^{2}+\frac{1}{\alpha}(x-t)^{2}+\alpha t^{2}+t^{2}+y^{2}\\
&\leq\left(1+\frac{1}{\alpha}\right)(x-t)^{2}+\left(\alpha^{3}+\alpha^{2}+1\right)y^{2}\\
&\leq A_{\alpha}\left[(x-t)^{2}+y^{2}\right],
\end{align*}$$
where $A_{\alpha}:=\max\left\{1+\frac{1}{\alpha},\;\alpha^{3}+\alpha^{2}+1\right\}$; we have used the well-known Cauchy inequality ``with $\epsilon$'' ($2ab\leq\frac{1}{\epsilon}a^{2}+\epsilon b^{2}$, here with $\epsilon=\alpha$) in the first inequality and the restriction $|t|\leq\alpha y$ in the second. Therefore
$$\frac{y}{(x-t)^{2}+y^{2}}\leq\frac{A_{\alpha}y}{x^{2}+y^{2}}$$
as desired (the generalization to $d>2$ is essentially the same). Now, using the fact that $f\in L^{\infty}$, our choice of $x_{0}$, $\delta$, and $t$, and the latter expression for $P_{y}$ given in the problem statement, we get
$$\begin{align*}
J&\leq A_{\alpha}\int_{|x|>\delta}P_{y}(x)|f(x_{0}-x)-f(x_{0})|\;dx\\
&\leq2||f||_{\infty}A_{\alpha}\int_{|x|>\delta}P_{y}(x)\;dx\\
&\leq2||f||_{\infty}A_{\alpha}c_{n}y\int_{|x|>\delta}|x|^{-d}\;dx\\
&\to0\;\text{as}\;y\to0^{+}
\end{align*}$$
since the latter integral converges. Putting this together, we get
$$\limsup_{|t|<\alpha y,y\to0}|u(x_{0}-t,y)-f(x_{0})|\leq A_{\alpha}\epsilon\to0\;\text{as}\;\epsilon\to0,$$
from which the desired nontangential convergence follows.
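As an aside, the kernel comparison $P_{y}(x-t)\leq A_{\alpha}P_{y}(x)$ for $|t|\leq\alpha y$ can be spot-checked numerically in the case $d=2$ (a sketch, not part of the solution); the value of $\alpha$, the normalization $c_{n}=1/\pi$, the sampling ranges, and the seed are arbitrary choices, and $A_{\alpha}$ is the constant obtained above.

```python
# Spot check of P_y(x - t) <= A_alpha P_y(x) for |t| <= alpha * y, case d = 2.
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5
A_alpha = max(1.0 + 1.0 / alpha, alpha**3 + alpha**2 + 1.0)

def P(x, y):
    """Poisson kernel on the upper half-plane (boundary dimension d - 1 = 1, c_n = 1/pi)."""
    return y / (np.pi * (x**2 + y**2))

worst = 0.0
for _ in range(100_000):
    y = rng.uniform(1e-3, 10.0)
    x = rng.uniform(-50.0, 50.0)
    t = rng.uniform(-alpha * y, alpha * y)
    worst = max(worst, P(x - t, y) / (A_alpha * P(x, y)))
print("largest observed ratio (should be <= 1):", worst)
```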
(2) Solution. Since $f\in L^{\infty}$, we have $Mf\in L^{\infty}$: indeed, $[Mf](x)\leq||f||_{\infty}$ pointwise, since every average of $|f|$ is at most $||f||_{\infty}$. It follows from Problem I that
$$\sup_{y>0}|u(x,y)|=\sup_{y>0}|P_{y}*f|(x)\leq A(Mf)(x)\leq A||Mf||_{\infty}<\infty,$$
and thus
$$\sup_{y>0}||u(\cdot,y)||_{\infty}<\infty.$$
Fixing a test function $\phi\in C_{c}^{\infty}(\mathbb{R}^{d-1})$ and recalling from part (1) that $u(x,y)\to f(x)$ for a.e. $x$ as $y\to0$, the dominated convergence theorem now implies (since $|u(x,y)\phi(x)|\leq A||Mf||_{\infty}|\phi(x)|,$ which is integrable)
$$\lim_{y\to0}\int_{\mathbb{R}^{d-1}}u(x,y)\phi(x)\;dx=\int_{\mathbb{R}^{d-1}}\lim_{y\to0}u(x,y)\phi(x)\;dx=\int_{\mathbb{R}^{d-1}}f(x)\phi(x)\;dx.$$
Since this holds for every test function, it follows that $u(x,0)=f(x)$ in the sense of distributions (the $0$ is just suggestive notation to indicate the restriction of $u$ to the boundary of the half space; $u$ is \emph{not} defined by convolution with $P_{y}$ when $y=0$ and must be obtained by a limiting process).
In fact, the limit function $\lim_{y\to0}u(x,y)$ is equal to $f(x)$ a.e. by the fundamental lemma of the calculus of variations.
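The distributional boundary matching can also be spot-checked numerically (a sketch, not part of the solution): in the case $d=2$ the Poisson kernel is $P_{y}(x)=\frac{y}{\pi(x^{2}+y^{2})}$, and the boundary datum $f=\operatorname{sign}$, the rapidly decaying stand-in test function $\phi(x)=e^{-(x-1)^{2}}$, and the grid below are arbitrary illustrative choices.

```python
# Spot check that \int u(x, y) phi(x) dx -> \int f(x) phi(x) dx as y -> 0,
# where u(., y) = P_y * f is the Poisson extension of f (case d = 2).
import numpy as np

x = np.linspace(-30.0, 30.0, 3001)
dx = x[1] - x[0]
f = np.sign(x)                        # a bounded boundary datum
phi = np.exp(-(x - 1.0) ** 2)         # stand-in for a compactly supported test function

def poisson_extension(y):
    """u(x, y) = (P_y * f)(x), computed by a Riemann sum on the same grid."""
    P = y / (np.pi * ((x[:, None] - x[None, :]) ** 2 + y**2))
    return (P * f[None, :]).sum(axis=1) * dx

target = (f * phi).sum() * dx
for y in (1.0, 0.1, 0.01):
    val = (poisson_extension(y) * phi).sum() * dx
    print(f"y = {y:5.2f}:  int u(., y) phi dx = {val:+.5f}   (target {target:+.5f})")
```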
(3) Solution. For test functions $\phi\in C_{c}^{\infty}(\mathbb{R}^{d-1})$ and $\psi\in C_{c}^{\infty}(\mathbb{R}^{d}_{+})$, we have
$$(1) \int_{\mathbb{R}^{d-1}}v(x,0)\phi(x)\;dx=\int_{\mathbb{R}^{d-1}}f(x)\phi(x)\;dx$$
and
$$(2) \int_{\mathbb{R}^{d}_{+}}v(x,y)\Delta\psi(x,y)\;dxdy=0.$$
(Again, the setting of $y=0$ is just notation for the restriction of $u$ or $v$ to $\mathbb{R}^{d-1}.$) By part (2), $(1)=\int_{\mathbb{R}^{d-1}}u(x,0)\phi(x)\;dx$, and so we see already that $u=v=f$ in the sense of distributions on $\mathbb{R}^{d-1}$, and in fact pointwise a.e. Weyl's lemma implies that if $v\in L^{1}_{\text{loc}}(\mathbb{R}^{d}_{+})$ (recall that the half space is \emph{open}) is a weak solution to Laplace's equation, then $v$ is a classical solution after a possible correction on a set of measure zero. The uniqueness of the Dirichlet problem and the preceding application of Weyl's lemma (since $L^{\infty}\subset L^{1}_{\text{loc}}$) now imply that $u=v$ a.e. in $\overline{\mathbb{R}^{d}_{+}}$; hence they are equal in the sense of distributions, and after a correction of $v$ on a set of measure zero they are equal pointwise as well.
Remark. The proof of Weyl's lemma is a mollification argument. For details, see problem #5 in my linked blog post, in which I proved the lemma in a different course: http://mathtm.blogspot.com/2013/02/math-266b-assignment-2.html.