## Introduction

This little note was motivated by a discussion at the nForum on alternate approaches to the teaching of calculus. We first present several ways of seeing that continuous exponential functions have derivatives and are in fact infinitely differentiable; the third proof we give can in principle be explained to students who are just beginning calculus, without taking shortcuts such as integration theory or assuming major theorems like the intermediate value theorem or the extreme value theorem. We then carry out a similar analysis of trigonometric functions (we do appeal to complex numbers for the sake of conceptual clarity and brevity, but of course it is a simple matter to rewrite the arguments so as to circumvent them).

## Exponential functions

###### Proposition

Suppose $E: \mathbb{R} \to \mathbb{R}$ is continuous and satisfies $E(x+y)=E(x)E(y)$. Then $E$ is infinitely differentiable.

To get something out of the way: if $E(a) = 0$ for some $a$, then $E(x) = E(x-a)E(a) = 0$ for all $x$, and the zero function is trivially smooth. Otherwise, we have $E(x) \gt 0$ for all $x$, since after all $E(x) = E(x/2)^2$.

One can dream up various proofs of this proposition. But the ultimate goal is to try to keep it simple.

###### Proof 1

Construct the natural logarithm first as $\log(x) = \int_1^x \frac{d t}{t}$ and define $\exp$ as its inverse. These give a $C^\infty$ diffeomorphism between $\mathbb{R}_+$ and $\mathbb{R}$, so in the case where $E$ is nowhere zero (hence positive) we can put $f = \log \circ E$ and reduce the question to the Cauchy functional equation

$f(x+y) = f(x) + f(y)$

where continuity of $f$ implies that $f$ is given by multiplication by the real scalar $k = f(1)$ (since this is true on the integers, and then on the rationals, and then on the reals by a density argument). Thus $E$ is of the form $E(x) = \exp (k x)$ which is certainly $C^\infty$.
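Though no code is needed for the argument, the reduction is easy to illustrate numerically. A minimal Python sketch, assuming the concrete choice $E(x) = 3^x$ (so that $k = f(1) = \log 3$); the function names are ours, purely for illustration:

```python
import math

# A concrete continuous exponential function: E(x) = 3^x.
def E(x):
    return 3.0 ** x

# Pass to f = log ∘ E, which satisfies the Cauchy equation f(x+y) = f(x) + f(y).
def f(x):
    return math.log(E(x))

k = f(1)  # the scalar k = f(1); here k = log 3

# f is additive and agrees with multiplication by k ...
for x in [0.3, -1.7, 2.5]:
    for y in [0.4, -0.9]:
        assert abs(f(x + y) - (f(x) + f(y))) < 1e-12
    assert abs(f(x) - k * x) < 1e-12

# ... so E(x) = exp(k x), which is certainly smooth.
for x in [0.3, -1.7, 2.5]:
    assert abs(E(x) - math.exp(k * x)) < 1e-9
```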

###### Proof 2

This proof also uses integration theory. First one argues that the integral $\int_0^a E(t) d t$ (which exists since $E$ is continuous) is nonzero for some $a$; indeed it is positive for any $a \gt 0$, since $E$ is positive. Let $C$ denote this quantity. Then we have

$C E(x) = \int_0^a E(t)E(x) d t = \int_0^a E(x+t) d t = \int_{x}^{x+a} E(s) d s$

where the right side is continuously differentiable by the fundamental theorem of calculus. Hence the left side of

$E(x) = \frac1{C}\int_x^{x+a} E(s) d s$

is continuously differentiable. From here, one can apply the usual arguments to show $E'(x) = E'(0)E(x)$.
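Here is a quick numerical sanity check of the displayed identity (the choice $E = \exp$ and the crude midpoint-rule integrator are ours, purely for illustration):

```python
import math

def E(t):
    return math.exp(t)

def integral(f, lo, hi, n=20000):
    # crude midpoint rule, good enough for a sanity check
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a = 1.0
C = integral(E, 0.0, a)          # C = ∫_0^a E(t) dt, nonzero

# C · E(x) = ∫_x^{x+a} E(s) ds, whose right side is differentiable in x
for x in [-1.0, 0.5, 2.0]:
    lhs = C * E(x)
    rhs = integral(E, x, x + a)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
```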

However, since the discussion which gave rise to this had to do with the teaching of calculus to beginning students (who might not have any background in integration theory), the question is whether a reasonable proof can be given without using integrals.

###### Remark

It’s not just a matter of avoiding integrals. More to the point is to avoid hauling out the big guns, such as the intermediate value theorem or the mean value theorem. From the point of view of developing calculus rigorously, such results should not be considered fair play unless one is actually prepared to prove them, a task which is usually left for a first course in real analysis. This makes our job a bit more challenging!

We will however allow ourselves one luxury: we assume the key property of the Dedekind real numbers, that every nonempty set of real numbers that admits an upper bound has a least upper bound. (End remark)

It suffices to show that $E$ has a derivative somewhere (for then it can be shown it has a derivative everywhere, and then one calculates $E'(x) = E'(0)E(x)$).

###### Proof 3

This is based on the convexity of $E$. First one proves the midpoint inequality $E(\frac{x+y}{2}) \leq \frac{E(x)+E(y)}{2}$ (easy exercise: writing $a = E(x/2)$ and $b = E(y/2)$, it boils down to $2 a b \leq a^2 + b^2$), so that

$E(t x + (1-t)y) \leq t E(x) + (1-t)E(y)$

for all dyadic rationals $t$ between $0$ and $1$, and thence for all $t \in [0, 1]$ by virtue of continuity of $E$.
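As a numerical illustration (with the hypothetical concrete choice $E(x) = 2^x$), the convexity inequality can be spot-checked:

```python
# Convexity of a continuous exponential, here E(x) = 2^x:
# E(t*x + (1-t)*y) <= t*E(x) + (1-t)*E(y) for all t in [0, 1].
def E(x):
    return 2.0 ** x

for x, y in [(-1.0, 3.0), (0.0, 5.0), (-2.5, -0.5)]:
    for i in range(101):
        t = i / 100.0
        assert E(t * x + (1 - t) * y) <= t * E(x) + (1 - t) * E(y) + 1e-12
```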

This has the consequence that slopes of secant lines through $(0, 1)$, namely $\frac{E(t)-1}{t}$ for $t \neq 0$, increase with $t$. It follows that the left limit

$\lim_{t \nearrow 0} \frac{E(t)-1}{t}$

exists, being the supremum of slopes that increase as $t \nearrow 0$ and are bounded above (by any $\frac{E(s)-1}{s}$ with $s \gt 0$). Similarly the right limit exists. We just need to show that the right limit $R_0$ and left limit $L_0$ agree. The basic idea is that if they don’t agree (if there is a jump discontinuity at $0$, so to speak), then there is a jump discontinuity everywhere, and that spells trouble, mister.
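The monotonicity of the secant slopes and the two one-sided limits can be illustrated numerically with the concrete choice $E(x) = 2^x$ (for which $L_0 = R_0 = \log 2$; the step size below is ours):

```python
import math

# Secant slopes through (0, 1) for E(x) = 2^x: s(t) = (E(t) - 1)/t.
def slope(t):
    return (2.0 ** t - 1.0) / t

# The slopes increase with t, by convexity ...
ts = [-2.0, -1.0, -0.1, -0.01, 0.01, 0.1, 1.0, 2.0]
slopes = [slope(t) for t in ts]
assert slopes == sorted(slopes)

# ... so the one-sided limits L_0 and R_0 exist; here both equal log 2.
L0 = slope(-1e-6)
R0 = slope(1e-6)
assert L0 <= R0
assert abs(L0 - math.log(2)) < 1e-6
assert abs(R0 - math.log(2)) < 1e-6
```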

Thus, let us define

$L_a \coloneqq \lim_{h \nearrow 0} \; \frac{E(a+h)-E(a)}{h}; \qquad R_a \coloneqq \lim_{h\searrow 0} \; \frac{E(a+h)-E(a)}{h}.$

One easily verifies that $R_a = E(a)R_0$ and $L_a = E(a)L_0$. Also observe that whenever $a \lt b$, we have

$R_a \lt \frac{E(b)-E(a)}{b-a} \lt L_b$

so that $L_b/R_a \gt 1$.

Okay, certainly $L_0 \leq R_0$, and we may assume $0 \lt L_0$. Indeed, note first that $E(0) = E(0)^2$ forces $E(0) = 1$, so that $E(x)E(-x) = E(0) = 1$. If $L_0 \leq 0 \leq R_0$, then convexity would make $0$ a minimum of $E$, so $E \geq 1$ everywhere, and $E(x)E(-x) = 1$ would force $E \equiv 1$, which is trivially smooth. If instead $L_0 \leq R_0 \lt 0$, replace $E(x)$ by $E(-x)$, which negates and swaps the two one-sided limits. So $c \coloneqq R_0/L_0 \geq 1$. We have $R_a/L_a = c$ for any $a$. Telescoping, we have for every $n$ that

$\array{ \frac{L_1}{L_0} & = & (\frac{R_0}{L_0}\frac{L_{1/n}}{R_0})\cdot (\frac{R_{1/n}}{L_{1/n}}\frac{L_{2/n}}{R_{1/n}}) \cdot \ldots \cdot (\frac{R_{(n-1)/n}}{L_{(n-1)/n}} \frac{L_1}{R_{(n-1)/n}}) \\ & \gt & \frac{R_0}{L_0} \frac{R_{1/n}}{L_{1/n}} \ldots \frac{R_{(n-1)/n}}{L_{(n-1)/n}} \\ & = & c^n }$

so that $1 \leq c \lt (\frac{L_1}{L_0})^{1/n}$ for all $n \geq 1$. Letting $n \to \infty$, the right side tends to $1$, so $c = 1$, which completes the proof.
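To see the squeeze numerically: since $L_a = E(a)L_0$, we have $L_1/L_0 = E(1)$, so for the concrete choice $E(x) = 2^x$ the bound reads $1 \leq c \lt 2^{1/n}$:

```python
# For E(x) = 2^x we have L_1/L_0 = E(1) = 2, so the telescoping bound says
# 1 <= c < 2**(1/n) for every n; the upper bounds squeeze c down to 1.
bounds = [2.0 ** (1.0 / n) for n in (1, 10, 100, 10000)]
assert bounds == sorted(bounds, reverse=True)   # the bounds decrease ...
assert bounds[-1] - 1.0 < 1e-3                  # ... toward 1
```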

## Trigonometric functions

Now we set out to prove analogous results for trigonometric functions:

###### Theorem

Suppose $S, C: \mathbb{R} \to \mathbb{R}$ are continuous functions satisfying the addition formulas

$S(x+y) = S(x)C(y) + C(x)S(y); \qquad C(x+y) = C(x)C(y) - S(x)S(y).$

Then $S$ and $C$ are infinitely differentiable.

Once again, as in Proof 3 above, we will seek to avoid integration theory, including any discussion of arc length along the unit circle (on which the standard treatment of circular functions is based).

### Preliminary remarks

Note that if we put $E(t) = C(t) + i S(t)$, then we have a (continuous) exponential function $E: \mathbb{R} \to \mathbb{C}$. Arguing as we did earlier, it is easy to see that $E$ is either zero nowhere or zero everywhere. We restrict to the “zero nowhere” case, where $E(0) = 1$ (thus $C(0) = 1$ and $S(0) = 0$).

The complex conjugate $\bar{E}$ of $E$ is a continuous exponential function, as is therefore $E \bar{E} = {|E|}^2: \mathbb{R} \to \mathbb{R}$, and also ${|E|}: \mathbb{R} \to \mathbb{R}$. We have already characterized continuous exponential functions of the latter type in the previous section. Therefore we are reduced to understanding continuous exponential functions

$E/{|E|}: \mathbb{R} \to S^1$

mapping to complex numbers of norm $1$.

This reduction justifies adding in the assumption that $C(t)^2 + S(t)^2 = 1$. From $E(-t)E(t) = E(-t+t) = E(0) = 1$, we see that $E(-t) = \bar{E}(t)$, so that $C$ is an even function and $S$ is an odd function.
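These remarks can be spot-checked with the standard choice $S = \sin$, $C = \cos$, so that $E(t) = e^{i t}$ (a concrete instance of the hypotheses, not part of the argument):

```python
import math
import random

# With S = sin and C = cos, the function E(t) = C(t) + i S(t) satisfies
# E(s + t) = E(s) E(t), |E(t)| = 1, and E(-t) = conj(E(t)).
def E(t):
    return complex(math.cos(t), math.sin(t))

random.seed(0)
for _ in range(100):
    s, t = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(E(s + t) - E(s) * E(t)) < 1e-9       # exponential property
    assert abs(abs(E(t)) - 1.0) < 1e-12             # C^2 + S^2 = 1
    assert abs(E(-t) - E(t).conjugate()) < 1e-12    # C even, S odd
```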

### Convexity

###### Lemma

If $S(a) \lt 0$ ($C(a) \lt 0$), then there is an interval about $a$ over which $S$ ($C$) is convex; similarly if $S(a) \gt 0$ ($C(a) \gt 0$), then there is an interval about $a$ over which $-S$ ($-C$) is convex.

###### Proof

We will just consider one case; the other cases are similarly routine¹. Suppose $S(a) \gt 0$; choose an interval $U$ about $a$ where $S$ is positive. We show that for $x, y \in U$,

$S(\frac{x+y}{2}) \geq \frac{S(x) + S(y)}{2}.$

This is sufficient, since we may argue as we did earlier (viz., that $S(t x + (1-t)y) \geq t S(x) + (1-t)S(y)$ for all dyadic rationals $t \in [0, 1]$ and thence for all $t \in [0, 1]$ by continuity).

For $u = x/2$ and $v = y/2$, we have $S(2 u), S(2 v) \gt 0$. Applying the addition formulas, we are to show

$S(u)C(v) + C(u)S(v) \geq \frac{2S(u)C(u) + 2S(v)C(v)}{2} = S(u)C(u) + S(v)C(v)$

where $S(u)$ and $C(u)$ have the same sign since $S(2 u) = 2S(u)C(u) \gt 0$. Rearranging, we must show

$(S(u) - S(v))(C(v) - C(u)) \geq 0.$

Suppose WLOG that $S(u) \geq S(v)$; consider first the case $S(u) \geq S(v) \gt 0$. Then $S(u)^2 \geq S(v)^2$, whence $C(u)^2 \leq C(v)^2$ (using $C(t)^2 + S(t)^2 = 1$), whence $C(u) \leq C(v)$ (since $C(u) \gt 0$; remember $C(u)$ and $S(u)$ have the same sign). The displayed inequality follows. If $0 \gt S(u) \geq S(v)$, then similarly $S(u)^2 \leq S(v)^2$, whence $C(u)^2 \geq C(v)^2$, whence $0 \gt C(v) \geq C(u)$, and again the displayed inequality follows. Finally, the case where $S(u) \gt 0 \gt S(v)$ actually does not occur, because then $C(u) \gt 0 \gt C(v)$ and consequently $S(\frac{x+y}{2}) = S(u)C(v) + C(u)S(v) \lt 0$, contrary to the fact that $\frac{x+y}{2}$ lies in the interval $U$ where $S$ is positive.
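A numerical spot-check of the case just proved, with the concrete choice $S = \sin$ and $U = (0, \pi)$:

```python
import math
import random

# On an interval where S = sin is positive (here U = (0, pi)), -S is convex:
# sin((x + y)/2) >= (sin(x) + sin(y))/2 for x, y in U.
random.seed(1)
for _ in range(1000):
    x = random.uniform(0.0, math.pi)
    y = random.uniform(0.0, math.pi)
    assert math.sin((x + y) / 2) >= (math.sin(x) + math.sin(y)) / 2 - 1e-12
```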

### Derivative computations

###### Lemma

If $f$ is continuous and convex in an open interval $U$ about a point $a$, then $\lim_{h \searrow 0} \frac{f(a+h)-f(a)}{h}$ exists and is finite.

###### Proof

We have seen this before: by convexity, slopes of secant lines $\frac{f(a+h)-f(a)}{h}$ decrease as $h$ decreases to $0$, and yet these slopes have a lower bound given by $\frac{f(b)-f(a)}{b-a}$ for any $b$ in $U$ that is less than $a$.

###### Corollary

The right limit $\lim_{h \searrow 0} \frac{C(h)-1}{h}$ exists and is finite. (It will be denoted $R_C$ below.)

###### Proof

Since $C(0) = 1$, lemma 1 implies $-C$ is convex in a neighborhood of $0$. Then apply lemma 2.

###### Lemma

$S'(0)$ exists and is finite.

###### Proof

We divide the proof into the following points.

• Either $S'(0) = 0$, or there is a point $a$ where $C(a)$ and $S(a)$ are both nonzero.

For, by continuity of $C$, if $\delta$ is sufficiently small then $C(a)$ is near $1$ (in particular nonzero) whenever $-\delta \lt a \lt \delta$. If $S(a) = 0$ for all such $a$, then obviously $S'(0) = 0$; otherwise some such $a$ has $S(a)$ and $C(a)$ both nonzero.

• If $S(a) \neq 0$, then $\lim_{h \searrow 0} \frac{S(a+h)-S(a)}{h}$ exists and is finite. (We will denote this as $R_a$ below.)

By lemma 1, $S$ or $-S$ is convex in a neighborhood of $a$. Then apply lemma 2.

• $\lim_{h \searrow 0} \frac{S(h)}{h}$ exists and is finite.

We may assume $a$ is a point where $C(a)$ and $S(a)$ are nonzero. We have

$R_a = \lim_{h \searrow 0} \frac{S(a+h)-S(a)}{h} = \lim_{h \searrow 0} \left[ S(a)\frac{C(h)-1}{h} + C(a)\frac{S(h)}{h} \right]$

by the addition formula. We then obviously have

$\lim_{h \searrow 0} \frac{S(h)}{h} = \frac{R_a - S(a)R_C}{C(a)}.$

• If $\lim_{h \searrow 0} \frac{S(h)}{h}$ exists and is finite (say $R$), then $S'(0)$ exists and is finite.

For, since $S$ is odd, we have

$\lim_{h \nearrow 0} \frac{S(h)}{h} = \lim_{{|h|} \searrow 0} \frac{S(-{|h|})}{-{|h|}} = \lim_{{|h|} \searrow 0} \frac{S({|h|})}{{|h|}} = R.$

This completes the proof.
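The chain of reductions can be retraced numerically with $S = \sin$, $C = \cos$ (for which $S'(0) = 1$); the step size and the choice $a = 1$ below are ours:

```python
import math

# Retracing the last step: R = (R_a - S(a) R_C)/C(a) should recover S'(0) = 1.
h = 1e-6
R_C = (math.cos(h) - 1.0) / h                   # right limit of (C(h)-1)/h
a = 1.0
R_a = (math.sin(a + h) - math.sin(a)) / h       # right derivative of S at a
R = (R_a - math.sin(a) * R_C) / math.cos(a)     # both S(a), C(a) nonzero at a = 1
assert abs(R - 1.0) < 1e-4                      # indeed S'(0) = 1 for S = sin
```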

Now the proof of Theorem 1 is routine: from the finiteness of $S'(0)$ we easily infer that $C'(0) = 0$ (for instance from $C(h) - 1 = \frac{-S(h)^2}{C(h)+1}$), and the formulas

$S'(t) = S'(0)C(t), \qquad C'(t) = -S'(0)S(t)$

follow from the addition formulas for $S$ and $C$.
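As a closing sanity check (again with the concrete choice $S = \sin$, $C = \cos$), the derivative formulas can be verified by central difference quotients:

```python
import math

# Check S'(t) = S'(0) C(t) and C'(t) = -S'(0) S(t) for S = sin, C = cos,
# where S'(0) = 1.
h = 1e-6
for t in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    dS = (math.sin(t + h) - math.sin(t - h)) / (2 * h)   # central difference
    dC = (math.cos(t + h) - math.cos(t - h)) / (2 * h)
    assert abs(dS - math.cos(t)) < 1e-8
    assert abs(dC - (-math.sin(t))) < 1e-8
```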

1. That doesn’t mean that the algebra isn’t annoying. In the case of the function $C$, it suffices to prove: if $a b - c d \gt 0$ and $a^2 - c^2 \gt 0$ and $b^2 - d^2 \gt 0$ with $a^2 + c^2 = 1$ and $b^2 + d^2 = 1$, then $2(a b - c d) \geq a^2 - c^2 + b^2 - d^2$. A purely algebraic proof is given at the page “annoying algebra proposition” on my unpublished web.

Revised on September 26, 2013 at 04:34:30 by Todd Trimble