Last week in my analysis class we discussed ordinary differential equations (ODEs) and their solutions. The proof of the existence and uniqueness of solutions to ODEs is one of my favorite examples of an extremely abstract theorem (in this case, the contraction mapping principle) being used to solve a concrete problem (how do we know if an ODE has a solution, and if so, whether it’s uniquely specified by its initial conditions?). This post is just to give some simple examples that unearth the concerns one might have.

Here are the three examples I’d like to consider: \[ y' = y, \hspace{0.5in} y' = y^2, \hspace{0.5in} (y')^2 = y. \] The first equation is the familiar one characterizing exponential functions. With the initial condition \(y(t_0) = C\), we obtain the solution \(y(t) = Ce^{t-t_0}\). Thus every initial condition results in a globally defined solution. With this solution in hand, we can even show that it is unique, by a bit of trickery. Suppose \(f(t)\) is any solution to the initial value problem \(y'=y\), \(y(t_0)=C\), and consider the function \(g(t)=\frac{Ce^{t-t_0}}{f(t)}\). Then, by the quotient rule, \[ g'(t) = \frac{f(t) Ce^{t-t_0} - f'(t) Ce^{t-t_0}}{(f(t))^2} = \frac{Ce^{t-t_0}}{(f(t))^2}(f(t) - f'(t)). \] But by assumption \(f'(t) = f(t)\), so \(g'(t) = 0\). Thus \(g(t)\) is constant, and because \(g(t_0) = 1\), we must have \(g(t)=1\) for all \(t\). Therefore \(f(t) = Ce^{t-t_0}\). (How does the case where \(f(t) = 0\) for some \(t\) need to be handled?)
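None of this needs a computer, of course, but a quick numerical sanity check is easy. Here is a minimal forward-Euler sketch in plain Python (the step count is chosen ad hoc) confirming that the solution of \(y'=y\) with \(y(0)=2\) tracks \(2e^t\):

```python
import math

def euler(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n forward-Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y with y(0) = 2; the exact solution is y(t) = 2 e^t.
approx = euler(lambda t, y: y, 0.0, 2.0, 1.0, 100_000)
exact = 2.0 * math.e
print(approx, exact)  # the Euler value approaches 2e as n grows
```

For this equation the Euler iterates are exactly \(2(1 + 1/n)^n\), so convergence to \(2e\) is just the classical limit defining \(e\).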

The form of the second equation suggests that its solutions, once they get above \(y = 1\), should grow *faster* than exponential functions, because the growth rate depends quadratically on \(y\) rather than linearly. A bit of thought suggests the solution \(y(t) = -\frac{1}{t}\); by translating in the \(t\)-direction, we can satisfy any nonzero initial condition \(y(t_0) = C\) with a solution of the form \(y(t) = -\frac{1}{t-a}\), where \(a = t_0 + \frac{1}{C}\). (The initial condition \(C = 0\) is satisfied by the constant solution \(y \equiv 0\).) Again, the solution is uniquely specified by its initial condition. But the nonzero solutions are no longer *globally* defined; each one “blows up” at the finite time \(t = a\), either before or after the point at which we specify the initial condition, depending on the sign of \(C\). This shows that in general we cannot expect global solutions to ODEs; the usual existence theorem only guarantees *local* existence of solutions (although global existence can be guaranteed by stronger hypotheses, such as a global Lipschitz condition, which are rarely satisfied).
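The blow-up is easy to see numerically as well. As a hedged sketch (forward Euler again, with an ad hoc step count), here is the solution of \(y' = y^2\), \(y(0) = 1\), integrated up to \(t = 0.9\) and compared against the exact solution \(y(t) = \frac{1}{1-t}\), which blows up at \(t = 1\):

```python
def euler(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n forward-Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y^2 with y(0) = 1; exact solution y(t) = 1/(1 - t), blowing up at t = 1.
approx = euler(lambda t, y: y * y, 0.0, 1.0, 0.9, 100_000)
exact = 1.0 / (1.0 - 0.9)  # = 10
print(approx, exact)
```

Pushing the endpoint closer to \(t = 1\) makes the computed values explode, which is the numerical shadow of the finite-time blow-up.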

The third equation looks similar to the second, but squaring the \(y'\) term rather than \(y\) has some major consequences. First, the equation implies that solutions must be non-negative. This is not serious, however; changing the equation slightly to \((y')^2 = |y|\) allows solutions to be negative. More serious is the fact that both of the following functions are solutions that satisfy the initial condition \(y(t_0) = 0\):
\[
y(t) = 0 \hspace{1in}
y(t) = \frac14(t - t_0)^2.
\]
Indeed, by piecing together these two types of solutions, we can obtain solutions that remain 0 for any length of time, then “take off” unexpectedly. In short, we have existence, but we definitely do *not* have uniqueness, because as soon as a solution reaches zero, it can remain zero for an indeterminate amount of time. (We do have *local uniqueness* for solutions that have a non-zero initial condition, however.) The issue is that, writing the equation as \(y' = \pm\sqrt{y}\), the right-hand side changes too quickly as a function of \(y\) near zero; more precisely, the function \(\sqrt{|y|}\) is not Lipschitz on any interval containing 0.
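The pieced-together solutions above can be checked directly. Here is a small Python sketch (the takeoff times and sample points are arbitrary choices) that builds two such solutions with the same initial condition \(y(0) = 0\) and verifies, via a central-difference estimate of \(y'\), that both satisfy \((y')^2 = y\) at sampled points:

```python
def pieced(a):
    """A solution of (y')^2 = y that stays 0 until time a, then takes off."""
    def y(t):
        return 0.0 if t <= a else 0.25 * (t - a) ** 2
    return y

def residual(y, t, h=1e-5):
    """|(y')^2 - y| at t, with y' estimated by a central difference."""
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return abs(dy * dy - y(t))

y1 = pieced(2.0)  # remains zero until t = 2
y2 = pieced(5.0)  # remains zero until t = 5
# Distinct solutions, yet both satisfy the initial condition y(0) = 0.
checks = [residual(y1, t) for t in (0.0, 1.0, 3.0, 7.0)]
checks += [residual(y2, t) for t in (0.0, 1.0, 6.0, 7.0)]
print(max(checks))  # every sampled residual is near zero
```

Note that each pieced solution is still continuously differentiable at its takeoff time \(a\), since \(\frac14(t-a)^2\) has derivative 0 there; the non-uniqueness is not an artifact of gluing.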

As I said, my purpose here is not to expound the statement or proof of any theorem on existence and uniqueness, just to provide some simple examples that illustrate what considerations must be made in formulating such a theorem.
