The Local Inverse Function Theorem

In "A Local Inverse Function Theorem" (Victoria Symposium on Nonstandard Analysis, Springer-Verlag Lecture Notes in Mathematics, vol. 369, 1974), Michael Behrens noticed that the inverse function theorem is true for a function with a uniform derivative at just one point.  (It is NOT true for a pointwise derivative.)  Specifically, condition (d) of the Uniform Differentiability Theorem makes the intuitive proof of Section 1 work.

Theorem: The Inverse Function Theorem

If m is a nonzero real number and the real function f[x] is defined for all x ≈ x_0, where x_0 is real and y_0 = f[x_0], and f[x] satisfies

(f[x_2] - f[x_1])/(x_2 - x_1) ≈ m whenever x_1 ≈ x_2 ≈ x_0

then f[x] has an inverse function in a small neighborhood of x_0.  That is, there is a real number Δ > 0 and a smooth real function g[y] defined when |y - y_0| < Δ with f[g[y]] = y, and there is a real ε > 0 such that if |x - x_0| < ε, then |f[x] - y_0| < Δ and g[f[x]] = x.
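
As a concrete illustration of the hypothesis (not part of the original argument), the sketch below checks numerically that difference quotients of a sample function cluster around m near x_0.  The function f(x) = x^3 + 2x, the base point x_0 = 0, the slope m = 2, and the step size h are all assumptions chosen for this example.

    # Minimal numerical sketch of the uniform-slope hypothesis (sample data, not from the text).
    def f(x):
        return x**3 + 2.0 * x          # sample smooth function with f'(0) = 2

    x0, m = 0.0, 2.0                   # base point and the slope m of the theorem
    h = 1e-4                           # a small real step standing in for "infinitely close"
    points = [x0 + t * h for t in (-1.0, -0.4, 0.3, 0.8, 1.0)]
    for x1 in points:
        for x2 in points:
            if x1 != x2:
                q = (f(x2) - f(x1)) / (x2 - x1)
                assert abs(q - m) < 1e-6   # every nearby difference quotient is close to m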

Proof

This proof introduces a "permanence principle."  When a logical real formula is true for all infinitesimals, it must remain true out to some positive real number.  We know that the statement

|x - x_0| < δ ⇒ f[x] is defined

is true whenever δ ≈ 0.  Suppose that for every positive real number Δ there were a real point r with |r - x_0| < Δ where f[r] was not defined.  We could define a real function U[Δ] = r.  Then the logical real statement

Δ > 0 ⇒ (r = U[Δ], |r - x_0| < Δ, f[r] is undefined)

is true.  The Function Extension Axiom means it must also be true with Δ = δ ≈ 0, a contradiction.  Hence, there is a positive real Δ so that f[x] is defined whenever |x - x_0| < Δ.

We complete the proof of the Inverse Function Theorem by a permanence principle on the domain of y-values where we can invert f[x].  The intuitive proof of Section 1 shows that whenever |y - y_0| < δ ≈ 0, we have |x_1 - x_0| ≈ 0, and for every natural n and k,

|x_n - x_0| < 2 |x_1 - x_0|,   |x_{n+k} - x_k| < (1/2^{k-1}) |x_1 - x_0|,   f[x_n] is defined,   |y - f[x_{n+1}]| < (1/2) |y - f[x_n]|

Recall that we re-focus our infinitesimal microscope after each step in the recursion.  The term y - f[x_{n+1}] is the error at the n-th step of solving the linear equation rather than the nonlinear one, and we cannot see this error at the scale of our microscope, |y - f[x_n]|.  Technically, we write the differential approximation

f[x_{n+1}] = f[x_n + (1/m)(y - f[x_n])] = f[x_n] + m · (1/m)(y - f[x_n]) + ι · (1/m)(y - f[x_n])

f[x_{n+1}] - y = ι · (1/m)(y - f[x_n]), with ι ≈ 0

Now by the permanence principle, there is a real Δ > 0 so that whenever |y - y_0| < Δ, the properties above hold, making the sequence x_n convergent.  Define g[y] = lim_{n→∞} x_n.
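
A short numerical sketch of the recursion behind g[y] (an illustration only; the sample function, base point, and step count are assumptions, not part of the original): starting at x_0, repeat the fixed-slope update x_{n+1} = x_n + (y - f[x_n])/m.  The residual y - f[x_n] shrinks by at least half each step, and the limiting value plays the role of g[y].

    # Successive approximation x_{n+1} = x_n + (y - f(x_n))/m (sample data, not from the text).
    def f(x):
        return x**3 + 2.0 * x            # same sample function, with slope m = 2 at x0 = 0

    x0, y0, m = 0.0, 0.0, 2.0            # base point, y0 = f(x0), and the uniform slope

    def g(y, steps=30):
        x = x0
        for _ in range(steps):
            x = x + (y - f(x)) / m       # one fixed-slope Newton-type step
        return x                         # after many steps, x approximates g[y]

    y = y0 + 0.01                        # a y-value close to y0
    x = g(y)
    assert abs(f(x) - y) < 1e-12         # f[g[y]] = y to machine precision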

For more details, see pp. 66-68.

