
In mathematics, Grönwall's inequality (also called Grönwall's lemma or the Grönwall-Bellman inequality) allows one to bound a function that is known to satisfy a certain differential or integral inequality by the solution of the corresponding differential or integral equation. There are two forms of the lemma, a differential form and an integral form. For the latter there are several variants.

Grönwall's inequality is an important tool to obtain various estimates in the theory of ordinary and stochastic differential equations. In particular, it provides a comparison theorem that can be used to prove uniqueness of a solution to the initial value problem; see the Picard-Lindelöf theorem.

It is named for Thomas Hakon Grönwall (1877-1932). Grönwall is the Swedish spelling of his name, but he spelled his name as Gronwall in his scientific publications after emigrating to the United States.

The differential form was proven by Grönwall in 1919. The integral form was proven by Richard Bellman in 1943.

A nonlinear generalization of the Grönwall-Bellman inequality is known as Bihari-LaSalle inequality. Other variants and generalizations can be found in Pachpatte, B.G. (1998).





Differential form

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let β and u be real-valued continuous functions defined on I. If u is differentiable in the interior I° of I (the interval I without the end points a and possibly b) and satisfies the differential inequality

u'(t) \leq \beta(t)\,u(t), \qquad t \in I^{\circ},

then u is bounded by the solution of the corresponding differential equation y′(t) = β(t) y(t):

u(t) \leq u(a)\exp\biggl(\int_{a}^{t}\beta(s)\,\mathrm{d}s\biggr)

for all t ∈ I.

Remark: There are no assumptions on the signs of the functions β and u.
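
The following numerical sketch (in Python) is not part of the original statement; it merely illustrates the differential form. The particular β, the initial value u(a) and the slack term are arbitrary assumptions for this demonstration, and left Riemann sums are used so that the discrete analogue of the inequality holds exactly up to floating-point rounding.

```python
import numpy as np

# Illustration only: integrate a "subsolution" u'(t) = beta(t)*u(t) - |sin t|
# (so that u' <= beta*u holds) with forward Euler, and compare it with the
# Groenwall bound u(a) * exp( integral_a^t beta(s) ds ).
# beta, u(a) and the slack term -|sin t| are arbitrary assumptions; note that
# beta is allowed to change sign.

a, b, n = 0.0, 5.0, 50_000
t = np.linspace(a, b, n)
dt = t[1] - t[0]
beta = np.cos(t)

u = np.empty(n)
bound = np.empty(n)
u[0] = 2.0                      # u(a) > 0, chosen arbitrarily
running_integral = 0.0          # left Riemann sum of beta over [a, t_k]
for k in range(n - 1):
    bound[k] = u[0] * np.exp(running_integral)
    u[k + 1] = u[k] + dt * (beta[k] * u[k] - abs(np.sin(t[k])))
    running_integral += beta[k] * dt
bound[n - 1] = u[0] * np.exp(running_integral)

print("u stays below the Groenwall bound:", bool(np.all(u <= bound)))
```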

Proof

Define the function

v(t) = \exp\biggl(\int_{a}^{t}\beta(s)\,\mathrm{d}s\biggr), \qquad t \in I.

Note that v satisfies

v'(t) = \beta(t)\,v(t), \qquad t \in I^{\circ},

with v(a) = 1 and v(t) > 0 for all t ∈ I. By the quotient rule

\frac{d}{dt}\frac{u(t)}{v(t)} = \frac{u'(t)\,v(t) - v'(t)\,u(t)}{v^{2}(t)} = \frac{u'(t)\,v(t) - \beta(t)\,v(t)\,u(t)}{v^{2}(t)} \leq 0, \qquad t \in I^{\circ}.

Thus the derivative of the function u(t)/v(t) is non-positive, and the function is bounded above by its value at the initial point a of the interval I:

\frac{u(t)}{v(t)} \leq \frac{u(a)}{v(a)} = u(a), \qquad t \in I,

which is Grönwall's inequality.





Integral form for continuous functions

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let α, β and u be real-valued functions defined on I. Assume that β and u are continuous and that the negative part of α is integrable on every closed and bounded subinterval of I.

  • (a) If β is non-negative and if u satisfies the integral inequality
u(t) \leq \alpha(t) + \int_{a}^{t}\beta(s)u(s)\,\mathrm{d}s, \qquad \forall t \in I,
then
u(t) \leq \alpha(t) + \int_{a}^{t}\alpha(s)\beta(s)\exp\biggl(\int_{s}^{t}\beta(r)\,\mathrm{d}r\biggr)\,\mathrm{d}s, \qquad t \in I.
  • (b) If, in addition, the function α is non-decreasing, then
u(t) \leq \alpha(t)\exp\biggl(\int_{a}^{t}\beta(s)\,\mathrm{d}s\biggr), \qquad t \in I.

Remarks:

  • There are no assumptions on the signs of the functions α and u.
  • Compared to the differential form, differentiability of u is not needed for the integral form.
  • For a version of Grönwall's inequality which does not need continuity of β and u, see the version in the next section.
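
As an illustration of part (b), the following Python sketch (not part of the original result; the concrete α, β and interval are arbitrary assumptions) constructs the "worst case" u satisfying the integral inequality with equality, i.e. a discretised Volterra equation, and compares it with the bound of part (b).

```python
import numpy as np

# Illustration of part (b): construct the "worst case" u satisfying the
# integral inequality with equality (a discretised Volterra equation)
#     u(t) = alpha(t) + integral_a^t beta(s) u(s) ds,
# and compare it with  alpha(t) * exp( integral_a^t beta(s) ds ).
# alpha, beta and the interval below are arbitrary assumptions for this demo.

a, b, n = 0.0, 3.0, 30_000
t = np.linspace(a, b, n)
dt = t[1] - t[0]

alpha = 1.0 + t**2          # non-negative and non-decreasing (needed for (b))
beta = 2.0 + np.sin(t)      # non-negative

u = np.empty(n)
bound = np.empty(n)
integral_bu = 0.0           # running left Riemann sum of beta * u
integral_b = 0.0            # running left Riemann sum of beta
for k in range(n):
    u[k] = alpha[k] + integral_bu            # equality case of the hypothesis
    bound[k] = alpha[k] * np.exp(integral_b)
    integral_bu += beta[k] * u[k] * dt
    integral_b += beta[k] * dt

print("Groenwall bound respected:", bool(np.all(u <= bound)))
```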

Proof

(a) Define

v(s) = \exp\biggl(-\int_{a}^{s}\beta(r)\,\mathrm{d}r\biggr)\int_{a}^{s}\beta(r)u(r)\,\mathrm{d}r, \qquad s \in I.

Using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, we obtain for the derivative

v'(s) = \biggl(\underbrace{u(s) - \int_{a}^{s}\beta(r)u(r)\,\mathrm{d}r}_{\leq\,\alpha(s)}\biggr)\beta(s)\exp\biggl(-\int_{a}^{s}\beta(r)\,\mathrm{d}r\biggr), \qquad s \in I,

where we used the assumed integral inequality for the upper estimate. Since β and the exponential are non-negative, this gives an upper estimate for the derivative of v. Since v(a) = 0, integration of this inequality from a to t gives

v(t) \leq \int_{a}^{t}\alpha(s)\beta(s)\exp\biggl(-\int_{a}^{s}\beta(r)\,\mathrm{d}r\biggr)\,\mathrm{d}s.

Using the definition of v(t) for the first step, and then this inequality and the functional equation of the exponential function, we obtain

\begin{aligned}\int_{a}^{t}\beta(s)u(s)\,\mathrm{d}s &= \exp\biggl(\int_{a}^{t}\beta(r)\,\mathrm{d}r\biggr)v(t)\\ &\leq \int_{a}^{t}\alpha(s)\beta(s)\exp\biggl(\underbrace{\int_{a}^{t}\beta(r)\,\mathrm{d}r - \int_{a}^{s}\beta(r)\,\mathrm{d}r}_{=\,\int_{s}^{t}\beta(r)\,\mathrm{d}r}\biggr)\,\mathrm{d}s.\end{aligned}

Substituting this result into the assumed integral inequality gives Grönwall's inequality.

(b) If the function α is non-decreasing, then part (a), the fact α(s) ≤ α(t), and the fundamental theorem of calculus imply that

\begin{aligned}u(t) &\leq \alpha(t) + \biggl(-\alpha(t)\exp\biggl(\int_{s}^{t}\beta(r)\,\mathrm{d}r\biggr)\biggr)\biggr|_{s=a}^{s=t}\\ &= \alpha(t)\exp\biggl(\int_{a}^{t}\beta(r)\,\mathrm{d}r\biggr), \qquad t \in I.\end{aligned}



Integral form with locally finite measures

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let α and u be measurable functions defined on I and let μ be a non-negative measure on the Borel σ-algebra of I satisfying μ([a, t]) < ∞ for all t ∈ I (this is certainly satisfied when μ is a locally finite measure). Assume that u is integrable with respect to μ in the sense that

\int_{[a,t)}|u(s)|\,\mu(\mathrm{d}s) < \infty, \qquad t \in I,

and that u satisfies the integral inequality

u(t) \leq \alpha(t) + \int_{[a,t)}u(s)\,\mu(\mathrm{d}s), \qquad t \in I.

If, in addition,

  • the function α is non-negative or
  • the function t ↦ μ([a, t]) is continuous for t ∈ I and the function α is integrable with respect to μ in the sense that
\int_{[a,t)}|\alpha(s)|\,\mu(\mathrm{d}s) < \infty, \qquad t \in I,

then u satisfies Grönwall's inequality

u(t) \leq \alpha(t) + \int_{[a,t)}\alpha(s)\exp\bigl(\mu(I_{s,t})\bigr)\,\mu(\mathrm{d}s)

for all t ∈ I, where Is,t denotes the open interval (s, t).

Remarks

  • There are no continuity assumptions on the functions α and u.
  • The integral in Grönwall's inequality is allowed to give the value infinity.
  • If α is the zero function and u is non-negative, then Grönwall's inequality implies that u is the zero function.
  • The integrability of u with respect to μ is essential for the result. For a counterexample, let μ denote Lebesgue measure on the unit interval [0, 1], define u(0) = 0 and u(t) = 1/t for t ∈ (0, 1], and let α be the zero function.
  • The version given in the textbook by S. Ethier and T. Kurtz makes the stronger assumptions that α is a non-negative constant and u is bounded on bounded intervals, but does not assume that the measure μ is locally finite. Compared to the one given below, their proof does not discuss the behaviour of the remainder Rn(t).
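
Since only μ([a, t]) < ∞ is required, the statement covers in particular purely atomic measures, for which the integral inequality turns into a discrete recursion. The following Python sketch is an illustration only: the atoms, weights and the function α are arbitrary assumptions, and u is taken to satisfy the hypothesis with equality before being checked against the bound above.

```python
import numpy as np

# Illustration of the measure version for a purely atomic measure
#     mu = sum_k c_k * delta_{t_k}.
# The hypothesis then reads  u(t) <= alpha(t) + sum_{t_k < t} c_k * u(t_k),
# and the conclusion reads
#     u(t) <= alpha(t) + sum_{t_k < t} c_k * alpha(t_k) * exp(mu((t_k, t))).
# Atoms, weights and alpha below are arbitrary assumptions for this demo.

rng = np.random.default_rng(0)
atoms = np.sort(rng.uniform(0.0, 1.0, size=20))   # atom locations t_k
weights = rng.uniform(0.0, 0.5, size=20)          # non-negative weights c_k

def alpha(x):
    return 1.0 + x                                # non-negative alpha

def mu_open_interval(s, t):
    # mu((s, t)) for the atomic measure
    return weights[(atoms > s) & (atoms < t)].sum()

# u satisfying the hypothesis with equality, evaluated at the (sorted) atoms
u_at_atoms = np.empty(len(atoms))
for k, tk in enumerate(atoms):
    u_at_atoms[k] = alpha(tk) + np.sum(weights[:k] * u_at_atoms[:k])

def u(t):
    mask = atoms < t
    return alpha(t) + np.sum(weights[mask] * u_at_atoms[mask])

def groenwall_bound(t):
    mask = atoms < t
    return alpha(t) + sum(
        c * alpha(tk) * np.exp(mu_open_interval(tk, t))
        for tk, c in zip(atoms[mask], weights[mask])
    )

ts = np.linspace(0.0, 1.5, 200)
print("bound holds everywhere:", all(u(x) <= groenwall_bound(x) for x in ts))
```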

Special cases

  • If the measure μ has a density β with respect to Lebesgue measure, then Grönwall's inequality can be rewritten as
u(t) \leq \alpha(t) + \int_{a}^{t}\alpha(s)\beta(s)\exp\biggl(\int_{s}^{t}\beta(r)\,\mathrm{d}r\biggr)\,\mathrm{d}s, \qquad t \in I.
  • If the function α is non-negative and the density β of μ is bounded by a constant c, then
u(t) \leq \alpha(t) + c\int_{a}^{t}\alpha(s)\exp\bigl(c(t-s)\bigr)\,\mathrm{d}s, \qquad t \in I.
  • If, in addition, the non-negative function α is non-decreasing, then
u(t) \leq \alpha(t) + c\,\alpha(t)\int_{a}^{t}\exp\bigl(c(t-s)\bigr)\,\mathrm{d}s = \alpha(t)\exp\bigl(c(t-a)\bigr), \qquad t \in I.
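
The closed-form evaluation of the integral in the last special case can be verified symbolically; the following SymPy sketch (illustration only, with α kept abstract and the symbols assumed positive for definiteness) checks the identity used in the last display.

```python
import sympy as sp

# Symbolic check of the last identity:
#   alpha(t) + c*alpha(t) * Integral(exp(c*(t - s)), (s, a, t))
#     == alpha(t) * exp(c*(t - a)).
# alpha is kept abstract; a, c, t are symbols assumed positive (a sketch only).

a, c, t, s = sp.symbols('a c t s', positive=True)
alpha = sp.Function('alpha')

lhs = alpha(t) + c * alpha(t) * sp.integrate(sp.exp(c * (t - s)), (s, a, t))
rhs = alpha(t) * sp.exp(c * (t - a))

print(sp.simplify(lhs - rhs))   # expected output: 0
```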

Outline of proof

The proof is divided into three steps. The idea is to substitute the assumed integral inequality into itself n times. This is done in Claim 1 using mathematical induction. In Claim 2 we rewrite the measure of a simplex in a convenient form, using the permutation invariance of product measures. In the third step we pass to the limit n → ∞ to derive the desired variant of Grönwall's inequality.

Detailed proof

Claim 1: Iterating the inequality

For every natural number n including zero,

u(t) \leq \alpha(t) + \int_{[a,t)}\alpha(s)\sum_{k=0}^{n-1}\mu^{\otimes k}\bigl(A_{k}(s,t)\bigr)\,\mu(\mathrm{d}s) + R_{n}(t)

with remainder

R_{n}(t) := \int_{[a,t)}u(s)\,\mu^{\otimes n}\bigl(A_{n}(s,t)\bigr)\,\mu(\mathrm{d}s), \qquad t \in I,

where

A_{n}(s,t) = \{(s_{1},\ldots,s_{n}) \in I_{s,t}^{n} \mid s_{1} < s_{2} < \cdots < s_{n}\}, \qquad n \geq 1,

is an n-dimensional simplex and

\mu^{\otimes 0}\bigl(A_{0}(s,t)\bigr) := 1.

Proof of Claim 1

We use mathematical induction. For n = 0 this is just the assumed integral inequality, because the empty sum is defined as zero.

Induction step from n to n + 1: Inserting the assumed integral inequality for the function u into the remainder gives

R_{n}(t) \leq \int_{[a,t)}\alpha(s)\,\mu^{\otimes n}\bigl(A_{n}(s,t)\bigr)\,\mu(\mathrm{d}s) + \tilde{R}_{n}(t)

with

\tilde{R}_{n}(t) := \int_{[a,t)}\biggl(\int_{[a,q)}u(s)\,\mu(\mathrm{d}s)\biggr)\mu^{\otimes n}\bigl(A_{n}(q,t)\bigr)\,\mu(\mathrm{d}q), \qquad t \in I.

Using the Fubini-Tonelli theorem to interchange the two integrals, we obtain

\tilde{R}_{n}(t) = \int_{[a,t)}u(s)\underbrace{\int_{(s,t)}\mu^{\otimes n}\bigl(A_{n}(q,t)\bigr)\,\mu(\mathrm{d}q)}_{=\,\mu^{\otimes (n+1)}(A_{n+1}(s,t))}\,\mu(\mathrm{d}s) = R_{n+1}(t), \qquad t \in I.

Hence Claim 1 is proved for n + 1.

Claim 2: Measure of the simplex

For every natural number n including zero and all s < t in I

\mu^{\otimes n}\bigl(A_{n}(s,t)\bigr) \leq \frac{\bigl(\mu(I_{s,t})\bigr)^{n}}{n!}

with equality in case t ↦ μ([a, t]) is continuous for t ∈ I.

Proof of Claim 2

For n = 0, the claim is true by our definitions. Therefore, consider n ≥ 1 in the following.

Let Sn denote the set of all permutations of the indices in {1, 2, . . . , n}. For every permutation σ ∈ Sn define

A_{n,\sigma}(s,t) = \{(s_{1},\ldots,s_{n}) \in I_{s,t}^{n} \mid s_{\sigma(1)} < s_{\sigma(2)} < \cdots < s_{\sigma(n)}\}.

These sets are disjoint for different permutations and

\bigcup_{\sigma \in S_{n}}A_{n,\sigma}(s,t) \subset I_{s,t}^{n}.

Therefore,

\sum_{\sigma \in S_{n}}\mu^{\otimes n}\bigl(A_{n,\sigma}(s,t)\bigr) \leq \mu^{\otimes n}\bigl(I_{s,t}^{n}\bigr) = \bigl(\mu(I_{s,t})\bigr)^{n}.

Since they all have the same measure with respect to the n-fold product of μ, and since there are n! permutations in Sn, the claimed inequality follows.

Assume now that t ↦ μ([a, t]) is continuous for t ∈ I. Then, for different indices i, j ∈ {1, 2, . . . , n}, the set

\{(s_{1},\ldots,s_{n}) \in I_{s,t}^{n} \mid s_{i} = s_{j}\}

is contained in a hyperplane, hence by an application of Fubini's theorem its measure with respect to the n-fold product of μ is zero. Since

I_{s,t}^{n} \subset \bigcup_{\sigma \in S_{n}}A_{n,\sigma}(s,t) \cup \bigcup_{1\leq i<j\leq n}\{(s_{1},\ldots,s_{n}) \in I_{s,t}^{n} \mid s_{i} = s_{j}\},

the claimed equality follows.

Proof of Grönwall's inequality

For every natural number n, Claim 2 implies for the remainder of Claim 1 that

|R_{n}(t)| \leq \frac{\bigl(\mu(I_{a,t})\bigr)^{n}}{n!}\int_{[a,t)}|u(s)|\,\mu(\mathrm{d}s), \qquad t \in I.

By assumption we have μ(Ia,t) < ∞. Hence, the integrability assumption on u implies that

\lim_{n\to\infty}R_{n}(t) = 0, \qquad t \in I.

Claim 2 and the series representation of the exponential function imply the estimate

\sum_{k=0}^{n-1}\mu^{\otimes k}\bigl(A_{k}(s,t)\bigr) \leq \sum_{k=0}^{n-1}\frac{\bigl(\mu(I_{s,t})\bigr)^{k}}{k!} \leq \exp\bigl(\mu(I_{s,t})\bigr)

for all s < t in I. If the function α is non-negative, then it suffices to insert these results into Claim 1 to derive the above variant of Grönwall's inequality for the function u.

In case t ↦ μ([a, t]) is continuous for t ∈ I, Claim 2 gives

\sum_{k=0}^{n-1}\mu^{\otimes k}\bigl(A_{k}(s,t)\bigr) = \sum_{k=0}^{n-1}\frac{\bigl(\mu(I_{s,t})\bigr)^{k}}{k!} \to \exp\bigl(\mu(I_{s,t})\bigr) \qquad \text{as } n \to \infty

and the integrability of the function α allows one to use the dominated convergence theorem to derive Grönwall's inequality.




See also

  • Logarithmic norm, for a version of Gronwall's lemma that gives upper and lower bounds to the norm of the state transition matrix.

This article incorporates material from Gronwall's lemma on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
