Computational Theory of Iterative Methods

In Studies in Computational Mathematics, 2007

Lemma 8.6.2

(a)

If Lipschitz condition (8.6.4) holds then

(8.6.13) ](F(x + y) − F(x))(z)[ ⩽ (w(r + ]y[) − w(r))(]z[)

for all r, r + ]y[ ∈ S, x ∈ Ū(x₀, r), z ∈ X;
(b)

If Lipschitz condition (8.6.8) holds then

]G(x + y) − G(x)[ ⩽ ∫_r^{r + ]y[} w₁(t) dt for all r, r + ]y[ ∈ S, x ∈ Ū(x₀, r).

Denote by Fix (f) the set of all fixed points of the operator f.

URL: https://www.sciencedirect.com/science/article/pii/S1570579X07800280

Special Volume: Mathematical Modeling and Numerical Methods in Finance

Denis Talay, INRIA, in Handbook of Numerical Analysis, 2009

Theorem 5.1

Under an appropriate locally Lipschitz condition on F, the value function V(t, x, p) is the unique viscosity solution in the space

S := {φ(t, x, p) continuous on [0, T] × ℝⁿ × ℝ : ∃ Ā > 0 such that lim_{|p|² + ‖x‖² → ∞} φ(t, x, p) exp(−Ā |log(|p|² + ‖x‖²)|²) = 0 for all t ∈ [0, T]}

to the Hamilton–Jacobi–Bellman–Isaacs equation

∂υ/∂t(t, x, p) + H⁻(D²υ(t, x, p), Dυ(t, x, p), x, p) = 0 in [0, T) × ℝⁿ⁺¹,
υ(T, x, p) = F(x, p),

where

H⁻(A, z, x, p) := max_{u ∈ K_u} min_{π ∈ K_π} [½ Tr(a(x, p, σ, π) A) + z · q(x, p, b, π)].

For a proof, see Talay and Zheng [2002]. The numerical resolution of the PDE allows one to compute approximate reserve amounts of money to control model risk. Numerical investigations, which have not yet been carried out, are necessary to evaluate how large these provisions are.
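The excerpt does not specify a discretization; as a purely illustrative sketch (not the scheme of Talay and Zheng [2002]), the following Python code applies explicit finite differences to a one-dimensional toy equation of the same max–min form, with hypothetical control sets, coefficients, and terminal payoff:

import numpy as np

T, xmax, nx = 1.0, 5.0, 201
x = np.linspace(-xmax, xmax, nx)
dx = x[1] - x[0]

U = np.linspace(-1.0, 1.0, 5)        # hypothetical control set K_u (maximizer)
PI = np.linspace(0.1, 0.5, 5)        # hypothetical control set K_pi (minimizer)
b = lambda u: u                      # hypothetical drift coefficient
sig = lambda p: p                    # hypothetical diffusion coefficient
F = lambda xx: np.maximum(xx, 0.0)   # hypothetical terminal condition v(T, x) = F(x)

# time step obeying an explicit-scheme stability (CFL) restriction
dt = 0.4 / (sig(PI.max()) ** 2 / dx ** 2 + np.abs(b(U)).max() / dx)
nt = int(np.ceil(T / dt))
dt = T / nt

v = F(x)
for _ in range(nt):                  # march backwards from t = T to t = 0
    vx = np.gradient(v, dx)          # central first derivative
    vxx = np.zeros_like(v)
    vxx[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx ** 2
    # discrete Isaacs Hamiltonian: max over u of min over pi
    H = np.max([np.min([0.5 * sig(p) ** 2 * vxx + b(u) * vx for p in PI], axis=0)
                for u in U], axis=0)
    v = v + dt * H                   # v_t = -H, so stepping t -> t - dt adds dt*H
    v[0], v[-1] = F(x[0]), F(x[-1])  # crude Dirichlet boundary values

print("v(0, 0) ~", v[nx // 2])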

URL: https://www.sciencedirect.com/science/article/pii/S157086590800001x

THE METHOD OF STEEPEST DESCENT

L.V. KANTOROVICH, G.P. AKILOV, in Functional Analysis (Second Edition), 1982

Theorem 1.

Suppose the derivative of Φ satisfies the Lipschitz condition (3) in the ball with centre at the origin and radius R′, where R′ > R = sup_{x∈Ω₀} ‖x‖. Then the sequence {x_n} constructed according to (4) satisfies Φ′(x_n) → 0.

Proof. Suppose 0 < α ⩽ R′ − R. Then, by the definition of the descent value ε_n, we have Φ(x_{n−1} + αz_n) ⩾ Φ(x_n). Using Lemma 2, we obtain

Φ(x_n) ⩽ Φ(x_{n−1} + αz_n) ⩽ Φ(x_{n−1}) + α Φ′(x_{n−1})(z_n) + ½ L α²

(where L is the constant appearing in the Lipschitz condition). It follows from this inequality and (2) that

‖Φ′(x_{n−1})‖ ⩽ (Φ(x_{n−1}) − Φ(x_n))/α + ½ L α.

Let ε be any positive number and suppose α < min(ε, R′ − R). The boundedness of Ω₀ and the Lipschitz condition imply that Φ is bounded below. Since the sequence Φ(x_n) is decreasing (by construction) and bounded, it converges, so for sufficiently large n we have (1/α)(Φ(x_{n−1}) − Φ(x_n)) < ε. For such n, we have

‖Φ′(x_{n−1})‖ ⩽ ε(1 + ½ L),

and this proves the theorem.
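As a minimal numerical companion (ours, not the book's), the following Python sketch runs fixed-step descent on a hypothetical strongly convex quadratic, whose gradient is Lipschitz with constant L = λ_max(A), and prints ‖Φ′(x_n)‖ tending to 0 as the theorem asserts:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)                   # positive definite, so Phi is convex
b = rng.standard_normal(5)

Phi = lambda x: 0.5 * x @ A @ x - b @ x   # hypothetical functional Phi
grad = lambda x: A @ x - b                # Phi'(x)
L = np.linalg.eigvalsh(A)[-1]             # Lipschitz constant of Phi'

x = np.zeros(5)
alpha = 1.0 / L                           # fixed step consistent with the Lipschitz bound
for n in range(1, 101):
    x = x - alpha * grad(x)               # steepest-descent update
    if n % 20 == 0:
        print(f"n = {n:3d}   ||Phi'(x_n)|| = {np.linalg.norm(grad(x)):.3e}")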

URL: https://www.sciencedirect.com/science/article/pii/B9780080230368500217

Finite-Dimensional Optimization

Alexander S. Poznyak, in Advanced Mathematical Tools for Automatic Control Engineers: Deterministic Techniques, Volume 1, 2008

Corollary 21.1

(a)

If ∇f(x) satisfies the Lipschitz condition on [x, x + y], that is,

(21.7) ‖∇f(u) − ∇f(υ)‖ ⩽ L_f ‖u − υ‖, ∀ u, υ ∈ [x, x + y],

then for all x, y ∈ ℝⁿ

(21.8) |f(x + y) − f(x) − (∇f(x), y)| ⩽ (L_f/2) ‖y‖²

(b)

If for all x, y ∈ ℝⁿ

(21.9) ‖∇²f(x + τy)‖ ⩽ L₂, τ ∈ [0, 1],

then for all x, y ∈ ℝⁿ

(21.10) |f(x + y) − f(x) − (∇f(x), y)| ⩽ (L₂/2) ‖y‖²

(c)

If for all x, y ∈ ℝⁿ

(21.11) ‖∇²f(x + y) − ∇²f(x)‖ ⩽ L₂ ‖y‖,

then for all x, y ∈ ℝⁿ

(21.12) |f(x + y) − f(x) − (∇f(x), y) − ½(∇²f(x)y, y)| ⩽ (L₂/6) ‖y‖³

Proof

The inequality (21.8) follows directly from (21.3) and (21.7) if we take into account that

|∫₀¹ (∇f(x + τy) − ∇f(x), y) dτ| ⩽ ∫₀¹ |(∇f(x + τy) − ∇f(x), y)| dτ ⩽ ∫₀¹ ‖∇f(x + τy) − ∇f(x)‖ ‖y‖ dτ ⩽ ∫₀¹ L_f τ ‖y‖² dτ = (L_f/2) ‖y‖²

The inequalities (21.10) and (21.12) result from (21.4), (21.9) and (21.11) since

|∫₀¹ ∫₀ᵗ (∇²f(x + τy) y, y) dτ dt| ⩽ ∫₀¹ ∫₀ᵗ |(∇²f(x + τy) y, y)| dτ dt ⩽ ∫₀¹ ∫₀ᵗ ‖∇²f(x + τy)‖ ‖y‖² dτ dt ⩽ ∫₀¹ ∫₀ᵗ L₂ ‖y‖² dτ dt = (L₂/2) ‖y‖²

and

|∫₀¹ ∫₀ᵗ ([∇²f(x + τy) − ∇²f(x)] y, y) dτ dt| ⩽ ∫₀¹ ∫₀ᵗ |([∇²f(x + τy) − ∇²f(x)] y, y)| dτ dt ⩽ ∫₀¹ ∫₀ᵗ ‖∇²f(x + τy) − ∇²f(x)‖ ‖y‖² dτ dt ⩽ ∫₀¹ ∫₀ᵗ L₂ ‖y‖³ τ dτ dt = (L₂/6) ‖y‖³

which proves the corollary.
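A quick numerical check (ours, not the book's) of the bound (21.10): for the hypothetical test function f(x) = ½xᵀAx + Σᵢ sin xᵢ the Hessian norm never exceeds ‖A‖₂ + 1, so the ratio of |f(x + y) − f(x) − (∇f(x), y)| to (L₂/2)‖y‖² should never exceed 1:

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T                                       # symmetric part of the Hessian

f = lambda x: 0.5 * x @ A @ x + np.sin(x).sum()   # hypothetical smooth test function
grad = lambda x: A @ x + np.cos(x)
L2 = np.linalg.norm(A, 2) + 1.0                   # ||Hessian|| <= ||A|| + 1 everywhere

worst = 0.0
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = abs(f(x + y) - f(x) - grad(x) @ y)
    worst = max(worst, lhs / (0.5 * L2 * (y @ y)))
print("max ratio lhs / ((L2/2)||y||^2) =", worst)  # stays below 1, as (21.10) requires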

URL: https://www.sciencedirect.com/science/article/pii/B9780080446745500249

Handbook of Differential Equations

Daniel Daners, in Handbook of Differential Equations: Stationary Partial Differential Equations, 2008

Assumption 6.3.1.

Suppose that Ω_n ⊂ ℝᴺ, n ∈ ℕ, are bounded open sets satisfying a Lipschitz condition. Let Ω be an open set and K ⊂ Ω̄ a compact set of capacity zero such that for every neighbourhood U of K there exists n₀ ∈ ℕ such that

(6.3.1) Ω̄ ∩ (Ω_n ∩ (Ω ∪ U)ᶜ)‾ = ∅ and Ω ⊂ Ω_n ∪ U

for all n > n 0. Moreover, assume that

lim_{n→∞} |Ω_n ∩ Ωᶜ| = 0.

Note that the first condition in (6.3.1) means that U allows us to separate Ω̄ from (Ω_n ∩ (Ω ∪ U)ᶜ)‾, which is the part of Ω̄_n outside Ω̄, as shown in Figure 6.4. Note that the above assumption also allows us to cut holes in Ω shrinking to a set of capacity zero as n → ∞. If (6.3.1) holds, then also

(6.3.2) ∂Ω ∩ Uᶜ ⊂ ∂Ω_n.

Fig. 6.4. Separation of the part of Ω_n outside Ω̄ from Ω̄ by U.

This means that ∂Ω is contained in ∂Ω_n except for a very small set. To see this, let x ∈ ∂Ω ∩ Uᶜ. If W is a small enough neighbourhood of x, then in particular W ∩ Ω ⊂ W ∩ Ω_n by (6.3.1), and so W ∩ Ω_n ≠ ∅ for every neighbourhood W of x. Moreover,

∂Ω ∩ Ω_n ∩ Uᶜ ⊂ Ωᶜ ∩ Ω_n ∩ Uᶜ ⊂ (Ω_n ∩ (Ω ∪ U)ᶜ)‾,

and because ∂Ω ⊂ Ω̄, (6.3.1) implies that ∂Ω ∩ Ω_n ∩ Uᶜ = ∅. Hence x ∉ Ω_n while x lies in the closure of Ω_n, so x ∈ ∂Ω_n, and (6.3.2) follows. We next use the above to construct a sequence of cutoff functions.

URL: https://www.sciencedirect.com/science/article/pii/S1874573308800186

BASIC ANALYSIS

G.M. PHILLIPS, P.J. TAYLOR, in Theory and Applications of Numerical Analysis (Second Edition), 1996

Lipschitz condition

Definition 2.12

A function f, defined on [a, b], is said to satisfy a Lipschitz condition on [a, b] if there exists a constant L > 0 such that

(2.12) |f(x₁) − f(x₂)| ⩽ L |x₁ − x₂|,

for all x₁, x₂ ∈ [a, b]. L is called the Lipschitz constant. □

We may deduce from (2.12) that if f satisfies a Lipschitz condition on [a, b], then f is uniformly continuous on [a, b].

Example 2.10

Consider f(x) = x² on [−1, 1]. Then

f(x₁) − f(x₂) = x₁² − x₂² = (x₁ − x₂)(x₁ + x₂).

Thus on [−1, 1]

|f(x₁) − f(x₂)| ⩽ 2 |x₁ − x₂|,

showing that f(x) = x² satisfies a Lipschitz condition on [−1, 1] with Lipschitz constant 2. □

Example 2.11

Consider f(x) = x^{1/2}, with x ⩾ 0. Then

f(x₁) − f(x₂) = x₁^{1/2} − x₂^{1/2} = (x₁ − x₂)/(x₁^{1/2} + x₂^{1/2}).

By taking x₁ and x₂ sufficiently close to 0, we can make 1/(x₁^{1/2} + x₂^{1/2}) as large as we please. Thus the function x^{1/2} does not satisfy a Lipschitz condition on any interval which includes x = 0. □

This last example demonstrates that not every continuous function satisfies a Lipschitz condition.
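As a numerical companion (ours, not the book's) to Examples 2.10 and 2.11, the following Python snippet estimates the smallest workable Lipschitz constant by sampling difference quotients; it returns roughly 2 for x² on [−1, 1], while for √x on [0, 1] the estimate grows without bound as more samples fall near 0:

import numpy as np

rng = np.random.default_rng(2)

def lipschitz_estimate(f, a, b, samples=100_000):
    """Largest sampled difference quotient |f(x1)-f(x2)| / |x1-x2| on [a, b]."""
    x1, x2 = rng.uniform(a, b, samples), rng.uniform(a, b, samples)
    m = x1 != x2                                  # guard against ties
    return np.max(np.abs(f(x1[m]) - f(x2[m])) / np.abs(x1[m] - x2[m]))

print(lipschitz_estimate(np.square, -1.0, 1.0))   # close to 2 (Example 2.10)
print(lipschitz_estimate(np.sqrt, 0.0, 1.0))      # large, unbounded (Example 2.11)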

URL: https://www.sciencedirect.com/science/article/pii/B9780125535601500033

Theory of Hp Spaces

In Pure and Applied Mathematics, 1970

Theorem 5.1 (Hardy–Littlewood).

Let f(z) be a function analytic in |z| < 1. Then f(z) is continuous in |z| ≤ 1 and f(e^{iθ}) ∈ Λ_α (0 < α ≤ 1), if and only if

f′(z) = O(1/(1 − r)^{1−α}).

PROOF. First let us dispose of the case α = 1. By Theorem 3.11, the Lipschitz condition on the boundary implies that f′ ∈ H¹ and that f′(e^{iθ}) ∈ L^∞. Hence f′ ∈ H^∞, by Theorem 2.11. Conversely, if f′ ∈ H^∞, another application of Theorem 3.11 shows that f(z) is continuous in |z| ≤ 1, f(e^{iθ}) is absolutely continuous, and (d/dθ)f(e^{iθ}) ∈ L^∞. Integration of the derivative therefore gives f(e^{iθ}) ∈ Λ₁.

Now let f(z) be continuous in |z| ≤ 1, and suppose f(e ) Λ α , 0 < α < 1. By the Cauchy formula,

f′(z) = (1/2π) ∫₀^{2π} [f(e^{it}) − f(e^{iθ})] e^{it}/(e^{it} − z)² dt,  z = re^{iθ}.

Thus

(3) |f′(z)| ≤ (1/2π) ∫_{−π}^{π} |f(e^{i(t+θ)}) − f(e^{iθ})| / (1 − 2r cos t + r²) dt.

Using the Lipschitz condition and the relation

1 − 2r cos t + r² = (1 − r)² + 4r sin²(t/2) ≥ (1 − r)² + 4r t²/π²,

0 ≤ t ≤ π, we find

|f′(z)| ≤ (A/2π) ∫_{−π}^{π} |t|^α dt / [(1 − r)² + 4r(t/π)²] = O(1/(1 − r)^{1−α}).

The assumption α < 1 assures the convergence of the integral

∫₀^∞ u^α du/(1 + u²),

which arises after the substitution u = t/(1 – r).

Conversely, suppose

|f′(z)| ≤ C/(1 − r)^{1−α},  0 < α < 1.

Then the radial limit

f(e^{iθ}) = f(0) + lim_{R→1} ∫₀^R f′(re^{iθ}) dr

exists for every θ. Furthermore, f ∈ H^∞, so f(z) is the Poisson integral of f(e^{iθ}). Hence the continuity of f(e^{iθ}) would imply the continuity of f(z) in |z| ≤ 1. We shall prove the continuity by showing f(e^{iθ}) ∈ Λ_α. For this purpose, choose θ and φ with 0 < φ − θ < 1. Fix ρ, 0 < ρ < 1, and let Γ be the contour consisting of the radial segment from e^{iθ} to ρe^{iθ}, the arc of the circle |z| = ρ from ρe^{iθ} to ρe^{iφ}, and the radial segment from ρe^{iφ} to e^{iφ}. (See Fig. 2.) Then

Figure 2.

f(e^{iφ}) − f(e^{iθ}) = ∫_Γ f′(ζ) dζ.

Breaking up the integral into its three components, we find

|f(e^{iφ}) − f(e^{iθ})| ≤ ∫_ρ¹ |f′(re^{iθ})| dr + ∫_ρ¹ |f′(re^{iφ})| dr + ∫_θ^φ |f′(ρe^{it})| dt ≤ 2C ∫_ρ¹ dr/(1 − r)^{1−α} + C(φ − θ)/(1 − ρ)^{1−α}.

With the choice ρ = 1 – (φθ), this gives

|f(e^{iφ}) − f(e^{iθ})| ≤ C(1 + 2/α) |φ − θ|^α,

and the proof is complete.

The method may be used to prove other theorems of the same type. We mention one example, leaving the proof as an exercise.
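As an illustration (ours, not Duren's), the following Python snippet tests the growth condition on the hypothetical function f(z) = (1 − z)^α, whose boundary values belong to Λ_α; the product max_θ |f′(re^{iθ})| · (1 − r)^{1−α} stays bounded (here constant, equal to α) as r → 1:

import numpy as np

alpha = 0.5
fprime = lambda z: -alpha * (1 - z) ** (alpha - 1.0)   # f(z) = (1 - z)^alpha, principal branch

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
for r in [0.9, 0.99, 0.999, 0.9999]:
    z = r * np.exp(1j * theta)
    M = np.abs(fprime(z)).max()                        # M(r) = max over the circle |z| = r
    print(f"r = {r:<7}  M(r)*(1-r)^(1-alpha) = {M * (1.0 - r) ** (1.0 - alpha):.4f}")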

URL: https://www.sciencedirect.com/science/article/pii/S0079816908626690

Random Matrices

In Pure and Applied Mathematics, 2004

Theorem 18.2.1

If f(θ) is a positive function over 0 ⩽ θ ⩽ 2π, its derivative satisfies a Lipschitz condition, and F_N(f) is the N × N Toeplitz determinant

(18.2.1) F_N(f) = det[(1/2π) ∫₀^{2π} f(θ) exp(−i(j − k)θ) dθ]_{j,k=1,…,N},

then, as N → ∞,

(18.2.2) log F_N(f) = N f₀ + (1/4) ∑_{k=1}^∞ k f_k f_{−k} + O(1),

where fk are the Fourier coefficients of log f(θ),

(18.2.3) log f(θ) = ∑_{k=−∞}^{∞} f_k e^{ikθ}.

Widom (1971) extended this theorem to functions f(θ) which are positive only on an arc of the unit circle.
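A numerical illustration (ours, not Mehta's): for the hypothetical symbol f(θ) = exp(b + a cos θ), the matrix entries in (18.2.1) are e^b I_{j−k}(a), with I_k the modified Bessel functions, and log f has Fourier coefficients f₀ = b, f_{±1} = a/2; by (18.2.2), the difference log F_N(f) − N f₀ should stay bounded as N grows:

import numpy as np
from scipy.linalg import toeplitz
from scipy.special import iv

a, b = 0.8, 0.3
for N in [5, 10, 20, 40, 80]:
    col = np.exp(b) * iv(np.arange(N), a)   # entries c_k = e^b * I_k(a); I_{-k} = I_k
    sign, logdet = np.linalg.slogdet(toeplitz(col))
    print(f"N = {N:3d}   log F_N - N*f_0 = {logdet - N * b:.6f}")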

URL: https://www.sciencedirect.com/science/article/pii/S0079816904801089

Differential Equations, Ordinary

Anthony N. Michel, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

III.C Uniqueness of Solutions

Next, we address the question of uniqueness of solutions. To accomplish this, we require the following concept: f ∈ C(D) is said to satisfy a Lipschitz condition on D (with respect to x) with Lipschitz constant L if

(15) |f(t, x) − f(t, x̄)| ≤ L |x − x̄|

for all (t, x), (t, x̄) in D. The function f is said to be Lipschitz continuous in x on D in this case.

For example, it can be shown that if ∂f(t, x)/∂x exists and is continuous on D, then f will be Lipschitz continuous on any compact and convex subset D₀ of D.

In order to establish a uniqueness result for solutions of the initial value problem (I′), we will also require a result known as the Gronwall inequality: let r and k be continuous nonnegative real functions defined on an interval [a, b] and let δ ≥ 0 be a constant. If

(16) r(t) ≤ δ + ∫_a^t k(s) r(s) ds,

then

(17) r(t) ≤ δ exp(∫_a^t k(s) ds).
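A tiny numerical sanity check (ours, not the article's) of (16)–(17) in the equality case: with constant k(s) ≡ L, the function r(t) = δ exp(Lt) satisfies (16) with equality, so the Gronwall bound (17) is sharp:

import numpy as np

Lk, delta = 2.0, 0.5                       # constant k(s) = Lk on [0, 1], a = 0
t = np.linspace(0.0, 1.0, 2001)
r = delta * np.exp(Lk * t)                 # candidate equality case of (17)
# cumulative trapezoidal value of int_0^t Lk * r(s) ds
integral = Lk * np.concatenate(([0.0],
    np.cumsum(0.5 * (r[1:] + r[:-1]) * np.diff(t))))
print(np.abs(delta + integral - r).max())  # ~1e-7: (16) holds with equality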

Now suppose that for (I′) the Cauchy–Peano theorem holds, and suppose that for one given (τ, ξ) ∈ D two solutions ϕ₁ and ϕ₂ exist over some interval |t − τ| ≤ d, d > 0. On the interval τ ≤ t ≤ τ + d we now have, using (V′) to express ϕ₁ and ϕ₂,

(18) ϕ₁(t) − ϕ₂(t) = ∫_τ^t [f(s, ϕ₁(s)) − f(s, ϕ₂(s))] ds.

Now if, in addition, f is Lipschitz continuous in x, then Eq. (18) yields

|ϕ₁(t) − ϕ₂(t)| ≤ ∫_τ^t L |ϕ₁(s) − ϕ₂(s)| ds.

Letting r(t) = |ϕ₁(t) − ϕ₂(t)|, δ = 0, and k(t) ≡ L, and applying the Gronwall inequality, we now obtain

|ϕ₁(t) − ϕ₂(t)| ≤ 0 for all τ ≤ t ≤ τ + d.

Hence, it must be true that ϕ₁(t) = ϕ₂(t) on τ ≤ t ≤ τ + d. A similar argument will also work for the interval τ − d ≤ t ≤ τ.

Summarizing, we have the following uniqueness result:

If f ∈ C(D) and if f satisfies a Lipschitz condition on D with Lipschitz constant L, then the initial-value problem (I′) has at most one solution on any interval |t − τ| ≤ d, d > 0.

If the solution ϕ of (I′) is unique, then the ε-approximate solutions constructed before will tend to ϕ as ε → 0⁺, and this is the basis for justifying Euler's method, a numerical method of constructing approximations to ϕ. Now, if we assume that f satisfies a Lipschitz condition, an alternative classical method of approximation is the method of successive approximations. Specifically, let f ∈ C(D), let S be the rectangle in D centered at (τ, ξ) shown in Fig. 5, and let c be defined as in Fig. 5. Successive approximations for (I′), or equivalently for (V′), are defined as

(19) ϕ₀(t) = ξ, ϕ_{m+1}(t) = ξ + ∫_τ^t f(s, ϕ_m(s)) ds, m = 0, 1, 2, …

for |t − τ| ≤ c.

The following result is the basis for justifying the method of successive approximations:

If f ∈ C(D) and if f is Lipschitz continuous on S with constant L, then the successive approximations ϕ_m, m = 0, 1, 2, …, given in Eq. (19) exist on |t − τ| ≤ c, are continuous there, and converge uniformly, as m → ∞, to the unique solution of (I′).
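A minimal sketch (ours, not the article's) of the successive approximations (19) for the scalar test problem x′ = x, x(0) = 1 (so τ = 0, ξ = 1), whose unique solution is e^t; each iterate is evaluated on a grid with a cumulative trapezoidal integral:

import numpy as np

t = np.linspace(0.0, 1.0, 1001)
f = lambda s, x: x                   # right-hand side of (I'); Lipschitz with L = 1
xi = 1.0                             # initial value at tau = 0

phi = np.full_like(t, xi)            # phi_0(t) = xi
for m in range(8):
    g = f(t, phi)
    integral = np.concatenate(([0.0],
        np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
    phi = xi + integral              # phi_{m+1}(t) = xi + int_0^t f(s, phi_m(s)) ds
    err = np.abs(phi - np.exp(t)).max()
    print(f"iterate {m + 1}: max deviation from e^t = {err:.2e}")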

URL: https://www.sciencedirect.com/science/article/pii/B0122274105001721

Ordinary Differential Equations

A. Rontó, M. Rontó, in Handbook of Differential Equations: Ordinary Differential Equations, 2008

4.2 Linear substitution of variables

It is clear that the simplest case of substitution (4.13) is the linear change of variables

(4.68) x(θ) = A(θ) y(θ) + b(θ), θ ∈ ℝ,

where A and b are T̃-periodic matrix- and vector-valued functions, respectively. We assume that these functions are continuously differentiable and such that

(4.69) det A(θ) ≠ 0, θ ∈ [0, T̃].

The change of variable (4.68) transforms the autonomous system (4.1) into the non-autonomous system

(4.70) dy/dθ = F(θ, y, μ),

where μ is given by (4.20) and

F(θ, y, μ) := A(θ)⁻¹ [μ f(A(θ)y + b(θ)) − A′(θ)y − b′(θ)].

The use of the linear substitution (4.68) instead of (4.13) simplifies the transformation of the autonomous system (4.1) to a non-autonomous one and the verification of conditions (4.23)–(4.26). In this case, the initial data (4.11) and (4.19), for which there may exist T̃-periodic solutions of (4.5) and (4.70), are related by the formula

(4.71) x₀(c) = A(0) y₀(c) + b(0),

whence, in view of (4.69),

(4.72) y₀(c) = A(0)⁻¹ [x₀(c) − b(0)].

Therefore, we can start our iteration (4.27) from y 0(c) given by (4.72).

The T-periodic solution of the autonomous system (4.1) is given by the formula

(4.73) x*(t) = A(t/μ*) y*(t/μ*, c*, μ*) + b(t/μ*),

where the pair (c*, μ*) is a solution of the determining equation (4.36). Similarly, the approximate solutions (4.56), (4.57) take the form

(4.74) x_m(t) = A(t/μ*) y_m(t/μ*, c*, μ*) + b(t/μ*)

and

(4.75) x̃_m(t) = A(t/μ_m) y_m(t/μ_m, c_m, μ_m) + b(t/μ_m),

where the pair (c_m, μ_m) is a solution of the approximate determining equation (4.58).

For the linear change of variable (4.68), the vector M_Φ and the matrix K_Φ in the Lipschitz condition (4.59), as well as in the error estimates (4.60), (4.66), are given by the formulae

K_Φ = max_{θ∈[0,T̃]} ‖A(θ)‖,  M_Φ = max_{(θ,y)∈[0,T̃]×Ω} ‖A′(θ)y + b′(θ)‖.

EXAMPLE 4.6

Let us suppose that the system (4.1) has the form

(4.76) x₁′(t) = f₁(x₁(t), x₂(t)), x₂′(t) = f₂(x₁(t), x₂(t)).

Consider (4.76) together with the T-periodic boundary conditions

(4.77) x₁(0) = x₁(T), x₂(0) = x₂(T),

where f₁ : ℝⁿ → ℝᵖ, f₂ : ℝⁿ → ℝⁿ⁻ᵖ, p is even, and T is a positive constant.

Let us use the special case of the linear transformation (4.68) where

(4.78) A(θ) = diag(exp(θA), I_{n−p}), b(θ) ≡ 0,

where A is a real skew-symmetric matrix with purely imaginary simple eigenvalues. In this case, substitution (4.68) is rewritten as follows:

(4.79) x₁(t) = exp(θA) y₁(θ), x₂(t) = y₂(θ),

and the transformed system (4.16) corresponding to (4.76) has the form

y₁′(θ) = μ exp(−θA) f₁(exp(θA) y₁(θ), y₂(θ)) − A y₁(θ),
y₂′(θ) = μ f₂(exp(θA) y₁(θ), y₂(θ)),

where μ is given by (4.20).

EXAMPLE 4.7

Consider the autonomous system of the form

(4.80) x₁′(t) = A x₁(t) + f₁(x₁(t), x₂(t)), x₂′(t) = f₂(x₁(t), x₂(t)),

where f₁ : ℝⁿ → ℝᵖ, f₂ : ℝⁿ → ℝⁿ⁻ᵖ, p is even, T > 0, and A is a square matrix of dimension p having the same properties as in Example 4.6. Let us assume that the functions f₁ and f₂ are continuous in the domain

(4.81) G = {(x₁, x₂) : ‖x₁‖ ≤ d, x₂ ∈ D₂},

where ‖x₁‖² = ∑_{k=1}^p x₁ₖ² and D₂ is a closed bounded domain.

Suppose that the functions f₁ and f₂ satisfy the following conditions in domain (4.81): there exist non-negative constants k₁, m₁, a non-negative vector M₂ ∈ ℝⁿ⁻ᵖ, and a non-negative (n − p) × n matrix K₂ such that

(4.82) ‖f₁(x)‖ ≤ m₁, |f₂(x)| ≤ M₂

and

(4.83) ‖f₁(x₁, x₂) − f₁(z₁, z₂)‖ ≤ k₁ ‖x − z‖,

(4.84) |f₂(x₁, x₂) − f₂(z₁, z₂)| ≤ K₂ |x − z|

for all (x₁, x₂) and (z₁, z₂) from domain (4.81).

The change of variable (4.68) with A given by (4.78) transforms system (4.80) to the form

(4.85) dy₁/dθ = (μ − 1) A y₁ + μ exp(−θA) f₁(exp(θA) y₁, y₂) =: μ F₁(θ, y₁, y₂),
dy₂/dθ = μ f₂(exp(θA) y₁, y₂) =: μ F₂(θ, y₁, y₂).

Since A is skew-symmetric, it follows that the matrices exp(±θA) are orthogonal, i.e., e^{±θA}(e^{±θA})* = I_{p×p}, and, furthermore, x₁ and x₂ in (4.79) satisfy the relations

(4.86) ‖x₁‖ = ‖e^{θA} y₁‖ = ‖y₁‖,

(4.87) ‖e^{−θA} f₁(e^{θA} y₁, y₂)‖ = ‖f₁(e^{θA} y₁, y₂)‖.

It follows from (4.86), (4.87) that the right-hand side of system (4.85) is well defined and continuous in domain (4.81). Moreover, the functions F 1(θ, y 1 ,y 2) and F 2(θ, y 1 ,y 2) in (4.85) satisfy the inequalities

(4.88) |F₁(θ, y₁, y₂)| ≤ m̄₁ e_p, |F₂(θ, y₁, y₂)| ≤ M₂, |F₁(θ, y₁, y₂) − F₁(θ, z₁, z₂)| ≤ k̄₁ J_{p×n} |y − z|

for all (y 1,y 2) and (z 1,z 2) from domain (4.81), where

(4.89) m̄₁ = m₁ + |1 − μ⁻¹| max_{‖y₁‖≤d} ‖A y₁‖, k̄₁ = k₁ + |1 − μ⁻¹| ‖A‖,

and m₁ and k₁ are the same as in (4.82), (4.83). Here, e_p and J_{p×n} are the vector and matrix of dimensions p × 1 and p × n, respectively, with all elements equal to 1.

It follows from inequalities (4.88), (4.89) that the functions in the right-hand side of system (4.85) satisfy conditions (4.23) and (4.24) with the vector

(4.90) M = μ col(m̄₁ e_p, M₂)

and matrix

(4.91) K = μ col(k̄₁ J_{p×n}, K₂),

where K 2 is the matrix from (4.84).

We thus see that, in some cases (in particular, for system (4.80)), one can verify conditions (4.23), (4.24) in terms of the constants m₁, M₂, k₁, and K₂ involved in conditions (4.82), (4.83), and (4.84) for the original system.

REMARK 4.8

Quite similarly, one can show that, for the autonomous system

x′(t) = A x(t) + f(x(t))

with A having the same properties as in Example 4.7, the linear change of variables

(4.92) x(θ) = e^{θA} y(θ)

brings the T̃-periodic problem (4.5), (4.6)

(4.93) dx/dθ = μ [A x + f(x)],

(4.94) x(0) = x(T̃)

to the non-autonomous T̃-periodic problem (4.94) for the equation

(4.95) dy/dθ = (μ − 1) A y + μ e^{−θA} f(e^{θA} y).

EXAMPLE 4.9

Let us consider the second-order autonomous differential equation

(4.96) ẍ + ẋ² + (x − ε)² + x = ε² + ε,

where ε is a real parameter, and try to determine the period, T, which a periodic solution of (4.96) may have.

To Eq.(4.96), we associate the first-order system

(4.97) dx₁/dt = x₂, dx₂/dt = (2ε − 1)x₁ − x₁² − x₂² + ε

and the following T-periodic boundary conditions:

(4.98) x₁(0) = x₁(T), x₂(0) = x₂(T).

System (4.97) will be considered for x₁² + x₂² ≤ ϱ², where ϱ ≥ 2|ε|.

It is clear that (4.97), (4.98) is equivalent to the problem on T-periodic C² solutions of (4.96).

Applying the coordinate transformation (4.4) with T̃ = 2π, we transform (4.97), (4.98) into the system

(4.99) dx₁/dθ = μ x₂, dx₂/dθ = μ [(2ε − 1)x₁ − x₁² − x₂² + ε]

with the 2π-periodic conditions

(4.100) x₁(0) = x₁(2π), x₂(0) = x₂(2π).

In (4.99), according to (4.20), μ = T(2π)⁻¹. Rewriting (4.99), (4.100) as (4.93) by putting

(4.101) A = [[0, 1], [−1, 0]], f(x) = (0, −x₁² − x₂² + 2εx₁ + ε)ᵀ

and using transformation (4.92), we arrive at system (4.95) of the form

(4.102) dy/dθ = μ {(1 − μ⁻¹) [[0, 1], [−1, 0]] y + [[cos θ, −sin θ], [sin θ, cos θ]] f([[cos θ, sin θ], [−sin θ, cos θ]] y)}.

Here, we have used the equality

(4.103) exp(−θA) = [[cos θ, −sin θ], [sin θ, cos θ]],

easily derived from the series expansion of exp (–Aθ) in a neighbourhood of 0.
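A quick check (ours, not the chapter's) of (4.103) with SciPy's matrix exponential:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(expm(-theta * A), R))   # True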

It follows from (4.103) that, in our case, the variable transformation (4.92) is given by the equalities

(4.104) x₁ = cos θ · y₁ + sin θ · y₂, x₂ = −sin θ · y₁ + cos θ · y₂

and, hence, system (4.102) has the form

(4.105) dy₁/dθ = (μ − 1) y₂ + μ sin θ [y₁² + y₂² − 2ε(cos θ · y₁ + sin θ · y₂) − ε],
dy₂/dθ = (1 − μ) y₁ − μ cos θ [y₁² + y₂² − 2ε(cos θ · y₁ + sin θ · y₂) − ε].

It is obvious from (4.104) that system (4.105) should be considered in the same ϱ-neighbourhood of zero as system (4.97).

We need to verify whether conditions (4.23)–(4.25) hold in our case.

As has already been pointed out above, the vector M in (4.90) and the matrix K in (4.91) are determined directly from the terms on the right-hand side of the original system (4.97), i.e., on the basis of the properties of the function f given by (4.101).

It is easy to see that the function

W(x₁, x₂) = −x₁² − x₂² + 2εx₁ + ε

takes its maximal value W_max = |ε|(1 + |ε|) when x₁ = ε, x₂ = 0, whereas its minimal value W_min = −[ϱ² + 2ε² + (2ϱ + 1)|ε|] is attained for x₁ = −ϱ sign ε, x₂ = 0. Therefore, according to (4.90), we may take the following vector M for system (4.105):

M = μ [ϱ² + 2ε² + (2ϱ + 1)|ε| + |1 − μ⁻¹| ϱ] e₂,

where e₂ = (1, 1)ᵀ.

Furthermore, as can readily be verified, for the number k 1 appearing in (4.83) and (4.89), we may take

(4.106) k₁ = max_{(x₁,x₂) ∈ B(0,ϱ)} √(r(Γ(x₁, x₂))),

where

(4.107) Γ(x₁, x₂) = 4 [[0, 0], [0, (x₁ − ε)² + x₂²]].

Indeed, it is easy to see that

Γ(x) = (∂f(x)/∂x)(∂f(x)/∂x)*.

The maximal eigenvalue of matrix (4.107) is, obviously, given by the formula

r(Γ(x₁, x₂)) = 4 [(x₁ − ε)² + x₂²],

whence it follows that, according to (4.106), we may put

k₁ = 2(ϱ + |ε|).

Hence, since ‖[[0, 1], [−1, 0]]‖ = 1, we see that the constant k̄₁ in (4.89) is given by the formula

k̄₁ = 2(ϱ + |ε|) + |1 − μ⁻¹|.

Therefore, we have the following matrix K in (4.91):

K = μ [2(ϱ + |ε|) + |1 − μ⁻¹|] J₂ₓ₂,

where J₂ₓ₂ = [[1, 1], [1, 1]].

Since r(J₂ₓ₂) = 2, it is not difficult to verify that conditions (4.25) and (4.26) are satisfied in B(0, ϱ) whenever

(4.108) 2(ϱ + |ε|) + |1 − μ⁻¹| < 5/(6πμ)

and

(4.109) ϱ² + 2ε² + (2ϱ + 1)|ε| + |1 − μ⁻¹| ϱ < 2ϱ/(πμ).

Inequalities (4.108) and (4.109) hold, for example, when

(4.110) ε = 0.02 , ϱ = 0.06 , 0.95 < μ < 1.05 .

Thus, the techniques described above are applicable to our system (4.97). For system (4.97), Eq. (4.10) with j = 1 has the form

x₂ = 0

and hence admits the one-parameter family of solutions

x₁ = c, x₂ = 0,

where c ∈ ℝ. According to the coordinate transformation (4.92), we have

x₁(0) = y₁(0), x₂(0) = y₂(0).

Therefore, we may start the successive approximations (4.27) from the constant function

y₀(θ, c, μ) = (c, 0)ᵀ.

Consider the approximate determining equation (4.58) with m = 0 and T̃ = 2π:

μ ∫₀^{2π} sin θ (c² − 2εc cos θ − ε) dθ = 0,
∫₀^{2π} [(1 − μ) c − μ cos θ (c² − 2εc cos θ − ε)] dθ = 0.

The first of these two equations is, in fact, an identity for an arbitrary c, whereas the second one means that, for all real c,

2π(1 − μ) c + 2εμc ∫₀^{2π} cos² θ dθ = 0,

whence it follows that

μ = 1/(1 − ε),

and therefore, in the zero approximation, the period, T, of a solution of Eq.(4.96) is given by the formula

T = 2π/(1 − ε).

For instance, in the case where ε, ϱ, and μ are given by (4.110), we have T = 2.04π. Note that the autonomous equation (4.96) has the periodic solution

(4.111) x(t) = ε(1 + sin t),

whose period is equal to 2π. Thus, for ε sufficiently small, even the zero approximation provides a reasonable degree of accuracy for the unknown value of the period.
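As a closing cross-check (ours, not the chapter's, and based on the reading of Eq. (4.96) given above), the following Python snippet verifies that x(t) = ε(1 + sin t) makes the residual of (4.96) vanish, and compares the exact period 2π with the zero-approximation value T = 2π/(1 − ε):

import numpy as np

eps = 0.02
t = np.linspace(0.0, 2.0 * np.pi, 1000)
x = eps * (1.0 + np.sin(t))
xd = eps * np.cos(t)                       # x'(t)
xdd = -eps * np.sin(t)                     # x''(t)

residual = xdd + xd ** 2 + (x - eps) ** 2 + x - (eps ** 2 + eps)
print("max |residual| =", np.abs(residual).max())        # ~1e-17: exact solution
print("zero-approximation T / exact T =", 1.0 / (1.0 - eps))  # 1.0204, i.e. T ~ 2.04*pi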

URL: https://www.sciencedirect.com/science/article/pii/S1874572508800107