
SOME QUALITATIVE PROBLEMS OF NONAUTONOMOUS STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY FRACTIONAL BROWNIAN MOTIONS


DOCUMENT INFORMATION

Basic information

Title: Some Qualitative Problems Of Nonautonomous Stochastic Differential Equations Driven By Fractional Brownian Motions
Author: Phan Thanh Hong
Supervisor: Dr. Luu Hoang Duc
Institution: Vietnam Academy of Science and Technology
Speciality: Probability and Statistics Theory
Document type: dissertation
Year: 2021
City: Hanoi
Format
Pages: 189
Size: 15.8 MB

Structure

  • VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY

  • DOCTOR OF PHILOSOPHY IN MATHEMATICS

    • Speciality: Probability and Statistics Theory. Speciality code: 9 46 01 06

    • Supervisor: Dr. Luu Hoang Duc

  • Acknowledgment

  • Contents

    • Chapter 3. Lyapunov spectrum of nonautonomous linear fSDEs

    • Chapter 4. Random attractors for nonautonomous fSDEs

    • General Conclusions

  • Introduction

    • 1.1 Fractional Brownian motions

      • Hölder continuity of paths

      • 1.1.2 Canonical spaces

        • Multidimensional fractional Brownian motions

    • 1.2 Pathwise stochastic integrals with respect to fractional Brownian motions

      • 1.2.1 Young integrals

      • 1.2.2 Fractional integrals and fractional derivatives

      • 1.2.3 Stochastic integrals w.r.t. fractional Brownian motions

      • 1.2.4 Young integrals on infinite domains

    • 1.3 Greedy sequences of times

    • 1.4 Stochastic flows

      • Stochastic two-parameter flows

      • Random dynamical systems

    • 2.1 Assumptions

    • 2.2 Existence and uniqueness theorem for deterministic equations

      • 2.2.1 Existence and uniqueness of a global solution

        • Step 1: Existence of a local solution.

        • Step 2: Uniqueness of a local solution.

        • Step 3: Global solution.

      • 2.2.2 Estimate of the solution growth

      • 2.2.3 Special case: linear equations

    • 2.3 Continuity and differentiability of the solution

      • 2.3.1 The continuity of the solution

      • 2.3.2 The differentiability of the solution

    • 2.4 The stochastic differential equations driven by fBm

    • 2.5 The generation of stochastic two-parameter flows

    • 2.6 Conclusions and discussions

    • 3.1 The generation of stochastic flow of linear operators

    • 3.2 Lyapunov exponent of Young integrals w.r.t. B^H

      • Assumptions

      • 3.3.2 Lyapunov spectrum of triangular systems

    • 3.5 Conclusions and discussions

      • Discussions on the non-randomness of Lyapunov exponents

    • Assumptions

    • 4.2 Existence of random attractors

    • 4.4 Special case: g bounded

    • 4.5 Bebutov flow and its generation

    • 4.6 Conclusions and discussions

  • Conclusions

  • List of Author's Related Papers

  • Appendix

    • Spaces of functions

      • Variation and Hölder spaces

      • Compactness

      • Closure of smooth paths in variation norm, Hölder norm

  • References

Content

SOME QUALITATIVE PROBLEMS OF NONAUTONOMOUS STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY FRACTIONAL BROWNIAN MOTIONS

Fractional Brownian motions

Nonsemimartingale properties

Let X := (X_t)_{t∈ℝ₊} be a stochastic process and let Π = {t_i, i = 0, · · · , n} be a finite partition of [0, t], with t fixed. Define the pth-variation of X over Π by

\[ V^{(p)}_{\Pi,[0,t]}(X) := \sum_{i=0}^{n-1} |X_{t_{i+1}} - X_{t_i}|^p. \]

If V^{(2)}_{Π,[0,t]}(X) converges in some sense as |Π| → 0, the limit is called the quadratic variation of X on [0, t]. For example, the quadratic variation of Brownian motion is t for all t > 0 ([55]).
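As a quick numerical illustration of these notions, the following Python sketch computes \(V^{(p)}_{\Pi,[0,t]}\) on a uniform grid. The path \(x_t = t\), the mesh sizes, and the seeded Brownian sample are illustrative assumptions, not objects from the thesis.

```python
import numpy as np

def p_variation_over_partition(x, p):
    """V^(p)_{Pi}(X): sum of |increments|^p of the sampled path x over its grid."""
    return float(np.sum(np.abs(np.diff(x)) ** p))

# Deterministic check: for x_t = t on [0, 1] with mesh 1/n,
# V^(1) = 1 for every n, while V^(2) = n * (1/n)^2 = 1/n -> 0.
n = 1000
t = np.linspace(0.0, 1.0, n + 1)
print(p_variation_over_partition(t, 1))   # 1.0 up to rounding
print(p_variation_over_partition(t, 2))   # 1/n up to rounding

# Monte Carlo illustration (assumed setup): the quadratic variation of a
# Brownian path on [0, 1] concentrates near 1 as the mesh is refined.
rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])
print(p_variation_over_partition(W, 2))   # close to 1 with high probability
```

Here the quadratic variation of the sampled Brownian path fluctuates around 1 with standard deviation of order \(n^{-1/2}\), consistent with the statement above.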

Recall that a real-valued continuous process is a semimartingale if it can be decomposed as the sum of a local martingale and a continuous adapted process of locally bounded variation ([55, Definition 3.1, p. 149]). It is known that a semimartingale is of locally bounded quadratic variation. If X is a semimartingale on the interval [0, 1], then:

• V^{(2)}_{Π,[0,1]}(X) converges in probability as |Π| → 0, and

• if lim_{|Π|→0} V^{(2)}_{Π,[0,1]}(X) = 0, then lim_{|Π|→0} V^{(1)}_{Π,[0,1]}(X) < ∞ almost surely.

For the case of fBm, due to [17, Proposition 3], lim_{|Π|→0} V^{(1/H)}_{Π,[0,1]}(B^H) exists and is positive and finite almost surely. And, similarly to the case of B^{1/2} ([76, Corollary 2.5, p. 29]), the fractional Brownian paths a.s. have infinite variation on any interval, since 1/H > 1.

If H > 1/2, by choosing p = 2 > 1/H one has lim_{|Π|→0} V^{(2)}_{Π,[0,1]}(B^H) = 0. Were B^H a semimartingale, it would follow that a.s. lim_{|Π|→0} V^{(1)}_{Π,[0,1]}(B^H) < ∞, which is a contradiction.

If H < 1/2, since 0 < lim_{|Π|→0} V^{(1/H)}_{Π,[0,1]}(B^H) < ∞ and 1/H > 2, the continuity of B^H implies lim_{|Π|→0} V^{(2)}_{Π,[0,1]}(B^H) = ∞. Therefore B^H with H < 1/2 cannot be a semimartingale either.

Canonical spaces

It is known that a stochastic process (X_t)_{t∈R} defined on the probability space (Ω, F, P), with values in R^m and continuous paths, can be viewed as a measurable map X : (Ω, F, P) → (C(R, R^m), B).

The space of continuous functions from R to R^m, denoted C(R, R^m), is equipped with the σ-algebra B generated by cylinder sets. This is the smallest σ-algebra that makes all projections π_t : C(R, R^m) → R^m, π_t(x) := x_t, measurable. For an effective application of random dynamical system theory, it is advantageous to work with the canonical version (C(R, R^m), B, P_X), where P_X is the image measure of P under X, together with the canonical coordinate process.

Definition 1.2 ([4]). Let (Ω, F, P) be a probability space. A family (θ_t)_{t∈R} of mappings of (Ω, F, P) into itself is called a metric dynamical system if it satisfies the following conditions:

(i) θ_0 = id_Ω;

(ii) θ_{t+s} = θ_t ∘ θ_s for all s, t ∈ R;

(iii) the map (t, ω) ↦ θ_t ω is measurable;

(iv) P is (θ_t)-invariant, i.e. P = θ_t P for all t ∈ R.

Definition 1.3. A metric dynamical system (Ω, F, P, (θ_t)_{t∈R}) is called ergodic if every set A ∈ F which is (θ_t)-invariant, i.e. θ_t(A) = A for all t ∈ R, has measure one or zero.

This section outlines the construction of the canonical sample space for B^H, which supports the random dynamical system framework for stochastic differential equations driven by fractional Brownian motion (fBm). The canonical space for standard Brownian motion is (C_0(R, R), B, P^{1/2}, θ), where C_0(R, R) denotes the space of continuous functions on R that vanish at zero, equipped with the compact open topology of uniform convergence on compact intervals. Here B is the Borel σ-algebra, P^{1/2} is the Wiener measure, and θ is the Wiener shift operator defined by θ_t(ω)(·) := ω(t + ·) − ω(t) for ω ∈ C_0(R, R). It is established that the space (C_0(R, R), B, P^{1/2}, θ) is ergodic.

The Itô integral \( \int_{\mathbb{R}} k(t, s)\,dW_s \), where \( W = (W_t)_{t \in \mathbb{R}} \) represents two-sided Brownian motion, defines a Gaussian process for each square integrable kernel \( k(t, \cdot) \). Notably, fractional Brownian motion can be represented in law as such an integral. This representation, attributed to Mandelbrot and Van Ness, utilizes the kernel \( k_H(t, u) := [(t - u) \vee 0]^{H-1/2} - [(-u) \vee 0]^{H-1/2} \) for \( t, u \in \mathbb{R} \).

Then the process \( B^H_t := C_H \int_{\mathbb{R}} k_H(t, u)\,dW_u \) (1.1), with \( C_H \) a normalizing constant, has a continuous modification which is a two-sided fractional Brownian motion of Hurst parameter H.
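A rough numerical sketch of the Mandelbrot–Van Ness representation (1.1): the white-noise integral is truncated to a finite window and discretized, and the normalizing constant C_H is omitted, so the sample only has the right dependence structure up to a constant factor. The value of H, the grid, and the truncation point are assumptions of this illustration.

```python
import numpy as np

H = 0.7            # Hurst parameter (assumed value, H > 1/2)
du = 0.01          # integration mesh (discretization assumption)
u = np.arange(-50.0, 1.0, du)   # truncation of the real line at -50

def mvn_kernel(t, u, H):
    """Mandelbrot-Van Ness kernel k_H(t,u) = (t-u)_+^{H-1/2} - (-u)_+^{H-1/2}."""
    return np.maximum(t - u, 0.0) ** (H - 0.5) - np.maximum(-u, 0.0) ** (H - 0.5)

rng = np.random.default_rng(42)
dW = rng.normal(0.0, np.sqrt(du), size=u.size)   # white-noise increments

ts = np.linspace(0.0, 1.0, 11)
# B^H_t ~ sum of k_H(t, u_j) * dW_j, approximating the Ito integral in (1.1)
B = np.array([np.sum(mvn_kernel(t, u, H) * dW) for t in ts])
print(B[0])   # k_H(0, u) vanishes identically, so the sample starts at 0
```

Note that k_H(t, u) = 0 for u > t, so the integral at time t only involves noise up to time t, and k_H(0, ·) ≡ 0 forces B^H_0 = 0.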

Considering B^H as given by (1.1), with W the coordinate process on the canonical space, and passing to its continuous version while keeping the same notation, we obtain a measurable mapping B^H : C_0(R, R) → C_0(R, R).

Then one obtains the canonical space (C_0(R, R), B, P^H, θ) for fBm, in which P^H := B^H P^{1/2} is the image of the Wiener measure under the map B^H. Moreover, this space is an ergodic metric dynamical system:

Theorem 1.2 ([45, Theorem 1]). (C_0(R, R), B, P^H, (θ_t)) is an ergodic metric dynamical system.

Definition 1.4. Let m be a positive integer. An m-dimensional fractional Brownian motion B^H of Hurst index H ∈ (0, 1) is B^H = (B^{H,1}, B^{H,2}, · · · , B^{H,m}), where the B^{H,i} are independent one-dimensional fractional Brownian motions of Hurst index H.

For m-dimensional fractional Brownian motion, the canonical space (Ω, F, P, θ) is constructed as in the one-dimensional case: Ω is C_0(R, R^m) with the Borel σ-algebra, P is the distribution of B^H, and θ is the Wiener shift operator. This space is again an ergodic metric dynamical system.

For our purposes we work with paths of bounded p-variation on compact intervals. For each p ≥ 1, denote by C^{p-var}(R, R^m), resp. C^{0,p-var}(R, R^m), the space of all continuous functions ω : R → R^m whose restriction to [−T, T] belongs to C^{p-var}([−T, T], R^m), resp. C^{0,p-var}([−T, T], R^m), for every T > 0. Equip C^{p-var}(R, R^m) with a metric d built from the p-variation seminorms on the intervals [−T, T].

Then (C^{p-var}(R, R^m), d) is a complete metric space.

Define C_0^{0,p-var}(R, R^m) := {x ∈ C^{0,p-var}(R, R^m) | x_0 = 0}. Note that for x ∈ C_0^{0,p-var}(R, R^m), |x|_{p-var,I} and ‖x‖_{p-var,I} are equivalent norms for every compact interval I containing 0. Due to [5, Proposition 1, Theorem 5], the space C_0^{0,p-var}(R, R^m) is a completely separable metrizable topological space. Moreover, C_0^{0,p-var}(R, R^m) is θ-invariant and, for each t, θ_t : (C_0^{0,p-var}(R, R^m), d) → (C_0^{0,p-var}(R, R^m), d) is continuous (see [5, Theorem 5]).

Since B^H possesses a ν-Hölder continuous version for every ν < H, it has a version of locally bounded p-variation for every p > 1/H. By taking the trace σ-algebra of B on the space C_0^{0,p-var}(R, R^m) and restricting the probability measure P^H to it, we construct a new metric dynamical system, denoted again by (Ω, F, P, θ). Notably, this metric dynamical system is ergodic [12, Lemma 1, Remark 2].

Remark 1.2. (i) Identical results apply to the ν-Hölder space, for 0 < ν < H.

(ii) When dealing with differential equations driven by Hölder continuous paths, the noise is often denoted by x. Throughout this thesis we prefer the notation ω to emphasize the randomness of the objects.

In the following chapters we fix p ∈ (1/H, 2); the fBm is understood to be defined on the canonical space (Ω, F, P^H, θ).

Pathwise stochastic integrals with respect to fractional Brownian motions

Young integrals

This subsection revisits key aspects of the Young integral, an extension of the Riemann–Stieltjes integral to broader function spaces. For more comprehensive information, refer to [85] and [42].

Definition 1.5. Given f ∈ C^{q-var}([a, b], R^{d×m}) and g ∈ C^{p-var}([a, b], R^m), we say that z ∈ C([a, b], R^d) is an indefinite Young integral of f against g if there exists a sequence (f_n, g_n) in C^{1-var}([a, b], R^{d×m}) × C^{1-var}([a, b], R^m) such that

\[ \sup_n \big( \|f_n\|_{q\text{-var},[a,b]} + \|g_n\|_{p\text{-var},[a,b]} \big) < \infty, \qquad \lim_n \|f_n - f\|_{\infty,[a,b]} = \lim_n \|g_n - g\|_{\infty,[a,b]} = 0, \]

and z_n := ∫_a^· f_n(t) dg_n(t) tends to z uniformly on [a, b] as n → ∞. If z is unique we write z = ∫_a^· f(t) dg(t) and set ∫_s^t f(u) dg(u) := ∫_a^t f(u) dg(u) − ∫_a^s f(u) dg(u).

Given f ∈ C^{q-var}([a, b], R^{d×m}) and g ∈ C^{p-var}([a, b], R^m) with 1/p + 1/q > 1, there exists a unique indefinite Young integral of f against g, ∫_a^· f(t) dg(t). Moreover, the Young–Loève estimate holds:

\[ \Big| \int_s^t f(u)\,dg(u) - f(s)[g(t) - g(s)] \Big| \le K\, |f|_{q\text{-var},[s,t]}\, |g|_{p\text{-var},[s,t]} \tag{1.5} \]

for a constant K depending only on p and q.

From the Young–Loève estimate (1.5), one sees that the Riemann–Stieltjes sum in (1.3) converges to ∫_s^t f(u) dg(u) as the mesh |Π| tends to 0.

Lemma 1.1. For p ≥ 1, q ≥ 1 such that 1/p + 1/q > 1 and f ∈ C^{q-var}([a, b], R^{d×m}), g ∈ C^{p-var}([a, b], R^m), the following estimate holds:

\[ \Big\| \int_a^{\cdot} f(t)\,dg(t) \Big\|_{p\text{-var},[a,b]} \le K\, \|f\|_{q\text{-var},[a,b]}\, |g|_{p\text{-var},[a,b]}, \]

with K a constant depending only on p and q.

The conclusion is a direct consequence of (1.5) and [42, Proposition 5.10(i)].

Due to Lemma 1.1, the map t ↦ ∫_a^t f(s) dg(s) is a continuous path of bounded p-variation.

Other properties of Young integrals, for instance the change of variables formula or the integration by parts formula, can be found in [42], [6].
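To illustrate the Riemann–Stieltjes sums (1.3) underlying the Young integral, here is a small Python sketch. The integrand and integrator are toy choices of mine (g is smooth, so the Young integral coincides with the classical Riemann–Stieltjes integral), not examples from the thesis.

```python
import numpy as np

def riemann_stieltjes_sum(f, g, grid):
    """Left-endpoint Riemann-Stieltjes sum S_Pi = sum_i f(t_i)[g(t_{i+1}) - g(t_i)]."""
    t = np.asarray(grid)
    return float(np.sum(f(t[:-1]) * np.diff(g(t))))

# Toy example: f(t) = t against g(t) = t^2 on [0, 1].
# Classically, int_0^1 t d(t^2) = int_0^1 2t^2 dt = 2/3.
for n in (10, 100, 1000):
    grid = np.linspace(0.0, 1.0, n + 1)
    approx = riemann_stieltjes_sum(lambda t: t, lambda t: t ** 2, grid)
    print(n, approx)   # approaches 2/3 as the mesh |Pi| -> 0
```

The error decays with the mesh size, mirroring the convergence of the sums in (1.3) guaranteed by the Young–Loève estimate in the far rougher setting of bounded p-variation integrators.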

Fractional integrals and fractional derivatives

The content of this subsection is based on [88], [74] and [86]. Let f ∈ L^1(a, b). For every α ∈ (0, 1), the left-sided and right-sided Riemann–Liouville fractional integrals of f of order α are defined for almost all t ∈ (a, b) by

\[ I_{a+}^{\alpha} f(t) := \frac{1}{\Gamma(\alpha)} \int_a^t \frac{f(s)}{(t - s)^{1-\alpha}}\,ds, \qquad I_{b-}^{\alpha} f(t) := \frac{(-1)^{-\alpha}}{\Gamma(\alpha)} \int_t^b \frac{f(s)}{(s - t)^{1-\alpha}}\,ds, \]

respectively. Here Γ stands for the Gamma function, Γ(α) = ∫_0^∞ u^{α−1} e^{−u} du, and (−1)^α := e^{−iπα}. For a function f(t) with t ∈ [a, b], the left-sided and right-sided Riemann–Liouville fractional derivatives of f of order α at t ∈ (a, b) are defined by

\[ D_{a+}^{\alpha} f(t) := \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_a^t \frac{f(s)}{(t - s)^{\alpha}}\,ds, \qquad D_{b-}^{\alpha} f(t) := \frac{(-1)^{1+\alpha}}{\Gamma(1-\alpha)} \frac{d}{dt} \int_t^b \frac{f(s)}{(s - t)^{\alpha}}\,ds, \]

respectively, if they exist. The corresponding Weyl representation reads

\[ D_{a+}^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \Big( \frac{f(t)}{(t - a)^{\alpha}} + \alpha \int_a^t \frac{f(t) - f(s)}{(t - s)^{\alpha+1}}\,ds \Big), \]

with an analogous formula for D_{b−}^{α} f.

The definition below is inspired by the Lebesgue–Stieltjes integral, with fractional derivatives substituting for ordinary derivatives. We denote by I_{a+}^{α}(L^p) (resp. I_{b−}^{α}(L^p)) the class of functions f which can be represented in the form f = I_{a+}^{α} φ (resp. f = I_{b−}^{α} φ) for some φ ∈ L^p(a, b).

Definition 1.6. The (fractional) integral of f with respect to g is defined by

\[ \int_a^b f\,dg := (-1)^{\alpha} \int_a^b D_{a+}^{\alpha} f(t)\, D_{b-}^{1-\alpha} g_{b-}(t)\,dt, \]

where g_{b−}(t) := g(t) − g(b−), provided the right-hand side is well defined.

The definition is well posed, i.e. independent of the choice of α (see [86]). Denote by W^{α,1}([a, b], R^d) the space of measurable functions f on [a, b] such that

\[ \|f\|_{\alpha,1} := \int_a^b \Big( \frac{|f(s)|}{(s - a)^{\alpha}} + \alpha \int_a^s \frac{|f(s) - f(u)|}{(s - u)^{\alpha+1}}\,du \Big)\,ds < \infty, \]

and by W^{1−α,∞}([a, b], R^d) the space of measurable functions g on [a, b] such that

\[ \|g\|_{1-\alpha,\infty} := \sup_{a < s < t < b} \Big( \frac{|g(t) - g(s)|}{(t - s)^{1-\alpha}} + \int_s^t \frac{|g(u) - g(s)|}{(u - s)^{2-\alpha}}\,du \Big) < \infty. \]

For all 0 < ε < α, it can be seen easily that C^{1−α+ε}([a, b], R^d) ⊂ W^{1−α,∞}([a, b], R^d). If f ∈ W^{α,1}([a, b], R^{d×m}) and g ∈ W^{1−α,∞}([a, b], R^m), then the condition in (1.8) is fulfilled with p = 1 and q = ∞; hence the integral ∫_a^t f dg is determined by means of (1.8) for all a ≤ t ≤ b.

Remark 1.3. It is proved that if f, g are Hölder continuous with exponents μ, λ respectively and μ + λ > 1, one may choose α < λ, 1 − α < μ, p = q = ∞, and the condition in (1.8) is fulfilled. The integral then coincides with the Young integral (see [86]).

Stochastic integrals w.r.t. fractional Brownian motions

Given a stochastic process X_t and a fractional Brownian motion B^H with Hurst index H > 1/2: since B^H is not a semimartingale, one cannot define the integral of X against B^H by Itô stochastic calculus. Instead, the stochastic integral of X with respect to B^H is constructed pathwise using Young integrals. We work with the locally bounded p-variation version of B^H and assume that the sample paths of X are of locally bounded q-variation, with 1/p + 1/q > 1 almost surely. Then for each realization x_t of X and ω_t of B^H, the integral ∫_{t_0}^b x_t dω_t is defined pathwise as the Young integral.

Young integrals on infinite domains

We introduce here the concept of Young integrals on infinite domains.

Let \( f \in C^{q-\text{var}}([a, b], \mathbb{R}^{d \times m}) \) and \( g \in C^{p-\text{var}}([a, b], \mathbb{R}^{m}) \) with \( 1/p + 1/q > 1 \), for all \( [a, b] \subset \mathbb{R}^+ \). For convenience, we restrict the discussion to the positive half line, although the whole real line can be treated similarly. Under these conditions, \( \int_a^b f(s)\,dg(s) \) is well defined for all \( 0 < a < b \). For fixed \( t \in \mathbb{R}^+ \), set \( G_t := \int_0^t f(s)\,dg(s) \), and define

\[ \int_0^{\infty} f(s)\,dg(s) := \lim_{t \to \infty} \int_0^t f(s)\,dg(s), \]

provided this limit exists and is finite. Below we write \( \Delta_k := [k, k+1] \) for \( k \ge 0 \).

Proposition 1.3. If Σ_{k≥0} ‖f‖_{q-var,Δ_k} < ∞ and sup_{k≥0} |g|_{p-var,Δ_k} < ∞, then G_t is well defined for all t ∈ R^+. Moreover, G is of bounded p-variation, and an estimate analogous to (1.6) holds.

Proof. First, due to the assumptions, G_t = ∫_0^t f(s) dg(s) exists for all t ∈ R^+. Moreover, for t_m > t_n,

\[ \Big| \int_{t_n}^{t_m} f(s)\,dg(s) \Big| \le K \sup_{k \ge 0} |g|_{p\text{-var},\Delta_k} \sum_{k=n}^{m} \|f\|_{q\text{-var},\Delta_k} \to 0 \quad \text{as } n, m \to \infty, \]

so lim_{t→∞} G_t exists and is finite. The proof of the second statement is similar to that of Lemma 1.1.

Greedy sequences of times

Greedy sequences were introduced in [11]. Given ω ∈ C^{p-var}(R, R^m) and a number μ > 0, construct a nondecreasing sequence of times {τ_n = τ_n(ω)} as follows: τ_0 ≡ a and

\[ \tau_{i+1}(\omega) := \inf\{ t \in [\tau_i, b] : |\omega|_{p\text{-var},[\tau_i, t]} \ge \mu \} \wedge b. \]

Assign

N = N_{[a,b],μ}(ω) := sup{n : τ_n ≤ b}. (1.10)

Note that for τ_i(ω) < b and |ω|_{p-var,[τ_i(ω),b]} ≥ μ, τ_{i+1}(ω) is intuitively the first time |ω|_{p-var,[τ_i(ω),·]} reaches μ. Since the function κ(t) := |ω|_{p-var,[t_0,t]} is continuous and nondecreasing w.r.t. t, with κ(t_0) = 0 (see [42]), we obtain |ω|_{p-var,[τ_i,τ_{i+1}]} = μ for i = 0, …, N − 2.
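The greedy construction can be sketched numerically as follows. The function pvar below is a stand-in for \(|\omega|_{p\text{-var},[s,t]}\), assumed continuous and nondecreasing in t with pvar(s, s) = 0, and the identity path used in the demo is an illustrative assumption, not an example from the thesis.

```python
# Sketch of the greedy construction: tau_0 = a and
# tau_{i+1} = inf{ t in [tau_i, b] : pvar(tau_i, t) >= mu } ^ b.

def greedy_times(pvar, a, b, mu, tol=1e-9):
    taus = [a]
    while taus[-1] < b and pvar(taus[-1], b) >= mu:
        lo, hi = taus[-1], b       # bisect for the first time pvar reaches mu
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if pvar(taus[-1], mid) >= mu:
                hi = mid
            else:
                lo = mid
        taus.append(hi)
    if taus[-1] < b:
        taus.append(b)             # the sequence is capped at b
    return taus

# For the monotone identity path omega(t) = t one has |omega|_{p-var,[s,t]} = t - s,
# so with mu = 0.3 on [0, 1] the greedy times are 0, 0.3, 0.6, 0.9, 1.
print(greedy_times(lambda s, t: t - s, 0.0, 1.0, 0.3))
```

In this toy run the p-variation over each interval [τ_i, τ_{i+1}] equals μ except on the last, truncated interval, matching the identity |ω|_{p-var,[τ_i,τ_{i+1}]} = μ stated above.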

Lemma 1.2. The following estimate holds:

\[ N_{[a,b],\mu}(\omega) \le 1 + \mu^{-p}\, |\omega|^p_{p\text{-var},[a,b]}. \]

Proof. The statement follows from the superadditivity of (s, t) ↦ |ω|^p_{p-var,[s,t]} on [a, b]: since |ω|_{p-var,[τ_i,τ_{i+1}]} = μ for i = 0, …, N − 2,

\[ (N - 1)\,\mu^p = \sum_{i=0}^{N-2} |\omega|^p_{p\text{-var},[\tau_i, \tau_{i+1}]} \le |\omega|^p_{p\text{-var},[a,b]}. \]

In the context of the canonical space of B^H, note that as ω varies the functions τ_i and N are defined on Ω. According to [33], using the Hölder version, the sequence τ_i(ω) defined on [0, ∞) is a sequence of stopping times with respect to the natural filtration of B^H; moreover, the counting function N(ω) is measurable on the probability space (Ω, F, P).

These results also hold if one replaces Hölder norms by p-variation norms. The sequence {τ_i} is also called the greedy sequence of stopping times.

Next we construct another greedy sequence of times. For a given control function ω̃ on Δ[0, T] and α > 0, we define a strictly increasing sequence of times {τ*_n = τ*_n(ω)} on [0, T] such that τ*_0 ≡ 0 and

\[ \tau^*_{i+1}(\omega) := \inf\{ t \ge \tau^*_i : \tilde\omega(\tau^*_i(\omega), t)^{\alpha} + |\omega|_{p\text{-var},[\tau^*_i(\omega), t]} \ge \mu \} \wedge T. \tag{1.13} \]

Assign

N* = N*(T, ω) := sup{n : τ*_n ≤ T}. (1.14)

Since the function t ↦ ω̃(τ*_i(ω), t)^α + |ω|_{p-var,[τ*_i(ω),t]} is continuous w.r.t. t, we obtain

\[ \tilde\omega(\tau^*_i, \tau^*_{i+1})^{\alpha} + |\omega|_{p\text{-var},[\tau^*_i, \tau^*_{i+1}]} = \mu \tag{1.15} \]

for i = 0, …, N* − 2. Put p′ := max{p, 1/α}; then, similarly to Lemma 1.2, one obtains an estimate of N* in terms of ω̃(0, T) and |ω|_{p-var,[0,T]}. It means that any bounded interval [0, T] can be covered by a finite number of intervals [τ*_i, τ*_{i+1}].

Stochastic differential equations driven by fractional Brownian motions

In this chapter we recall some notations and summarize some standard results which we will need in the sequel.

Fractional Brownian motion (fBm) was initially introduced by Kolmogorov and later studied by Mandelbrot and Van Ness. It is a family of Gaussian processes indexed by the Hurst parameter, which can be explored in more detail in the related literature.

Definition 1.1 ([73, Definition 2.1], [71, Definition 1.2.1]). A two-sided one-dimensional fractional Brownian motion of Hurst parameter H ∈ (0, 1) is a centered continuous Gaussian process B^H = (B^H_t)_{t∈R} with covariance function

\[ R_H(s, t) := \mathbb{E}(B^H_s B^H_t) = \tfrac{1}{2}\big( |t|^{2H} + |s|^{2H} - |t - s|^{2H} \big), \qquad s, t \in \mathbb{R}. \]

Remark 1.1. (i) When H = 1/2, the covariance function reduces to R_{1/2}(s, t) = ½(|t| + |s| − |t − s|), which equals s ∧ t for s, t ≥ 0. Therefore B^{1/2} is a classical Brownian motion.

(ii) It can be seen from the covariance function that E(B^H_t − B^H_s)^2 = |t − s|^{2H}. Since B^H is a Gaussian process, we obtain

\[ \mathbb{E}|B^H_t - B^H_s|^n = \frac{2^{n/2}}{\sqrt{\pi}}\, \Gamma\Big(\frac{n+1}{2}\Big)\, |t - s|^{nH}, \qquad \text{for all } n \in \mathbb{N}^*, \]

where Γ is the Gamma function, Γ(z) := ∫_0^∞ t^{z−1} e^{−t} dt, z > 0.
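Since the law of fBm on a finite grid is a centered Gaussian vector with the covariance R_H above, one can sample it exactly (in distribution) by a Cholesky factorization of the covariance matrix. The grid, the value of H, and the numerical jitter below are assumptions of this sketch, not choices made in the thesis.

```python
import numpy as np

def fbm_covariance(times, H):
    """Covariance matrix R_H(s,t) = (|t|^{2H} + |s|^{2H} - |t-s|^{2H}) / 2."""
    s = np.asarray(times)[:, None]
    t = np.asarray(times)[None, :]
    return 0.5 * (np.abs(t) ** (2 * H) + np.abs(s) ** (2 * H)
                  - np.abs(t - s) ** (2 * H))

H = 0.7
times = np.linspace(0.1, 1.0, 10)      # avoid t = 0, where B^H_0 = 0 a.s.
R = fbm_covariance(times, H)

# Exact (in distribution) sampling on the grid via Cholesky; the tiny jitter
# is an assumed numerical safeguard for positive definiteness.
L = np.linalg.cholesky(R + 1e-12 * np.eye(len(times)))
rng = np.random.default_rng(1)
sample = L @ rng.standard_normal(len(times))
print(sample.shape)
```

One can check directly that R(t, t) = |t|^{2H} and that R(s, s) + R(t, t) − 2R(s, t) = |t − s|^{2H}, i.e. the increments have exactly the variance |t − s|^{2H} stated in Remark 1.1(ii).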

Proposition 1.1 ( [73, Proposition 2.2]) Let B H be a fractional Brownian motion with Hurst parameter H ∈ (0, 1) The following properties hold:

In the following we recall some properties of fBm; we refer to [65], [73], [71] or other references therein for the details.

Hölder continuity of paths

Hölder continuity of the paths of fBm is an important property for dealing with integrals w.r.t. fBm. It is deduced from the Kolmogorov continuity theorem.

Theorem (Kolmogorov). Let X = (X_t), t ∈ [0, T], be a stochastic process. Suppose that there exist p > 0, c > 0, ε > 0 such that for every s, t ∈ [0, T],

\[ \mathbb{E}|X_t - X_s|^p \le c\,|t - s|^{1+\varepsilon}. \]

Then there exists a continuous modification X̃ of X which is locally α-Hölder continuous for every α ∈ (0, ε/p).

For each ν < H fixed, there exists a modification of B^H with ν-Hölder continuous paths on every compact subset of R. Indeed, E|B^H_t − B^H_s|^n = C_n |t − s|^{nH} for n ∈ N*. Choosing n such that (nH − 1)/n > ν, the Kolmogorov theorem shows that B^H has a continuous modification which is ν-Hölder continuous on any compact interval of R.



1.2 Pathwise stochastic integrals with respect to fractional Brownian motions

This section discusses two methods for extending Stieltjes integrals to accommodate infinite variation integrators, both of which are applicable for defining stochastic integrals concerning fractional Brownian motions in a pathwise manner However, this thesis will focus exclusively on the first method.

We first review Riemann–Stieltjes and Lebesgue–Stieltjes integrals as a motivation for the extensions introduced later in this section. The details can be seen in Chapter 2 of [42].

Let f, g be functions on [a, b] with values in R^{d×m}, R^m respectively. Let Π = {t_i} be a partition of [a, b] and ξ_i ∈ [t_i, t_{i+1}]. If the sum

\[ S_{\Pi} := \sum_{t_i \in \Pi} f(\xi_i)\,[g(t_{i+1}) - g(t_i)] \tag{1.3} \]

converges to a finite limit I as |Π| → 0, independently of the choice of Π and ξ_i, then I is called the Riemann–Stieltjes integral of f w.r.t. g, and we write I = ∫_a^b f(t) dg(t). Due to [42, Proposition 2.2], if f is continuous and g is in C^{1-var}([a, b], R^m), then the Riemann–Stieltjes integral ∫_a^b f(t) dg(t) exists.

For the Lebesgue–Stieltjes integral: if g is of finite variation on [a, b], it can be represented as the difference g = g_1 − g_2 of two monotone functions, with corresponding Borel measures μ_1 and μ_2. For a Borel function f, the Lebesgue–Stieltjes integral of f w.r.t. g on [a, b] is then defined as

\[ \int_a^b f\,dg := \int_a^b f\,d\mu_1 - \int_a^b f\,d\mu_2, \]

if the right-hand side exists. It is known that if f is continuous and g is right continuous and of bounded variation, then the Riemann–Stieltjes and Lebesgue–Stieltjes integrals both exist and coincide.

This subsection revisits key aspects of the Young integral, which extends the Riemann-Stieltjes integral to broader function spaces For further details, please refer to sources [85] and [42].

R d ×m ) and g ∈ C p−var ([a, b], R m ) we say that z ∈ C([a, b], R d ) is an indefinite Young integral of f against g if there exist a sequence ( f n , g n ) in C 1−var ([a, b], R d ×m ) × C 1−var ([a, b], R m ) satisfying su ch th at s u p n li m n

| g n | q−var,[a,b] < ∞ ǁg n − gǁ ∞,[a,b] = 0 a n d z ã f (t)dg(t) tends to z uniformly on [a, b] as n

∞ If z is unique we write

∫ a ã f (t)dg(t) and set ∫ t f (u)dg(u) := ∫ t f (u)dg(u) − ∫ s f (u)dg(u). s a a

Given f ∈ C q−var ([a, b], R d ×m ) and g ∈ C p−var ([a, b], R m ) with 1 + 1 >

Young integral of f against g, a ã f (t)dg(t) Moreover, the

From the Young-Loeve estimate (1.5), one can see that the Riemann - Stieltjes sum in (1.3) converges to ∫ t f (u)dg(u) when the mesh |Π| tends to 0, i.e.

Lemma 1.1 For p ≥ 1, q ≥ 1 such that 1 + 1 > 1 and f ∈ C q−var ([a, b], R d ×m ), g ∈ C p−var ([a, b], R m ), the following estimate holds

The conclusion is a direct sequence of (1.5) and [42, Proposition 5.10(i)].

Due to Lemma 1.1, the integral t ›→ ∫ t f (s)dg(s) is a continuous bounded p-variation path Note that

Other properties of Young integrals for instance the change of variable formula or the integration by parts formula can be seen in [42], [6].

1.2.2 Fractional integrals and fractional derivatives

The content in this subsection is written on the basis of [88], [74] and [86] Let f ∈

L 1 (a, b) For every α ∈ (0, 1), the left-sided and right-sided Riemann- Liouville fractional integral of f of order α are defined for almost all t ∈ (a, b) as follows

G(α) t (s − t) 1−α ds respectively Here G stands for Gamma function G(α ) = ∞ u α −1 e −u du and

(−1) α = e −iπα For a function f (t) with t ∈ [a, b], the left-sided and right-sided

Riemann-Liouville fractional derivative of f of order α at t ∈ (a, b) are defined

(s − t) α ds respectively, if exist The corresponding Weyl representation reads

The definition presented is inspired by the Lebesgue-Stieltjes integral, where fractional derivatives are utilized instead of conventional derivatives We denote the class of functions represented in the form \( I^\alpha \phi_t(I)(\alpha \phi)(t) \) as \( I^+_a(L^p)(I^-_b(L^p)) \), highlighting the significance of fractional calculus in this context.

Definition 1.6 Suppose that \(f \in I_{a+}^{\alpha}(L^p)\) and \(g_{b-} \in I_{b-}^{1-\alpha}(L^q)\) with \(1/p + 1/q \le 1\), where \(g_{b-}(t) := g(t) - g(b-)\). The (fractional) integral of f with respect to g is defined by
\[ \int_a^b f\,dg := (-1)^{\alpha} \int_a^b D_{a+}^{\alpha} f(t)\, D_{b-}^{1-\alpha} g_{b-}(t)\,dt. \tag{1.8} \]
This definition generalizes the Lebesgue–Stieltjes integral, with fractional derivatives used in place of ordinary ones. It is well posed, i.e. independent of the choice of α (see [86]).

Denote by W^{α,1}([a, b], R^d) the space of measurable functions f on [a, b] such that
\[ \|f\|_{\alpha,1} := \int_a^b \Bigl( \frac{|f(s)|}{(s-a)^{\alpha}} + \int_a^s \frac{|f(s)-f(y)|}{(s-y)^{\alpha+1}}\,dy \Bigr)\,ds < \infty, \]
and by W^{1-α,∞}([a, b], R^d) the space of measurable functions g on [a, b] such that
\[ \|g\|_{1-\alpha,\infty} := \sup_{a < s < t < b} \Bigl( \frac{|g(t)-g(s)|}{(t-s)^{1-\alpha}} + \int_s^t \frac{|g(y)-g(s)|}{(y-s)^{2-\alpha}}\,dy \Bigr) < \infty. \]
For all 0 < ε < α it can be seen easily that \(C^{\alpha+\varepsilon}([a,b]) \subset W^{\alpha,1}([a,b])\). If f ∈ W^{α,1}([a, b], R^{d×m}) and g ∈ W^{1-α,∞}([a, b], R^m), then the condition in (1.8) is fulfilled with p = 1 and q = ∞; hence the integral \(\int_a^t f\,dg\) is determined by means of (1.8) for all a ≤ t ≤ b. Moreover, if f and g are Hölder continuous with exponents ν and λ, respectively, such that ν + λ > 1, one can choose α with 1 − λ < α < ν; under these conditions the fractional integral coincides with the Young integral (see [86]).
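The left-sided fractional integral can also be evaluated numerically; the substitution \(u = (t-s)^{\alpha}\) removes the integrable singularity at s = t. The sketch below is our own illustration of the definition, not a method used in the thesis:

```python
import math

def riemann_liouville_left(f, a, t, alpha, n=20_000):
    """Approximate the left-sided Riemann-Liouville fractional integral
        I_{a+}^alpha f(t) = (1/Gamma(alpha)) * int_a^t (t-s)^(alpha-1) f(s) ds.
    The substitution u = (t-s)^alpha removes the endpoint singularity:
        int_a^t (t-s)^(alpha-1) f(s) ds
          = (1/alpha) * int_0^{(t-a)^alpha} f(t - u^(1/alpha)) du."""
    upper = (t - a) ** alpha
    h = upper / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h                # midpoint rule in the u variable
        total += f(t - u ** (1.0 / alpha))
    return total * h / (alpha * math.gamma(alpha))

# Closed form to compare against: I_{0+}^alpha 1 (t) = t^alpha / Gamma(alpha + 1).
val = riemann_liouville_left(lambda s: 1.0, 0.0, 1.0, 0.5)
```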

1.2.3 Stochastic integrals w.r.t fractional Brownian motions

Given a stochastic process X_t and a fractional Brownian motion B^H with Hurst index H > 1/2, since B^H is not a semimartingale one cannot define the integral \(\int_a^b X_t\,dB_t^H\) by the Itô stochastic calculus. Instead, we work with a version of B^H whose sample paths are of locally bounded p-variation and assume that the sample paths of X are almost surely of locally bounded q-variation, where \(1/p + 1/q > 1\). Then, for each realization \((x_t, \omega_t)\) of X and B^H, the integral \(\int_a^b X_t\,dB_t^H\) is defined pathwise as the Young integral \(\int_a^b x_t\,d\omega_t\).
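To make the pathwise picture concrete, a sample path of B^H can be generated on a finite grid from the covariance \(R(s,t) = \frac{1}{2}\bigl(s^{2H} + t^{2H} - |t-s|^{2H}\bigr)\) via a Cholesky factorisation. The following self-contained sketch is our own illustration (not part of the thesis):

```python
import math
import random

def fbm_covariance(times, H):
    """Covariance matrix R(s,t) = 0.5*(s^{2H} + t^{2H} - |t-s|^{2H})
    of fractional Brownian motion on a grid of positive times."""
    return [[0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))
             for t in times] for s in times]

def cholesky(A):
    """Plain Cholesky factorisation A = L L^T of a symmetric
    positive-definite matrix, returned as lower-triangular L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def fbm_path(times, H, rng=random.Random(0)):
    """One sample path of B^H on the grid: L z with z standard normal."""
    L = cholesky(fbm_covariance(times, H))
    z = [rng.gauss(0.0, 1.0) for _ in times]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(times))]

H = 0.7
times = [0.1 * k for k in range(1, 21)]
R = fbm_covariance(times, H)
path = fbm_path(times, H)
```

Note that R(t, t) = t^{2H}, so the marginal variance grows like t^{2H}, consistent with the self-similarity of B^H.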

1.2.4 Young integrals on infinite domains

We introduce here the concept of Young integrals on infinite domains.

Let f ∈ C^{q-var}([a, b], R^{d×m}) and g ∈ C^{p-var}([a, b], R^m) with \(1/p + 1/q > 1\) on every interval [a, b] ⊂ R^+ (for convenience we restrict the discussion to R^+, although the whole real line can be treated similarly). Under these assumptions \(\int_a^b f(s)\,dg(s)\) exists for all 0 ≤ a < b. For a fixed t_0 ∈ R^+, put \(G_t := \int_{t_0}^t f(s)\,dg(s)\) and define \(\int_{t_0}^{\infty} f(s)\,dg(s) := \lim_{t \to \infty} G_t\), provided this limit exists and is finite.

Proposition 1.3 If \(\sum_{k \ge 0} \|f\|_{q-var,\Delta_k} < \infty\) and \(\sup_{k \ge 0} |g|_{p-var,\Delta_k} < \infty\), then G_t is well defined for all t ∈ R^+. Moreover, G is of bounded p-variation and, as in (1.6), we obtain

Proof. First, due to the assumptions, \(\int_{t_0}^t f(s)\,dg(s)\) exists for all t ∈ R^+. Moreover, for n < m,
\[ \Bigl| \int_{t_n}^{t_m} f(s)\,dg(s) \Bigr| \le K \sup_{k \ge 0} |g|_{p-var,\Delta_k} \sum_{k=n}^{m} \|f\|_{q-var,\Delta_k} \to 0 \quad \text{as } n, m \to \infty, \]
so the limit defining G exists by the Cauchy criterion.

Assumptions

(H1) f(t, x) is continuous, and there exist C_f > 0, b ∈ L^1([0, T], R^d) and, for every N ≥ 0, a constant L_N > 0 such that the following properties hold:

(H2) g(t, x) is differentiable in x, and there exist constants C_g > 0, 0 < β, δ ≤ 1, a control function h(s, t) defined on ∆[0, T] and, for every N ≥ 0, a constant M_N > 0 such that the following properties hold:

(ii) Local Hölder continuity: \(\|\partial_x g(t, x) - \partial_x g(t, y)\| \le M_N |x - y|^{\delta}\), ∀x, y ∈ R^d, |x|, |y| ≤ N, ∀t ∈ [0, T];
(iii) Generalized Hölder continuity in time.

(H3) The parameters in (H1) and (H2) satisfy the inequalities

Throughout, we fix p ∈ (1, 2) and study the existence and uniqueness of the solution of (2.3) in the space C^{q-var}([0, T], R^d) for a suitable q > 0. The condition on f is the classical one from the theory of ordinary differential equations. The condition that g be of class C^γ with γ > p is essential: Davie [30, Example 1, p. 23] constructed an autonomous deterministic equation with g ∈ C^γ, 0 < γ < p, which admits multiple solutions.

Given the parameters in (H1)–(H3), one can choose consecutively constants q_0 and q such that

It can be seen from the above assumptions that

2.2 Existence and uniqueness theorem for deterministic equations

In this section, we work with the deterministic Young equation
\[ x_t = x_0 + \int_0^t f(s, x_s)\,ds + \int_0^t g(s, x_s)\,d\omega_s, \qquad t \in [0, T]. \tag{2.11} \]

We now consider x ∈ C^{q-var}([t_0, t_1], R^d) for some [t_0, t_1] ⊂ [0, T] and define the mapping

Note that a fixed point of F is a solution of (2.3) on [t_0, t_1] with the prescribed initial value x(t_0).

2.2.1 Existence and uniqueness of a global solution

For the proof of our main theorem on existence and uniqueness of solutions we first need some auxiliary estimates stated in the following lemmas.

Lemma 2.1 Assume that (H 1 )-(H 3 ) are satisfied.

(i) If x ∈ C^{q-var}([t_0, t_1], R^d), then g(·, x_·) ∈ C^{q_0-var}([t_0, t_1], R^{d×m}) and the estimate (2.13) holds.

(ii) For all s < t and all x_i ∈ R^d with |x_i| ≤ N, i = 1, 2, 3, 4, the following estimate holds.

(iii) For any x, y ∈ C^{q-var}([t_0, t_1], R^d) such that \(x_{t_0} = y_{t_0}\) and \(\|x\|_{\infty,[t_0,t_1]} \le N\), \(\|y\|_{\infty,[t_0,t_1]} \le N\), we have

Proof. (i) For s < t in [t_0, t_1], we have
\[ |g(t, x_t) - g(s, x_s)| \le C_g |x_t - x_s| + h(s, t)^{\beta}. \]
The claim then follows from Lemma A3, noting that q_0 β > 1.

(ii) This part is similar to [74, Lemma 7.1], with our function h(s, t)^β playing the role of |t − s|^β.

Lemma 2.2 Let 0 ≤ t_0 < t_1 ≤ T be arbitrary, let q be chosen as in (2.5) and let F be defined by (2.12). Then for any x ∈ C^{q-var}([t_0, t_1], R^d) we have F(x) ∈ C^{q-var}([t_0, t_1], R^d), and thus

Moreover, the following statements hold.

(ii) Let N ≥ 0 be arbitrary but fixed, and suppose that x, y ∈ C^{q-var}([t_0, t_1], R^d) are such that \(\|x\|_{\infty,[t_0,t_1]} \le N\), \(\|y\|_{\infty,[t_0,t_1]} \le N\).

(i) Since \(1/p + 1/q_0 > 1\), by virtue of (2.13) the Young integral \(\int_{t_0}^t g(s, x_s)\,d\omega_s\) exists for all t ∈ [t_0, t_1]. Using (1.6), (2.10) and (2.12), we obtain the bound on \(\|F(x)\|\). For s < t in [t_0, t_1], using assumption (H1), we have
\[ \Bigl| \int_s^t f(u, x_u)\,du \Bigr| \le \int_s^t \bigl( C_f + b(u) \bigr)\,du\, \bigl( 1 + \|x\|_{q-var,[t_0,t_1]} \bigr), \]
and, due to Lemma A3 and the fact that \((s, t) \mapsto \int_s^t (C_f + b(u))\,du\) is a control on ∆[t_0, t_1], the estimate (2.15) holds.

(ii) By virtue of (1.6), (2.14) and the condition \(x_{t_0} = y_{t_0}\), we can estimate the terms I(x) and J(x); inequality (2.16) is a direct consequence of these estimates. Q

Before proving the existence and uniqueness theorem, we need the following lemma of Gronwall type We refer to [74, Lemma 7.6], [42, Lemma 10.52] and [29, Lemma 2.7] for other versions of Gronwall Lemma.

Lemma 2.3 (Gronwall-type Lemma) Let 1 ≤ p ≤ q be arbitrary and satisfy \(1/p + 1/q > 1\). Assume that ω ∈ C^{p-var}([0, T], R) and y ∈ C^{q-var}([0, T], R^d) satisfy
\[ |y_t - y_s| \le \hat{A}_{s,t}^{1/q} + a_1 \int_s^t |y_u|\,du + a_2 \Bigl| \int_s^t y_u\,d\omega_u \Bigr|, \qquad \forall s, t \in [0, T],\ s < t, \tag{2.17} \]
for some fixed control function \(\hat{A}\) on ∆[0, T] and some constants a_1, a_2 > 0. Then for every u, v ∈ [0, T], u < v, the norm of y on [u, v] is bounded in terms of \(e^{2a_1(v-u) + \kappa N_{[u,v]}(\omega)}\), where \(\kappa = \log(K^* + 2)\) and \(N_{[u,v]}(\omega)\) is estimated as in (1.12) with

Proof From the assumption, for each [s, t] ⊂ [0, T],

\[ |y|_{q-var,[s,t]} \le \hat{A}_{s,t}^{1/q} + a_1 (t-s) \|y\|_{\infty,[s,t]} + a_2 (K^* + 1) |\omega|_{p-var,[s,t]} |y|_{q-var,[s,t]}. \tag{2.20} \]
For a given [u, v] ⊂ [0, T], construct a sequence (τ_i) as in (1.9) with \(\mu = \frac{1}{2a_2(K^*+1)}\), i.e. for i = 0, …, N := N_{[u,v],µ} and all [s, t] ⊂ [τ_i, τ_{i+1}],

Combining this with (2.20), we have for i = 0, …, N and t ∈ [τ_i, τ_{i+1}],

Applying the Gronwall Lemma A1 to the function \(|y|_{q-var,[\tau_i,t]}\) of the variable t ∈ [τ_i, τ_{i+1}], we obtain (2.21). Since \(|y_{\tau_{i+1}}| \le \|y\|_{q-var,[\tau_i,\tau_{i+1}]}\), by induction we obtain the conclusion.

Remark 2.1 (i) \(N_{[u,v],\mu}(\omega) \le 1 + [2a_2(K^*+1)]^p\, |\omega|^p_{p-var,[u,v]}\). (ii) It can be seen from the proof of Lemma 2.3 that \(\|y\|_{\infty,[\tau_i,\tau_{i+1}]} \le \|y\|_{q-var,[\tau_i,\tau_{i+1}]}\), and then

(iii) If y satisfies the following condition (similar to (2.19))

(2.22), where a_1, a_2 and a_3 are positive real constants, then

= 0 As such one may choose

We are now in a position to state and prove the main theorem of this section.

Theorem 2.1 Consider the Young differential equation (2.11), starting from an arbitrary initial time t_0 ∈ [0, T), where T is an arbitrary fixed positive number and x_0 ∈ R^d is an initial condition. Assume that conditions (H1)–(H3) hold. Then (2.11) has a unique solution x in C^{q-var}([t_0, T], R^d), where q is chosen satisfying (2.5).

Proof The proof proceeds in several steps.

Step 1: Existence of local solution.

Firstly, to prove the existence of the solution on a small enough interval, we use the greedy sequence of times \(\tau_i^*\) constructed in (1.13), with a control function \(\tilde{\omega}(s, t)\) and \(\mu := \frac{1}{2M(K+2)}\), where K and M are defined in (2.7) and (2.8), respectively.

Let s_0 ∈ [t_0, T) be arbitrary but fixed. Put \(r_0 = \min\{n : \tau_n^* > s_0\}\) and \(s_1 = \min\{\tau_{r_0}^*, T\}\). Then,

Recall the mapping F defined by formula (2.12), with t_0, t_1 replaced by s_0, s_1, respectively. By Lemma 2.2 and (2.24)–(2.25), for s_0, s_1 determined above we have F : C^{q-var}([s_0, s_1], R^d) → C^{q-var}([s_0, s_1], R^d) and
\[ \|F(x)\|_{q-var,[s_0,s_1]} = |F(x)_{s_0}| + |F(x)|_{q-var,[s_0,s_1]} \le |x_{s_0}| + \tfrac{1}{2}\bigl( 1 + \|x\|_{q-var,[s_0,s_1]} \bigr). \]

We now show that if x ∈ C^{q-var}([s_0, s_1], R^d) then F(x) ∈ C^{(q−ε)-var}([s_0, s_1], R^d) for small enough ε. Indeed, since ω̃ is a control and q > p, we can choose ε > 0 such that q − ε ≥ p. For all s < t in [s_0, s_1], using (2.15), we obtain an estimate of |F(x)_t − F(x)_s| in terms of ω̃(s, t) and \(|\omega|_{p-var,[s,t]}\), and the assertion follows by an application of Lemma A3. Consider the set

The set B_1 is a closed convex subset of the Banach space C^{q-var}([s_0, s_1], R^d), and F : B_1 → B_1 is continuous. To show that F is a compact operator on B_1, we prove that F(B_1) is relatively compact in B_1, i.e. every sequence y_n ∈ F(B_1) has a subsequence converging in the q-var norm. By Proposition A.6, it suffices to show that the sequence (y_n) is equicontinuous and bounded in the (q − ε)-var norm. Indeed, for y_n = F(x_n) with x_n ∈ B_1,
\[ \sup_n \|y_n\|_{(q-\varepsilon)-var,[s_0,s_1]} \le |x_{s_0}| + 2M(K+2)(1 + |x_{s_0}|) \bigl( \tilde{\omega}(s_0, s_1) + |\omega|_{p-var,[s_0,s_1]} \bigr). \]

It follows that the y_n are bounded in C([s_0, s_1], R^d) with the supremum norm, as well as bounded in the (q − ε)-var norm. Moreover, the estimate
\[ |y_n(t) - y_n(s)| \le 2M(K+2)(1 + |x_{s_0}|) \bigl( \tilde{\omega}(s, t) + |\omega|_{p-var,[s,t]} \bigr) \]
shows that the sequence (y_n) is equicontinuous. Hence y_n converges along a subsequence in C^{q-var}([s_0, s_1], R^d), proving that F(B_1) is relatively compact; F is therefore a compact operator from B_1 into itself. By the Schauder–Tychonoff fixed point theorem, there exists \(\hat{x} \in B_1\) with \(F(\hat{x}) = \hat{x}\), i.e. a solution \(\hat{x} \in B_1\) of (2.11) on [s_0, s_1].

Step 2: Uniqueness of a local solution.

Now assume that x, y are two solutions in C^{q-var}([s_0, s_1], R^d) of (2.11) with \(x_{s_0} = y_{s_0}\). It follows that F(x) = x and F(y) = y. Put \(N_0 := \max\{\|x\|_{\infty,[s_0,s_1]}, \|y\|_{\infty,[s_0,s_1]}\}\) and z = x − y; then \(z_{s_0} = 0\) and \(\|x\|_{\infty,[s_0,s_1]}, \|y\|_{\infty,[s_0,s_1]} \le N_0\).

Following the estimates of I, J in the proof of Lemma 2.2(ii), there exist a_1, a_2 > 0 such that
\[ |z_t - z_s| \le a_1 \int_s^t |z_u|\,du + a_2 |\omega|_{p-var,[s,t]} \bigl( |z_s| + |z|_{q-var,[s,t]} \bigr) \]
for all [s, t] ⊂ [s_0, s_1], which has the form (2.19). Lemma 2.3 then yields \(\|z\|_{q-var,[s_0,s_1]} = 0\), i.e. z ≡ 0 on [s_0, s_1]. The uniqueness of the local solution is proved.

Next, by the additivity of the Riemann and Young integrals, solutions can be concatenated. Namely, let 0 ≤ t_1 < t_2 < t_3 ≤ T, let x_t be a solution of (2.11) on [t_1, t_2] and let y_t be a solution on [t_2, t_3] with \(y_{t_2} = x_{t_2}\). Define the continuous function z_t on [t_1, t_3] by z_t = x_t on [t_1, t_2] and z_t = y_t on [t_2, t_3]. Then z is a solution of the Young differential equation on [t_1, t_3]. Conversely, if z is a solution on [t_1, t_3], then its restrictions to [t_1, t_2] and [t_2, t_3] are solutions of the corresponding equations with the appropriate initial conditions.

Finally, applying the estimate (1.16), we can obtain the unique global solution of equation (2.11) on [t_0, T].

Put \(n_0 = \min\{n : \tau_n^* > t_0\}\). The interval [t_0, T] can be covered by N^* − n_0 intervals [t_i, t_{i+1}], determined by the times \(t_i = \tau^*_{n_0+i-1}\), i = 1, …, N^* − n_0, with the parameter µ defined by (2.24) and \(t_{N^*} := T\). The arguments in Steps 1 and 2 apply to each interval [t_i, t_{i+1}], yielding existence and uniqueness of solutions on those intervals. Then, starting at \(x(t_0) = x_{t_0}\), the unique solution of (2.11) on [t_0, t_1] is extended uniquely to [t_1, t_2], and further by induction up to \([t_{N^*-1}, t_{N^*}]\). The solution x of (2.11) on [t_0, T] therefore exists and is unique.

In order to study the flow generated by the solution of system (2.1), we also need to consider the backward version of (2.11), in the form
\[ x_t = x_T - \int_t^T f(s, x_s)\,ds - \int_t^T g(s, x_s)\,d\omega_s, \qquad t \in [0, T], \tag{2.26} \]
where x_T ∈ R^d is the initial value of the backward equation (2.26).

Theorem 2.2 Consider the backward equation (2.26) on [0, T]. Assume that conditions (H1)–(H3) hold. Then the backward equation (2.26) has a unique solution x ∈ C^{q-var}([0, T], R^d), where q is chosen as above, satisfying (2.5).

Proof. We make the change of variables \(\hat{f}(u, x) := -f(T-u, x)\), \(\hat{g}(u, x) := -g(T-u, x)\), \(\hat{\omega}(u) := \omega(T-u)\), \(y_u := x_{T-u}\), where u ∈ [0, T]. Then \(x_T = y_0\), and by putting v = T − t and u = T − s we have

Furthermore, by virtue of the property (1.7) of the Young integral we have

Therefore, the backward equation (2.26) is equivalent to the forward equation
\[ y_v = y_0 + \int_0^v \hat{f}(u, y_u)\,du + \int_0^v \hat{g}(u, y_u)\,d\hat{\omega}_u, \qquad v \in [0, T], \tag{2.27} \]
with the initial condition \(y_0 = x_T \in R^d\). We now check that the conditions of Theorem 2.1 are satisfied for this equation. Observe that if ω belongs to C^{p-var}([0, T], R^m), then so does ω̂. The condition (H2) is evidently satisfied for ĝ, and (H1)(i) holds for f̂. Moreover, if condition (ii) of (H1) holds for f, then
\[ |\hat{f}(v, x)| = |f(T-v, x)| \le C_f |x| + b(T-v) = C_f |x| + \hat{b}(v), \qquad v \in [0, T], \]
with \(\hat{b}(v) = b(T-v) \in L^1([0, T], R^d)\), so condition (H1)(ii) holds for f̂. Consequently, Theorem 2.1 applies to the forward equation (2.27): it has a unique solution y in C^{q-var}([0, T], R^d). Since (2.27) is equivalent to the backward equation (2.26), the theorem is proved. Q
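The change of variables used here rests on a standard time-reversal identity for Young integrals, which can be written out as follows (a sketch in our notation):

```latex
% With \hat\omega(u) := \omega(T-u) and v := T-t, substituting u = T-s in the
% Riemann--Stieltjes sums (the increments of \omega telescope to minus those
% of \hat\omega) gives the time-reversal identity for Young integrals
\int_t^T h(s)\,d\omega_s \;=\; -\int_0^{T-t} h(T-u)\,d\hat\omega_u .
% Applied with y_u := x_{T-u}, this converts the backward equation (2.26)
% into a forward Young equation for y on [0,T] driven by \hat\omega, to
% which Theorem 2.1 applies.
```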

Remark 2.2 (i) In the above theorem, one can start at t_0 ∈ (0, T] and prove the existence and uniqueness of the solution of the backward equation on [0, t_0].

(ii) By Theorems 2.1 and 2.2, starting from an arbitrary initial time t_0 ∈ [0, T], the solution exists uniquely on the whole of [0, T].

Theorem 2.3 (Different trajectories do not intersect) Assume that the conditions

(H1)–(H3) hold, and let x_t and \(\hat{x}_t\) be two solutions of the Young differential equation (2.11) on [0, T]. If \(x_a = \hat{x}_a\) for some a ∈ [0, T], then \(x_t = \hat{x}_t\) for all t ∈ [0, T]. In other words, two solutions of (2.11) either coincide or never intersect.

Proof. Suppose \(x_a = \hat{x}_a\) for some a ∈ [0, T]. If a = 0, Theorem 2.1 gives uniqueness of the solution, hence \(x_t = \hat{x}_t\) for all t ∈ [0, T]. If a ∈ (0, T], the restrictions of x and \(\hat{x}\) to [a, T] are solutions of equation (2.11) starting from a; thus, by Theorem 2.1, \(x_t = \hat{x}_t\) for all t ∈ [a, T]. Next, considering the restrictions of x and \(\hat{x}\) to [0, a], both functions solve the backward equation
\[ x_t = x_a - \int_t^a f(s, x_s)\,ds - \int_t^a g(s, x_s)\,d\omega_s, \qquad t \in [0, a], \tag{2.28} \]
with the same initial value \(x_a = \hat{x}_a\). Theorem 2.2 asserts the uniqueness of the solution of (2.28) on [0, a]; hence \(x_t\) coincides with \(\hat{x}_t\) on [0, a], and the theorem is proved. Q

Remark 2.3 (Locality of Young differential equations) By virtue of Theorems 2.1, 2.2 and 2.3, under the assumptions of Theorem 2.1, equation (2.11) has locality properties like an ODE: it can be solved locally, the solution can be extended both forward and backward, and any two solutions meeting each other at some time must coincide on the common interval of definition.

2.2.2 Estimate of the solution growth

In the previous subsection we established the existence of a unique solution of (2.11) in C^{q-var}([0, T], R^d) for some q > p. In fact, the solution also belongs to the smaller space C^{p-var}([0, T], R^d). The p-var norm of the solution is estimated by means of the greedy sequence of times constructed in (1.9); we denote its number of points in [u, v] by \(N_{[u,v]}(\omega)\), emphasizing the dependence on ω and on the interval. The proof of Proposition 2.1 follows the approach of Lemma 2.3, relying on Gronwall lemmas and a discretization scheme, and extends [36, Theorem 2.4]. We refer to [22, Proposition 2.2] for similar results for delay equations.

Proposition 2.1 The solution x of (2.11) belongs to C^{p-var}([0, T], R^d), and its supremum and p-variation norms on each [u, v] ⊂ [0, T] are estimated as in (2.29) and (2.30), where \(\kappa = \log(K + 2)\) and \(N_{[u,v]}(\omega)\) is estimated in (1.12).

Proof. Due to Lemma 2.2, for s, t ∈ [0, T],
\[ |x_t - x_s| \le \int_s^t \bigl( C_f + b(u) \bigr)\,du\, \bigl( 1 + \|x\|_{\infty,[0,T]} \bigr) + M |\omega|_{p-var,[s,t]} \bigl( (K+2) + \|x\|_{\infty,[0,T]} + (K+1)|x|_{q-var,[0,T]} \bigr). \]
Observe that \(\bigl(\int_s^t b(u)\,du\bigr)^p\), \((t-s)^p\) and \(|\omega|^p_{p-var,[s,t]}\) are control functions on ∆[0, T]. That x is of bounded p-variation on [0, T] then follows from Lemma A3.

To estimate the norms of x, first recall (2.17); applying Lemma 2.3 and Remark 2.1(iii), we obtain (2.29) and (2.30). Q
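On a discrete grid, the greedy sequence of times underlying these estimates can be sketched as follows (an illustrative implementation of our own; `control` stands for an arbitrary control function w(s, t)):

```python
def greedy_times(control, grid, mu):
    """Greedy sequence of times for a control function w(s, t):
    starting from tau_0 = grid[0], each tau_{i+1} is the largest grid
    point t with w(tau_i, t) <= mu. The number of intervals produced
    plays the role of N_{[u,v],mu}(omega) in (1.9)."""
    taus = [grid[0]]
    i = 0
    while taus[-1] < grid[-1]:
        j = i
        while j + 1 < len(grid) and control(grid[i], grid[j + 1]) <= mu:
            j += 1
        j = max(j, i + 1)        # always advance at least one grid point
        taus.append(grid[j])
        i = j
    return taus

# Example with the control w(s, t) = t - s and mu = 0.25 on [0, 1]
# (dyadic grid points, so the arithmetic is exact):
grid = [k / 8 for k in range(9)]
taus = greedy_times(lambda s, t: t - s, grid, 0.25)
# taus == [0.0, 0.25, 0.5, 0.75, 1.0], i.e. 4 intervals
```

Superadditivity of the control guarantees that the number of such intervals over [u, v] is finite, which is what makes the induction in Lemma 2.3 work.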

Corollary 2.1 If, in addition, g is bounded by \(\|g\|_{\infty}\), then the following estimate for \(\|x\|_{p-var,[u,v]}\) holds.

Proof. Under the assumptions on g, (2.33) is rewritten accordingly. Repeating the arguments of Lemma 2.3, one obtains the analogous result, i.e. a bound on \(\|x\|_{q-var,[u,v]}\).

In this subsection, we consider the linear Young differential equation
\[ dx_t = A(t)x_t\,dt + \sum_{i=1}^{m} C_i(t)x_t\,d\omega_t^i, \qquad x_{t_0} \in R^d,\ t \in [0, T], \tag{2.36} \]
where \(\omega_t = (\omega_t^1, \omega_t^2, \ldots, \omega_t^m)\) and A, C_i are continuous matrix-valued functions on [0, T]. To facilitate the presentation throughout this subsection, we assume m = 1, t_0 = 0 and consider the system
\[ dx_t = A(t)x_t\,dt + C(t)x_t\,d\omega_t, \qquad x_0 \in R^d,\ t \in [0, T], \tag{2.37} \]
the general case of arbitrary m being similar. It is evident that condition (H2)(iii) of Assumption 2.1 is not satisfied for this system; consequently, we must re-establish the theorem on existence and uniqueness of solutions for equation (2.37). The proof follows a similar approach, with some modified estimates.

Lemma 2.4 Let T > 0 be arbitrary. If C ∈ C^{q-var}([0, T], R^{d×d}) and x ∈ C^{q-var}([0, T], R^d), then for all s < t in [0, T],

Proof The proof follows directly from the definitions of the p−var seminorm and the supremum norm, namely

Theorem 2.4 Assume that A ∈ C([0, T], R^{d×d}) and C ∈ C^{q-var}([0, T], R^{d×d}) with q > p and \(1/p + 1/q > 1\). Then equation (2.37) has a unique solution in the space C^{p-var}([0, T], R^d).

Proof. For x ∈ C^{q-var}([a, b], R^d) with some [a, b] ⊂ [0, T], define the mapping

Then for s ≤ t ∈ [a, b], using Lemma 2.4 we have

(2.41) Then F ∗ (x) ∈ C p−var ([a, b], R d ), and due to the linearity,

Fix \(0 < \mu^* < \min\{1, M^*\}\); we construct \(\tau_n^*\) as in (1.13) with \(\tilde{\omega}(s, t) = t - s\) and \(\mu = \mu^*/M^*\), namely

which is a closed ball in the Banach space C^{q-var}([τ_0, τ_1], R^d). Using (2.42) and the fact that p < q, we have
\[ \|F^*(x)\|_{q-var,[\tau_0,\tau_1]} \le |x_0| + M^*\bigl( \tau_1 - \tau_0 + |\omega|_{p-var,[\tau_0,\tau_1]} \bigr). \]
Hence F^* : B → B. On the other hand, for any x, y ∈ B,
\[ \|F^*(x) - F^*(y)\|_{q-var,[\tau_0,\tau_1]} \le \mu^* \|x - y\|_{q-var,[\tau_0,\tau_1]}. \]
Since \(\mu^* < 1\), F^* is a contraction on B, so there exists a unique fixed point of F^* in B, which is a local solution of equation (2.37) on [τ_0, τ_1]. By induction, the solution extends to the intervals [τ_i, τ_{i+1}], i = 0, …, N^* − 1. Consequently, a unique global solution of (2.37) exists, and it clearly belongs to C^{p-var}([0, T], R^d).

Note that by considering the backward equation, one can prove that, starting from x_0 at time t_0 ∈ [0, T], the solution exists uniquely on [0, T].

Estimate (2.38) and (2.39) can then be derived using the arguments of Lemma

Continuity and differentiability of the solution

In this section, denote by X(·, t_0, x_0, ω) the solution of (2.11) on [0, T] starting from x_0 ∈ R^d at time t_0. Letting ω vary in C^{p-var}([0, T], R^m), we prove the continuity of X as a function of (x_0, ω) on R^d × C^{p-var}([0, T], R^m). For simplicity, in Proposition 2.2 we assume without loss of generality that t_0 = 0 and abbreviate X(·, t_0, x_0, ω) to X(·, x_0, ω).

2.3.1 The continuity of the solution

Proposition 2.2 Suppose that the assumptions of Theorem 2.1 are satisfied. Then the solution mapping

Proof. First, fix (x_0, ω) ∈ R^d × C^{p-var}([0, T], R^m). By Proposition 2.1, we can choose N_0 (depending on x_0, ω) such that \(\|X(\cdot, x_0', \omega')\|_{q-var,[0,T]} \le N_0\) for all \(|x_0 - x_0'| \le 1\), \(\|\omega - \omega'\|_{p-var,[0,T]} \le 1\).

Write for short \(x_\cdot = X(\cdot, x_0, \omega)\), \(x'_\cdot = X(\cdot, x_0', \omega')\) and \(z_\cdot = x_\cdot - x'_\cdot\). Arguing as in the proof of Lemma 2.2(ii), we obtain an estimate of \(|z_t - z_s|\) in terms of \(|z_s|\) and \(|z|_{q-var,[s,t]}\), with a generic positive constant D. Due to Lemma 2.3 and Remark 2.1, there exists a constant D, depending on the parameters of equation (2.11), T and N_0, such that
\[ \|z\|_{q-var,[0,T]} = \|x - x'\|_{q-var,[0,T]} \le D\bigl( |x_0 - x_0'| + \|\omega' - \omega\|_{p-var,[0,T]} \bigr). \]

Theorem 2.5 Suppose that the assumptions of Theorem 2.1 are satisfied. Then the solution mapping

Proof. First, fix (x_0, ω) ∈ R^d × C^{p-var}([0, T], R^m). Using the forward and backward equations (2.11) and (2.26), the solution X(·, t_0, x_0, ω) of (2.11), with initial value x_0 at time t_0, extends to the whole interval [0, T]. By Proposition 2.1, we can choose a constant N_0 (depending on x_0, ω) such that \(\|X(\cdot, t_0', x_0', \omega')\|_{q-var,[0,T]} \le N_0\) for all t_0' ∈ [0, T], whenever \(|x_0 - x_0'| \le 1\) and \(\|\omega - \omega'\|_{p-var,[0,T]} \le 1\).

Now fix (t_1, t_2, x_0, ω) and let (t_1', t_2', x_0', ω') lie in a neighborhood of (t_1, t_2, x_0, ω). By the triangle inequality and Proposition 2.2, the difference of the corresponding solution values is bounded by a sum of terms involving \(|t_1 - t_1'|\), \(|t_2 - t_2'|\), \(|x_0 - x_0'|\) and \(\|\omega - \omega'\|_{p-var,[0,T]}\), where Y is the solution of (2.26) and D is a generic constant. This quantity tends to 0 as \((t_1', t_2', x_0', \omega') \to (t_1, t_2, x_0, \omega)\). Q

2.3.2 The differentiability of the solution

(H4) f is of class C^{0,1} on [0, T] × R^d, i.e. f(t, x) and \(\partial_x f(t, x)\) are continuous w.r.t. (t, x).

Given x_0 ∈ R^d, consider (2.11) and the linearized equation along the solution \(x_\cdot = X(\cdot, x_0, \omega)\) starting from x_0 at time 0:
\[ dy_t = \partial_x f(t, x_t) y_t\,dt + \partial_x g(t, x_t) y_t\,d\omega_t, \qquad t \in [0, T],\ y_0 \in R^d. \tag{2.43} \]

Due to Proposition 2.1 and Theorem 2.2, one can choose N_0 such that \(\|X(\cdot, x_0', \omega)\|_{q-var,[0,T]} \le N_0\) for all \(\|x_0' - x_0\| \le 1\). From assumption (H2), since q_0 ≥ q > 1 and q_0 β > 1, using Lemma A3 we conclude that
\[ \|\partial_x g(\cdot, x_\cdot)\|_{q_0-var,[0,T]} \le M_{N_0}\bigl( N_0 + h(0, T)^{\beta} \bigr) < \infty. \]
Applying Theorem 2.4, in which A(t) := \(\partial_x f(t, x_t)\) is continuous and C(t) := \(\partial_x g(t, x_t)\) is of bounded q_0-variation on [0, T] with \(1/p + 1/q_0 > 1\), equation (2.43) has a unique solution \(y_\cdot = Y(\cdot, y_0, \omega)\) in C^{p-var}([0, T], R^d).

In this subsection we omit ω in the notation of solutions. The following theorem shows that the solution is differentiable with respect to the initial condition; see [34, Theorem 17.4] for a similar result.

Theorem 2.6 Suppose that assumptions (H1)–(H4) are satisfied. Then the solution mapping is differentiable.

Proof. Fix x_0 and consider \(x_0' \in \bar{B}(x_0, 1)\). Put \(x^1_t := X(t, x_0')\), \(y_t := Y(t, x_0' - x_0)\) and \(z_t := x^1_t - x_t - y_t\). Then z_0 = 0 and z ∈ C^{p-var}([0, T], R^d). We will prove that
\[ \frac{\|z\|_{p-var,[0,T]}}{|x_0' - x_0|} \to 0 \quad \text{as } |x_0' - x_0| \to 0. \]
The proof consists of several steps.

Step 1: Denote by F^*, G^* the nonlinear remainder terms of f, g, i.e.
\[ f(s, x^1_s) - f(s, x_s) = \partial_x f(s, x_s)(x^1_s - x_s) + F^*(s, x^1_s - x_s), \]
\[ g(s, x^1_s) - g(s, x_s) = \partial_x g(s, x_s)(x^1_s - x_s) + G^*(s, x^1_s - x_s), \qquad s \in [0, T]. \]
It can be seen that \(F^*(\cdot, x^1_\cdot - x_\cdot)\) is continuous and \(G^*(\cdot, x^1_\cdot - x_\cdot)\) is of bounded q_0-variation on [0, T].

Estimating as before, we obtain
\[ |z_t - z_s| \le \cdots + (K+1)\,|\omega|_{p-var,[s,t]}\, \|\partial_x g(\cdot, x_\cdot)\|_{q_0-var,[0,T]}\, \|z\|_{q_0-var,[s,t]}, \]
which yields the same bound with \(\|z\|_{p-var,[s,t]}\) in place of \(\|z\|_{q_0-var,[s,t]}\). Since \(\|\partial_x g(\cdot, x_\cdot)\|_{q_0-var,[0,T]}\) is bounded and \(\partial_x f(\cdot, x_\cdot)\) is continuous on [0, T], by Remark 2.1:

(ii) there exists a constant D depending on the solution x such that
\[ \|z\|_{p-var,[0,T]} \le D\bigl( |z_0| + H^* \bigr) = D \cdot H^*, \quad \text{where } H^* = \|F^*(\cdot, x^1_\cdot - x_\cdot)\|_{\infty,[0,T]} \vee \|G^*(\cdot, x^1_\cdot - x_\cdot)\|_{q_0-var,[0,T]}. \]

Step 2: We now estimate H^*. By the assumptions on f, g and the compactness of \(K := [0, T] \times \bar{B}(0, N_0) \subset R \times R^d\), there exist a number h > 0 and increasing functions P, Q : [0, h] → R^+ with P(0) = Q(0) = 0 and \(\lim_{u \to 0} P(u) = \lim_{u \to 0} Q(u) = 0\), such that
\[ H^* \le \|x^1 - x\|_{p-var,[0,T]}\, Q\bigl( \|x^1 - x\|_{p-var,[0,T]} \bigr) \tag{2.46} \]
whenever \(\|x^1 - x\|_{p-var,[0,T]} \le h\). Namely, one can choose P, Q to be the moduli of continuity of \(\partial_x f\), \(\partial_x g\) on K. It is evident that P, Q are increasing; since \(\partial_x f\), \(\partial_x g\) are continuous, hence uniformly continuous on K, P and Q satisfy the required properties.

By the assumption on g, the second integral in (2.47) can be estimated directly. Next, we estimate the first integral in (2.47). Since q_0 β > 1 and q_0 δ > p, one can choose γ ∈ (0, 1) such that q_0 βγ > 1 and q_0 δγ > p. Then the first integral in (2.47) is bounded using the fact that \(h(s, t)^{q_0\beta\gamma}\) and the related expressions are control functions; applying Lemma A3, we obtain the corresponding estimates.

Combining (2.45)–(2.49), we obtain
\[ H^* \le D\, \|x^1 - x\|_{p-var,[0,T]}\, R\bigl( \|x^1 - x\|_{p-var,[0,T]} \bigr), \]
which leads to
\[ \|z\|_{p-var,[0,T]} \le D\, \|x^1 - x\|_{p-var,[0,T]}\, R\bigl( \|x^1 - x\|_{p-var,[0,T]} \bigr), \]
where D is a generic constant and R(u) → 0 as u → 0. The proof is complete. Q

The stochastic differential equations driven by fBm

Consider system (2.1):
\[ dx_t = f(t, x_t)\,dt + g(t, x_t)\,dB^H_t, \qquad x_{t_0} \in R^d,\ t \in [0, T]. \]

As a consequence of Theorem 2.1, (2.1) is solved pathwise for each ω ∈ (Ω, F, P).

Theorem 2.7 Under the assumptions (H1)–(H3), system (2.1) possesses a unique solution \(X(\cdot, t_0, x_0, \omega) \in C^{p-var}([0, T], R^d)\) such that for each t ∈ [0, T] the map \(\omega \mapsto X(t, t_0, x_0, \omega)\) is measurable.

Proof. The first conclusions are derived from Theorem 2.1. Since the space C^{0,p-var}([0, T], R^m) of drivers ω is separable, Theorem 2.5 implies the measurability of the solution. Q

In what follows we reuse the notation X(·, t_0, x_0, ω) to refer to the pathwise solution of (2.1) on [0, T] starting from x_0 at time t_0.

Lemma 2.5 If a random variable η defined on a probability space \((\bar{\Omega}, \bar{F}, \bar{P})\) satisfies \(E e^{\kappa \eta^2} < \infty\) for some κ > 0, then the moment generating function of \(|\eta|^r\) is finite on the whole of R, for all 0 < r < 2.

Proof. Using the Young inequality, for t > 0 we have
\[ t|\eta|^r = (\kappa \eta^2)^{r/2} \cdot t \kappa^{-r/2} \le \frac{r}{2}\, \kappa \eta^2 + \frac{2-r}{2}\, \bigl( t \kappa^{-r/2} \bigr)^{2/(2-r)}, \]
hence \(E e^{t|\eta|^r} \le D\, E e^{\kappa \eta^2} < \infty\), where D is a generic constant. Q

Proposition 2.3 For each T > 0 there exist a constant D = D(T) depending on T and a random variable ξ = ξ(ω) such that, for almost all ω,

Proof. By the self-similarity of B^H, it suffices to prove the claim on [0, 1]. Due to [47, Proposition 2.1, p. 18], there exist a random variable ξ(ω) and κ > 0 satisfying \(E e^{\kappa \xi^2} < \infty\) such that for all s, t ∈ [0, 1], s ≤ t,
\[ |\omega_t - \omega_s| \le \xi(\omega)\, |t - s|^{H} \sqrt{|\log|t - s||}. \]
Hence \(|\omega|_{p-var,[0,1]} \le D\,\xi(\omega)\), where \(D := \sup_{u \in [0,1]} u^{H - 1/p} \sqrt{|\log u|} < \infty\). Q

Proposition 2.4 For each t_0, t ∈ [0, T] and x_0 ∈ R^d, the solution mapping X(t, t_0, x_0, ·) has finite moments of every order n ∈ R^+. Moreover, the tail probability of X is estimated as follows, where D is a constant.

Proof. Due to Proposition 2.1,
\[ \|X(\cdot, t_0, x_0, \omega)\|_{\infty,[0,T]} \le D_2\, e^{D_1 |\omega|^p_{p-var,[0,T]}} \]
for some constants D_1, D_2. From Proposition 2.3 and Lemma 2.5, and since 1 < p < 2, \(E e^{k |\omega|^p_{p-var,[0,T]}} < \infty\) for all k. This implies \(E|X(t, t_0, x_0, \cdot)|^n < \infty\). For the second part, we assume for convenience that t_0 = 0 and write \(x(t) = X(t, t_0, x_0, \cdot)\). Then

The generation of stochastic two parameter flows

This section demonstrates that equation (2.1) generates a stochastic two-parameter flow on R^d. In the autonomous case, the solution satisfies the cocycle property and thus generates a random dynamical system. For the smoothness and diffeomorphism properties of flows generated by rough differential equations, we refer to [60], [61] and [63].

In analogy with the theory of ordinary differential equations, we give a definition of the Cauchy operator of equation (2.2), which is an operator on R^d acting along trajectories of (2.2).

Definition 2.1 (Cauchy operator) Suppose that conditions (H1)–(H3) hold on every compact interval of R. For any −∞ < t_1 ≤ t_2 < +∞ and any ω ∈ C^p(R, R^m), the Cauchy operator \(\Psi_{t_2,t_1}(\cdot, \omega) : R^d \to R^d\) of equation (2.2) is the mapping along trajectories of (2.2) from time t_1 to time t_2: for any vector \(x_{t_1} \in R^d\), we define \(\Psi_{t_2,t_1}(x_{t_1}, \omega)\) to be the value \(x_{t_2} \in R^d\) of the solution x of the equation
\[ x_t = x_{t_1} + \int_{t_1}^t f(s, x_s)\,ds + \int_{t_1}^t g(s, x_s)\,d\omega_s, \qquad t \in [t_1, t_2]. \]

Theorem 2.8 Assume that conditions (H1)–(H3) hold on every compact interval of R. For any −∞ < t_1 ≤ t_2 < +∞, the Cauchy operator \(\Psi_{t_2,t_1}(\cdot, \omega)\) of (2.2) is a homeomorphism. Moreover, \(\Psi_{t_1,t_1}(\cdot, \omega) = \mathrm{id}\).

Proof. By Theorem 2.3, the Cauchy operator \(\Psi_{t_2,t_1}(\cdot, \omega)\) is an injection. Using arguments similar to those in the proof of Theorem 2.3, we see that the equation
\[ x_t = x_{t_1} + \int_{t_1}^t f(s, x_s)\,ds + \int_{t_1}^t g(s, x_s)\,d\omega_s, \qquad t \in [t_1, t_2], \tag{2.50} \]
with initial value \(x_{t_1}\), is equivalent to the backward equation (2.51) with initial value \(x_{t_2} \in R^d\). Hence Theorem 2.2 is applicable and provides existence of a solution for any terminal value \(x_{t_2}\) of the forward equation on [t_1, t_2]. Consequently, the Cauchy operator \(\Psi_{t_2,t_1}(\cdot, \omega)\) is a surjection, thus a bijection.

It is clear from the proofs of Theorems 2.1 and 2.2 that the solutions of (2.2) depend continuously on the initial values. Therefore, the Cauchy operator \(\Psi_{t_2,t_1}(\cdot, \omega)\) acts continuously on R^d. A similar conclusion holds for the inverse \(\Psi^{-1}_{t_2,t_1}(\cdot, \omega)\), by using the backward equation. Hence \(\Psi_{t_2,t_1}(\cdot, \omega)\) is a homeomorphism, and trivially \(\Psi_{t_1,t_1}(\cdot, \omega) = \mathrm{id}\). Q

Theorem 2.9 Assume that conditions (H1)–(H3) hold on every compact interval of R. Then the family of Cauchy operators of (2.2) generates a stochastic two-parameter flow of homeomorphisms of R^d. Namely, for −∞ < t_1 ≤ t_2 < +∞ and ω ∈ C^{0,p-var}(R, R^m), define \(\Psi_{t_2,t_1}(\cdot, \omega)\) according to Definition 2.1, \(\Psi_{t,t}(\cdot, \omega) = \mathrm{id}\), and set \(\Psi_{t_1,t_2}(\cdot, \omega) = \Psi^{-1}_{t_2,t_1}(\cdot, \omega)\); then the family \(\Psi_{t_2,t_1}(\cdot, \omega)\), t_1, t_2 ∈ R, is a two-parameter flow of homeomorphisms of R^d.

Proof. Theorem 2.7 establishes the measurability of \(\Psi_{t_2,t_1}(x_0, \omega)\) on the probability space (Ω, F, P). Conditions (ii) and (iii) of Definition 1.7 follow from Theorem 2.8, while condition (iv) is a consequence of the definition \(\Psi_{t_1,t_2}(\cdot, \omega) = \Psi^{-1}_{t_2,t_1}(\cdot, \omega)\) for t_1 ≤ t_2; indeed, the proof of Theorem 2.8 shows that the inverse satisfies the backward equation (2.51). Condition (v) of Definition 1.7 follows from the definition of the Cauchy operators and Theorem 2.3, and condition (i) of Definition 1.7 holds due to Theorem 2.5. Q

Arguing similarly to the proof of Theorem 2.2, one obtains the continuity of the solution of the linearized equation of (2.1) along a fixed solution. We then have the following theorem.

Theorem 2.10. Suppose that assumptions (H_1)–(H_4) are satisfied on every compact interval of R. Then equation (2.1) generates a stochastic two-parameter flow of diffeomorphisms of R^d.

We now consider the autonomous version of the system,

dx_t = f(x_t) dt + g(x_t) dB^H_t,  (2.52)

in which the functions f and g are time-independent and satisfy assumptions (H_1)–(H_3) on every compact interval of R.

It follows from the definition of the Young integral that

∫_s^t g(x_u) dω_u = ∫_0^{t−s} g(x_{u+s}) d(θ_s ω)_u.

From the existence and uniqueness theorem, the solution X(t, s, x_0, ω) of equation (2.52) satisfies

X(t, s, x_0, ω) = X(t − s, 0, x_0, θ_s ω),  ∀t, s ∈ R.

Therefore the mapping ϕ : R × Ω × R^d → R^d defined by

ϕ(t, ω)x_0 := X(t, 0, x_0, ω)  (2.54)

possesses the cocycle property

ϕ(t + s, ω)x_0 = ϕ(t, θ_s ω) ∘ ϕ(s, ω)x_0,  ∀x_0 ∈ R^d, ω ∈ Ω, t, s ∈ R.

Hence the following theorem is a direct consequence of Theorem 2.9.

Theorem 2.11. System (2.52) generates a continuous random dynamical system given by (2.54).

Conclusions and discussions

This chapter presented results on the existence and uniqueness of solutions of the stochastic differential equation driven by fractional Brownian motion (Theorem 2.7), building on the deterministic results of Theorems 2.1 and 2.2. Theorem 2.4 provides estimates for the moments and tail probabilities of the solution. We also derived a growth estimate for the solution and established the continuity and differentiability of the pathwise solution in Theorems 2.2 and 2.6. Furthermore, Theorem 2.9 establishes the generation of a stochastic two-parameter flow which, in the autonomous case, yields a random dynamical system (Theorem 2.11).

The theorem on the existence and uniqueness of solutions to equation (2.1) imposes rather strong conditions on the function g, especially when compared with Itô stochastic differential equations (SDEs) driven by standard Brownian motion. This disparity may stem from the fact that our analysis does not exploit the probabilistic structure of fractional Brownian motion, due to the pathwise approach employed. We expect that similar conclusions remain valid under weaker regularity conditions on g, but this will be addressed in future research.

We are also interested in problems concerning the numerical solution of such systems. In the literature, the convergence of approximation schemes has been developed for autonomous equations; the findings are expected to carry over to the nonautonomous case, provided the coefficient function of the dt component satisfies less stringent conditions.
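The numerical approach mentioned above can be illustrated with a minimal first-order Euler scheme. The sketch below is not the scheme analyzed in the thesis: the fBm sampler (Cholesky factorization of the covariance R(s, t) = (s^{2H} + t^{2H} − |t − s|^{2H})/2), the coefficients f, g, and all step sizes are illustrative choices.

```python
import numpy as np

def fbm_path(H, n, T, rng):
    """Sample B^H on an n-point grid over (0, T] by Cholesky factorization of
    the fBm covariance R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter for safety
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

def euler(f, g, x0, H, n, T, rng):
    """First-order scheme x_{k+1} = x_k + f(x_k) dt + g(x_k) (B_{k+1} - B_k)."""
    B = fbm_path(H, n, T, rng)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * (B[k + 1] - B[k])
    return x

rng = np.random.default_rng(0)
# illustrative coefficients: dissipative drift, bounded smooth diffusion
path = euler(f=lambda x: -x, g=lambda x: 0.1 * np.cos(x),
             x0=1.0, H=0.75, n=500, T=1.0, rng=rng)
```

For H > 1/2 the diffusion increments are paired with B^H in the Young sense, which is why a bounded g with bounded derivative is the natural test case here.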

Lyapunov spectrum of nonautonomous linear fSDEs

This chapter analyzes the Lyapunov spectrum of the linear system

dx_t = A(t)x_t dt + C(t)x_t dB^H_t,  x_0 ∈ R^d, t ≥ 0,  (3.1)

where B^H is an m-dimensional fractional Brownian motion (fBm) defined on a canonical space. For simplicity of presentation we assume m = 1; the results remain valid for m > 1 by analogous arguments.

System (3.1) generates a stochastic two-parameter flow of linear mappings. Inspired by [19], [20], we investigate the Lyapunov spectrum associated with this flow, addressing both the randomness and the regularity of the spectrum.

The chapter is written on the basis of [23].

Recall from [31] the classical definition of the Lyapunov exponent of a function h : R_+ → R^d:

χ(h) := lim sup_{t→∞} (1/t) log |h(t)|.  (3.2)
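As a quick numerical illustration of (3.2), one can evaluate (1/t) log |h(t)| along a sequence t → ∞; the test function h(t) = t^3 e^{−2t} below is a hypothetical choice for this example only, and working with log |h| directly avoids floating-point underflow.

```python
import numpy as np

# chi(h) = limsup (1/t) log|h(t)|; for h(t) = t^3 e^{-2t} the polynomial
# factor contributes nothing asymptotically, so chi(h) = -2.
def chi_estimate(log_abs_h, ts):
    # evaluate (1/t) log|h(t)| along t -> infinity and take the supremum
    return max(log_abs_h(t) / t for t in ts)

ts = np.linspace(1e3, 1e4, 10)
est = chi_estimate(lambda t: 3 * np.log(t) - 2 * t, ts)
print(est)  # slightly above -2, due to the t^3 factor
```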

The generation of stochastic flow of linear operators

In analogy with Theorem 2.9, we obtain the result on the generation of a stochastic two-parameter flow by the linear system

dx_t = A(t)x_t dt + C(t)x_t dB^H_t,  x_{t_0} = x_0 ∈ R^d, t ∈ [0, T],  (3.3)

under the assumptions of Theorem 2.4. However, here we would like to present more details for the case of linear equations.

Theorem 3.1. Suppose that the assumptions of Theorem 2.4 are satisfied on every compact interval of R. Then equation (3.3) generates a stochastic two-parameter flow of linear operators of R^d on R.

Proof. First note that the same method as in the proof of Theorem 2.4 can be applied to prove the existence and uniqueness of the solution Φ_{t,t_0}(ω) of the matrix-valued differential equation

Φ_{t,t_0} = E + ∫_{t_0}^t A(s) Φ_{s,t_0} ds + ∫_{t_0}^t C(s) Φ_{s,t_0} dω_s,  t ∈ [t_0, T],  (3.4)

in which ω = B^H(ω) and E is the unit matrix. It is easy to show that the solution Φ_{·,·}(ω) : {(t, s) ∈ [t_0, T]^2 | s ≤ t} → R^{d×d} has the properties that Φ_{s,s}(ω) = I_{d×d} for all s ≥ 0 and

Φ_{t,s}(ω) ∘ Φ_{s,τ}(ω) = Φ_{t,τ}(ω),  ∀ t_0 ≤ τ ≤ s ≤ t ≤ T.  (3.5)

The solution Φ_{·,·}(ω) represents the mapping along the trajectories of the matrix equation (3.4) in forward time. As in the ODE case, the solution of this matrix equation is the Cauchy operator of the vector equation

dx_t = A(t)x_t dt + C(t)x_t dB^H_t,  x_{t_0} = x_0 ∈ R^d, t ∈ [t_0, T].  (3.6)

Next, consider the adjoint matrix-valued pathwise differential equation

dΨ_{t,t_0} = −A^T(t) Ψ_{t,t_0} dt − C^T(t) Ψ_{t,t_0} dω_t  (3.7)

with initial value Ψ_{t_0,t_0} = E, where A^T, C^T are the transposes of A and C, respectively. By similar arguments we can prove that there exists a unique solution Ψ_{t,t_0}(ω) of (3.7). Introduce the transformation u(t) := Ψ_{t,t_0}(ω)^T x_t. By the formula of integration by parts (see [42, Proposition 6.12 and Exercise 6.13] or a fractional version in Zähle [86]), we conclude that

du_t = [dΨ_{t,t_0}(ω)^T] x_t + Ψ_{t,t_0}(ω)^T dx_t = 0.

In other words, u_t = u_{t_0} = x_{t_0} = x_0, or equivalently Ψ_{t,t_0}(ω)^T x_t = x_0. Combining with Φ in equation (3.4), we conclude that Ψ_{t,t_0}(ω)^T Φ_{t,t_0}(ω) x_0 = x_0 for all x_0 ∈ R^d and t ∈ [t_0, T].

Hence the linear operator Φ_{t,t_0}(ω) is nondegenerate for all t ≥ t_0, with inverse Φ_{t,t_0}(ω)^{−1} = Ψ_{t,t_0}(ω)^T. In particular Φ_{t,t_0}(ω)x_0 ≠ 0 whenever x_0 ≠ 0, and for all t_0 ≤ s ≤ t ≤ T the operator Φ_{t,s}(ω) is likewise nondegenerate with Φ_{t,s}(ω)^{−1} = Ψ_{t,s}(ω)^T. Setting Φ_{s,t}(ω) := Ψ_{t,s}(ω)^T for s ≤ t in [0, T], the family of operators Φ_{t,s}(ω) clearly constitutes a continuous two-parameter flow.

On the other hand, arguing similarly to Theorems 2.2 and 2.9, Φ_{t,s}(ω)x_0 and Ψ_{t,s}(ω)x_0 are continuous w.r.t. (t, s, x_0) and measurable in ω. This completes the proof. □
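The identity Ψ_{t,t_0}(ω)^T Φ_{t,t_0}(ω) = I established above can be checked numerically. The sketch below is illustrative: it uses a smooth driver ω(t) = sin 2t (so the Young integrals reduce to Riemann–Stieltjes ones) and arbitrarily chosen constant matrices A, C, propagates (3.4) and (3.7) by an Euler scheme, and verifies that the product stays close to the identity.

```python
import numpy as np

A = np.array([[0.1, 1.0], [0.0, -0.2]])   # illustrative coefficients
C = np.array([[0.05, 0.2], [0.0, 0.1]])
omega = lambda t: np.sin(2.0 * t)          # smooth driver: Young = Riemann-Stieltjes

n, T = 20_000, 1.0
dt = T / n
Phi = np.eye(2)   # forward matrix flow, cf. (3.4)
Psi = np.eye(2)   # adjoint flow, cf. (3.7)
for k in range(n):
    dw = omega((k + 1) * dt) - omega(k * dt)
    Phi = Phi + (A @ Phi) * dt + (C @ Phi) * dw
    Psi = Psi - (A.T @ Psi) * dt - (C.T @ Psi) * dw
err = np.max(np.abs(Psi.T @ Phi - np.eye(2)))
print(err)  # O(dt): Psi^T Phi stays near the identity, so Phi^{-1} = Psi^T
```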

Lyapunov exponent of Young integrals w.r.t. B^H

In this section, we establish some properties of the pathwise Young integral driven by B^H. The p-var norm of B^H on a compact interval has finite moments of every order, by Lemma 2.5 and Proposition 2.3. Therefore, for the purposes of the following sections, we define Γ_k := E |B^H|^k_{p-var,[0,1]} for k ≥ 1.

We give here more detail on the estimation of Γ_k. Observe that for ν = 1/p < H,

|ω|_{p-var,[0,1]} ≤ |ω|_{ν-Höl,[0,1]}.

Fix k ∈ N with k ≥ max{2/(H − ν), 2p + 2}. Applying [42, Corollary A2] with α = ν + 1/k together with [71, Remark 1.2.2, p. 7], we obtain an explicit bound (3.9) on Γ_k, in which G(n) denotes the Gamma function.

Remark 3.1. The estimate in (3.9) still holds if we replace [0, 1] by [a, a + 1] for any a ∈ R.

Due to the ergodicity of (Ω, F, P, θ) and the Birkhoff ergodic theorem,

lim_{n→∞, n∈N} (1/n) Σ_{k=0}^{n−1} |ω|^p_{p-var,[k,k+1]} = Γ_p  for P-almost all ω.  (3.11)

Hence, for almost all ω ∈ Ω, the following limit holds:

lim_{n→∞} (1/n) |ω|^p_{p-var,[n,n+1]} = 0.  (3.12)

To study the Lyapunov exponent of system (3.1), we consider here some estimates on the Lyapunov exponents of functions defined as indefinite Young integrals w.r.t. B^H.

Lemma 3.1. Assume that c_0 := ‖c‖_{∞,R_+} < ∞ and that the integral X(t, ω) := ∫_0^t c(s) dω_s exists for all t ∈ R_+. Then

lim_{t→∞} (1/t) ∫_0^t c(s) dω_s = 0  almost surely.

Proof. Fix T > 0 and let Π_n be a sequence of partitions of [0, T] whose mesh |Π_n| → 0 as n → ∞. Denote by

X_n(t, ω) := Σ_{[u,v] ∈ Π_n, v ≤ t} c(u)(ω_v − ω_u)

the corresponding Riemann sums. Then X_n(t, ω) → X(t, ω) as n → ∞, and it is evident that each X_n is a Gaussian random variable with zero mean. Since

E[(B^H_u − B^H_v)(B^H_t − B^H_s)] = H(2H − 1) ∫_v^u ∫_s^t |a − b|^{2H−2} da db  (3.14)

for all v < u ≤ s < t (see [71, p. 7–8]), we have

E X_n(t, ·)^2 ≤ D c_0^2 t^{2H},

where D is a generic constant. Since X_n → X a.s., X(t, ·) is a centered normal random variable with Var X(t, ·) ≤ D c_0^2 t^{2H}. This implies E X(t, ·)^{2k} ≤ D t^{2kH}, where D is a constant depending on H, k, c_0. Fix 0 < ε < 1 − H and choose k large enough so that k(1 − ε − H) ≥ 1; we then have

E |X(n, ·)/n|^{2k} ≤ D n^{2k(H−1)} ≤ D n^{−2}.

Using the Borel–Cantelli lemma, we conclude that X(n, ·)/n → 0 as n → ∞ almost surely.

For the final statement, since X(n, ·)/n → 0 as n → ∞ and, on the other hand,

|∫_{⌊t⌋}^t c(s) dω_s| ≤ K ĉ |ω|_{p-var,[⌊t⌋,t]},

where ĉ bounds the q-var norm of c on unit intervals, we conclude by (3.12) that lim_{t→∞} (1/t) X(t, ω) = 0 almost surely. □

Lemma 3.2. Let g be a function of bounded q-variation on every compact interval, with 1/p + 1/q > 1, and let λ ≥ 0.
(i) If χ(g_t) ≤ λ and χ(|g|_{q-var,[t,t+1]}) ≤ λ, then G_t := ∫_0^t g_s dω_s satisfies χ(G_t) ≤ λ; the same bound holds for χ(|G|_{q-var,[t,t+1]}).
(ii) If λ > 0 and χ(g_t), χ(|g|_{q-var,[t,t+1]}) ≤ −λ, then the improper integral G_t := ∫_t^∞ g_s dω_s is well defined and satisfies χ(G_t) ≤ −λ; the same bound holds for χ(|G|_{q-var,[t,t+1]}).

Proof. (i) We fix ω such that (3.11)–(3.13) hold. Since χ(g_t), χ(|g|_{q-var,[t,t+1]}) ≤ λ, for arbitrary ε > 0 there exists a constant D depending on ε such that

|g_s| ≤ D e^{(λ+ε)s},  |g|_{q-var,[s,s+1]} ≤ D e^{(λ+ε)s},  ∀s ≥ 0.

The Young–Loève estimate applied on unit intervals then gives |G_t| ≤ D e^{(λ+2ε)t}, where D is a generic constant depending on ε. This implies χ(G_t) ≤ λ.

(ii) Arguing as in Proposition 1.3, G_t is well defined. For each ε > 0 such that 2ε < λ, there exists a constant D such that

|g_s| ≤ D e^{(−λ+ε)s},  |g|_{q-var,[s,s+1]} ≤ D e^{(−λ+ε)s}.

With 0 ≤ t ≤ t_1, estimating the tail integrals on unit intervals we obtain |G_t| ≤ D e^{(−λ+2ε)t}; we conclude that χ(G_t) ≤ −λ. The second conclusion can be proved similarly to that in (i). □

Motivated by Lemmas 3.1 and 3.2, throughout this chapter we always consider paths ω in a set Ω′ ∈ F of full probability on which (3.11), (3.12) and (3.13) hold everywhere.

Lyapunov spectrum for nonautonomous linear fSDEs

From now on, we study the system

dx_t = A(t)x_t dt + C(t)x_t dB^H_t,  x_0 ∈ R^d, t ≥ 0,  (3.15)

under the following conditions on the coefficient matrices A, C.

(H_1) Â := ‖A‖_{∞,R_+} < ∞.
(H_2) For some δ > 0, Ĉ := ‖C‖_{q-var,δ,R_+} := sup_{t ≥ 0} ‖C‖_{q-var,[t,t+δ]} < ∞.

These assumptions guarantee the existence of a unique solution of (3.15) on R_+ and hence the generation of a stochastic flow of linear operators of R^d, namely the flow of Cauchy operators {Φ_{t,s}(ω), s, t ∈ R_+}, where Φ_{t,s}(ω)x_0 denotes the value at time t ∈ R_+ of the solution starting from the initial condition x_0 ∈ R^d at time s ∈ R_+. Moreover, in condition (H_2) we can assume, without loss of generality, that δ = 1.

Denote

M_0 := max{2Â, [8KĈ]^p},  (3.16)

where K := (1 − 2^{1−1/p−1/q})^{−1}. Equation (3.15) is solved pathwise via the deterministic equation

dx_t = A(t)x_t dt + C(t)x_t dω_t,  x_{t_0} = x_0 ∈ R^d, t ≥ 0,  (3.17)

where ω = B^H(ω). From now on in this chapter we consider ω ∈ Ω′, which ensures (3.11), (3.12) and (3.13).

The Lyapunov spectrum of a linear differential equation consists of the Lyapunov exponents of its solutions; equivalently, it can be defined through the two-parameter flow of linear operators generated by the system.

Building on previous work, we introduce the concept of Lyapunov exponents for two-parameter flows of linear operators, which leads to the definition of the Lyapunov spectrum. We denote by G_k the Grassmannian manifold of all k-dimensional linear subspaces of R^d.

Definition 3.1. (i) Given a stochastic two-parameter flow Φ_{t,s}(ω) of linear operators of R^d on the time interval [t_0, ∞), the extended-real numbers (real numbers or symbols ∞ or −∞)

λ_k(ω) := inf_{V ∈ G_{d−k+1}} sup_{y ∈ V, y ≠ 0} lim sup_{t→∞} (1/t) log |Φ_{t,t_0}(ω)y|,  k = 1, …, d,  (3.18)

are called the Lyapunov exponents of the flow Φ_{t,s}(ω). The collection {λ_1(ω), …, λ_d(ω)} is called the Lyapunov spectrum of the flow Φ_{t,s}(ω).

(ii) For any u ∈ [t_0, ∞) the linear subspaces of R^d

E^k_u(ω) := { y ∈ R^d : lim sup_{t→∞} (1/t) log |Φ_{t,u}(ω)y| ≤ λ_k(ω) },  k = 1, …, d,  (3.19)

are called the Lyapunov subspaces at time u of the flow Φ_{t,s}(ω). The flag of nonincreasing linear subspaces of R^d

R^d = E^1_u(ω) ⊃ E^2_u(ω) ⊃ … ⊃ E^d_u(ω) ⊃ {0}

is called the Lyapunov flag at time u of the flow Φ_{t,s}(ω).

It is easily seen that the Lyapunov exponents in Definition 3.1 are independent of t_0 and are ordered: λ_1(ω) ≥ λ_2(ω) ≥ … ≥ λ_d(ω), ω ∈ Ω.

The classical definition of the Lyapunov spectrum for a linear system of ODEs relies on normal bases of the system's solutions (see [31]) and on the classical definition (3.2) of the Lyapunov exponent. Notably, Millionshchikov [67] showed the equivalence of the two definitions.

In the following remark we restate some facts from [31].

Remark 3.2. (i) For every invertible matrix B(ω), the fundamental solution matrix Φ_{t,t_0}(ω)B(ω) satisfies

Σ_{i=1}^d α_i(ω) ≥ Σ_{i=1}^d λ_i(ω),

where α_i(ω) is the Lyapunov exponent of its i-th column.

(ii) Furthermore, we have the Lyapunov inequality

Σ_{i=1}^d α_i(ω) ≥ lim sup_{t→∞} (1/t) log |det Φ_{t,t_0}(ω)|.

Note that if the Lyapunov exponents {α_i(ω), i = 1, …, d} of the columns of a fundamental solution matrix satisfy the equality Σ_{i=1}^d α_i(ω) = lim_{t→∞} (1/t) log |det Φ_{t,t_0}(ω)|, then {α_1(ω), …, α_d(ω)} is the spectrum of the flow Φ_{t,s}(ω), i.e. {α_1(ω), …, α_d(ω)} = {λ_1(ω), …, λ_d(ω)}.

Proposition 3.1. (i) The Lyapunov exponents λ_k(ω), k = 1, …, d, of Φ_{t,s}(ω) are measurable functions of ω ∈ Ω.

(ii) For any u ∈ [t_0, ∞), the Lyapunov subspaces E^k_u(ω), k = 1, …, d, of Φ_{t,s}(ω) are measurable and invariant with respect to the flow in the following sense:

Φ_{t,s}(ω) E^k_s(ω) = E^k_t(ω),  for all s, t ∈ [t_0, ∞), ω ∈ Ω, k = 1, …, d.

Proof. The proof is similar to that of [19, Theorems 2.5, 2.7, 2.8]. □

Now we are able to formulate the first main result of this chapter on the Lyapunov spectrum of the equation (3.17).

Theorem 3.2. Let Φ_{t,s}(ω) be the flow generated by (3.15) and {λ_1(ω), …, λ_d(ω)} be its Lyapunov spectrum, hence that of equation (3.15). Then under assumptions (H_1), (H_2), the Lyapunov exponents λ_k(ω), k = 1, …, d, can be computed via a discrete-time interpolation of the flow, i.e.

λ_k(ω) = inf_{V ∈ G_{d−k+1}} sup_{y ∈ V, y ≠ 0} lim sup_{n→∞, n∈N} (1/n) log |Φ_{n,t_0}(ω)y|,  k = 1, …, d.  (3.20)

Moreover, the spectrum is bounded by a constant, namely

|λ_k(ω)| ≤ 1 + M_0(1 + Γ_p),  k = 1, …, d.  (3.21)

Proof. Recall from (2.38) that for each s ∈ R_+,

sup_{t ∈ [s,s+1]} log ‖Φ_{t,s}(ω)‖ ≤ 1 + M_0(1 + |ω|^p_{p-var,[s,s+1]}).  (3.22)

Fix k ∈ {1, …, d} and y ∈ R^d. Suppose 0 = t_0 < t_1 < t_2 < t_3 < … is an increasing sequence of positive real numbers along which the upper limit lim sup_{t→∞} (1/t) log |Φ_{t,t_0}(ω)y| =: z ∈ R̄ is realized, i.e.

lim_{m→∞} (1/t_m) log |Φ_{t_m,t_0}(ω)y| = z.

Let n_m denote the largest natural number smaller than or equal to t_m. Using the flow property of Φ_{t,s}(ω), estimate (3.22) and property (3.11), we have

z = lim_{m→∞} (1/t_m) log |Φ_{t_m,t_0}(ω)y|
  ≤ lim sup_{m→∞} (1/n_m) [1 + M_0(1 + |ω|^p_{p-var,[n_m,n_m+1]})] + lim sup_{m→∞} (1/n_m) log |Φ_{n_m,t_0}(ω)y|
  ≤ lim sup_{t→∞, t∈N} (1/t) log |Φ_{t,t_0}(ω)y|,  (3.23)

since the first term vanishes by (3.12). On the other hand, clearly

lim sup_{t→∞, t∈N} (1/t) log |Φ_{t,t_0}(ω)y| ≤ lim sup_{t→∞} (1/t) log |Φ_{t,t_0}(ω)y|.

Consequently, we obtain for all k ∈ {1, …, d} and y ∈ R^d the equality

lim sup_{t→∞} (1/t) log |Φ_{t,t_0}(ω)y| = lim sup_{t→∞, t∈N} (1/t) log |Φ_{t,t_0}(ω)y|,

which proves (3.20).

Since Φ_{t,s}(ω) = (Ψ_{t,s}(ω)^T)^{−1}, where Ψ is the solution matrix of the adjoint equation (3.7), it follows that

lim sup_{n→∞} (1/n) log |Φ_{n,t_0}(ω)y| ≥ − lim sup_{n→∞} (1/n) log ‖Ψ_{n,t_0}(ω)‖ ≥ −[1 + M_0(1 + Γ_p)],

where the last inequality can be proved similarly to (3.23). This proves (3.21). □

Remark 3.3. The discretization scheme in Theorem 3.2 can be formulated for any step size r > 0.
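The discrete-time characterization suggests the standard QR scheme for estimating the spectrum numerically: propagate a frame of vectors over unit-time windows, re-orthonormalize, and average the logarithms of the diagonal of R. The sketch below is illustrative and not from the thesis: it uses a hypothetical constant matrix A (eigenvalues 0.5 and −0.7), C = 0.1·I and the smooth bounded driver ω(t) = sin t, for which the exponents reduce to the eigenvalues of A because (1/t)ω(t) → 0.

```python
import numpy as np

A = np.array([[0.3, 1.0], [0.2, -0.5]])  # eigenvalues 0.5 and -0.7
C = 0.1 * np.eye(2)                      # commutes with A
omega = lambda t: np.sin(t)              # smooth bounded driver (assumption)

def unit_step(Q, t0, substeps=200):
    """Euler approximation of the flow Phi_{t0+1,t0} applied to the frame Q."""
    dt = 1.0 / substeps
    for k in range(substeps):
        s = t0 + k * dt
        dw = omega(s + dt) - omega(s)
        Q = Q + (A @ Q) * dt + (C @ Q) * dw
    return Q

Q = np.eye(2)
log_r = np.zeros(2)
N = 200
for n in range(N):
    Q = unit_step(Q, float(n))
    Q, R = np.linalg.qr(Q)               # re-orthonormalize the frame
    log_r += np.log(np.abs(np.diag(R)))  # accumulate stretching rates
lam = np.sort(log_r / N)[::-1]
print(lam)  # approximately (0.5, -0.7)
```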

Corollary 3.1 (Integrability condition). Under the assumptions (H_1) and (H_2), Φ_{t,s}(ω) satisfies the following integrability condition:

E sup_{t_0 ≤ s ≤ t ≤ t_0+1} log^+ ‖Φ_{t,s}(ω)^{±1}‖ ≤ 1 + M_0(1 + Γ_p),  (3.24)

for any t_0 ≥ 0, where M_0 is determined by (3.16) and we use the notation log^+ a := max{log a, 0}, a > 0.

Proof. The proof follows directly from (3.22) applied to Φ and Ψ, together with Remark 3.1, noting that for the inverse flow Φ_{t,s}(ω)^{−1} = Ψ_{t,s}(ω)^T. □

3.3.2 Lyapunov spectrum of triangular systems

In the theory of ODEs, it is well known that a linear triangular system can be solved successively, so that its Lyapunov spectrum can be computed directly from its coefficients. This section presents a similar result for the deterministic system

dX_t = A(t)X_t dt + C(t)X_t dω_t,  (3.25)

where X = (x^1, …, x^d)^T and A(t), C(t) are upper triangular d × d matrices satisfying conditions (H_1) and (H_2), while the driving path ω satisfies (3.11)–(3.13).

We first consider equation (3.25) in the one-dimensional case

dz_t = a(t)z_t dt + c(t)z_t dω_t,  z(0) = z_0.  (3.26)

Using the change of variables formula (see Zähle [86, Theorem 3.1]), (3.26) can be solved explicitly as

z_t = z_0 exp{ ∫_0^t a(s) ds + ∫_0^t c(s) dω_s }.  (3.27)

Moreover, we have the following lemma.

Lemma 3.3. Suppose the exact limit a := lim_{t→∞} (1/t) ∫_0^t a(s) ds exists. Then the following estimates hold for any nontrivial solution z ≢ 0 of (3.26):
(i) χ(z_t) = a;
(ii) χ(|z|_{q-var,[t,t+1]}) ≤ a.

Proof. (i) The statement is evident from (3.27): namely, by Lemma 3.1,

χ(z_t) = lim sup_{t→∞} (1/t) ( log |z_0| + ∫_0^t a(s) ds + ∫_0^t c(s) dω_s ) = a.

(ii) By linearity it suffices to prove the case z_0 = 1, which we assume from now on. Introduce the notations f_t := ∫_0^t a(s) ds, g_t := ∫_0^t c(s) dω_s, so that z_t = e^{f_t} e^{g_t} and

|f|_{q-var,[s,t]} ≤ (t − s) ‖a‖_{∞,[s,t]},  |g|_{q-var,[s,t]} ≤ K ‖c‖_{q-var,[s,t]} |ω|_{p-var,[s,t]}

for all 0 ≤ s < t, with χ(e^{f_t}) = a and χ(e^{g_t}) = 0. Let ε > 0 be arbitrary; there exists D_1 such that

e^{f_s} ≤ D_1 e^{(a+ε/3)s},  e^{g_s} ≤ D_1 e^{εs/3},  ∀s ≥ 0.

Now let t_0 ≥ 0 be arbitrary; then

‖e^f‖_{∞,[t_0,t_0+1]} ≤ D_2 e^{(a+ε/3)t_0},  ‖e^g‖_{∞,[t_0,t_0+1]} ≤ D_2 e^{εt_0/3}

hold with D_2 := max{D_1, D_1 e^{a+ε/3}}. On the other hand, by the mean value theorem and the continuity of f, for any s, t ∈ [t_0, t_0 + 1],

|e^f|_{q-var,[t_0,t_0+1]} ≤ ‖e^f‖_{∞,[t_0,t_0+1]} |f|_{q-var,[t_0,t_0+1]},  |e^g|_{q-var,[t_0,t_0+1]} ≤ ‖e^g‖_{∞,[t_0,t_0+1]} |g|_{q-var,[t_0,t_0+1]}.

Combining these estimates, for s, t ∈ [t_0, t_0 + 1] we obtain |z|_{q-var,[t_0,t_0+1]} ≤ D e^{(a+ε)t_0} with a generic constant D. This proves the second claim. □
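The explicit formula (3.27) can be checked numerically against an Euler scheme. The sketch below is illustrative: it uses a smooth driver ω(t) = sin t, for which ∫ c dω = ∫ c(s) ω′(s) ds, and arbitrarily chosen smooth coefficients a, c.

```python
import numpy as np

a = lambda t: 0.5 * np.cos(t)       # illustrative coefficients
c = lambda t: 0.3 / (1.0 + t)
omega = lambda t: np.sin(t)         # smooth driver

z0, T, n = 2.0, 3.0, 100_000
dt = T / n
t = np.arange(n) * dt
dw = omega(t + dt) - omega(t)
# left-point Euler scheme for dz = a z dt + c z dω ...
z = z0 * np.prod(1.0 + a(t) * dt + c(t) * dw)
# ... versus the explicit solution z_T = z_0 exp(∫ a ds + ∫ c dω)
explicit = z0 * np.exp(np.sum(a(t) * dt) + np.sum(c(t) * dw))
print(z, explicit)  # the two values agree up to O(dt)
```

Note that no Itô-type correction appears: the pathwise (Young) integral obeys the ordinary chain rule, which is exactly what (3.27) expresses.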

Next we will show by induction that the Lyapunov spectrum of system (3.25) is {ā_kk, 1 ≤ k ≤ d} with ā_kk := lim_{t→∞} (1/t) ∫_0^t a_kk(s) ds, provided these limits are well defined and exact.

The following lemma is a modified version of [31, Theorem 2, p. 19].

Lemma 3.4. Assume that a finite sequence of continuous functions g_i : R_+ → R with finite q-var norm on any compact interval of R_+, i = 1, …, n, satisfies χ(g_i(t)), χ(|g_i|_{q-var,[t,t+1]}) ≤ λ_i. Then
(i) χ(Σ_{i=1}^n g_i) ≤ max_i λ_i and χ(Π_{i=1}^n g_i) ≤ Σ_{i=1}^n λ_i;
(ii) the same estimates hold for the q-var norms on [t, t + 1] of the sum and of the product.

Proof. (i) The first statement is known from [31, Theorem 2, p. 19]. (ii) For the second statement, it suffices to prove the product case for n = 2; the general case is obtained by induction. From Lemma 2.4, it can be seen that

|g_1 g_2|_{q-var,[t,t+1]} ≤ ‖g_1‖_{∞,[t,t+1]} |g_2|_{q-var,[t,t+1]} + ‖g_2‖_{∞,[t,t+1]} |g_1|_{q-var,[t,t+1]}.

Therefore the second statement is a consequence of the first one and (i). □

By similar arguments using the integration by parts formula, the nonhomogeneous one-dimensional linear equation

dx_t = [a(t)x_t + h_1(t)] dt + [c(t)x_t + h_2(t)] dω_t

can be solved explicitly as

x_t = e^{∫_0^t a(u)du + ∫_0^t c(u)dω_u} ( x_0 + ∫_0^t e^{−∫_0^s a(u)du − ∫_0^s c(u)dω_u} h_1(s) ds + ∫_0^t e^{−∫_0^s a(u)du − ∫_0^s c(u)dω_u} h_2(s) dω_s ),

provided that h_1, h_2 ∈ C^{q-var}([0, t], R) for all t > 0. This allows us to solve triangular systems by substitution, as seen in the following theorem.

Theorem 3.3. Under assumptions (H_1)–(H_2), if there exist the exact limits

ā_kk := lim_{t→∞} (1/t) ∫_0^t a_kk(s) ds,  k = 1, …, d,  (3.29)

then the Lyapunov spectrum of system (3.25), hence of (3.15), is given by

{ā_11, ā_22, …, ā_dd}.  (3.30)

Proof. Denote by Y^k_t := exp{ ∫_0^t a_kk(s) ds + ∫_0^t c_kk(s) dω_s }, k = 1, …, d, the diagonal scalar solutions. Then, due to Lemma 3.3 and (3.29),

χ(Y^k) = ā_kk,  χ(|Y^k|_{q-var,[t,t+1]}) ≤ ā_kk,  χ((Y^k)^{−1}) = −ā_kk,  χ(|(Y^k)^{−1}|_{q-var,[t,t+1]}) ≤ −ā_kk.

We construct a fundamental solution matrix X(t) = (x_ik)_{d×d} of (3.25) as follows:

x_ik ≡ 0 if i > k,  x_kk := Y^k,

x_ik(t) := Y^i_t [ ∫_{t_ik}^t (Y^i_s)^{−1} Σ_{j=i+1}^k a_ij(s) x_jk(s) ds + ∫_{t_ik}^t (Y^i_s)^{−1} Σ_{j=i+1}^k c_ij(s) x_jk(s) dω_s ] if i < k,

in which

t_ik := 0 if ā_kk − ā_ii ≥ 0,  t_ik := ∞ if ā_kk − ā_ii < 0.

Now we consider the d-th column of X and prove by induction that

χ(x_jd) ≤ ā_dd and χ(|x_jd|_{q-var,[t,t+1]}) ≤ ā_dd,  j = 1, 2, …, d.

Namely, by Lemma 3.3 the statement is true for j = d. Assume that χ(x_jd), χ(|x_jd|_{q-var,[t,t+1]}) ≤ ā_dd for all i + 1 ≤ j ≤ d; we will show that the same holds for j = i.

Firstly, since A is bounded, χ(a_ij(t)) ≤ 0. In addition, we can prove that for I_t := ∫_{t_id}^t k_s ds with χ(k_s) ≤ λ := ā_dd − ā_ii, one has χ(I_t) ≤ ā_dd − ā_ii and χ(|I|_{q-var,[t,t+1]}) ≤ ā_dd − ā_ii. Indeed, for each ε > 0 we have |k_s| ≤ D e^{(λ+ε)s} on [t, t + 1], in which D is a constant depending on ε; this implies |I|_{q-var,[t,t+1]} ≤ D e^{(λ+ε)t}. The proof for the case I_t := ∫_{t_id}^t k_s dω_s is similar, using Lemma 3.2.

Moreover, χ((Y^i)^{−1}), χ(|(Y^i)^{−1}|_{q-var,[t,t+1]}) ≤ −ā_ii, and C satisfies (H_2), i.e. χ(C(t)), χ(|C|_{q-var,[t,t+1]}) ≤ 0. Combining this with the inductive hypothesis and applying Lemma 3.4, we obtain

χ( (Y^i_t)^{−1} Σ_{j=i+1}^d a_ij(t) x_jd(t) ),  χ( (Y^i_t)^{−1} Σ_{j=i+1}^d c_ij(t) x_jd(t) ) ≤ ā_dd − ā_ii,

together with the corresponding estimates for the q-var norms on [t, t + 1]. Applying Lemma 3.4 once more to Y^i and to the integrals I, J above (J being the dω_s integral), we conclude that

χ(x_id), χ(|x_id|_{q-var,[t,t+1]}) ≤ ā_ii + (ā_dd − ā_ii) = ā_dd.

Consequently, the Lyapunov exponent of the d-th column X_d of X is at most ā_dd; combined with χ(x_dd) = ā_dd this yields χ(X_d) = ā_dd.

Similarly, χ(X_i) = ā_ii for i = 1, 2, …, d, in which X_i is the i-th column of X. Finally, since Σ_{i=1}^d ā_ii = lim_{t→∞} (1/t) log |det X(t)|, X(t) is a normal matrix solution of (3.25) and the Lyapunov spectrum of (3.25) is {ā_11, ā_22, …, ā_dd}. It follows that the spectrum of (3.15) is the same. □

Remark 3.4. The Lyapunov spectrum of (3.15) in Theorem 3.3 is nonrandom a.s.
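Theorem 3.3 can be illustrated numerically: for the diagonal solutions Y^k_t = exp(∫_0^t a_kk ds + ∫_0^t c_kk dω_s), the exponent (1/T) log Y^k_T converges to the average ā_kk, because a bounded driver contributes (1/T) ∫_0^T c_kk dω → 0. The coefficients and the smooth driver ω(t) = sin t below are illustrative choices with ā_11 = 0.2 and ā_22 = −0.4.

```python
import numpy as np

omega = lambda t: np.sin(t)          # smooth bounded driver (assumption)
T, n = 400.0, 400_000
dt = T / n
t = np.arange(n) * dt
dw = omega(t + dt) - omega(t)

# (1/T) log Y^k_T = (1/T) (∫ a_kk ds + ∫ c_kk dω), left-point quadrature
lam1 = (np.sum((0.2 + np.cos(t)) * dt) + np.sum(0.3 * dw)) / T
lam2 = (np.sum((-0.4 + 0.5 * np.sin(2 * t)) * dt)
        + np.sum(0.1 / (1.0 + t) * dw)) / T
print(lam1, lam2)  # approximately 0.2 and -0.4
```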

The concept of regularity was introduced by Lyapunov for linear ODEs and has since attracted a lot of interest (see e.g. [4, Chapter 3, p. 115], [20], [69], or [8, Section 1.3]). For a linear YDE, we define the concept of Lyapunov regularity via the generated two-parameter flow of linear operators in R^d.

Definition 3.2. Let Φ_{t,s}(ω) be a two-parameter flow of linear operators of R^d and {λ_1(ω), …, λ_d(ω)} be the Lyapunov spectrum of Φ_{t,s}(ω). Then the nonnegative number

σ(ω) := Σ_{k=1}^d λ_k(ω) − lim inf_{t→∞} (1/t) log |det Φ_{t,0}(ω)|

is called the coefficient of nonregularity of the two-parameter flow Φ_{t,s}(ω).

The coefficient of nonregularity of the linear YDE (3.17) is, by definition, the coefficient of nonregularity of the two-parameter flow generated by (3.17).

A two-parameter flow is called Lyapunov regular if its coefficient of nonregularity equals 0 identically. A linear YDE is called Lyapunov regular if its coefficient of nonregularity equals 0.

In particular, if a two-parameter linear flow Φ_{t,s}(ω) is Lyapunov regular, then det Φ_{t,0}(ω) and every trajectory possess exact Lyapunov exponents, i.e. the upper limit in (3.18) is a true limit.

We define the adjoint equation of (3.17) by

dy_t = −A^T(t) y_t dt − C^T(t) y_t dω_t.  (3.31)

The following lemma is a version of the Perron theorem from the classical ODE case.

Lemma 3.5 (Perron Theorem). Let α_1 ≥ … ≥ α_d and β_1 ≤ … ≤ β_d be the Lyapunov spectra of (3.17) and (3.31), respectively. Then (3.17) is Lyapunov regular if and only if α_i + β_i = 0 for all i = 1, …, d.

Proof. The proof goes line by line as the ODE version in [31, pp. 170–173]. □

Theorem 3.4 (Lyapunov theorem on regularity of triangular systems). Suppose that the matrices A(t), C(t) are upper triangular and satisfy (H_1)–(H_2). Then system (3.25) is Lyapunov regular if and only if the limits lim_{t→∞} (1/t) ∫_0^t a_kk(s) ds, k = 1, …, d, exist.

Proof. The "if" part is proved in Theorem 3.3. For the "only if" part, the proof is similar to that of [31, p. 174]. Indeed, starting from the normal basis of R^d which forms the unit matrix, we construct a fundamental matrix X̃ of the system which is upper triangular with diagonal entries Y^1_t, Y^2_t, …, Y^d_t, where the Y^k are defined as in Theorem 3.3.

We then select an upper triangular matrix B = B(ω) with diagonal elements equal to 1 such that X := X̃B is a normal basis of (3.25), with column vectors x^i. Setting Y = (y_ij) := (X^{−1})^T and applying the reasoning of Lemma 3.5 under the regularity assumption, Y is a normal basis of (3.31). Note that y_kk = (Y^k_t)^{−1} and that χ(x^k) + χ(y^k) = 0 for all k = 1, …, d. Moreover,

χ(x^k) ≥ χ(Y^k) = lim sup_{t→∞} (1/t) ∫_0^t a_kk(s) ds,

and similarly,

χ(y^k) ≥ χ((Y^k)^{−1}) = − lim inf_{t→∞} (1/t) ∫_0^t a_kk(s) ds.

Hence

0 = χ(x^k) + χ(y^k) ≥ lim sup_{t→∞} (1/t) ∫_0^t a_kk(s) ds − lim inf_{t→∞} (1/t) ∫_0^t a_kk(s) ds ≥ 0,

which implies that the limit lim_{t→∞} (1/t) ∫_0^t a_kk(s) ds exists for each k = 1, …, d. □

Almost sure Lyapunov regularity

In this subsection, for simplicity of presentation, we consider all equations on the whole time line R. The half-line case R_+ can be treated similarly.

We first consider the special case in which the coefficient functions are autonomous, A(·) ≡ A and C(·) ≡ C. Under these conditions, the stochastic two-parameter flow Φ_{t,s}(ω) of (3.15) generates a linear random dynamical system Φ′. Indeed, since

x_{t+s} = x_s + ∫_0^t A x_{u+s} du + ∫_0^t C x_{u+s} d(θ_s ω)_u,

it follows due to the autonomy that Φ_{t,s}(ω) = Φ_{t−s,0}(θ_s ω). Hence Φ′_t(ω) := Φ_{t,0}(ω) satisfies the cocycle property Φ′_{t+s}(ω) = Φ′_t(θ_s ω) ∘ Φ′_s(ω). Setting t_0 = 0, it follows from (3.24) that

E sup_{0 ≤ s ≤ t ≤ 1} log^+ ‖Φ_{t,s}(ω)^{±1}‖ < ∞.

By applying the multiplicative ergodic theorem (MET, see [75] and [4, Chapter 3]) to the generated cocycle Φ′ from (3.15), we obtain a Lyapunov spectrum with exact and nonrandom Lyapunov exponents, which coincides with the Lyapunov spectrum of Definition 3.1; furthermore, the flag of Oseledets spaces coincides with the flag of Lyapunov subspaces.

In general, it might not be true that system (3.15) is regular for almost every ω.

Under additional assumptions on A(t) and C(t), however, we can construct a linear random dynamical system that is almost surely Lyapunov regular in the pathwise sense. The construction uses the Bebutov flow, following Millionshchikov's method. Specifically, we first assume that A satisfies a condition stronger than (H_1), namely that A is bounded and uniformly continuous.

Consider the shift dynamical system S^A_t(A)(·) := A(· + t) in the space C_b := C_b(R, R^{d×d}) of bounded, uniformly continuous matrix-valued continuous functions on R with the supremum norm. The closed hull H_A := cl ∪_t S^A_t(A) in C_b is then compact (see e.g. [55, Theorem 4.9, p. 63]); hence we can construct on H_A a probability structure such that (H_A, F_A, µ_A, S^A) is a metric dynamical system, where µ_A is an S^A-invariant probability measure provided by the Krylov–Bogoliubov theorem [89].

To implement Millionshchikov's method of Bebutov flows in our setting, we need to construct the analogous quadruple (H_C, F_C, µ_C, S^C), which requires an additional regularity condition on C. Let C^{0,α-Höl}([a, b], R^{d×d}) denote the closure of the set of smooth paths from [a, b] to R^{d×d} in the α-Hölder norm, and let C^{0,α-Höl}(R, R^{d×d}) denote the space of all functions x : R → R^{d×d} whose restriction to any compact interval I belongs to C^{0,α-Höl}(I, R^{d×d}). The latter space is equipped with the compact-open topology generated by the Hölder norms, via the metric

d_α(x, y) := Σ_{m≥1} 2^{−m} min{1, ‖x − y‖_{α-Höl,[−m,m]}}.

Following [55, Chapter 2, p. 62], for each c ∈ C^{0,α-Höl}(R, R^{d×d}), an interval [a, b] and δ > 0, we define the modulus of α-Hölder continuity on [a, b]:

m_{[a,b]}(c, δ) := |c|_{α,δ,[a,b]} := sup_{s,t ∈ [a,b], 0 < |t−s| ≤ δ} |c(t) − c(s)| / |t − s|^α.
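For intuition, the α-Hölder quantities above can be approximated on a grid. The sketch below is illustrative: it computes a discrete α-Hölder norm (sup norm plus Hölder seminorm over grid pairs, a crude proxy for the modulus-based definition) and the metric d_α truncated at a hypothetical cut-off M.

```python
import numpy as np

def holder_norm(f, a, b, alpha, n=200):
    """Discrete α-Hölder norm on [a, b]: sup norm + seminorm over grid pairs."""
    t = np.linspace(a, b, n)
    v = f(t)
    seminorm = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            seminorm = max(seminorm, abs(v[j] - v[i]) / (t[j] - t[i]) ** alpha)
    return np.max(np.abs(v)) + seminorm

def d_alpha(f, g, alpha, M=6):
    """Truncation of d_α(x, y) = Σ_{m≥1} 2^{-m} min(1, ||x - y||_{α-Höl,[-m,m]})."""
    return sum(2.0 ** (-m) * min(1.0, holder_norm(lambda t: f(t) - g(t), -m, m, alpha))
               for m in range(1, M + 1))

d = d_alpha(np.sin, np.cos, 0.5)
print(d)  # in (0, 1]; each Hölder norm here exceeds 1, so d = 1 - 2^{-6}
```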

By the same arguments as in [55, Theorems 4.9 and 4.10, pp. 62–64] we get the following result, whose proof is given in the Appendix.

Lemma 3.6. A set H ⊂ C^{0,α-Höl}(R, R^k) has a compact closure if and only if the following conditions hold:

sup_{c ∈ H} |c(0)| < ∞,  (3.32)

lim_{δ→0} sup_{c ∈ H} m_{[a,b]}(c, δ) = 0 for every interval [a, b].  (3.33)

To construct a Bebutov flow for C, assume that there exists α with α + 1/p > 1 such that C ∈ C^{0,α-Höl}(R, R^{d×d}) satisfies a condition stronger than (H_2):

(H′_2)  ‖C‖_{∞,R} := sup_{t∈R} |C(t)| < ∞ and lim_{δ→0} sup_{t∈R} m_{[t,t+1]}(C, δ) = 0.  (3.34)

Consider the set of translations C_r(·) := C(r + ·) ∈ C^{0,α-Höl}(R, R^{d×d}). Under condition (3.34), Lemma 3.6 shows that the closure H_C := cl{C_r(·) : r ∈ R} is compact in the separable complete metric space C^{0,α-Höl}(R, R^{d×d}); in fact, the shift S^C_t c(·) := c(t + ·) also preserves the norm on C^{0,α-Höl}(R, R^{d×d}). Arguing similarly to [5, Theorem 5], since S^C is continuous w.r.t. t and uniformly continuous w.r.t. c on sufficiently large compact intervals, S^C is continuous in the joint variable (t, c) on R × C^{0,α-Höl}(R, R^{d×d}); in particular, S^C is a measurable dynamical system. Specifically, we first show that for fixed c the mapping t ↦ S^C_t c is continuous. Given ε > 0 and a fixed t_0 ∈ R, choose m large enough that Σ_{n=1+m}^∞ 2^{−n} < ε/2; we will show that for |t − t_0| less than 1 and small enough, ‖S^C_t c − S^C_{t_0} c‖_{α-Höl,[−m,m]} < ε/2, and then

d_α(S^C_t c, S^C_{t_0} c) ≤ ‖S^C_t c − S^C_{t_0} c‖_{α-Höl,[−m,m]} + Σ_{n=1+m}^∞ 2^{−n} < ε.

Choose n > 0 such that t_0 + s + 1, t_0 + s − 1 ∈ [−n, n] for all s ∈ [−m, m]. For the given ε there exists c̃ ∈ C^1([−n, n], R^{d×d}) such that ‖c̃ − c‖_{α-Höl,[−n,n]} < ε/6. We obtain

‖S^C_t c − S^C_{t_0} c‖_{α-Höl,[−m,m]} ≤ ‖S^C_t (c − c̃)‖_{α-Höl,[−m,m]} + ‖S^C_t c̃ − S^C_{t_0} c̃‖_{α-Höl,[−m,m]} + ‖S^C_{t_0}(c̃ − c)‖_{α-Höl,[−m,m]} < ε/2

for |t − t_0| small enough. Now fix t and c_0; for each ε > 0 choose m, n similarly. We have

‖S^C_t c − S^C_t c_0‖_{α-Höl,[−m,m]} ≤ ‖c − c_0‖_{α-Höl,[−n,n]} ≤ 2^n d_α(c, c_0).

This proves the continuity of S^C w.r.t. c.

The shift dynamical system S^C_t c(·) = c(t + ·) maps H_C into itself; hence by the Krylov–Bogoliubov theorem there exists at least one probability measure µ_C on H_C that is invariant under S^C, i.e. µ_C(S^C_t ·) = µ_C(·) for all t ∈ R.

It then makes sense to construct the product probability space B := H_A × H_C × Ω with the product sigma-field F_A × F_C × F, the product measure µ_B := µ_A × µ_C × P, and the product dynamical system Θ = S^A × S^C × θ given by

Θ_t(Ã, C̃, ω) := (S^A_t Ã, S^C_t C̃, θ_t ω).

Now for each point b = (Ã, C̃, ω) ∈ B, the fundamental (matrix) solution Φ*(t, b) of the equation

dx_t = Ã(t)x_t dt + C̃(t)x_t dω_t,  x_0 ∈ R^d,  (3.35)

is defined by Φ*(t, b)x_0 := x_t, where x_t is the value at time t of the (vector) solution x(·) starting at x_0 at time 0.

Theorem 3.5. Φ* : R × B × R^d → R^d defines an RDS over the metric dynamical system (B, µ_B, Θ).

Proof. Φ* satisfies the cocycle property due to the existence and uniqueness theorem and the fact that

x_{t+s} = x_s + ∫_0^t Ã(u + s) x_{u+s} du + ∫_0^t C̃(u + s) x_{u+s} d(θ_s ω)_u.

That Φ* is continuous w.r.t. (t, x_0) is evident. Consider b_1 = (Ã_1, C̃_1, ω_1) ∈ B in a neighborhood of b and the equation dx^1_t = Ã_1(t)x^1_t dt + C̃_1(t)x^1_t dω^1_t, x^1_0 = x_0 ∈ R^d. For z_t := x^1_t − x_t, t ∈ R, we have

z_t = ∫_0^t [Ã_1(s)x^1_s − Ã(s)x_s] ds + ∫_0^t C̃_1(s)x^1_s dω^1_s − ∫_0^t C̃(s)x_s dω_s.

At this point one can repeat the arguments of Lemma 2.3 to get

|z_t| ≤ D ( |z_0| + ‖Ã_1 − Ã‖_{∞,R} + d_α(C̃_1, C̃) + d(ω_1, ω) ),

where D is a generic constant. Since z_0 = 0, we conclude that the solution is continuous, hence measurable, w.r.t. b. □

Now we can apply the multiplicative ergodic theorem to get the following main result of this section.

Theorem 3.6 (Millionshchikov theorem). Under assumptions (H′_1), (H′_2), the nonautonomous linear stochastic (ω-wise) equation (3.15) is Lyapunov regular for almost all b ∈ B in the sense of the probability measure µ_B.

Proof. Observe that Ã and C̃ satisfy conditions (H_1) and (H_2), so that Corollary 3.1 is applicable. The integrability condition w.r.t. the product probability measure µ_B follows directly from (3.24). Consequently, the conclusions of the multiplicative ergodic theorem hold for almost all b ∈ B; in particular, equation (3.35) is Lyapunov regular for µ_B-almost all b ∈ B. □

Random attractors for nonautonomous fSDEs


References
[1] E. Alos, O. Mazet, D. Nualart. Stochastic calculus with respect to Gaussian processes. The Annals of Probability, 29, No. 2, (2001), 766–801.
[2] H. Amann. Ordinary Differential Equations: An Introduction to Nonlinear Analysis. Walter de Gruyter, Berlin New York, (1990).
[3] V. I. Arnold. Ordinary Differential Equations. Springer-Verlag Berlin Heidelberg, (1992).
[4] L. Arnold. Random Dynamical Systems. Springer, Berlin Heidelberg New York, (1998).
[5] I. Bailleul, S. Riedel, M. Scheutzow. Random dynamical systems, rough paths and rough flows. J. Diff. Equat., 262, Iss. 12, (2017), 5792–5823.
[7] B. Boufoussi, S. Hajji. Functional differential equations driven by a fractional Brownian motion. Computers & Mathematics with Applications, 62, Iss. 2, (2011), 746–754.
[10] B. F. Bylov, R. E. Vinograd, D. M. Grobman, V. V. Nemytskii. Theory of Lyapunov Exponents. Nauka, Moscow, (1966), in Russian.
[11] T. Cass, C. Litterer, T. Lyons. Integrability and tail estimates for Gaussian rough differential equations. Annals of Probability, 41(4), (2013), 3026–3050.
[12] T. Caraballo, M. J. Garrido-Atienza, B. Schmalfuß, J. Valero. Asymptotic behaviour of a stochastic semilinear dissipative functional equation without uniqueness of solutions. Dis. and Cont. Dyn. Sys. Series B, 14, (2010), 439–455.
[13] T. Caraballo, S. Keraani. Analysis of a stochastic SIR model with fractional Brownian motion. Stochastic Analysis and Applications, 36(5), (2018), 1–14.
[14] C. Castaing, M. Valadier. Convex Analysis and Measurable Multifunctions.
Tiêu đề: Stochastic Analysis and Applications", 36(5), (2018), 1–14,[14] C. Castaing, M. Valadier
Tác giả: T. Caraballo, S. Keraani Analysis of a stochastic SIR model with fractional Brownian motion. Stochastic Analysis and Applications, 36(5)
Năm: 2018
[15] D. N. Cheban, Uniform exponential stability of linear almost periodic systems in Banach spaces. Electronic Journal of Differential Equations, 2000, No. 29, (2000), 1–18 Sách, tạp chí
Tiêu đề: Electronic Journal of Differential Equations
Tác giả: D. N. Cheban, Uniform exponential stability of linear almost periodic systems in Banach spaces. Electronic Journal of Differential Equations, 2000, No. 29
Năm: 2000
[16] Y. Chen, H. Gao, M. J. Garrido-Atienza, B. Schmalfuò. Pathwise solutions of SPDEs driven by Ho¨ lder-continuous integrators with exponent larger than 1/2 and random dynamical systems. Discrete Contin. Dyn. Syst., 34(1), (2014), 79–98 Sách, tạp chí
Tiêu đề: Discrete Contin. Dyn. Syst
Tác giả: Y. Chen, H. Gao, M. J. Garrido-Atienza, B. Schmalfuò. Pathwise solutions of SPDEs driven by Ho¨ lder-continuous integrators with exponent larger than 1/2 and random dynamical systems. Discrete Contin. Dyn. Syst., 34(1)
Năm: 2014
[17] L.Coutin. An introduction to (stochastic) calculus with respect to fractional Brownian motion. Se´minaire de Probabilite´s XL. Lecture Notes in Mathematics, vol 1899. Springer, Berlin, Heidelberg, (2007) Sách, tạp chí
Tiêu đề: Se´minaire de Probabilite´s XL. Lecture Notes in Mathematics,vol 1899. Springer, Berlin, Heidelberg
[18] F. Comte and E. Renault. Long memory continuous time models. J. Econo- metrics, 73(1), (1996), 101–149 Sách, tạp chí
Tiêu đề: J. Econo-metrics
Tác giả: F. Comte and E. Renault. Long memory continuous time models. J. Econo- metrics, 73(1)
Năm: 1996
[19] N. D. Cong. Lyapunov spectrum of nonautonomous linear stochastic dif- ferential equations. Stoch. Dyn., 1, No. 1, (2001), 1–31 Sách, tạp chí
Tiêu đề: Stoch. Dyn
Tác giả: N. D. Cong. Lyapunov spectrum of nonautonomous linear stochastic dif- ferential equations. Stoch. Dyn., 1, No. 1
Năm: 2001
[20] N. D. Cong. Almost all nonautonomous linear stochastic differential equa- tions are regular. Stoch. Dyn., 4, No. 3, (2004), 351–371 Sách, tạp chí
Tiêu đề: Stoch. Dyn
Tác giả: N. D. Cong. Almost all nonautonomous linear stochastic differential equa- tions are regular. Stoch. Dyn., 4, No. 3
Năm: 2004
[22] N. D. Cong, L. H. Duc, P. T. Hong. Pullback attractors for stochastic Young differential delay equations. J. Dyn. Diff. Equat., to appear Sách, tạp chí
Tiêu đề: J. Dyn. Diff. Equat
[23] N. D. Cong, L. H. Duc, P. T. Hong. Lyapunov spectrum of nonautonomous linear Young differential equations. J. Dyn. Diff. Equat, (2020), 1749–1777 Sách, tạp chí
Tiêu đề: J. Dyn. Diff. Equat
Tác giả: N. D. Cong, L. H. Duc, P. T. Hong. Lyapunov spectrum of nonautonomous linear Young differential equations. J. Dyn. Diff. Equat
Năm: 2020
[24] H. Crauel, A. Debussche, F. Flandoli. Random attractors J. Dyn. Diff. Equat.9, (1997), 307–341 Sách, tạp chí
Tiêu đề: J. Dyn. Diff. Equat
Tác giả: H. Crauel, A. Debussche, F. Flandoli. Random attractors J. Dyn. Diff. Equat.9
Năm: 1997
[25] H. Crauel, F. Flandoli Attractors for random dynamical systems. Probab. Theory Relat. Fields 100, 365–393, (1994) Sách, tạp chí
Tiêu đề: Probab. TheoryRelat. Fields

TỪ KHÓA LIÊN QUAN

TÀI LIỆU CÙNG NGƯỜI DÙNG

TÀI LIỆU LIÊN QUAN

w