
Dynamic Markov Bridges and Market Microstructure: Theory and Applications (2018)


DOCUMENT INFORMATION

Basic information

Title: Dynamic Markov Bridges and Market Microstructure: Theory and Applications
Authors: Umut Çetin, Albina Danilova
Editors: Peter W. Glynn, Andreas E. Kyprianou, Yves Le Jan (Editors-in-Chief)
Institution: London School of Economics
Type: monograph
Year of publication: 2018
City: London
Format
Number of pages: 239
File size: 2.1 MB

Structure

  • Preface

  • Contents

  • Frequently Used Notation

  • Part I Theory

    • 1 Markov Processes

      • 1.1 Markov Property

      • 1.2 Transition Functions

      • 1.3 Measures Induced by a Markov Process

      • 1.4 Feller Processes

        • 1.4.1 Potential Operators

        • 1.4.2 Definition and Continuity Properties

        • 1.4.3 Strong Markov Property and Right Continuity of Fields

      • 1.5 Notes

    • 2 Stochastic Differential Equations and Martingale Problems

      • 2.1 Infinitesimal Generators

      • 2.2 Local Martingale Problem

      • 2.3 Stochastic Differential Equations

        • 2.3.1 Local Martingale Problem Connection

        • 2.3.2 Existence and Uniqueness of Solutions

        • 2.3.3 The One-Dimensional Case

      • 2.4 Notes

    • 3 Stochastic Filtering

      • 3.1 General Equations for the Filtering of Markov Processes

      • 3.2 Kushner–Stratonovich Equation: Existence and Uniqueness

      • 3.3 Notes

    • 4 Static Markov Bridges and Enlargement of Filtrations

      • 4.1 Static Markov Bridges

        • 4.1.1 Weak Conditioning

        • 4.1.2 Strong Conditioning

      • 4.2 Connection with the Initial Enlargement of Filtrations

      • 4.3 Notes

    • 5 Dynamic Bridges

      • 5.1 Dynamic Markov Bridges

        • 5.1.1 Gaussian Case

        • 5.1.2 The General Case

          • 5.1.2.1 Existence of a Strong Solution on the Time Interval [0,1) and the Bridge Property

          • 5.1.2.2 Conditional Distribution of Z

          • 5.1.2.3 Ornstein–Uhlenbeck Bridges

      • 5.2 Dynamic Bessel Bridge of Dimension 3

        • 5.2.1 Main Result

        • 5.2.2 Proof of the Main Result

        • 5.2.3 Dynamic Bridge in Its Natural Filtration

        • 5.2.4 Local Martingale Problems and Some L2 Estimates

      • 5.3 Notes

  • Part II Applications

    • 6 Financial Markets with Informational Asymmetries and Equilibrium

      • 6.1 Model Setup

      • 6.2 On Insider's Optimal Strategy

      • 6.3 Notes

    • 7 Kyle–Back Model with Dynamic Information: No Default Case

      • 7.1 Existence of Equilibrium

      • 7.2 On the Uniqueness of Equilibrium

      • 7.3 Notes

    • 8 Kyle–Back Model with Default and Dynamic Information

      • 8.1 On Insider's Optimal Strategy

      • 8.2 Existence of Equilibrium

      • 8.3 Comparison of Dynamic and Static Private Information

      • 8.4 Notes

  • Appendix A

    • A.1 Dynkin's π-λ Theorem

    • A.2 Weak Convergence of Measures

    • A.3 Optional Times

    • A.4 Regular Conditional Probability

    • A.5 Brief Review of Martingale Theory

  • References

  • Index

Content

Markov Property

Markov processes describe the evolution of random phenomena in which the future depends only on the current state, not on the past. This section makes the notion of the Markov property precise in a general setting.

Throughout, we work on a probability space \((\Omega, \mathcal{F}, P)\), with \(T = [0,\infty)\) and \(E\) a locally compact separable metric space whose Borel σ-field is denoted by \(\mathcal{E}\). A common instance of \(E\), frequently encountered in applications, is the d-dimensional Euclidean space \(\mathbb{R}^d\).

For each \(t \in T\), let \(X_t(\omega) = X(t,\omega)\) be a function from \(\Omega\) to \(E\) that is \(\mathcal{F}/\mathcal{E}\)-measurable, i.e. \(X_t^{-1}(A) \in \mathcal{F}\) for every \(A \in \mathcal{E}\). Note that when \(E = \mathbb{R}\) this corresponds to the familiar case of \(X_t\) being a real random variable for every \(t \in T\). Under this setup \(X = (X_t)_{t \in T}\) is called a stochastic process.

We will now define two σ-algebras that are crucial in defining the Markov property. Let

\(\mathcal{F}^0_t = \sigma(X_s;\ s \le t), \qquad \mathcal{F}'_t = \sigma(X_s;\ s \ge t).\)

The σ-algebra \(\mathcal{F}^0_t\) records the history of the process \(X\) up to time \(t\), while \(\mathcal{F}'_t\) records its future development from time \(t\) onwards. The family \((\mathcal{F}^0_t)_{t \in T}\) is increasing, i.e. it is a filtration. We also set \(\mathcal{F}^0 = \sigma(\mathcal{F}^0_t, \mathcal{F}'_t)\), the σ-algebra generated by the past and the future of the process.

Whether a process has the Markov property depends on the chosen filtration: a process may be Markov in its natural filtration but not in a larger one. To define the Markov property for a process \(X\) we therefore fix a filtration \((\mathcal{F}_t)\) on the probability space with \(\mathcal{F}^0_t \subseteq \mathcal{F}_t\) for all \(t \in T\), i.e. \(X\) is adapted to \((\mathcal{F}_t)\).


Definition 1.1 \((X_t, \mathcal{F}_t)_{t \in T}\) is a Markov process if

\(P(B \mid \mathcal{F}_t) = P(B \mid X_t), \qquad \forall t \in T,\ B \in \mathcal{F}'_t.\)   (1.1)

The Markov property says that, at any time \(t\), the future behaviour of the process \(X\) depends on its past, represented by the σ-algebra \(\mathcal{F}_t\), only through the current state \(X_t\). The following theorem gives two alternative and useful formulations of the Markov property.

Theorem 1.1 For \((X_t, \mathcal{F}_t)_{t \in T}\) the condition (1.1) is equivalent to either of the following:

i) \(\forall t \in T\), \(B \in \mathcal{F}_t\) and \(A \in \mathcal{F}'_t\): \(P(A \cap B \mid X_t) = P(A \mid X_t)\, P(B \mid X_t)\);

ii) \(\forall t \in T\) and \(A \in \mathcal{F}_t\): \(P(A \mid \mathcal{F}'_t) = P(A \mid X_t)\).

Proof First we show the implication (1.1) ⇒ i). Let \(t \in T\), \(B \in \mathcal{F}_t\) and \(A \in \mathcal{F}'_t\). Then, by the tower property and (1.1),

\(P(A \cap B \mid X_t) = E\big[E[1_A 1_B \mid \mathcal{F}_t] \mid X_t\big] = E\big[1_B\, P(A \mid X_t) \mid X_t\big] = P(A \mid X_t)\, P(B \mid X_t).\)

Next we show i) ⇒ ii). Let \(t \in T\), \(A \in \mathcal{F}_t\) and fix an arbitrary \(B \in \mathcal{F}'_t\). Then, by i) and the tower property,

\(E[1_B 1_A] = E\big[P(B \mid X_t)\, P(A \mid X_t)\big] = E\big[1_B\, P(A \mid X_t)\big].\)

Since \(P(A \mid X_t)\) is \(\mathcal{F}'_t\)-measurable and \(B \in \mathcal{F}'_t\) is arbitrary, ii) follows.

To show ii) ⇒ (1.1), consider \(t \in T\), \(A \in \mathcal{F}_t\) and \(B \in \mathcal{F}'_t\), and observe that

\(E[1_A 1_B] = E\big[1_B\, P(A \mid X_t)\big] = E\big[P(A \mid X_t)\, P(B \mid X_t)\big] = E\big[1_A\, P(B \mid X_t)\big],\)

using ii) in the first equality and the tower property (conditioning on \(X_t\)) in the other two. Since \(P(B \mid X_t)\) is \(\mathcal{F}_t\)-measurable and \(A \in \mathcal{F}_t\) is arbitrary, (1.1) follows.

We record further equivalent formulations of the Markov property in the following proposition; the proof is an application of the Monotone Class Theorem and is left to the reader. From now on, for a σ-algebra \(\mathcal{G}\), we denote by \(b\mathcal{G}\) the set of bounded \(\mathcal{G}\)-measurable functions.

Proposition 1.1 For \((X_t, \mathcal{F}_t)_{t \in T}\) the condition (1.1) is equivalent to either of the following:

i) \(\forall Y \in b\mathcal{F}'_t\): \(E[Y \mid \mathcal{F}_t] = E[Y \mid X_t]\);

ii) \(\forall u \ge t\) and \(f \in C_K(E)\): \(E[f(X_u) \mid \mathcal{F}_t] = E[f(X_u) \mid X_t]\), where \(C_K(E)\) is the set of continuous functions on \(E\) with compact support.

Transition Functions

The Markov property (1.1) allows us to define, for any \(s < t\), a mapping \(P_{s,t}: E \times \mathcal{E} \to [0,1]\) such that

\(P(X_t \in A \mid X_s) = P_{s,t}(X_s, A),\)

and \(P_{s,t}(x, \cdot)\) is a probability measure on \((E, \mathcal{E})\) for each \(x \in E\). Moreover, whenever \(s < t < u\) we have, due to (1.1) and the tower property of conditional expectations,

\(P_{s,u}(X_s, A) = \int_E P_{s,t}(X_s, dy)\, P_{t,u}(y, A).\)

The existence of a family of such mappings is the defining characterisation of Markov processes. In order to make this statement precise we need the following definition.

Definition 1.2 The collection \(\{P_{s,t}(\cdot,\cdot);\ 0 \le s < t < \infty\}\) is a Markov transition function on \((E, \mathcal{E})\) if for all \(s < t < u\) we have:

i) \(\forall x \in E\): \(A \mapsto P_{s,t}(x, A)\) is a probability measure on \(\mathcal{E}\);

ii) \(\forall A \in \mathcal{E}\): \(x \mapsto P_{s,t}(x, A)\) is \(\mathcal{E}\)-measurable;

iii) \(\forall x \in E\), \(\forall A \in \mathcal{E}\) the following Chapman–Kolmogorov equation is satisfied:

\(P_{s,u}(x, A) = \int_E P_{s,t}(x, dy)\, P_{t,u}(y, A).\)

The Chapman–Kolmogorov equation is an expression of the Markov property: in travelling from the point \(x\) at time \(s\) to the set \(A\) at time \(u\), the process passes through some intermediate position at time \(t\), and, given that position, the part of the journey up to time \(t\) has no influence on the part after \(t\). This conditional independence, highlighted by the Markov property, is precisely what the Chapman–Kolmogorov equation conveys.

Example 1.1 (Brownian Motion) \(E = \mathbb{R}\) and \(\mathcal{E}\) is the Borel σ-algebra on \(\mathbb{R}\). For real \(x\) and \(y\) and \(t > s \ge 0\) put

\(p_{s,t}(x, y) = \frac{1}{\sqrt{2\pi(t-s)}} \exp\!\left(-\frac{(y-x)^2}{2(t-s)}\right), \qquad P_{s,t}(x, A) = \int_A p_{s,t}(x, y)\, dy.\)

The collection \(P_{s,t}\) is a transition function, and \(p_{s,t}(x, y)\) is called its transition density. Note that this transition function is time homogeneous: for fixed \(x\) and \(A\), \(P_{s,t}(x, A)\) depends only on the difference \(t - s\), so we may write \(P_t\) and \(p_t\). Also note that in this case spatial homogeneity holds, too; namely, \(p_t(x, y)\) is a function of \(x - y\) only.
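A quick numerical sanity check of the Chapman–Kolmogorov equation for this Gaussian transition density can be written in a few lines. The sketch below is not from the book; the times and points are arbitrary, and the spatial integral is approximated by quadrature on a truncated grid.

```python
import numpy as np

def p(t, x, y):
    """Gaussian transition density of Brownian motion over a time increment t."""
    return np.exp(-(y - x) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

# Chapman-Kolmogorov: p_{s,u}(x, y) = int p_{s,t}(x, z) p_{t,u}(z, y) dz,
# checked here with s = 0, t = 1, u = 3 (time increments 1 and 2).
x, y = 0.3, -1.1
z = np.linspace(-15, 15, 20001)
lhs = p(3.0, x, y)
rhs = np.trapz(p(1.0, x, z) * p(2.0, z, y), z)
print(lhs, rhs)   # the two numbers agree to several decimal places
```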

For \(X\) a Markov process with conditional distributions given by \(P_{s,t}\) and initial distribution \(\mu\), we can write, for any \(f \in b\mathcal{E}^n\) and \(0 \le t_1 < \cdots < t_n\),

\(E[f(X_{t_1}, \ldots, X_{t_n})] = \int_E \mu(dx_0) \int_E P_{0,t_1}(x_0, dx_1) \cdots \int_E P_{t_{n-1},t_n}(x_{n-1}, dx_n)\, f(x_1, \ldots, x_n).\)   (1.2)

In particular, if \(f\) is the indicator function of \(A_1 \times \cdots \times A_n\), the above gives the finite-dimensional joint distributions of the process.

Conversely, if an initial distribution \(\mu\) and a transition function \((P_{s,t})\) are given, one can construct a stochastic process on the canonical space of all functions from \(T\) to \(E\) whose joint distributions coincide with the right-hand side of (1.2); this follows from Kolmogorov's extension theorem. Therefore, defining a Markov process through (1.1) is essentially equivalent to specifying a transition function.

Up to now the transition function \(P_{s,t}\) has been assumed to be a strict probability kernel, namely \(P_{s,t}(x, E) = 1\) for every \(x \in E\) and \(s, t \in T\). We will relax this by allowing

\(P_{s,t}(x, E) \le 1.\)

Such submarkovian transition functions arise naturally in the study of Markov processes that are killed upon reaching a certain boundary. When equality holds, the transition function is called (strictly) Markovian. We can always transform the submarkovian case into a Markovian one by adjoining to \(E\) a new point \(\Delta \notin E\).

If \(E\) is not compact, \(E_\Delta := E \cup \{\Delta\}\) is its one-point compactification and \(\Delta\) plays the role of the point at infinity; if \(E\) is already compact, \(\Delta\) is adjoined as an isolated point. We then define a new transition function, still denoted \(P_{s,t}\), for \(s < t\) and \(A \in \mathcal{E}\), by keeping \(P_{s,t}(x, A)\) for \(x \in E\) and setting

\(P_{s,t}(x, \{\Delta\}) = 1 - P_{s,t}(x, E), \qquad P_{s,t}(\Delta, \{\Delta\}) = 1.\)

The resulting transition function is Markovian, and \(\Delta\) acts as an absorbing state, or trap. After a minor modification of the probability space we may, and do, work under this assumption.

Next we define the function \(\zeta: \Omega \to [0,\infty]\) by

\(\zeta(\omega) = \inf\{t \in T : X_t(\omega) = \Delta\},\)   (1.3)

where \(\inf \emptyset = \infty\) by convention. Thus \(\zeta(\omega) = \infty\) if and only if \(X_t(\omega) \in E\) for all \(t \in T\). The random variable \(\zeta\) is called the lifetime of \(X\).

Observe that so far we have not defined \(P_{t,t}\). There are interesting cases in which \(P_{t,t}(x, \cdot)\) is not the identity operator; such an \(x\) is called a branching point. However, we will not consider this case and maintain the assumption that \(P_{t,t}(x, \cdot)\) is the point mass at \(x\) for every \(x\).

A particular case of Markov processes occurs when the transition function is time homogeneous, i.e. for any \(x \in E_\Delta\), \(A \in \mathcal{E}_\Delta\) and \(s \le t\),

\(P_{s,t}(x, A) = P_{t-s}(x, A)\)

for some collection \(P_t(\cdot,\cdot)\). If a transition function \(P_{s,t}\) is given, one can always construct a time-homogeneous Markov process, namely the space-time process \((t, X_t)_{t \in T}\), whose transition function is time homogeneous. This construction is straightforward and can be explored further by the reader.

For the sake of brevity, in the rest of this section we will consider only time-homogeneous Markov processes and their transition functions of the form \(P_t(\cdot,\cdot)\). However, all the results, unless explicitly stated otherwise, apply to time-inhomogeneous Markov processes as well.

For time-homogeneous Markov processes the Chapman–Kolmogorov equation translates into

\(P_{t+s} = P_t P_s;\)

thus the family \((P_t)\) forms a semigroup.

Measures Induced by a Markov Process

Given a Markov process \(X\) with initial distribution \(\mu = \varepsilon_x\) (the point mass at \(x\)), we denote by \(P_x\) the probability measure induced by \(X\) on \(\mathcal{F}^0\). This measure is the law of \(X\) when \(X_0 = x\), and the corresponding expectation operator is denoted \(E_x\), or \(P_x[\,\cdot\,]\), so that \(P_x[Y]\) is defined for every \(Y \in b\mathcal{F}^0\).

In the case that \(X_0\) is random with distribution \(\mu\), to compute \(P(X_t \in A)\), which coincides with \(\int_{E_\Delta} P_x(X_t \in A)\,\mu(dx)\), or more generally to compute

\(E[Y] = \int_{E_\Delta} P_x[Y]\,\mu(dx), \qquad Y \in b\mathcal{F}^0,\)   (1.4)

it is essential that the function \(x \mapsto P_x[Y]\) be measurable. The next proposition confirms this.

Proposition 1.2 For each \(\Lambda \in \mathcal{F}^0\), the function \(x \mapsto P_x(\Lambda)\) is \(\mathcal{E}_\Delta\)-measurable.

Proof The claim holds when \(\Lambda = X_t^{-1}(A)\) for some \(A \in \mathcal{E}_\Delta\), and the proof is completed by Dynkin's π–λ Theorem A.1.

This allows us to define, for any probability measure \(\mu\) on \(E_\Delta\), a new measure

\(P_\mu(\Lambda) := \int_{E_\Delta} P_x(\Lambda)\, \mu(dx), \qquad \Lambda \in \mathcal{F}^0.\)

The family of measures \((P_x)\) yields yet another representation of the Markov property (1.1):

\(P(X_{s+t} \in A \mid \mathcal{F}_t) = P_{X_t}(X_s \in A).\)   (1.5)

Next we want to show that (1.5) extends to sets more general than \([X_{s+t} \in A] = X_{s+t}^{-1}(A)\). To do so, suppose there exists a family of shift operators \((\theta_t)_{t \ge 0}\) such that \(\theta_t: \Omega \to \Omega\) for every \(t\) and

\((X_s \circ \theta_t)(\omega) = X_s(\theta_t(\omega)) = X_{s+t}(\omega).\)   (1.6)

The mapping \(\theta_t\) thus discards the part of the path of \(X\) prior to time \(t\). A canonical shift operator exists when \(\Omega\) is the space of all functions from \(T\) to \(E_\Delta\), as in the construction of the Markov process via Kolmogorov's extension theorem: in that case \(\theta_t(\omega) = X(t + \cdot, \omega)\), where \(X\) is the coordinate process. The same is true when \(\Omega\) is the space of all right-continuous, or of all continuous, functions. Consequently, we will assume the existence of a shift operator on our space and freely use the consequences of (1.6).

With this notation, the extension of (1.5) to more general sets is achieved by an application of Dynkin's π–λ Theorem A.1, which yields the following proposition.

Proposition 1.3 For all \(\Lambda \in \mathcal{F}^0\),

\(P\big(\theta_t^{-1}\Lambda \mid \mathcal{F}_t\big) = P_{X_t}(\Lambda).\)

More generally, for all \(Y \in b\mathcal{F}^0\),

\(E[Y \circ \theta_t \mid \mathcal{F}_t] = E_{X_t}[Y].\)

Note that the conclusions of Proposition 1.3 remain valid when \(P\) (resp. \(E\)) is replaced by \(P_\mu\) (resp. \(E_\mu\)). Also observe that \(P_\mu\), and in particular \(P_x\), is defined only on the σ-field \(\mathcal{F}^0\), unlike \(P\), which is defined on \(\mathcal{F}\). We will later extend \(P_\mu\) to a larger σ-algebra by completion.

Before proceeding further we give some examples of Markov processes which play an important role in applications.

Example 1.2 (Markov Chain) \(E\) = any countable set, e.g. the set of integers, and \(\mathcal{E}\) = the set of all subsets of \(E\). If we write \(p_{ij}(t) = P(X_t = j \mid X_0 = i)\) for \(i \in E\) and \(j \in E\), then we can define, for any \(A \subseteq E\),

\(P_t(i, A) = \sum_{j \in A} p_{ij}(t).\)

The conditions for \(P\) being a transition function are then easily seen to be satisfied by \(P_t\).
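For a concrete finite-state illustration of this example, one may generate the transition probabilities from a rate (Q-) matrix via the matrix exponential. The sketch below is not from the book and uses hypothetical rates; it checks that each row of \(P_t\) is a probability measure and that the semigroup (Chapman–Kolmogorov) property holds.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator (Q-matrix) of a two-state chain; rows sum to zero.
Q = np.array([[-2.0, 2.0],
              [ 1.0, -1.0]])

def P(t):
    """Transition matrix with entries p_ij(t) = P(X_t = j | X_0 = i), given by exp(tQ)."""
    return expm(t * Q)

s, t = 0.4, 1.3
print(P(s + t))          # semigroup / Chapman-Kolmogorov: P(s + t) = P(s) P(t)
print(P(s) @ P(t))
print(P(t).sum(axis=1))  # each row is a probability measure: rows sum to 1
```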

Example 1.3 (Poisson Process) \(E = \mathbb{N}\). For \(n \in E\) and \(m \in E\),

\(P_t(n, \{m\}) = \begin{cases} 0 & \text{if } m < n, \\ e^{-\lambda t}\, \dfrac{(\lambda t)^{m-n}}{(m-n)!} & \text{if } m \ge n, \end{cases}\)

so that we can define a valid transition function on \((E, \mathcal{E})\), where \(\mathcal{E}\) is the set of all subsets of \(E\). Note the spatial homogeneity, as in Example 1.1.
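A small simulation (illustrative parameters only, not from the book) can be used to compare the empirical distribution of \(X_t\) with the transition probabilities above.

```python
import numpy as np
from math import factorial, exp

rng = np.random.default_rng(0)
lam, t, n0 = 2.0, 1.5, 3   # arbitrary rate, horizon and starting state

# The increment X_t - X_0 over [0, t] is Poisson(lam * t), independently of n0
# (spatial homogeneity).
samples = n0 + rng.poisson(lam * t, size=200_000)

for m in range(n0, n0 + 6):
    k = m - n0
    theoretical = exp(-lam * t) * (lam * t) ** k / factorial(k)
    print(m, np.mean(samples == m), theoretical)   # empirical frequency vs P_t(n0, {m})
```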

Example 1.4 (Brownian Motion Killed at 0) Let \(E = (0,\infty)\), and define for \(x > 0\) and \(y > 0\)

\(q_t(x, y) = p_t(x, y) - p_t(x, -y),\)

where \(p_t\) is as defined in Example 1.1. For \(A \in \mathcal{E}\) let

\(Q_t(x, A) = \int_A q_t(x, y)\, dy.\)

Then \(Q_t\) is a submarkovian transition function; indeed \(Q_t(x, E) < 1\) for \(t > 0\). It is the transition function of a Brownian motion started at a strictly positive value and killed when it reaches zero. More precisely, \(Q_t(x, E) = P_x(T_0 > t)\), where \(P_x\) is the law of a standard Brownian motion starting at \(x > 0\) and \(T_0\) is its first hitting time of zero. Being submarkovian, killed Brownian motion has a finite lifetime \(\zeta = \inf\{t > 0 : X_t = 0\}\), while the probability of reaching infinity in finite time is zero. Note that killed Brownian motion is not spatially homogeneous.
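The identity \(Q_t(x, E) = P_x(T_0 > t)\) can be checked numerically. The sketch below is not from the book: it integrates \(q_t(x, \cdot)\) over \((0,\infty)\) by quadrature and compares the result with the reflection-principle formula \(P_x(T_0 > t) = \mathrm{erf}\big(x/\sqrt{2t}\big)\).

```python
import numpy as np
from math import erf, sqrt

def p(t, x, y):
    return np.exp(-(y - x) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def q(t, x, y):
    # transition density of Brownian motion killed at 0 (method of images)
    return p(t, x, y) - p(t, x, -y)

t, x = 0.7, 1.2
y = np.linspace(1e-6, 30, 200001)
mass = np.trapz(q(t, x, y), y)        # Q_t(x, E) < 1: the kernel is submarkovian
survival = erf(x / sqrt(2 * t))       # P_x(T_0 > t) by the reflection principle
print(mass, survival)                 # the two values agree
```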

Example 1.5 (Three-Dimensional Bessel Process) \(E = (0,\infty)\). Note that the transition density \(q_t\) defined in Example 1.4 is symmetric: \(q_t(x, y) = q_t(y, x)\) for any \(x > 0\) and \(y > 0\).

Now define

\(p^{(3)}_t(x, y) = \frac{1}{x}\, q_t(x, y)\, y,\)

and let

\(P^{(3)}_t(x, A) = \int_A p^{(3)}_t(x, y)\, dy.\)

One can check that \(P^{(3)}_t\) is a Markovian transition function, i.e. \(P^{(3)}_t(x, E) = 1\) for all \(t > 0\) and \(x \in E\). This is the transition function of the three-dimensional Bessel process.
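That \(P^{(3)}_t(x, E) = 1\) can likewise be verified numerically; the following self-contained sketch (not from the book) integrates \(p^{(3)}_t(x, \cdot)\) over \((0,\infty)\).

```python
import numpy as np

def p(t, x, y):
    return np.exp(-(y - x) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def p3(t, x, y):
    # h-transform of the killed density with h(x) = x:  p3 = (1/x) * q_t(x, y) * y
    return (y / x) * (p(t, x, y) - p(t, x, -y))

t, x = 0.5, 0.8
y = np.linspace(1e-8, 40, 400001)
print(np.trapz(p3(t, x, y), y))   # approximately 1: the transition function is Markovian
```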

Feller Processes

Potential Operators

In this section we consider a time-homogeneous Markov process \((X_t, \mathcal{F}_t)\) with transition function \((P_t)\) and look for a class of functions on \(E\) such that \((f(X_t), \mathcal{F}_t)\) is a supermartingale.

We will write \(P_t f(x)\) for \(E_x[f(X_t)]\) for any \(f \in b\mathcal{E}\). That is,

\(P_t f(x) = \int_E P_t(x, dy)\, f(y).\)

Since \(P_t\) is a transition function, \(P_t f \in b\mathcal{E}\). Similarly, \(P_t f \in \mathcal{E}_+\) for every \(f \in \mathcal{E}_+\), where \(\mathcal{E}_+\) is the class of positive (extended-valued) \(\mathcal{E}\)-measurable functions.

Definition 1.3 Let \(f \in \mathcal{E}_+\) and \(\alpha \ge 0\). Then \(f\) is α-superaveraging relative to \((P_t)\) if

\(f \ge e^{-\alpha t} P_t f \quad \text{for every } t \ge 0.\)   (1.8)

If in addition we have

\(f = \lim_{t \downarrow 0} e^{-\alpha t} P_t f,\)   (1.9)

we say \(f\) is α-excessive.

Note that if we apply the operator \(e^{-\alpha s} P_s\) to both sides of (1.8), we obtain

\(e^{-\alpha s} P_s f \ge e^{-\alpha(t+s)} P_s P_t f = e^{-\alpha(t+s)} P_{t+s} f;\)

thus \(e^{-\alpha t} P_t f\) is decreasing in \(t\), so the limit in (1.9) exists.

Proposition 1.4 If \(f\) is α-superaveraging and \(f(X_t)\) is integrable for each \(t \in T\), then \((e^{-\alpha t} f(X_t), \mathcal{F}_t)\) is a \(P_x\)-supermartingale for every \(x \in E_\Delta\).

Proof For \(s \le t\),

\(f(X_s) \ge e^{-\alpha t} P_t f(X_s) = e^{-\alpha t} E_{X_s}[f(X_t)] = e^{-\alpha t} E_x[f(X_{t+s}) \mid \mathcal{F}_s],\)

so that

\(e^{-\alpha s} f(X_s) \ge e^{-\alpha(s+t)} E_x[f(X_{t+s}) \mid \mathcal{F}_s],\)

which establishes the proposition.

We will next consider an important class of superaveraging functions.

Definition 1.4 We say that the transition function is Borelian if for any \(A \in \mathcal{E}\) the map \((t, x) \mapsto P_t(x, A)\) is \(\mathcal{B} \times \mathcal{E}\)-measurable, where \(\mathcal{B}\) is the Borel σ-field on \([0,\infty)\).

The above measurability condition is equivalent to the following:

\(\forall f \in b\mathcal{E}\): \((t, x) \mapsto P_t f(x)\) is \(\mathcal{B} \times \mathcal{E}\)-measurable.

Thus, if \((X_t)\) is right continuous, then for continuous bounded \(f\) the map \(t \mapsto P_t f(x)\) is right continuous and \((t, x) \mapsto P_t f(x)\) is \(\mathcal{B} \times \mathcal{E}\)-measurable. Consequently, since we focus on right-continuous processes in this section, we will state our results under the assumption that \((P_t)\) is Borelian.

Definition 1.5 Let \(f \in b\mathcal{E}\), \(\alpha > 0\) and \((P_t)\) be Borelian. Then the α-potential of \(f\) is the function given by

\(U_\alpha f(x) = \int_0^\infty e^{-\alpha t} P_t f(x)\, dt = E_x\!\left[\int_0^\infty e^{-\alpha t} f(X_t)\, dt\right].\)

Note that the first integral in the definition is well defined due to the Borelian property of \((P_t)\); the second equality follows from Fubini's theorem. Consequently, \(U_\alpha f \in b\mathcal{E}\).

With respect to the sup norm on bounded functions, \(U_\alpha\) is a bounded operator with operator norm at most \(1/\alpha\). The collection of operators \(\{U_\alpha,\ \alpha > 0\}\) is called the resolvent of the semigroup \((P_t)\).
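For Brownian motion and the test function \(f(y) = e^{-y^2}\) (an arbitrary choice for which the Gaussian convolution \(P_t f\) has a closed form), the α-potential can be approximated by one-dimensional quadrature. The sketch below is not from the book; it checks the bound \(\|U_\alpha f\| \le \|f\|/\alpha\) and illustrates that \(\alpha U_\alpha f \to f\) as \(\alpha \to \infty\), anticipating Lemma 1.1 below.

```python
import numpy as np

# For Brownian motion and f(y) = exp(-y^2), Gaussian convolution gives
# P_t f(x) = (1 + 2t)^(-1/2) * exp(-x^2 / (1 + 2t)).
f = lambda x: np.exp(-x ** 2)
P_t_f = lambda t, x: np.exp(-x ** 2 / (1 + 2 * t)) / np.sqrt(1 + 2 * t)

def U_alpha_f(alpha, x):
    """alpha-potential U_alpha f(x) = int_0^inf e^{-alpha t} P_t f(x) dt, truncated quadrature."""
    t = np.linspace(0.0, 60.0 / alpha, 200_001)   # e^{-alpha t} is negligible beyond the cutoff
    return np.trapz(np.exp(-alpha * t) * P_t_f(t, x), t)

x = 0.5
for alpha in (1.0, 10.0, 100.0):
    ua = U_alpha_f(alpha, x)
    print(alpha, ua <= 1.0 / alpha, abs(alpha * ua - f(x)))
# ||U_alpha f|| <= ||f|| / alpha, and alpha * U_alpha f(x) -> f(x) as alpha grows
```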

Proposition 1.5 Suppose \((P_t)\) is Borelian. If \(f \in b\mathcal{E}_+\), then \(U_\alpha f\) is α-excessive.

Proof

\(e^{-\alpha t} P_t (U_\alpha f) = \int_0^\infty e^{-\alpha(t+s)} P_{t+s} f\, ds = \int_t^\infty e^{-\alpha s} P_s f\, ds,\)

which is less than or equal to

\(\int_0^\infty e^{-\alpha s} P_s f\, ds = U_\alpha f,\)

and converges to \(U_\alpha f\) as \(t\) converges to 0.

Proposition 1.6 Suppose \((X_t)\) is progressively measurable and \((P_t)\) is Borelian. For \(f \in b\mathcal{E}_+\) and \(\alpha > 0\), define

\(Y_t = \int_0^t e^{-\alpha s} f(X_s)\, ds + e^{-\alpha t}\, U_\alpha f(X_t).\)

Then \((Y_t, \mathcal{F}_t)\) is a progressively measurable \(P_x\)-martingale.

Proof The first term on the right-hand side is progressively measurable, being continuous and adapted. The second term is also progressively measurable since \(U_\alpha f \in \mathcal{E}\) and \(X\) is progressively measurable. The martingale property follows from the Markov property.

Definition and Continuity Properties

Let \(C\) denote the class of all continuous functions on \(E_\Delta\). Since \(E_\Delta\) is compact, each \(f \in C\) is bounded. Thus we can define the usual sup norm on \(C\) as follows:

\(\|f\| = \sup_{x \in E_\Delta} |f(x)|.\)

Let \(C_0\) denote the space of continuous functions on \(E\) that vanish at infinity, meaning that for any \(\epsilon > 0\) there exists a compact set \(K \subset E\) such that \(|f(x)| < \epsilon\) for all \(x \in K^c\). This space can be viewed as a subclass of \(C\) by setting \(f(\Delta) = 0\) for any \(f \in C_0\). Additionally, let \(C_c\) denote the subclass of \(C_0\) consisting of functions with compact support. Both \(C\) and \(C_0\), equipped with the sup norm, are Banach spaces, and \(C_0\) is the completion of \(C_c\).

Definition 1.6 A Markov process \(X\) with a Borelian transition function \((P_t)\) is called a Feller process if \(P_0\) is the identity mapping and:

i) for any \(f \in C\), \(P_t f \in C\) for all \(t \in T\);

ii) for any \(f \in C\),

\(\lim_{t \to 0} \|P_t f - f\| = 0.\)   (1.10)

A semigroup \((P_t)\) satisfying these conditions is called Feller.

Remark 1.2 It is clear that ii) implies:

ii') for any \(f \in C\) and \(x \in E_\Delta\),

\(\lim_{t \to 0} P_t f(x) = f(x).\)   (1.11)

In fact, these conditions are equivalent under i). For a proof of this result, see Proposition III.2.4 in [100].

Remark 1.3 It is common in the literature to state the Feller property for the class \(C_0(E)\). Indeed, we can replace \(C\) in the above conditions with \(C_0(E)\), since \(C_0(E) = C_0\) and each member of \(C\) is the sum of a member of \(C_0\) and a constant.

For the rest of this section we assume that \((P_t)\) is Feller.

Since \(P_t f \in C\), the first term converges to 0 as \(y \to x\). Since \(P\) is Markovian, \(P_t f = P_s P_{t-s} f\); thus

\(|P_t f(y) - P_s f(y)| = |P_s P_{t-s} f(y) - P_s f(y)| \le \|P_s(P_{t-s} f - f)\| \le \|P_{t-s} f - f\|,\)

which converges to 0 as \(s \uparrow t\). Finally, the last term is bounded by \(\|P_s(f - g)\| \le \|f - g\|\), which also converges to 0 as \(g \to f\) in the sup norm.

Our objective is to establish that for every Feller process \(X\) there exists a càdlàg Feller process \(\tilde{X}\) such that \(X_t = \tilde{X}_t\), \(P_x\)-a.s., for all \(t \in T\) and \(x \in E_\Delta\), and the transition functions of \(X\) and \(\tilde{X}\) coincide. The first step is the following regularity result.

We denote by \(d\) the metric on \(E_\Delta\) and by \(m\) the metric on \(E\). Note that convergence in \(m\) implies convergence in \(d\); the reverse implication does not hold in general.

Proposition 1.7 Let \((X_t, \mathcal{F}_t)_{t \in T}\) be a Feller process. Then \((X_t)_{t \in T}\) satisfies Dynkin's stochastic continuity property. Namely, for any \(\varepsilon > 0\) and \(t \in T\):

i) \(\lim_{h \to 0} \sup_{x \in E_\Delta} \sup_{s \in [(t-h) \vee 0,\, t+h]} P_x\big(d(X_s, X_t) > \varepsilon\big) = 0.\)

In particular, \(X\) is stochastically continuous, i.e. \(\lim_{s \to t} P_x(d(X_t, X_s) > \varepsilon) = 0\) for all \(\varepsilon > 0\).

ii) For any compact \(K \subset E\),

\(\lim_{h \to 0} \sup_{x \in K} \sup_{s \in [0,h]} P_x\big(m(X_s, x) > \varepsilon\big) = 0.\)

Proof i) We first show the statement for \(t = 0\). Consider the function \(f: \mathbb{R}_+ \to [0,1]\) defined by \(f(y) = 1 - y/\varepsilon\) for \(y \le \varepsilon\) and \(f(y) = 0\) otherwise, and let

\(\varphi(x, \cdot) = f\big(d(x, \cdot)\big),\)

where \(d\) is the metric of \(E_\Delta\). Note that

\(|\varphi(x, y) - \varphi(x, y')| = |f(d(x, y)) - f(d(x, y'))| \le d(y, y')/\varepsilon,\)

so for every \(x \in E_\Delta\) the function \(\varphi(x, \cdot)\) is continuous on \(E_\Delta\) with compact support. Since \(E_\Delta\) is compact, finitely many open balls \(B_\alpha(x_i)\) with centres \(x_i\) and radius \(\alpha < \varepsilon/2\) cover \(E_\Delta\); hence for any \(x \in E_\Delta\) there exists a \(j\) such that \(x \in B_\alpha(x_j)\).

The conclusion then follows from Theorem 1.2 and the arbitrariness of \(\alpha\).

For \(t > 0\) it suffices to observe that, whenever \(s < t\), the Markov property at time \(s\) reduces \(P_x(d(X_s, X_t) > \varepsilon)\) to the case already treated; a similar observation holds for \(s > t\).

ii) The statement is proved verbatim after substituting \(K\) for \(E_\Delta\) and \(m\) for \(d\).

Remark 1.4 Although in the proof above we established convergence in probability under the measures \(P_x\), \(x \in E_\Delta\), this also implies convergence in probability under \(P\), since for any \(A \in \mathcal{F}^0\) we have \(P(A) = \int_{E_\Delta} P_x(A)\,\mu(dx)\), where \(\mu\) is the distribution of \(X_0\).

To proceed with the path regularity, the next basic result will be useful.

Lemma 1.1 Let \(f \in C\). Then \(U_\alpha f \in C\) and

\(\lim_{\alpha \to \infty} \|\alpha U_\alpha f - f\| = 0.\)

A class of functions defined on \(E_\Delta\) is said to separate points if for any two distinct members \(x, y\) of \(E_\Delta\) there exists a function \(f\) in that class such that \(f(x) \ne f(y)\).

Let \(\{O_n,\ n \in \mathbb{N}\}\) be a countable base of the open sets of \(E_\Delta\) and define

\(\forall x \in E_\Delta: \quad \varphi_n(x) = d(x, \bar{O}_n),\)

where \(d\) is the metric on \(E_\Delta\). Note that \(\varphi_n \in C\).

Proposition 1.8 The countable subset \(D = \{U_k \varphi_n :\ k, n \in \mathbb{N}\}\) of \(C\) separates points.

Proof For any \(x \ne y\) there exists \(O_n\) such that \(x \in \bar{O}_n\) and \(y \notin \bar{O}_n\). Thus \(0 = \varphi_n(x) < \varphi_n(y)\). Since \(\lim_{\alpha \to \infty} \|\alpha U_\alpha \varphi_n - \varphi_n\| = 0\), we can find a large enough integer \(k\) such that \(k U_k \varphi_n(x) \ne k U_k \varphi_n(y)\), i.e. \(U_k \varphi_n(x) \ne U_k \varphi_n(y)\).

The following analytical lemma will help us prove that we can obtain a version of \(X\) with right and left limits.

Lemma 1.2 Let \(D\) be a class of continuous functions from \(E_\Delta\) to \(\mathbb{R}\) which separates points, and let \(h\) be any function from \(\mathbb{R}\) to \(E_\Delta\). Suppose that \(S\) is a dense subset of \(\mathbb{R}\) such that for each \(g \in D\), \((g \circ h)|_S\) has right and left limits in \(\mathbb{R}\). Then \(h|_S\) has right and left limits in \(\mathbb{R}\).

Proposition 1.9 Let \((X_t, \mathcal{F}_t)_{t \in T}\) be a Feller process, and let \(S\) be any countable dense subset of \(T\). Then for \(P\)-a.a. \(\omega\), the sample function \(X(\cdot, \omega)\) restricted to \(S\) has right limits in \([0,\infty)\) and left limits in \((0,\infty)\).

Proof Let \(g\) be a member of the class \(D\) defined in Proposition 1.8, so that \(g = U_k f\) for some nonnegative \(f \in C\) and integer \(k\). By Propositions 1.5 and 1.4, the process \(\{e^{-kt} g(X_t),\ t \in T\}\) is a \(P_x\)-supermartingale for any \(x \in E_\Delta\). By Theorem A.11, this supermartingale, restricted to \(S\), has right and left limits outside a null set that may depend on \(g\). Since \(D\) is countable, we can choose a null set that works for all \(g \in D\) simultaneously. Consequently, outside this null set the function \(t \mapsto g(X(t, \omega))\) restricted to \(S\) has right and left limits for every \(g \in D\), and by Lemma 1.2 the same is true of \(X\) itself.

In view of the above proposition we can define, for all \(t \ge 0\),

\(\tilde{X}_t(\omega) = \lim_{S \ni u \downarrow t} X_u(\omega), \qquad \hat{X}_t(\omega) = \lim_{S \ni u \uparrow t} X_u(\omega),\)   (1.12)

for all \(\omega\) for which these limits exist. The set of \(\omega\) for which the limits fail to exist is a \(P_x\)-null set for every \(x \in E_\Delta\), so \(\tilde{X}\) and \(\hat{X}\) may be defined arbitrarily on that set. Note that \(\tilde{X}\) is càdlàg, i.e. it has right-continuous paths with left limits, while \(\hat{X}\) is càglàd, i.e. it has left-continuous paths with right limits.

Theorem 1.3 Suppose that \((X_t, \mathcal{F}_t)\) is a Feller process with semigroup \((P_t)\). Then \(\tilde{X}\) and \(\hat{X}\), defined in (1.12), are \(P_x\)-modifications of \(X\) for any \(x \in E_\Delta\). Moreover, if \((\mathcal{F}_t)\) is augmented with the \(P\)-null sets, then both \((\tilde{X}_t, \mathcal{F}_t)\) and \((\hat{X}_t, \mathcal{F}_t)\) become Feller processes with semigroup \((P_t)\).

Proof Fix an arbitrary \(x \in E_\Delta\) and \(t \in T\), and consider any sequence \((u_n) \subset S\) such that \(u_n \downarrow t\). Due to the previous proposition the limit \(\lim_{n \to \infty} X_{u_n}\) exists \(P_x\)-a.s., and the stochastic continuity of \(X\) shown in Proposition 1.7 implies that

\(\lim_{n \to \infty} X_{u_n} = X_t, \qquad P_x\text{-a.s.}\)

The left-hand side equals \(\tilde{X}_t\) \(P_x\)-a.s., so \(\tilde{X}\) is a \(P_x\)-modification of \(X\). Since this holds for all \(x \in E_\Delta\) simultaneously, \(\tilde{X}\) is also a \(P\)-modification of \(X\). Consequently, \(\tilde{X}\) is adapted to \((\mathcal{F}_t)\) once this filtration is augmented with the \(P\)-null sets, and for any \(f \in C\),

\(E[f(\tilde{X}_{t+s}) \mid \mathcal{F}_t] = E[f(X_{t+s}) \mid \mathcal{F}_t] = P_s f(X_t) = P_s f(\tilde{X}_t), \qquad P\text{-a.s.}\)

This shows that \((P_t)\) is a transition function for \((\tilde{X}_t, \mathcal{F}_t)\), too. The same arguments apply to \(\hat{X}\).

Strong Markov Property and Right Continuity of Fields

The aim of this section is to prove the strong Markov property of a Feller process, along with the right continuity of its suitably augmented natural filtration. We assume that the Feller process under consideration has right-continuous paths. For background on optional and stopping times we refer to Appendix A.3.

Theorem 1.4 For each optional time \(T\), we have for each \(f \in C\) and \(u > 0\):

\(E\big[f(X_{T+u}) \mid \mathcal{F}_{T+}\big] = P_u f(X_T) \quad \text{on } [T < \infty].\)   (1.13)

Proof Observe that on \([T = \infty]\) the claim holds trivially. Let

\(T_n = \frac{k}{2^n} \quad \text{on } \Big[\frac{k-1}{2^n} \le T < \frac{k}{2^n}\Big], \ k \ge 1, \qquad T_n = \infty \ \text{on } [T = \infty].\)

Then it follows from Lemma A.3 that \((T_n)\) is a sequence of stopping times decreasing to \(T\) and taking values in the dyadic set \(D = \{k 2^{-n} : k \ge 1,\ n \ge 1\}\). Moreover, by Theorem A.5, we have \(\mathcal{F}_{T+} \subset \mathcal{F}_{T_n}\).

Thus, if \(\Lambda \in \mathcal{F}_{T+}\), then \(\Lambda_d := \Lambda \cap [T_n = d] \in \mathcal{F}_d\) for every \(d \in D\). The Markov property applied at \(t = d\) yields

\(\int_{\Lambda_d} f(X_{d+u})\, dP = \int_{\Lambda_d} P_u f(X_d)\, dP.\)

Thus, enumerating the possible values of \(T_n\),

\(\int_{\Lambda \cap [T < \infty]} f(X_{T_n+u})\, dP = \sum_{d \in D} \int_{\Lambda_d} f(X_{d+u})\, dP = \sum_{d \in D} \int_{\Lambda_d} P_u f(X_d)\, dP = \int_{\Lambda \cap [T < \infty]} P_u f(X_{T_n})\, dP.\)

Since \(f\) and \(P_u f\) are bounded and continuous, and \(X\) is right continuous, we obtain as \(n \to \infty\)

\(\int_{\Lambda \cap [T < \infty]} f(X_{T+u})\, dP = \int_{\Lambda \cap [T < \infty]} P_u f(X_T)\, dP.\)

A direct application of Dynkin's π–λ Theorem A.1 to the conclusion of Theorem 1.4 yields the extension of (1.13) from functions of the form \(f(X_{T+u})\) to all bounded \(\mathcal{F}^0\)-measurable functionals of the shifted path.

Definition 1.7 The Markov process \((X_t, \mathcal{F}_t)_{t \in T}\) is said to have the strong Markov property if (1.13) holds for each optional time \(T\).

Theorem 1.4 thus establishes that a Feller process with right-continuous paths has the strong Markov property. Moreover, the theorem shows that, even when viewed after an optional time, the process remains strong Markov with the same transition probabilities.

Corollary 1.1 For each optional time \(T\), the process \((X_{T+t}, \mathcal{F}_{T+t})_{t \in T}\) is a Markov process with \((P_t)\) as transition semigroup. Moreover, it has the strong Markov property.

We will now prove that the appropriately augmented filtration of a strong Markov process is right continuous. Observe that, replacing \(T\) with \(t\) and shrinking \(\mathcal{F}_t\) to \(\mathcal{F}^0_t\), we may rewrite (1.13) as follows:

\(E_\mu\big[f(X_{t+u}) \mid \mathcal{F}^0_{t+}\big] = P_u f(X_t),\)   (1.14)

since on \(\mathcal{F}^0\) the measures \(P\) and \(P_\mu\) coincide. Let us denote the \(P_\mu\)-completion of \(\mathcal{F}^0\) (resp. \(\mathcal{F}^0_t\)) by \(\mathcal{F}^\mu\) (resp. \(\mathcal{F}^\mu_t\)). Note that the following result does not require the Feller property.

Theorem 1.5 Let \((X_t, \mathcal{F}_t)_{t \in T}\) be a strong Markov process. Then the family \((\mathcal{F}^\mu_t)_{t \in T}\) is right continuous.

Proof The conditional expectation in (1.14) is \(\mathcal{F}^\mu_t\)-measurable, since \(\mathcal{F}^\mu_t\) contains all \(P_\mu\)-null sets. Consequently, a monotone class argument shows that for any \(Y \in b\mathcal{F}'_t\),

\(E_\mu\big[Y \mid \mathcal{F}^0_{t+}\big]\) is \(\mathcal{F}^\mu_t\)-measurable.   (1.15)

Let \(\mathcal{C}\) be the class of sets \(A\) satisfying (1.15) with \(Y = 1_A\). One can verify that \(\mathcal{C}\) is a sub-σ-field. By the above, \(\mathcal{F}'_t \subset \mathcal{C}\), and clearly \(\mathcal{F}^\mu_t \subset \mathcal{C}\); hence \(\mathcal{C}\) contains \(\sigma(\mathcal{F}^\mu_t, \mathcal{F}'_t)\), which equals \(\mathcal{F}^\mu\). Since \(\mathcal{F}^\mu_{t+} \subset \mathcal{F}^\mu\), any \(A \in \mathcal{F}^\mu_{t+}\) satisfies (1.15); and since \(E_\mu[1_A \mid \mathcal{F}^0_{t+}] = 1_A\) \(P_\mu\)-a.s. for such \(A\), it follows that \(A \in \mathcal{F}^\mu_t\). Therefore \(\mathcal{F}^\mu_{t+} = \mathcal{F}^\mu_t\), as \(A\) was arbitrary and \(\mathcal{F}^\mu_t \subset \mathcal{F}^\mu_{t+}\) by definition.

The theorem above yields a right-continuous filtration for a fixed initial distribution \(\mu\), and the resulting filtration depends on the chosen \(\mu\). To remove this dependence we introduce the smaller σ-fields

\(\mathcal{F}_t := \bigcap_\mu \mathcal{F}^\mu_t,\)   (1.16)

where \(\mu\) ranges over all finite measures on \(E_\Delta\). By directly computing the intersections we obtain the following.

Corollary 1.2 The family \((\mathcal{F}_t)_{t \in T}\) is right continuous, as is the family \((\mathcal{F}^\mu_t)_{t \in T}\) for each \(\mu\).

We have seen in Proposition 1.2 that for any \(Y \in b\mathcal{F}^0\) the function \(x \mapsto E_x[Y]\) is \(\mathcal{E}_\Delta\)-measurable. Clearly, it would be too much to ask that this still hold when \(Y \in b\mathcal{F}\). In order to obtain the right measurability we need to enlarge the Borel field \(\mathcal{E}_\Delta\) as follows:

\(\mathcal{E}^* := \bigcap_\mu \mathcal{E}^\mu,\)

where \(\mu\) ranges over all finite measures on \(E_\Delta\) and \(\mathcal{E}^\mu\) is the completion of \(\mathcal{E}_\Delta\) with the \(\mu\)-null sets. Then we have the following.

Theorem 1.6 If \(Y \in b\mathcal{F}\), then the mapping \(x \mapsto E_x[Y]\) belongs to \(\mathcal{E}^*\). Also, for each \(\mu\) and each \(T\), optional time relative to \((\mathcal{F}^\mu_t)\), we have

\(E_\mu\big[Y \circ \theta_T \mid \mathcal{F}^\mu_T\big] = E_{X_T}[Y] \quad \text{on } [T < \infty].\)

We finish this section with the following important result, called Blumenthal's zero-one law.

Theorem 1.7 Let \(\Lambda \in \mathcal{F}_0\). Then, for each \(x\), we have \(P_x(\Lambda) = 0\) or \(P_x(\Lambda) = 1\).

Proof First suppose that \(\Lambda \in \mathcal{F}^0_0\). Then \(\Lambda = X_0^{-1}(A)\) for some \(A \in \mathcal{E}_\Delta\), and

\(P_x(\Lambda) = P_x\big(X_0^{-1}(A)\big) = 1_A(x),\)

which can only be 0 or 1. If \(\Lambda \in \mathcal{F}_0\), then for any \(x\) we have \(\Lambda \in \mathcal{F}^{\varepsilon_x}_0\). Consequently, there exists \(\Lambda_x \in \mathcal{F}^0_0\) such that the symmetric difference \((\Lambda \setminus \Lambda_x) \cup (\Lambda_x \setminus \Lambda)\) is a \(P_x\)-null set. Hence \(P_x(\Lambda) = P_x(\Lambda_x)\), which, as shown above, can only be 0 or 1.

Notes

This chapter builds on the first two chapters of Chung and Walsh [40], with the notable exception of Proposition 1.7, whose proof follows that of Theorem 4.2 in another source. The material presented here is standard and can be found in classical references such as Blumenthal and Getoor [27] and Sharpe [106]. We have confined ourselves to the essential topics needed for understanding the subsequent material.

Blumenthal and Getoor [27] is the definitive source for the potential theory of Markov processes. Sharpe [106] gives a detailed account of the general theory of processes and stochastic calculus in relation to Markov processes. For h-transforms, the Martin boundary, and time reversal of Markov processes, Chung and Walsh [40] is a valuable resource.

The Feller process treated in this book, also known as the Feller–Dynkin process, is discussed in Chapter III of Rogers and Williams [101]. Since several different definitions of the Feller property are used in the literature, the reader should exercise caution when applying results from different sources.

Stochastic Differential Equations and Martingale Problems

This chapter studies the well-posedness of martingale problems in the sense of Stroock and Varadhan, which is essential for solving the filtering equations discussed in Chapter 3. We will establish the connection between solutions of martingale problems and stochastic differential equations (SDEs), and our primary focus will be on this relationship. To formulate the martingale problem we first need to develop the concept of an infinitesimal generator.

Infinitesimal Generators

Definition 2.1 Let \((P_t)\) be Fellerian and define the operator \(A: C \to C\) by

\(A f = \lim_{h \downarrow 0} \frac{P_h f - f}{h}.\)

The domain of \(A\), denoted \(D(A)\), consists of the functions \(f \in C\) for which the above limit exists and belongs to \(C\). The operator \(A\) so defined is called the infinitesimal generator of \((P_t)\).

To get an intuitive grasp of the operator \(A\), observe that if \(f \in C\),

\(E[f(X_{t+h}) - f(X_t) \mid \mathcal{F}_t] = P_h f(X_t) - f(X_t),\)

by the very definition of the Markov property. Thus, if \(f \in D(A)\), we may write

\(E[f(X_{t+h}) - f(X_t) \mid \mathcal{F}_t] = h\, A f(X_t) + o(h).\)

This shows that \(A\) describes the movement of \(X\) over a very short period of time, which justifies the name infinitesimal generator.

The next proposition states some basic properties of the infinitesimal generator and gives some examples of functions in its domain.

Proposition 2.1 Let \((P_t)\) be Fellerian and \(A\) its generator.

1. For every \(f \in C\) and \(t > 0\), \(\int_0^t P_s f\, ds \in D(A)\) and \(A \int_0^t P_s f\, ds = P_t f - f\).

2. If \(f \in D(A)\) and \(t \ge 0\), then \(P_t f \in D(A)\) and

\(\frac{d}{dt} P_t f = A P_t f = P_t A f.\)

3. If \(f \in D(A)\), then \(P_t f - f = \int_0^t P_s A f\, ds = \int_0^t A P_s f\, ds\).

Proof 1. For \(h > 0\),

\(\frac{P_h - I}{h} \int_0^t P_s f\, ds = \frac{1}{h}\int_t^{t+h} P_s f\, ds - \frac{1}{h}\int_0^h P_s f\, ds.\)

Since \(s \mapsto P_s f\) is continuous by Theorem 1.2, the above converges to \(P_t f - f \in C\) as \(h \to 0\). This proves 1.

2. That \(P_t f \in D(A)\) can be shown as above. In particular,

\(\lim_{h \downarrow 0} \frac{P_{t+h} f - P_t f}{h} = \lim_{h \downarrow 0} P_t\, \frac{P_h f - f}{h} = P_t A f,\)

by Theorem 1.2. This shows that \(t \mapsto P_t f\) has a right derivative equal to \(P_t A f\). Moreover, the above also implies that \(A P_t f = P_t A f\). In order to find the left derivative, consider \(\frac{P_t f - P_{t-h} f}{h} = P_{t-h}\, \frac{P_h f - f}{h}\) and let \(h \downarrow 0\).

3. This is a direct consequence of 2.

Corollary 2.1 If \(A\) is the infinitesimal generator of a Feller semigroup \((P_t)\), then \(D(A)\) is dense in \(C\) and \(A\) is a closed operator.

Proof Since \(\frac{1}{t}\int_0^t P_s f\, ds \to f\) as \(t \downarrow 0\) and \(\int_0^t P_s f\, ds \in D(A)\) by the previous proposition, we have that \(D(A)\) is dense in \(C\). To show that \(A\) is closed, let \((f_n) \subset D(A)\) with \(f_n \to f\) and \(A f_n \to g\) in \(C\). However, \(P_t f_n - f_n = \int_0^t P_s A f_n\, ds\), so letting \(n\) tend to \(\infty\),

\(P_t f - f = \int_0^t P_s g\, ds.\)

Dividing both sides of the above by \(t\) and letting \(t \downarrow 0\), we obtain \(f \in D(A)\) and \(A f = g\).

Example 2.1 Let \(X\) be linear Brownian motion. Then, using its transition density from Example 1.1, one can directly verify that \(D(A) \supset C^2(E_\Delta, \mathbb{R})\) and \(A f = \tfrac{1}{2} f''\) for any \(f \in C^2(E_\Delta, \mathbb{R})\), where \(E_\Delta\) is the one-point compactification of \(\mathbb{R}\).
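This can be illustrated numerically: for the test function \(f(y) = e^{-y^2}\) (an arbitrary choice with an explicit Gaussian convolution), the difference quotient \((P_h f - f)/h\) approaches \(\tfrac{1}{2} f''\) as \(h \downarrow 0\). The sketch below is an illustration only, not taken from the book.

```python
import numpy as np

# For Brownian motion, P_h f(x) = E[f(x + sqrt(h) Z)] with Z ~ N(0,1); for
# f(y) = exp(-y^2) this equals exp(-x^2 / (1 + 2h)) / sqrt(1 + 2h),
# and f''(x) = (4x^2 - 2) exp(-x^2).
f        = lambda x: np.exp(-x ** 2)
P_h_f    = lambda h, x: np.exp(-x ** 2 / (1 + 2 * h)) / np.sqrt(1 + 2 * h)
f_second = lambda x: (4 * x ** 2 - 2) * np.exp(-x ** 2)

x = 0.7
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    finite_diff = (P_h_f(h, x) - f(x)) / h
    print(h, finite_diff, 0.5 * f_second(x))
# (P_h f - f)/h converges to A f = (1/2) f'' as h -> 0
```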

The next theorem provides the first glimpse into the martingale problem.

Theorem 2.1 If \(f \in D(A)\), then the process

\(M^f_t := f(X_t) - f(X_0) - \int_0^t A f(X_s)\, ds\)

is a \((P_x, \mathcal{F}^0_t)\)-martingale for any \(x \in E_\Delta\).

Conversely, if \(f \in C\) and there exists a function \(g \in C\) such that

\(f(X_t) - f(X_0) - \int_0^t g(X_s)\, ds\)

is a \((P_x, \mathcal{F}^0_t)\)-martingale for every \(x \in E_\Delta\), then \(f \in D(A)\) and \(A f = g\).

Proof Since \(f\) and \(A f\) are bounded, \(M^f\) is integrable. Moreover, for \(s \le t\),

\(E_x\big[M^f_t - M^f_s \mid \mathcal{F}^0_s\big] = P_{t-s} f(X_s) - f(X_s) - \int_s^t P_{u-s} A f(X_s)\, du = 0,\)

by Proposition 2.1.

To show the converse statement observe that, taking expectations with respect to \(P_x\),

\(P_t f(x) - f(x) = \int_0^t P_s g(x)\, ds,\)

so that

\(\Big\|\frac{P_t f - f}{t} - g\Big\| \le \frac{1}{t}\int_0^t \|P_s g - g\|\, ds,\)

which goes to 0 as \(t \to 0\). Therefore \(f \in D(A)\) and \(A f = g\).

The martingale problem can be viewed as the converse of the above theorem: given a generator \(A\), one looks for a process \(X\) such that the conclusion of the theorem holds for a sufficiently rich class of functions \(f\) in a suitable subset of the domain of \(A\).

Definition 2.2 If \(X\) is a Markov process, then a Borel measurable function \(f: E \to \mathbb{R}\) is said to belong to the domain \(D_A\) of the extended infinitesimal generator if there exists a Borel measurable function \(g: E \to \mathbb{R}\) such that, \(P_x\)-a.s.,

\(f(X_t) - f(X_0) - \int_0^t g(X_s)\, ds, \qquad t \ge 0,\)

is a \((P_x, \mathcal{F}^0_t)\)-local martingale for any \(x \in E\).

Example 2.2 Consider the three-dimensional Bessel process with transition density

\(p^{(3)}_t(x, y) = \frac{y}{x}\, q_t(x, y),\)

as in Example 1.5. Note that the three-dimensional Bessel process is the unique solution of

\(X_t = X_0 + \int_0^t \frac{1}{X_s}\, ds + W_t.\)

Consider a function \(f: [0,\infty) \to [0,\infty)\) that is twice continuously differentiable, has compact support and satisfies \(f(x) = x\) for \(x \in [0,1]\). This function can be extended by setting \(f(\Delta) = 0\), so that it is continuous on the extended domain \(E_\Delta\). However, the extended function does not belong to \(D(A)\).

On the other hand, \(f \in D_A\). To see this, set

\(g(x) = \frac{f'(x)}{x} + \frac{1}{2} f''(x)\)

and note that \(|g(x)| \le \frac{1}{x} + K\) for some constant \(K\). Thus, by Itô's formula,

\(f(X_t) - f(x) - \int_0^t g(X_s)\, ds\)

is a \((P_x, \mathcal{F}^0_t)\)-local martingale, i.e. \(f \in D_A\).

Local Martingale Problem

In this section we take the state space to be \(E = \prod_{i=1}^d [l_i, \infty)\), with the convention that \([l_i, \infty) = \mathbb{R}\) if \(l_i = -\infty\). The metric on \(E\) is the Euclidean distance, so that \(E\) is a Polish space. We will consider generators of the form

\(A_t f(x) = \frac{1}{2}\sum_{i,j=1}^d a_{ij}(t, x)\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x) + \sum_{i=1}^d b_i(t, x)\, \frac{\partial f}{\partial x_i}(x),\)

where \(a\) is a matrix field on \(\mathbb{R}_+ \times E\) and \(b\) a vector field on \(\mathbb{R}_+ \times E\) such that, for all \(i, j = 1, \ldots, d\), the maps \((t, x) \mapsto a_{ij}(t, x)\) and \((t, x) \mapsto b_i(t, x)\) are Borel measurable, and for each \((t, x)\) the matrix \(a(t, x)\) is symmetric and nonnegative definite, i.e. \(\sum_{i,j} a_{ij}(t, x)\, \lambda_i \lambda_j \ge 0\) for every \(\lambda \in \mathbb{R}^d\).

Moreover, many of the results presented in this section require \(a\) and \(b\) to satisfy the following additional assumption.

Assumption 2.1

1. For each \(i, j = 1, \ldots, d\), the map \((t, x) \mapsto a_{ij}(t, x)\) is locally bounded on \(\mathbb{R}_+ \times E\).

2. For every \(n \ge 1\) and any compact \(K \subset E\) there exists \(C_n: (0, n) \to \mathbb{R}_+\) with \(\int_0^n C_n(s)\, ds < \infty\) such that \(|b_i(t, x)| \le C_n(t)\) for all \(t \in (0, n)\), \(x \in K\) and \(i = 1, \ldots, d\).

For the linear SDE

\(dX_t = \big(b_0(t) + b_1(t)\, X_t\big)\, dt + \sigma(t)\, dW_t, \qquad X_s = \xi,\)

the coefficients are Lipschitz in the space variable; therefore, pathwise uniqueness holds.

In this case the solution is explicitly given by

\(X_t = \Phi(t)\left(\xi + \int_s^t \Phi^{-1}(r)\, b_0(r)\, dr + \int_s^t \Phi^{-1}(r)\, \sigma(r)\, dW_r\right),\)

where \(\Phi\) is the unique solution of the equation

\(\dot{\Phi}(t) = b_1(t)\, \Phi(t), \qquad \Phi(s) = I.\)

This can be verified by the integration by parts formula once the Lebesgue and stochastic integrals above are well defined. In this context, \(\Phi^{-1}\) is the unique absolutely continuous solution of the adjoint equation \(\dot{y}(t) = -y(t)\, b_1(t)\) with initial condition \(y(s) = I\).

We refer the reader to Chap. III of [62] for the general theory of linear systems of ODEs with integrable coefficients.
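In the scalar constant-coefficient case (an Ornstein–Uhlenbeck process), \(\Phi(t) = e^{b_1 t}\) and the explicit formula can be compared against an Euler–Maruyama discretization driven by the same Brownian increments. The sketch below is a numerical illustration only; the parameters are arbitrary and not from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar linear SDE dX = (b1*X + b0) dt + sigma dW, X_0 = xi.
b1, b0, sigma, xi = -0.8, 0.5, 0.3, 2.0
T, n = 5.0, 50_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), n)

# Explicit solution X_t = Phi(t) * (xi + int Phi^{-1} b0 dr + int Phi^{-1} sigma dW),
# with Phi(t) = exp(b1 t); the integrals are approximated by left Riemann sums.
Phi = np.exp(b1 * t)
drift_int = np.concatenate(([0.0], np.cumsum(b0 * dt / Phi[:-1])))
stoch_int = np.concatenate(([0.0], np.cumsum(sigma * dW / Phi[:-1])))
X_explicit = Phi * (xi + drift_int + stoch_int)

# Euler-Maruyama on the same Brownian increments, for comparison.
X_em = np.empty(n + 1); X_em[0] = xi
for k in range(n):
    X_em[k + 1] = X_em[k] + (b1 * X_em[k] + b0) * dt + sigma * dW[k]

print(np.max(np.abs(X_explicit - X_em)))   # small: both discretize the same solution
```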

The following classical results on the existence and uniqueness of solutions of SDEs can be found, e.g., in Chap. 5 of [50].

Theorem 2.6 Suppose that for a given open set \(U \subset E\) the coefficients \(b\) and \(\sigma\) satisfy

\(\|b(t, x) - b(t, y)\| + \|\sigma(t, x) - \sigma(t, y)\| \le K_T \|x - y\|, \qquad 0 \le t \le T,\ x, y \in U.\)

Let \(X\) and \(Y\) be two strong solutions of (2.6) starting from \(s\) relative to \((\xi, W)\), and define

\(\tau = \inf\{t \ge s : X_t \notin U \text{ or } Y_t \notin U\}.\)

Then \(P(X_{t \wedge \tau} = Y_{t \wedge \tau},\ t \in [s, T]) = 1\). If there exists a sequence \(U_n\) increasing to \(E = \mathbb{R}^d\) such that each \(U_n\) satisfies the above condition for every \(T > 0\), then pathwise uniqueness holds for (2.6).

The first statement of the above theorem follows from the proof of Theorem 5.3.7 in [50], noting that our standing assumption on \(a\) and \(b\) guarantees that all integrals in that proof are well defined. The second conclusion follows from the fact that \(\tau_n \to \infty\), \(P\)-a.s., where \(\tau_n\) is the first exit time from \(U_n\).

The next theorem is Theorem 5.3.11 of [50]. The additional claim on uniqueness is a direct consequence of the theorem above.

Theorem 2.7 Suppose that \(E = \mathbb{R}^d\) and the coefficients \(b\) and \(\sigma\) are locally bounded. Moreover, assume that for each \(T > 0\) and \(n \ge 1\) there exist constants \(K_{T,n}\) such that

\(\|b(t, x) - b(t, y)\| + \|\sigma(t, x) - \sigma(t, y)\| \le K_{T,n} \|x - y\|, \qquad 0 \le t \le T,\ \|x\| \vee \|y\| \le n.\)

Let \((\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)\) be a filtered probability space with a fixed \(s \ge 0\) satisfying the conditions of Definition 2.9, and assume that \(E[\|\xi\|^2] < \infty\). Then there exists a unique strong solution \(X = (X_t)_{t \ge s}\) of (2.6) starting from \(s\) on this space relative to \((\xi, W)\). Moreover, \(X_t \in \mathcal{F}_t \vee \sigma(\xi)\) for every \(t \ge s\).

Remark 2.8 Under additional assumptions on the coefficients one can obtain upper bounds on the second moment of the solution given by the previous theorem. More precisely, suppose that, in addition to the conditions of Theorem 2.7, \(K_{T,n} = K_T\) and \(b\) satisfies

\(\|b(t, x)\|^2 \le K_T\big(1 + \|x\|^2\big), \qquad t \in [0, T],\ x \in E.\)

Then (see, e.g., Theorem 5.2.9 and Problem 5.3.15 in [77]) the second moment \(E\big[\sup_{s \le u \le t} \|X_u\|^2\big]\) is finite for \(t \in [s, T]\) and admits an explicit bound in terms of \(K_T\), \(T\) and \(E[\|\xi\|^2]\).

The following theorem, due to Yamada and Watanabe, can be found in [77], Corollary 5.3.23, or in [70], Theorem IV.1.1. We denote by \(\mathcal{W}\) the Wiener measure on \(C([r,\infty), \mathbb{R}^d)\).

Theorem 2.12 Suppose that there exists a weak solution to (2.6) starting from some \(r \ge 0\) with initial distribution \(\mu\), and that pathwise uniqueness holds for (2.6). Then there exists an \(\overline{\mathcal{E} \otimes \mathcal{B}(C([r,\infty), \mathbb{R}^d))}/\mathcal{B}(C([r,\infty), E))\)-measurable function

\(F: E \times C([r,\infty), \mathbb{R}^d) \to C([r,\infty), E),\)

where \(\overline{\mathcal{E} \otimes \mathcal{B}_t(C([r,\infty), \mathbb{R}^d))}\) denotes the completion of \(\mathcal{E} \otimes \mathcal{B}_t(C([r,\infty), \mathbb{R}^d))\) with the \(\mu \times \mathcal{W}\)-null sets, such that any weak solution \((X, W)\) of (2.6) with initial distribution \(\mu\) satisfies \(X = F(X_r, W)\) a.s.

Moreover, if a given filtered probability space \((\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)\) is rich enough to support a Brownian motion and a random variable \(\xi \in \mathcal{F}_r\) with distribution \(\mu\), then the process \(F(\xi, W_\cdot)\) is the strong solution to (2.6) starting from \(r\) relative to \((\xi, W)\).

As a consequence of this theorem we have the following.

Corollary 2.4 Pathwise uniqueness implies uniqueness in the sense of probability law.

Combining the above corollary with Corollary 2.3 and Proposition 2.7 we obtain

Corollary 2.5 Suppose Assumption 2.1 holds and there exists a unique strong solution to (2.6) starting from \(s\) relative to the pair \((x, W)\) for all \(s \ge 0\) and \(x \in E\). Then the martingale problem for \(A\) is well posed.

The above result shows that existence and uniqueness of strong solutions for every deterministic initial condition imply, via Remark 2.4, the strong Markov property of solutions of (2.6). Moreover, if we are willing to assume that the coefficients of the SDE are bounded and time homogeneous, the solutions will have the Feller property.

Theorem 2.13 Suppose that \(E = \mathbb{R}^d\) and the coefficients \(b\) and \(\sigma\) are bounded and do not depend on time. Moreover, assume that there exists a constant \(K\) such that

\(\|b(x) - b(y)\| + \|\sigma(x) - \sigma(y)\| \le K \|x - y\|, \qquad x, y \in E.\)

Then the solutions of (2.6) have the Feller property.

Proof By Theorem 2.7 there is a unique strong solution to (2.6) for any given deterministic initial condition \(x\). Moreover, Corollary 2.5 in conjunction with Theorem 2.3 implies that these solutions are strong Markov. Let \((P_t)_{t \ge 0}\) denote the corresponding transition semigroup and \((X^x)_{x \in E}\) the family of strong solutions of

\(X^x_t = x + \int_0^t \sigma(X^x_s)\, dW_s + \int_0^t b(X^x_s)\, ds\)

on a fixed filtered probability space \((\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)\). Revuz and Yor show on p. 380 of [100] that

\(E\Big[\sup_{s \le t} \|X^x_s - X^y_s\|^2\Big] \le a_t\, \|x - y\|^2\)

for some constant \(a_t\) that depends on \(t\) only. Since \(P_t f(x) = E[f(X^x_t)]\), this inequality implies that \(x \mapsto P_t f(x)\) is continuous for any \(f \in C_0\).

Next we show that \(P_t f \in C_0\) whenever \(f \in C_0\). Indeed, for \(r > 0\) and \(\|x\|\) large,

\(|P_t f(x)| \le \sup_{\|y\| > r} |f(y)| + \|f\|\, P\big(\|X^x_t - x\| \ge \|x\| - r\big),\)

and by Chebyshev's inequality the last probability is bounded by \(2 k^2 (\|x\| - r)^{-2}(t + t^2)\), where \(k\) is the uniform bound on \(\sigma\) and \(b\). Thus, letting first \(x\) and then \(r\) tend to infinity yields \(\lim_{x \to \infty} |P_t f(x)| = 0\).

Finally, \(\lim_{t \to 0} P_t f(x) = \lim_{t \to 0} E[f(X^x_t)] = f(x)\) by the continuity of the solutions of (2.6) and the boundedness of \(f\).
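The \(L^2\) estimate used above can be illustrated by coupling two Euler–Maruyama approximations through the same Brownian increments. The coefficients in the sketch below are an arbitrary bounded Lipschitz choice (not from the book); the point is that the ratio \(E[\sup_{s \le T}|X^x_s - X^y_s|^2]/|x - y|^2\) stays bounded as the initial points get close.

```python
import numpy as np

rng = np.random.default_rng(3)

# Bounded, Lipschitz, time-homogeneous coefficients (illustrative choice only).
b = lambda x: -np.sin(x)
sigma = lambda x: 1.0 / (1.0 + x ** 2)

T, n, n_paths = 1.0, 2000, 5000
dt = T / n

def coupled_sup_gap(x0, y0):
    """sup_{t<=T} |X^x_t - X^y_t| along Euler paths driven by the SAME Brownian increments."""
    X = np.full(n_paths, float(x0))
    Y = np.full(n_paths, float(y0))
    gap = np.abs(X - Y)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + b(X) * dt + sigma(X) * dW
        Y = Y + b(Y) * dt + sigma(Y) * dW
        gap = np.maximum(gap, np.abs(X - Y))
    return gap

for eps in (0.5, 0.1, 0.02):
    g = coupled_sup_gap(0.0, eps)
    print(eps, np.mean(g ** 2) / eps ** 2)
# E[sup_{t<=T} |X^x - X^y|^2] / |x - y|^2 remains of order one as |x - y| -> 0,
# which is the estimate behind the continuity of x -> P_t f(x).
```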

Theorem 2.12 will be instrumental in establishing the existence of a strong solution when the coefficients of a given SDE do not satisfy the hypotheses of Theorems 2.7 and 2.8. In order to use this theorem one needs to show the existence of a weak solution.

The One-Dimensional Case

The results above can be significantly improved for one-dimensional time-homogeneous SDEs. In this subsection we take \(E = [l, \infty)\) with \(l \ge -\infty\), with the convention that \(E = \mathbb{R}\) when \(l = -\infty\), and we are interested in the solutions of

\(X_t = X_0 + \int_0^t \sigma(X_s)\, dW_s + \int_0^t b(X_s)\, ds.\)

In the above, \(b\) and \(\sigma\) are measurable and \(\sigma > 0\) on \((l, \infty)\). In this setting Engelbert and Schmidt established the existence and uniqueness of a weak solution up to the exit time from \((l, \infty)\), under an integrability condition that is also necessary when \(b \equiv 0\). For an in-depth analysis we refer the reader to [47–49]. The next theorem combines Theorems 5.5.7 and 5.5.15 of [77].

Theorem 2.14 Suppose that

\(\frac{1 + |b(y)|}{\sigma^2(y)}\) is locally integrable on \((l, \infty)\).   (2.17)

Then, for any \(x \in (l, \infty)\), there exists a weak solution to

\(X_t = x + \int_0^t \sigma(X_s)\, dW_s + \int_0^t b(X_s)\, ds, \qquad t < \zeta,\)   (2.18)

where \(\zeta = \inf\{t \ge 0 : X_t = l \text{ or } \infty\}\). Moreover, the solution is unique in law.

To establish that (2.18) has a global solution, i.e. that \(\zeta = \infty\) so the solution does not exit \((l, \infty)\), one needs to understand the behaviour of \(X\) near the boundaries \(l\) and \(\infty\). For one-dimensional SDEs this behaviour is completely characterised by the scale function \(s\) and the speed measure \(m\), defined on the Borel subsets of \((l, \infty)\) by

\(s(x) = \int_c^x \exp\!\left(-\int_c^y \frac{2\, b(z)}{\sigma^2(z)}\, dz\right) dy,\)   (2.19)

\(m(dx) = \frac{2\, dx}{\sigma^2(x)\, s'(x)},\)   (2.20)

where the constant \(c\) is chosen from \((l, \infty)\). Note that \(s\) is well defined in view of (2.17). Moreover, since the choice of \(c\) is arbitrary, the scale function is unique only up to an affine transformation.
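As an illustration of (2.19), the scale function of the three-dimensional Bessel process (\(b(x) = 1/x\), \(\sigma \equiv 1\), cf. Example 2.2) can be computed by nested quadrature and compared with the closed form \(s(x) = 1 - 1/x\) obtained with \(c = 1\). The sketch below is not from the book.

```python
import numpy as np
from scipy.integrate import quad

# Three-dimensional Bessel process on (0, infinity): b(x) = 1/x, sigma(x) = 1.
b = lambda x: 1.0 / x
sigma = lambda x: 1.0
c = 1.0   # arbitrary reference point in (l, infinity); s is unique up to affine maps

def s_prime(y):
    inner, _ = quad(lambda z: 2.0 * b(z) / sigma(z) ** 2, c, y)
    return np.exp(-inner)

def scale(x):
    val, _ = quad(s_prime, c, x)
    return val

for x in (0.25, 0.5, 2.0, 4.0):
    # closed form with c = 1: s(x) = 1 - 1/x, which tends to -infinity at the
    # left endpoint 0 and to the finite value 1 at infinity
    print(x, scale(x), 1.0 - 1.0 / x)
```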

The left endpoint \(l\) is called exit if for some \(z \in (l, \infty)\)

\(\int_l^z m\big((a, z)\big)\, s'(a)\, da < \infty.\)

Consider now the squared Bessel equation of dimension \(\delta \ge 0\) on \(E = [0, \infty)\),

\(X_t = x + \delta t + 2\int_0^t \sqrt{X_s}\, dW_s.\)   (2.21)

For \(0 < \delta < 2\) the left endpoint 0 is reached almost surely, so a global solution is only possible if solutions can be continued after hitting 0, even though 0 is not an entrance boundary. In this case 0 is a regular boundary, which suggests that solutions started from 0 could be pieced together to produce a global solution. Such a construction, however, would require \(P^0(T_0 > 0) = 1\), where \(P^0\) is the law of the squared Bessel process of dimension \(0 \le \delta < 2\) started at 0 and \(T_0\) is its first return time to 0. This condition fails: a computation of \(P^0(T_0 > \varepsilon)\) in terms of the gamma function, using (13) in [59], followed by letting \(\varepsilon \to 0\), yields \(P^0(T_0 > 0) = 0\).

The Engelbert–Schmidt conditions, although general, do not by themselves guarantee a global solution of (2.18). We will nevertheless show that (2.21) admits a unique strong solution for every \(x \ge 0\), using the result of Yamada and Watanabe that weak existence together with pathwise uniqueness yields a unique strong solution. To this end we also quote a result that gives pathwise uniqueness for one-dimensional SDEs under conditions weaker than the Lipschitz condition required in the multidimensional setting. For a proof we refer the reader to Section IX.3 of [100].

Theorem 2.16 Suppose that \(\sigma\) satisfies condition (2.22). If \(X^1\) and \(X^2\) are two strong solutions of (2.18), then so is \(X^1 \vee X^2\). In particular, uniqueness in law implies pathwise uniqueness for (2.18).

In the above theorem, pathwise uniqueness is obtained only after uniqueness in law has been established. When the drift coefficient is constant, this first step can be bypassed.

Corollary 2.7 Suppose that \(\sigma\) satisfies (2.22) and \(b(x) = b\) for some \(b \in \mathbb{R}\) and all \(x \in E\). Then, for any \(x \in E\), pathwise uniqueness holds for (2.18).

Proof Let \(X^1\) and \(X^2\) be two strong solutions of (2.18). Then Theorem 2.16 yields that \(X := X^1 \vee X^2\) is also a strong solution. Consider \(Y := X - X^1\), which, since the constant drift cancels, satisfies

\(Y_t = \int_0^t \big(\sigma(X_s) - \sigma(X^1_s)\big)\, dW_s.\)

Clearly \(Y\) is a nonnegative local martingale, and thus a nonnegative supermartingale. Since it starts from 0, it has to stay at 0. Therefore \(X^1 \vee X^2 = X^1\); by symmetry \(X^1 \vee X^2 = X^2\) as well, so \(X^1 = X^2\).

Corollary 2.8 Consider \(E = [0, \infty)\). For any \(\delta \ge 0\) and \(x \in E\) there exists a unique strong solution to (2.21).

Proof Consider the auxiliary SDE on \(\mathbb{R}\)

\(X_t = x + \delta t + 2\int_0^t \sqrt{|X_s|}\, dW_s.\)   (2.24)

Theorem 2.14 guarantees the existence of a non-exploding weak solution to this SDE, and Corollary 2.7 establishes pathwise uniqueness for it. As a result, Theorem 2.12 confirms the existence of a unique strong solution to (2.24).

When \(\delta = x = 0\), the unique strong solution is \(X \equiv 0\). By the comparison result of Theorem 2.9, the solution of (2.24) is therefore nonnegative for any \(x \in E\) and \(\delta \ge 0\). Consequently, (2.24) can be rewritten as (2.21), and its unique strong solution is the unique strong solution of (2.21).
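A crude Euler discretization of the auxiliary SDE (2.24) (an illustration only, not the construction used in the book) is consistent with the known identity \(E[X_t] = x + \delta t\) for the squared Bessel process. Individual Euler paths may dip slightly below zero, which is a discretization artifact; the true solution is nonnegative, as argued above.

```python
import numpy as np

rng = np.random.default_rng(2)

def euler_squared_bessel(delta, x0, T=1.0, n=2000, n_paths=10_000):
    """Euler scheme for the auxiliary SDE dX = delta dt + 2 sqrt(|X|) dW on R."""
    dt = T / n
    X = np.full(n_paths, float(x0))
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + delta * dt + 2.0 * np.sqrt(np.abs(X)) * dW
    return X

for delta in (0.5, 1.0, 3.0):
    X_T = euler_squared_bessel(delta, x0=0.5)
    # empirical mean vs the exact value E[X_T] = x0 + delta * T
    print(delta, X_T.mean(), 0.5 + delta * 1.0)
```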

Static Markov Bridges

Dynamic Markov Bridges

Dynamic Bessel Bridge of Dimension 3

Applications

Date posted: 28/08/2021, 13:51
