
Stochastic Processes: From Physics to Finance — Wolfgang Paul, Jörg Baschnagel




DOCUMENT INFORMATION

Basic information

Title: Stochastic Processes: From Physics to Finance
Authors: Wolfgang Paul, Jörg Baschnagel
Institution: Martin-Luther-Universität Halle
Field: Physics
Type: book
Year: 2013
City: Heidelberg
Format:
Pages: 287
Size: 5.03 MB

Structure

  • Stochastic Processes

    • Preface to the Second Edition

    • Preface to the First Edition

    • Contents

  • Chapter 1: A First Glimpse of Stochastic Processes

    • 1.1 Some History

    • 1.2 Random Walk on a Line

      • 1.2.1 From Binomial to Gaussian

      • 1.2.2 From Binomial to Poisson

      • 1.2.3 Log-Normal Distribution

    • 1.3 Further Reading

      • Section 1.1

      • Section 1.2

  • Chapter 2: A Brief Survey of the Mathematics of Probability Theory

    • 2.1 Some Basics of Probability Theory

      • 2.1.1 Probability Spaces and Random Variables

      • 2.1.2 Probability Theory and Logic

        • Maximum Entropy Approach

        • Maximum Entropy Approach to Statistical Physics

      • 2.1.3 Equivalent Measures

      • 2.1.4 Distribution Functions and Probability Densities

      • 2.1.5 Statistical Independence and Conditional Probabilities

      • 2.1.6 Central Limit Theorem

      • 2.1.7 Extreme Value Distributions

    • 2.2 Stochastic Processes and Their Evolution Equations

      • 2.2.1 Martingale Processes

      • 2.2.2 Markov Processes

        • Stationary Markov Processes and the Master Equation

        • Fokker-Planck and Langevin Equations

    • 2.3 Itô Stochastic Calculus

      • 2.3.1 Stochastic Integrals

      • 2.3.2 Stochastic Differential Equations and the Itô Formula

    • 2.4 Summary

    • 2.5 Further Reading

      • Section 2.1

      • Section 2.2

      • Section 2.3

  • Chapter 3: Diffusion Processes

    • 3.1 The Random Walk Revisited

      • 3.1.1 Polya Problem

      • 3.1.2 Rayleigh-Pearson Walk

        • A Polymer Model

      • 3.1.3 Continuous-Time Random Walk

    • 3.2 Free Brownian Motion

      • 3.2.1 Velocity Process

        • The Velocity Distribution

      • 3.2.2 Position Process

        • The Position Distribution

    • 3.3 Caldeira-Leggett Model

      • 3.3.1 Definition of the Model

      • 3.3.2 Velocity Process and Generalized Langevin Equation

    • 3.4 On the Maximal Excursion of Brownian Motion

    • 3.5 Brownian Motion in a Potential: Kramers Problem

      • 3.5.1 First Passage Time for One-dimensional Fokker-Planck Equations

      • 3.5.2 Kramers Result

    • 3.6 A First Passage Problem for Unbounded Diffusion

    • 3.7 Kinetic Ising Models and Monte Carlo Simulations

      • 3.7.1 Probabilistic Structure

      • 3.7.2 Monte Carlo Kinetics

      • 3.7.3 Mean-Field Kinetic Ising Model

    • 3.8 Quantum Mechanics as a Diffusion Process

      • 3.8.1 Hydrodynamics of Brownian Motion

      • 3.8.2 Conservative Diffusion Processes

      • 3.8.3 Hypothesis of Universal Brownian Motion

        • Interpretation

      • 3.8.4 Tunnel Effect

      • 3.8.5 Harmonic Oscillator and Quantum Fields

    • 3.9 Summary

    • 3.10 Further Reading

      • Section 3.1

      • Section 3.2

      • Section 3.5

      • Section 3.7

      • Section 3.8

  • Chapter 4: Beyond the Central Limit Theorem: Lévy Distributions

    • 4.1 Back to Mathematics: Stable Distributions

    • 4.2 The Weierstrass Random Walk

      • 4.2.1 Definition and Solution

        • Solution and Properties

        • Fractional Diffusion Equation

      • 4.2.2 Superdiffusive Behavior

        • Fractal Dimension

      • 4.2.3 Generalization to Higher Dimensions

        • Polya's Problem Revisited

    • 4.3 Fractal-Time Random Walks

      • 4.3.1 A Fractal-Time Poisson Process

        • The Kohlrausch Function

      • 4.3.2 Subdiffusive Behavior

    • 4.4 A Way to Avoid Diverging Variance: The Truncated Lévy Flight

      • Illustration

      • Addendum

    • 4.5 Summary

    • 4.6 Further Reading

      • Books

      • Review Articles

  • Chapter 5: Modeling the Financial Market

    • 5.1 Basic Notions Pertaining to Financial Markets

      • Credit Risk

      • Operational Risk

      • Market Risk

      • Efficient Market Hypothesis

      • Geometric Brownian Motion

      • Derivatives: Options and Futures

    • 5.2 Classical Option Pricing: The Black-Scholes Theory

      • 5.2.1 The Black-Scholes Equation: Assumptions and Derivation

        • Riskless Hedging and Legendre Transformation

      • 5.2.2 The Black-Scholes Equation: Solution and Interpretation

        • Solution

        • Interpretation

      • 5.2.3 Risk-Neutral Valuation

        • Example: Black-Scholes Price of a Call Option

        • Geometric Brownian Motion and Martingales

      • 5.2.4 Deviations from Black-Scholes: Implied Volatility

    • 5.3 Models Beyond Geometric Brownian Motion

      • 5.3.1 Statistical Analysis of Stock Prices

        • Prices Versus Returns

        • Distribution of Asset Prices

        • Stochastic Behavior of the Volatility

        • A Simple Model

        • Addendum

      • 5.3.2 The Volatility Smile: Precursor to Gaussian Behavior?

        • Drift and Volatility for Short Maturities

        • Black-Scholes Versus Bachelier

        • The Volatility Smile

      • 5.3.3 Are Financial Time Series Stationary?

      • 5.3.4 Agent Based Modeling of Financial Markets

        • An Order Book Based Market Model

        • A Physicist's Market Model

    • 5.4 Towards a Model of Financial Crashes

      • 5.4.1 Some Empirical Properties

      • 5.4.2 A Market Model: From Self-organization to Criticality

        • The Market in Normal Periods: The Cont-Bouchaud Model

        • Critical Crashes: The Sornette-Johansen Model

    • 5.5 Summary

    • 5.6 Further Reading

      • Finance Books

      • Physics Books

  • Appendix A: Stable Distributions Revisited

    • A.1 Testing for Domains of Attraction

    • A.2 Closed-Form Expressions and Asymptotic Behavior

      • The Gaussian: alpha=2

      • The Symmetric Cases: beta=0

      • The Asymmetric Cases: beta=±1

  • Appendix B: Hyperspherical Polar Coordinates

  • Appendix C: The Weierstrass Random Walk Revisited

    • Interpolation: Mellin Transforms

    • The One-dimensional Case

  • Appendix D: The Exponentially Truncated Lévy Flight

    • Case: 0<alpha<1

    • Case: 1<alpha<2

  • Appendix E: Put-Call Parity

  • Appendix F: Geometric Brownian Motion

  • References

  • Index

Content

1.1 Some History

Let us start this historical introduction with a quote from the superb review article On the Wonderful World of Random Walks by E.W. Montroll and M.F. Shlesinger [145], which also contains a more detailed historical account of the development of probability theory:

In the 17th century, travel was both burdensome and costly, leaving gentlemen with limited options to fill their leisure time. With daily activities like eating, hunting, and socializing often falling short, many turned to gambling as a preferred pastime, while others found solace in prayer.

The origins of probability theory and stochastic processes can be traced back to gambling, which has long been a part of human activity. During the Enlightenment, gambling outcomes shifted from being perceived as divine interventions to subjects of rational analysis. A notable figure of the 17th century, the Chevalier de Méré, famously questioned the odds of a gambling game, prompting a pivotal correspondence between the mathematicians Pascal and Fermat. This exchange is widely regarded as the foundation of probability theory.

W. Paul, J. Baschnagel, Stochastic Processes, DOI 10.1007/978-3-319-00327-6_1, © Springer International Publishing Switzerland 2013

The first book on probability theory was written by Christiaan Huygens (1629–1695) and published in 1657 under the title "De Ratiociniis in Ludo Aleae" (On Reasoning in the Game of Dice). The foundational text of modern probability theory, "Ars Conjectandi" (The Art of Conjecturing), was written by Jakob Bernoulli (1662–1705) and published posthumously in 1713. It contained:

• a critical discussion of Huygens’ book,

• combinatorics, as it is taught today,

• probabilities in the context of gambling games, and

• an application of probability theory to daily problems, especially in economics.

In the early 18th century, the foundational elements of stochastic concepts in economics thus began to emerge, combining probabilistic descriptions of economic processes with risk-control techniques borrowed from gambling. This fusion eventually led to the risk-management strategies of modern financial markets, which have gained considerable attention and importance over the past 30 years.

Louis Bachelier's Ph.D. thesis, "Théorie de la Spéculation," marked a significant advance in the stochastic modeling of financial asset prices and earned him his doctorate in mathematics on March 19, 1900. Supervised by the eminent mathematician Henri Poincaré, Bachelier's work is remarkable for two reasons:

• It already contained many results of the theory of stochastic processes as it stands today, which were only later mathematically formalized.

• It was so completely ignored that even Poincaré forgot that it contained the solution to the Brownian motion problem when he later started to work on that problem.

Brownian motion is the archetypical problem in the theory of stochastic processes.

In 1827, the Scottish botanist Robert Brown observed the erratic motion of pollen particles suspended in a fluid, a phenomenon later explained by the kinetic theory of gases. This theory, rooted in Daniel Bernoulli's 1738 work "Hydrodynamica", laid the groundwork for Einstein's (1879–1955) and Smoluchowski's (1872–1917) successful treatments of the Brownian motion problem in 1905 and 1906, respectively. Through the work of Maxwell (1831–1879) and Boltzmann (1844–1906), Statistical Mechanics, as it grew out of the kinetic theory of gases, was the main area of application of probabilistic concepts in theoretical physics in the 19th century.

Brownian motion, with analogues in fields ranging from physics, chemistry, and biology to finance, sociology, and politics, illustrates how seemingly insignificant and unpredictable events, such as particle collisions or individual investor decisions, collectively produce observable phenomena like the movement of pollen grains or daily stock market fluctuations. While the individual events are too complex to analyze in detail, their statistical properties shape the macroscopic behavior observed in these systems.

1.2 Random Walk on a Line

1.2.1 From Binomial to Gaussian

This section explores the connection between particle diffusion, described by Fick's equation, and a jump process, by analyzing approximations of the binomial distribution valid for a large number of jumps (N → ∞) and long times.

Assuming N ≫ 1, we can use Stirling's formula to approximate the factorials in the binomial distribution,

\ln N! \approx N \ln N - N + \frac{1}{2}\ln(2\pi N).

To approximate the binomial distribution near its maximum, we expand around the expectation value, writing m = \langle m \rangle + \delta m = 2Np - N + \delta m. The variance of the binomial distribution is \sigma^2 = 4Npq. Keeping terms up to second order in \delta m (the remaining terms, of order \delta m (q - p), can be neglected if Np \to \infty), we finally obtain

p(m, N) \to \frac{2}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(\delta m)^2}{2\sigma^2}\right).

The Gaussian or normal distribution is named after C.F. Gauss (1777–1855). The factor of 2 in front of the exponential arises because, for fixed N, only every other value of m carries a non-zero probability. The distribution is called 'normal' because of its widespread occurrence in statistics: when one sums random variables with finite first and second moments, such as the jump distances x_i of a random walker, the sum X = \sum_{i=1}^{N} x_i is distributed according to a normal distribution for N \to \infty. This is the gist of the central limit theorem, to which we will return in the next chapter.

In certain cases the moments \langle x_i^2 \rangle or \langle x_i \rangle may not exist. The limiting distribution of the sum variable is then not Gaussian but a so-called stable or Lévy distribution, named after the mathematician Paul Lévy, who began studying these distributions in the 1930s. The Gaussian distribution is a special case of the stable distributions. Stable distributions have become increasingly important in applications of the theory of stochastic processes, and we will return to them in Chapter 4.
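The convergence stated by the central limit theorem is easy to check numerically. A minimal sketch (not from the book; the ±1 jump distribution, step count, and sample sizes are illustrative assumptions): sum N jumps with finite moments, then compare the sample mean and variance of the sum with the values predicted by the normal limit.

```python
import random
import statistics

random.seed(42)

def walk_sum(n_steps):
    """Sum of n_steps independent +-1 jumps (all moments finite)."""
    return sum(random.choice((-1, 1)) for _ in range(n_steps))

N = 1000          # jumps per walker
walkers = 5000    # independent realizations
sums = [walk_sum(N) for _ in range(walkers)]

# CLT: X = sum x_i -> Normal(0, N * <x_i^2>) = Normal(0, N) here
mean = statistics.fmean(sums)
var = statistics.pvariance(sums)
print(f"sample mean {mean:.1f} (CLT: 0), sample variance {var:.0f} (CLT: {N})")

# fraction of walkers within one standard deviation, ~68% for a Gaussian
sd = N ** 0.5
frac = sum(abs(s) < sd for s in sums) / walkers
print(f"fraction within one sigma: {frac:.2f} (Gaussian: about 0.68)")
```

The same experiment with a jump distribution whose second moment diverges would not converge to a Gaussian, which is the point of the Lévy case above.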

We now want to leave the discrete description and perform a continuum limit. Let us write

x = m\,\Delta x, \qquad t = N\,\Delta t, \qquad (1.15)

and define the diffusion coefficient D = 2pq(\Delta x)^2/\Delta t, so that we can interpret

p(m\,\Delta x, N\,\Delta t) = \frac{2\,\Delta x}{\sqrt{4\pi D t}} \exp\left(-\frac{(x - \langle x \rangle)^2}{4Dt}\right)

as the probability of finding our random walker in an interval of width 2\Delta x around a certain position x at time t. We now require that

\Delta x \to 0, \qquad \Delta t \to 0, \qquad \frac{2pq(\Delta x)^2}{\Delta t} = D = \text{const}. \qquad (1.16)

The diffusion coefficient D, measured in units of length squared per time, characterizes the motion of the random walker. In this limit, p(x, t)\,dx is the probability of finding the random walker in an interval of width dx around the position x at time t.

When we look closer at the definition of x above, we see that another assumption was buried in our limiting procedure:

\langle x(t) \rangle = \langle m \rangle\,\Delta x = 2\left(p - \tfrac{1}{2}\right) N\,\Delta x = v\,t.

So our limiting procedure also has to include the requirement

\Delta x \to 0, \qquad \Delta t \to 0, \qquad 2\left(p - \tfrac{1}{2}\right)\frac{\Delta x}{\Delta t} = v = \text{const}. \qquad (1.18)

When the probabilities of stepping in either direction are equal (p = q = 1/2), the average position of the random walker remains at zero, so the drift velocity v vanishes. Introducing an asymmetry in the transition probabilities (p ≠ q) gives the walker a net velocity. For v = 0 we have \langle x \rangle = 0 and the mean squared displacement \langle x^2 \rangle = 2Dt. The probability density for the position of the random walker at time t is

p(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left(-\frac{(x - vt)^2}{4Dt}\right), \qquad (1.19)

with starting condition p(x, 0) = \delta(x) and boundary conditions p(x, t) \to 0 for x \to \pm\infty.

By substitution one can convince oneself that (1.19) is the solution of the following partial differential equation:

\frac{\partial}{\partial t} p(x, t) = -v \frac{\partial}{\partial x} p(x, t) + D \frac{\partial^2}{\partial x^2} p(x, t), \qquad (1.20)

which is Fick's equation for diffusion in the presence of a constant drift velocity.
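The predictions of (1.19) can be cross-checked against the discrete walk. A sketch with assumed illustrative parameters (not values from the book): with p = 0.6 and Δx = Δt = 1, the definitions (1.16) and (1.18) give v = 0.2 and D = 0.48, so after t steps an ensemble of walkers should show ⟨x⟩ ≈ vt and variance ≈ 2Dt.

```python
import random
import statistics

random.seed(7)

p, dx, dt = 0.6, 1.0, 1.0            # assumed illustrative parameters
v = 2 * (p - 0.5) * dx / dt          # drift velocity, eq. (1.18): 0.2
D = 2 * p * (1 - p) * dx**2 / dt     # diffusion coefficient, eq. (1.16): 0.48

def position(n_steps):
    """Final position of a biased +-1 walk after n_steps jumps."""
    return sum(dx if random.random() < p else -dx for _ in range(n_steps))

t = 500
finals = [position(t) for _ in range(4000)]

mean = statistics.fmean(finals)
var = statistics.pvariance(finals)
print(f"<x> = {mean:.1f}  (theory v*t   = {v * t:.1f})")
print(f"var = {var:.1f} (theory 2*D*t = {2 * D * t:.1f})")
```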

To derive the evolution equation for the probability density directly from the discrete random walker, we now approach the walker's behavior from the perspective of rate equations. This shift in perspective allows us to close the loop in our analysis.

The probability of finding the discrete random walker at position m at time N + 1 is determined by its position one time step earlier. Since the walker performs exactly one jump during each time interval \Delta t, we have

p(m, N + 1) = p\,p(m - 1, N) + q\,p(m + 1, N). \qquad (1.21)

The walker has to jump into m either from the left (m - 1) or from the right (m + 1) neighboring position. Equation (1.21) is an example of a master equation for a stochastic process. In the next chapter we will discuss for which types of stochastic processes this form of evolution equation holds.
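The master equation (1.21) can be iterated exactly on a computer. A minimal sketch, assuming the walker starts at m = 0 with probability one (p = 0.7 is an arbitrary illustrative choice); the resulting mean and variance should match the binomial values ⟨m⟩ = (p − q)N and σ² = 4Npq.

```python
p, q = 0.7, 0.3
N = 200

# probability distribution over positions m, initially concentrated at m = 0
prob = {0: 1.0}
for _ in range(N):
    new = {}
    for m, pm in prob.items():
        # eq. (1.21): probability flows right with weight p, left with q
        new[m + 1] = new.get(m + 1, 0.0) + p * pm
        new[m - 1] = new.get(m - 1, 0.0) + q * pm
    prob = new

total = sum(prob.values())
mean = sum(m * pm for m, pm in prob.items())
var = sum(m * m * pm for m, pm in prob.items()) - mean**2
print(f"normalization {total:.6f}, <m> = {mean:.2f}, var = {var:.2f}")
# exact binomial values: <m> = (p - q)N = 80, sigma^2 = 4Npq = 168
```

The iteration conserves total probability, as a master equation must, and reproduces the binomial moments exactly up to floating-point round-off.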

In order to introduce the drift velocity v and the diffusion coefficient D into this equation, we subtract p(m, N) from both sides of (1.21) and divide by \Delta t:

\frac{p(m, N + 1) - p(m, N)}{\Delta t} = \frac{p\,p(m - 1, N) + q\,p(m + 1, N) - p(m, N)}{\Delta t}.

Identifying p(m \pm 1, N) with p(x \pm \Delta x, t) and Taylor expanding the right-hand side to second order in \Delta x gives

-\,(p - q)\frac{\Delta x}{\Delta t}\,\frac{\partial}{\partial x} p(x, t) + \frac{(\Delta x)^2}{2\,\Delta t}\,\frac{\partial^2}{\partial x^2} p(x, t) + \cdots,

where (p - q)\Delta x/\Delta t = v and, since pq \to 1/4 in the limit, (\Delta x)^2/(2\,\Delta t) \to D. When we now perform the continuum limit of this equation keeping v and D constant, we again arrive at (1.20). The Fickian diffusion equation can therefore be derived via a prescribed limiting procedure from the rate equation (1.21). Most important is the unfamiliar requirement (\Delta x)^2/\Delta t = \text{const}, which does not occur in deterministic motion and which captures the diffusive behavior, \langle x^2 \rangle \propto t, of the random walker.

1.2.2 From Binomial to Poisson

The Gaussian distribution is not the only limiting distribution that can be derived from the binomial distribution. To arrive at the Gaussian distribution we had to require Np \to \infty for N \to \infty. Let us ask what the limiting distribution is when instead p \to 0 with \lambda = Np kept constant as N \to \infty.

Again we are only interested in the behavior of the distribution close to its maximum and expectation value, i.e., for r \approx Np; however, now r \ll N, and

p_N(r) = \frac{N!}{r!\,(N - r)!}\, p^r q^{N - r} \approx \frac{(Np)^r}{r!}\,(1 - p)^N,

where we have approximated all factors from (N - 1) down to (N - r + 1) by N. Using (1 - p)^N = (1 - \lambda/N)^N \to e^{-\lambda}, we arrive at the Poisson distribution,

p(r) = \frac{\lambda^r}{r!}\, e^{-\lambda},

which is fully defined by its first moment \lambda = \langle r \rangle = Np. The two limiting cases of the binomial distribution we have derived are illustrated in Figs. 1.3 and 1.4.

Comparing the approximations, we see that for N = 1000 and p = 0.8 the binomial and Gaussian distributions are indistinguishable, while the Poisson distribution with the same mean is significantly broader and fails to approximate the binomial distribution. Conversely, for p = 0.01 at the same N = 1000, the Poisson distribution becomes the superior approximation: it captures the asymmetry of the binomial distribution around its maximum, which the inherently symmetric Gaussian cannot.

Fig. 1.3 Plot of the binomial distribution for a number of steps N = 1000 and the probability of a jump to the right p = 0.8 (open circles), compared with the Gaussian approximation with the same mean and width (solid curve) and the Poisson distribution with the same mean (dashed curve)

Fig. 1.4 Plot of the binomial distribution for a number of steps N = 1000 and the probability of a jump to the right p = 0.01 (open circles), compared with the Gaussian approximation with the same mean and width (solid curve) and the Poisson distribution with the same mean (dashed curve)

For larger values of r, both the binomial and the Poisson distribution carry more probability than the Gaussian; close to r = 0, this trend is reversed.
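The two regimes of Figs. 1.3 and 1.4 can be reproduced numerically without plotting by comparing the three distributions at their common mean. A sketch using only the standard library; evaluating the probabilities in log space (via `math.lgamma`) avoids floating-point overflow at N = 1000.

```python
import math

def binom(N, p, r):
    """Binomial pmf, computed via logs to avoid overflow for large N."""
    logb = (math.lgamma(N + 1) - math.lgamma(r + 1) - math.lgamma(N - r + 1)
            + r * math.log(p) + (N - r) * math.log(1 - p))
    return math.exp(logb)

def gauss(mu, sig2, r):
    """Gaussian density with matching mean and variance."""
    return math.exp(-(r - mu) ** 2 / (2 * sig2)) / math.sqrt(2 * math.pi * sig2)

def poisson(lam, r):
    """Poisson pmf with the same mean, also via logs."""
    return math.exp(r * math.log(lam) - lam - math.lgamma(r + 1))

N = 1000
for p in (0.8, 0.01):                 # the two cases of Figs. 1.3 and 1.4
    mu, sig2 = N * p, N * p * (1 - p)
    r = round(mu)
    print(f"p={p}: binomial {binom(N, p, r):.4f}, "
          f"Gaussian {gauss(mu, sig2, r):.4f}, Poisson {poisson(mu, r):.4f}")
```

For p = 0.8 the Gaussian tracks the binomial peak while the Poisson value is far too low (it is broader); for p = 0.01 the Poisson value matches the binomial closely.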

To quantify such asymmetry in a distribution, one must look at higher moments. For the Poisson distribution one can easily derive the following recurrence relation between the moments,

\langle r^{n+1} \rangle = \lambda \left( \langle r^n \rangle + \frac{d}{d\lambda} \langle r^n \rangle \right),

where \lambda = Np = \langle r \rangle. So we have for the first four moments

\langle r^0 \rangle = 1, \quad \langle r \rangle = \lambda, \quad \langle r^2 \rangle = \lambda + \lambda^2, \quad \langle r^3 \rangle = \lambda + 3\lambda^2 + \lambda^3, \quad \langle r^4 \rangle = \lambda + 7\lambda^2 + 6\lambda^3 + \lambda^4.

The second central moment, \sigma^2 = \langle (r - \langle r \rangle)^2 \rangle = \lambda, measures the width of the distribution, while the normalized third central moment, the skewness, quantifies its asymmetry:

\hat{\kappa}_3 = \frac{\langle (r - \langle r \rangle)^3 \rangle}{\sigma^3} = \lambda^{-1/2}.

The normalized fourth central moment, or kurtosis, measures the 'fatness' of the distribution, i.e., the excess probability in the tails:

\hat{\kappa}_4 = \frac{\langle (r - \langle r \rangle)^4 \rangle - 3 \langle (r - \langle r \rangle)^2 \rangle^2}{\sigma^4} = \lambda^{-1}. \qquad (1.27)

"Excess" has to be understood in relation to the Gaussian distribution,

p_G(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x - \langle x \rangle)^2}{2\sigma^2}\right), \qquad (1.28)

for which, due to symmetry, all odd central moments \langle (x - \langle x \rangle)^n \rangle with n \geq 3 are zero. Using the substitution y = (x - \langle x \rangle)^2 / 2\sigma^2 we find for the even central moments

\langle (x - \langle x \rangle)^{2n} \rangle = \frac{(2\sigma^2)^n}{\sqrt{\pi}}\, \Gamma\!\left(n + \frac{1}{2}\right),

where \Gamma(x) is the Gamma function. From this one concludes that \hat{\kappa}_4 = 0 for the Gaussian distribution. Note also that for \lambda \to \infty the Poisson distribution converges to a Gaussian distribution.
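The moment formulas and the resulting skewness λ^(-1/2) and excess kurtosis λ^(-1) can be verified by summing the Poisson probabilities directly (λ = 4 is an arbitrary example value, and the truncation point R is an assumption that leaves negligible tail mass).

```python
import math

lam = 4.0        # arbitrary example value of the mean
R = 200          # truncation point; the tail beyond R carries negligible mass

pmf = [math.exp(r * math.log(lam) - lam - math.lgamma(r + 1)) for r in range(R)]

def moment(n):
    """Raw moment <r^n> of the (truncated) Poisson distribution."""
    return sum(r ** n * p for r, p in zip(range(R), pmf))

m1, m2, m3, m4 = (moment(n) for n in range(1, 5))
sig2 = m2 - m1 ** 2
c3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3                    # third central moment
c4 = m4 - 4 * m1 * m3 + 6 * m1**2 * m2 - 3 * m1**4     # fourth central moment

skew = c3 / sig2 ** 1.5
kurt = (c4 - 3 * sig2 ** 2) / sig2 ** 2
print(f"sigma^2 = {sig2:.4f} (lambda = {lam})")
print(f"skewness {skew:.4f} vs lambda^-1/2 = {lam ** -0.5:.4f}")
print(f"excess kurtosis {kurt:.4f} vs lambda^-1 = {1 / lam:.4f}")
```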

1.2.3 Log-Normal Distribution

The log-normal distribution is an important example of a strongly asymmetric distribution. It is obtained from the Gaussian distribution by the transformation x = \ln y (so that y > 0), with \langle x \rangle = \ln y_0. Requiring p_G(x)\,dx = p_{LN}(y)\,dy yields

p_{LN}(y) = \frac{1}{\sqrt{2\pi\sigma^2}}\, \frac{1}{y}\, \exp\left(-\frac{\ln^2(y/y_0)}{2\sigma^2}\right).

The log-normal distribution is normalized in the following way,

\int_0^\infty p_{LN}(y)\, dy = 1,

and its moments are given by

\langle y^n \rangle = y_0^n\, e^{n^2 \sigma^2 / 2}.

The maximum of the distribution is at

y_{\max} = y_0\, e^{-\sigma^2}. \qquad (1.34)

A further quantity to characterize the location of a distribution is its median, the point where the cumulative probability is equal to one-half,

\int_0^{y_{\text{med}}} p_{LN}(y)\, dy = \frac{1}{2}. \qquad (1.35)

With x = \ln y and x_{\text{med}} = \ln y_{\text{med}}, this is equivalent to \int_{-\infty}^{x_{\text{med}}} p_G(x)\, dx = 1/2, or, because of the symmetry of the Gaussian distribution,

x_{\text{med}} = \langle x \rangle = \ln y_0 \quad \Rightarrow \quad y_{\text{med}} = y_0. \qquad (1.36)


Fig. 1.5 Plot of the log-normal distribution, with vertical dashed lines indicating the positions of the maximum, median and mean of the distribution

We therefore find the relative order

y_{\max} = y_0\, e^{-\sigma^2} \; < \; y_{\text{med}} = y_0 \; < \; \langle y \rangle = y_0\, e^{\sigma^2/2}.
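This ordering of mode, median, and mean can be confirmed by sampling, since y = e^x with Gaussian x is log-normal by construction (y₀ = 1 and σ = 0.8 are arbitrary example parameters, not values from the book).

```python
import math
import random
import statistics

random.seed(3)
y0, sigma = 1.0, 0.8     # arbitrary example parameters

# sample y = exp(x) with x ~ Normal(ln y0, sigma^2)
ys = [math.exp(random.gauss(math.log(y0), sigma)) for _ in range(200_000)]

median = statistics.median(ys)
mean = statistics.fmean(ys)
mode_theory = y0 * math.exp(-sigma ** 2)       # y_max  = y0 e^{-sigma^2}
median_theory = y0                             # y_med  = y0
mean_theory = y0 * math.exp(sigma ** 2 / 2)    # <y>    = y0 e^{sigma^2/2}

print(f"mode (theory) {mode_theory:.3f} < median {median:.3f} "
      f"(theory {median_theory}) < mean {mean:.3f} (theory {mean_theory:.3f})")
```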

