ergodicity.process package

Submodules

ergodicity.process.basic module

basic.py

This module provides foundational stochastic processes used in simulations, particularly focusing on Itô and non-Itô processes. These processes are employed to model various types of continuous-time stochastic behaviors, including Brownian motion, Bessel processes, and more complex, specialized stochastic processes like the Brownian bridge, Brownian meander, and fractional Brownian motion.

Classes:

  • EmptyProcess: A process that remains constant at 0 or 1, used for placeholder or null-action purposes.

  • StandardBesselProcess: Represents a standard Bessel process, modeling the Euclidean distance of a Brownian motion from the origin.

  • StandardBrownianBridge: Models a Brownian motion constrained to start and end at specified points over a fixed time interval.

  • StandardBrownianExcursion: Models a Brownian motion conditioned to stay positive and to start and end at zero over a fixed time interval.

  • StandardBrownianMeander: Models a Brownian motion conditioned to stay positive with an unconstrained endpoint.

  • BrownianMotion: Represents standard Brownian motion (Wiener process), a fundamental stochastic process in various fields.

  • CauchyProcess: Models random motion with heavy-tailed distributions following a Cauchy distribution.

  • StandardFractionalBrownianMotion: Models fractional Brownian motion with long-range dependence and self-similarity, governed by the Hurst parameter.

  • FractionalBrownianMotion: Extends the fractional Brownian motion to include a deterministic trend (mean).

  • GammaProcess: Models a process with independent, stationary increments following a gamma distribution.

  • InverseGaussianProcess: Models independent, stationary increments following an inverse Gaussian distribution.

  • StandardMultifractionalBrownianMotion: Represents a multifractional Brownian motion with a time-varying Hurst parameter.

  • SquaredBesselProcess: Models the square of the Euclidean norm of a d-dimensional Brownian motion.

  • VarianceGammaProcess: Represents a variance-gamma process with a mix of Gaussian and gamma process characteristics.

  • WienerProcess: A standard implementation of Brownian motion, a cornerstone of stochastic models.

  • PoissonProcess: Models the occurrence of random events at a constant average rate, a pure jump process.

  • LevyStableProcess: Generalizes the Gaussian distribution to allow for heavy tails and skewness.

  • LevyStableStandardProcess: A standardized version of the Lévy stable process.

  • MultivariateBrownianMotion: Models correlated Brownian motion in multiple dimensions.

  • MultivariateLevy: Extends the Lévy stable process to multiple dimensions, allowing for complex, correlated phenomena.

  • GeneralizedHyperbolicProcess: A versatile process encompassing a wide range of distributions like variance-gamma and normal-inverse Gaussian.

  • ParetoProcess: Represents a process based on the Pareto distribution, modeling heavy-tailed phenomena.

Dependencies:

  • math, numpy, matplotlib, plotly: Libraries used for mathematical operations and visualization.

  • scipy.stats: Statistical functions used to model different distributions.

  • stochastic: Provides the base stochastic processes extended by this module.

  • aiohttp.client_exceptions: Used for exception handling in certain client processes.

This module is essential for defining different stochastic processes used throughout the library, including basic and advanced processes for financial modeling, physics, biology, and more.

class ergodicity.process.basic.BrownianMotion(name: str = 'Standard Brownian Motion', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.brownian_motion.BrownianMotion'>, drift: float = 0.0, scale: float = 1.0)[source]

Bases: ItoProcess

BrownianMotion represents a fundamental continuous-time stochastic process, also known as Wiener process, which models random motion observed in particles suspended in a fluid. This process, denoted as (W_t)_{t≥0}, is characterized by its independent increments, continuous paths, and Gaussian distribution. Mathematically, for 0 ≤ s < t, the increment W_t - W_s follows a normal distribution N(μ(t-s), σ²(t-s)), where μ is the drift and σ is the scale parameter. Key properties include: stationary and independent increments, continuous sample paths (almost surely), and self-similarity. The process starts at 0 (W_0 = 0) and has an expected value of E[W_t] = μt and variance Var(W_t) = σ²t. As an Itô process, Brownian motion is fundamental in stochastic calculus and serves as a building block for more complex stochastic processes. It finds widespread applications in various fields, including physics (particle diffusion), finance (stock price modeling), biology (population dynamics), and engineering (noise in electronic systems). This implementation allows for both standard (μ = 0, σ = 1) and generalized Brownian motion. The class is initialized with customizable drift and scale parameters, defaulting to standard values. It’s categorized under the “brownian” type, reflecting its nature. The _has_wrong_params attribute is set to True, indicating that the parameters might need adjustment or special handling in certain contexts, particularly when integrating this process into larger systems or when transitioning between different time scales.
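The example below is a minimal usage sketch based on the constructor signature above; it assumes that BrownianMotion inherits the simulate() interface documented for other processes in this module (e.g. EmptyProcess, PoissonProcess), so keyword names and return shape may differ in practice.

>>> from ergodicity.process.basic import BrownianMotion
>>> bm = BrownianMotion(drift=0.05, scale=0.2)  # increments ~ N(0.05*dt, 0.2**2 * dt)
>>> paths = bm.simulate(t=10, timestep=0.01, num_instances=5, save=False, plot=True)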

custom_increment(X: float, timestep: float = 0.01) Any[source]

Custom increment function for the process.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

float

class ergodicity.process.basic.CauchyProcess(name: str = 'Cauchy Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.cauchy.CauchyProcess'>)[source]

Bases: NonItoProcess

CauchyProcess represents a continuous-time stochastic process that models random motion with heavy-tailed distributions. This process, denoted as (C_t)_{t≥0}, is characterized by its stable distribution with index α = 1, making it a special case of Lévy processes. Unlike Brownian motion, the Cauchy process has undefined moments beyond the first order, including an undefined mean and infinite variance. It exhibits several key properties: stationary and independent increments, self-similarity, and sample paths that are continuous but highly irregular with frequent large jumps. For any time interval [s, t], the increment C_t - C_s follows a Cauchy distribution with location parameter 0 and scale parameter |t-s|. The process is non-Gaussian and does not satisfy the conditions of the central limit theorem, leading to its classification as a NonItoProcess. CauchyProcess finds applications in various fields, including physics (modeling resonance phenomena), finance (risk assessment in markets with extreme events), and signal processing (robust statistical methods). It’s particularly useful in scenarios where extreme events or outliers play a significant role. This implementation is initialized with a name and process class, and is categorized under the “cauchy” type. The lack of defined drift and stochastic terms reflects the process’s unique nature, where traditional moment-based analysis does not apply. Researchers and practitioners should be aware of the challenges in working with Cauchy processes, including the inapplicability of standard statistical tools that rely on finite moments.
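For illustration only, the following NumPy/SciPy sketch reproduces the increment property described above (an increment over an interval of length dt follows a Cauchy distribution with location 0 and scale dt); it does not use the class internals.

>>> import numpy as np
>>> from scipy.stats import cauchy
>>> dt = 0.01
>>> increments = cauchy.rvs(loc=0.0, scale=dt, size=1000)  # C_{t+dt} - C_t
>>> path = np.cumsum(increments)  # one simulated Cauchy path on a grid of step dt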

class ergodicity.process.basic.EmptyProcess(name: str = 'Empty Process', zero_or_one: float = 1)[source]

Bases: ItoProcess

A process that remains constant at zero or one. It may be used as a placeholder, for testing purposes, or in the agents module as a null action.

Parameters:
  • name (str) – The name of the process

  • zero_or_one (float) – The value of the process (0 or 1)

custom_increment(X: float, timestep: float = 0.01) Any[source]

Custom increment function for the process.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value, which is always 0 in this case

Return type:

float

simulate(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, plot: bool = False) Any[source]

Simulate the process.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Returns:

The simulated data

Return type:

np.ndarray

class ergodicity.process.basic.FractionalBrownianMotion(name: str = 'Fractional Brownian Motion', process_class: Type[Any] = None, mean: float = 0.0, scale: float = 1.0, hurst: float = 0.5)[source]

Bases: StandardFractionalBrownianMotion

FractionalBrownianMotion extends the StandardFractionalBrownianMotion, offering a more generalized implementation with additional parametric flexibility. This process, denoted as (X^H_t)_{t≥0}, is a continuous-time Gaussian process characterized by its Hurst parameter H and a constant mean μ. It is defined as X^H_t = μt + B^H_t, where B^H_t is the standard fractional Brownian motion. The process inherits key properties from fBm, including self-similarity with parameter H, stationary increments, and long-range dependence for H > 0.5. Its covariance structure is given by Cov(X^H_t, X^H_s) = 0.5(|t|^2H + |s|^2H - |t-s|^2H), independent of μ. The mean parameter allows for modeling scenarios with deterministic trends superimposed on the fractal behavior of fBm. This implementation is particularly useful in fields where both long-term correlations and underlying trends are significant, such as in financial econometrics for modeling asset returns with both momentum and mean-reversion, in climate science for analyzing temperature anomalies with long-term trends, and in telecommunications for studying network traffic with evolving baselines. The class is initialized with a name, optional process class, mean (defaulting to the standard drift term), and Hurst parameter (defaulting to a predefined value). It’s categorized specifically under the “fractional” type, emphasizing its nature as a fractional process. Researchers and practitioners should note that while the added mean parameter enhances modeling flexibility, it does not affect the fundamental fractal properties governed by the Hurst parameter. Care should be taken in estimation and interpretation, particularly in distinguishing between the effects of the mean trend and the intrinsic long-range dependence of the process.
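A minimal usage sketch based on the documented constructor parameters (mean, scale, hurst); it assumes the shared simulate() interface shown for other processes in this module, so the exact call may differ.

>>> from ergodicity.process.basic import FractionalBrownianMotion
>>> fbm = FractionalBrownianMotion(mean=0.1, scale=1.0, hurst=0.7)  # persistent regime (H > 0.5)
>>> paths = fbm.simulate(t=10, timestep=0.01, num_instances=3, plot=True)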

custom_increment(X: float, timestep: float = 0.01) Any[source]

Custom increment function for the process.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

float

class ergodicity.process.basic.GammaProcess(name: str = 'Gamma Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.gamma.GammaProcess'>, rate: float = 1.0, scale: float = 1.0)[source]

Bases: NonItoProcess

GammaProcess represents a continuous-time stochastic process with independent, stationary increments following a gamma distribution. This process, denoted as (G_t)_{t≥0}, is characterized by its rate parameter α > 0 and scale parameter θ > 0. For any time interval [s, t], the increment G_t - G_s follows a gamma distribution with shape α(t-s) and scale θ. Key properties include: strictly increasing sample paths (making it suitable for modeling cumulative processes), infinite divisibility, and self-similarity. The process has expected value E[G_t] = αθt and variance Var(G_t) = αθ²t. As a Lévy process, it possesses jumps and is not a semimartingale, hence its classification as a NonItoProcess. GammaProcess finds diverse applications: in finance for modeling aggregate claims in insurance or cumulative losses, in reliability theory for describing degradation processes, and in physics for studying certain types of particle emissions. It’s particularly useful in scenarios requiring non-negative, increasing processes with possible jumps. This implementation is initialized with a name, process class, rate (α, defaulting to 1.0), and scale (θ, defaulting to a predefined stochastic term). It’s categorized under the “gamma” type. The separate rate and scale parameters offer flexibility in modeling, allowing for fine-tuning of both the frequency (via rate) and magnitude (via scale) of the increments. Practitioners should be aware of the process’s non-Gaussian nature and the implications for statistical analysis and risk assessment, particularly in heavy-tailed scenarios where the gamma process can provide a more realistic model than Gaussian-based alternatives.
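The following illustrative NumPy sketch mirrors the increment distribution described above (an increment over a step dt is gamma-distributed with shape α·dt and scale θ); it is independent of the class implementation.

>>> import numpy as np
>>> rate, theta, dt = 1.0, 1.0, 0.01  # alpha, theta, time step
>>> incs = np.random.gamma(shape=rate * dt, scale=theta, size=1000)
>>> path = np.cumsum(incs)  # strictly increasing sample path, E[G_t] = rate*theta*t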

class ergodicity.process.basic.GeneralizedHyperbolicProcess(name: str = 'Generalized Hyperbolic Process', process_class: ~typing.Type[~typing.Any] = None, plambda: float = 0, alpha: float = 1.7, beta: float = 0, loc: float = 0.0005, delta: float = 0.01, t_scaling: ~typing.Callable[[float], float] = <function GeneralizedHyperbolicProcess.<lambda>>, **kwargs)[source]

Bases: NonItoProcess

GeneralizedHyperbolicProcess represents a highly flexible class of continuous-time stochastic processes that encompasses a wide range of distributions, including normal, Student’s t, variance-gamma, and normal-inverse Gaussian as special or limiting cases. This process, denoted as (X_t)_{t≥0}, is characterized by five parameters: α (tail heaviness), β (asymmetry), μ (location), δ (scale), and λ (a shape parameter, often denoted as ‘a’ in the implementation). The process is defined through its increments, which follow a generalized hyperbolic distribution. Key properties include: semi-heavy tails (heavier than Gaussian but lighter than power-law), ability to model skewness, and a complex autocorrelation structure. The process allows for both large jumps and continuous movements, making it highly adaptable to various phenomena. It’s particularly noted for its capacity to capture both the central behavior and the extreme events in a unified framework. The GeneralizedHyperbolicProcess finds extensive applications in finance for modeling asset returns, particularly in markets exhibiting skewness and kurtosis; in risk management for more accurate tail risk assessment; in physics for describing particle movements in heterogeneous media; and in signal processing for modeling non-Gaussian noise. This implementation is initialized with parameters α, β, μ (loc), and δ (scale), with additional parameters possible through kwargs. It’s categorized under both “generalized” and “hyperbolic” types, reflecting its nature as a broad, hyperbolic-based process. The class uses a custom increment function, indicated by the _external_simulator flag set to False. This allows for precise control over the generation of process increments, crucial for accurately representing the complex distribution. Researchers and practitioners should be aware of the computational challenges in parameter estimation and simulation, particularly in high-dimensional settings or with extreme parameter values. The flexibility of the generalized hyperbolic process comes with increased model complexity, requiring careful consideration in application and interpretation. Its ability to nest simpler models allows for sophisticated hypothesis testing and model selection in empirical studies.

Attention! The class must be used with caution. The generalized hyperbolic distribution is not convolution-closed: the sum of two generalized hyperbolic random variables may not itself be a generalized hyperbolic random variable. This makes simulating the process with finite increments problematic. A related problem is that it is unclear how to scale the time increments dt in the simulations. The process may not be self-similar; and if it is made self-similar by design, many options are possible, each leading to a different process.
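A minimal usage sketch based on the constructor and the methods documented below; parameter values are arbitrary and chosen only for illustration.

>>> from ergodicity.process.basic import GeneralizedHyperbolicProcess
>>> gh = GeneralizedHyperbolicProcess(plambda=0, alpha=1.7, beta=0, loc=0.0005, delta=0.01)
>>> print(gh.differential())  # string form of the process differential
>>> dX = gh.custom_increment(X=0.0, timestep=0.01)  # one simulated increment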

property a

Return the shape parameter ‘a’ of the process, parametrized for use with scipy.

Returns:

The shape parameter ‘a’

Return type:

float

property alpha

Return the shape parameter (α) of the process.

Returns:

The shape parameter (α)

Return type:

float

apply_time_scaling(t: float)[source]

Apply the time scaling function to the given time (increment) value.

Parameters:

t (float) – The time (increment) value

Returns:

The scaled time value

Return type:

float

property b

Return the skewness parameter ‘b’ of the process, parametrized for use with scipy.

Returns:

The skewness parameter ‘b’

Return type:

float

property beta

Return the skewness parameter (β) of the process.

Returns:

The skewness parameter (β)

Return type:

float

custom_increment(X: float, timestep: float = 0.01) Any[source]

Custom increment function for the process.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

float

property delta

Return the scale parameter (δ) of the process.

Returns:

The scale parameter (δ)

Return type:

float

differential() str[source]

Return the differential equation of the process.

Returns:

The differential equation of the process

Return type:

str

express_as_elementary() str[source]

Express a given Generalized Hyperbolic process as a function of an elementary Hyperbolic process.

Returns:

The Generalized Hyperbolic process expressed in terms of elementary Hyperbolic processes

Return type:

str

property loc

Return the location parameter (μ) of the process.

Returns:

The location parameter (μ)

Return type:

float

property plambda

Return the shape parameter ‘λ’ of the process.

Returns:

The shape parameter ‘λ’

Return type:

float

property scale

Return the scale parameter ‘scale’ of the process, parametrized for use with scipy.

Returns:

The scale parameter ‘scale’

Return type:

float

property t_scaling

Return the time scaling function of the process.

Returns:

The time scaling function

Return type:

Callable[[float], float]

class ergodicity.process.basic.InverseGaussianProcess(name: str = 'Inverse Gaussian Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.inverse_gaussian.InverseGaussianProcess'>, mean: ~typing.Callable[[float], float] = <function InverseGaussianProcess.<lambda>>, scale: float = 1.0)[source]

Bases: NonItoProcess

InverseGaussianProcess represents a continuous-time stochastic process with independent, stationary increments following an inverse Gaussian distribution. This process, denoted as (IG_t)_{t≥0}, is characterized by its mean function μ(t) and scale parameter λ > 0. For any time interval [s, t], the increment IG_t - IG_s follows an inverse Gaussian distribution with mean μ(t) - μ(s) and shape parameter λ(t-s). Key properties include: strictly increasing sample paths, infinite divisibility, and a more complex self-similarity structure compared to the Gamma process. The process has expected value E[IG_t] = μ(t) and variance Var(IG_t) = μ(t)³/λ. As a Lévy process, it exhibits jumps and is not a semimartingale, hence its classification as a NonItoProcess. InverseGaussianProcess finds applications in various fields: in finance for modeling first passage times in diffusion processes or asset returns with asymmetric distributions, in hydrology for describing particle transport in porous media, and in reliability theory for modeling degradation processes with a natural barrier. It’s particularly useful in scenarios requiring non-negative, increasing processes with possible jumps and where the relationship between mean and variance is non-linear. This implementation is initialized with a name, process class, mean function (defaulting to the identity function λ(t) = t), and scale parameter (defaulting to a predefined stochastic term). It’s categorized under both “inverse” and “gaussian” types, reflecting its nature as an inverse Gaussian process. The flexible mean function allows for modeling time-varying trends, while the scale parameter controls the variability of the process. Practitioners should be aware of the process’s unique distributional properties, particularly its skewness and heavy right tail, which can be advantageous in modeling phenomena with occasional large positive deviations.
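Per the documented signature, the mean argument is a callable of time; the sketch below passes a simple linear mean function and is illustrative only.

>>> from ergodicity.process.basic import InverseGaussianProcess
>>> ig = InverseGaussianProcess(mean=lambda t: 2.0 * t, scale=1.0)  # mean function mu(t) = 2t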

class ergodicity.process.basic.LevyStableProcess(name: str = 'Levy Stable Process', process_class: Type[Any] = None, alpha: float = 2, beta: float = 0, scale: float = 0.7071067811865476, loc: float = 0, comments: bool = True, tempering: float = 0, truncation_level: float = None, truncation_type: str = 'hard', scaled_scale: bool = False)[source]

Bases: NonItoProcess

LevyStableProcess represents a versatile class of continuous-time stochastic processes that generalize the Gaussian distribution to allow for heavy tails and skewness. This process, denoted as (X_t)_{t≥0}, is characterized by four parameters: α (stability), β (skewness), σ (scale), and μ (location). The α parameter, ranging from 0 to 2, determines the tail heaviness, with α = 2 corresponding to Gaussian behavior. Key properties include: stable distribution of increments, self-similarity, and, for α < 2, infinite variance and potential for large jumps. The process offers remarkable flexibility, encompassing Gaussian (α = 2), Cauchy (α = 1, β = 0), and Lévy (α = 0.5, β = 1) processes as special cases. This implementation extends the basic Lévy stable process with options for tempering and truncation, allowing for more nuanced modeling of extreme events. Tempering introduces exponential decay in the tails, while truncation (either ‘hard’ or ‘soft’) limits the maximum jump size. These modifications can be crucial in financial modeling to ensure finite moments or in physical systems with natural limits. The process finds wide applications in finance for modeling asset returns and risk, in physics for describing anomalous diffusion, in telecommunications for network traffic analysis, and in geophysics for modeling natural phenomena. The class includes built-in validity checks and informative comments about special cases. It allows for scaled parameterization and provides methods for generating increments, including tempered and truncated variants. The differential and elementary expressions offer insights into the process’s structure. Researchers and practitioners should be aware of the computational challenges in simulating and estimating Lévy stable processes, particularly for small α values, and the interpretative complexities introduced by tempering and truncation. This implementation strikes a balance between the theoretical richness of Lévy stable processes and the practical needs of modeling real-world phenomena with potentially bounded extreme events.
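A minimal usage sketch based on the constructor and the characteristic_function() and truncate() members documented below; the exact string returned by characteristic_function() and the numeric behavior of truncate() depend on the library implementation.

>>> from ergodicity.process.basic import LevyStableProcess
>>> lsp = LevyStableProcess(alpha=1.5, beta=0.3, scale=1.0, loc=0.0,
...                         tempering=0.1, truncation_level=10.0, truncation_type='hard')
>>> print(lsp.characteristic_function())
>>> lsp.truncate(25.0)  # expected to be capped at the truncation level under 'hard' truncation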

property alpha

The stability parameter (0 < α ≤ 2).

Returns:

The stability parameter

Return type:

float

property beta

The skewness parameter (-1 ≤ β ≤ 1).

Returns:

The skewness parameter

Return type:

float

characteristic_function() str[source]

Express the characteristic function of the Lévy stable distribution corresponding to the process.

Returns:

The characteristic function of the process

Return type:

str

custom_increment(X: float, timestep: float = 0.01) Any[source]

Custom increment function for the process.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

float

differential() str[source]

Express the Levy process as a differential equation.

Returns:

The differential equation of the process

Return type:

str

express_as_elementary() str[source]

Express a given Levy process as a function of an elementary Levy process.

Returns:

The Levy process expressed in terms of elementary Levy processes

Return type:

str

property loc

The location parameter (μ).

Returns:

The location parameter

Return type:

float

property scale

The scale parameter (σ > 0).

Returns:

The scale parameter

Return type:

float

property scaled_scale

This getter method is needed because the scale parameter is represented on a different scale in the underlying library used for simulation.

Returns:

The scaled scale parameter

Return type:

bool

property tempered

Whether the process is tempered.

Returns:

Whether the process is tempered

Return type:

bool

tempered_stable_rvs(timestep)[source]

Generate a tempered stable random variable. Tempering is applied by multiplying the Levy stable random variable with an exponential factor. Tempering parameter must be non-negative.

Parameters:

timestep (float) – The time step for the simulation

Returns:

A tempered stable random variable

Return type:

float

property tempering

The tempering parameter (λ).

Returns:

The tempering parameter

Return type:

float

truncate(value: float) float[source]

Truncate the value based on the truncation level and type. Truncation can be ‘hard’ (capping the value) or ‘soft’ (applying exponential tempering). Truncation is a common technique to limit the impact of extreme values in a process. It may need to be applied to a Lévy process because of its heavy tails, especially for small alpha. The truncation parameter must be positive.

Parameters:

value (float) – The value to truncate

Returns:

The truncated value

Return type:

float

Raises:

ValueError – If the truncation type is invalid (not ‘hard’ or ‘soft’)

property truncated

Whether the process is truncated.

Returns:

Whether the process is truncated

Return type:

bool

property truncation_level

The truncation level for the process.

Returns:

The truncation level

Return type:

float

property truncation_type

The type of truncation (‘hard’ or ‘soft’).

Returns:

The type of truncation

Return type:

str

class ergodicity.process.basic.LevyStableStandardProcess(name: str = 'Standard Levy Stable Process', process_class: Type[Any] = None, alpha: float = 1, beta: float = 0)[source]

Bases: LevyStableProcess

LevyStableStandardProcess represents a standardized version of the Lévy stable process, a class of continuous-time stochastic processes known for their ability to model heavy-tailed distributions and asymmetry. This process, denoted as (X_t)_{t≥0}, is a special case of the general Lévy stable process with fixed scale and location parameters, set to 1/2**0.5 and 0 respectively. The scale is fixed at 1/2**0.5 because it corresponds to the standard deviation of the process in the Gaussian case (α = 2). The process is primarily characterized by two parameters:

  1. α (alpha): The stability parameter, where 0 < α ≤ 2. This parameter determines the tail heaviness of the distribution. As α approaches 2, the process behaves more like Brownian motion, while smaller values lead to heavier tails and more extreme jumps.

  2. β (beta): The skewness parameter, where -1 ≤ β ≤ 1. This parameter controls the asymmetry of the distribution. When β = 0, the process is symmetric.

The process is standardized with a scale parameter of 1/√2 and a location parameter of 0. This standardization allows for easier comparison and analysis across different α and β combinations.

Key properties of the LevyStableStandardProcess include:

  • Self-similarity: The distribution of the process at any time t is the same as that at time 1, up to a scaling factor.

  • Stable distributions: The sum of independent copies of the process follows the same distribution, up to scaling and shifting.

  • Potential for infinite variance: For α < 2, the process has infinite variance, capturing extreme events more effectively than Gaussian processes.

The ‘standard’ type is added to the process classification. This allows for more straightforward theoretical analysis and comparison between different parameterizations of the Lévy stable family.

Researchers and practitioners should be aware that while this standardized form offers analytical advantages, it may require rescaling and shifting for practical applications. The process’s rich behavior, especially for α < 2, necessitates careful interpretation and often specialized numerical methods for simulation and statistical inference.

class ergodicity.process.basic.MultivariateBrownianMotion(name: str = 'Multivariate Brownian Motion', process_class: Type[Any] = None, drift: List[float] = array([0., 0., 0.]), scale: List[List[float]] = array([[1., 0.6, 0.3], [0.6, 1., 0.6], [0.3, 0.6, 1.]]))[source]

Bases: ItoProcess

MultivariateBrownianMotion represents a generalization of the standard Brownian motion to multiple dimensions, providing a powerful tool for modeling correlated random processes in various fields. This continuous-time stochastic process, denoted as (X_t)_{t≥0} where X_t is a vector in R^n, is characterized by its drift vector μ ∈ R^n and a positive semi-definite covariance matrix Σ. For any time interval [s,t], the increment X_t - X_s follows a multivariate normal distribution N(μ(t-s), Σ(t-s)). Key properties include: independent and stationary increments, continuous sample paths in each dimension, and the preservation of the Markov property. The process starts at 0 (X_0 = 0) and has an expected value of E[X_t] = μt and covariance matrix Cov(X_t) = Σt. As a multivariate Itô process, it extends the mathematical framework of stochastic calculus to vector-valued processes, enabling the modeling of complex, interrelated phenomena. MultivariateBrownianMotion finds extensive applications across various domains: in finance for modeling correlated asset prices and risk factors, in physics for describing the motion of particles in multiple dimensions, in biology for analyzing the joint evolution of different species or genes, and in engineering for simulating multi-dimensional noise in control systems. This implementation is initialized with a name, optional process class, drift vector, and scale matrix (representing Σ), allowing for flexible specification of the process’s statistical properties. It’s categorized under both “multivariate” and “brownian” types, reflecting its nature as a vector-valued extension of Brownian motion. The class handles the dimensionality automatically based on the input drift vector, and stores the state in the _X attribute. The _external_simulator flag is set to False, indicating that the simulation is handled internally. Researchers and practitioners should be aware of the increased complexity in simulating and analyzing multivariate processes, particularly in high dimensions, and the importance of ensuring the positive semi-definiteness of the scale matrix for valid covariance structures.
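A minimal 2D usage sketch following the documented signature (drift as a vector, scale as a positive semi-definite matrix) and the simulate_2d() method listed below; values are illustrative.

>>> import numpy as np
>>> from ergodicity.process.basic import MultivariateBrownianMotion
>>> drift = np.array([0.0, 0.0])
>>> cov = np.array([[1.0, 0.5],
...                 [0.5, 1.0]])  # must be positive semi-definite
>>> mbm = MultivariateBrownianMotion(drift=drift, scale=cov)
>>> data = mbm.simulate_2d(t=10, timestep=0.01, save=False, plot=True)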

custom_increment(X: List[float], timestep: float = 0.01) Any[source]

Custom increment function for the process.

Parameters:
  • X (List[float]) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

List[float]

simulate(t: float = 10, timestep: float = 0.01, save: bool = False, plot: bool = False) Any[source]

Simulate the Multivariate Brownian Motion process, plot, and save it.

Parameters:
  • t (float) – The time horizon for the simulation

  • timestep (float) – The time step for the simulation

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Returns:

The simulated process dataset as a NumPy array of shape (num_instances+1, num_steps)

Return type:

Any

simulate_2d(t: float = 10, timestep: float = 0.01, save: bool = False, plot: bool = False) Any[source]

Simulate a 2D Multivariate Brownian Motion.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Raises:

ValueError – If the process is not 2D

Returns:

Simulated 2D data array

Return type:

Any

simulate_3d(t: float = 10, timestep: float = 0.01, save: bool = False, plot: bool = False) Any[source]

Simulate a 3D Multivariate Brownian Motion.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Raises:

ValueError – If the process is not 3D

Returns:

Simulated 3D data array

Return type:

Any

simulate_live(t: float = 10, timestep: float = 0.01) Any[source]

Simulate the process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

Returns:

Video file name of the simulation

Return type:

str

simulate_live_2d(t: float = 10, timestep: float = 0.01, save: bool = False, speed: float = 1.0) str[source]

Simulate the 2D process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the simulation data

  • speed (float) – Speed multiplier for the video

Returns:

Video file name of the simulation

Return type:

str

simulate_live_2dt(t: float = 10, timestep: float = 0.01, save: bool = False, speed: float = 1.0) tuple[str, str][source]

Simulate the 2D process live with time as the third dimension and save as a video file and interactive plot.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the simulation data

  • speed (float) – Speed multiplier for the video (default is 1.0, higher values make the video faster)

Returns:

Tuple of video file name and interactive plot file name

Return type:

tuple[str, str]

simulate_live_3d(t: float = 10, timestep: float = 0.01, save: bool = False, speed: float = 1.0) str[source]

Simulate the 3D process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the simulation data

  • speed (float) – Speed multiplier for the video

Returns:

Video file name of the simulation

Return type:

str

simulate_weights(t: float = 10, timestep: float = 0.01, save: bool = True, plot: bool = False) ndarray[source]

Simulate the weights (relative shares) of the instances of a Multivariate Brownian Motion process.

Parameters:
  • t (float) – The time horizon for the simulation

  • timestep (float) – The time step for the simulation

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Returns:

The simulated weights dataset as a NumPy array of shape (num_instances+1, num_steps)

Return type:

np.ndarray

class ergodicity.process.basic.MultivariateLevy(name: str = 'Multivariate Levy Stable Process', process_class: Type[Any] = None, alpha: float = 1.5, beta: float = 0, loc: ndarray = array([0., 0., 0.]), comments: bool = False, tempering: float = 0, truncation_level: float = None, truncation_type: str = 'hard', pseudocorrelation_matrix: ndarray = array([[1., 0.6, 0.3], [0.6, 1., 0.6], [0.3, 0.6, 1.]]), pseudovariances: ndarray = array([1, 1, 1]))[source]

Bases: LevyStableProcess

MultivariateLevy represents a sophisticated extension of the Lévy stable process to multiple dimensions, providing a powerful tool for modeling complex, correlated heavy-tailed phenomena. This continuous-time stochastic process, denoted as (X_t)_{t≥0} where X_t is a vector in R^n, inherits the fundamental characteristics of Lévy stable distributions while incorporating cross-dimensional dependencies.

Key parameters:

  • α (alpha): Stability parameter (0 < α ≤ 2), governing tail heaviness across all dimensions.

  • β (beta): Skewness parameter (-1 ≤ β ≤ 1), controlling asymmetry.

  • scale: Global scale parameter for the process.

  • loc: Location vector (μ ∈ R^n), shifting the process in each dimension.

  • pseudocorrelation_matrix: Specifies the correlation structure between dimensions.

  • pseudovariances: Vector of pseudovariances for each dimension, generalizing the concept of variance.

Advanced features:

  • Tempering: Optional exponential tempering to ensure finite moments.

  • Truncation: ‘Hard’ or ‘soft’ truncation options to limit extreme values.

The process is constructed using a Cholesky decomposition of the correlation matrix, scaled by pseudovariances, ensuring a valid covariance structure. This approach allows for modeling complex interdependencies while maintaining the heavy-tailed nature of Lévy stable processes in each dimension.

Key properties:

  1. Multivariate stability: The sum of independent copies of the process follows the same distribution, up to affine transformations.

  2. Heavy tails and potential infinite variance in each dimension for α < 2.

  3. Complex dependency structures captured by the correlation matrix and pseudovariances.

The class implements custom simulation methods, including a specialized increment function that respects the multivariate structure. It includes extensive error checking to ensure the validity of input parameters, particularly for the correlation matrix and pseudovariances.

Researchers and practitioners should be aware of the computational challenges in simulating and estimating multivariate Lévy stable processes, especially in high dimensions or with extreme parameter values. The interplay between α, the correlation structure, and pseudovariances requires careful interpretation. While offering great flexibility in modeling complex, heavy-tailed multivariate phenomena, users should exercise caution in parameter selection and model interpretation, particularly when dealing with empirical data.
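A minimal 2D usage sketch based on the documented constructor parameters (pseudocorrelation_matrix, pseudovariances) and the simulate_2d() method listed below; values are arbitrary.

>>> import numpy as np
>>> from ergodicity.process.basic import MultivariateLevy
>>> corr = np.array([[1.0, 0.4],
...                  [0.4, 1.0]])
>>> mlv = MultivariateLevy(alpha=1.7, beta=0.0, loc=np.zeros(2),
...                        pseudocorrelation_matrix=corr,
...                        pseudovariances=np.array([1.0, 2.0]))
>>> data = mlv.simulate_2d(t=1.0, timestep=0.01, plot=True)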

custom_increment(X: ndarray, timestep: float = 1.0) ndarray[source]

Generate a custom increment for the multivariate process.

Parameters:
  • X (np.ndarray) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The multivariate increment

Return type:

np.ndarray

property loc

Return the location vector of the process.

Returns:

The location vector

Return type:

np.ndarray

plot(times, data, save: bool = False, plot: bool = False)[source]

Plot the simulation results.

Parameters:
  • times (np.ndarray) – The time points for the simulation

  • data (np.ndarray) – The simulated data array

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to display the plot

Raises:

ValueError – If there are insufficient time points to plot the simulation

Returns:

None

Return type:

None

plot_2d(data_2d: ndarray, save: bool = False, plot: bool = True)[source]

Plot the 2D simulation results.

Parameters:
  • data_2d (np.ndarray) – 2D simulation data

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to display the plot

Returns:

None

Return type:

None

plot_2dt(data_2d: ndarray, save: bool = False, plot: bool = True)[source]

Plot the 2D simulation results in a 3D graph with time as the third dimension.

Parameters:
  • data_2d (np.ndarray) – 2D simulation data

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to display the plot

Returns:

None

Return type:

None

plot_3d(data_3d: ndarray, save: bool = False, plot: bool = True)[source]

Plot the 3D simulation results.

Parameters:
  • data_3d (np.ndarray) – 3D simulation data

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to display the plot

Returns:

None

Return type:

None

property pseudocorrelation_matrix

Return the correlation matrix of the process.

Returns:

The correlation matrix

Return type:

np.ndarray

property pseudovariances

Return the pseudovariances of the process.

Returns:

The pseudovariances

Return type:

np.ndarray

simulate(t: float = 2.0, timestep: float = 0.1, num_instances: int = 1, save: bool = False, plot: bool = False) ndarray[source]

Simulate the multivariate Lévy process.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of simulation instances to generate

  • save (bool) – Whether to save the simulation data

  • plot (bool) – Whether to plot the simulation results

Returns:

The simulated data array of shape (dims+1, num_steps)

Return type:

np.ndarray

simulate_2d(t: float = 1.0, timestep: float = 0.01, save: bool = False, plot: bool = False) ndarray[source]

Simulate the 2D Multivariate Lévy Process.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the simulation data

  • plot (bool) – Whether to plot the simulation results

Raises:

ValueError – If the process is not 2D

Returns:

The simulated 2D data array of shape (3, num_steps)

Return type:

np.ndarray

simulate_3d(t: float = 1.0, timestep: float = 0.01, save: bool = False, plot: bool = False) ndarray[source]

Simulate the 3D Multivariate Lévy Process.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the simulation data

  • plot (bool) – Whether to plot the simulation results

Raises:

ValueError – If the process is not 3D

Returns:

The simulated 3D data array of shape (4, num_steps)

Return type:

np.ndarray

simulate_live(t: float = 1.0, timestep: float = 0.01) str[source]

Simulate the process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

Returns:

The filename of the saved video

Return type:

str

simulate_live_2d(t: float = 1.0, timestep: float = 0.01, save: bool = False, speed: float = 1.0) str[source]

Simulate the 2D process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the video

  • speed (float) – Speed of the simulation

Returns:

The filename of the saved video

Return type:

str

simulate_live_2dt(t: float = 1.0, timestep: float = 0.01, save: bool = False, speed: float = 1.0) tuple[str, str][source]

Simulate the 2D process live with time and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the video

  • speed (float) – Speed of the simulation

Returns:

The filenames of the saved video and interactive object

Return type:

tuple[str, str]

simulate_live_3d(t: float = 1.0, timestep: float = 0.01, save: bool = False, speed: float = 1.0) str[source]

Simulate the 3D process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the video

  • speed (float) – Speed of the simulation

Returns:

The filename of the saved video

Return type:

str

class ergodicity.process.basic.ParetoProcess(name: str = 'Pareto Process', process_class: Type[Any] = None, shape: float = 2.0, scale: float = 1.0, loc: float = 0.0)[source]

Bases: NonItoProcess

ParetoProcess represents a continuous-time stochastic process based on the Pareto distribution, known for modeling phenomena with power-law tail behavior. This process, denoted as (X_t)_{t≥0}, is characterized by three parameters: shape (α), scale (σ), and location (μ). The Pareto distribution is renowned for its “80-20 rule” or “law of the vital few” property, making it particularly suitable for modeling size distributions in various natural and social contexts.

Key parameters:

  • shape (α > 0): Determines the tail behavior of the distribution. Smaller values lead to heavier tails, representing more extreme events.

  • scale (σ > 0): Sets the minimum scale of the process, effectively acting as a threshold parameter.

  • loc (μ): Shifts the entire distribution, allowing for flexibility in modeling.

The process exhibits several important properties:

  1. Heavy-tailed behavior: For α < 2, the process has infinite variance, capturing extreme events more effectively than processes based on normal distributions.

  2. Scale invariance: The relative probabilities of large events remain consistent regardless of scale.

  3. Power-law decay: The probability of extreme events decays as a power law, rather than exponentially.

This implementation uses a custom increment function (_external_simulator = False), allowing for precise control over the generation of process increments. The class performs validity checks to ensure that the shape and scale parameters are strictly positive, which is crucial for maintaining the integrity of the Pareto distribution.

Researchers and practitioners should be aware of the challenges in parameter estimation, especially for small shape values where moments may not exist. The process’s heavy-tailed nature can lead to counterintuitive results in statistical analyses and requires careful interpretation. While powerful in modeling extreme phenomena, the Pareto process should be used judiciously, with consideration of its underlying assumptions and their applicability to the system being modeled.
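A minimal usage sketch based on the constructor and the custom_increment() method documented below; parameter values are illustrative.

>>> from ergodicity.process.basic import ParetoProcess
>>> pp = ParetoProcess(shape=2.5, scale=1.0, loc=0.0)  # finite variance since alpha > 2
>>> dX = pp.custom_increment(X=1.0, timestep=0.01)  # one simulated increment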

custom_increment(X: float, timestep: float = 0.01) Any[source]

Custom increment function for the process.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

float

differential() str[source]

Return the differential equation of the process.

Returns:

The differential equation of the process

Return type:

str

express_as_elementary() str[source]

Express a given Pareto process as a function of elementary processes.

Returns:

The Pareto process expressed in terms of elementary processes

Return type:

str

property loc

Return the location parameter (μ) of the process.

Returns:

The location parameter (μ)

Return type:

float

property scale

Return the scale parameter (σ) of the process.

Returns:

The scale parameter (σ)

Return type:

float

property shape

Return the shape parameter (α) of the process.

Returns:

The shape parameter (α)

Return type:

float

class ergodicity.process.basic.PoissonProcess(name: str = 'Poisson Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.poisson.PoissonProcess'>, rate: float = 2.0)[source]

Bases: NonItoProcess

PoissonProcess represents a fundamental continuous-time stochastic process that models the occurrence of random events or arrivals at a constant average rate. This process, denoted as (N_t)_{t≥0}, is characterized by its rate parameter λ > 0, which represents the average number of events per unit time. For any time interval [s,t], the number of events N_t - N_s follows a Poisson distribution with parameter λ(t-s). Key properties include: independent increments, stationary increments, right-continuous step function sample paths, and the memoryless property. The process starts at 0 (N_0 = 0) and has an expected value of E[N_t] = λt and variance Var(N_t) = λt. As a pure jump process, it is classified as a NonItoProcess, distinct from continuous processes like Brownian motion. PoissonProcess finds extensive applications across various fields: in queueing theory for modeling arrival processes, in reliability theory for describing failure occurrences, in insurance for modeling claim arrivals, in neuroscience for representing neuronal firing patterns, and in physics for modeling radioactive decay. This implementation is initialized with a name, process class, and a rate parameter (defaulting to 2.0), allowing for flexible modeling of event frequencies. It’s categorized under the “poisson” type, reflecting its nature as a Poisson process. The simplicity of its single-parameter definition belies its powerful modeling capabilities, particularly for discrete events in continuous time. Researchers and practitioners should be aware of both its strengths in modeling random occurrences and its limitations, such as the assumption of constant rate and independence between events, which may not hold in all real-world scenarios. Extensions like non-homogeneous Poisson processes or compound Poisson processes can address some of these limitations for more complex modeling needs.
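A minimal usage sketch following the documented constructor and the simulate() method below.

>>> from ergodicity.process.basic import PoissonProcess
>>> pois = PoissonProcess(rate=3.0)  # on average 3 events per unit time
>>> counts = pois.simulate(t=10, timestep=0.01, num_instances=5, save=False, plot=True)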

simulate(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, plot: bool = False) Any[source]

Simulate the Poisson process, plot, and save it.

Parameters:
  • t (float) – The time horizon for the simulation

  • timestep (float) – The time step for the simulation

  • num_instances (int) – The number of process instances to simulate

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Returns:

The simulated Poisson process dataset as a NumPy array of shape (num_instances+1, num_steps)

Return type:

Any

class ergodicity.process.basic.SquaredBesselProcess(name: str = 'Squared Bessel Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.squared_bessel.SquaredBesselProcess'>, dim: int = 10)[source]

Bases: ItoProcess

SquaredBesselProcess represents a continuous-time stochastic process that models the square of the Euclidean norm of a d-dimensional Brownian motion. This process, denoted as (R²_t)_{t≥0}, is characterized by its dimension parameter d > 0, which need not be an integer. For a d-dimensional Brownian motion (B_t), the squared Bessel process is defined as R²_t = ||B_t||². It satisfies the stochastic differential equation dR²_t = d dt + 2√(R²_t) dW_t, where W_t is a standard Brownian motion. Key properties include: non-negativity, the dimension parameter d determining its behavior (recurrent for 0 < d < 2, transient for d ≥ 2), and its role in the Pitman-Yor process for d = 0. The process exhibits different characteristics based on d: for d ≥ 2, it never reaches zero; for 0 < d < 2, it touches zero but immediately rebounds; for d = 0, it is absorbed at zero. SquaredBesselProcess finds applications in various fields: in financial mathematics for modeling interest rates and volatility (particularly in the Cox-Ingersoll-Ross model), in population genetics for describing the evolution of genetic diversity, and in queueing theory for analyzing busy periods in certain queue models. As an Itô process, it follows the rules of Itô calculus, making it amenable to standard stochastic calculus techniques. This implementation is initialized with a name, process class, and dimension parameter (defaulting to a predefined value). It’s categorized under both “squared” and “bessel” types, reflecting its nature as the square of a Bessel process. The drift and stochastic terms are set to default values, with the actual dynamics governed by the dimension parameter. Researchers and practitioners should be aware of the process’s unique properties, particularly its dimension-dependent behavior, which can be crucial in accurately modeling and analyzing phenomena in various applications.
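For illustration, the sketch below applies a naive Euler-Maruyama discretisation to the SDE dR²_t = d dt + 2√(R²_t) dW_t quoted above; it is not how the class itself simulates the process.

>>> import numpy as np
>>> d, dt, n = 3, 0.01, 1000
>>> r2 = np.zeros(n)  # start at R²_0 = 0
>>> for i in range(1, n):
...     dW = np.sqrt(dt) * np.random.randn()
...     r2[i] = max(r2[i-1] + d * dt + 2.0 * np.sqrt(max(r2[i-1], 0.0)) * dW, 0.0)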

class ergodicity.process.basic.StandardBesselProcess(name: str = 'Bessel Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.bessel.BesselProcess'>, dim: int = 10)[source]

Bases: ItoProcess

A standard Bessel process. A StandardBesselProcess represents a continuous-time stochastic process that models the Euclidean distance of a Brownian motion from its starting point. As an Itô process, it follows the rules of stochastic calculus and is defined mathematically as R_t = ||B_t||, where (B_t)_{t≥0} is a d-dimensional Brownian motion and ||·|| denotes the Euclidean norm. This process is characterized by its dimension (d), which influences its behavior, including non-negativity, martingale properties (for d=2), and recurrence/transience (recurrent for d≤2, transient for d>2). The Bessel process maintains continuous sample paths and finds applications in mathematical finance for interest rate modeling, statistical physics for particle diffusion studies, and probability theory as a fundamental process. It is initialized with a name, process class, and dimension, using default drift and stochastic terms inherited from the ItoProcess parent class. The StandardBesselProcess is categorized as both a “bessel” and “standard” process type, reflecting its nature and standardized implementation.

class ergodicity.process.basic.StandardBrownianBridge(name: str = 'Standard Brownian Bridge', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.brownian_bridge.BrownianBridge'>, b: float = 0.0)[source]

Bases: ItoProcess

A StandardBrownianBridge represents a continuous-time stochastic process that models a Brownian motion constrained to start and end at specified points, typically 0 and b, over a fixed time interval [0, 1]. This process, denoted as (B_t)_{0≤t≤1}, is defined by B_t = W_t - tW_1, where (W_t) is a standard Brownian motion. The bridge process is characterized by its non-independent increments and its “tied-down” nature at the endpoints. It exhibits several key properties: it’s a Gaussian process, has continuous sample paths, and maintains a covariance structure of min(s,t) - st. The StandardBrownianBridge finds applications in statistical inference, particularly in goodness-of-fit tests, as well as in finance for interest rate modeling and in biology for modeling evolutionary processes. As an Itô process, it adheres to the principles of stochastic calculus. It is initialized with a name, process class, and an endpoint value b, using default drift and stochastic terms inherited from the ItoProcess parent class. The StandardBrownianBridge is explicitly categorized as both a “bridge” and “standard” process type, reflecting its nature as a standard implementation of the Brownian bridge concept. The _independent attribute is set to False, highlighting the process’s non-independent increment property, which distinguishes it from standard Brownian motion.

class ergodicity.process.basic.StandardBrownianExcursion(name: str = 'Standard Brownian Excursion', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.brownian_excursion.BrownianExcursion'>)[source]

Bases: ItoProcess

A StandardBrownianExcursion represents a continuous-time stochastic process that models a Brownian motion conditioned to be positive and to start and end at zero over a fixed time interval, typically [0, 1]. This process, denoted as (E_t)_{0≤t≤1}, can be conceptualized as the absolute value of a Brownian bridge scaled to have a maximum of 1. Mathematically, it’s related to the Brownian bridge (B_t) by E_t = |B_t| / max(|B_t|). The excursion process is characterized by its non-negative paths, non-independent increments, and its constrained behavior at the endpoints. It exhibits several key properties: it’s a non-Markovian process, has continuous sample paths, and its probability density at time t is related to the Airy function. The StandardBrownianExcursion finds applications in various fields, including queueing theory, where it models busy periods, in statistical physics for studying polymer chains, and in probability theory as a fundamental object related to Brownian motion. As an Itô process, it adheres to the principles of stochastic calculus, although its specific dynamics are more complex due to its constrained nature. It is initialized with a name and process class, using default drift and stochastic terms inherited from the ItoProcess parent class. The StandardBrownianExcursion is explicitly categorized as both an “excursion” and “standard” process type, reflecting its nature as a standard implementation of the Brownian excursion concept. The _independent attribute is set to False, emphasizing the process’s non-independent increment property, which is a crucial characteristic distinguishing it from standard Brownian motion and highlighting its unique constrained behavior.

class ergodicity.process.basic.StandardBrownianMeander(name: str = 'Standard Brownian Meander', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.brownian_meander.BrownianMeander'>)[source]

Bases: ItoProcess

A StandardBrownianMeander represents a continuous-time stochastic process that models a Brownian motion conditioned to stay positive over a fixed time interval, typically [0, 1], with an unconstrained endpoint. This process, denoted as (M_t)_{0≤t≤1}, can be constructed from a standard Brownian motion (B_t) by M_t = |B_t| / √(1-t) for 0 ≤ t < 1, with a specific limiting distribution at t = 1. The meander process is characterized by its non-negative paths, non-independent increments, and its free endpoint behavior. It exhibits several key properties: it’s a non-Markovian process, has continuous sample paths, and its transition probability density is related to the heat kernel on the half-line with absorbing boundary conditions. The StandardBrownianMeander finds applications in various fields, including queuing theory for modeling busy periods with unfinished work, in financial mathematics for studying asset prices conditioned on positivity, and in probability theory as a fundamental object related to Brownian motion and its local time. As an Itô process, it adheres to the principles of stochastic calculus, although its specific dynamics are more complex due to its constrained nature. It is initialized with a name and process class, using default drift and stochastic terms inherited from the ItoProcess parent class. The StandardBrownianMeander is explicitly categorized as both a “meander” and “standard” process type, reflecting its nature as a standard implementation of the Brownian meander concept. The _independent attribute is set to False, emphasizing the process’s non-independent increment property, which is a crucial characteristic distinguishing it from standard Brownian motion and highlighting its unique constrained behavior while allowing for endpoint flexibility.

class ergodicity.process.basic.StandardFractionalBrownianMotion(name: str = 'Standard Fractional Brownian Motion', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.fractional_brownian_motion.FractionalBrownianMotion'>, hurst: float = 0.5)[source]

Bases: NonItoProcess

StandardFractionalBrownianMotion (fBm) represents a generalization of classical Brownian motion, characterized by long-range dependence and self-similarity. This continuous-time Gaussian process, denoted as (B^H_t)_{t≥0}, is uniquely determined by its Hurst parameter H ∈ (0,1), which governs its correlation structure and path properties. The process exhibits several key features: stationary increments, self-similarity with parameter H, and long-range dependence for H > 0.5. Its covariance function is given by E[B^H_t B^H_s] = 0.5(|t|^2H + |s|^2H - |t-s|^2H). When H = 0.5, fBm reduces to standard Brownian motion; for H > 0.5, it shows persistent behavior, while for H < 0.5, it displays anti-persistent behavior. Unlike standard Brownian motion, fBm is not a semimartingale for H ≠ 0.5 and thus does not fit into the classical Itô calculus framework, hence its classification as a NonItoProcess. The process finds wide applications in various fields: in finance for modeling long-term dependencies in asset returns, in network traffic analysis for capturing self-similar patterns, in hydrology for studying long-term correlations in river flows, and in biophysics for analyzing anomalous diffusion phenomena. This implementation is initialized with a name, process class, and Hurst parameter (defaulting to 0.5), and is categorized as both “standard” and “fractional”. The _independent attribute is set to False, reflecting the process’s inherent long-range dependence. Researchers and practitioners should be aware of the unique challenges in working with fBm, including the need for specialized stochastic calculus tools and careful interpretation of its long-range dependence properties in practical applications.
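
A brief usage sketch based on the constructor above and the simulate method documented in the definitions module below (argument values are illustrative):

    from ergodicity.process.basic import StandardFractionalBrownianMotion

    # H > 0.5 gives persistent (positively correlated) increments,
    # H < 0.5 gives anti-persistent increments.
    fbm = StandardFractionalBrownianMotion(hurst=0.7)
    data = fbm.simulate(t=10, timestep=0.01, num_instances=5, plot=True)
    # data has shape (num_instances + 1, num_steps); the first row is the time grid.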

class ergodicity.process.basic.StandardMultifractionalBrownianMotion(name: str = 'Multifractional Brownian Motion', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.multifractional_brownian_motion.MultifractionalBrownianMotion'>, hurst: ~typing.Callable[[float], float] = <function StandardMultifractionalBrownianMotion.<lambda>>)[source]

Bases: NonItoProcess

StandardMultifractionalBrownianMotion (mBm) represents a generalization of fractional Brownian motion, allowing for a time-varying Hurst parameter. This continuous-time Gaussian process, denoted as (B^H(t)_t)_{t≥0}, is characterized by its Hurst function H(t) : [0,∞) → (0,1), which governs its local regularity and correlation structure. The process exhibits several key features: non-stationary increments, local self-similarity, and variable long-range dependence. Its covariance structure is complex, approximated by E[B^H(t)_t B^H(s)_s] ≈ 0.5(t^(H(t)+H(s)) + s^(H(t)+H(s)) - |t-s|^(H(t)+H(s))). When H(t) is constant, mBm reduces to fractional Brownian motion. The process allows for modeling phenomena with time-varying fractal behavior, where the local regularity evolves over time. As a non-stationary and generally non-Markovian process, it is classified as a NonItoProcess, requiring specialized stochastic calculus techniques. StandardMultifractionalBrownianMotion finds wide applications in various fields: in finance for modeling assets with time-varying volatility and long-range dependence, in image processing for texture analysis with varying local regularity, in geophysics for studying seismic data with evolving fractal characteristics, and in network traffic analysis for capturing time-dependent self-similar patterns. This implementation is initialized with a name, process class, and a Hurst function (defaulting to a constant function H(t) = 0.5, which corresponds to standard Brownian motion). It’s categorized under “multifractional”, “fractional”, “standard”, and “brownian” types, reflecting its nature as a generalized Brownian motion. The _independent attribute is set to False, emphasizing the process’s complex dependence structure. Researchers and practitioners should be aware of the challenges in working with mBm, including parameter estimation of the Hurst function, interpretation of local and global properties, and the need for advanced numerical methods for simulation and analysis.
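
A brief usage sketch with a time-varying Hurst function (the particular function and simulation parameters are illustrative):

    from ergodicity.process.basic import StandardMultifractionalBrownianMotion

    # A Hurst function mapping into (0, 1): smoother paths early (H near 0.8),
    # rougher paths later (H near 0.4).
    mbm = StandardMultifractionalBrownianMotion(hurst=lambda t: 0.8 - 0.4 * min(t, 1.0))
    data = mbm.simulate(t=1, timestep=0.001, num_instances=3, plot=True)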

class ergodicity.process.basic.VarianceGammaProcess(name: str = 'Variance Gamma Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.variance_gamma.VarianceGammaProcess'>, drift: float = 0.0, variance: float = 1, scale: float = 1.0)[source]

Bases: NonItoProcess

GeneralizedHyperbolicProcess represents a highly flexible class of continuous-time stochastic processes that encompasses a wide range of distributions, including normal, Student’s t, variance-gamma, and normal-inverse Gaussian as special or limiting cases. This process, denoted as (X_t)_{t≥0}, is characterized by five parameters: α (tail heaviness), β (asymmetry), μ (location), δ (scale), and λ (a shape parameter, often denoted as ‘a’ in the implementation). The process is defined through its increments, which follow a generalized hyperbolic distribution. Key properties include: semi-heavy tails (heavier than Gaussian but lighter than power-law), ability to model skewness, and a complex autocorrelation structure. The process allows for both large jumps and continuous movements, making it highly adaptable to various phenomena. It’s particularly noted for its capacity to capture both the central behavior and the extreme events in a unified framework.

Parameter restrictions are crucial for the proper definition of the process:

  • α > 0: Controls the tail heaviness, with larger values leading to lighter tails.

  • |β| < α: Determines the skewness, with β = 0 yielding symmetric distributions.

  • δ > 0: Acts as a scaling factor.

  • μ can take any real value.

  • λ (if provided via kwargs) can be any real number, affecting the shape of the distribution.

The GeneralizedHyperbolicProcess finds extensive applications in finance for modeling asset returns, particularly in markets exhibiting skewness and kurtosis; in risk management for more accurate tail risk assessment; in physics for describing particle movements in heterogeneous media; and in signal processing for modeling non-Gaussian noise. This implementation is initialized with parameters α, β, μ (loc), and δ (scale), with additional parameters possible through kwargs. It’s categorized under both “generalized” and “hyperbolic” types, reflecting its nature as a broad, hyperbolic-based process. The class uses a custom increment function, indicated by the _external_simulator flag set to False. This allows for precise control over the generation of process increments, crucial for accurately representing the complex distribution. Researchers and practitioners should be aware of the computational challenges in parameter estimation and simulation, particularly in high-dimensional settings or with extreme parameter values. The flexibility of the generalized hyperbolic process comes with increased model complexity, requiring careful consideration in application and interpretation. Its ability to nest simpler models allows for sophisticated hypothesis testing and model selection in empirical studies.
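
As a rough illustration of generalized hyperbolic sampling, the sketch below uses scipy.stats.genhyperbolic (available in SciPy 1.8+); the mapping from (α, β, μ, δ, λ) to SciPy's (p, a, b, loc, scale) parametrization and the use of unit-time increments are assumptions, not the class's actual increment scheme:

    import numpy as np
    from scipy.stats import genhyperbolic

    # Assumed mapping: p = lambda, a = alpha * delta, b = beta * delta, loc = mu, scale = delta.
    alpha, beta, mu, delta, lam = 2.0, 0.5, 0.0, 1.0, 1.0
    rv = genhyperbolic(p=lam, a=alpha * delta, b=beta * delta, loc=mu, scale=delta)

    increments = rv.rvs(size=1000, random_state=np.random.default_rng(0))
    path = np.cumsum(increments)   # a sample path built from unit-time GH increments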

class ergodicity.process.basic.WienerProcess(name: str = 'Wiener Process', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.wiener.WienerProcess'>)[source]

Bases: ItoProcess

WienerProcess represents the fundamental continuous-time stochastic process, also known as standard Brownian motion, which forms the cornerstone of many stochastic models in science and finance. This process, denoted as (W_t)_{t≥0}, is characterized by its properties of independent increments, continuous paths, and Gaussian distribution. For any time interval [s,t], the increment W_t - W_s follows a normal distribution N(0, t-s). Key properties include: almost surely continuous sample paths, non-differentiability at any point, self-similarity, and the strong Markov property. The process starts at 0 (W_0 = 0) and has an expected value of E[W_t] = 0 and variance Var(W_t) = t. As the quintessential Itô process, it serves as the building block for more complex stochastic differential equations and is central to Itô calculus. WienerProcess finds ubiquitous applications across various fields: in physics for modeling Brownian motion and diffusion processes, in financial mathematics for describing stock price movements and as a basis for the Black-Scholes model, in signal processing for representing white noise, and in control theory for modeling disturbances in dynamical systems. This implementation is initialized with a name and process class, with drift term fixed at 0 and stochastic term at 1, adhering to the standard definition. It’s categorized under both “wiener” and “standard” types, emphasizing its nature as the canonical continuous-time stochastic process. The simplicity of its parameter-free definition belies the complexity and richness of its behavior, making it a versatile tool in stochastic modeling. Researchers and practitioners should be aware of both its power as a modeling tool and its limitations, particularly in capturing more complex real-world phenomena that may require extensions or generalizations of the basic Wiener process.
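
The defining property translates directly into a simulation scheme: increments over a step dt are drawn from N(0, dt) and cumulatively summed from W_0 = 0. A minimal NumPy sketch, independent of the library's simulator:

    import numpy as np

    rng = np.random.default_rng(0)
    t, dt = 10.0, 0.01
    n_steps = int(t / dt)

    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
    w = np.concatenate([[0.0], np.cumsum(increments)])   # W_0 = 0, Var(W_t) ≈ t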

ergodicity.process.constructor module

constructor Submodule

The Constructor submodule provides a flexible, interactive framework for users to dynamically create custom stochastic process classes. By gathering user input such as process name, type, and parameters, this module generates Python class definitions for either Ito or Non-Ito processes, enabling simulation and modeling of a wide variety of stochastic systems.

Key Features:

  1. Custom Process Generation:

    • Users can define a custom stochastic process interactively by providing basic details, such as:

      • Name of the process

      • Whether the process is of Ito or Non-Ito type

      • Parameters of the process (e.g., drift and stochastic terms)

      • The mathematical increment of the process (e.g., for Brownian motion or Lévy processes)

  2. Support for Both Ito and Non-Ito Processes:

    • This module supports both Ito processes, which rely on stochastic calculus involving drift and stochastic terms, and Non-Ito processes, which may involve deterministic or fractional stochastic models.

    • Depending on whether the user selects the process to be Ito or Non-Ito, the generated class structure is modified accordingly.

  3. Interactive Code Generation:

    • Once the user specifies the process details, the submodule dynamically generates and prints the Python class code for that process. This class includes:

      • Parameter initialization.

      • Custom increment calculation based on user input.

    The generated code follows the proper inheritance structure from either the ItoProcess or NonItoProcess base classes, ensuring compatibility with the larger framework.

  4. Simulation-Ready:

    • The generated class can be used directly within the broader stochastic process framework for simulations. The custom increment function defines the process dynamics in a manner that integrates seamlessly with pre-existing tools.

Example Use Case:

A user might wish to simulate a custom geometric Brownian motion (GBM) with specific drift and volatility parameters. By following the interactive prompts, the user could input:

  • Process name: “CustomGBM”

  • Process type: Ito

  • Parameters: “alpha, beta”

  • Drift term: 0.05

  • Stochastic term: 0.2

  • Increment: dW = dt**0.5 * np.random.normal(0, 1)

  • Increment equation: dX = alpha * X * dt + beta * X * dW

The submodule would then generate a fully functional Python class that the user can modify or simulate directly.
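
The exact code emitted depends on the prompts, but for the inputs above it might resemble the following sketch (illustrative only; the attribute names and constructor call are assumptions consistent with the ItoProcess interface documented in definitions.py, not the generator's literal output):

    import numpy as np
    from ergodicity.process.definitions import ItoProcess

    class CustomGBM(ItoProcess):
        # Illustrative sketch of a generated class; the real generator's output
        # may differ in structure and naming.
        def __init__(self, alpha: float = 0.05, beta: float = 0.2):
            super().__init__(name="CustomGBM", process_class=None,
                             drift_term=alpha, stochastic_term=beta)
            self.alpha = alpha
            self.beta = beta

        def custom_increment(self, X: float, timestep: float = 0.01):
            # dX = alpha * X * dt + beta * X * dW, with dW = sqrt(dt) * N(0, 1)
            dW = timestep ** 0.5 * np.random.normal(0, 1)
            return self.alpha * X * timestep + self.beta * X * dW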

Use Cases and Applications:

  1. Research and Experimentation:

    • This module enables researchers to quickly define and simulate new stochastic processes for testing hypotheses or exploring new dynamics.

  2. Rapid Prototyping:

    • For developers and scientists who need to build custom processes for specific simulations, this tool reduces the time required to write boilerplate code and allows for easy customization.

  3. Educational Purposes:

    • This submodule is a helpful learning tool for students and practitioners to understand the structure of stochastic processes by generating and analyzing different custom processes.

Example Workflow:

  1. The user runs the create_custom_process function.

  2. The system prompts for various details about the process (name, parameters, etc.).

  3. The user inputs the increment function (drift and diffusion terms for Ito processes).

  4. The module dynamically generates the Python code for the process, including initialization and simulation-ready custom increment logic.

  5. The user receives a ready-to-use Python class that can be integrated into their broader simulations.

The Constructor submodule empowers users with full control over their process definitions, while ensuring consistency with Ito and Non-Ito frameworks in the overall stochastic process toolkit.

ergodicity.process.constructor.create_custom_process()[source]

This function allows the user to create a custom stochastic process class interactively by providing the necessary details such as process name, type, parameters, and increment function. The function then generates the Python class definition for the custom process based on the user input.

Returns:

The generated Python class definition for the custom process.

Return type:

str

ergodicity.process.custom_classes module

custom_classes Submodule

The Custom Classes submodule provides a framework for implementing specialized, non-standard stochastic processes. These classes extend the CustomProcess class and can be used to represent more complex processes that are not typically covered by standard Ito or Non-Ito frameworks. The submodule encourages the definition of processes with unique dynamics that can be customized to fit specific modeling needs.

Key Features:

  1. User-Defined Stochastic Processes:

    • This submodule is designed for developers and researchers who need to implement non-standard processes for simulations. Users can create their own stochastic processes by extending the CustomProcess base class and implementing a custom increment function that defines the process’s dynamics.

  2. Flexibility in Process Definition:

    • The submodule supports defining processes with state-dependent volatility, complex feedback dynamics, or other advanced stochastic behaviors. It can handle cases where the standard Ito or Non-Ito frameworks may not be sufficient for representing the desired phenomena.

  3. Multiplicative or Additive Processes:

    • The custom processes can be either multiplicative (changes are proportional to the current value, like in Geometric Brownian Motion) or additive (changes are independent of the current value). This allows for modeling a wide range of phenomena such as financial asset prices, population growth, or physical processes with varying volatility.

Example Class: Constant Elasticity of Variance Process (CEV)

The Constant Elasticity of Variance Process (CEV) is a custom stochastic process that extends the Geometric Brownian Motion model by introducing state-dependent volatility. It is particularly useful for modeling phenomena where volatility increases or decreases with the process level. This class serves as an example of how to implement custom processes using this submodule.

Key attributes of the CEV Process:

  • State-dependent volatility: The volatility changes depending on the current level of the process, allowing for more realistic modeling of real-world phenomena.

  • Elasticity parameter: A crucial parameter that determines how volatility behaves in relation to the process level. It can produce different types of dynamics:

    • γ = 1: Reduces to Geometric Brownian Motion.

    • γ > 1: Volatility increases with the process level (leverage effect).

    • γ < 1: Volatility decreases with the process level (inverse leverage effect).

  • Mean-reversion: The process has a mean-reverting behavior controlled by the mean reversion rate θ. This feature makes it valuable for modeling financial instruments or other time-dependent quantities that tend to stabilize over time.

Applications of CEV:

  1. Financial Markets: Used in option pricing models to describe the behavior of asset prices with level-dependent volatility.

  2. Population Dynamics: Models population growth where randomness depends on the population size.

  3. Physics: Used to model diffusion in non-homogeneous media, where variance depends on the concentration of the diffusing substance.

Workflow for Creating Custom Processes:

  1. Define Parameters: The process should have its key parameters (drift, volatility, etc.) defined in the class’s constructor.

  2. Override `custom_increment`: The core of the custom process lies in the custom_increment method, which calculates the state changes based on the current process value and other parameters.

  3. Handle Dynamics: Processes may have state-dependent dynamics, feedback loops, or other complex behaviors that are implemented in this method.

  4. Integrate with Simulation Tools: Once defined, the custom process can be integrated into the larger simulation framework to model and analyze specific scenarios.

class ergodicity.process.custom_classes.ConstantElasticityOfVarianceProcess(name='CEV Process', mu=0.1, sigma=0.5, gamma=0.5, theta=0.1)[source]

Bases: CustomProcess

ConstantElasticityOfVarianceProcess (CEV) represents a sophisticated stochastic process that extends the concept of geometric Brownian motion by allowing the volatility to depend on the current level of the process. This continuous-time process, denoted as (S_t)_{t≥0}, is defined by the stochastic differential equation:

dS_t = θ * μ * S_t * dt + σ * S_t^γ * dW_t

where:

  • μ: Long-term mean level or drift

  • σ: Volatility scale parameter

  • γ: Elasticity parameter, determining how volatility changes with the process level

  • θ: Mean reversion rate

  • W_t: Standard Brownian motion

Key parameters:

  1. mu (μ): Influences the overall trend of the process.

  2. sigma (σ): Base level of volatility.

  3. gamma (γ): Elasticity of variance, crucial in determining the process’s behavior.

  4. theta (θ): Rate of mean reversion, controlling how quickly the process tends towards its long-term mean.

Key properties:

  1. State-dependent volatility: Volatility changes with the level of the process, allowing for more realistic modeling of various phenomena.

  2. Multiplicative nature: The process is inherently multiplicative, suitable for modeling quantities that cannot become negative (e.g., prices, populations).

  3. Flexible behavior: Depending on the value of γ, the process can exhibit different characteristics:

    • γ = 1: Reduces to geometric Brownian motion

    • γ < 1: Volatility decreases as the process level increases (inverse leverage effect)

    • γ > 1: Volatility increases as the process level increases (leverage effect)

  4. Mean reversion: The process tends to revert to a long-term mean level, with the speed determined by θ.

This implementation extends the CustomProcess class, providing a specialized increment function that captures the unique dynamics of the CEV process. The process is explicitly set as multiplicative (_multiplicative = True), reflecting its nature in modeling proportional changes.

Applications span various fields:

  • Financial modeling: Asset prices with state-dependent volatility, particularly useful in option pricing.

  • Population dynamics: Species growth with density-dependent randomness.

  • Physics: Diffusion processes in non-homogeneous media.

  • Economics: Interest rate models with level-dependent volatility.

Researchers and practitioners should be aware of several important considerations:

  1. Parameter estimation: The interplay between parameters, especially γ and σ, can make estimation challenging.

  2. Numerical stability: Care must be taken in simulation, particularly when γ < 0.5, to avoid numerical issues.

  3. Analytical tractability: Closed-form solutions are not always available, necessitating numerical methods.

  4. Regime-dependent behavior: The process can exhibit significantly different characteristics in different ranges, requiring careful interpretation.

The ConstantElasticityOfVarianceProcess offers a powerful and flexible framework for modeling phenomena with state-dependent volatility. Its ability to capture a wide range of behaviors makes it valuable in many applications, but also requires careful consideration in parameter selection, simulation techniques, and result interpretation. The process’s rich dynamics provide opportunities for sophisticated modeling but also demand a thorough understanding of its properties and limitations.
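
The SDE above maps directly to an Euler–Maruyama step. The sketch below mirrors what a custom_increment implementation could do, but is not necessarily the class's exact code; the max(X, 0) guard reflects the numerical-stability caveat mentioned above:

    import numpy as np

    def cev_increment(X: float, mu: float = 0.1, sigma: float = 0.5,
                      gamma: float = 0.5, theta: float = 0.1,
                      timestep: float = 0.01) -> float:
        # One Euler-Maruyama step of dS = theta * mu * S dt + sigma * S**gamma dW.
        dW = np.sqrt(timestep) * np.random.normal(0.0, 1.0)
        return theta * mu * X * timestep + sigma * (max(X, 0.0) ** gamma) * dW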

custom_increment(X: float, timestep: float = 0.01) Any[source]

Compute the increment of the process at the given state X and timestep.

Parameters:
  • X (float) – Current state of the process

  • timestep (float) – Timestep for the increment

Returns:

Increment of the process at the given state and timestep

Return type:

Any

ergodicity.process.default_values module

This file contains the default values for the parameters of process initialization and simulation. The parameters are chosen here to balance speed and precision.

ergodicity.process.default_values.function_default(t)

ergodicity.process.definitions module

The definitions.py module lays the foundation for the Ergodicity Library, introducing core concepts and defining abstract classes for stochastic processes.

Key Components:

  • `Process` Class:

    • The central abstract class for all stochastic processes in the library.

    • Provides a standardized interface for process initialization, property access, and common methods like simulation, moments calculation, and visualization.

  • `ItoProcess` and `NonItoProcess` Classes:

    • Abstract subclasses of Process that categorize processes into Ito (subject to Ito calculus) and non-Ito types.

  • `CustomProcess` Class:

    • Empowers users to create their own custom stochastic processes within the library’s framework.

  • Decorators and Helper Functions:

    • simulation_decorator: Enhances simulation methods with verbose output for easier tracking.

    • check_simulate_with_differential: Checks if a process’s simulation uses differential methods.

    • create_correlation_matrix and correlation_to_covariance: Facilitate working with correlated processes.

Core Functionalities:

  • Process Initialization and Properties:

    • Standardizes the creation and attribute access of processes, including names, multiplicative nature, independence of increments, and more.

  • Simulation:

    • simulate(): Primary method for generating sample paths of a process.

    • simulate_2d() and simulate_3d(): Extend simulation to 2D and 3D processes.

    • simulate_live_*: Create dynamic visualizations of simulations.

  • Moments and Distributions:

    • moments(): Calculates the cumulative moments (mean, variance, etc.) of a process.

    • k_moments(): Extends moment calculation to higher orders.

    • simulate_distribution(): Simulates the probability distribution of a process.

  • Analysis and Visualization:

    • ensemble_average() and time_average(): Calculate ensemble and time averages.

    • self_averaging_time(): Estimates the time when a process transitions from ensemble-averaged to time-averaged behavior.

    • plot*, visualize_moments: Provide various plotting and visualization capabilities.

  • Ito-Specific Functionality:

    • solve(): Attempts to find analytical solutions for Ito processes using Ito calculus.

    • differential(): Represents the stochastic differential equation of an Ito process.

Intended Audience:

  • Researchers and Practitioners: Leverage the library’s core definitions to work with and analyze a wide variety of stochastic processes.

  • Students and Learners: Gain a deeper understanding of stochastic processes through the clear structure and implementation of fundamental concepts.

  • Developers: Extend the library’s capabilities by building upon these base classes and methods to create new process types and functionalities.

Dependencies:

  • abc.ABC: Provides support for defining abstract base classes (ABC) for ensuring proper subclass implementation.

  • typing.List, Any, Type, Callable: Used for type annotations to enforce type safety and clarity in function signatures.

  • numpy (np): Fundamental package for numerical computation, array manipulation, and random number generation.

  • matplotlib.pyplot (plt): Library for creating static, animated, and interactive visualizations.

  • matplotlib.animation: For creating dynamic animations of the simulated processes.

  • inspect: Used to inspect live objects, retrieve information about classes, functions, methods, and more.

  • sympy (sp): Symbolic mathematics library for defining and solving algebraic expressions and differential equations.

  • pandas.core.tools.times.to_time: Time handling and manipulation tool.

  • .default_values: Internal module that defines default constants used in simulations.

  • ergodicity.configurations: Module that defines custom configurations for the stochastic processes and models.

  • warnings: Provides a framework for issuing runtime warnings, used here for custom warnings.

  • threading: Module to run code concurrently via threads for efficient simulations.

  • os: Interface for interacting with the operating system, primarily for file handling and environment queries.

  • csv: For reading from and writing to CSV files.

  • subprocess: Allows for spawning new processes, connecting to their input/output/error pipes, and obtaining their return codes.

  • plotly.graph_objects (go): Advanced plotting library for interactive visualizations.

  • mpl_toolkits.mplot3d: Used for creating 3D plots with matplotlib.

  • ..custom_warnings: Defines custom warnings specific to this module, like InDevelopmentWarning and KnowWhatYouDoWarning.

  • ergodicity.tools.compute.growth_rate_of_average_per_time: Utility function for computing growth rates over time.

  • ergodicity.tools.compute.average: Utility function for calculating average values over a data set.

class ergodicity.process.definitions.CustomProcess(name: str)[source]

Bases: Process, ABC

Abstract class representing a custom process. This class can be used by users to easily create custom processes.

class ergodicity.process.definitions.ItoProcess(name: str, process_class: Type[Any], drift_term: float, stochastic_term: float)[source]

Bases: Process, ABC

Abstract class representing an Ito process. Ito calculus can be applied to Ito processes. All Ito processes have a drift term and a stochastic term.

drift_term

The drift term of the process

stochastic_term

The stochastic term of the process

closed_formula()[source]

Find the analytical solution for the given Ito process using Ito calculus.

Returns:

The analytical solution for the Ito process

Return type:

Sympy expression

differential() str[source]

Calculate the differential of the process.

Returns:

The differential of the process

Return type:

str

property drift_term: float

Get the drift term of the process.

Returns:

The drift term of the process

Return type:

float

ergodicity_transform()[source]

Find the ergodicity transformation for the given Ito process using Ito calculus.

Returns:

The ergodicity transformation for the Ito process

Return type:

Sympy expression

expected_value_expression(initial_condition=1) str[source]

Return the symbolic expression for the expected value of the process if possible using conventional calculus.

Returns:

The symbolic expression for the expected value of the process

Return type:

Sympy expression

property stochastic_term: float

Get the stochastic term of the process.

Returns:

The stochastic term of the process

Return type:

float

time_average_expression() str[source]

Return the symbolic expression for the time average of the process if possible using Ito calculus.

Returns:

The symbolic expression for the time average of the process

Return type:

Sympy expression

class ergodicity.process.definitions.NonItoProcess(name: str, process_class: Type[Any])[source]

Bases: Process, ABC

Abstract class representing a non-Ito process.

class ergodicity.process.definitions.Process(name: str, multiplicative: bool, independent: bool, ito: bool, process_class: Type[Any] = None, **kwargs)[source]

Bases: ABC

The Process class is the base class for all processes in the Ergodicity Library. This is an abstract class. Here, the majority of the methods available for all classes related to stochastic processes are defined.

name

The name of the process.

Type:

str

multiplicative

Whether the process is multiplicative.

Type:

bool

independent

Whether the increments of the process are independent.

Type:

bool

ito

Whether the process is an Ito process.

Type:

bool

process_class

The class in the stochastic library that corresponds to the process.

Type:

Type[Any]

types

Custom labels to categorize the process.

Type:

List[str]

comments

Whether to display comments about the process when the code runs.

Type:

bool

has_wrong_params

Whether the corresponding process in the stochastic library has parameters in an unexpected format.

Type:

bool

custom

Whether the process is a custom process.

Type:

bool

simulate_with_differential

Whether the process is simulated using differential methods.

Type:

bool

output_dir

The directory where the process results are saved.

Type:

str

increment_process

Whether the process is an increment process.

Type:

bool

memory

The memory of the process.

Type:

int

add_type(new_type: str)[source]

Add a new type to the types list if it is not already present.

Parameters:

new_type (str) – The new type to add

Returns:

None

Return type:

None

always_present_keys = ['self', 'name', 'multiplicative', 'ito', 'types', 'process_class']
closed_formula() str[source]

Calculate the closed formula for the process using symbolic calculations and Ito calculus when possible.

Returns:

The closed formula for the process

Return type:

str

property comments: bool

Get the comments property of the process. It is used to display comments about the process when the code runs.

Returns:

The comments property of the process.

Return type:

bool

correct_params(params, t)[source]

Correct the parameters for the process. This method fixes some unintended behavior of the stochastic library that is present for some classes. It is not needed by the end user.

Parameters:
  • params (dict) – The parameters to correct

  • t (float) – The total time for the simulation

Returns:

The corrected parameters

Return type:

dict

property custom: bool

Get the custom property of the process. It shows if the process is a custom process (and not a standard process from the library).

Returns:

The custom property of the process

Return type:

bool

custom_increment(X: float, timestep: float = 0.01) Any[source]

Custom increment function for the process. This method simulates a discrete approximation of the increment (differential) of the process.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment of the process

Return type:

Any

custom_simulate_raw(t: float = 10, timestep: float = 0.01, num_instances: int = 5, X0: float = None) Any[source]

Simulate the process using a custom simulation method. This is an intermediate method used internally by the simulate method, which is the method intended for the user.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • X0 (float) – Initial value for the process

Returns:

The simulated data of shape (num_instances + 1, num_steps)

Return type:

NumPy array of shape (num_instances + 1, num_steps)

data_for_simulation(t: float = 10, timestep: float = 0.01, num_instances: int = 5) Any[source]

Prepare the data for simulation. This is an intermediate method used internally by the simulate method, which is the method intended for the user.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

Returns:

The number of steps, the times, and the data

Return type:

Any

differential() str[source]

Calculate the differential of the process.

Returns:

The differential of the process

Return type:

str

empirical_properties()[source]

Calculate the empirical statistical properties of the process and compare them with the theoretical properties (parameters) to check whether they agree.

Returns:

None

Return type:

None

ensemble_average(num_instances: int = 1000000, timestep: float = 0.01, save: bool = False) Any[source]

Calculate the finite ensemble average of the process.

Parameters:
  • num_instances (int) – The number of instances to simulate

  • timestep (float) – The time step for the simulation

  • save (bool) – Whether to save the results to a file

Returns:

The ensemble average of the process

Return type:

float

eternal_simulator(timestep: float = 0.01, k: int = 100000) None[source]

Simulate the process indefinitely, saving an updated plot image every k steps.

Parameters:
  • timestep (float) – Time step for the simulation.

  • num_instances (int) – Number of instances to simulate.

  • k (int) – Number of steps after which to update the plot.

Returns:

None

Return type:

None

Raises:

ValueError – If the eternal simulator is not available for the process

classmethod get_comments()[source]

Get the comments property of the process. This is a class method, meaning it is called on the class itself rather than on an instance of the class.

Returns:

The comments property of the process

Return type:

bool

get_data_for_moments(data, t_m=10, timestep_m=0.01, num_instances_m=10)[source]

Get the data for moments calculation. If pre-computed data is passed to the method, it is used; otherwise, the data is simulated using the specified parameters. Note that if you pass data and simultaneously specify simulation parameters, the parameters are ignored and the passed data is used.

Parameters:
  • data (Any) – The data array of the process.

  • t_m (float) – Total time for the simulation.

  • timestep_m (float) – Time step for the simulation.

  • num_instances_m (int) – Number of instances to simulate.

Returns:

The data array for moments calculation.

Return type:

NumPy array

get_params()[source]

Retrieves parameters specific to the current process class, excluding always_present_keys. This is usually needed when creating methods and functions that must work for an arbitrary process class. The end user usually does not need this method.

Returns:

The process-specific parameters of the process

Return type:

dict

growth_rate_ensemble_average(num_instances: int = 1000000, timestep: float = 0.01, save: bool = False) Any[source]

Calculate the ensemble average of the growth rates of the process.

Parameters:
  • num_instances (int) – The number of instances to simulate

  • timestep (float) – The time step for the simulation

  • save (bool) – Whether to save the results to a file

Returns:

The ensemble average of the growth rates of the process

Return type:

float

growth_rate_time_average(t: float = 1000000, timestep: float = 0.01, save: bool = False) Any[source]

Calculate the time average of the growth rates of the process in a direct way (using one process instance and long simulation time).

Parameters:
  • t (float) – The time to simulate

  • timestep (float) – The time step for the simulation

  • save (bool) – Whether to save the results to a file

Returns:

The time average of the growth rates of the process

Return type:

float
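
A brief usage sketch based on the two signatures above; the process choice and argument values are illustrative (the defaults are intentionally very large), and for multiplicative processes the two estimates generally differ:

    from ergodicity.process.basic import WienerProcess

    p = WienerProcess()

    # Finite-sample estimates with smaller, faster-to-compute arguments.
    g_ensemble = p.growth_rate_ensemble_average(num_instances=10_000, timestep=0.01)
    g_time = p.growth_rate_time_average(t=10_000, timestep=0.01)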

property has_wrong_params: bool

Get the has_wrong_params property of the process. It shows if the corresponding process in the stochastic library has parameters in an unexpected format that must be corrected.

Returns:

The has_wrong_params property of the process

Return type:

bool

increment(timestep_increment: float = 0.01) float[source]

Calculate the increment of the process for a given timestep increment.

Parameters:

timestep_increment (float) – The timestep increment for the process

Returns:

The increment of the process

Return type:

float

increment_intermediate(timestep_increment: float = 0.01) float[source]

Calculate the increment of the process for a given timestep increment. This method is used as an intermediate step for the increment method.

Parameters:

timestep_increment (float) – The timestep increment for the process

Returns:

The increment of the process

Return type:

float

property independent: bool

Get the independent property of the process. It shows if the increments of the process are independent.

Returns:

The independent property of the process

Return type:

bool

property ito: bool

Get the Ito property of the process. It shows if the process is an Ito process.

Returns:

The Ito property of the process

Return type:

bool

k_moments(data=None, order: int = 4, save: bool = False, t: float = 10, timestep: float = 0.01, num_instances: int = 10, visualize: bool = False) Any[source]

Calculate the cumulative moments of the process up to a specified order for every point in time using an optimized iterative approach.

Parameters:
  • data (Any) – The data array of the process.

  • order (int) – The maximum order of moments to calculate.

  • save (bool) – Whether to save the results to a file.

  • t (float) – Total time for the simulation.

  • timestep (float) – Time step for the simulation.

  • num_instances (int) – Number of instances to simulate.

Returns:

A tuple of times and the calculated moments (mean, variance, skewness, kurtosis, etc.).

Return type:

Tuple

memory_update(step)[source]

Update the memory of the process. Used in the construction of processes with memory.

Parameters:

step (int) – The current step

Returns:

The updated memory

Return type:

Any

moments(data=None, save: bool = False, t=10, timestep=0.01, num_instances=10) Any[source]

Calculate the cumulative moments of the process increments up to every point in time using an optimized iterative approach. If the process is multiplicative, the increments are calculated for the logarithm of the data. The moments can be calculated either based on the already simulated data (using simulate method or exporting data) or by simulating the data using the specified parameters.

Parameters:
  • data (Any) – The data array of the process.

  • save (bool) – Whether to save the results to a file.

  • t (float) – Total time for the simulation.

  • timestep (float) – Time step for the simulation.

  • num_instances (int) – Number of instances to simulate.

Returns:

A tuple of times and the calculated moments (mean, variance, skewness, kurtosis, etc.).

Return type:

Tuple

moments_dict(data, save, t=10, timestep=0.01, num_instances=10)[source]

Create a dictionary used to calculate the first, second, third, and fourth moments of the process.

Parameters:
  • data (NumPy array) – The data array of the process

  • save (bool) – Whether to save the results to a file

  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

Returns:

A dictionary of moments where the keys are the moment names and the values are the calculated moments

Return type:

Dict

property multiplicative: bool

Get the multiplicative property of the process. It shows if the process is multiplicative.

Returns:

The multiplicative property of the process

Return type:

bool

property name: str

Get the name of the process.

Returns:

The name of the process

Return type:

str

property output_dir: str

Get the output directory of the process. It is the directory where the process results are saved.

Returns:

The output directory of the process

Return type:

str

p_measure() Any[source]

Return the probability density function corresponding to the objective probability measure used in the given process.

Returns:

The probability density function corresponding to the objective probability measure

Return type:

Any

pdf_evolution(t=10, timestep=0.01, num_instances=5) Any[source]

Calculate the evolution of the probability density function of the process.

Parameters:
  • t (float) – The time to simulate

  • timestep (float) – The time step for the simulation

  • num_instances (int) – The number of instances to simulate

Returns:

The evolution of the probability density function of the process

Return type:

Numpy array

plot(data_full, num_instances: int, save: bool = False, plot: bool = False, average_and_max: bool = False, plotlog: bool = False)[source]

Plot the simulation results.

Parameters:
  • data_full (Any) – The full data to plot

  • num_instances (int) – The number of instances to plot

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to plot the data

  • average_and_max (bool) – Whether to plot the average and max values

  • plotlog (bool) – Whether to plot the data on a logarithmic scale

Returns:

None

Return type:

None

plot_2d(data_2d: ndarray, num_instances: int, save: bool = False, plot: bool = True)[source]

Plot the 2D simulation results.

Parameters:
  • data_2d (np.ndarray) – 2D simulation data

  • num_instances (int) – Number of instances simulated

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to display the plot

Returns:

None

Return type:

None

plot_2dt(data_2d: ndarray, num_instances: int, save: bool = False, plot: bool = True)[source]

Plot the 2D simulation results in a 3D graph with time as the third dimension.

Parameters:
  • data_2d (np.ndarray) – 2D simulation data

  • num_instances (int) – Number of instances simulated

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to display the plot

Returns:

None

Return type:

None

plot_3d(data_3d: ndarray, num_instances: int, save: bool = False, plot: bool = True)[source]

Plot the 3D simulation results.

Parameters:
  • data_3d (np.ndarray) – 3D simulation data

  • num_instances (int) – Number of instances simulated

  • save (bool) – Whether to save the plot

  • plot (bool) – Whether to display the plot

Returns:

None

Return type:

None

plot_growth_rate_of_average_3d(instance_range, time_range, instance_step, time_step, simulation_timestep=0.01, step_type='linear', filename='growth_rate_of_average_function', save_html=True)[source]

Draw a 3D graph of average growth rate as a function of number of instances and time.

Parameters:
  • instance_range (Tuple) – Tuple of (min_instances, max_instances)

  • time_range (Tuple) – Tuple of (min_time, max_time)

  • instance_step (float) – Step size for instances

  • time_step (float) – Step size for time

  • simulation_timestep (float) – Time step for the simulation (used in the simulate method)

  • step_type (str) – ‘linear’ or ‘logarithmic’

  • filename (str) – Path to save the results (if None, results won’t be saved)

  • save_html (bool) – Whether to save the interactive 3D plot as an HTML file

Returns:

None

Return type:

None

plot_moment(times, moment, label, mask, save: bool = False)[source]

Visualize a given moment of the process - a helper function for the next methods.

Parameters:
  • times (np.ndarray) – The times at which the process is sampled

  • moment (np.ndarray) – The moment to plot

  • label (str) – The label of the moment

  • mask (int) – The mask to apply to the data

  • save (bool) – Whether to save the plot

Returns:

None

Return type:

None

plot_weights(times, weights, save)[source]

Plot the simulated weights.

Parameters:
  • times (np.ndarray) – Array of time values

  • weights (np.ndarray) – Array of simulated weights

  • save (bool) – Whether to save the plot

Returns:

None

Return type:

None

property process_class: Type[Any]

Get the process class of the process. Here, the process class is the class in the stochastic library that corresponds to the process. It has nothing to do with the Process class itself (inside the Ergodicity Library).

Returns:

The process class of the process

Return type:

Type[Any]

relative_variance_pea(num_instances: int = 5, t: float = 10, timestep: float = 0.01, n: int = 1000000, save: bool = False) float[source]

Calculate the relative variance of the process using the PEA method: the process is simulated for a given time and its relative variance is then computed.

Parameters:
  • num_instances (int) – The number of instances of the process to use in the pea estimation

  • t (float) – The time to simulate

  • timestep (float) – The time step for the simulation

  • n (int) – The number of instances used to estimate the ensemble average (should be rather large)

  • save (bool) – Whether to save the results to a file

Returns:

The relative variance of the process

save_to_file(data, file_name: str, save: bool = False)[source]

Save the data to a file.

Parameters:
  • data (Any) – The data to save

  • file_name (str) – The name of the file

  • save (bool) – Whether to save the file

Returns:

None

Return type:

None

Raises:

Exception – If the file is created but empty

self_averaging_time(num_instances: int = 5, t: float = 10, timestep: float = 0.01, n: int = 1000, plot: bool = True) float[source]

Calculate the self-averaging time of the process. The self-averaging time is the time after which the process ensemble starts to behave as a time average and stops behaving as an ensemble average (i.e., stops self-averaging). This method works properly only if the selected t is long enough. We are not aware of a formal method for choosing a sufficiently long t for an arbitrary process, so you must check this yourself. The method works only for multiplicative processes.

Parameters:
  • num_instances (int) – The number of instances of the process to use in the pea estimation

  • t (float) – The time to simulate

  • timestep (float) – The time step for the simulation

  • n (int) – The number of instances used to estimate the ensemble average (should be rather large)

  • plot (bool) – Whether to plot the increments

Returns:

The empirical estimation of self-averaging time of the process

Return type:

float

separate(data)[source]

Separate the discrete time points and the corresponding process values in datasets typically generated by methods in the Ergodicity Library. It is typically needed to extract the simulated values without the time points (or just the time points), to plot the data, or to pass the data to a custom method. It returns the times and the data as separate arrays.

Parameters:

data (Any) – The data to separate

Returns:

The times and the data

Return type:

Any

simulate(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, plot: bool = False, average_and_max: bool = False, plotlog: bool = False, X0: float = None) Any[source]

Simulate the process using the given stochastic process class. This is the general, widely used simulation method; it builds on several of the intermediate methods defined above, and many other functions and methods rely on it.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the results to a file

  • plot (bool) – Whether to plot the results

  • average_and_max (bool) – Whether to plot the average and max values

  • plotlog (bool) – Whether to plot the data on a logarithmic scale

  • X0 (float) – Initial value for the process

Returns:

Simulated data array of shape (num_instances + 1, num_steps) where the first row is time and the rest are process values

Return type:

NumPy array of shape (num_instances + 1, num_steps)
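
A brief usage sketch relying on the documented return shape; the slicing mirrors what the separate method does:

    from ergodicity.process.basic import WienerProcess

    data = WienerProcess().simulate(t=10, timestep=0.01, num_instances=5)

    times = data[0]     # first row: the time grid
    paths = data[1:]    # remaining rows: one simulated instance per row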

simulate_2d(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, plot: bool = False) Any[source]

Simulate a 2D process by combining two 1D simulations.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Returns:

Simulated 2D data array

Return type:

NumPy array of shape (2 * num_instances + 1, num_steps)

simulate_3d(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, plot: bool = False) Any[source]

Simulate a 3D process by combining three 1D simulations.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Returns:

Simulated 3D data array

Return type:

NumPy array of shape (3 * num_instances + 1, num_steps)

simulate_distribution(num_instances: int = 100000, t: float = 1, timestep: float = 0.01, save: bool = False, plot: bool = True) Any[source]

Simulate the probability distribution corresponding to the process.

Parameters:
  • num_instances (int) – The number of instances to simulate

  • t (float) – The time to simulate

  • timestep (float) – The time step for the simulation

  • save (bool) – Whether to save the results to a file

  • plot (bool) – Whether to plot the distribution

Returns:

The distribution of the process represented as a histogram

Return type:

Tuple

simulate_ensembles(t: float = 10, timestep: float = 0.01, num_instances: int = 5, num_ensembles: int = 10, save: bool = False, plot: bool = False, plot_separate_ensembles: bool = True, X0: float = None) Any[source]

Simulate multiple ensembles of the process and visualize the results.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • num_ensembles (int) – Number of ensembles to simulate

  • save (bool) – Whether to save the results to a file

  • plot (bool) – Whether to plot the results

  • plot_separate_ensembles (bool) – Whether to plot the separate ensembles

  • X0 (float) – Initial value for the process

Returns:

The ensemble simulation data and the total average

Return type:

Any

simulate_live(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, speed: float = 1) Any[source]

Simulate the process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the simulation data

  • speed (float) – Speed multiplier for the video (default is 1.0, higher values make the video faster)

Returns:

Video file of the simulation

Return type:

str

simulate_live_2d(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, speed: float = 1.0) str[source]

Simulate the 2D process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the simulation data

  • speed (float) – Speed multiplier for the video (default is 1.0, higher values make the video faster)

Returns:

Video file name of the simulation

Return type:

str

Exception:

CalledProcessError if the video cannot be saved

simulate_live_2dt(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, speed: float = 1.0) tuple[str, str][source]

Simulate the 2D process live with time as the third dimension and save as a video file and interactive plot.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the simulation data

  • speed (float) – Speed multiplier for the video (default is 1.0, higher values make the video faster)

Returns:

Tuple of video file name and interactive plot file name

Return type:

Tuple

Exception:

CalledProcessError if the video cannot be saved

simulate_live_3d(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, speed: float = 1.0) str[source]

Simulate the 3D process live and save as a video file.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the simulation data

  • speed (float) – Speed multiplier for the video (default is 1.0, higher values make the video faster)

Returns:

Video file name of the simulation

Return type:

str

Exception:

CalledProcessError if the video cannot be saved

simulate_until(timestep: float = 0.01, num_instances: float = 5, X0: float = None, condition: Callable[[...], bool] = None, save=False, plot=True) Any[source]

Simulate the process until a certain condition is reached.

Parameters:
  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • X0 (float) – Initial value for the process

  • condition (Callable[..., bool]) – The condition to reach

  • save (bool) – Whether to save the simulation results

  • plot (bool) – Whether to plot the simulation results

Returns:

The simulated data

Return type:

NumPy array of shape (num_instances + 1, num_steps)

simulate_weights(t: float = 10, timestep: float = 0.01, num_instances: int = 5, save: bool = False, plot: bool = False) ndarray[source]

Simulate the weights (relative shares) of the process instances in the ensemble.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the results to a file

  • plot (bool) – Whether to plot the results

Returns:

Array of simulated weights

Return type:

NumPy array of shape (num_instances + 1, num_steps)

property simulate_with_differential: bool

Get the simulate_with_differential property of the process. It determines whether the process is simulated using differential methods. If it is False, the process is simulated by sampling from the relevant probability distribution.

Returns:

The simulate_with_differential property of the process

Return type:

bool

time_average(t: float = 1000000, timestep: float = 0.01, save: bool = False) Any[source]

Calculate the time average approximation of the process in a direct way (using one process instance and long simulation time).

Parameters:
  • t (float) – The time to simulate

  • timestep (float) – The time step for the simulation

  • save (bool) – Whether to save the results to a file

Returns:

The time average of the process

Return type:

float

time_average_expression() str[source]

Return the symbolic expression for the time average of the process if possible.

Returns:

The symbolic expression for the time average of the process

Return type:

str

property types: List[str]

Get the types of the process. The types are used to categorize the process. They are custom labels that can be used to identify the process.

Returns:

The types of the process

Return type:

List[str]

visualize_moment(data=None, label='Mean', mask=100, save: bool = False, t=10, timestep=0.01, num_instances=10)[source]

Visualize a given moment of the process - a helper function for the next methods.

Parameters:
  • data (NumPy array) – The data array of the process

  • label (str) – The label of the moment

  • mask (int) – The mask to apply to the data

  • save (bool) – Whether to save the plot

Returns:

None

Return type:

None

visualize_moments(data=None, mask=100, save: bool = False, t=10, timestep=0.01, num_instances=10)[source]

Visualize the first, the second, the third, and the fourth moments of the process.

Parameters:
  • data (NumPy array) – The data array of the process

  • mask (int) – The mask to apply to the data

  • save (bool) – Whether to save the plot

  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

Returns:

None

Return type:

None

ergodicity.process.definitions.check_simulate_with_differential(self)[source]

Check if the simulate method uses the differential method.

Parameters:

self (Process) – The process object

Returns:

True if the simulate method uses the differential method, False otherwise

Return type:

bool

ergodicity.process.definitions.correlation_to_covariance(correlation_matrix, std_devs)[source]

Transform a correlation matrix into a variance-covariance matrix.

Parameters:
  • correlation_matrix (np.ndarray) – The correlation matrix

  • std_devs (np.ndarray) – The standard deviations of the variables

Returns:

The variance-covariance matrix

Return type:

np.ndarray

ergodicity.process.definitions.create_correlation_matrix(size, correlation)[source]

Create a correlation matrix with given correlations between all elements.

Parameters:
  • size (int) – The size of the correlation matrix

  • correlation (float) – The correlation value for all off-diagonal elements

Returns:

The correlation matrix

Return type:

np.ndarray
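
The two helpers above implement the standard relationship cov = diag(std) @ corr @ diag(std). A self-contained NumPy sketch of that transformation (the library functions are assumed to produce equivalent results):

```python
import numpy as np

# Build an equicorrelation matrix, as create_correlation_matrix does.
size, correlation = 3, 0.6
corr = np.full((size, size), correlation)
np.fill_diagonal(corr, 1.0)

# Convert it to a variance-covariance matrix, as correlation_to_covariance does.
std_devs = np.array([1.0, 2.0, 0.5])
cov = np.diag(std_devs) @ corr @ np.diag(std_devs)
print(cov)
```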

ergodicity.process.definitions.simulation_decorator(simulate_func: Callable) Callable[source]

Decorator for simulation methods to add verbose option.

Parameters:

simulate_func (Callable) – The simulation method to decorate

Returns:

The decorated simulation method

Return type:

Callable
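
As an illustration of the pattern (not the library's actual implementation), such a decorator can wrap a simulation method and add a verbose keyword:

```python
from functools import wraps
from typing import Callable

def verbose_simulation(simulate_func: Callable) -> Callable:
    """Illustrative stand-in for simulation_decorator: adds a verbose option."""
    @wraps(simulate_func)
    def wrapper(self, *args, verbose: bool = False, **kwargs):
        if verbose:
            print(f"Starting simulation: {getattr(self, 'name', type(self).__name__)}")
        result = simulate_func(self, *args, **kwargs)
        if verbose:
            print("Simulation finished")
        return result
    return wrapper
```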

ergodicity.process.discrete module

ergodicity.process.increments module

increments Submodule

The Increments Submodule provides a comprehensive framework for generating and managing increments of different stochastic processes. These increments represent the discrete changes in the state of a process over small time steps, and they are the building blocks for simulating various types of continuous-time stochastic processes. The submodule supports a wide range of processes, including standard Wiener processes, Lévy processes, and Fractional Brownian Motions, each of which can exhibit different types of random behavior.

Key Features:

  1. Wide Range of Increments:

    • The submodule supports increments for standard Wiener processes (Brownian motion), Lévy stable processes (including special cases like the Cauchy process), and Fractional Brownian Motion (FBM) with different Hurst parameters.

    • Increments are tailored to ensure that they have unit variance, making them suitable for integration in stochastic differential equations (SDEs) with appropriate scaling.

  2. Handling Non-Gaussian Processes:

    • Lévy stable processes, which generalize Brownian motion by allowing heavy-tailed distributions and non-Gaussian behavior, are included in this submodule. These processes can be used to model extreme events or phenomena with jumps, offering a more flexible framework than Gaussian-based processes.

    • Special cases like the Cauchy Process (Lévy stable with α = 1) are explicitly supported.

  3. Flexible Scaling and Time-Stepping:

    • Each increment takes into account the timestep of the process, ensuring that the increments are appropriately scaled. This allows users to simulate processes over arbitrary time intervals while maintaining correct statistical properties.

    • Time-stepping is critical, especially for non-independent processes like Fractional Brownian Motion, where increments at different times can be correlated depending on the Hurst parameter.

  4. Variance-Controlled Increments:

    • All increments are scaled to have a variance of 1 by default, making them suitable for various applications, including SDEs and Monte Carlo simulations, without requiring further normalization.

    • This ensures consistency across different processes and simplifies the simulation of processes with different characteristics.

Increments Overview:

  1. Wiener Process (Brownian Motion):

    • A standard Wiener process, also known as Brownian motion, is a continuous-time stochastic process with independent, normally distributed increments. This is the canonical process used in SDEs and financial models.

    • The increments ( dW ) are generated using a standard normal distribution with mean zero and variance equal to the timestep ( dt ).

    ```python
    WP = WienerProcess()
    dW = WP.increment()
    ```

  2. Lévy Stable Process:

    • Lévy stable processes generalize Brownian motion by introducing heavy-tailed distributions, characterized by the stability parameter α. For α = 2, the Lévy process reduces to a standard Wiener process.

    • The submodule provides increments for various values of α, including special cases like the Cauchy Process (α = 1).

    ```python
    # Increment for a Lévy process with α = 1.5
    LP15 = LevyStableProcess(alpha=1.5)
    dL_15 = LP15.increment()

    # Cauchy Process (α = 1)
    LPC = LevyStableProcess(alpha=1)
    dCauchy = LPC.increment()

    # Sub-Gaussian Lévy process (α = 0.5)
    LP05 = LevyStableProcess(alpha=0.5)
    dL_05 = LP05.increment()
    ```

  3. Fractional Brownian Motion:

    • Fractional Brownian Motion (FBM) is a generalization of the Wiener process where the increments are not independent. The degree of correlation between increments is controlled by the Hurst parameter ( H ).

    • The submodule provides FBM increments for different values of ( H ), allowing the user to model long-range dependent processes (for ( H > 0.5 )) or short-range dependent processes (for ( H < 0.5 )).

    ```python
    # Fractional Brownian Motion with H = 0.5 (standard Brownian motion)
    FBM = StandardFractionalBrownianMotion(hurst=0.5)
    dFBM = FBM.increment()

    # FBM with Hurst parameter H = 0.2 (short-range dependence)
    FBM02 = StandardFractionalBrownianMotion(hurst=0.2)
    dFBM02 = FBM02.increment()

    # FBM with Hurst parameter H = 0.8 (long-range dependence)
    FBM08 = StandardFractionalBrownianMotion(hurst=0.8)
    dFBM08 = FBM08.increment()
    ```

Ensuring Correct Time-Stepping:

The submodule ensures that the timestep used for generating increments matches the timestep of the underlying process. This consistency is critical, especially when simulating processes over non-uniform time grids or with adaptive time-stepping algorithms.

Researching Non-Independent Processes:

The submodule lays the groundwork for producing increments for non-independent processes, such as processes with memory or feedback. In particular, the behavior of Fractional Brownian Motion highlights the complexities involved in generating increments for correlated processes. Future extensions will explore other processes with complex dependence structures, such as Ornstein-Uhlenbeck processes or mean-reverting processes with stochastic volatility.

Usage Example:

```python
from increments import WienerProcess, LevyStableProcess, StandardFractionalBrownianMotion

# Generate Wiener process increment
WP = WienerProcess()
dW = WP.increment()

# Generate Lévy process increment (α = 1.5)
LP15 = LevyStableProcess(alpha=1.5)
dL_15 = LP15.increment()

# Generate Fractional Brownian Motion increment (H = 0.8)
FBM08 = StandardFractionalBrownianMotion(hurst=0.8)
dFBM08 = FBM08.increment()
```

ergodicity.process.increments.dependent_increments()[source]

ergodicity.process.lib module

This submodule provides a comprehensive suite of stochastic differential equation (SDE) simulations for various financial and mathematical models. It includes custom decorators for both individual and system-wide simulations, allowing for flexible and efficient modeling of complex stochastic processes.

Key components:

Decorator functions:

custom_simulate: Wraps individual simulation functions

custom_simulate_system: Wraps system simulation functions

Individual process simulations, including but not limited to:

Ornstein-Uhlenbeck

Generalized Diffusion

Constant Elasticity of Variance

Cox-Ingersoll-Ross

Vasicek

Lévy processes

Geometric Brownian Motion

Heston model

Jump Diffusion models

Stochastic Volatility models

Fractional Brownian Motion

System simulations:

TestSystemSimulation: A system of SDEs including Ornstein-Uhlenbeck and Geometric Brownian Motion

LotkaVolterraSimulation: Stochastic version of the Lotka-Volterra predator-prey model

Each simulation function is decorated to handle time stepping, data storage, and optional plotting.
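
Usage sketch: each decorated wrapper is called directly with keyword arguments. The specific keyword arguments accepted (such as t and timestep below) depend on the wrapped model and are assumptions here:

```python
from ergodicity.process.lib import OrnsteinUhlenbeckSimulation, GeometricBrownianMotionSimulation

# Hypothetical keyword arguments; the exact parameter names expected by each
# decorated simulation depend on the underlying model's definition.
ou_data = OrnsteinUhlenbeckSimulation(verbose=True, t=10, timestep=0.01)
gbm_data = GeometricBrownianMotionSimulation(verbose=False, t=10, timestep=0.01)
```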

ergodicity.process.lib.CauchyOrnsteinUlehnbeckSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.ConstantElasticityOfVarianceSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.CoxIngersollRossSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.DoubleExponentialJumpDiffusionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.FractionalBrownianMotionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.GeneralizedDiffusionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.GeometricBrownianMotionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.GeometricFractionalBrownianMotionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.GeometricOrnsteinUhlenbeckSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.HestonSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.HullWhiteSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.JumpDiffusionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.KouJumpDiffusionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.Levy05OrnsteinUlehnbeckSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.Levy15OrnsteinUlehnbeckSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.LevyStableOrnsteinUlehnbeckSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.LotkaVolterraSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for system simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.MeanRevertingSquareRootDiffusionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.MertonJumpDiffusionSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.OrnsteinUhlenbeckSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.StochasticAlphaBetaRhoSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.StochasticVolatilityModelSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.SubordinatorSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.TestSystemSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for system simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.VarianceGammaSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.VasicekSimulation(verbose: bool = False, **kwargs) Any[source]

Wrapper function for simulation. Handles time stepping, data storage, and optional plotting.

Parameters:
  • verbose (bool) – Print simulation progress if True

  • kwargs – Custom parameters for simulation

Returns:

Simulation data

Return type:

Any

ergodicity.process.lib.custom_simulate(simulate_func: Callable) Callable[source]

Decorator for individual simulation functions. Handles time stepping, data storage, and optional plotting.

Parameters:

simulate_func (Callable) – Simulation function to be wrapped

Returns:

Wrapper function for simulation

Return type:

Callable

ergodicity.process.lib.custom_simulate_system(simulate_func: Callable) Callable[source]

Decorator for system simulation functions. Handles time stepping, data storage, and optional plotting.

Parameters:

simulate_func (Callable) – Simulation function to be wrapped

Returns:

Wrapper function for system simulation

Return type:

Callable

ergodicity.process.multiplicative module

multiplicative Submodule

This submodule provides a comprehensive framework for simulating and analyzing multiplicative stochastic processes. These processes model phenomena where changes in value are proportional to the current state, making them ideal for capturing growth dynamics, financial modeling, and systems exhibiting exponential behavior. The submodule includes implementations of both univariate and multivariate processes, with support for Brownian motion, Lévy processes, fractional dynamics, and heavy-tailed distributions.

Key Features:

  1. Multiplicative Nature:

    • All processes in this submodule exhibit multiplicative growth, meaning that increments are proportional

      to the current value. This feature is critical for modeling systems where non-negativity and relative

      changes are fundamental, such as in asset prices or population dynamics.

  2. Heavy-Tailed Distributions:

    • Several processes leverage Lévy stable distributions to capture extreme events and heavy-tailed behavior, accommodating scenarios with infinite variance. This is essential for accurately modeling rare but impactful events, especially in risk management, finance, and natural systems.

  3. Multidimensional Capabilities:

    • Multivariate extensions of key processes allow for the modeling of correlated, interacting systems. These include processes where different components are driven by shared stochastic factors, capturing the interdependence between multiple variables, such as portfolios of financial assets or ecological populations.

  4. Flexible Simulation and Analysis:

    • Built-in methods support the simulation of paths, growth rates, and increments, as well as the calculation of expected values, variances, and higher-order moments. Visualizations through Matplotlib and Plotly further aid in analyzing the behavior of these processes over time.

Available Processes:

  • Geometric Brownian Motion (GBM):

    A continuous-time stochastic process commonly used in finance to model stock prices. In this submodule, GBM is extended with customizable drift and volatility parameters and supports both closed-form solutions and simulation of growth rates.

  • Geometric Lévy Process:

    Combines the heavy-tailed properties of Lévy stable distributions with multiplicative dynamics. Useful for modeling processes with large, unpredictable jumps and extreme events.

  • Multivariate Geometric Brownian Motion (MGBM):

    An extension of GBM to multiple dimensions, allowing for the simulation of correlated stochastic processes with multiplicative growth in each dimension. Ideal for portfolios, interrelated economic indicators, and other systems with multiple interacting components.

  • Geometric Fractional Brownian Motion:

    A fractional extension of Brownian motion incorporating the Hurst parameter to model long-range dependence. Suitable for applications requiring memory effects, such as geophysical or economic time series.

  • Geometric Cauchy Process:

    A specific case of a Lévy process with Cauchy distribution, providing a mechanism for modeling extremely heavy-tailed behavior where variance is infinite.

  • Multivariate Geometric Lévy Process:

    Extends the Lévy process framework to multiple dimensions, allowing for correlated heavy-tailed behaviors across multiple interacting components. This class is particularly useful in fields such as finance, where multiple asset prices or risk factors may exhibit simultaneous extreme behaviors.

  • Geometric Generalized Hyperbolic Process:

    A generalization of the Lévy process that includes the generalized hyperbolic distribution, providing a flexible framework for modeling heavy-tailed phenomena with varying skewness and kurtosis.

  • Geometric Bessel Process:

    A Lévy process based on the Bessel distribution, useful for modeling processes with infinite activity and non-negative jumps. This process is particularly relevant in insurance and risk management contexts.

  • Geometric Squared Bessel Process:

    A variant of the Bessel process where the squared values of the process are considered, leading to different behavior and statistical properties.

Helper Functions:

  • implied_levy_correction:

    Calculates and visualizes the correction term for a range of alpha and beta parameters, helping to understand the behavior of Lévy processes under various conditions.

  • estimate_sigma:

    Provides an estimate of the sigma parameter for Geometric Lévy Processes across a grid of alpha and time values, with options for linear and non-linear regression to capture sigma dynamics.

Applications:

This submodule is versatile and applicable across various fields:

  • Finance:

    For modeling asset prices, portfolios, or other financial quantities where multiplicative dynamics and heavy-tailed behavior are important.

  • Risk Management:

    Particularly useful in scenarios where extreme, rare events (such as market crashes or catastrophic losses) need to be modeled.

  • Natural Systems:

    For capturing exponential growth dynamics, population models, or interacting ecological systems.

  • Telecommunications:

    In modeling bursty network traffic or data flows, where traffic patterns exhibit heavy-tailed behavior and large fluctuations.

By leveraging these processes, researchers and practitioners can model and analyze complex, real-world systems that exhibit multiplicative growth, heavy tails, and interdependencies, offering rich insights into stochastic dynamics.

class ergodicity.process.multiplicative.GeometricBesselProcess(name: str = 'Geometric Bessel Process', process_class: Type[Any] = None, dim: int = 10)[source]

Bases: StandardBesselProcess

GeometricBesselProcess represents a multiplicative extension of the StandardBesselProcess. This process combines the characteristics of a Bessel process with the multiplicative nature of geometric processes. The resulting stochastic process, denoted as (S_t)_{t≥0}, is defined as:

dS_t = S_t * dR_t

where R_t is the StandardBesselProcess of dimension d.

Key features of GeometricBesselProcess include:

  1. Multiplicative nature: Changes are proportional to the current value, preserving non-negativity.

  2. Dimension-dependent behavior: The underlying Bessel process characteristics (recurrence, transience) are preserved.

  3. Non-negative: The process is always positive, making it suitable for modeling quantities that cannot be negative.

This implementation extends the StandardBesselProcess class, adapting it to a geometric framework. It inherits the dimension-dependent properties of the Bessel process while providing a multiplicative growth model.

Applications of GeometricBesselProcess span various fields:

  • Finance: Modeling asset prices or interest rates with specific volatility structures.

  • Physics: Studying particle diffusion processes with multiplicative growth.

  • Biology: Analyzing population dynamics with radial growth patterns.

Researchers and practitioners should be aware of the increased complexity in interpretation compared to the standard Bessel process. The multiplicative nature introduces new dynamics that require careful consideration in both theoretical development and practical applications.

custom_increment(X: float, timestep: float = 0.01) float[source]

Calculate the custom increment for the process.

Parameters:
  • X (float) – Current value of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

float

class ergodicity.process.multiplicative.GeometricBrownianMotion(name: str = 'Geometric Brownian Motion', process_class: ~typing.Type[~typing.Any] = <class 'stochastic.processes.continuous.geometric_brownian_motion.GeometricBrownianMotion'>, drift: float = 0.0, volatility: float = 1.0)[source]

Bases: BrownianMotion

GeometricBrownianMotion represents a fundamental continuous-time stochastic process used to model the dynamics of various phenomena, particularly in finance and economics. The process, denoted as (S_t)_{t≥0}, satisfies the stochastic differential equation dS_t = μ S_t dt + σ S_t dW_t, where μ is the drift, σ is the volatility, and W_t is a standard Brownian motion.

custom_increment(X: float, timestep: float = 0.01) Any[source]

Calculate the custom increment for the process.

Parameters:
  • X (float) – Current value of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

float

plot_growth_rate(times, growth_rates, save)[source]

Plot the simulated growth rate of the Geometric Brownian Motion process.

Parameters:
  • times (np.ndarray) – Array of time values

  • growth_rates (np.ndarray) – Array of growth rates

  • save (bool) – Whether to save the plot

Returns:

None

Return type:

None

relative_variance_pea_theory(num_instances: int = 5, t: float = 10)[source]

Calculate the theoretical relative variance of a partial ensemble average (PEA). This quantity indicates whether the PEA is close to its expectation value: if the result is much smaller than 1, the PEA is close to its expectation value; otherwise, the process has exited the self-averaging regime.

Parameters:
  • num_instances (int) – Number of instances to simulate

  • t (float) – Total time for the simulation

Returns:

Relative variance of the process

Return type:

float

self_averaging_time_theory(num_instances: int = 5)[source]

Calculate the theoretical estimate of the self-averaging time for the Geometric Brownian Motion process. The self-averaging time is the time at which the process ensemble exits the self-averaging regime, meaning that the ensemble average of the process is no longer approximately equal to its expected value.

Parameters:

num_instances (int) – Number of instances to simulate

Returns:

Theoretical estimate of self-averaging time

Return type:

float

simulate_growth_rate(t: float = 10, timestep: float = 0.01, num_instances: int = 5, n_simulations: int = 5, save: bool = False, plot: bool = False) ndarray[source]

Simulate the growth rate of the Geometric Brownian Motion process.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • num_instances (int) – Number of instances to simulate

  • save (bool) – Whether to save the results to a file

  • plot (bool) – Whether to plot the results

Returns:

Array of simulated growth rates

Return type:

np.ndarray
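
A short usage sketch for GeometricBrownianMotion with arbitrary parameter values, calling simulate_growth_rate as documented above:

```python
from ergodicity.process.multiplicative import GeometricBrownianMotion

# Illustrative parameters: 5% drift, 20% volatility.
gbm = GeometricBrownianMotion(drift=0.05, volatility=0.2)

# Simulate the growth rate; small ensemble sizes are used here for speed.
growth_rates = gbm.simulate_growth_rate(
    t=10, timestep=0.01, num_instances=5, n_simulations=5, save=False, plot=False
)
```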

class ergodicity.process.multiplicative.GeometricCauchyProcess(name: str = 'Geometric Cauchy Process', scale: float = 0.7071067811865476, loc: float = 0.5)[source]

Bases: GeometricLevyProcess

GeometricCauchyProcess represents a geometric Lévy process driven by a Cauchy process, the special case of a Lévy stable process with stability parameter α = 1 and skewness β = 0. It combines extremely heavy-tailed increments, for which the variance of the driving distribution is infinite, with multiplicative dynamics, so that changes are proportional to the current value and the process remains non-negative.

This implementation is a specialization of GeometricLevyProcess with α fixed to 1, parameterized by the scale and loc parameters of the driving Cauchy distribution. It is useful for modeling multiplicative growth subject to extreme, unpredictable fluctuations, for example in finance and risk management applications where very heavy tails are required.

class ergodicity.process.multiplicative.GeometricFractionalBrownianMotion(name: str = 'Geometric Fractional Brownian Motion', process_class: Type[Any] = None, mean: float = 0.0, volatility: float = 1.0, hurst: float = 0.5)[source]

Bases: NonItoProcess

GeometricFractionalBrownianMotion represents a multiplicative (geometric) extension of fractional Brownian motion. The process combines the long-range dependence of fractional Brownian motion, governed by the Hurst parameter H, with multiplicative dynamics, so that changes are proportional to the current value and the process remains positive.

Key properties include:

  1. Multiplicative nature: Changes are proportional to the current value, preserving non-negativity.

  2. Memory effects: For H > 0.5 the driving increments are positively correlated (long-range dependence), for H < 0.5 they are negatively correlated, and H = 0.5 recovers standard geometric Brownian motion.

  3. Non-Markovian dynamics: Because the driving increments are correlated, the process is implemented as a NonItoProcess rather than an Itô process.

The process is parameterized by a mean (drift), a volatility, and the Hurst parameter. It is suitable for applications requiring memory effects, such as geophysical or economic time series (see the submodule overview above).

custom_increment(X: float, timestep: float = 0.01) Any[source]

Calculate the custom increment for the process.

Parameters:
  • X (float) – Current value of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

Any

class ergodicity.process.multiplicative.GeometricGeneralizedHyperbolicProcess(name: str = 'Multiplicative Generalized Hyperbolic Process', process_class: ~typing.Type[~typing.Any] = None, plambda: float = 0, alpha: float = 1.7, beta: float = 0, loc: float = 0.0005, delta: float = 0.01, t_scaling: ~typing.Callable[[float], float] = <function GeometricGeneralizedHyperbolicProcess.<lambda>>, **kwargs)[source]

Bases: GeneralizedHyperbolicProcess

GeometricGeneralizedHyperbolicProcess represents a multiplicative version of the Generalized Hyperbolic Process. This continuous-time stochastic process combines the flexibility of the Generalized Hyperbolic distribution with multiplicative dynamics, making it suitable for modeling phenomena where changes are proportional to the current state and exhibit complex distributional characteristics.

The process is defined as: dS(t) = S(t) * dX(t)

where X(t) is a Generalized Hyperbolic Process.

Key properties include:

  1. Multiplicative nature: Changes are proportional to the current value, preserving non-negativity.

  2. Flexible distribution: Can model a wide range of tail behaviors and asymmetries.

  3. Nests several important distributions: Including normal, Student’s t, variance-gamma, and normal-inverse Gaussian.

This implementation extends the GeneralizedHyperbolicProcess class, adapting it to a multiplicative framework. It’s explicitly set as multiplicative (_multiplicative = True) and uses an internal simulator for precise control over the process generation.

Researchers and practitioners should be aware of the increased complexity in parameter estimation and interpretation compared to simpler processes. The rich behavior of this process requires careful consideration in both theoretical development and practical applications.

custom_increment(X: float, timestep: float = 0.01) Any[source]

Calculate the custom increment for the process.

Parameters:
  • X (float) – Current value of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

float

class ergodicity.process.multiplicative.GeometricLevyProcess(name: str = 'Geometric Levy Process', process_class: Type[Any] = None, alpha: float = 2, beta: float = 0, scale: float = 0.7071067811865476, loc: float = 0.5)[source]

Bases: LevyStableProcess

GeometricLevyProcess represents a stochastic process that combines the heavy-tailed characteristics of Lévy stable distributions with the multiplicative nature of geometric processes. This continuous-time process, denoted as (S_t)_{t≥0}, extends the concept of Geometric Brownian Motion to accommodate more extreme fluctuations and asymmetry often observed in complex systems. It follows the multiplicative dynamics

dS_t = S_t * dX_t

where X_t is a Lévy stable process characterized by four key parameters:

  1. α (alpha): Stability parameter (0 < α ≤ 2), controlling tail heaviness. Smaller values lead to heavier tails and more extreme events.

  2. β (beta): Skewness parameter (-1 ≤ β ≤ 1), determining the asymmetry of the distribution.

  3. σ (scale): Scale parameter, influencing the spread of the distribution.

  4. μ (loc): Location parameter, affecting the central tendency of the process.

Key properties of the Geometric Lévy Process include:

  1. Multiplicative nature: Changes are proportional to the current value, preserving non-negativity.

  2. Heavy-tailed behavior: Capable of modeling extreme events more effectively than Gaussian-based processes.

  3. Potential for infinite variance: For α < 2, capturing highly volatile phenomena.

  4. Self-similarity: Exhibiting fractal-like behavior in certain parameter regimes.

This implementation inherits from LevyStableProcess, adapting it to a geometric framework. It’s explicitly set as multiplicative (_multiplicative = True) and uses an internal simulator (_external_simulator = False) for precise control over the process generation.

The class is versatile, finding applications in various fields:

  • Financial modeling: Asset prices with extreme movements, particularly in volatile markets.

  • Risk management: Modeling scenarios with potential for large, sudden changes.

  • Physics: Describing growth processes in complex systems with potential for rapid fluctuations.

  • Telecommunications: Modeling bursty traffic patterns in networks.

Notable features:

  • Flexible parameterization: Allows fine-tuning of tail behavior, skewness, scale, and location.

  • Simulation control: Uses differential-based simulation (_simulate_with_differential = True) for accurate trajectory generation.

  • Type categorization: Classified under both “geometric” and “levy” types, reflecting its dual nature.

Researchers and practitioners should be aware of the increased complexity in parameter estimation and interpretation compared to Gaussian-based models. The rich behavior of this process, especially for α < 2, requires careful consideration in application and analysis. While powerful in capturing extreme behaviors, users should ensure the chosen parameters align with the underlying phenomena being modeled and be prepared for potentially counterintuitive results in statistical analyses.
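
A brief usage sketch with illustrative parameter values (the implied_correction method is documented below):

```python
from ergodicity.process.multiplicative import GeometricLevyProcess

# Illustrative parameters: heavier tails than Gaussian (alpha < 2), no skew.
glp = GeometricLevyProcess(alpha=1.5, beta=0.0, scale=0.1, loc=0.0005)

# Implied correction term, analogous to -0.5 * sigma**2 * t for GBM.
correction = glp.implied_correction(t=10, timestep=0.01, save=False, plot=False)
```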

custom_increment(X: float, timestep: float = 0.01) Any[source]

Calculate the custom increment for the process.

Parameters:
  • X (float) – Current value of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

float

implied_correction(t: float = 10, timestep: float = 0.01, save: bool = True, plot: bool = False) Any[source]

Calculate the implied correction term for the Geometric Levy Process (analogous to -0.5 * sigma^2 * t for GBM).

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

  • save (bool) – Whether to save the results

  • plot (bool) – Whether to plot the results

Returns:

Implied correction

Return type:

float

sigma_divergence(t: float = 10, timestep: float = 0.01) Any[source]

Calculate the divergence between the theoretical and empirical sigma values for the process.

Parameters:
  • t (float) – Total time for the simulation

  • timestep (float) – Time step for the simulation

Returns:

Theoretical and empirical sigma values

Return type:

Any

class ergodicity.process.multiplicative.GeometricSquaredBesselProcess(name: str = 'Geometric Squared Bessel Process', process_class: Type[Any] = None, dim: int = 10)[source]

Bases: SquaredBesselProcess

GeometricSquaredBesselProcess represents a multiplicative extension of the SquaredBesselProcess. This process combines the characteristics of a squared Bessel process with the multiplicative nature of geometric processes. The resulting stochastic process, denoted as (S_t)_{t≥0}, is defined as:

dS_t = S_t * dR²_t

where R²_t is the SquaredBesselProcess of dimension d.

Key features of GeometricSquaredBesselProcess include:

  1. Multiplicative nature: Changes are proportional to the current value, preserving non-negativity.

  2. Dimension-dependent behavior: The underlying squared Bessel process characteristics are preserved.

  3. Non-negative: The process is always positive, making it suitable for modeling quantities that cannot be negative.

  4. Inherited properties of the squared Bessel process: Including recurrence/transience behavior based on dimension.

This implementation extends the SquaredBesselProcess class, adapting it to a geometric framework. It inherits the dimension-dependent properties of the squared Bessel process while providing a multiplicative growth model.

Applications of GeometricSquaredBesselProcess span various fields:

  • Finance: Modeling volatility of asset prices or interest rates with specific structures.

  • Physics: Studying particle diffusion processes with multiplicative squared radial growth.

  • Biology: Analyzing population dynamics with quadratic radial growth patterns.

  • Queueing theory: Modeling busy periods with multiplicative squared characteristics.

Researchers and practitioners should be aware of the increased complexity in interpretation compared to the standard squared Bessel process. The multiplicative nature introduces new dynamics that require careful consideration in both theoretical development and practical applications. The dimension parameter d plays a crucial role in determining the behavior of the process, affecting its recurrence properties and long-term behavior.

custom_increment(X: float, timestep: float = 0.01) float[source]

Calculate the custom increment for the process.

Parameters:
  • X (float) – Current value of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

float

class ergodicity.process.multiplicative.MultivariateGeometricBrownianMotion(name: str = 'Multivariate Geometric Brownian Motion', drift: List[float] = array([0., 0., 0.]), scale: List[List[float]] = array([[1., 0.6, 0.3], [0.6, 1., 0.6], [0.3, 0.6, 1.]]))[source]

Bases: MultivariateBrownianMotion

MultivariateGeometricBrownianMotion represents a sophisticated extension of Geometric Brownian Motion to multiple dimensions, providing a powerful framework for modeling correlated, exponentially growing processes subject to random fluctuations. This continuous-time stochastic process, denoted as (S_t)_{t≥0} where S_t is a vector in R^n, is characterized by the system of stochastic differential equations:

dS_i(t) = μ_i S_i(t) dt + Σ_ij S_i(t) dW_j(t) for i = 1, …, n

where:

  • μ (drift) is a vector representing the average rates of return or growth for each component

  • Σ (scale) is a matrix capturing both volatilities and correlations between components

  • W_t is a vector of standard Brownian motions

Key properties include:

  1. Multiplicative nature in each dimension: Changes are proportional to current values, preserving non-negativity of each component.

  2. Log-normality: The logarithm of each component follows a multivariate normal distribution.

  3. Complex correlation structure: Allows modeling of intricate dependencies between components.

  4. Non-stationary: Variances and covariances increase over time.

This implementation extends the MultivariateBrownianMotion class, adapting it to the geometric nature of the process. It’s explicitly set as multiplicative (_multiplicative = True) and uses an internal simulator (_external_simulator = False) for precise control over the process generation.

Notable features:

  • Flexible initialization: Accepts drift vector and scale matrix as inputs, allowing for detailed specification of growth rates and interdependencies.

  • Numpy integration: Utilizes numpy arrays for efficient computation and manipulation of multi-dimensional data.

  • Parameter handling: The _has_wrong_params flag is set to True, indicating potential need for parameter adjustment in certain contexts.

  • Initial state: _X is initialized to a vector of ones, reflecting the typical starting point for geometric processes.

Researchers and practitioners should be aware of the increased complexity in parameter estimation and interpretation compared to univariate models. The interplay between drift components and the scale matrix requires careful consideration, particularly in high-dimensional settings. While powerful in capturing complex, correlated growth phenomena, users should ensure the model’s assumptions align with the characteristics of the system being studied. The log-normal nature of the process may not be suitable for all applications, and consideration of alternative multivariate processes (e.g., based on Lévy processes) might be necessary for scenarios involving more extreme events or heavier tails.
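
A usage sketch with an arbitrary three-dimensional drift vector and scale matrix (the simulate_weights and simulate_growth_rate methods are documented below):

```python
import numpy as np
from ergodicity.process.multiplicative import MultivariateGeometricBrownianMotion

# Illustrative parameters for three correlated components.
drift = np.array([0.03, 0.05, 0.02])
scale = np.array([
    [1.0, 0.6, 0.3],
    [0.6, 1.0, 0.6],
    [0.3, 0.6, 1.0],
])
mgbm = MultivariateGeometricBrownianMotion(drift=drift, scale=scale)

# Relative weights of the components and the expected log growth rate.
weights = mgbm.simulate_weights(t=10, timestep=0.01, save=False, plot=False)
growth = mgbm.simulate_growth_rate(t=10, timestep=0.01, n_simulations=5, save=False, plot=False)
```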

calculate_expected_log_growth_rate(weights: ndarray) ndarray[source]

Calculate the expected log growth rate at each time step.

Parameters:

weights (np.ndarray) – numpy array of shape (num_instances, num_time_steps)

Returns:

numpy array of expected log growth rates for each time step

Return type:

np.ndarray

custom_increment(X: List[float], timestep: float = 0.01) Any[source]

Calculate the custom increment for the process.

Parameters:
  • X (List[float]) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

List[float]

plot_weights_and_growth_rate(times, weights, log_growth_rates, save)[source]

Plot the simulated weights and expected log growth rate of the Multivariate Geometric Brownian Motion process.

Parameters:
  • times (np.ndarray) – Array of time values

  • weights (np.ndarray) – Array of weights

  • log_growth_rates (np.ndarray) – Array of expected log growth rates

  • save (bool) – Whether to save the plot

Returns:

None

Return type:

None

simulate_ensemble(t: float = 10, n: int = 5, timestep: float = 0.01, save: bool = False) Any[source]

Simulate a single path for a portfolio consisting of multiple instances.

Parameters:
  • t (float) – Total simulation time

  • timestep (float) – Time step for simulation

  • n (int) – Number of instances in the portfolio

  • save (bool) – Whether to save the results

Returns:

Array of portfolio values and geometric means over time

Return type:

Any

simulate_growth_rate(t: float = 10, timestep: float = 0.01, n_simulations: int = 5, save: bool = True, plot: bool = False) ndarray[source]

Simulate the expected log growth rate of the Multivariate Geometric Brownian Motion process.

Parameters:
  • t (float) – Total simulation time

  • timestep (float) – Time step for simulation

  • n_simulations (int) – Number of simulations to average

  • save (bool) – Whether to save the results

  • plot (bool) – Whether to plot the results

Returns:

Array of simulated expected log growth rates

Return type:

np.ndarray

simulate_weights(t: float = 10, timestep: float = 0.01, save: bool = True, plot: bool = False) ndarray[source]

Simulate the weights of the Multivariate Geometric Brownian Motion process.

Parameters:
  • t (float) – Total simulation time

  • timestep (float) – Time step for simulation

  • save (bool) – Whether to save the results

  • plot (bool) – Whether to plot the results

Returns:

Array of simulated weights

Return type:

np.ndarray

class ergodicity.process.multiplicative.MultivariateGeometricLevy(name: str = 'Multivariate Geometric Levy Process', alpha: float = 1.5, beta: float = 0, scale: float = 1, loc: ndarray = None, correlation_matrix: ndarray = None, pseudovariances: ndarray = None)[source]

Bases: MultivariateLevy

MultivariateGeometricLevy represents an advanced stochastic process that combines the heavy-tailed characteristics of multivariate Lévy stable distributions with the multiplicative nature of geometric processes in multiple dimensions. This sophisticated continuous-time process, denoted as (S_t)_{t≥0} where S_t is a vector in R^n, is defined by the exponential of a multivariate Lévy stable process:

S_i(t) = S_i(0) * exp(X_i(t)) for i = 1, …, n

where X(t) = (X_1(t), …, X_n(t)) is a multivariate Lévy stable process.

Key parameters:

  1. alpha (α): Stability parameter (0 < α ≤ 2), controlling tail heaviness across all dimensions.

  2. beta (β): Skewness parameter (-1 ≤ β ≤ 1), determining asymmetry.

  3. scale: Global scale parameter influencing the spread of the distribution.

  4. loc: Location vector (μ ∈ R^n), shifting the process in each dimension.

  5. correlation_matrix: Specifies the correlation structure between dimensions.

  6. pseudovariances: Vector of pseudovariances for each dimension, generalizing the concept of variance.

Key properties:

  1. Heavy-tailed behavior: Capable of modeling extreme events in multiple dimensions simultaneously.

  2. Complex dependency structure: Captures intricate correlations between components.

  3. Multiplicative nature: Preserves non-negativity in each dimension, suitable for modeling quantities like prices or sizes.

  4. Potential for infinite variance: For α < 2, allowing for highly volatile phenomena in multiple dimensions.

This implementation extends the MultivariateLevy class, adapting it to a geometric framework. It’s explicitly set as multiplicative (_multiplicative = True) and uses an internal simulator (_external_simulator = False) for precise control over the process generation.

Notable features:

  • Flexible parameterization: Allows fine-tuning of tail behavior, skewness, scale, and multidimensional dependencies.

  • Initialization with unit values: _X is initialized to a vector of ones, reflecting the typical starting point for geometric processes.

  • Parameter handling: The _has_wrong_params flag is set to True, indicating potential need for parameter adjustment in certain contexts.

Researchers and practitioners should be aware of several important considerations:

  1. Increased complexity in parameter estimation and interpretation compared to Gaussian-based multivariate models.

  2. Computational challenges in simulating and analyzing high-dimensional heavy-tailed processes.

  3. The need for specialized statistical techniques to handle the lack of finite moments when α < 2.

  4. Careful interpretation of results, especially in risk assessment and forecasting, due to the process’s capacity for extreme behaviors.

While the MultivariateGeometricLevy process offers a powerful framework for modeling complex, correlated, heavy-tailed phenomena in multiple dimensions, its sophisticated nature requires judicious application. Users should ensure that the chosen parameters align with the underlying phenomena being modeled and be prepared for potentially counterintuitive results in statistical analyses. The process’s rich behavior, especially for α < 2, necessitates careful consideration in both theoretical development and practical applications.

custom_increment(X: ndarray, timestep: float = 1.0) ndarray[source]

Calculate the custom increment for the process.

Parameters:
  • X (np.ndarray) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

np.ndarray

simulate_ensemble(t: float = 1.0, n: int = 1000, timestep: float = 0.01, save: bool = False) dict[source]

Simulate a single path for a portfolio consisting of multiple instances.

Parameters:
  • t (float) – Total simulation time

  • n (int) – Number of simulations for each time step

  • timestep (float) – Time step for simulation

  • save (bool) – Whether to save the results

Returns:

Dictionary containing portfolio values, geometric means, and weights over time

Return type:

dict
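
A usage sketch with illustrative parameters for a three-dimensional process (all values below are arbitrary):

```python
import numpy as np
from ergodicity.process.multiplicative import MultivariateGeometricLevy

# Illustrative parameters: moderately heavy tails, symmetric increments.
corr = np.array([
    [1.0, 0.4, 0.2],
    [0.4, 1.0, 0.4],
    [0.2, 0.4, 1.0],
])
mgl = MultivariateGeometricLevy(
    alpha=1.7,
    beta=0.0,
    scale=0.1,
    loc=np.zeros(3),
    correlation_matrix=corr,
    pseudovariances=np.ones(3),
)

# Ensemble simulation: dictionary with portfolio values, geometric means and weights.
result = mgl.simulate_ensemble(t=1.0, n=100, timestep=0.01)
```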

ergodicity.process.multiplicative.comments_false()[source]
ergodicity.process.multiplicative.comments_true()[source]
ergodicity.process.multiplicative.estimate_sigma(num_steps_alpha: int, num_steps_time: int, max_time: float, min_time: float = 1.0, max_alpha: float = 2.0, min_alpha: float = 1.0, timestep: float = 0.001, beta: float = 0, loc: float = 0.0001, scale: float = 0.0001, time_scale: Literal['linear', 'log'] = 'linear', alpha_scale: Literal['linear', 'log'] = 'linear', save_path: str = None, save_html: bool = True) Tuple[ndarray, Any, Any][source]

Estimate sigma(alpha, t) for GeometricLevyProcess.

Parameters:
  • num_steps_alpha (int) – Number of steps for alpha

  • num_steps_time (int) – Number of steps for time

  • max_time (float) – Maximum time

  • min_time (float) – Minimum time (default: 1.0)

  • max_alpha (float) – Maximum alpha (default: 2.0)

  • min_alpha (float) – Minimum alpha (default: 1.0)

  • timestep (float) – Fixed timestep for simulation

  • beta (float) – Fixed beta parameter

  • loc (float) – Fixed loc parameter

  • scale (float) – Fixed scale parameter

  • time_scale (str) – ‘linear’ or ‘log’ for time sampling

  • alpha_scale (str) – ‘linear’ or ‘log’ for alpha sampling

  • save_path (str) – Path to save the results (if None, results won’t be saved)

  • save_html (bool) – If True, save the graph as an interactive HTML file

Returns:

3D numpy array of sigma values, linear regression results, non-linear regression results

Return type:

Tuple[np.ndarray, Any, Any]
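
An illustrative call with a deliberately small grid (argument values are arbitrary; larger grids are computationally expensive):

```python
from ergodicity.process.multiplicative import estimate_sigma

# Small grid of alpha and time values, purely for illustration.
sigma_grid, linear_fit, nonlinear_fit = estimate_sigma(
    num_steps_alpha=5,
    num_steps_time=5,
    max_time=10.0,
    timestep=0.001,
    save_path=None,
    save_html=False,
)
```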

ergodicity.process.multiplicative.implied_levy_correction(alpha_range: Tuple[float, float], beta_range: Tuple[float, float], time_range: Tuple[float, float], alpha_step: float, beta_step: float, time_step: float, loc: float, scale: float, timestep: float, save_path: str = None, save_html: bool = True) Any[source]

Calculate and plot the correction term as a function of alpha and beta for a given loc and scale.

Parameters:
  • alpha_range (Tuple[float, float]) – Tuple of (min_alpha, max_alpha)

  • beta_range (Tuple[float, float]) – Tuple of (min_beta, max_beta)

  • time_range (Tuple[float, float] or float) – Tuple of (min_time, max_time) or a single time value

  • alpha_step (float) – Step size for alpha

  • beta_step (float) – Step size for beta

  • time_step (float) – Step size for time

  • loc (float) – Fixed loc parameter

  • scale (float) – Fixed scale parameter

  • timestep (float) – Fixed timestep for simulation

  • save_path (str) – Path to save the results (if None, results won’t be saved)

  • save_html (bool) – If True, save the graph as an interactive HTML file

Returns:

3D numpy array of correction terms

Return type:

NumPy array

ergodicity.process.with_memory module

with_memory Submodule

The With Memory Submodule focuses on stochastic processes that retain and utilize historical information to influence their future behavior. These processes deviate from classical Markovian models, which rely solely on the current state, by incorporating memory mechanisms that adjust their dynamics based on past states or increments. This submodule provides a framework for modeling non-Markovian processes with varying types of memory effects.

Key Features:

  1. Non-Markovian Dynamics:

    • Unlike Markovian processes where future behavior is independent of the past (given the present), processes in this submodule leverage historical data to influence their future states. This makes them suitable for modeling phenomena with long-range dependence or adaptive behavior.

  2. Adaptive Drift and Volatility:

    • The processes typically feature adaptive drift or volatility, which changes based on the process’s past trajectory. This allows for more complex and realistic modeling of systems where trends evolve over time, such as financial markets, physical systems, or biological processes.

  3. Memory Update Mechanism:

    • A core aspect of these processes is the memory update mechanism, which adjusts key parameters like drift or volatility based on historical increments or states. This can lead to a variety of interesting behaviors, such as mean reversion, long-term memory, or even self-learning dynamics.

  4. Wide Applications:

    • Processes with memory are particularly useful in areas where past behavior significantly impacts the future, including:

      • Financial markets: Modeling asset prices with trends influenced by historical performance.

      • Control systems: Adapting control mechanisms based on past errors or deviations.

      • Environmental science: Modeling systems with long-term dependencies, such as climate data.

      • Machine learning: Adaptive stochastic optimization methods that incorporate past performance into their future decisions.

Illustrative Example: Brownian Motion With Mean Memory

The BrownianMotionWithMeanMemory class provides a concrete example of a process with memory, where the drift term dynamically adjusts based on the process’s history. This process evolves according to the following dynamics:

dX_t = μ_t dt + (σ / μ_t) dW_t

Where:

  • μ_t is the time-varying drift that updates based on the process’s history.

  • σ is a scale parameter controlling the magnitude of random fluctuations.

  • W_t is a standard Brownian motion.

Key Characteristics:

  1. Adaptive Drift: The drift term μ_t is adjusted based on past increments, allowing the process to learn from its own behavior.

  2. Memory Mechanism: A memory update function dynamically modifies the drift using an exponential moving average of the past increments.

  3. Scale Modulation: The volatility is inversely proportional to the drift, introducing a unique coupling between the random and deterministic parts of the process.

Code Example:

import numpy as np
from typing import Any, Type

# NonItoProcess, drift_term_default, stochastic_term_default and timestep_default
# are defined elsewhere in the library.
class BrownianMotionWithMeanMemory(NonItoProcess):

    def __init__(self, name: str = "Brownian Motion With Mean Memory", process_class: Type[Any] = None,
                 drift: float = drift_term_default, scale: float = stochastic_term_default):
        super().__init__(name, process_class)
        if scale <= 0:
            raise ValueError("The scale parameter must be positive.")
        self._memory = drift  # running mean of past increments, used to modulate the noise
        self._drift = drift
        self._scale = scale
        self._dx = 0

    def custom_increment(self, X: float, timestep: float = timestep_default) -> Any:
        # drift contribution plus Gaussian noise scaled by sigma / memory
        dX = timestep * self._drift + (timestep ** 0.5) * self._scale * np.random.normal(0, 1) / self._memory
        self._dx = dX
        return dX

    def memory_update(self, step):
        # update the memory as the running mean of the increments observed so far
        step += 1
        delta1, delta2 = 1 / step, (step - 1) / step
        new_memory = self._memory * delta2 + delta1 * self._dx
        return new_memory
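
As a quick standalone check of the update rule above (not a call into the library API), the recursion in memory_update reproduces a running mean of the observed increments; here the memory is seeded with the first increment for clarity, whereas the class seeds it with the drift parameter:

# dxs are arbitrary example increments, chosen only to illustrate the recursion
dxs = [0.02, -0.01, 0.03, 0.00]
memory = dxs[0]
for step, dx in enumerate(dxs[1:], start=1):
    # same recursion as memory_update: old weight step/(step+1), new weight 1/(step+1)
    memory = memory * step / (step + 1) + dx / (step + 1)
print(memory, sum(dxs) / len(dxs))   # both are approximately 0.01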

class ergodicity.process.with_memory.BrownianMotionWithMeanMemory(name: str = 'Brownian Motion With Mean Memory', process_class: Type[Any] = None, drift: float = 0.0, scale: float = 1.0)[source]

Bases: NonItoProcess

BrownianMotionWithMeanMemory is an illustrative example of a process with memory which represents an extension of standard Brownian motion, incorporating a dynamic, self-adjusting drift based on the process’s history. This continuous-time stochastic process, denoted as (X_t)_{t≥0}, evolves according to the following dynamics:

dX_t = μ_t dt + (σ / μ_t) dW_t

where:

  • μ_t is the time-varying drift, updated based on the process’s history

  • σ is the scale parameter, controlling the magnitude of random fluctuations

  • W_t is a standard Brownian motion

Key features:

  1. Adaptive Drift: The drift term μ_t is dynamically updated, reflecting the process’s mean behavior over time. This adaptation allows the process to “learn” from its past trajectory.

  2. Memory Mechanism: The process maintains a memory of its increments, used to adjust the drift. This feature introduces a form of long-range dependence not present in standard Brownian motion.

  3. Scale Modulation: The stochastic term is modulated by the inverse of the current drift, creating a unique interplay between the deterministic and random components.

The process is initialized with a name, optional process class, initial drift, and scale parameters. It inherits the core functionality of BrownianMotion while implementing custom increment generation and memory update mechanisms.

Key methods:

  1. custom_increment: Generates the next increment of the process, incorporating the memory-adjusted drift and scale modulation.

  2. memory_update: Updates the memory (drift) based on the most recent increment, using an exponential moving average approach.

Researchers and practitioners should note several important considerations:

  1. Non-Markovian nature: The dependence on history makes this process non-Markovian, requiring specialized analysis techniques.

  2. Parameter sensitivity: The interplay between drift updates and scale modulation can lead to complex dynamics, necessitating careful parameter calibration.

  3. Computational considerations: The continuous updating of the drift parameter may increase computational overhead in simulations.

  4. Theoretical implications: The process’s unique structure may require the development of new mathematical tools for rigorous analysis.

While BrownianMotionWithMeanMemory offers a novel approach to modeling adaptive stochastic processes, its use should be carefully considered in the context of specific applications. The memory mechanism introduces a form of “learning” into the process, potentially capturing more complex behaviors than standard Brownian motion, but also introducing additional complexity in analysis and interpretation.

custom_increment(X: float, timestep: float = 0.01) Any[source]

Generate the next increment of the process, incorporating memory-adjusted drift and scale modulation.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the increment generation

Returns:

The next increment of the process

Return type:

Any

memory_update(step)[source]

Update the memory based on the most recent increment.

Parameters:

step (int) – The current step number

Returns:

The updated memory value

Return type:

float

class ergodicity.process.with_memory.GeometricBrownianMotionWithVolatilityMemory(name: str = 'Geometric Brownian Motion With Volatility Memory', process_class: Type[Any] = None, drift: float = 0.0, initial_volatility: float = 1.0, memory_length: int = 20)[source]

Bases: NonItoProcess

GeometricBrownianMotionWithVolatilityMemory is a process with memory that extends the standard Geometric Brownian Motion. It incorporates an adaptive volatility parameter based on the process’s recent history. This continuous-time stochastic process, denoted as (X_t)_{t≥0}, evolves according to the following dynamics:

dX_t = μ X_t dt + σ_t X_t dW_t

where:

  • μ is the constant drift parameter

  • σ_t is the time-varying volatility, updated based on the process’s recent historical volatility

  • W_t is a standard Brownian motion

Key features:

  1. Adaptive Volatility: The volatility σ_t is dynamically updated, reflecting the process’s recent historical volatility. This adaptation allows the process to adjust its randomness based on recent market conditions or system behavior.

  2. Memory Mechanism: The process maintains a memory of its recent logarithmic returns, used to estimate and adjust the current volatility. This feature introduces a form of short-term dependence not present in the standard Geometric Brownian Motion.

  3. Constant Drift: Unlike the volatility, the drift μ remains constant, providing a stable long-term growth rate.

The process is initialized with a name, optional process class, drift, initial volatility, and memory length parameters. It inherits the core functionality of NonItoProcess while implementing custom increment generation and memory update mechanisms.

Key methods:

  1. custom_increment: Generates the next increment of the process, incorporating the memory-adjusted volatility.

  2. memory_update: Updates the memory (recent returns and current volatility estimate) based on the most recent increment, using an exponential moving average approach for volatility estimation.

Researchers and practitioners should note several important considerations:

  1. Non-Markovian nature: The dependence on recent history makes this process non-Markovian, requiring specialized analysis techniques.

  2. Parameter sensitivity: The adaptive volatility can lead to complex dynamics, potentially exhibiting volatility clustering similar to that observed in financial markets.

  3. Computational considerations: The continuous updating of the volatility parameter and maintenance of recent returns may increase computational overhead in simulations.

  4. Theoretical implications: The process’s unique structure may require the development of new mathematical tools for rigorous analysis, particularly in understanding the long-term behavior and moments of the process.

While GeometricBrownianMotionWithVolatilityMemory offers a novel approach to modeling adaptive volatility in growth processes, its use should be carefully considered in the context of specific applications. The memory mechanism introduces a form of “market feedback” into the process, potentially capturing more realistic behaviors than the standard Geometric Brownian Motion, but also introducing additional complexity in analysis and interpretation.
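
A self-contained numerical sketch of this idea (an illustration of the described mechanism, not the library’s implementation; the EMA weight lam and the use of squared log-returns are assumptions made for the example):

import numpy as np

mu, dt, lam = 0.05, 0.01, 0.1          # lam: EMA weight on the newest observation
x = 1.0
var_est = 0.2 ** 2 * dt                # EMA estimate of the per-step return variance
for _ in range(1000):
    sigma_t = np.sqrt(var_est / dt)    # volatility currently implied by the memory
    x_new = x * np.exp((mu - 0.5 * sigma_t ** 2) * dt + sigma_t * np.sqrt(dt) * np.random.normal())
    r = np.log(x_new / x)              # most recent log-return
    var_est = (1 - lam) * var_est + lam * r ** 2   # memory update of the variance estimate
    x = x_new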

custom_increment(X: float, timestep: float = 0.01) Any[source]

Generate the next increment of the process, incorporating memory-adjusted volatility.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the increment generation

Returns:

The next increment of the process

Return type:

Any

memory_update(step)[source]

Update the memory based on the most recent increment, adjusting the volatility estimate.

Parameters:

step (int) – The current step number

Returns:

The updated memory value

Return type:

dict

class ergodicity.process.with_memory.LevyStableProcessWithAdaptiveSkewness(name: str = 'Levy Stable Process with Adaptive Skewness', process_class: Type[Any] = None, alpha: float = 2, beta: float = 0, scale: float = 0.7071067811865476, loc: float = 0, memory_length: int = 50, **kwargs)[source]

Bases: LevyStableProcess

LevyStableProcessWithAdaptiveSkewness extends the LevyStableProcess by incorporating a memory mechanism that adjusts the skewness parameter (beta) based on recent process behavior. This adaptation allows the process to exhibit time-varying asymmetry while maintaining the heavy-tailed characteristics of Lévy stable processes.

The process evolves according to the following dynamics:

dX_t = μ dt + σ^(1/α) dL_t^(α,β_t)

where:

  • μ is the location parameter (drift)

  • σ is the scale parameter

  • α is the stability parameter (0 < α ≤ 2)

  • β_t is the time-varying skewness parameter (-1 ≤ β_t ≤ 1), updated based on recent process increments

  • L_t^(α,β_t) is a standard α-stable Lévy process with time-varying skewness

The memory mechanism adjusts β_t based on an exponential moving average of recent increments, allowing the process to adapt its asymmetry to recent trends in the data.

Key features:

  1. Adaptive Skewness: The skewness parameter β_t is dynamically updated, reflecting recent trends in the process.

  2. Memory Mechanism: The process maintains a memory of recent increments, used to adjust the skewness parameter.

  3. Lévy Stable Properties: Inherits the heavy-tailed nature of Lévy stable processes, while incorporating adaptive asymmetry.

This process can be particularly useful in modeling systems with varying levels of asymmetry, such as financial markets during bull and bear periods, or physical systems with changing directional biases in their random fluctuations.
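
A self-contained sketch of the adaptive-skewness idea using scipy.stats.levy_stable (illustrative only: the sign-based EMA update and the dt**(1/alpha) scaling of per-step increments are assumptions made for the example, not the library’s exact rule):

import numpy as np
from scipy.stats import levy_stable

alpha, beta_t, scale, dt, lam = 1.7, 0.0, 0.1, 0.01, 0.05
x = 0.0
for _ in range(1000):
    # alpha-stable increment drawn with the current, memory-adjusted skewness
    dx = levy_stable.rvs(alpha, beta_t, loc=0.0, scale=scale * dt ** (1 / alpha))
    x += dx
    # push beta_t toward the sign of the latest increment, keeping it in [-1, 1]
    beta_t = float(np.clip((1 - lam) * beta_t + lam * np.sign(dx), -1.0, 1.0))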

custom_increment(X: float, timestep: float = 0.01) Any[source]

Generate the next increment of the process, using the memory-adjusted skewness parameter.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

float

differential() str[source]

Express the Levy process with adaptive skewness as a differential equation.

Returns:

The differential equation of the process

Return type:

str

memory_update(step)[source]

Update the memory based on the most recent increment, adjusting the skewness parameter.

Parameters:

step (int) – The current step number

Returns:

The updated memory value

Return type:

dict

class ergodicity.process.with_memory.LevyStableProcessWithMemory(name: str = 'Levy Stable Process with Memory', process_class: Type[Any] = None, alpha: float = 2, beta: float = 0, scale: float = 0.7071067811865476, loc: float = 0, memory_length: int = 20, **kwargs)[source]

Bases: LevyStableProcess

LevyStableProcessWithMemory extends the LevyStableProcess by incorporating a memory mechanism that adjusts the scale parameter based on recent process behavior. This adaptation allows the process to exhibit time-varying volatility while maintaining the heavy-tailed characteristics of Lévy stable processes.

The process evolves according to the following dynamics:

dX_t = μ dt + σ_t^(1/α) dL_t^(α,β)

where:

  • μ is the location parameter (drift)

  • σ_t is the time-varying scale parameter, updated based on recent process increments

  • α is the stability parameter (0 < α ≤ 2)

  • β is the skewness parameter (-1 ≤ β ≤ 1)

  • L_t^(α,β) is a standard α-stable Lévy process

The memory mechanism adjusts σ_t based on an exponential moving average of recent absolute increments, allowing the process to adapt its scale to recent volatility levels.

Key features:

  1. Adaptive Scale: The scale parameter σ_t is dynamically updated, reflecting recent volatility in the process.

  2. Memory Mechanism: The process maintains a memory of recent absolute increments, used to adjust the scale parameter.

  3. Lévy Stable Properties: Inherits the heavy-tailed and potentially skewed nature of Lévy stable processes, while incorporating adaptive behavior.

This process can be particularly useful in modeling systems with varying levels of volatility or risk, such as financial markets during periods of calm and turbulence, or physical systems with regime changes in their random fluctuations.
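
A companion sketch for the adaptive-scale case (again illustrative rather than the library’s exact rule; the per-unit-time rescaling of |dx| is an assumption made so the EMA stays on the same scale as σ_t):

import numpy as np
from scipy.stats import levy_stable

alpha, beta, sigma_t, dt, lam = 1.7, 0.0, 0.1, 0.01, 0.05
x = 0.0
for _ in range(1000):
    dx = levy_stable.rvs(alpha, beta, loc=0.0, scale=sigma_t * dt ** (1 / alpha))
    x += dx
    # larger recent moves raise the scale; quieter periods let it decay
    sigma_t = (1 - lam) * sigma_t + lam * abs(dx) / dt ** (1 / alpha)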

custom_increment(X: float, timestep: float = 0.01) Any[source]

Generate the next increment of the process, using the memory-adjusted scale parameter.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the simulation

Returns:

The increment value

Return type:

float

memory_update(step)[source]

Update the memory based on the most recent increment, adjusting the scale parameter.

Parameters:

step (int) – The current step number

Returns:

The updated memory value

Return type:

dict

class ergodicity.process.with_memory.MultivariateGBMWithAdaptiveCorrelation(name: str = 'Multivariate GBM with Adaptive Correlation', drift: List[float] = array([0., 0., 0.]), scale: List[List[float]] = array([[1., 0.6, 0.3], [0.6, 1., 0.6], [0.3, 0.6, 1.]]), memory_length: int = 50)[source]

Bases: MultivariateGeometricBrownianMotion

MultivariateGBMWithAdaptiveCorrelation extends the MultivariateGeometricBrownianMotion by incorporating a memory mechanism that adjusts the correlation structure (and thus the scale matrix) based on recent process behavior. This adaptation allows the process to exhibit time-varying correlations between components while maintaining the core properties of a multivariate geometric Brownian motion.

The process evolves according to the following dynamics:

dS_i(t) = μ_i S_i(t) dt + Σ_ij(t) S_i(t) dW_j(t) for i = 1, …, n

where:

  • μ_i remains constant as in the parent class

  • Σ_ij(t) is the time-varying scale matrix, reflecting changing correlations

  • W_j(t) remains as in the parent class

The memory mechanism adjusts Σ(t) based on an exponential moving average of recent cross-products of returns, allowing the process to adapt its correlation structure to recent patterns in the data.

Key features:

  1. Adaptive Correlation: The scale matrix Σ(t) is dynamically updated, reflecting recent correlation patterns between components of the process.

  2. Memory Mechanism: The process maintains a memory of recent returns for each component, used to adjust the scale matrix.

  3. Preserved Drift: The drift vector μ remains constant, preserving the average growth rates while allowing for adaptive correlations.
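
A self-contained sketch of the adaptive-correlation idea (illustrative only; the EMA weight, the per-unit-time rescaling, and the omission of the Itô drift correction are simplifications made for the example):

import numpy as np

mu = np.array([0.05, 0.03, 0.02])
cov = 0.04 * np.array([[1.0, 0.6, 0.3],
                       [0.6, 1.0, 0.6],
                       [0.3, 0.6, 1.0]])   # current covariance estimate of log-returns
S = np.ones(3)
dt, lam = 0.01, 0.02
for _ in range(1000):
    L = np.linalg.cholesky(cov * dt)             # correlated Gaussian shocks for this step
    r = mu * dt + L @ np.random.normal(size=3)   # vector of log-returns for this step
    S = S * np.exp(r)
    # memory update: EMA of the outer product of demeaned returns, rescaled per unit time
    cov = (1 - lam) * cov + lam * np.outer(r - mu * dt, r - mu * dt) / dt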

custom_increment(X: List[float], timestep: float = 0.01) Any[source]

Calculate the custom increment for the process.

Parameters:
  • X (List[float]) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

List[float]

memory_update(step)[source]

Update the memory of the process. Used in the construction of processes with memory.

Parameters:

step (int) – The current step

Returns:

The updated memory

Return type:

Any

class ergodicity.process.with_memory.MultivariateGBMWithAdaptiveDrift(name: str = 'Multivariate GBM with Adaptive Drift', drift: List[float] = array([0., 0., 0.]), scale: List[List[float]] = array([[1., 0.6, 0.3], [0.6, 1., 0.6], [0.3, 0.6, 1.]]), memory_length: int = 50)[source]

Bases: MultivariateGeometricBrownianMotion

MultivariateGBMWithAdaptiveDrift extends the MultivariateGeometricBrownianMotion by incorporating a memory mechanism that adjusts the drift vector based on recent process behavior. This adaptation allows the process to exhibit time-varying growth rates while maintaining the core properties of a multivariate geometric Brownian motion.

The process evolves according to the following dynamics:

dS_i(t) = μ_i(t) S_i(t) dt + Σ_ij S_i(t) dW_j(t) for i = 1, …, n

where:

  • μ_i(t) is the time-varying drift for the i-th component

  • Σ_ij and W_j(t) remain as in the parent class

The memory mechanism adjusts μ(t) based on an exponential moving average of recent returns, allowing the process to adapt its growth rates to recent trends in the data.

Key features:

  1. Adaptive Drift: The drift vector μ(t) is dynamically updated, reflecting recent growth patterns in each component of the process.

  2. Memory Mechanism: The process maintains a memory of recent returns for each component, used to adjust the drift vector.

  3. Preserved Correlation Structure: The scale matrix Σ remains constant, preserving the correlation structure between components while allowing for adaptive growth rates.
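
A matching sketch for the adaptive-drift case, with the scale matrix held fixed (illustrative only; the EMA weight and the per-unit-time rescaling are assumptions made for the example):

import numpy as np

mu_t = np.zeros(3)                                # adaptive growth-rate estimate
cov = 0.04 * np.array([[1.0, 0.6, 0.3],
                       [0.6, 1.0, 0.6],
                       [0.3, 0.6, 1.0]])
L = np.linalg.cholesky(cov)
S = np.ones(3)
dt, lam = 0.01, 0.02
for _ in range(1000):
    r = mu_t * dt + np.sqrt(dt) * (L @ np.random.normal(size=3))   # log-returns for this step
    S = S * np.exp(r)
    mu_t = (1 - lam) * mu_t + lam * (r / dt)      # memory update of the drift vector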

custom_increment(X: List[float], timestep: float = 0.01) Any[source]

Calculate the custom increment for the process.

Parameters:
  • X (List[float]) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

List[float]

memory_update(step)[source]

Update the memory of the process. Used in the construction of processes with memory.

Parameters:

step (int) – The current step

Returns:

The updated memory

Return type:

Any

class ergodicity.process.with_memory.MultivariateGeometricLevyWithAdaptiveAlpha(name: str = 'Multivariate Geometric Levy with Adaptive Alpha', alpha: float = 1.5, beta: float = 0, scale: float = 1, loc: ndarray = None, correlation_matrix: ndarray = None, pseudovariances: ndarray = None, memory_length: int = 50)[source]

Bases: MultivariateGeometricLevy

MultivariateGeometricLevyWithAdaptiveAlpha extends the MultivariateGeometricLevy by incorporating a memory mechanism that adjusts the stability parameter (alpha) based on recent process behavior. This adaptation allows the process to exhibit time-varying tail behavior while maintaining the core properties of a multivariate geometric Lévy process.

The process evolves similarly to its parent class, but with a time-varying alpha:

S_i(t) = S_i(0) * exp(X_i(t)) for i = 1, …, n

where X(t) = (X_1(t), …, X_n(t)) is a multivariate Lévy stable process with time-varying α(t).

Key features:

  1. Adaptive Stability: The stability parameter α(t) is dynamically updated, reflecting recent extreme value behavior in the process.

  2. Memory Mechanism: The process maintains a memory of recent increments, used to adjust the stability parameter.

  3. Preserved Correlation Structure: Other parameters (β, scale, correlation) remain constant, preserving the overall structure while allowing for adaptive tail behavior.

custom_increment(X: ndarray, timestep: float = 1.0) ndarray[source]

Calculate the custom increment for the process.

Parameters:
  • X (np.ndarray) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

np.ndarray

memory_update(step)[source]

Update the memory of the process. Used in the construction of processes with memory.

Parameters:

step (int) – The current step

Returns:

The updated memory

Return type:

Any

class ergodicity.process.with_memory.MultivariateGeometricLevyWithAdaptiveBeta(name: str = 'Multivariate Geometric Levy with Adaptive Beta', alpha: float = 1.5, beta: float = 0, scale: float = 1, loc: ndarray = None, correlation_matrix: ndarray = None, pseudovariances: ndarray = None, memory_length: int = 50)[source]

Bases: MultivariateGeometricLevy

MultivariateGeometricLevyWithAdaptiveBeta extends the MultivariateGeometricLevy by incorporating a memory mechanism that adjusts the skewness parameter (beta) based on recent process behavior. This adaptation allows the process to exhibit time-varying asymmetry while maintaining the core properties of a multivariate geometric Lévy process.

The process evolves similarly to its parent class, but with a time-varying beta:

S_i(t) = S_i(0) * exp(X_i(t)) for i = 1, …, n

where X(t) = (X_1(t), …, X_n(t)) is a multivariate Lévy stable process with time-varying skewness parameter β(t).

Key features:

  1. Adaptive Skewness: The skewness parameter β(t) is dynamically updated, reflecting recent asymmetry in the process increments.

  2. Memory Mechanism: The process maintains a memory of recent increments, used to adjust the skewness parameter.

  3. Preserved Stability and Scale: Other parameters (α, σ, correlation) remain constant, preserving the overall stability and scale while allowing for adaptive asymmetry.

custom_increment(X: ndarray, timestep: float = 1.0) ndarray[source]

Calculate the custom increment for the process.

Parameters:
  • X (np.ndarray) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

np.ndarray

memory_update(step)[source]

Update the memory of the process. Used in the construction of processes with memory.

Parameters:

step (int) – The current step

Returns:

The updated memory

Return type:

Any

class ergodicity.process.with_memory.MultivariateGeometricLevyWithAdaptiveCorrelation(name: str = 'Multivariate Geometric Levy with Adaptive Correlation', alpha: float = 1.5, beta: float = 0, scale: float = 1, loc: ndarray = None, correlation_matrix: ndarray = None, pseudovariances: ndarray = None, memory_length: int = 50)[source]

Bases: MultivariateGeometricLevy

MultivariateGeometricLevyWithAdaptiveCorrelation extends the MultivariateGeometricLevy by incorporating a memory mechanism that adjusts the correlation structure based on recent process behavior. This adaptation allows the process to exhibit time-varying dependencies between components while maintaining the core properties of a multivariate geometric Lévy process.

The process evolves similarly to its parent class, but with a time-varying correlation structure:

S_i(t) = S_i(0) * exp(X_i(t)) for i = 1, …, n

where X(t) = (X_1(t), …, X_n(t)) is a multivariate Lévy stable process with time-varying correlation matrix R(t).

Key features:

  1. Adaptive Correlation: The correlation matrix R(t) is dynamically updated, reflecting recent dependency patterns between components of the process.

  2. Memory Mechanism: The process maintains a memory of recent increments, used to adjust the correlation structure.

  3. Preserved Marginal Behavior: Other parameters (α, β, scale) remain constant, preserving the marginal distributions while allowing for adaptive dependencies.

custom_increment(X: ndarray, timestep: float = 1.0) ndarray[source]

Calculate the custom increment for the process.

Parameters:
  • X (np.ndarray) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

np.ndarray

memory_update(step)[source]

Update the memory of the process. Used in the construction of processes with memory.

Parameters:

step (int) – The current step

Returns:

The updated memory

Return type:

Any

class ergodicity.process.with_memory.MultivariateGeometricLevyWithAdaptiveScale(name: str = 'Multivariate Geometric Levy with Adaptive Scale', alpha: float = 1.5, beta: float = 0, scale: float = 1, loc: ndarray = None, correlation_matrix: ndarray = None, pseudovariances: ndarray = None, memory_length: int = 50)[source]

Bases: MultivariateGeometricLevy

MultivariateGeometricLevyWithAdaptiveScale extends the MultivariateGeometricLevy by incorporating a memory mechanism that adjusts the scale parameter based on recent process volatility. This adaptation allows the process to exhibit time-varying volatility while maintaining the core properties of a multivariate geometric Lévy process.

The process evolves similarly to its parent class, but with a time-varying scale:

S_i(t) = S_i(0) * exp(X_i(t)) for i = 1, …, n

where X(t) = (X_1(t), …, X_n(t)) is a multivariate Lévy stable process with time-varying scale parameter σ(t).

Key features:

  1. Adaptive Scale: The scale parameter σ(t) is dynamically updated, reflecting recent volatility in the process.

  2. Memory Mechanism: The process maintains a memory of recent absolute increments, used to adjust the scale parameter.

  3. Preserved Distribution Shape: Other parameters (α, β, correlation) remain constant, preserving the overall shape of the distribution while allowing for adaptive volatility.

custom_increment(X: ndarray, timestep: float = 1.0) ndarray[source]

Calculate the custom increment for the process.

Parameters:
  • X (np.ndarray) – Current values of the process

  • timestep (float) – Time step for the simulation

Returns:

Increment for the process

Return type:

np.ndarray

memory_update(step)[source]

Update the memory of the process. Used in the construction of processes with memory.

Parameters:

step (int) – The current step

Returns:

The updated memory

Return type:

Any

class ergodicity.process.with_memory.OrnsteinUhlenbeckWithAdaptiveRate(name: str = 'Ornstein-Uhlenbeck With Adaptive Rate', process_class: Type[Any] = None, mean: float = 0.0, initial_rate: float = 0.1, volatility: float = 1.0)[source]

Bases: NonItoProcess

OrnsteinUhlenbeckWithAdaptiveRate is a process with memory that extends the standard Ornstein-Uhlenbeck process. It incorporates an adaptive mean-reversion rate based on the process’s history. This continuous-time stochastic process, denoted as (X_t)_{t≥0}, evolves according to the following dynamics:

dX_t = θ_t (μ - X_t) dt + σ dW_t

where:

  • θ_t is the time-varying mean-reversion rate, updated based on the process’s history

  • μ is the long-term mean level

  • σ is the volatility parameter

  • W_t is a standard Brownian motion

Key features:

  1. Adaptive Mean-Reversion: The mean-reversion rate θ_t is dynamically updated, reflecting the process’s tendency to return to its mean over time. This adaptation allows the process to “learn” from its past trajectory and adjust its mean-reversion speed.

  2. Memory Mechanism: The process maintains a memory of its past states, used to adjust the mean-reversion rate. This feature introduces a form of long-range dependence not present in the standard Ornstein-Uhlenbeck process.

  3. Constant Volatility: Unlike the mean-reversion rate, the volatility σ remains constant, providing a stable measure of random fluctuations.

The process is initialized with a name, optional process class, long-term mean, initial mean-reversion rate, and volatility parameters. It inherits the core functionality of NonItoProcess while implementing custom increment generation and memory update mechanisms.

Key methods:

  1. custom_increment: Generates the next increment of the process, incorporating the memory-adjusted mean-reversion rate.

  2. memory_update: Updates the memory (mean-reversion rate) based on the most recent state and increment, using an exponential moving average approach.

Researchers and practitioners should note several important considerations:

  1. Non-Markovian nature: The dependence on history makes this process non-Markovian, requiring specialized analysis techniques.

  2. Parameter sensitivity: The adaptive mean-reversion rate can lead to complex dynamics, necessitating careful parameter calibration.

  3. Computational considerations: The continuous updating of the mean-reversion rate parameter may increase computational overhead in simulations.

  4. Theoretical implications: The process’s unique structure may require the development of new mathematical tools for rigorous analysis, particularly in understanding the long-term behavior and stationary distribution (if it exists).

While OrnsteinUhlenbeckWithAdaptiveRate offers a novel approach to modeling adaptive mean-reverting processes, its use should be carefully considered in the context of specific applications. The memory mechanism introduces a form of “learning” into the process, potentially capturing more complex behaviors than the standard Ornstein-Uhlenbeck process, but also introducing additional complexity in analysis and interpretation.
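
A self-contained sketch of the adaptive-rate idea (illustrative only: the documentation does not specify the exact update rule, so the distance-based adjustment below is an assumption made for the example):

import numpy as np

mu, sigma, theta_0, dt, lam = 0.0, 1.0, 0.1, 0.01, 0.05
x, dist_ema = 1.0, 0.0
for _ in range(1000):
    theta_t = theta_0 * (1.0 + dist_ema)          # revert faster after persistent excursions
    dx = theta_t * (mu - x) * dt + sigma * np.sqrt(dt) * np.random.normal()
    x += dx
    dist_ema = (1 - lam) * dist_ema + lam * abs(x - mu)   # memory of recent distance from the mean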

custom_increment(X: float, timestep: float = 0.01) Any[source]

Generate the next increment of the process, incorporating memory-adjusted mean-reversion rate.

Parameters:
  • X (float) – The current value of the process

  • timestep (float) – The time step for the increment generation

Returns:

The next increment of the process

Return type:

Any

memory_update(step)[source]

Update the memory based on the most recent state and increment.

Parameters:

step (int) – The current step number

Returns:

The updated memory value

Return type:

dict

Module contents

process Module

The Process Module serves as the core component of this library, encapsulating a wide variety of stochastic processes, both classic and advanced, for modeling random dynamics in continuous time. The module is highly extensible and organized into submodules, each focusing on specific classes of processes or characteristics, such as memory effects, increments, and custom processes. Whether you’re modeling financial markets, physical systems, or biological phenomena, this module provides a robust framework for simulating and analyzing processes with diverse properties.

Key Submodules:

  1. Basic Submodule:

    • Contains fundamental stochastic processes like Wiener Process (Brownian motion), Levy Stable Process, and Fractional Brownian Motion.

    • These processes form the building blocks for more advanced models and serve as templates for defining custom processes.

  2. Increments Submodule:

    • Provides functions and classes for generating increments for various stochastic processes.

    • Increments are essential for simulating the paths of processes, and this submodule ensures they have variance 1 and are consistent with their process’s timestep.

  3. With Memory Submodule:

    • Focuses on non-Markovian processes, where future behavior depends on past increments or states.

    • Includes processes like BrownianMotionWithMeanMemory, which adapt their drift or volatility based on historical information, capturing long-term dependencies or “memory” effects.

  4. Multiplicative Submodule:

    • Contains processes that exhibit multiplicative dynamics, where changes are proportional to the current value.

    • Includes processes like Geometric Brownian Motion, widely used in financial modeling, and Geometric Levy Process, which generalizes this to heavy-tailed distributions.

  5. Constructor Submodule:

    • Allows users to create custom processes interactively, defining parameters, drift, and stochastic terms.

    • It provides a mechanism for dynamically constructing stochastic processes based on user input, with full flexibility in defining how the process evolves.

  6. Custom Classes Submodule:

    • Contains specialized stochastic processes, such as the ConstantElasticityOfVarianceProcess (CEV), which models processes with state-dependent volatility.

    • These custom classes offer advanced modeling capabilities for scenarios where traditional processes are insufficient.

Key Features of the Process Module:

  • Extensibility: The module is designed to be extended with new processes, either through direct inheritance from the base classes or by using the Constructor submodule for dynamic process creation.

  • Simulation and Analysis: Every process in the module includes methods for simulating sample paths, generating increments, and calculating statistical properties like variance, mean, or higher moments.

  • Real-World Applications: The processes implemented here can model diverse phenomena in fields such as:

    • Finance: Asset prices, interest rates, and volatility models.

    • Physics: Diffusion processes and particle motion.

    • Biology: Population dynamics and evolutionary processes.

    • Machine Learning: Stochastic optimization algorithms.

Usage:

This module is intended for users who need to simulate and analyze stochastic processes in a flexible and extensible manner. The core functionality is built around the following concepts:

  • Increment Calculation: Each process has a method to calculate its increment over a given timestep.

  • Customizability: Many processes allow users to specify custom drift, volatility, and other parameters.

  • Dynamic Process Creation: Users can define custom processes on the fly, making this module suitable for both researchers and practitioners who need tailored stochastic models.

Future Extensions:

The Process Module is continuously evolving, with plans to incorporate:

  • More memory-based processes: To simulate processes with complex dependencies on past behavior.

  • Higher-dimensional processes: Such as multivariate stochastic processes with intricate correlation structures.

  • Hybrid processes: Combining different types of stochastic dynamics, such as additive and multiplicative effects, within the same model.

The Process Module is a versatile and powerful tool for stochastic modeling, designed to meet the needs of both academic researchers and industry professionals.