Run a Markov chain.
Let's define a "flip" (at index $i$) to be the event that $X_{i-1}$ and $X_{i}$ have opposite signs and both exceed $1.5$ in magnitude. As we scan through the realization of $(X_i)$ looking for flips, we can exploit the symmetry of the standard Normal distribution to describe the process with just four states:
Start, before $X_1$ is observed.
Zero, where $-1.5 \le X_{i-1} \le 1.5$.
One, where $|X_{i-1}| \gt 1.5$.
Flipped, when a flip occurs at $i$.
Start transitions into the (mixed) state $$\mu = (1-2p, 2p, 0)$$
(corresponding to the chances of being in states (Zero, One, Flipped)) where $$p = \Pr(X_1 \lt -1.5) = \Pr(X_1 \gt 1.5) \approx 0.0668072.$$ Because Start is never seen again, let's not bother to track it any further.
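In R, $p$ is a single call to the Normal CDF, a quick check of the value quoted above:
p <- pnorm(-1.5)    # Pr(X_1 < -1.5) = Pr(X_1 > 1.5) by symmetry
p                   # 0.0668072...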
Zero transitions into One with probability $2p$ (when $|X_i|\gt 1.5$) and otherwise stays at Zero.
One transitions into Flipped with probability $p$: this occurs when $|X_i| \gt 1.5$ and $X_i$ has the opposite sign of $X_{i-1}$. It also transitions back to One with probability $p$ when $|X_i| \gt 1.5$ and $X_i$ has the same sign as $X_{i-1}$. Otherwise it transitions to Zero.
Flipped is an absorbing state: once there, nothing changes regardless of the value of $X_i$.
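These three rules can be written out directly as a small R sketch (step_state is a hypothetical helper, not used in the computations below; notice it has to carry the sign of $X_{i-1}$ explicitly, which is exactly the bookkeeping the symmetry argument lets the four-state chain avoid):
step_state <- function(state, sign.prev, x) {
  if (state == "Flipped") return(list(state = "Flipped", sign = sign.prev))  # absorbing
  if (abs(x) <= 1.5)      return(list(state = "Zero",    sign = 0))          # |X_i| small: back to Zero
  if (state == "One" && sign(x) != sign.prev)
    return(list(state = "Flipped", sign = sign(x)))                          # large, opposite sign: a flip
  list(state = "One", sign = sign(x))                                        # large |X_i|: enter or stay in One
}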
The transition matrix for (Zero, One, Flipped), ignoring the transient Start, is therefore
$$\mathbb{P} = \left(
\begin{array}{ccc}
1-2 p & 2 p & 0 \\
1-2 p & p & p \\
0 & 0 & 1 \\
\end{array}
\right)$$
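For the numerical checks below it helps to have $\mathbb{P}$ as an R object (a sketch reusing p <- pnorm(-1.5) from above); the row sums confirm it is a proper transition matrix:
P <- matrix(c(1 - 2*p, 2*p, 0,
              1 - 2*p,   p, p,
                    0,   0, 1),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("Zero","One","Flipped"), c("Zero","One","Flipped")))
rowSums(P)    # all equal to 1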
After leaving the Start state (and landing in the mixed state $\mu$), $20-1 = 19$ transitions will be made in the scan for a flip. The desired probability is therefore the third entry (corresponding to Flipped) in $$\mu \cdot \mathbb{P}^{20-1} \approx 0.149045.$$
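Carried out numerically in R (a sketch reusing p and P from above; matpow is an ad hoc helper for repeated multiplication, not a base R function):
mu <- c(1 - 2*p, 2*p, 0)                        # distribution over (Zero, One, Flipped) after X_1
matpow <- function(M, k) Reduce(`%*%`, replicate(k, M, simplify = FALSE))
(mu %*% matpow(P, 19))[3]                       # approximately 0.149045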
Computational Details
We don't need to do $18$ matrix multiplications to obtain $\mathbb{P}^{19}$. Instead, after diagonalizing
$$\mathbb{P} = \mathbb{Q}^{-1} \mathbb{E} \mathbb{Q},$$
the answer for any exponent $n$ (even huge ones) can be computed via just one matrix multiplication as
$$\mu\cdot\mathbb{P}^n = \left(\mu\cdot\mathbb{Q}^{-1}\right) \mathbb{E}^n \mathbb{Q}$$
with
$$\mu \cdot \mathbb{Q}^{-1} = \left(1,\frac{-4 p^2+p+1-\sqrt{(2-7 p) p+1}}{2 \sqrt{(2-7 p) p+1}},-\frac{-4 p^2+p+1+\sqrt{(2-7 p) p+1}}{2 \sqrt{(2-7 p) p+1}}\right),$$
$$\mathbb{Q} = \left(
\begin{array}{ccc}
0 & 0 & 1 \\
\frac{\left(1+p+\sqrt{-7 p^2+2 p+1}\right) \left(3 p-1+\sqrt{-7 p^2+2 p+1}\right)}{8 p^2}
& -\frac{1+p+\sqrt{-7 p^2+2 p+1}}{2 p} & 1 \\
\frac{\left(1+p-\sqrt{-7 p^2+2 p+1}\right) \left(3 p-1-\sqrt{-7 p^2+2 p+1}\right)}{8 p^2}
& -\frac{1+p-\sqrt{-7 p^2+2 p+1}}{2 p} & 1 \\
\end{array}
\right)
$$
and
$$\mathbb{E}^n = \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \left(\frac{1}{2} \left(1-p-\sqrt{-7 p^2+2 p+1}\right)\right)^n & 0 \\
0 & 0 & \left(\frac{1}{2} \left(1-p+\sqrt{-7 p^2+2 p+1}\right)\right)^n \\
\end{array}
\right)$$
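The same shortcut is available numerically through R's eigen, whose matrix of right eigenvectors plays the role of $\mathbb{Q}^{-1}$ (up to the ordering and scaling of the eigenvectors); a sketch, reusing p, P, and mu from above:
e <- eigen(P)                                   # eigenvalues 1 and (1 - p +/- sqrt(1 + 2p - 7p^2))/2, all real here
V <- e$vectors                                  # P == V %*% diag(e$values) %*% solve(V)
(mu %*% V) %*% diag(e$values^19) %*% solve(V)   # third entry: approximately 0.149045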
A million-iteration simulation (using R) supports this result. Its output,
Mean LCL UCL
0.1488040 0.1477363 0.1498717
estimates the answer as $0.1488$ with a confidence interval $[0.1477, 0.1499]$ that includes $0.149045$.
n <- 20 # Length of the sequence
n.iter <- 1e6 # Length of the simulation
set.seed(17) # Start at a reproducible point
x <- rnorm(n.iter*n) # The X_i
y <- matrix(sign(x) * (abs(x) > 3/2), n, n.iter) # Sign of X_i when |X_i| > 1.5, else 0
flips <- colSums(y[-1, ] * y[-n, ] == -1) # Number of flips in each sequence
x.bar <- mean(flips >= 1) # Proportion of sequences with at least one flip
s <- sqrt(x.bar * (1-x.bar) / n.iter) # Standard error of the mean
(c(Mean=x.bar, x.bar + c(LCL=-3,UCL=3) * s)) # The results