Taking a Taylor series $f(\mathbf{x}) = f(\mathbf{a}) + (\mathbf{x} - \mathbf{a})^\mathsf{T} D f(\mathbf{a}) + \frac{1}{2!} (\mathbf{x} - \mathbf{a})^\mathsf{T} D^2 f(\mathbf{a}) (\mathbf{x} - \mathbf{a}) + \cdots$ of the Kullback-Leibler divergence in the variable $\widehat{\theta}$ around $\theta$ gives
$D_\text{KL}(\theta\parallel\widehat{\theta})\approx D_\text{KL}(\theta\parallel \widehat{\theta})|_{\widehat{\theta}=\theta}+(\widehat{\theta}-\theta)^\mathsf{T}\frac{\partial D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}+\frac{1}{2}(\widehat{\theta}-\theta)^\mathsf{T}\frac{\partial^2 D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}(\widehat{\theta}-\theta)$
and we can see that the first two terms vanish and that the Hessian in the last term is the Fisher information matrix:
$(a)\quad D_\text{KL}(\theta\parallel \widehat{\theta})|_{\widehat{\theta}=\theta}=\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta}=\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \theta)} dx=\int p(x; \theta)\ln(1) dx=0$
$(b)\quad \frac{\partial D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}= \frac{\partial}{\partial\widehat{\theta}}\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta} = \frac{\partial}{\partial\widehat{\theta}}\int p(x; \theta)(\ln p(x;\theta) - \ln p(x; \widehat{\theta})) dx|_{\widehat{\theta}=\theta}=-\int p(x; \theta)\frac{\frac{\partial}{\partial\widehat{\theta}} p(x; \widehat{\theta})}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta}=-\int \frac{\partial}{\partial\widehat{\theta}} p(x; \widehat{\theta})dx|_{\widehat{\theta}=\theta}=-\frac{\partial}{\partial\widehat{\theta}} \int p(x; \widehat{\theta})dx|_{\widehat{\theta}=\theta}=-\frac{\partial}{\partial\theta} \int p(x; \theta)dx=-\frac{\partial}{\partial\theta} 1=0$
$(c)\quad\frac{\partial^2 D_\text{KL}(\theta\parallel \widehat{\theta})}{\partial\widehat{\theta}\partial\widehat{\theta}}|_{\widehat{\theta}=\theta}=\frac{\partial^2}{\partial\widehat{\theta}\partial\widehat{\theta}}\int p(x; \theta)\ln\frac{p(x;\theta)}{p(x; \widehat{\theta})} dx|_{\widehat{\theta}=\theta}=\frac{\partial^2}{\partial\widehat{\theta}\partial\widehat{\theta}}\int p(x; \theta)(\ln p(x;\theta)-\ln p(x; \widehat{\theta})) dx|_{\widehat{\theta}=\theta}=-\int p(x; \theta)\frac{\partial^2}{\partial\widehat{\theta}\partial\widehat{\theta}}\ln p(x; \widehat{\theta}) dx|_{\widehat{\theta}=\theta}=-\int p(x; \theta)\frac{\partial^2}{\partial\theta\partial\theta}\ln p(x; \theta) dx={\cal I(\theta)}$
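As a sanity check, (b) and (c) can be verified numerically with finite differences for a one-parameter family. The sketch below uses the Bernoulli distribution (my choice for illustration, not from the derivation above), whose Fisher information is the standard $\mathcal{I}(p)=1/\big(p(1-p)\big)$; the point `p = 0.3` and the step `h` are arbitrary.

```python
import math

def kl_bernoulli(p, q):
    """KL(Bernoulli(p) || Bernoulli(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p = 0.3    # the "true" parameter theta (arbitrary choice)
h = 1e-4   # finite-difference step

# (b): first derivative of KL w.r.t. its second argument, at q = p -> 0
d1 = (kl_bernoulli(p, p + h) - kl_bernoulli(p, p - h)) / (2 * h)

# (c): second derivative at q = p -> Fisher information I(p) = 1/(p(1-p))
d2 = (kl_bernoulli(p, p + h) - 2 * kl_bernoulli(p, p)
      + kl_bernoulli(p, p - h)) / h ** 2
fisher = 1 / (p * (1 - p))

print(d1)          # ~0
print(d2, fisher)  # both ~4.76
```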
So, combining (a), (b) and (c), we obtain
$D_\text{KL}(\theta\parallel\widehat{\theta})\approx \frac{1}{2}(\widehat{\theta}-\theta)^\mathsf{T}{\cal I(\theta)}(\widehat{\theta}-\theta)$
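To get a feel for the quality of this quadratic approximation, here is a small numerical sketch, again using the Bernoulli family as an illustrative example (not part of the derivation): the approximation $\frac{1}{2}(\widehat{\theta}-\theta)^2\,\mathcal{I}(\theta)$ tracks the exact divergence closely near $\theta$ and degrades as $\widehat{\theta}$ moves away.

```python
import math

def kl_bernoulli(p, q):
    """KL(Bernoulli(p) || Bernoulli(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p = 0.3                      # theta (arbitrary choice)
fisher = 1 / (p * (1 - p))   # Fisher information of the Bernoulli family

# compare the exact KL divergence with its second-order approximation
for q in (0.31, 0.35, 0.45):
    exact = kl_bernoulli(p, q)
    approx = 0.5 * (q - p) ** 2 * fisher
    print(f"q={q}: exact={exact:.6f}  approx={approx:.6f}")
```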
Therefore, $$d_{\text{KL}(\theta\parallel\widehat{\theta})}(\widehat{\theta},\theta)=\sqrt{2 D_\text{KL}(\theta\parallel\widehat{\theta})}\approx\sqrt{(\widehat{\theta}-\theta)^\mathsf{T}{{\cal I(\theta)}}(\widehat{\theta}-\theta)}=\lVert\widehat{\theta}-\theta\rVert_{{\cal I(\theta)}}=d_{\cal I(\theta)}(\widehat{\theta},\theta)$$
where $d_{\cal I(\theta)}(\widehat{\theta},\theta)$ is the metric defined by the Fisher information matrix.
As @user1936752 points out, while the distance induced by the Fisher information matrix is symmetric because it is a metric, $d_{\cal I(\theta)}(\widehat{\theta},\theta)=d_{\cal I(\theta)}(\theta,\widehat{\theta})$, the Kullback-Leibler divergence is not a metric, since $D_\text{KL}(\theta\parallel\widehat{\theta})\neq D_\text{KL}(\widehat{\theta}\parallel\theta)$ in general. Consequently, $d_{\cal I(\theta)}(\widehat{\theta},\theta)\neq d_{\cal I(\widehat{\theta})}(\widehat{\theta},\theta)$, because
$$d_{\cal I(\theta)}(\widehat{\theta},\theta)\approx d_{\text{KL}(\theta\parallel\widehat{\theta})}(\widehat{\theta},\theta)\neq d_{\text{KL}(\widehat{\theta}\parallel\theta)}(\widehat{\theta},\theta)\approx d_{\cal I(\widehat{\theta})}(\widehat{\theta},\theta)$$
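This asymmetry can also be seen numerically. The sketch below (again using the Bernoulli family as a hypothetical example, with arbitrary parameter values) shows that the forward and reverse KL divergences differ, and that each one is approximated by the Fisher distance taken at its own first argument.

```python
import math

def kl_bernoulli(p, q):
    """KL(Bernoulli(p) || Bernoulli(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def d_fisher(at, a, b):
    """||a - b||_{I(at)} for the Bernoulli family, I(p) = 1/(p(1-p))."""
    return math.sqrt((a - b) ** 2 / (at * (1 - at)))

theta, theta_hat = 0.2, 0.3   # arbitrary illustrative values

kl_fwd = kl_bernoulli(theta, theta_hat)
kl_rev = kl_bernoulli(theta_hat, theta)
print(kl_fwd, kl_rev)   # differ: KL is not symmetric

# each sqrt(2*KL) is approximated by the Fisher metric at its first argument
print(math.sqrt(2 * kl_fwd), d_fisher(theta, theta_hat, theta))
print(math.sqrt(2 * kl_rev), d_fisher(theta_hat, theta_hat, theta))
```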
I hope this helps.