Yes, the covariance matrix of all the variables--explanatory and response--contains the information needed to find all the coefficients, provided an intercept (constant) term is included in the model. (Although the covariances provide no information about the constant term, it can be found from the means of the data.)
Analysis
Let the data for the explanatory variables be arranged as $n$-dimensional column vectors $x_1, x_2, \ldots, x_p$ and let the response variable be the column vector $y$, regarded as a realization of a random variable $Y$. The ordinary least squares estimates $\hat\beta$ of the coefficients in the model
$$\mathbb{E}(Y) = \alpha + X\beta$$
are obtained by assembling the $p+1$ column vectors $X_0 = (1, 1, \ldots, 1)^\prime, X_1, \ldots, X_p$ into an $n \times (p+1)$ array $X$ and solving the system of linear equations
$$X^\prime X \hat\beta = X^\prime y.$$
It is equivalent to the system
$$\frac{1}{n}X^\prime X \hat\beta = \frac{1}{n}X^\prime y.$$
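As a quick numerical check of this equivalence (not part of the derivation above; the data and names here are simulated purely for illustration), the following R sketch solves the normal equations directly and confirms that rescaling both sides by $1/n$ leaves the solution unchanged:

# Illustration only: the normal equations and their 1/n-scaled version agree.
set.seed(1)
n <- 10; p <- 2
X <- cbind(1, matrix(rnorm(n*p), nrow=n))        # adjoin the constant column X_0
y <- rnorm(n)
b1 <- solve(crossprod(X), crossprod(X, y))       # X'X b = X'y
b2 <- solve(crossprod(X)/n, crossprod(X, y)/n)   # (1/n)X'X b = (1/n)X'y
max(abs(b1 - b2))                                # essentially zero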
Gaussian elimination will solve this system. It proceeds by adjoining the $(p+1)\times (p+1)$ matrix $\frac{1}{n}X^\prime X$ and the $(p+1)$-vector $\frac{1}{n}X^\prime y$ into a $(p+1) \times (p+2)$ array $A$ and row-reducing it.
The first step will inspect $\frac{1}{n}(X^\prime X)_{11} = \frac{1}{n}X_0^\prime X_0 = 1$. Finding this to be nonzero, it proceeds to subtract appropriate multiples of the first row of $A$ from the remaining rows in order to zero out the remaining entries in its first column. These multiples will be $\frac{1}{n}X_0^\prime X_i = \overline X_i$, and the number subtracted from the entry $A_{i+1,j+1} = \frac{1}{n}X_i^\prime X_j$ will equal $\overline X_i \overline X_j$. The result is just the formula for the covariance of $X_i$ and $X_j$. Moreover, the number left in the $i+1, p+2$ position equals $\frac{1}{n}X_i^\prime y - \overline{X_i}\,\overline{y}$, the covariance of $X_i$ with $y$.
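The next R sketch (again my own illustration with simulated data) verifies this step numerically: it builds the array $A$, performs the first elimination step, and compares the resulting block with the covariances computed directly (rescaled, since cov in R uses the divisor $n-1$ rather than the $n$ used above):

# Illustration only: the first elimination step produces covariances.
set.seed(1)
n <- 20; p <- 3
X <- matrix(rnorm(n*p), nrow=n); y <- rnorm(n)
X0 <- cbind(1, X)                                  # adjoin the constant column
A <- cbind(crossprod(X0), crossprod(X0, y)) / n    # the (p+1) x (p+2) array
A[-1, ] <- A[-1, ] - outer(A[-1, 1], A[1, ])       # zero out the first column
max(abs(A[-1, -1] - cbind(cov(X), cov(X, y)) * (n-1)/n))  # essentially zero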
Thus, after the first step of Gaussian elimination the system is reduced to solving
$$C\hat{\beta} = (\text{Cov}(X_i, y))^\prime,$$
where $C$ is the covariance matrix of the explanatory variables,
and obviously--since all the coefficients are covariances--that solution can be found from the covariance matrix of all the variables.
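For concreteness, here is a minimal sketch (with simulated data, separate from the example below) showing how both $C$ and the covariances with $y$ can be read off the single covariance matrix of all the variables:

# Illustration only: the slopes from one covariance matrix of all variables.
set.seed(42)
n <- 10; p <- 2
x <- matrix(rnorm(n*p), nrow=n); y <- rnorm(n)
S <- cov(cbind(x, y))                    # covariance matrix of (X_1, ..., X_p, y)
solve(S[1:p, 1:p], S[1:p, p+1])          # the slope estimates
coef(lm(y ~ x))[-1]                      # agrees with the OLS slopes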
(When $C$ is invertible the solution can be written $C^{-1}(\text{Cov}(X_i, y))^\prime$. The formulas given in the question are special cases of this when $p=1$ and $p=2$. Writing out such formulas explicitly will become more and more complex as $p$ grows. Moreover, they are inferior for numerical computation, which is best carried out by solving the system of equations rather than by inverting the matrix $C$.)
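For instance, when $p=1$ the solution collapses to the familiar simple-regression slope (presumably one of the formulas alluded to in the question):
$$\hat\beta = \frac{\text{Cov}(X_1, y)}{\text{Var}(X_1)}.$$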
The constant term will be the difference between the mean of $y$ and the mean values predicted from the estimates, $X\hat{\beta}$.
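In symbols, that difference is
$$\hat\alpha = \overline{y} - \sum_{j=1}^p \overline{X_j}\,\hat\beta_j.$$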
Example
To illustrate, the following R code creates some data, computes their covariances, and obtains the least squares coefficient estimates solely from that information. They are compared with the estimates obtained from the least-squares estimator lm.
#
# 1. Generate some data.
#
set.seed(17)
n <- 10; p <- 2
z <- matrix(rnorm(n*(p+1)), nrow=n)
y <- z[, p+1]; x <- z[, -(p+1)]
#
# 2. Find the OLS coefficients from the covariances only.
#
a <- cov(x)
b <- cov(x,y)
t(beta.hat <- solve(a, b)) # Coefficients from the covariance matrix
mean(y - x %*% beta.hat) # Intercept from the covariance matrix and means
coef(lm(y ~ x)) # OLS solution
The output shows the agreement between the two methods:
> t(beta.hat <- solve(a, b)) # Coefficients from the covariance matrix
[,1] [,2]
[1,] -0.424551 -1.006675
> mean(y - x %*% beta.hat) # Intercept from the covariance matrix and means
[1] 0.946155
> coef(lm(y ~ x))
(Intercept) x1 x2
0.946155 -0.424551 -1.006675