With the usual notation, arrange the independent variables by columns in an $n\times (p+1)$ matrix $X$, with one of them filled with the constant $1$, and arrange the dependent values $Y$ into an $n$-vector (column). Assuming the appropriate inverses exist, the desired formulas are
$$b = (X^\prime X)^{-1} X^\prime Y$$
for the $p+1$ parameter estimates $b$,
$$s^2 = (Y^\prime Y - b^\prime X^\prime Y)/(n-p-1)$$
for the residual variance (the square of the residual standard error), and
$$V = (X^\prime X)^{-1} s^2$$
for the variance-covariance matrix of $b$.
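For concreteness, here is how these formulas look when applied directly to data in R (a minimal sketch; the sizes, names, and coefficients are illustrative only):

n <- 20; p <- 2                           # Illustrative sizes
X <- cbind(1, matrix(rnorm(n*p), ncol=p)) # Constant in the first column
Y <- drop(X %*% c(1, -1, 2) + rnorm(n))   # Arbitrary coefficients plus error
b <- solve(crossprod(X), crossprod(X, Y)) # (X'X)^{-1} X'Y
s2 <- (crossprod(Y) - crossprod(b, crossprod(X, Y))) / (n - p - 1)
V <- solve(crossprod(X)) * c(s2)          # Variance-covariance matrix of b

The values of `b`, `sqrt(s2)`, and `sqrt(diag(V))` reproduce the estimates, residual standard error, and coefficient standard errors reported by `summary(lm(Y ~ X[, -1]))`.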
When only the summary statistics $\newcommand{\m}{\mathrm m} \m_X$ (the means of the nonconstant columns of $X$, forming a $p$-vector), $m_Y$ (the mean of $Y$), $\text{Cov}(X)$ (the $p\times p$ covariance matrix of those columns), $\text{Var}(Y)$, and $\text{Cov}(X,Y)$ are available, first recover the needed values via
$$(X^\prime X)_0 = (n-1)\text{Cov}(X) + n \m_X \m_X^\prime,$$
$$Y^\prime Y = (n-1)\text{Var}(Y) + n m_Y^2,$$
$$(X^\prime Y)_0 = (n-1)\text{Cov}(X,Y) + n m_Y \m_X.$$
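These recovery identities are easy to verify numerically. A quick sketch (the data here are again illustrative):

n <- 30
x <- matrix(rnorm(n*2), ncol=2) # Nonconstant columns only
y <- rnorm(n)
all.equal(crossprod(x), (n-1)*cov(x) + n*tcrossprod(colMeans(x))) # (X'X)_0
all.equal(sum(y^2), (n-1)*var(y) + n*mean(y)^2)                   # Y'Y
all.equal(c(crossprod(x, y)), c((n-1)*cov(x, y) + n*mean(y)*colMeans(x))) # (X'Y)_0

Each comparison returns `TRUE`.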
Then to obtain $X^\prime X$, border $(X^\prime X)_0$ symmetrically by a vector of column sums (given by $n \m_X$) with the value $n$ on the diagonal; and to obtain $X^\prime Y$, augment the vector $(X^\prime Y)_0$ with the sum $n m_Y$. For instance, when using the first column for the constant term, these bordered matrices will look like
$$X^\prime X = \begin{pmatrix}
n & n\m_X^\prime \\
n\m_X & (X^\prime X)_0
\end{pmatrix}$$
and
$$X^\prime Y = \left(n m_Y, (X^\prime Y)_0\right)^\prime$$
in block-matrix form.
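Continuing the numerical sketch above, the bordered matrix agrees with $X^\prime X$ computed from a design that includes the constant column:

X1 <- cbind(1, x) # Prepend the constant column
all.equal(crossprod(X1),
          rbind(c(n, n*colMeans(x)),
                cbind(n*colMeans(x), (n-1)*cov(x) + n*tcrossprod(colMeans(x))))) # TRUE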
If the means are not available--the question did not indicate they are--then replace them all with zeros. The output will estimate an "intercept" of $0$, of course, and the standard error of that intercept will likely be incorrect, but the remaining coefficient estimates and standard errors will be correct.
Code
The following R
code generates data, uses the preceding formulas to compute $b$, $s^2$, and the diagonal of $V$ from only the means and covariances of the data (along with the values of $n$ and $p$ of course), and compares them to standard least-squares output derived from the data. In all examples I have run (including multiple regression with $p\gt 1$) agreement is exact to the default output precision (about seven decimal places).
For simplicity--to avoid doing essentially the same set of operations three times--this code first combines all the summary data into a single matrix `v` and then extracts $X^\prime X$, $X^\prime Y$, and $Y^\prime Y$ from its entries. The comments note what is happening at each step.
n <- 24
p <- 3
beta <- seq(-p, p, length.out=p) # The model
set.seed(17)
x <- matrix(rnorm(n*p), ncol=p) # Independent variables
y <- drop(x %*% beta + rnorm(n)) # Dependent variable plus error; drop() keeps y a vector so lm() fits a univariate model
#
# Compute the first and second order data summaries.
#
m <- rep(0, p+1) # Default means
m <- colMeans(cbind(x,y)) # If means are available--comment out otherwise
v <- cov(cbind(x,y)) # All variances and covariances
#
# From this point on, only the summaries `m` and `v` are used for the calculations
# (along with `n` and `p`, of course).
#
m <- m * n # Compute column sums
v <- v * (n-1) # Recover the centered sums of squares and products
v <- v + outer(m, m)/n # Adjust to obtain the raw sums of squares and products
v <- rbind(c(n, m), cbind(m, v)) # Border with the sums and the data count
xx <- v[-(p+2), -(p+2)] # Extract X'X
xy <- v[-(p+2), p+2] # Extract X'Y
yy <- v[p+2, p+2] # Extract Y'Y
b <- solve(xx, xy) # Compute the coefficient estimates
s2 <- (yy - b %*% xy) / (n-p-1) # Compute the residual variance estimate
#
# Compare to `lm`.
#
fit <- summary(lm(y ~ x))
(rbind(Correct=coef(fit)[, "Estimate"], From.summary=b)) # Coeff. estimates
(c(Correct=fit$sigma, From.summary=sqrt(s2))) # Residual SE
#
# The SE of the intercept will be incorrect unless true means are provided.
#
se <- sqrt(diag(solve(xx) * c(s2))) # Remove `diag` to compute the full var-covar matrix
(rbind(Correct=coef(fit)[, "Std. Error"], From.summary=se)) # Coeff. SEs