Doesn't actually exist

me likes this. most of my proofs would come from the field of psychometrics or quantitative psychology though (mostly factor analysis and structural equation modelling).

here i'm doing the (rather simple) proof of how the linear factor analysis model can be parameterised as a covariance structure model. it's relevant because the linear factor model itself cannot be estimated directly (the factor scores are unobserved), but as a covariance structure model it is possible to obtain parameter estimates.

let the observed score \(x\) be defined as the linear factor model \(x = \Lambda F + \epsilon\). since it is known that (in the case of multivariate normality, with mean-centred variables) \(E(xx')=\Sigma\), it trivially follows that:

\(\Sigma = E\left[(\Lambda F + \epsilon)(\Lambda F + \epsilon)'\right] = \Lambda E(FF')\Lambda' + \Lambda E(F\epsilon') + E(\epsilon F')\Lambda' + E(\epsilon\epsilon') = \Lambda E(FF')\Lambda' + E(\epsilon\epsilon')\)

which happens because the errors are random and assumed uncorrelated with the factors and estimated loadings, so the cross-product terms \(E(F\epsilon')\) and \(E(\epsilon F')\) vanish. now by linearity of expectation and substituting the covariance matrix of the factors, \(\Phi = E(FF')\), and of the errors, \(\Psi = E(\epsilon\epsilon')\), we can see that:

\(\Sigma = \Lambda\Phi\Lambda' + \Psi\)

so the covariance structure is expressed entirely in terms of the model parameters \(\Lambda\), \(\Phi\) and \(\Psi\), which is what makes estimation possible.
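if anyone wants to see the identity in action, here's a quick numpy sketch (the loadings, factor correlation and unique variances below are completely made up, just for illustration): simulate scores from \(x = \Lambda F + \epsilon\) and compare the sample covariance matrix to \(\Lambda\Phi\Lambda' + \Psi\).

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical 6-variable, 2-factor model (made-up values)
Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.6, 0.0],
                   [0.0, 0.8],
                   [0.0, 0.7],
                   [0.0, 0.6]])
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])              # factor covariance matrix E(FF')
Psi = np.diag([0.36, 0.51, 0.64,
               0.36, 0.51, 0.64])         # diagonal error covariance E(ee')

# model-implied covariance: Sigma = Lambda Phi Lambda' + Psi
Sigma = Lambda @ Phi @ Lambda.T + Psi

# simulate scores from x = Lambda F + e, with F and e independent
n = 200_000
F = rng.multivariate_normal(np.zeros(2), Phi, size=n)
e = rng.multivariate_normal(np.zeros(6), Psi, size=n)
x = F @ Lambda.T + e

# the sample covariance should match Sigma up to Monte Carlo error
print(np.abs(np.cov(x, rowvar=False) - Sigma).max())
```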

TS Contributor

Okay, since this day is soon over (at least according to Swedish time) and no one has posted a proof yet today, I'll post another one. I'll give a very simple, and possibly boring, proof this time. I'll prove that \(\bar{x}\) is the value of \(a\) that minimizes the sum \(\sum_{i=1}^n{(x_i-a)^2}\) (1).

By taking the first derivative with respect to \(a\) and setting it equal to zero, we get \(\sum_{i=1}^n{-2(x_i-a)}=0 \Leftrightarrow -2\sum_{i=1}^n{x_i}+2na=0 \Leftrightarrow \sum_{i=1}^n{x_i}=na \Leftrightarrow \bar{x}=a\).

By checking the second order condition we see that it equals \(2n\), which is always positive, so now we know that \(\bar{x}\) is at least a local minimum. And since (1) is a quadratic in \(a\) with positive leading coefficient \(n\), this local minimum is also the global minimum.
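If you'd rather see it than prove it, here's a tiny numerical check (just a sketch on a simulated sample): evaluate \(\sum_{i=1}^n{(x_i-a)^2}\) over a grid of candidate values of \(a\) and watch the minimizer land on \(\bar{x}\).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000)  # any sample will do

# sum of squared deviations around a candidate value a
def sse(a):
    return np.sum((x - a) ** 2)

# brute-force search over a fine grid of candidate values
grid = np.linspace(x.min(), x.max(), 10_001)
best = grid[np.argmin([sse(a) for a in grid])]

print(best, x.mean())  # the grid minimizer sits right at the sample mean
```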

Doesn't actually exist

a while ago (before Englund became an MVC) I posted a proof about another result in factor analysis. I thought it would be nice to resurrect it (briefly) and add it here to our small (but growing) compendium of proofs. the original thread is here.

By definition of \(\psi_i\), we know that the diagonal of \((\mathbf{S} - (\mathbf{LL'} + \mathbf{\Psi}))\) is all zeroes. Since
\((\mathbf{S} - (\mathbf{LL'} + \mathbf{\Psi}))\) and \((\mathbf{S} - \mathbf{LL'})\) have the same elements except on the diagonal, we know that

\(\sum_{i,j}\left[\mathbf{S} - (\mathbf{LL'} + \mathbf{\Psi})\right]_{ij}^2 \leq \sum_{i,j}\left[\mathbf{S} - \mathbf{LL'}\right]_{ij}^2\)

because the off-diagonal terms of the two sums are identical, while the diagonal terms on the left-hand side are all zero.
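here's a small numerical sanity check of that inequality (the \(\mathbf{S}\) and one-factor \(\mathbf{L}\) below are arbitrary made-up matrices, with \(\mathbf{\Psi}\) chosen so the residual diagonal is exactly zero, which is all the algebra needs):

```python
import numpy as np

rng = np.random.default_rng(1)

# arbitrary symmetric "sample covariance" S and a made-up 1-factor loading vector L
A = rng.normal(size=(5, 5))
S = A @ A.T
L = rng.normal(size=(5, 1))

# choose Psi so that the diagonal of S - (LL' + Psi) is exactly zero
Psi = np.diag(np.diag(S - L @ L.T))

lhs = np.sum((S - (L @ L.T + Psi)) ** 2)   # residual sum of squares with Psi
rhs = np.sum((S - L @ L.T) ** 2)           # residual sum of squares without Psi
print(lhs <= rhs)  # True: the matrices differ only on the (now zeroed) diagonal
```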

Doesn't actually exist

i don't quite understand why but pretty much NO ONE in the Statistics world even touches on Factor Analysis. when it comes to dimension reduction techniques, almost all of the undergrad stats textbooks i've seen that deal with intro to multivariate analysis stop at principal components. there may be some small subsection in some nameless appendix that says something about Factor Analysis... but that's it!

Ridge regression deals with multicollinearity, which arises with strong correlation patterns among the column vectors of the data matrix \( \mathbf{X} \in Mat_{n,p}(\mathbb{R}) \). The problem with multicollinearity is that single components within the vector of parameters \( \boldsymbol{\beta} \in \mathbb{R}^k \) can take absurdly large values. So the general idea is to restrict the length of said vector to a prespecified positive real number \(c\). Denote this restriction by \( \left\| \boldsymbol{\beta} \right\|_2^2=c \), where \( \left\|\cdot \right\|_2 \) is just the Euclidean norm on \( \mathbb{R}^k \).

The constrained least-squares problem leads to the Lagrangian

\( Q_n(\boldsymbol{\beta}, \lambda) = (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})'(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) + \lambda\left(\boldsymbol{\beta}'\boldsymbol{\beta} - c\right), \)

where the Lagrange parameter \(\lambda\) is assumed to be positive and \(\mathbf{\Theta} \subseteq \mathbb{R}^k \times \mathbb{R}_{>0}\) is the associated parameter space. The optimization problem is equivalent to

\( \min_{\boldsymbol{\beta} \in \mathbb{R}^k} \; (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})'(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) + \lambda\,\boldsymbol{\beta}'\boldsymbol{\beta}, \)

and setting the gradient with respect to \(\boldsymbol{\beta}\) equal to zero yields the ridge estimator

\( \hat{\boldsymbol{\beta}} = \left(\mathbf{X}'\mathbf{X} + \lambda\mathbf{I}_k\right)^{-1}\mathbf{X}'\mathbf{y}. \)

Also, this is the unique global minimizer of \( Q_n \): the objective is just a sum of convex functions and is strictly convex in \(\boldsymbol{\beta}\) (since \(\mathbf{X}'\mathbf{X} + \lambda\mathbf{I}_k\) is positive definite for \(\lambda > 0\)), and \( \hat{\boldsymbol{\beta}} \) is the only stationary point, so one doesn't need to check the second order condition and the associated Hessians.
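For anyone who wants to play with the closed form, here is a minimal numpy sketch (the collinear design, true coefficients and \(\lambda = 1\) are all arbitrary choices, not anything from the derivation above):

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical design with strong multicollinearity: two nearly identical columns
n = 500
z = rng.normal(size=n)
X = np.column_stack([z, z + rng.normal(scale=0.01, size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 1.0, -2.0])
y = X @ beta_true + rng.normal(size=n)

lam = 1.0  # penalty / Lagrange parameter, chosen arbitrarily

# ridge estimator: (X'X + lambda*I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# ordinary least squares for comparison: the coefficients on the two
# collinear columns explode, while the ridge coefficients stay moderate
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("OLS:  ", beta_ols)
print("ridge:", beta_ridge)
```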

Super Moderator

The Pearson product-moment coefficient of correlation can be interpreted as the cosine of the angle between variable vectors in \(n\)-dimensional space. Here, I will show the relationship between the Pearson (\(\rho\)) and Spearman (rank-based, \(\rho_S\)) correlation coefficients for the bivariate normal distribution,

\( \rho_S = \frac{6}{\pi}\sin^{-1}\left(\frac{\rho}{2}\right), \)

through the following series expansion of \(\sin^{-1}\):

\( \rho_S = \frac{6}{\pi}\sum_{k=0}^{\infty}\frac{(2k)!}{4^k(k!)^2(2k+1)}\left(\frac{\rho}{2}\right)^{2k+1} = \frac{3}{\pi}\left(\rho + \frac{\rho^3}{24} + \frac{3\rho^5}{640} + \cdots\right). \)
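A quick Monte Carlo check of that relation (just a sketch; the population correlation \(\rho = 0.6\) is an arbitrary choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

rho = 0.6       # arbitrary population Pearson correlation
n = 200_000

# simulate from the bivariate normal with correlation rho
cov = np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# sample Spearman coefficient vs. the closed-form value (6/pi) * arcsin(rho/2)
r_s, _ = stats.spearmanr(x, y)
print(r_s, 6 / np.pi * np.arcsin(rho / 2))
```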