Factor analysis through the lens of prediction
Factor analysis is a classical statistical method underlying some of the most important results in psychology. In this blog post, I study it through the lens of prediction.
Factor analysis was created by psychologists to study differences between people (Spearman, 1904; Thurstone, 1931). The core idea of factor analysis is that a person’s idiosyncratic behaviors – e.g. their specific responses to a set of survey questions – can be predicted using a small number of latent variables called factors.
For example, Spearman proposed that a single latent variable underlies a person’s cognitive ability, which he dubbed the general factor, or “g factor”. In personality research, factor analysis forms the basis of the standard “Big 5” model, which posits five factors underlie differences in personality.
A bit more abstractly, factor analysis seeks to model data-generating processes in which vectors $x \in \mathbb{R}^d$ are drawn from an unknown probability distribution $p$. To do this, factor analysis uses vectors sampled from $p$ to fit an approximate distribution $q \approx p$, where $q$ takes a particularly simple form that I’ll describe shortly.
In many introductions to the topic, factor analysis is presented as an inferential tool that discovers or tests for a particular latent structure. The conventional goal of performing factor analysis is to draw inferences about $p$, by way of inspecting the parameters of a fitted $q$.
In this blog post, I’m going to provide a complementary perspective. I’ll present factor analysis as a model-fitting procedure, where the goal is to build a model that can predict out-of-sample data. Three simple questions I’ll address in this blog post are:
- What is the model “architecture”?
- How do you fit the model?
- Once fitted, what predictions can it make?
In a sense, this is more of an ML-flavored introduction to factor analysis – one that, at least for me, felt like the more intuitive way into the topic.
Model
In classical factor analysis, the model is a probability distribution $q$ over $\mathbb{R}^d$. In particular, $q$ is a multivariate normal distribution with a certain, restricted type of covariance matrix:

$$q = \mathcal{N}(\mu, \Sigma), \qquad \Sigma = \Lambda \Lambda^\top + \Psi$$

The parameters of $q$ are:
- The mean $\mu \in \mathbb{R}^d$
- The “loading matrix” $\Lambda \in \mathbb{R}^{d \times k}$, where $k$ is a hyperparameter of the model.
- The diagonal matrix $\Psi \in \mathbb{R}^{d \times d}$, which contains nonnegative entries.
The covariance matrix $\Sigma = \Lambda \Lambda^\top + \Psi$ is the heart of factor analysis. It expresses the core inductive bias of the model: that data are generated by sampling from a Gaussian lying in a $k$-dimensional subspace (as summarized by $\Lambda \Lambda^\top$), then corrupting it with per-dimension noise (as summarized by the diagonal matrix $\Psi$).
Equivalently, the model describes the following generative process:

$$x = \mu + \Lambda z + \varepsilon$$

where $z \sim \mathcal{N}(0, I_k)$ is a sample from the standard multivariate normal over $\mathbb{R}^k$, and $\varepsilon \sim \mathcal{N}(0, \Psi)$, and $\varepsilon$ is independent of $z$.
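As a concrete sketch of this generative process in NumPy (the dimensions and parameter values below are arbitrary, made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 2                           # observed dimensions, number of factors

mu = np.zeros(d)                      # mean
Lambda = rng.normal(size=(d, k))      # loading matrix (d x k)
psi = np.abs(rng.normal(size=d))      # diagonal of the noise covariance Psi

def sample_x(n):
    """Draw n samples from the factor-analysis generative process."""
    z = rng.normal(size=(n, k))                    # z ~ N(0, I_k)
    eps = rng.normal(size=(n, d)) * np.sqrt(psi)   # eps ~ N(0, Psi)
    return mu + z @ Lambda.T + eps                 # x = mu + Lambda z + eps

X = sample_x(10000)
```

With enough samples, the empirical covariance of `X` approaches $\Lambda \Lambda^\top + \Psi$, which is exactly the restricted covariance structure described above.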
Note that $k$ determines the number of free parameters. In general, $d \times d$ covariance matrices have $d(d+1)/2$ parameters. In factor analysis, we have $dk + d$ free parameters ($dk$ in $\Lambda$ and $d$ in $\Psi$), which is fewer than the general case when $k \ll d$.
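A quick sanity check of these parameter counts (covariance parameters only, for an illustrative $d = 100$, $k = 5$):

```python
d, k = 100, 5
full_cov_params = d * (d + 1) // 2   # symmetric d x d matrix: 5050
fa_cov_params = d * k + d            # Lambda (d*k) plus diagonal Psi (d): 600
print(full_cov_params, fa_cov_params)
```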
Regardless of the choice of $k$, factor analysis has a non-unique parameterization, as $\Sigma$ is invariant to rotations of $\Lambda$: for any orthogonal $R \in \mathbb{R}^{k \times k}$, we have $(\Lambda R)(\Lambda R)^\top = \Lambda \Lambda^\top$. In most applications of factor analysis, selecting one such rotation for $\Lambda$ is a key step in obtaining an interpretable model. However, from the prediction-oriented perspective of this blog post, the choice of basis is irrelevant, because it does not change the probabilities assigned by $q$.
Fitting
Our goal is to approximate some unknown probability distribution $p$ from which we have drawn samples $x_1, \ldots, x_n \in \mathbb{R}^d$, which we stack as the rows of a data matrix $X \in \mathbb{R}^{n \times d}$.
In ML terms, one could say factor analysis addresses an unsupervised learning problem: learning an unknown distribution from empirical samples, using an intentionally restricted hypothesis class (here, low-rank Gaussian distributions).
Estimating the parameters of $q$ from $X$ is done using maximum likelihood estimation, typically via the EM algorithm (Jöreskog, 1969; Rubin & Thayer, 1982). The log-likelihood of the data is the usual formula for multivariate Gaussian distributions:

$$\log \mathcal{L}(\mu, \Lambda, \Psi) = -\frac{n}{2} \left( d \log 2\pi + \log \det \Sigma + \operatorname{tr}\!\left( \Sigma^{-1} S \right) \right)$$

where $\Sigma = \Lambda \Lambda^\top + \Psi$ and $S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^\top$.
We noted earlier that $k$ (the number of factors) is a hyperparameter. In conventional applications of factor analysis, selecting $k$ is done using tools like scree plots, likelihood-ratio tests, and AIC/BIC statistics. As an alternative that would be more familiar to an ML practitioner, one could split $X$ into training and validation sets (row-wise), and perform cross-validated parameter selection to identify the value of $k$ that maximizes the likelihood on the validation set.
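A minimal sketch of this validation-based selection of $k$, using scikit-learn’s `FactorAnalysis` on synthetic data drawn from a 3-factor model (all dimensions and values here are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, d, true_k = 2000, 20, 3

# Synthetic data from a 3-factor model: x = Lambda z + eps.
Lambda = rng.normal(size=(d, true_k))
psi = rng.uniform(0.5, 1.5, size=d)
Z = rng.normal(size=(n, true_k))
X = Z @ Lambda.T + rng.normal(size=(n, d)) * np.sqrt(psi)

# Row-wise split into training and validation sets.
X_train, X_val = X[:1500], X[1500:]

# FactorAnalysis.score returns the average log-likelihood per sample;
# pick the k that maximizes it on the validation set.
scores = {k: FactorAnalysis(n_components=k, random_state=0)
             .fit(X_train).score(X_val)
          for k in range(1, 8)}
best_k = max(scores, key=scores.get)
```

On data like this, the validation likelihood typically peaks at or near the true number of factors.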
Making predictions
I tend to think of models as input-output machines. In goes $x$; out comes some $\hat{y}$. In such cases, it’s straightforward to understand what an “out-of-sample prediction” is – just feed in some $x$ that you didn’t use to build the machine, then get its output $\hat{y}$. Hopefully, that prediction is correct.
In the case of factor analysis, we have a probabilistic model $q$ that maps vectors to probability densities. So one immediate notion of making an “out-of-sample prediction” might be assigning a probability density $q(x)$ to an unseen observation $x$.
But factor analysis also supports a more useful notion of prediction. Because $q$ is a distribution over $\mathbb{R}^d$, we can condition on some observed dimensions then predict the others. Concretely, suppose we partition the $d$ dimensions (items) into two sets, which we’ll call “support” ($s$) and “test” ($t$). Then for any sample $x \sim q$, we can write:

$$x = (x_s, x_t)$$
Then, we can derive the distribution of $x_t$ conditioned on $x_s$. First note that the rows of $\Lambda$ and the diagonal of $\Psi$ can be partitioned the same way:

$$\Lambda = \begin{pmatrix} \Lambda_s \\ \Lambda_t \end{pmatrix}, \qquad \Psi = \begin{pmatrix} \Psi_s & 0 \\ 0 & \Psi_t \end{pmatrix}$$
Since $x \sim \mathcal{N}(\mu, \Lambda \Lambda^\top + \Psi)$, the joint distribution of the two blocks is

$$\begin{pmatrix} x_s \\ x_t \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_s \\ \mu_t \end{pmatrix}, \begin{pmatrix} \Lambda_s \Lambda_s^\top + \Psi_s & \Lambda_s \Lambda_t^\top \\ \Lambda_t \Lambda_s^\top & \Lambda_t \Lambda_t^\top + \Psi_t \end{pmatrix} \right)$$
Applying the standard Gaussian conditioning formula:

$$x_t \mid x_s \sim \mathcal{N}(\mu_{t \mid s}, \Sigma_{t \mid s})$$

where

$$\mu_{t \mid s} = \mu_t + \Lambda_t \Lambda_s^\top \left( \Lambda_s \Lambda_s^\top + \Psi_s \right)^{-1} (x_s - \mu_s)$$

$$\Sigma_{t \mid s} = \Lambda_t \Lambda_t^\top + \Psi_t - \Lambda_t \Lambda_s^\top \left( \Lambda_s \Lambda_s^\top + \Psi_s \right)^{-1} \Lambda_s \Lambda_t^\top$$
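The Gaussian conditioning step translates into a few lines of NumPy. This is a sketch, assuming `mu`, `Lambda`, and `psi` are fitted parameters and `s`, `t` are index arrays partitioning the items:

```python
import numpy as np

def condition(mu, Lambda, psi, s, t, x_s):
    """Predictive distribution of the test items given support observations.

    mu: (d,) mean; Lambda: (d, k) loadings; psi: (d,) diagonal of Psi.
    s, t: integer index arrays partitioning the d items; x_s: observed values.
    Returns the conditional mean and covariance of x_t | x_s.
    """
    L_s, L_t = Lambda[s], Lambda[t]
    Sigma_ss = L_s @ L_s.T + np.diag(psi[s])         # support-block covariance
    Sigma_ts = L_t @ L_s.T                           # cross-covariance
    K = Sigma_ts @ np.linalg.inv(Sigma_ss)           # "regression" coefficients
    mean = mu[t] + K @ (x_s - mu[s])
    cov = L_t @ L_t.T + np.diag(psi[t]) - K @ Sigma_ts.T
    return mean, cov
```

(For large support sets, one would invert $\Lambda_s \Lambda_s^\top + \Psi_s$ via the Woodbury identity rather than directly, but the direct form matches the formulas above.)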
This conditional distribution is a sort of “personalized model”, tuned for the individual (or whatever) that $x$ represents. Namely, it tells us what responses from that individual are likely on the test items $x_t$, given our observations of the individual’s responses $x_s$ on the support items.
Nicely, the support set can be any subset of the total items. We are not restricted to predicting one particular variable from one particular set of inputs chosen in advance.
This leads to at least one natural notion of train/test evaluation: after training $q$, we can split the items into support and test sets, gather fresh samples $x = (x_s, x_t)$, then see how well $q$ can predict variation in the test items after conditioning on the support items.
tl;dr
In slightly more informal terms, here’s everything I wrote above:
Suppose you perform a measurement procedure in which you take $d$ real-valued measurements from a person. Plot their measurements as a point in this $d$-dimensional “measurement space”. Repeat this across many people, and soon enough you have a cloud of points in $\mathbb{R}^d$, where each point is a person.
In a nutshell, factor analysis consists of fitting a low-rank Gaussian over this cloud of points.
Once you fit the Gaussian, you can use it to make out-of-sample predictions in at least the following sense: when a new person walks into the lab, you can take a subset of the $d$ measurements from that person, then use the Gaussian to predict their responses on the remaining measurements through conditioning.
Appendix
Probabilistic PCA vs. Factor Analysis
Factor analysis and probabilistic principal component analysis (PPCA) have deep similarities, but they are not identical. Like factor analysis, PPCA aims to learn a multivariate Gaussian model of the data. The difference is the following:
- PPCA has covariance matrices of the form $\Sigma = \Lambda \Lambda^\top + \sigma^2 I$
- Factor analysis has covariance matrices of the form $\Sigma = \Lambda \Lambda^\top + \Psi$, where $\Psi$ is diagonal with nonnegative entries
The interpretational distinctions between PPCA and factor analysis have been discussed in detail (Fabrigar et al., 1999), but from the “prediction perspective” of this blog post, PPCA and factor analysis amount to slightly different model families.
PPCA with $k$ dimensions can be understood to be slightly less expressive than factor analysis with $k$ factors, as it does not have the flexibility of fitting per-dimension noise variance using $\Psi$ (instead, it fits a single scalar $\sigma^2$). This has the usual bias-variance tradeoff implication (i.e. PPCA has a less expressive model space, but is easier to learn).
Assigning factors to an individual
Though the focus of this post is on predicting observable variables, it would be an omission not to mention how one estimates factor values $z$ for an individual, given their measurements $x$. First, recall how the random variable $z$ relates to $x$:

$$x = \mu + \Lambda z + \varepsilon$$
where $z \sim \mathcal{N}(0, I_k)$ and $\varepsilon \sim \mathcal{N}(0, \Psi)$. So the joint distribution of $z$ and $x$ can be written as a Gaussian:

$$\begin{pmatrix} z \\ x \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} 0 \\ \mu \end{pmatrix}, \begin{pmatrix} I_k & \Lambda^\top \\ \Lambda & \Lambda \Lambda^\top + \Psi \end{pmatrix} \right)$$
The standard conditioning formula for multivariate Gaussians gives the distribution of $z$ conditional on $x$ as:

$$z \mid x \sim \mathcal{N}\!\left( \Lambda^\top \Sigma^{-1} (x - \mu),\; I_k - \Lambda^\top \Sigma^{-1} \Lambda \right), \qquad \Sigma = \Lambda \Lambda^\top + \Psi$$
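This posterior can be sketched in a few lines of NumPy (assuming fitted parameters `mu`, `Lambda`, `psi`, with `psi` the diagonal of $\Psi$):

```python
import numpy as np

def factor_posterior(mu, Lambda, psi, x):
    """Posterior mean and covariance of the factors z given an observation x."""
    Sigma = Lambda @ Lambda.T + np.diag(psi)   # marginal covariance of x
    A = Lambda.T @ np.linalg.inv(Sigma)        # maps (x - mu) to E[z | x]
    mean = A @ (x - mu)
    cov = np.eye(Lambda.shape[1]) - A @ Lambda
    return mean, cov
```

For what it’s worth, scikit-learn’s `FactorAnalysis.transform` returns this same posterior mean of $z$ for each row of its input.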
- Spearman, C. (1904). "General Intelligence" Objectively Determined and Measured. The American Journal of Psychology.
- Thurstone, L. L. (1931). Multiple factor analysis. Psychological Review, 38(5), 406.
- Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34(2), 183–202.
- Rubin, D. B., & Thayer, D. T. (1982). EM algorithms for ML factor analysis. Psychometrika, 47(1), 69–76.
- Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272.