I looked at version 8, and it's OK with me. However, though it's only my opinion, I think you should probably add "method of correlated vectors" to the keywords.

EDIT :

Concerning this sentence: "Since the method relies on the indicator variables of the latent variable, it is susceptible to sampling error when the number of indicator variables is small." I suppose you're referring to psychometric sampling error? Jensen used this term for an unrepresentative sampling of test content (e.g., a battery composed mainly of verbal tests, or one with too few tests for one or more of the constructs). In that case you should probably write it as "psychometric sampling error"; that will avoid confusion about the terms.

There are a few ways it can go wrong: if one uses only a specific type of subtest (e.g., 10 kinds of arithmetic tests); if there is little variance among the subtest factor loadings (restriction of variance, common in standard IQ batteries because only useful subtests were deliberately selected); or through ordinary sampling error due to a low number of subtests (also common, since N_subtest is often only 5-10). For these reasons, MCV is not a very robust metric and is susceptible to artifacts. The term you propose seems inclusive enough to cover all errors in estimating the true correlation between the factor loadings and the vector of interest.
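The last of these problems (ordinary sampling error from a small number of subtests) is easy to demonstrate with a small simulation. This is a hypothetical sketch, not code from the paper: `mcv_simulation` and its parameters are my own illustrative names. It draws pairs of (loading, group-difference) values from a population with a fixed true correlation and measures how much the observed MCV correlation varies across replications.

```python
import random
import statistics


def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)


def mcv_simulation(n_subtests, true_r=0.6, n_reps=2000, seed=1):
    """Repeatedly sample n_subtests (loading, difference) pairs from a
    bivariate normal population with correlation true_r, and return the
    standard deviation of the observed MCV correlations across reps."""
    rng = random.Random(seed)
    observed = []
    for _ in range(n_reps):
        xs, ys = [], []
        for _ in range(n_subtests):
            x = rng.gauss(0, 1)
            # Construct y so that corr(x, y) = true_r in the population.
            y = true_r * x + (1 - true_r ** 2) ** 0.5 * rng.gauss(0, 1)
            xs.append(x)
            ys.append(y)
        observed.append(pearson(xs, ys))
    return statistics.stdev(observed)


# Spread of MCV estimates shrinks as the number of subtests grows:
print(mcv_simulation(7))    # a typical battery size; wide spread
print(mcv_simulation(50))   # a hypothetically large battery; much narrower
```

With only 7 subtests, single-battery MCV correlations scatter widely around the true value, which is the instability described above; dozens of subtests would be needed before the estimate stabilizes.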

Quote:Certainly, like any other statistic, a g factor based on a limited number of mental tests and a limited number of subjects will contain error. There are three main sources of such error: (1) subject sampling error, because a sample does not perfectly represent the population; (2) psychometric sampling error, because a limited number of diverse mental tests does not perfectly represent the total population of mental tests, actual or conceivable; and (3) all factor scores in a common factor analysis, including g factor scores, by any method of derivation, are only estimates of the true factor scores, which remain unknown, in the same sense that obtained scores are estimates of true scores, with some determinable margin of probable error, in classical measurement theory. It has been determined mathematically that the average minimum correlation between estimated factor scores and their corresponding hypothetical true factor scores rapidly increases as a function of the ratio of the number of tests to the number of first-order factors (Gorsuch, 1983, p. 259). With 11 tests and two first-order factors, as in this psychometric battery, the minimum correlation between estimated and true factor scores would be + .84, and the actual correlation could be well above this value.

Quote:Just as we can think statistically in terms of the sampling error of a statistic, when we randomly select a limited group of subjects from a population, or of measurement error, when we obtain a limited number of measurements of a particular variable, so too we can think in terms of a psychometric sampling error. In making up any collection of cognitive tests, we do not have a perfectly representative sample of the entire population of cognitive tests or of all possible cognitive tests, and so any one limited sample of tests will not yield exactly the same g as another limited sample. The sample values of g are affected by subject sampling error, measurement error, and psychometric sampling error. But the fact that g is very substantially correlated across different test batteries means that the variable values of g can all be interpreted as estimates of some true (but unknown) g, in the same sense that, in classical test theory, an obtained score is viewed as an estimate of a true score.

Quote:The deviation from perfect construct validity in g attenuates the values of r(g × d). In making up any collection of cognitive tests, we do not have a perfectly representative sample of the entire universe of all possible cognitive tests. Therefore any one limited sample of tests will not yield exactly the same g as another such sample. The sample values of g are affected by psychometric sampling error, but the fact that g is very substantially correlated across different test batteries implies that the differing obtained values of g can all be interpreted as estimates of a “true” g. The values of r(g × d) are attenuated by psychometric sampling error in each of the batteries from which a g factor has been extracted. We carried out a separate study to empirically estimate the values for this correction.

Dragt, J. (2010). Causes of group differences studied with the method of correlated vectors: A psychometric meta-analysis of Spearman’s hypothesis.

Those are just the first three hits I got from googling "psychometric sampling error". Note, however, that the S factor is not a psychological construct, so calling it "psychometric" does not make sense. What subject-neutral term do you propose? It is sampling error involving the indicator variables of a latent variable/factor.

There is another thing that has been bugging me and creating confusion. I use "N" to refer both to the number of individuals in a sample and to the number of indicator variables (IV) or subtests. Really I ought to use two different symbols, perhaps just a subscript when talking about indicator variables.

"IV sampling error" is a pretty neutral term.

---

I have added a new revision, #9. The only changes are in the section concerning MCV, plus an extra paragraph in the abstract summarizing the MCV results. I have replaced "indicator variable" with "IV", and I have added "method of correlated vectors" to the list of keywords, as MH suggested.

(2014-Oct-09, 03:10:57)Emil Wrote: There is another thing that has been bugging me and creating confusion. I use "N" to refer to both the number of individuals in a sample, and N for the number of indicator variables (IV) or subtests. Really I ought to use two different terms. Perhaps just use subscript for when talking about indicator variables.

(2014-Oct-09, 08:11:38)Emil Wrote: It kinda is a waste of time to keep getting re-approval for small language changes like this. There really is no journal policy about it yet. Maybe we should make one.

Yes, it is a waste of our time. The policy should be: "Approval is not needed for minor alterations, especially language related ones". Given that policy, you have your approvals. You can publish.