Abstract

A computer-implemented apparatus and method for determining the impacts of predetermined customer characteristics associated with measured physical attributes. A manifest variable database is utilized for storing manifest variable data that is indicative of the measured physical attributes. A partial least squares determinator is connected to the manifest variable database for determining statistical weights based upon the stored manifest variable data. A weights database which is connected to the partial least squares determinator is utilized for storing the determined weights. A latent variable determinator is connected to the weights database for determining scores for latent variables based upon the stored manifest variables and upon the stored weights. The latent variables are indicative of the predetermined customer characteristics. Additionally, a latent variable database is connected to the latent variable score determinator for storing the determined latent variable scores. An impact determinator is connected to the latent variable database for determining impact relationships among the latent variables based upon the stored latent variable scores.

The desire to improve quality has spread to nearly all manufacturing and service industries, business and social organizations and government. Today there is great interest in improving quality of products, services and the environment through systematic performance evaluation followed by business process improvement. Some (but admittedly few) industries are fortunate to have easily quantifiable metrics to measure the quality of their products and services. Using these metrics a continuous improvement process can be implemented, whereby (a) the product or service is produced using existing processes and assessed using quantifiable metrics, (b) the existing processes are then changed based on the results of the metrics, and (c) the efficacy of the change is tested by producing the product or service again using the changed process and assessing using the same metrics.

For most industries, however, finding a good, quantifiable metric has proven quite elusive. For most industries, business processes have become quite complex and quite hard to describe in quantitative terms. Human intuition and judgment play an important role in production of goods and services; and ultimately human satisfaction plays the decisive role in determining which goods and services sell well and which do not. In addition, there is a growing body of evidence suggesting that employee on-the-job satisfaction also has an enormous impact upon a company's bottom line.

Human intuition and judgment, customer satisfaction, and employee satisfaction are intangible variables that are not directly measurable and must therefore be inferred from data that are measurable. Therein lies the root of a major problem in applying continuous improvement techniques to achieve better quality. The data needed to improve quality are hidden, often deeply, within reams of data the organization generates for other purposes. Even surveys expressly designed to uncover these hidden data can frequently fail to produce meaningful results unless the data are well understood and closely monitored.

Experts in statistical analysis know to represent such intangible variables as “latent variables” that are derived from measurable variables, known as “manifest variables.” However, even experts in statistical analysis cannot say that manifest variable A will always measure latent variable B. The relationship is rarely that direct. More frequently, the relationship between manifest variable A and latent variable B involves a hypothesis, which must be carefully tested through significant statistical analysis before being relied upon.

The current state of the art is to analyze these hypotheses on a piecemeal basis, using statistical analysis packages such as SPSS or SAS. However, these packages are not designed or intended for the casual user. These packages lack a semi-automated (let alone an automated) mechanism for examining the “manifest” variables (i.e., measured survey data) within many different cuts or segments. Also, these packages have difficulty dealing with surveys where the data are incomplete or few responses have been gathered.

Moreover, these packages lack a cohesive semi-automated mechanism for determining the impact of these measured manifest variables upon “latent” variables (such as customer satisfaction) which cannot be directly measured due to their intangible nature.

The present invention is directed to overcoming these and other disadvantages of previous systems. In accordance with the teachings of the present invention, a computer-implemented apparatus and method is provided for determining the impacts of predetermined customer characteristics associated with measured physical attributes. A manifest variable database is utilized for storing manifest variable data that is indicative of the measured physical attributes. A partial least squares determinator is connected to the manifest variable database for determining statistical weights based upon the stored manifest variable data. A weights database which is connected to the partial least squares determinator is utilized for storing the determined weights.

A latent variable determinator is connected to the weighting database for determining scores for latent variables based upon the stored manifest variables and upon the stored weights. The latent variables are indicative of the predetermined customer characteristics. Additionally, a latent variable database is connected to the latent variable score determinator for storing the determined latent variable scores. An impact determinator is connected to the latent variable database for determining impact relationships among the latent variables based upon the stored latent variable scores.

For a more complete understanding of the invention, its objects and advantages, reference may be had to the following specification and to the accompanying drawings.

With reference to FIG. 1, there is shown the top-level software modules of the present invention. The software modules process customer profile case data 50 in order to determine their impacts upon predetermined customer characteristics (e.g., customer satisfaction). Typically, case data 50 contains the responses to customer survey forms that customers are asked to complete. The surveys may ask, for example, the customer to indicate on a scale from 1-10 their levels of satisfaction relative to certain attributes associated with a store. One such attribute might be whether the store had delivered its product to the customer in a timely fashion.

The present invention utilizes a novel modular software structure in order to ascertain the impacts between manifest variables and latent variables and the impacts between latent variables. Within the field of the present invention, manifest variables are variables whose values can be directly measured. For example, while customer satisfaction cannot be directly measured (and hence is viewed as a latent variable), variables, such as the delivery time of a product to a customer, can be directly measured since they are physical attributes or manifestations. These measured variables are viewed within the present invention as manifest variables.

The first software module of the present invention is a partial least squares (PLS) software module 58 that determines partial least squares-related statistical measures based upon case data 50. PLS software module 58 determines the optimal weights that should be used in analyzing the case data 50. In the preferred embodiment, the PLS data output is stored in a relational database, such as Microsoft Access. Within this embodiment, the data output is stored in the data_M table 66.

A latent variable score software module 70 uses the PLS data output, case data 50, and model specifications 138 to produce latent variable-related data outputs 74. One of the primary outputs of the LV score software module 70 is the latent variable score, a standard score, typically from 0 to 100, indicating the perceived level of performance. The LV score data output 74 is preferably stored in data_L table 78 which is accessible to the next software module of the present invention, the Impact software module 82.

PLS software module 58, LV score software module 70 and impact software module 82 provide for a modular design wherein the data results can be viewed, at the user's request, upon execution of each software module. If the user determines that the results of a particular module are not within a predefined threshold, then the user uses the novel structure of the present invention to rerun the software module with different data and then use that data in the subsequent software modules.

A user employs system processor 89 to establish what data from the customer profile case data 50 is to be used for a particular run. The identifying data for a particular run (or cut) is stored in cut table 91. Since software modules 58, 70, and 82 can use different cut data, the particular cut information for each software module is stored in run table 93.

Moreover, the present invention provides diagnostic rules associated with each software module in order to assist the user to detect out-of-tolerance conditions by generating diagnostic outputs 128, 144, and 152 for each software module.

The “sub” method as depicted by reference numeral 192 allows the PLS software module 58 to substitute data from other data sets in order to perform the PLS weight substitution determination function. The “calc” method as depicted at reference numeral 196 indicates that calculations involving substituted data should commence. The “calc” and “sub” methods are used together in the following manner. When the PLS software module 58 determines that superset data needs to be used for determination of PLS weights, then the “sub” method is utilized for performing the substitution. After the substitution, the PLS software module 58 utilizes the “calc” method to then perform calculations based upon the substituted data set. Methods (such as calc method 196) produce the data within the box to which the method's arrow points.

The PLS software module 58 determines that a substitution is to occur by examining whether the number of samples within a cut of case data is below a predetermined threshold which, in the preferred embodiment, is one hundred samples. If data substitution is to occur, then the PLS software module 58 invokes the sub method 206 which indicates to the substitution data locator 210 that superset data is to be copied from a database. In the preferred embodiment, the substitution data locator software module 210 utilizes a copy method 214 in order to retrieve superset data from a remote data_M database 218. The term “remote” refers to storing superset data in an “offline” database separate from the main database of data_M table 66 and data_L table 78 (not shown). The historical superset data is stored in remote data_M database 218 in order to keep the size of the main databases from becoming too large since the present invention is continually adding historical data to its knowledge base.
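The substitution decision described above can be sketched in a few lines. This is a minimal sketch, assuming the one-hundred-sample threshold of the preferred embodiment; the function name `select_weight_source` and the data shapes are hypothetical illustrations, not part of the disclosed system.

```python
# Illustrative sketch of the PLS weight-substitution decision.
# MIN_SAMPLES follows the preferred embodiment's threshold of 100.
MIN_SAMPLES = 100

def select_weight_source(cut_samples, superset_samples):
    """Return the sample set to use for determining PLS weights.

    If the cut holds fewer than MIN_SAMPLES cases, fall back to the
    superset data (the "sub" method); otherwise use the cut itself
    (the "use" method).
    """
    if len(cut_samples) < MIN_SAMPLES:
        return superset_samples, "sub"   # substitute superset data
    return cut_samples, "use"            # the cut is large enough

small_cut = [0.5] * 20    # e.g., only twenty survey responses in a cut
superset = [0.5] * 450    # historical superset data for similar cases
data, method = select_weight_source(small_cut, superset)
```

In this hypothetical run the twenty-response cut falls below the threshold, so the superset data would be used for the weight determination.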

Score refers to a standard score, typically from 0 to 100, indicating the mean responses from the questionnaire. Standard deviation refers to the standard statistical measure of dispersion. Valid N refers to a standard statistical term for the number of usable cases. Valid N unweighted refers to a standard statistical term for the number of usable cases before weighting. R mean refers to a standard statistical term for the amount of variance accounted for by the analysis. R standard deviation refers to a standard statistical term for the standard deviation of the raw mean. The data types standardized and unstandardized weights refer to the contribution that the manifest variable makes to the associated latent variables. The manifest variable data types standardized loading and unstandardized loading refer to the correlation between manifest and latent variable scores.

If the PLS software module 58 determines that data substitution is not needed, then block 188 is executed by the “use” method 184. The data generated by block 188 is conceptually similar to the data generated by block 222 except that the weights-related statistical measures and the loading-related statistical measures are not generated since data substitution was not performed. The output of block 188 is stored in the data_M table 66.

Irrespective of whether data substitution is performed, diagnostics are generated by diagnostics block 128. PLS diagnostics rules 128 generate diagnostic information at various levels that are unique to the PLS software module 58. The diagnostic levels vary from Level 0 (which terminates the execution of the PLS software module 58 only for the most severe errors) up to Level 2 (wherein processing of the PLS software module 58 terminates upon encountering lower-threshold types of errors). Diagnostic level 1 is preferably set as the default.
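The diagnostic-level behavior can be summarized with a small sketch. The numeric severity encoding (0 = most severe) and the function `should_terminate` are assumptions made for illustration; the patent does not specify how severities are encoded.

```python
# Illustrative sketch of the diagnostic levels described above.
# Assumption: each error carries a severity code, 0 = most severe,
# 2 = least severe. Level 0 stops only on the most severe errors;
# Level 2 stops even on lower-threshold errors.
DEFAULT_DIAGNOSTIC_LEVEL = 1  # the preferred-embodiment default

def should_terminate(error_severity, diagnostic_level=DEFAULT_DIAGNOSTIC_LEVEL):
    """Return True if the module should stop on this error."""
    return error_severity <= diagnostic_level
```

Under this encoding, a Level 0 run terminates only on severity-0 errors, while a Level 2 run terminates on any of the three severities.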

The diagnostics allow the present invention to obtain the maximum amount of information from a run (even if the run fails in some manner) since the diagnostics indicate in detail how far execution had proceeded before the failure was encountered.

For the PLS software module 58, the following diagnostics are utilized: reliability, LV block eigenvalues, and overall fit measures. The reliability diagnostic measures indicate Cronbach's alpha, a standard statistical measure. The latent variable block eigenvalues indicate a standard statistical measure to show the magnitude of manifest loadings on their associated latent variable. The overall fit measures indicate the degree of fit between the actual covariance data and the estimated covariance data.
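Of these diagnostics, Cronbach's alpha has a compact standard formula. The sketch below is the textbook computation of alpha, not code from the patented system; the function name and data layout are illustrative.

```python
# Cronbach's alpha: a standard reliability measure computed from the
# item variances and the variance of the summed scores.
def cronbach_alpha(items):
    """items: list of equally long response lists, one list per survey item."""
    k = len(items)              # number of items
    n = len(items[0])           # number of respondents

    def var(xs):                # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / var(totals))
```

Perfectly consistent items yield an alpha of 1.0, and alpha drops toward zero as the items become uncorrelated.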

FIG. 2b shows the processing steps for the PLS software module and a data breakdown of the case data 50. Case data 50 contains manifest variable data related to one or more cases (i.e., surveys). Case data 50 has a time stamp 100 in order to examine, among other things, trends involving the customer profile case data 50 over a period of time. Customer profile case data 50 as mentioned above contains manifest variable data, such as first manifest variable 108 and second manifest variable 112.

The PLS software module executes data cutting function 114 in order to retrieve particular data records from case data 50 and from model specification data 138 based upon predetermined criteria. For example, the predetermined criteria may include retrieving the case data associated with all males that had purchased a particular automobile. Based upon those criteria, the case data and the models associated with the case data are retrieved.

The PLS software module performs the PLS weight substitution determination function 116 in order to determine whether a particular case within the customer profile case data 50 needs to use weights from historical weights model data 104. Typically, weight substitution is effected when a case contains fewer data points than a predetermined threshold. The records of the historical weights model data 104 that are similar to the present case data are used to provide PLS weights.

For example, case data that involves a questionnaire that yielded responses from only twenty customers for a particular dealership proves to have too few data points to supply its own PLS weights. Accordingly, historical weights model data 104 utilizes responses previously gathered from that dealership or similar types of dealerships in order to form the PLS weights for that particular case.

The PLS software module 58 also performs a case data normalization function 124. Case data normalization 124 places the manifest variable data on a common scale, for example, on a scale from “0” to “100”. This function is provided since responses to different questions may have been scored differently. For example, one question may ask for a response on the scale from 1-5 and another question may ask for a response on the scale from 1-20. Normalization ensures that the responses as contained within the manifest variable data are processed on equal footing with each other.
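The normalization step described above can be sketched as a simple linear rescaling onto the common 0-100 scale. The function name `normalize_response` is an illustrative assumption, not the name used in the disclosed system.

```python
# Minimal sketch of case data normalization: map a raw response given
# on an arbitrary scale (e.g. 1-5 or 1-20) onto the common 0-100 scale
# so that differently scored questions are processed on equal footing.
def normalize_response(value, scale_min, scale_max):
    """Map a raw survey response onto the common 0-100 scale."""
    return 100.0 * (value - scale_min) / (scale_max - scale_min)

normalize_response(3, 1, 5)    # midpoint of a 1-5 scale -> 50.0
normalize_response(20, 1, 20)  # top of a 1-20 scale -> 100.0
```

A response of 3 on a 1-5 scale and a response of 10.5 on a 1-20 scale thus both land at the same midpoint of the common scale.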

A PLS statistical calculation function 125 determines PLS statistics based upon the normalized case data and weights. If function 125 used substituted weights in its calculations, then data output 222 is generated. If function 125 did not use substituted weights in its calculations, then data output 188 is generated. The PLS statistical calculation function is described in the following reference: H. Wold (1982) Soft Modeling: The Basic Design and Some Extensions; in K. G. Jöreskog and H. Wold (Eds.), Systems Under Indirect Observation: Causality, Structure and Prediction, Vol. 2, Amsterdam: North-Holland, pp. 1-54.

After the statistical measures are generated, the user can view the diagnostic messages that were obtained via the PLS diagnostic function 127. PLS diagnostic function 127 uses first diagnostic rules 128 to determine if any errors or warnings had occurred. Moreover, the user can view the data results of the PLS software module to determine if the data results are within predetermined tolerances. Examples of diagnostic and error reports generated by the PLS software module are depicted in FIGS. 11a-12b and discussed below.

A user may wish to rerun the PLS software module 58 with different model specifications in order to assess whether better results from the PLS software module 58 can be obtained, or the user may proceed to execute the LV score software module at function block 132. If the user wishes to continue execution, the LV score software module utilizes the data generated from the PLS software module in order to produce latent variable scores. It should be noted, however, that the present invention is not limited to a user viewing the results after execution of each software module. The present invention includes full automation of the impact analysis process by allowing complete execution of the PLS software module 58, the LV score software module 70, and the impact software module 82 before returning control back to the user.

FIG. 3 depicts the software implementation of the preferred embodiment for the LV score software module 70. The LV score software module 70 uses the weights as determined by the PLS software module 58 in order to produce such data as LV scores. The LV score calculations are generally described in the following reference: H. Wold (1975) Soft Modeling by Latent Variables: The Non-Linear Iterative Partial Least Squares Approach; in J. Gani (Ed.), Perspectives in Probability and Statistics: Papers in Honor of M. S. Bartlett, London: Academic Press.

The LV score software module 70 obtains the weights from the data_M table 66. Since typically the weights as determined by the PLS software module 58 are still resident in memory 230 of the computer, the weights are obtained from computer memory 230. Moreover, LV score software module 70 utilizes the data cutter 180 to obtain the appropriate data from case data 50 and model specification tables 138. LV case data 234 is used to derive the mean measure of the appropriate latent variables, and to derive the appropriate associated impact.

The LV score software module 70 invokes the vector of normalized weight software module 238 when the case is equal to “yes” and the method “use” is present. The vector of normalized weights software module 238 is used to compute the case weighted latent variable means such that the sum of those weights equals the number of observations in that segment. The normalized weights are stored in the LVWGT_01 database 242. After the weights are normalized, the latent variable scores are generated by block 246 and are stored in the data_L table 78.
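The two steps above can be sketched as follows: first rescale the weights so that their sum equals the number of observations in the segment, as the text specifies, and then form a latent variable score as a weighted mean of the (already normalized, 0-100) manifest variables. The function shapes are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of weight normalization and LV scoring for one segment.
def normalize_weights(weights, n_observations):
    """Rescale weights so that their sum equals n_observations."""
    total = sum(weights)
    return [w * n_observations / total for w in weights]

def lv_score(manifest_values, weights):
    """Weighted mean of 0-100 manifest variable values for one case;
    the result stays on the 0-100 scale."""
    return sum(v * w for v, w in zip(manifest_values, weights)) / sum(weights)
```

Because the score is a weighted mean of 0-100 inputs, it inherits the same 0-100 range described for the standard score.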

In addition, block 256 generates the following: standard deviation, valid N, and valid N unweighted. Standard deviation as determined by block 246 indicates the standard statistical term for the dispersion of the latent variable. The data type valid N refers to the number of usable cases in the latent variable scores. The data type valid N unweighted refers to the number of usable cases in latent variables prior to weighting.

Additional process blocks are executed if certain predetermined criteria are present. The LV case scores software module 250 is executed when the criterion “case” is set to “yes” and the calc method has been invoked. The LV case scores software module 250 generates case-level data containing the LV scores for each respondent within each segment, which is stored in the LV01 table 254.

Block 258 is executed when “score X” is “yes”. The conditional term “score X” refers to a user option that computes and stores intermediate results for subsequent impact pooling if desired. Block 258 generates score correlation and score counts data which are stored in data_LX table 262. Score correlation data refers to the correlations in a correlation matrix showing the relationships between LVs. Score count refers to the number of pairwise valid Ns in a matrix showing the number of observations used in each intercorrelation generated by the module.

The LV case scores that were generated by block 250 and stored in the LV01 database 254 are in the preferred embodiment converted to SPSS files 266 by converter 272. This conversion allows analysts to perform additional statistical analysis within the powerful SPSS statistical software package which is provided by the SPSS company.

Additionally, LV score software module 70 can produce LV diagnostics in block 144 which include the statistics shown in the diagnostic report depicted in FIG. 13a.

Impact software module 82 uses data cutter 180 to acquire the appropriate case data 50 and model specification data 138 for the present run. Moreover, substitution data locator software module 210 is used in order to provide the impact software module 82 with data from previous runs. Data substitution with respect to the impact software module 82 is typically performed when the number of valid observations is too low to ensure consistent reliability. The data are retrieved by the substitution data locator software module 210 from a remote data_L database 276. The data retrieved from the remote data_L database 276 includes direct impacts and total impacts. The impact calculations are typically regression coefficients multiplied by five.
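The impact calculation described above (regression coefficients multiplied by five) can be illustrated with a single-predictor least-squares fit. The function names are hypothetical, and the actual system presumably regresses the dependent variable on several latent variables at once; this sketch shows only the scaling idea.

```python
# Illustrative sketch: fit a least-squares slope of satisfaction on one
# latent variable's scores, then scale by five per the text.
def simple_regression_slope(x, y):
    """Ordinary least-squares slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

def impact(lv_scores, satisfaction_scores):
    """Impact of one latent variable on satisfaction, scaled by five."""
    return 5.0 * simple_regression_slope(lv_scores, satisfaction_scores)
```

With this scaling, a regression coefficient of 2 would be reported as an impact of 10.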

Additionally, impact software module 82 utilizes impact diagnostic rules 152 in order to determine how well the models had performed. The impact diagnostic rules 152 include the statistics shown in the diagnostic report depicted in FIG. 14a.

FIGS. 5a-5b depict the database schema utilized in the preferred embodiment of the present invention. FIG. 5a depicts those tables which are specially used to construct the model specification data 138. The phase table 400 provides project management information such as the project title (e.g., of a customer satisfaction survey project), the name, start and end dates of the project, and other such project management-related information. A project phase typically has multiple models and therefore phase table 400 has a one-to-many relationship with the model table 404. Model table 404 contains such model-related data as the model's name, a unique model identifier, and dates and times when the model was run and for what particular data cuts the model was used.

A model can have multiple versions and therefore model table 404 has a one-to-many relationship with the model version table 408. The model version table 408 contains such version-related model data as what options were involved with a particular model version. Each version of a model is formally defined in the model definition table 412.

Model definition table 412 depicts what manifest variables and latent variables are included for a particular model. The latent variables are defined in greater detail in the latent variable table 416. Latent variable table 416 includes the variable name as well as minimum and maximum values of the latent variable. Similarly, the manifest variables are defined in the manifest variable table 420.

FIG. 5b depicts the tables which are specially used to store the results generated by the present invention. Data results that are related to manifest variables are stored in the data_M table 66. For example, the PLS software module 58 (not shown) stores its data results in this table. The data_M table 66 provides the “links” to the other tables that are involved with manifest variable data. The manifest variable table 420 is linked to the data_M table 66 by the field “Cvarname”.

The fields which ensure the uniqueness for a particular record in the data_M table 66 are boldfaced and generally depicted by reference numeral 424. The primary keys for the other tables on FIGS. 5a and 5b are similarly depicted.

The results related to the latent variables are stored in the data_L table 78. Within the data_L table 78 are listed the primary keys as well as other fields, such as “N value” which holds the value for a particular latent variable. The links to the other tables via the fields in data_L table 78 and data_M table 66 are provided next to the fields enclosed in brackets. For example, the data_L table 78 is linked to the cutdef table 428 by the field “Kcutdef”.

Cutdef table 428 includes statistical information on various cuts of data that are performed upon the case data and model data. The cutrun table 432 stores dynamic run option data that reflect decisions the user has made about the model version.

FIGS. 6-14b illustrate various data results of the present invention. FIG. 6 depicts a computer screen 450 wherein various run parameters are specified. Within computer screen 450, the client name, project, and other project management-related data are specified within the region generally depicted by reference numeral 454. Run parameters are specified within the region depicted by reference numeral 458. Such parameters include the minimum number of case data samples below which weight substitution and copied impacts are utilized.

FIG. 7 depicts exemplary manifest variables and latent variables. Row 490 is the unique cut identifier that is being used in this example. The unique cut identifier value contains three individual pieces of information. The “001” value in the cut identifier identifies the category. The “01” value in the cut identifier indicates the designated group. The value “002” in the cut identifier indicates the designated cut. The unique cut identifier can contain temporal information, and the identifier can be used as part of a hierarchical tree structure to isolate subgroups as well as larger embedding entities.
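The three-part cut identifier described above can be parsed with a short sketch. The hyphen separator and the field widths are assumptions based on the “001” / “01” / “002” example in the text; the actual delimiter and layout may differ.

```python
# Illustrative parsing of a three-part cut identifier into its
# category, group, and cut components (separator is an assumption).
def parse_cut_identifier(cut_id):
    """Split a cut identifier such as '001-01-002' into its parts."""
    category, group, cut = cut_id.split("-")
    return {"category": category, "group": group, "cut": cut}

parse_cut_identifier("001-01-002")
```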

Column 494 identifies the latent variables and the manifest variables that are involved for this particular cut identifier. Column 498 identifies descriptive labels associated with each of the variables in column 494. The latent variables are identified on FIG. 7 by an identifier of “LV”. The manifest variables are designated by the identifier “MV”. For example, the latent variable “BROCH1” has associated with it the following manifest variables: BROCHUR1, BROCHUR2, BROCHUR3, BROCHUR4, and BROCHUR5. The latent variable “BROCH1” is related to the general category of information brochures, while the manifest variable “BROCHUR1” relates to the measurable excitement generated by the brochures. The measured manifest variable “BROCHUR1” is used by the present invention in order to assess impact upon the latent variable “BROCH1”.

FIGS. 8a and 8b depict the scores and other statistical measures associated with the manifest and latent variables. The statistical measures as determined by the PLS software module are displayed, for example, in region 500 for the brochure-related manifest variables. The LV score software module generated data for the latent variable “BROCH1” is depicted by reference numeral 504.

The manifest and latent variable data results depicted in regions 500 and 504 show the output created by block 58 and block 70 for our example, using brochure-related data.

FIGS. 9a-9b depict impact data results associated with latent variables such as “Broch1”. The direct impact data results 508a and total impact data results 512a are provided on FIG. 9a. FIG. 9b includes a second set of impact results, namely direct impact data results 508b and total impact data results 512b, which involve case data that has a smaller number of case samples.

Referring to FIG. 9a, direct impact data results 508a include R-square data results 516, adjusted R-square data results 520, and intercept data results 524. Each of these data results are placed under a satisfaction column as well as a recommendation column. The satisfaction column shows the computed impacts linking latent variables to an optimally weighted score for satisfaction. The recommendation column shows the computed impacts linking latent variables to standardized, unweighted scores for reported intention to recommend the product (i.e., a single data point in the case level data).

The difference between direct impact 508a and direct impact 508b resides in the fact that the direct impact 508b examined fewer case data samples than in the direct impact 508a run.

FIG. 10 provides a run status report wherein for a particular project the status for each particular cut is provided. For example, FIG. 10 indicates that the PLS software module, the LV score software module and the Impact software module were performed for a total of 480 case data samples 469 which represented valid usable data points for cut “000-00-001”.

FIGS. 11a-11b depict a diagnostics report that is generated by the PLS software module. The following statistical measures are provided in the PLS diagnostics report: valid cases, minimum, maximum, raw mean, raw standard deviation, percentage mean, percentage standard deviation, standardized weight, standardized loading, unstandardized weight, unstandardized loading, and communality (a standard statistical term that measures the amount of shared variance). These diagnostic statistical measures are provided for each of the manifest variables for a particular run.

The LV block eigenvalues signify a standard statistical measure to show the magnitude of manifest loadings on their associated latent variables. The overall fit measures 548 include the following statistical measures: degrees of freedom, average communalities, RMS data, RMS multivariate covariance, RMS multivariate covariance explained, RMS latent variable correlation, RMS psi (the covariance matrix of the prediction residuals), average R-squared, RMS latent variable correlation explained, and RMS fit. For a general discussion of each of these statistical measures, see Jöreskog, K. G. and Sörbom, D. (1981) LISREL V: User's Guide, Chicago: National Educational Resources.

FIGS. 12a and 12b depict errors that the PLS software module had detected. Depending upon the severity of the error detected and the diagnostic level value as set by the user, the PLS module may terminate execution.

FIG. 12a depicts an error wherein “weights had a mixed sign”. The mixed sign error detection exemplifies the advantages that the present invention offers over previous methods. Although it is not mathematically incorrect to have a model with negative mixed signs, a model with negative mixed signs typically does not reflect the “real world.” For example, a model having negative mixed signs would indicate that customer satisfaction decreases as a particular manifest variable increases (such as percentage of on-time deliveries). Such an indication is unrealistic, and the present invention specifically detects this situation and informs the user of it.

FIG. 12b depicts the error wherein loadings were marked “bad” in the model and consequently the weight matrix for the model could not be created. Such a situation would arise when the PLS weights are missing for any number of reasons, such as lack of variance in a variable.

FIGS. 13a-13c depict diagnostic results as generated by the LV score software module. The statistical measures that are examined within the diagnostics generated by the LV score software module include: the number of manifest variables in the latent variable (“MV#”), the number of case samples, minimum, maximum, mean, and standard deviation. Moreover, the diagnostics generated by the LV score software module also include the following collinearity statistical measures: eigenvalue, the condition index (a standard statistical term) (“cond_num”), constant, and the percentage of decomposition variance shared. FIGS. 13b and 13c are continuations of the report.
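
The condition index referenced above is conventionally the square root of the ratio of the largest to the smallest eigenvalue; a minimal sketch over a block's correlation matrix, with the hypothetical name `condition_index`:

```python
import numpy as np

def condition_index(mv_block):
    """Collinearity diagnostic: sqrt of the ratio of the largest to the
    smallest eigenvalue of the block's correlation matrix. Large values
    signal near-linear dependence among the manifest variables."""
    corr = np.corrcoef(np.asarray(mv_block, dtype=float), rowvar=False)
    ev = np.linalg.eigvalsh(corr)
    return float(np.sqrt(ev.max() / ev.min()))
```

For uncorrelated manifest variables the correlation matrix is the identity and the condition index is 1; it grows as collinearity worsens.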

FIGS. 14a-14b depict exemplary diagnostic data results generated by the impact software module. In this example, FIGS. 14a and 14b depict three types of diagnostics generated by the impact software module: latent variable correlation statistical measures, path coefficients, and satisfaction/recommendation statistical measures. The latent variable correlation statistical measures signify the intercorrelations between latent variables. The path coefficient statistical measures signify the magnitude of standardized path coefficients from the latent variables to the dependent variable of interest, which in this case is satisfaction. The satisfaction/recommendation statistical measures signify the intercorrelation between the two dependent variables, which in this example are satisfaction and reported intention to recommend the product or service being studied.
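
Standardized path coefficients of this kind can be sketched as an ordinary least squares fit on z-scored data. This is a simplified stand-in for the impact module, not the patented implementation, and the name `standardized_paths` is hypothetical:

```python
import numpy as np

def standardized_paths(lv_scores, dependent):
    """Standardized path coefficients from latent variable scores
    (rows = cases, columns = latent variables) to a dependent
    variable such as satisfaction, via least squares on z-scores."""
    X = np.asarray(lv_scores, dtype=float)
    y = np.asarray(dependent, dtype=float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # z-score each LV
    yz = (y - y.mean()) / y.std(ddof=1)                # z-score target
    coefs, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return coefs
```

The latent variable intercorrelations and the satisfaction/recommendation intercorrelation reported alongside these coefficients would likewise follow from the correlation matrix of the score columns.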

While the invention has been described in its presently preferred form, it will be understood that the invention is capable of modification without departing from the spirit of the invention as set forth in the appended claims.

Claims (18)

1. A computer-implemented apparatus for determining impacts of predetermined customer characteristics associated with measured physical attributes, comprising:

a manifest variable database for storing manifest variable data that is indicative of the measured physical attributes;

a partial least squares determinator connected to said manifest variable database for determining statistical weights based upon said stored manifest variable data;

a weights database connected to said partial least squares determinator for storing said determined weights;

a latent variable determinator connected to said weights database for determining scores for latent variables based upon said stored manifest variable data and upon said stored weights, said latent variables being indicative of the predetermined customer characteristics;

a latent variable database connected to said latent variable determinator for storing said determined latent variable scores; and

an impact determinator connected to said latent variable database for determining impact relationships among said latent variables based upon said stored latent variable scores.

2. The apparatus of claim 1 wherein said manifest variable data includes performance metric data, said impact determinator determining impact relationships of said latent variables upon said performance metric data based upon said latent variable scores and upon said performance metric data.

a weight substitution determinator connected to said partial least squares determinator for selecting said superset weights based upon a predetermined criterion.

4. The apparatus of claim 3 wherein said manifest variable data are associated with a plurality of cases associated with said measured physical attributes, and wherein said predetermined criterion includes the number of cases associated with said manifest variable data.

5. The apparatus of claim 4 further comprising:

a weight averager for determining the average of said selected weights, said latent variable determinator determining said latent variable scores based upon said manifest variable data and upon the average of said selected weights.

6. The apparatus of claim 5 further comprising:

a manifest variable normalizer for normalizing said manifest variable data to a predetermined scale, said latent variable determinator determining said latent variable scores based upon said normalized manifest variable data and upon the average of said normalized weights.

7. The apparatus of claim 1 further comprising:

first diagnostic rules connected to said partial least squares determinator for determining performance of said partial least squares determinator based upon predetermined criteria.

8. The apparatus of claim 7 wherein said partial least squares determinator selects weights based upon said first diagnostic rules.

9. The apparatus of claim 1 further comprising:

second diagnostic rules connected to said latent variable determinator for determining performance of said latent variable determinator based upon predetermined criteria.

10. The apparatus of claim 9 wherein said latent variable determinator interprets latent variable scores based upon said second diagnostic rules.

11. The apparatus of claim 1 further comprising:

third diagnostic rules connected to said impact determinator for determining performance of said impact determinator based upon predetermined criteria.

12. The apparatus of claim 11 wherein said impact determinator interprets said impact relationships based upon said third diagnostic rules.

14. The apparatus of claim 13 wherein said regression analysis includes analysis being selected from the group consisting of phantom regression analysis, constrained regression analysis, multi-variate regression analysis, and combinations thereof.

16. The apparatus of claim 1 wherein said manifest variable data is associated with time data, said impact determinator determining the impact of the measured physical attributes in a temporal domain using said manifest variable data and said associated time data.

17. The apparatus of claim 1 further comprising:

a models database for storing a plurality of models for establishing interrelationships among said manifest variable data, said latent variable determinator determining scores for said latent variables using said stored models.

18. The apparatus of claim 17 further comprising:

diagnostic rules for comparing one of said plurality of models with another model in said plurality of models to interpret the degree of fit between said models.