$1000 (Jeopardy Round)
If the span of 3 vectors x, y, and z is a 2-dimensional subspace (a plane), then...
A. x, y, and z are linearly dependent
B. x, y, and z are linearly independent
C. x, y, and z are orthogonal
D. x, y, and z are all multiples of the same vector
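
To see this concretely: three vectors that span only a plane form a matrix of rank 2, which forces a linear dependence among them. A minimal numpy sketch (the vectors here are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
z = x + 2 * y                       # z lies in the plane spanned by x and y

A = np.column_stack([x, y, z])
print(np.linalg.matrix_rank(A))     # 2: the span is a plane, so the three
                                    # vectors are linearly dependent
```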

$1000 (Jeopardy Round)
In order for a matrix to have eigenvalues and eigenvectors, what must be true?
A. All matrices have eigenvalues and eigenvectors
B. The matrix must be square
C. The matrix must be orthogonal
D. The matrix must be a covariance matrix
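
The equation Ax = λx requires Ax and x to live in the same space, which only happens when A is square. A quick numpy illustration (the matrices are chosen arbitrarily):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # square: Ax = lambda*x is well-posed
vals, vecs = np.linalg.eig(A)
print(vals)                             # eigenvalues 3 and 1 (order may vary)

B = np.ones((2, 3))                     # non-square: Bx is in R^2 but x is in R^3
try:
    np.linalg.eig(B)                    # numpy refuses, as it must
except np.linalg.LinAlgError as err:
    print("no eigendecomposition:", err)
```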

$1000 (Jeopardy Round)
If I multiply a matrix A by its eigenvector x, what can I say about the result, Ax?
A. The result is a unit vector
B. The result is a scalar, which is called the eigenvalue
C. The result is a scalar multiple of x
D. The result is orthogonal
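
This is just the defining equation Ax = λx: the matrix stretches its eigenvector without changing its direction. A small numerical check (matrix chosen arbitrarily):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
x, lam = vecs[:, 0], vals[0]           # one eigenpair
print(A @ x)                           # same direction as x...
print(lam * x)                         # ...scaled by the eigenvalue
print(np.allclose(A @ x, lam * x))     # True: Ax is a scalar multiple of x
```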

$400 (Double Jeopardy Round)
The first principal component is...
A. A statistic that tells you how much multicollinearity is in your data
B. A scalar that tells you how much total variance is in the data
C. The first column in your data matrix
D. A vector that points in the direction of maximum variance in the data
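
A sketch of the "direction of maximum variance" idea on synthetic data (the shearing matrix is arbitrary): projecting onto the first principal component gives a larger variance than projecting onto any other unit direction.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0],
                                          [1.0, 0.5]])   # correlated cloud
Xc = X - X.mean(axis=0)                                  # center the data

vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))    # eigh: symmetric matrix
pc1 = vecs[:, np.argmax(vals)]                           # first principal component

rand = rng.normal(size=2)
rand /= np.linalg.norm(rand)                             # a random unit direction
print(np.var(Xc @ pc1, ddof=1))                          # largest possible variance
print(np.var(Xc @ rand, ddof=1))                         # smaller
```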

$800 (Double Jeopardy Round)
The loadings on a principal component tell you...
A. The variance of each variable on that principal component
B. How correlated each variable is with that principal component
C. Absolutely nothing
D. How much each observation weighs along that principal component
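
For standardized data, the loading of variable j on component k works out to exactly the correlation between that variable and the component's scores (the loading is the eigenvector entry scaled by the square root of the eigenvalue). A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=300)    # make two variables correlated
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize

vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(vals)[::-1]                    # sort descending
vals, vecs = vals[order], vecs[:, order]

scores = Z @ vecs                                 # component scores
loadings = vecs * np.sqrt(vals)                   # loadings (standardized data)
corrs = [np.corrcoef(Z[:, j], scores[:, 0])[0, 1] for j in range(3)]
print(np.allclose(loadings[:, 0], corrs))         # True
```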

$1200 (Double Jeopardy Round)
The principal component scores are...
A. Statistics which tell you the importance of each principal component
B. The coordinates of your data in the new basis of principal components
C. Statistics which tell you how each variable relates to each principal component
D. Relatively random
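
The eigenvectors form an orthonormal basis, and the scores are simply the data expressed in that basis; rotating the scores back recovers the centered data exactly. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 1.0],
                                          [0.0, 1.0]])
Xc = X - X.mean(axis=0)

_, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ vecs                        # coordinates in the eigenvector basis
print(np.allclose(scores @ vecs.T, Xc))   # True: a change of basis, nothing lost
```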

$1200 (Double Jeopardy Round)
The eigenvalues of the covariance matrix...
A. Are always orthogonal
B. Add up to 1
C. Tell you how much variance exists along each principal component
D. Tell you the proportion of variance explained by each principal component
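
Each eigenvalue equals the variance of the scores along its component; dividing by the total gives the proportion of variance explained (the raw eigenvalues do not add up to 1). A sketch on synthetic data with deliberately unequal spread:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3)) * np.array([3.0, 1.0, 0.2])  # unequal spreads
Xc = X - X.mean(axis=0)

vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ vecs
print(np.allclose(np.var(scores, axis=0, ddof=1), vals))    # variance per component
print(vals / vals.sum())                                    # proportions, summing to 1
```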

$1600 (Double Jeopardy Round)
The total amount of variance in a data set is...
A. The sum of all the entries in the covariance matrix
B. The sum of the eigenvalues of the covariance matrix
C. The sum of the variances of each variable
D. Both (B) and (C)
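
Both identities follow from the trace: the sum of the variances is the trace of the covariance matrix, and the trace equals the sum of the eigenvalues. A quick numerical check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 4)) * np.array([1.0, 2.0, 0.5, 3.0])
C = np.cov(X, rowvar=False)

print(np.isclose(np.linalg.eigvalsh(C).sum(), np.trace(C)))      # True
print(np.isclose(np.var(X, axis=0, ddof=1).sum(), np.trace(C)))  # True
```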

$1600 (Double Jeopardy Round)
PCA is a special case of the Singular Value Decomposition when your data is either centered or standardized.
A. True
B. False
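
Concretely: the right singular vectors of the centered data matrix are the principal components, and the squared singular values divided by n - 1 are the eigenvalues of the covariance matrix. A sketch (random data; eigenvector signs are arbitrary, so absolute values are compared):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 3)) @ rng.normal(size=(3, 3))
Xc = X - X.mean(axis=0)                         # centering is the key step

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(vals)[::-1]

print(np.allclose(s**2 / (len(Xc) - 1), vals[order]))      # True
print(np.allclose(np.abs(Vt.T), np.abs(vecs[:, order])))   # True (up to sign)
```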

$1600 (Double Jeopardy Round)
Principal Component Regression...
A. Can give you meaningful beta parameters for your original variables
B. Attempts to solve the problem of severe multicollinearity in predictor variables
C. Is a biased regression technique and should be used only as a last resort when you cannot omit correlated variables
D. All of the above
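
A bare-bones sketch of the technique on synthetic, severely collinear predictors (all names and the choice k = 1 are for illustration): regress on the leading component scores, then map the coefficients back to the original variables.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n)])  # nearly identical columns
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

Xc, yc = X - X.mean(axis=0), y - y.mean()
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1                                          # keep only the leading component
T = Xc @ Vt[:k].T                              # component scores
gamma, *_ = np.linalg.lstsq(T, yc, rcond=None)
beta = Vt[:k].T @ gamma                        # back in terms of original variables
print(beta)                                    # stable, but biased, estimates
```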

$1600 (Double Jeopardy Round)
Principal components with eigenvalues close to zero are correlated with the intercept in a linear regression model.
A. True
B. False
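
One way to see the idea (a made-up example, not a proof): a component whose eigenvalue is near zero has nearly constant scores, and a nearly constant column in a regression design is nearly collinear with the all-ones intercept column.

```python
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.normal(size=100)
x2 = 3.0 - x1 + 1e-4 * rng.normal(size=100)   # near-dependency involving a constant
X = np.column_stack([x1, x2])                 # deliberately NOT centered here

vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
scores = X @ vecs
print(vals[0])                                # eigenvalue near zero
print(scores[:5, 0])                          # scores nearly constant (about 3/sqrt(2)
                                              # up to sign), mimicking the intercept
```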

Final Jeopardy Category: PCA Rotations

Wager: $2000 / $3000 / $4000 / $5000

Final Jeopardy Question
What is the purpose or motivation behind the rotations of principal components in Factor Analysis?
A. The original principal components were not orthogonal, so we need to adjust them
B. The first principal component does not explain enough variance; by rotating, we can explain more variance
C. The loadings of the variables are difficult to interpret; by rotating, we get new factors which more clearly represent combinations of the original variables
D. The rotation helps spread the observations out so that we can more clearly see different groups or classes in the data
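
For reference, a minimal sketch of one standard orthogonal rotation, Kaiser's varimax (the implementation details and the toy loading matrix are illustrative): it rotates the loadings toward "simple structure," where each variable loads strongly on few factors, while keeping the factors orthogonal.

```python
import numpy as np

def varimax(L, n_iter=50, tol=1e-8):
    """Rotate a loadings matrix L (variables x factors) toward simple structure."""
    n, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        LR = L @ R
        U, s, Vt = np.linalg.svd(
            L.T @ (LR**3 - LR @ np.diag((LR**2).sum(axis=0)) / n))
        R = U @ Vt                   # best orthogonal rotation for this step
        d_new = s.sum()
        if d_new <= d * (1.0 + tol):
            break                    # criterion stopped improving
        d = d_new
    return L @ R

L = np.array([[0.7, 0.5],
              [0.6, 0.5],
              [0.4, -0.6]])
print(varimax(L).round(2))           # rotated loadings: cleaner variable/factor split
```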
