
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL 2006

Compressed Sensing

David L. Donoho, Member, IEEE

Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ^p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ² error O(N^(1/2 - 1/p)). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program: Basis Pursuit in signal processing. The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and of information-based complexity. We estimate the Gel'fand n-widths of ℓ^p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
Index Terms: Adaptive sampling, almost-spherical sections of Banach spaces, Basis Pursuit, eigenvalues of random matrices, Gel'fand n-widths, information-based complexity, integrated sensing and processing, minimum ℓ¹-norm decomposition, optimal recovery, Quotient-of-a-Subspace theorem, sparse solution of linear equations.

I. INTRODUCTION

AS our modern technology-driven civilization acquires and exploits ever-increasing amounts of data, everyone now knows that most of the data we acquire can be thrown away with almost no perceptual loss: witness the broad success of lossy compression formats for sounds, images, and specialized technical data. The phenomenon of ubiquitous compressibility raises very natural questions: why go to so much effort to acquire all the data when most of what we get will be thrown away? Can we not just directly measure the part that will not end up being thrown away?

In this paper, we design compressed data acquisition protocols which perform as if it were possible to directly acquire just the important information about the signals/images, in effect not acquiring that part of the data that would eventually just be thrown away by lossy compression. Moreover, the protocols are nonadaptive and parallelizable; they do not require knowledge of the signal/image to be acquired in advance, other than knowledge that the data will be compressible, and do not attempt any understanding of the underlying object to guide an active or adaptive sensing strategy. The measurements made in the compressed sensing protocol are holographic, thus not simple pixel samples, and must be processed nonlinearly.

Manuscript received September 18, 2004; revised December 15. The author is with the Department of Statistics, Stanford University, Stanford, CA USA. Communicated by A. Høst-Madsen, Associate Editor for Detection and Estimation. Digital Object Identifier /TIT
In specific applications, this principle might enable dramatically reduced measurement time, dramatically reduced sampling rates, or reduced use of analog-to-digital converter resources.

A. Transform Compression Background

Our treatment is abstract and general, but depends on one specific assumption which is known to hold in many settings of signal and image processing: the principle of transform sparsity. We suppose that the object of interest is a vector x, which can be a signal or image with m samples or pixels, and that there is an orthonormal basis (ψ_i : i = 1, ..., m) for x which can be, for example, an orthonormal wavelet basis, a Fourier basis, or a local Fourier basis, depending on the application. (As explained later, the extension to tight frames such as curvelet or Gabor frames comes for free.) The object has transform coefficients θ_i = ⟨x, ψ_i⟩, and these are assumed sparse in the sense that, for some 0 < p < 2 and for some R > 0:

‖θ‖_p ≡ (Σ_i |θ_i|^p)^(1/p) ≤ R.   (I.1)

Such constraints are actually obeyed on natural classes of signals and images; this is the primary reason for the success of standard compression tools based on transform coding [1]. To fix ideas, we mention two simple examples of an ℓ^p constraint.

Bounded Variation model for images. Here image brightness is viewed as an underlying function f(x, y) on the unit square 0 ≤ x, y ≤ 1, which obeys (essentially) a total-variation bound ∫∫ |∇f| dx dy ≤ R. The digital data of interest consist of m = n² pixel samples of f produced by averaging over 1/n by 1/n pixels. We take a wavelet point of view; the data are seen as a superposition of contributions from various scales. Let x^(j) denote the component of the data at scale j, and let (ψ_i^(j))_i denote the orthonormal basis of wavelets at scale j, containing 3·4^j elements. The corresponding coefficients θ^(j) obey ‖θ^(j)‖₁ ≤ C·R, with C an absolute constant.
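To make the sparsity constraint (I.1) concrete before the second example, here is a small numerical illustration (synthetic power-law coefficients chosen here for illustration, not data from the model above): for a vector lying in an ℓ^p ball with p = 1/2, the ℓ² error of keeping only the N largest coefficients decays like N^(1/2 - 1/p) = N^(-3/2), as quantified later in (I.2).

```python
import numpy as np

p, m = 0.5, 10000
# Coefficients with power-law decay i**(-1/p); their l^p norm is finite,
# so theta lies in an l^p ball for p = 0.5.
theta = np.arange(1, m + 1, dtype=float) ** (-1.0 / p)

def best_N_error(theta, N):
    """l2 error after keeping only the N largest-magnitude coefficients."""
    idx = np.argsort(-np.abs(theta))
    return np.linalg.norm(theta[idx[N:]])

e10, e20 = best_N_error(theta, 10), best_N_error(theta, 20)
ratio = e10 / e20   # should be roughly 2**(1/p - 1/2) = 2**1.5, i.e. near 2.8
```

Doubling N halves-and-then-some the error at the predicted rate, which is the quantitative sense in which small p means high compressibility.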

Bump Algebra model for spectra. Here a spectrum (e.g., mass spectrum or magnetic resonance spectrum) is modeled as digital samples of an underlying function f on the real line which is a superposition of so-called spectral lines of varying positions, amplitudes, and linewidths. Formally,

f(t) = Σ_i a_i · g((t - t_i)/s_i).

Here the parameters t_i are line locations, a_i are amplitudes/polarities, and s_i are linewidths; g represents a lineshape, for example the Gaussian g(t) = exp(-t²/2), although other profiles could be considered. We assume the constraint Σ_i |a_i| ≤ R, which in applications represents an energy or total mass constraint. Again we take a wavelet viewpoint, this time specifically using smooth wavelets. The data can be represented as a superposition of contributions from various scales. Let x^(j) denote the component of the spectrum at scale j, and let (ψ_i^(j))_i denote the orthonormal basis of wavelets at scale j, containing O(2^j) elements. The corresponding coefficients θ^(j) again obey an ℓ¹ bound of order R [2].

While in these two examples the ℓ¹ constraint appeared, other ℓ^p constraints with p < 1 can appear naturally as well; see below. For some readers, the use of ℓ^p norms with p < 1 may seem initially strange; it is now well understood that the ℓ^p norms with such small p are natural mathematical measures of sparsity [3], [4]. As p decreases below 1, more and more sparsity is being required. Also, from this viewpoint, an ℓ² constraint based on p = 2 requires no sparsity at all.

Note that in each of these examples, we also allowed for separating the object of interest into subbands, each one of which obeys an ℓ^p constraint.

In practice, in the following we stick with the view that the object of interest is a coefficient vector θ obeying the constraint (I.1), which may mean, from an application viewpoint, that our methods correspond to treating various subbands separately, as in these examples.

The key implication of the ℓ^p constraint is sparsity of the transform coefficients. Indeed, we have trivially that, if θ_N denotes the vector θ with everything except the N largest coefficients set to 0, then

‖θ - θ_N‖₂ ≤ ζ_{2,p} · ‖θ‖_p · (N + 1)^(1/2 - 1/p),  N = 0, 1, 2, ...,   (I.2)

for 0 < p < 2, with a constant ζ_{2,p} depending only on p. Thus, for example, to approximate θ with ℓ² error ε, we need to keep only the N ≍ (R/ε)^(2p/(2-p)) biggest terms in θ.

B. Optimal Recovery/Information-Based Complexity Background

Our question now becomes: if x is an unknown signal whose transform coefficient vector θ obeys (I.1), can we make a reduced number n ≪ m of measurements which will allow faithful reconstruction of x? Such questions have been discussed (for other types of assumptions about x) under the names of Optimal Recovery [5] and Information-Based Complexity [6]; we now adopt their viewpoint, and partially adopt their notation, without making a special effort to be really orthodox. We use OR/IBC as a generic label for work taking place in those fields, admittedly being less than encyclopedic about various scholarly contributions.

We have a class X of possible objects of interest, and are interested in designing an information operator I_n that samples n pieces of information about x, and an algorithm A_n that offers an approximate reconstruction of x. Here the information operator takes the form

I_n(x) = (⟨ξ₁, x⟩, ..., ⟨ξₙ, x⟩)

where the ξ_i are sampling kernels, not necessarily sampling pixels or other simple features of the signal; however, they are nonadaptive, i.e., fixed independently of x. The algorithm A_n is an unspecified, possibly nonlinear reconstruction operator. We are interested in the ℓ² error of reconstruction and in the behavior of optimal information and of optimal algorithms. Hence, we consider the minimax ℓ² error as a standard of comparison:

E_n(X) = inf_{A_n, I_n} sup_{x ∈ X} ‖x - A_n(I_n(x))‖₂.

So here, all possible methods of nonadaptive linear sampling are allowed, and all possible methods of reconstruction are allowed.

In our application, the class X of objects of interest is the set of objects x = Ψθ where θ = Ψᵀx obeys (I.1) for a given p and R. Denote this class by

X_{p,m}(R) = { x : ‖Ψᵀx‖_p ≤ R }.

Our goal is to evaluate E_n(X_{p,m}(R)) and to have practical schemes which come close to attaining it.

C. Four Surprises

Here is the main quantitative phenomenon of interest for this paper.
Theorem 1: Let (n, m_n) be a sequence of problem sizes with n → ∞, m_n → ∞, and m_n ~ A·n^γ for A > 0 and γ ≥ 1. Then for 0 < p ≤ 1 there is C_p = C_p(A, γ) > 0 so that

E_n(X_{p,m}(R)) ≤ C_p · R · (n/log(m_n))^(1/2 - 1/p).   (I.3)

We find this surprising in four ways. First, compare (I.3) with (I.2). We see that the forms are similar, under the calibration N = n/log(m_n). In words: the quality of approximation to θ which could be obtained by using the N biggest transform coefficients can be obtained by using the n ≍ N·log(m) pieces of nonadaptive information provided by I_n. The surprise is that we would not know in advance which transform coefficients are likely to be the important ones in this approximation, yet the optimal information operator I_n is nonadaptive, depending at most on the class X_{p,m}(R) and not on the specific object. In some sense this nonadaptive information is just as powerful as knowing the N best transform coefficients.

This seems even more surprising when we note that for objects x ∈ X_{p,m}(R), the transform representation is the optimal one: no other representation can do as well at characterizing x by a few coefficients [3], [7]. Surely then, one imagines, the sampling kernels ξ_i underlying the optimal information

operator must be simply measuring individual transform coefficients? Actually, no: the information operator is measuring very complex "holographic" functionals which in some sense mix together all the coefficients in a big soup. Compare (VI.1) below. (Holography is a process where a three-dimensional (3-D) image generates by interferometry a two-dimensional (2-D) transform. Each value in the 2-D transform domain is influenced by each part of the whole 3-D object. The 3-D object can later be reconstructed by interferometry from all or even a part of the 2-D transform domain. Leaving aside the specifics of 2-D/3-D and the process of interferometry, we perceive an analogy here, in which an object is transformed to a compressed domain, and each compressed-domain component is a combination of all parts of the original object.)

Another surprise is that, if we enlarged our class of information operators to allow adaptive ones, e.g., operators in which certain measurements are made in response to earlier measurements, we could scarcely do better. Define the minimax error under adaptive information, E_n^{Adapt}(X), allowing adaptive operators

I_n^{Adapt}(x) = (⟨ξ₁, x⟩, ⟨ξ₂^{(·)}, x⟩, ..., ⟨ξₙ^{(·)}, x⟩)

where, for i ≥ 2, each kernel ξ_i^{(·)} is allowed to depend on the information ⟨ξ_j, x⟩, j < i, gathered at previous stages. Formally setting

E_n^{Adapt}(X) = inf_{A_n, I_n^{Adapt}} sup_{x ∈ X} ‖x - A_n(I_n^{Adapt}(x))‖₂,

we have the following.

Theorem 2: For 0 < p ≤ 1, there is C = C(p) > 0 so that for all n and m

E_n(X_{p,m}(R)) ≤ C · E_n^{Adapt}(X_{p,m}(R)).

So adaptive information is of minimal help, despite the quite natural expectation that an adaptive method ought to be able iteratively somehow to localize and then close in on the "big coefficients."

An additional surprise is that, in the already-interesting case p = 1, Theorems 1 and 2 are easily derivable from known results in OR/IBC and approximation theory! However, the derivations are indirect; so although they have what seem to the author as fairly important implications, very little seems known at present about good nonadaptive information operators or about concrete algorithms matched to them.
Our goal in this paper is to give direct arguments which cover the case 0 < p ≤ 1 of highly compressible objects, to give direct information about near-optimal information operators, and to give concrete, computationally tractable algorithms for using this near-optimal information.

D. Geometry and Widths

From our viewpoint, the phenomenon described in Theorem 1 concerns the geometry of high-dimensional convex and nonconvex "balls." To see the connection, note that the class X_{p,m}(R) is the image, under orthogonal transformation, of an ℓ^p ball. If p = 1, this is convex and symmetric about the origin, as well as being orthosymmetric with respect to the axes provided by the wavelet basis; if p < 1, this is again symmetric about the origin and orthosymmetric, while not being convex, but still star-shaped.

To develop this geometric viewpoint further, we consider two notions of n-width; see [5].

Definition 1.1: The Gel'fand n-width of a set X with respect to the ℓ² norm is defined as

d^n(X; ℓ²) = inf_{V_n} sup { ‖x‖₂ : x ∈ X ∩ V_n^⊥ }

where the infimum is over n-dimensional linear subspaces V_n of R^m, and V_n^⊥ denotes the orthocomplement of V_n with respect to the standard Euclidean inner product.

In words, we look for a subspace such that "trapping" x ∈ X in that subspace causes x to be small. Our interest in Gel'fand n-widths derives from an equivalence between optimal recovery for nonadaptive information and such n-widths, well known in the case p = 1 [5], and in the present setting p < 1 extending as follows.

Theorem 3: For X = X_{p,m}(R),

E_n(X) = d^n(X; ℓ²),  p = 1,   (I.4)

d^n(X; ℓ²) ≤ E_n(X) ≤ 2^(1/p) · d^n(X; ℓ²),  0 < p < 1.   (I.5)

Thus the Gel'fand n-widths either exactly or nearly equal the value of optimal information. Ultimately, the bracketing with constant 2^(1/p) will be for us just as good as equality, owing to the unspecified constant factors in (I.3). We will typically only be interested in near-optimal performance, i.e., in obtaining E_n to within constant factors.

It is relatively rare to see the Gel'fand n-widths studied directly [8]; more commonly, one sees results about the Kolmogorov n-widths.

Definition 1.2: Let X ⊂ R^m be a bounded set. The Kolmogorov n-width of X with respect to the ℓ² norm is defined as

d_n(X; ℓ²) = inf_{V_n} sup_{x ∈ X} inf_{y ∈ V_n} ‖x - y‖₂

where the infimum is over n-dimensional linear subspaces V_n of R^m.
In words, d_n measures the quality of approximation of X possible by n-dimensional subspaces V_n.

In the case p = 1, there is an important duality relationship between Kolmogorov widths and Gel'fand widths which allows us to infer properties of d^n from published results on d_n. To state it, let d_n(X; ℓ^q) be defined in the obvious way, based on approximation in the ℓ^q rather than the ℓ² norm. Also, for given p ≥ 1 and q ≥ 1, let p′ and q′ be the standard dual indices, 1/p + 1/p′ = 1 and 1/q + 1/q′ = 1. Also, let b_p^m denote the standard unit ball of ℓ_p^m. Then [8]

d^n(b_p^m; ℓ^q) = d_n(b_{q′}^m; ℓ^{p′}).

In particular

d^n(b_1^m; ℓ²) = d_n(b_2^m; ℓ^∞).   (I.6)

The asymptotic properties of the left-hand side of (I.6) have been determined by Garnaev and Gluskin [9]. This follows major work by Kashin [10], who developed a slightly weaker version of this result in the course of determining the Kolmogorov n-widths of Sobolev spaces. See the original papers, or Pinkus's book [8], for more details.

Theorem 4 (Kashin; Garnaev and Gluskin (KGG)): For all n and m > n

d^n(b_1^m; ℓ²) ≍ min(1, √(log(m/n + 1)/n)).

Theorem 1 now follows in the case p = 1 by applying KGG with the duality formula (I.6) and the equivalence formula (I.4). The case p < 1 of Theorem 1 does not allow use of duality, and the whole range 0 < p ≤ 1 is approached differently in this paper.

E. Mysteries

Because of the indirect manner by which the KGG result implies Theorem 1, we really do not learn much about the phenomenon of interest in this way. The arguments of Kashin, Garnaev, and Gluskin show that there exist near-optimal n-dimensional subspaces for the Kolmogorov widths; they arise as the null spaces of certain matrices with entries ±1 which are known to exist by counting the number of matrices lacking certain properties, counting the total number of matrices with ±1 entries, and comparing. The interpretability of this approach is limited.

The implicitness of the information operator is matched by the abstractness of the reconstruction algorithm. Based on OR/IBC theory we know that the so-called central algorithm is optimal. This algorithm asks us to consider, for given information y_n = I_n(x), the collection of all objects in X which could have given rise to the data:

X̂(y_n) = { x ∈ X : I_n(x) = y_n }.

Defining now the center of a set S,

center(S) = argmin_c sup_{x ∈ S} ‖x - c‖₂,

the central algorithm is

x̂*(y_n) = center(X̂(y_n)).

To evaluate the quality of an information operator, set

E_n(I_n, X) = inf_{A_n} sup_{x ∈ X} ‖x - A_n(I_n(x))‖₂;

the central algorithm attains this value, and it obeys, when the information I_n is optimal, E_n(I_n, X) = E_n(X). To evaluate the quality of a combined algorithm/information pair, set

E_n(A_n, I_n, X) = sup_{x ∈ X} ‖x - A_n(I_n(x))‖₂;

see Section III below.

This abstract viewpoint unfortunately does not translate into a practical approach (at least in the case of the X_{p,m}(R), p < 1). The set X̂(y_n) is a section of the ℓ^p ball, and finding the center of this section does not correspond to a standard tractable computational problem. Moreover, this assumes we know p and R, which would typically not be the case.

F. Results

Our paper develops two main types of results.

Near-Optimal Information. We directly consider the problem of near-optimal subspaces for Gel'fand n-widths of X_{p,m}(R), and introduce three structural conditions (CS1-CS3) on an n-by-m matrix which imply that its nullspace is near-optimal. We show that the vast majority of n-subspaces of R^m are near-optimal, and that random sampling yields near-optimal information operators with overwhelmingly high probability.

Near-Optimal Algorithm. We study a simple nonlinear reconstruction algorithm: simply minimize the ℓ¹ norm of the coefficients θ subject to consistency with the measurements. This has been studied in the signal processing literature under the name Basis Pursuit; it can be computed by linear programming. We show that this method gives near-optimal results for all 0 < p ≤ 1.

In short, we provide a large supply of near-optimal information operators and a near-optimal reconstruction procedure based on linear programming, which, perhaps unexpectedly, works even for the nonconvex case p < 1.

For a taste of the type of result we obtain, consider a specific information/algorithm combination.

CS Information. Let Φ be an n × m matrix generated by randomly sampling the columns, with different columns independent and identically distributed (i.i.d.) random uniform on the unit sphere S^(n-1). With overwhelming probability for large n, Φ has properties CS1-CS3 discussed in detail in Section II-A below; assume we have achieved such a favorable draw. Let Ψ be the m × m basis matrix with basis vector ψ_i as the i-th column. The CS Information operator is the matrix I_n = ΦΨᵀ.

ℓ¹-minimization. To reconstruct from CS Information, we solve the convex optimization problem

(P₁)  min ‖Ψᵀx‖₁ subject to y_n = ΦΨᵀx.

In words, we look for the object x̂₁ having coefficients with smallest ℓ¹ norm that is consistent with the information y_n.
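A favorable draw of CS Information can be sketched in a few lines (a minimal illustration with Ψ = I and illustrative sizes chosen here; verifying that such draws satisfy CS1-CS3 is the subject of Sections II and VII). Columns i.i.d. uniform on the sphere are obtained by normalizing Gaussian vectors.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 64, 256

# Columns i.i.d. uniform on S^(n-1): normalize standard Gaussian draws.
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

# With Psi = I (the convention adopted in Section II), applying the
# CS Information operator to an object x is simply y = Phi @ x.
x = np.zeros(m)
x[[10, 100, 200]] = [1.0, -2.0, 0.5]   # a sparse object
y = Phi @ x                            # n holographic measurements
```

Each entry of y mixes all coordinates of x, which is the "holographic" character of the measurements described in the Introduction.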
Theorem 5: Let (n, m_n) be a sequence of problem sizes obeying n → ∞ and m_n ~ A·n^γ, A > 0, γ ≥ 1; let (Φ_n) be a corresponding sequence of n × m_n matrices deriving from CS matrices with underlying parameters η_i and ρ (see Section II below), and let I_n = Φ_n Ψᵀ. Let 0 < p ≤ 1. There exists C = C(p, (η_i), ρ, A, γ) > 0 so that I_n is near-optimal:

E_n(I_n, X_{p,m}(R)) ≤ C · E_n(X_{p,m}(R)),  R > 0, n > n₀. Moreover, the algorithm A_{1,n} delivering the solution to (P₁) is near-optimal:

E_n(A_{1,n}, I_n, X_{p,m}(R)) ≤ C · E_n(X_{p,m}(R)),  R > 0, n > n₀.

Thus, for large n, we have a simple description of near-optimal information and a tractable near-optimal reconstruction algorithm.

G. Potential Applications

To see the potential implications, recall first the Bump Algebra model for spectra. In this context, our result says that, for a spectrometer based on the information operator I_n in Theorem 5, it is really only necessary to take n ≪ m measurements to get an accurate reconstruction of such spectra, rather than the nominal m measurements. However, they must then be processed nonlinearly.

Recall the Bounded Variation model for images. In that context, a result paralleling Theorem 5 says that for a specialized imaging device based on a near-optimal information operator, it is really only necessary to take n = O(m^(1/4) log^(5/2)(m)) measurements to get an accurate reconstruction of images with m pixels, rather than the nominal m measurements. The calculations underlying these results will be given below, along with a result showing that for cartoon-like images (which may model certain kinds of simple natural imagery, like brain scans), the number of measurements for an m-pixel image is smaller still.

H. Contents

Section II introduces a set of conditions CS1-CS3 for near-optimality of an information operator. Section III considers abstract near-optimal algorithms, and proves Theorems 1-3. Section IV shows that solving the convex optimization problem (P₁) gives a near-optimal algorithm whenever 0 < p ≤ 1. Section V points out immediate extensions to weak-ℓ^p conditions and to tight frames. Section VI sketches potential implications in image, signal, and array processing. Section VII, building on work in [11], shows that conditions CS1-CS3 are satisfied for "most" information operators. Finally, in Section VIII, we note the ongoing work by two groups (Gilbert et al. [12] and Candès et al. [13], [14]) which, although not written in the n-widths/OR/IBC tradition, implies (as we explain) closely related results.

II. INFORMATION

Consider information operators constructed as follows. With Ψ the orthogonal matrix whose columns are the basis elements ψ_i, and with certain n-by-m matrices Φ obeying conditions specified below, we construct corresponding information operators I_n = ΦΨᵀ. Everything will be completely transparent to the choice of orthogonal matrix Ψ; hence we will assume that Ψ is the identity throughout this section.

In view of the relation between Gel'fand n-widths and minimax errors, we may work with n-widths. As usual, let nullspace(Φ) = {x : Φx = 0} denote the nullspace relative to an operator Φ. We define the width of a set X relative to Φ:

w(Φ, X) ≡ sup { ‖x‖₂ : x ∈ X ∩ nullspace(Φ) }.   (II.1)

In words, this is the radius of the section of X cut out by the nullspace. In general, the Gel'fand n-width is the smallest value of w obtainable by choice of Φ:

d^n(X; ℓ²) = inf { w(Φ, X) : Φ is an n × m matrix }.

We will show, for all large n and m, the existence of n-by-m matrices Φ where

w(Φ, b_p^m) ≤ C · (n/log(m))^(1/2 - 1/p),

with C dependent at most on p and the ratio log(m)/log(n).

A. Conditions CS1-CS3

In the following, with J ⊂ {1, ..., m}, let Φ_J denote the submatrix of Φ obtained by selecting just the indicated columns of Φ. We let V_J denote the range of Φ_J in R^n. Finally, we consider a family of quotient norms on R^n; with ‖·‖₁ denoting the ℓ¹ norm on vectors indexed by the complement {1, ..., m} \ J, set

Q_J(v) = min { ‖α‖₁ : Φ_{J^c} α = v }.

These describe the minimal ℓ¹-norm representation of v achievable using only the specified subsets of columns of Φ.

We define three conditions to impose on an n × m matrix Φ, indexed by strictly positive parameters η₁, η₂, η₃, and ρ.

CS1: The minimal singular value of Φ_J exceeds η₁ > 0, uniformly in |J| < ρ·n/log(m).

CS2: On each subspace V_J we have the inequality ‖v‖₁ ≥ η₂·√n·‖v‖₂ for all v ∈ V_J, uniformly in |J| < ρ·n/log(m).

CS3: On each subspace V_J we have Q_J(v) ≥ η₃·‖v‖₁ for all v ∈ V_J, uniformly in |J| < ρ·n/log(m).

CS1 demands a certain quantitative degree of linear independence among all small groups of columns. CS2 says that linear combinations of small groups of columns give vectors that look much like random noise, at least as far as the comparison of ℓ¹ and ℓ² norms is concerned. It will be implied by a geometric fact: every V_J slices through the ℓ¹ ball of R^n in such a way that the resulting convex section is actually close to spherical.
CS3 says that for every vector in some V_J, the associated quotient norm Q_J is never dramatically smaller than the simple ℓ¹ norm on R^n.

It turns out that matrices satisfying these conditions are ubiquitous for large n and m when we choose the η_i and ρ properly. Of course, for any finite n and m, all norms are equivalent, and almost any arbitrary matrix can trivially satisfy these conditions simply by taking η₁, η₂, η₃ very small and ρ very large. However, the definition of "very small" and "very large" would have to

depend on n and m for this trivial argument to work. We claim something deeper is true: it is possible to choose η_i and ρ independent of n and of m.

Consider the set of all n × m matrices having unit-normalized columns. On this set, measure frequency of occurrence with the natural uniform measure (the product measure, uniform on each factor S^(n-1)).

Theorem 6: Let (n, m_n) be a sequence of problem sizes with n → ∞, m_n → ∞, and m_n ~ A·n^γ, A > 0, γ ≥ 1. There exist η₁, η₂, η₃ > 0 and ρ > 0 depending only on A and γ so that, for each δ > 0, the proportion of all n × m matrices Φ satisfying CS1-CS3 with parameters (η_i) and ρ eventually exceeds 1 - δ.

We will discuss and prove this in Section VII. The proof will show that the proportion of matrices not satisfying the conditions decays exponentially fast in n. For later use, we will leave the constants implicit and speak simply of CS matrices, meaning matrices that satisfy the given conditions with values of parameters of the type described by this theorem, namely, with η_i and ρ not depending on n and permitting the above ubiquity.

B. Near-Optimality of CS Matrices

We now show that the CS conditions imply near-optimality of widths induced by CS matrices.

Theorem 7: Let (n, m_n) be a sequence of problem sizes with m_n ~ A·n^γ. Consider a sequence of n-by-m matrices Φ_n obeying the conditions CS1-CS3 with η_i and ρ positive and independent of n. Then for each 0 < p ≤ 1, there is C = C(p, (η_i), ρ, A, γ) so that

w(Φ_n, b_p^m) ≤ C · (n/log(m))^(1/2 - 1/p),  n > n₀.

Proof: Consider the optimization problem

(Q)  sup ‖x‖₂ subject to Φx = 0, ‖x‖_p ≤ 1.

Our goal is to bound the value of (Q). Choose N ≈ ρ·n/log(m). Let J denote the indices of the N largest values in |x|. Without loss of generality, suppose coordinates are ordered so that J comes first among the m entries, and partition x = (x_J, x_{J^c}). Clearly

‖x‖₂ ≤ ‖x_J‖₂ + ‖x_{J^c}‖₂,   (II.2)

while, because each entry in x_J is at least as big as any entry in x_{J^c}, (I.2) gives

‖x_{J^c}‖₂ ≤ ζ_{2,p} · (N + 1)^(1/2 - 1/p).   (II.3)

A similar argument for ℓ¹ approximation gives, in case p < 1,

‖x_{J^c}‖₁ ≤ ζ_{1,p} · (N + 1)^(1 - 1/p).   (II.4)

Now Φx = 0 implies Φ_J x_J = -Φ_{J^c} x_{J^c}. Hence, with v = Φ_J x_J, we have Q_J(v) ≤ ‖x_{J^c}‖₁. As v ∈ V_J, we can invoke CS3, getting

‖v‖₁ ≤ η₃^(-1) · ‖x_{J^c}‖₁.

On the other hand, again using v ∈ V_J, invoke CS2, getting

‖v‖₂ ≤ (η₂ √n)^(-1) · ‖v‖₁.

Combining these with the above,

‖v‖₂ ≤ (η₂ η₃ √n)^(-1) · ζ_{1,p} · (N + 1)^(1 - 1/p).

Recalling v = Φ_J x_J and invoking CS1, we have

‖x_J‖₂ ≤ η₁^(-1) · ‖v‖₂.

In short, with c = ζ_{1,p}/(η₁ η₂ η₃),

‖x‖₂ ≤ c · n^(-1/2) · (N + 1)^(1 - 1/p) + ζ_{2,p} · (N + 1)^(1/2 - 1/p).

The theorem follows upon taking N ≍ ρ·n/log(m), for which both terms are of order (n/log(m))^(1/2 - 1/p).
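Condition CS1 lends itself to a quick Monte Carlo sanity check: draw a unit-column random matrix and examine the smallest singular value of Φ_J over small column subsets J. The sketch below is an illustration with sizes chosen here, and it spot-checks random subsets rather than verifying all (m choose k) subsets as the condition requires.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, trials = 64, 256, 8, 50

# Unit-column random matrix, as in the uniform measure of Theorem 6.
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

# Smallest singular value of Phi_J over randomly drawn subsets J, |J| = k.
sigma_min = min(
    np.linalg.svd(Phi[:, rng.choice(m, size=k, replace=False)],
                  compute_uv=False)[-1]
    for _ in range(trials)
)
# For k much smaller than n, sigma_min stays well away from zero,
# consistent with the quantitative linear independence CS1 demands.
```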
III. ALGORITHMS

Given an information operator I_n, we must design a reconstruction algorithm A_n which delivers reconstructions compatible in quality with the estimates for the Gel'fand n-widths. As discussed in the Introduction, the optimal method in the OR/IBC framework is the so-called central algorithm, which, unfortunately, is typically not efficiently computable in our setting. We now describe an alternate abstract approach, allowing us to prove Theorem 1.

A. Feasible-Point Methods

Another general abstract algorithm from the OR/IBC literature is the so-called feasible-point method, which aims simply to find any reconstruction compatible with the observed information and the constraints.

As in the case of the central algorithm, we consider, for given information y_n = I_n(x), the collection of all objects which could have given rise to the information:

X̂(y_n) = { x ∈ X : I_n(x) = y_n }.

In the feasible-point method, we simply select any member of X̂(y_n), by whatever means. One can show, adapting standard OR/IBC arguments in [15], [6], [8], the following.

Lemma 3.1: Let y_n = I_n(x₀) where x₀ ∈ X and I_n is an optimal information operator, and let x̂ be any element of X̂(y_n). Then for 0 < p ≤ 1

‖x₀ - x̂‖₂ ≤ 2 · E_n(X).   (III.1)

In short, any feasible point is within a factor two of optimal.

Proof: We first justify our claims for the optimality of the central algorithm, and then show that a feasible point is near to the central algorithm. Let x̂* again denote the result of the central algorithm, and define

radius(S) = inf_c sup { ‖x - c‖₂ : x ∈ S }.

Now clearly, in the special case when x is only known to lie in X and y_n is measured, the minimax error is exactly radius(X̂(y_n)). Since this error is achieved by the central algorithm for each such y_n, the minimax error over all x is achieved by the central algorithm. This minimax error is

E_n(I_n, X) = sup_{y_n} radius(X̂(y_n)).

Now the feasible point x̂ obeys x̂ ∈ X̂(y_n), as does x₀; the triangle inequality gives

‖x₀ - x̂‖₂ ≤ ‖x₀ - x̂*‖₂ + ‖x̂* - x̂‖₂ ≤ 2 · radius(X̂(y_n)) ≤ 2 · E_n(I_n, X);

hence, as I_n is optimal, ‖x₀ - x̂‖₂ ≤ 2·E_n(X). More generally, if the information operator I_n is only near-optimal, then the same argument gives

‖x₀ - x̂‖₂ ≤ 2 · E_n(I_n, X).   (III.2)

A popular choice of feasible point is to take an element of least norm, i.e., a solution of the problem

(P_p)  min ‖Ψᵀx‖_p subject to y_n = I_n(x),

where here θ = Ψᵀx is the vector of transform coefficients. A nice feature of this approach is that it is not necessary to know the radius R of the ball X_{p,m}(R); the element of least p-norm will always lie inside it. For later use, call the solution x̂_{p,n}. By the preceding lemma, this procedure is near-minimax:

sup_{x ∈ X_{p,m}(R)} ‖x - x̂_{p,n}‖₂ ≤ 2 · E_n(I_n, X_{p,m}(R)).

B. Proof of Theorem 3

Before proceeding, it is convenient to prove Theorem 3. Note that the case p = 1 is well known in OR/IBC, so we only need to give an argument for p < 1 (though it happens that our argument works for p = 1 as well). The key point will be to apply the p-triangle inequality

‖θ + θ′‖_p^p ≤ ‖θ‖_p^p + ‖θ′‖_p^p,

valid for 0 < p < 1; this inequality is well known in interpolation theory [17] through Peetre and Sparr's work, and is easy to verify directly.

Suppose without loss of generality that there is an optimal subspace, which is fixed and given in this proof. As we just saw, E_n(X) = sup_{y_n} radius(X̂(y_n)) when the information operator is optimal. Now suppose x₁ and x₂ both belong to X̂(y_n); then I_n(x₁ - x₂) = 0, and by the p-triangle inequality the vector δ = (x₁ - x₂)/2^(1/p) obeys

‖Ψᵀδ‖_p^p ≤ (‖Ψᵀx₁‖_p^p + ‖Ψᵀx₂‖_p^p)/2 ≤ R^p,

so δ belongs to X ∩ nullspace(I_n). Hence ‖x₁ - x₂‖₂ ≤ 2^(1/p)·w(I_n, X), and so radius(X̂(y_n)) ≤ 2^(1/p)·w(I_n, X). Taking the infimum over information operators gives E_n(X) ≤ 2^(1/p)·d^n(X; ℓ²), the upper bound in (I.5). In the other direction, for any information operator the zero-information set X̂(0) contains X ∩ nullspace(I_n), and the minimax error is at least its radius, which by the symmetry of X equals w(I_n, X); hence d^n(X; ℓ²) ≤ E_n(X), which gives (I.4) and the lower bound in (I.5).

C. Proof of Theorem 1

We are now in a position to prove Theorem 1 of the Introduction. First, in the case p = 1, we have already explained in the Introduction that the theorem of Garnaev and Gluskin implies

the result by duality. In the case p < 1, we need only to show a lower bound and an upper bound of the same order.

For the lower bound, we consider the entropy numbers, defined as follows. Let X be a set, and let e_n(X; ℓ²) be the smallest number ε such that an ε-net for X can be built using a net of cardinality at most 2^n. From Carl's theorem [18] (see the exposition in Pisier's book [19]), there is a constant c > 0 so that the Gel'fand n-widths dominate the entropy numbers. Secondly, the entropy numbers of ℓ^p balls obey [20], [21]

e_n(b_p^m; ℓ²) ≍ (n/log(m/n + 1))^(1/2 - 1/p).

At the same time, the combination of Theorems 7 and 6 shows that

d^n(b_p^m; ℓ²) ≤ C · (n/log(m))^(1/2 - 1/p).

Applying now the feasible-point method, we obtain matching upper bounds on E_n, with immediate extensions to X_{p,m}(R) for all R > 0. We conclude that

E_n(X_{p,m}(R)) ≍ R · (n/log(m))^(1/2 - 1/p),

as was to be proven.

D. Proof of Theorem 2

Now is an opportune time to prove Theorem 2. We note that in the case p = 1, this is known [22]-[25]. The argument is the same for p < 1, and we simply repeat it. Suppose that x = 0, and consider the adaptively constructed subspace according to whatever algorithm is in force. When the algorithm terminates, we have an n-dimensional information vector 0 and a subspace V₀ consisting of objects which would all give that same information vector. For all objects in X ∩ V₀, the adaptive information therefore turns out the same. Now the minimax error associated with that information is exactly radius(X ∩ V₀); but this cannot be smaller than the radius of the best section of X by a subspace of codimension n. The result follows by comparing with d^n(X; ℓ²) and applying Theorem 3.

IV. BASIS PURSUIT

The least-norm method of the previous section has two drawbacks. First, it requires that one know p; we prefer an algorithm which works for all 0 < p ≤ 1. Second, if p < 1, the least-norm problem invokes a nonconvex optimization procedure, and would be considered intractable. In this section, we correct both drawbacks.

While the case p = 1 is already significant and interesting, the case p < 1 is of interest because it corresponds to data which are more highly compressible, offering more impressive performance in Theorem 1: the exponent 1/2 - 1/p is even stronger than in the case p = 1. Later in this section, we extend the same interpretation of x̂₁ to performance over X_{p,m}(R) throughout 0 < p ≤ 1.

A. The Case p = 1

In the case p = 1, (P₁) is a convex optimization problem. Written in an equivalent form, with θ being the optimization variable, (P₁) becomes

min ‖θ‖₁ subject to Φθ = y_n.

This can be formulated as a linear programming problem: let A be the n by 2m matrix [Φ, -Φ]. The linear program

(LP)  min 1ᵀz subject to Az = y_n, z ≥ 0   (IV.1)

has a solution z*, say, a vector in R^(2m) which can be partitioned as z* = [u*; v*]; then θ* = u* - v* solves (P₁). The reconstruction is x̂₁ = Ψθ*. This linear program is typically considered computationally tractable. In fact, this problem has been studied in the signal analysis literature under the name Basis Pursuit [26]; in that work, very large-scale underdetermined problems were solved successfully using interior-point optimization methods.

As far as performance goes, we already know that this procedure is near-optimal in the case p = 1; from (III.2) we have the following.

Corollary 4.1: Suppose that I_n is an information operator achieving, for some C > 0,

E_n(I_n, X_{1,m}(R)) ≤ C · E_n(X_{1,m}(R));

then the Basis Pursuit algorithm x̂₁ achieves, for all R > 0,

sup_{x ∈ X_{1,m}(R)} ‖x - x̂₁‖₂ ≤ 2C · E_n(X_{1,m}(R)).

In particular, we have a universal algorithm for dealing with any class X_{1,m}(R), i.e., any R, any m, any n. First, apply a near-optimal information operator; second, reconstruct by Basis Pursuit. The result obeys

‖x - x̂₁‖₂ ≤ C · R · (n/log(m))^(-1/2)

for a constant C depending at most on log(m)/log(n). The inequality can be put to use as follows. Fix ε > 0. Suppose the unknown object x is known to be highly compressible, say obeying the a priori bound ‖Ψᵀx‖₁ ≤ R. For any such object, rather than making m measurements, we only need to make n measurements with n/log(m) of order (R/ε)², and our reconstruction obeys ‖x - x̂₁‖₂ ≤ ε.
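The reformulation (IV.1) can be sketched numerically. The following is a minimal illustration (the sizes, seed, and use of scipy's linprog are choices made here, not the paper's): split θ = u - v with u, v ≥ 0 and minimize the sum of the entries subject to [Φ, -Φ][u; v] = y.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 10, 24
Phi = rng.standard_normal((n, m)) / np.sqrt(n)

x0 = np.zeros(m)
x0[[3, 17]] = [2.0, -1.5]        # a sparse object (Psi = I here)
y = Phi @ x0                      # n = 10 measurements of an m = 24 vector

# (IV.1): min 1'z  s.t.  [Phi, -Phi] z = y,  z >= 0, with z = [u; v].
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
x_hat = res.x[:m] - res.x[m:]     # theta* = u* - v* solves (P1)
```

By construction x0 is feasible for (P₁), so the solver's optimum satisfies ‖x_hat‖₁ ≤ ‖x0‖₁ while matching the measurements exactly; for sufficiently sparse x0 one typically observes exact recovery, in the spirit of Theorem 8 below.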

9 DONOHO: COMPRESSED SENSING 1297 B. Relation Between Minimization The general OR/IBC theory would suggest that to hle cases where, we would need to solve the nonconvex optimization problem, which seems intractable. However, in the current situation at least, a small miracle happens: solving is again near-optimal. To underst this, we first take a small detour, examining the relation between the extreme case of the spaces. Let us define where is just the number of nonzeros in. Again, since the work of Peetre Sparr [16], the importance of the relation with for is well understood; see [17] for more detail. Ordinarily, solving such a problem involving the norm requires combinatorial optimization; one enumerates all sparse subsets of searching for one which allows a solution. However, when has a sparse solution, will find it. Theorem 8: Suppose that satisfies CS1 CS3 with given positive constants,. There is a constant depending only on not on or so that, if some solution to has at most nonzeros, then both have the same unique solution. In words, although the system of equations is massively underdetermined, minimization sparse solution coincide when the result is sufficiently sparse. There is by now an extensive literature exhibiting results on equivalence of minimization [27] [34]. In the early literature on this subject, equivalence was found under conditions involving sparsity constraints allowing nonzeros. While it may seem surprising that any results of this kind are possible, the sparsity constraint is, ultimately, disappointingly small. A major breakthrough was the contribution of Cès, Romberg, Tao [13] which studied the matrices built by taking rows at rom from an by Fourier matrix gave an order bound, showing that dramatically weaker sparsity conditions were needed than the results known previously. In [11], it was shown that for nearly all by matrices with, equivalence held for nonzeros,. 
The above result says effectively that for nearly all n-by-m matrices with n < m, equivalence holds for up to a constant fraction of n nonzeros. Our argument, in parallel with [11], shows that the nullspace of Φ has a very special structure for matrices obeying the conditions in question: when the object is sparse, the only element of the corresponding affine subspace (the object plus the nullspace) which can have small ℓ1 norm is the object itself.

To prove Theorem 8, we first need a lemma about the nonsparsity of elements in the nullspace of Φ. For a given vector v and subset Ω of indices, let v_Ω denote the mutilated vector whose entries agree with v on Ω and vanish elsewhere. Define the concentration ν(Φ, Ω) as the largest value of ||v_Ω||_1 / ||v||_1 over nonzero vectors v in the nullspace of Φ. This measures the fraction of ℓ1 norm which can be concentrated on a given subset for a vector in the nullspace of Φ. This concentration cannot be large if |Ω| is small.

Lemma 4.1: Suppose that Φ satisfies CS1–CS3, with given constants. There is a constant η depending on those constants so that, if |Ω| is below the indicated threshold, then ν(Φ, Ω) is correspondingly small; in particular, it can be driven below 1/2, which is what the proof of Theorem 8 requires.

Proof: This is a variation on the argument for Theorem 7. Assume without loss of generality that Ω is the most concentrated subset of its cardinality, and that the columns of Φ are numbered so that the indices of Ω come first; partition accordingly. We again invoke CS2 and CS3 to control the behavior off the subset, and CS1 to control the behavior on the subset. Combining all these bounds and setting the constant appropriately, the lemma follows.

Proof of Theorem 8: Suppose that x is supported on a subset Ω; we first show that if ν(Φ, Ω) < 1/2, then x is the only minimizer of the ℓ1 problem. Suppose that x′ is any other solution, and write d = x′ − x, which lies in the nullspace of Φ. Invoking the definition of the concentration twice, once on Ω and once on its complement,

  ||x′||_1 ≥ ||x||_1 − ||d_Ω||_1 + ||d||_1 − ||d_Ω||_1 ≥ ||x||_1 + (1 − 2ν(Φ, Ω)) ||d||_1,

so that ||x′||_1 > ||x||_1 unless d = 0; i.e., x is unique. Now recall the constant η of Lemma 4.1, and define the sparsity threshold accordingly; Lemma 4.1 then shows that sufficient sparsity implies ν(Φ, Ω) < 1/2.

C. Near-Optimality of Basis Pursuit for 0 < p ≤ 1

We now return to the claimed near-optimality of Basis Pursuit throughout the range 0 < p ≤ 1.

Theorem 9: Suppose that Φ satisfies CS1–CS3 with given constants. There is a constant so that a solution to a problem instance of ℓ1 minimization obeys the stated error estimate.

The proof requires an ℓ1 stability lemma, showing the stability of ℓ1 minimization under small perturbations as measured in ℓ1 norm. For ℓ2 stability lemmas, see [33], [35]; however, note that those lemmas do not suffice for our needs in this proof.
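The concentration functional can be probed numerically. The following sketch assumes NumPy; the matrix is a generic Gaussian stand-in, not one verified to satisfy CS1–CS3, so the numbers are only indicative of the phenomenon: a vector in the nullspace of a random matrix cannot carry much of its ℓ1 norm on a small coordinate subset.

```python
# For v in the nullspace of a random Phi, compute the largest fraction of
# ||v||_1 that any k coordinates can carry (an empirical stand-in for the
# concentration functional nu defined in the text).
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 200
Phi = rng.standard_normal((n, m))

_, _, Vt = np.linalg.svd(Phi)            # rows n..m-1 of Vt span the nullspace
v = Vt[-1]                               # one nullspace direction

def concentration(v, k):
    a = np.sort(np.abs(v))[::-1]         # decreasing rearrangement
    return a[:k].sum() / a.sum()

nu_small = concentration(v, 10)          # fraction carried by the 10 biggest entries
```

For this generic draw, ten coordinates out of two hundred carry only a small fraction of the ℓ1 mass; Lemma 4.1 asserts a uniform version of this statement over all small subsets and all nullspace vectors.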

Lemma 4.2: Let x be a vector and let x_Ω be the corresponding mutilated vector, with entries agreeing with x on Ω and vanishing elsewhere. Suppose that x is close to x_Ω in ℓ1 norm. Consider the instance of the ℓ1 minimization problem defined by the corresponding data; the solution of this instance obeys an ℓ1 error bound proportional to ||x − x_Ω||_1.

Proof of Lemma 4.2: By definition of the minimization problem (IV.2), the solution has ℓ1 norm no larger than that of the mutilated vector; combining the resulting inequalities and setting the constants appropriately, (IV.2) follows.

Proof of Theorem 9: We use the same general framework as in Theorem 7. Let x̂ solve the ℓ1 problem, and consider the difference between x̂ and the object. Let Ω index the largest-amplitude entries. From (II.4) we have a bound on the tail beyond Ω; Lemma 4.1 provides the concentration bound, and applying Lemma 4.2 gives (IV.3). The remaining difference vector lies in the nullspace of Φ, so its norms are comparable; we conclude by homogeneity, and combining this with (IV.3) finishes the proof.

V. IMMEDIATE EXTENSIONS

Before continuing, we mention two immediate extensions to the results so far; they are of interest below and elsewhere.

A. Tight Frames

Our main results so far have been stated in the context of Ψ being an orthonormal basis. In fact, the results hold for tight frames. These are collections of vectors which, when joined together as columns of a matrix Ψ, satisfy Ψ Ψᵀ = I. It follows that, if θ denotes the frame coefficients of an object, then we have the Parseval relation (equality of the ℓ2 norms of object and coefficients) while the reconstruction formula expresses the object as Ψ applied to θ. In fact, Theorems 7–9 only need the Parseval relation in the proof. Hence, the same results hold without change when the relation between object and coefficients involves a tight frame. In particular, if Φ is an n-by-m matrix satisfying CS1–CS3, then composing it with the frame analysis operator defines a near-optimal information operator, and solution of the corresponding ℓ1 optimization problem defines a near-optimal reconstruction algorithm.

A referee remarked that there is no need to restrict attention to tight frames here; if we have a general frame, the same results go through, with constants involving the frame bounds. This is true and potentially very useful, although we will not use it in what follows.

B. Weak-ℓp Balls

Our main results so far have been stated for ℓp spaces, but the proofs hold for weak-ℓp balls as well.
The weak-ℓp ball of radius R consists of vectors θ whose decreasing rearrangements |θ|₍₁₎ ≥ |θ|₍₂₎ ≥ ... obey |θ|₍ₖ₎ ≤ R · k^(−1/p). Conversely, for a given θ, the smallest R for which these inequalities all hold is defined to be the weak-ℓp norm of θ. The "weak" moniker derives from the fact that the weak-ℓp norm never exceeds the ordinary ℓp norm. Weak-ℓp constraints have the following key property: if θ_N denotes the mutilated version of θ with all except the N largest entries set to zero, then the inequality

  ||θ − θ_N||_q ≤ c_{p,q} · ||θ||_{wℓp} · N^(1/q − 1/p)   (V.1)

is valid for q = 1, 2. In fact, Theorems 7–9 only needed (V.1) in the proof, together with the domination of the weak norm by the strong norm (implicitly). Hence, we can state results for spaces defined using only weak-ℓp norms, and the proofs apply without change.

VI. STYLIZED APPLICATIONS

We sketch three potential applications of the above abstract theory. In each case, we exhibit that a certain class of functions
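The definitions above are easy to make concrete. The following sketch assumes NumPy and the normalization max over k of k^(1/p)·|θ|₍ₖ₎ for the weak-ℓp quasi-norm; the test vector is an illustrative power-law sequence.

```python
# Decreasing rearrangement and the weak-l_p quasi-norm
#   ||x||_{w l_p} = max_k  k^(1/p) * |x|_(k).
# A power-law vector |x|_(k) = k^(-1/p) sits exactly on the unit
# weak-l_p sphere, and the weak norm never exceeds the l_p quasi-norm.
import numpy as np

def weak_lp(x, p):
    a = np.sort(np.abs(x))[::-1]               # decreasing rearrangement
    k = np.arange(1, a.size + 1)
    return np.max(k ** (1.0 / p) * a)

p = 0.5
x = np.arange(1, 101, dtype=float) ** (-1.0 / p)   # |x|_(k) = k^{-2}
R = weak_lp(x, p)                                  # equals 1 for this vector
lp_norm = np.sum(np.abs(x) ** p) ** (1.0 / p)      # ordinary l_p quasi-norm
```

The comparison R ≤ lp_norm is the "weak is weaker" inequality invoked implicitly in the proofs.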

has expansion coefficients in a basis or frame that obey a particular ℓp or weak-ℓp embedding, and then apply the above abstract theory.

A. Bump Algebra

Consider the class of functions which are restrictions to the unit interval of functions belonging to the Bump Algebra [2], with bump norm at most B. This class was mentioned in the Introduction, where it was observed that the wavelet coefficients at each level obey an ℓ1 bound, with a constant depending only on the wavelet used. Here and later we use standard wavelet analysis notations as in [36], [37], [2].

We consider two ways of approximating functions in this class. In the classic linear scheme, we fix a finest scale j₁ and measure the resumé coefficients at that scale, using a smooth function integrating to 1. Think of these as point samples at scale 2^(−j₁) after applying an antialiasing filter. We reconstruct from these coefficients, incurring an approximation error with constant depending only on the chosen wavelet. There are N = 2^(j₁) coefficients associated with the unit interval, so the approximation error obeys the corresponding bound in terms of N.

In the compressed sensing scheme, we need also wavelets, built from an oscillating function with mean zero. We pick a coarsest scale j₀. We measure the resumé coefficients at that scale — there are 2^(j₀) of these — and then let θ denote an enumeration of the detail wavelet coefficients between the coarsest and finest scales. The dimension of θ is of order 2^(j₁), and its ℓ1 norm satisfies the indicated bound.

The compressed sensing scheme takes a total of 2^(j₀) samples of resumé coefficients and n samples associated with detail coefficients, for far fewer pieces of information in total. It achieves error comparable to classical sampling based on N samples. Thus, it needs dramatically fewer samples for comparable accuracy: roughly speaking, only the cube root of the number of samples of linear sampling.

To achieve this dramatic reduction in sampling, we need an information operator based on some Φ satisfying CS1–CS3. The underlying measurement kernels will be of the form (VI.1), where the collection being combined is simply an enumeration of the wavelets at the relevant scales.

B. Images of Bounded Variation

We consider now the model with images of Bounded Variation from the Introduction.
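The sample-count comparison can be made concrete with a back-of-the-envelope calculation; constants and log factors are dropped, so the numbers below are purely indicative, not the paper's bounds.

```python
# Indicative only: classical sampling uses N coefficients for a Bump
# Algebra function, while the compressed sensing scheme needs on the
# order of the cube root of N pieces of information (log factors ignored).
N = 10 ** 6                       # classical sample budget
n_cs = round(N ** (1 / 3))        # compressed sensing budget, up to log factors
reduction = N // n_cs             # roughly 10^4-fold fewer measurements
```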
Let the class in question consist of functions with domain [0,1]², having total variation at most B [38] and bounded in absolute value by B as well. In the Introduction, it was mentioned that the wavelet coefficients at each level obey an ℓ1 bound, with a constant depending only on the wavelet used; a corresponding overall bound holds as well.

We again consider two ways of approximating functions in this class. The classic linear scheme uses a 2-D version of the scheme we have already discussed. We again fix a finest scale j₁ and measure the resumé coefficients, where now the index k is a pair of integers indexing position. We use the Haar scaling function. Reconstructing from these coefficients gives an approximation error with constant depending only on the wavelet. There are N = 2^(2j₁) coefficients associated with the unit square, so the approximation error obeys the corresponding bound in terms of N.

In the compressed sensing scheme, we need also Haar wavelets, built from an oscillating function with mean zero which is either horizontally, vertically, or diagonally oriented. We pick a coarsest scale j₀ and measure the resumé coefficients at that scale — there are 2^(2j₀) of these. Then let θ be the concatenation of the detail wavelet coefficients between the coarsest and finest scales. The dimension of θ is of order 2^(2j₁), and its ℓ1 norm obeys the indicated bound. This establishes the constraint on the ℓ1 norm needed for our theory. We take an appropriate n and apply a near-optimal information operator for this problem size (described in more detail later). We apply the near-optimal algorithm of ℓ1 minimization to the resulting information, getting an error estimate with constant independent of N. The overall reconstruction then has error of the same order of magnitude as the error of linear sampling.

The compressed sensing scheme takes a total of 2^(2j₀) samples of resumé coefficients and n samples associated with detail coefficients. It achieves error comparable to classical sampling with N samples. Thus, just as we have seen in the Bump Algebra case, we need dramatically fewer samples for comparable accuracy: roughly speaking, only the square root of the number of samples of linear sampling.

C. Piecewise Smooth Images With Edges

We now consider an example with p < 1, where we can apply the extensions to tight frames and to weak-ℓp mentioned earlier. Again in the image processing setting, we use the model discussed in Candès and Donoho [39], [40]. Consider the collection of piecewise smooth images with values, first, and second partial derivatives bounded by B, away from an exceptional set which is a union of curves having bounded first and second derivatives in an appropriate parametrization; the curves have total length at most B. More colorfully, such images are cartoons: patches of uniform smooth behavior separated by piecewise-smooth curvilinear boundaries. They might be reasonable models for certain kinds of technical imagery, e.g., in radiology.

The curvelet tight frame [40] is a collection of smooth frame elements offering a Parseval relation and a reconstruction formula. The frame elements have a multiscale organization, and the frame coefficients grouped by scale obey a weak-ℓp constraint with constant depending on B; compare [40]. This establishes the constraint on the weak-ℓp norm needed for our theory.

For such objects, classical linear sampling at the finest scale by smooth 2-D scaling functions performs no better than in the Bounded Variation case, despite the piecewise smooth character of the images; the possible discontinuities are responsible for the inability of linear sampling to improve its performance.

In the compressed sensing scheme, we pick a coarsest scale j₀. We measure the resumé coefficients in a smooth wavelet expansion — there are 2^(2j₀) of these — and then let θ denote a concatenation of the finer scale curvelet coefficients. The dimension of θ exceeds 2^(2j₁) by a constant factor, due to the overcompleteness of curvelets. The weak-ℓp norm of θ obeys the indicated bound. We take an appropriate n and apply a near-optimal information operator for this problem size. We apply the near-optimal algorithm of ℓ1 minimization to the resulting information, getting an error estimate with absolute constant. The overall reconstruction has error of the same order of magnitude as the error of linear sampling, again with constant independent of N.

The compressed sensing scheme takes a total of 2^(2j₀) samples of resumé coefficients and n samples associated with detail coefficients, for far fewer pieces of measured information in total. It achieves error comparable to classical sampling based on N samples. Thus, even more so than in the Bump Algebra case, we need dramatically fewer samples for comparable accuracy: roughly speaking, only the fourth root of the number of samples of linear sampling.

VII. NEARLY ALL MATRICES ARE CS MATRICES

We may reformulate Theorem 6 as follows.

Theorem 10: Consider a sequence of problem sizes (n, m_n) with n → ∞ and m_n growing polynomially in n. Let Φ be an n-by-m matrix with columns drawn independently and uniformly at random from the unit sphere in Rⁿ. Then, for appropriate positive constants, conditions CS1–CS3 hold for Φ with overwhelming probability for all large n.

Indeed, note that the probability measure on Φ induced by sampling its columns i.i.d. uniform on the sphere is exactly the natural uniform measure on the set of such matrices. Hence, Theorem 6 follows immediately from Theorem 10. In effect, matrices satisfying the CS conditions are so ubiquitous that it is reasonable to generate them by sampling at random from a uniform probability distribution.

The proof of Theorem 10 is conducted over Sections VII-A–C; it proceeds by studying events on which CS1 holds, on which CS2 holds, and so on. It will be shown that for appropriate parameters each of these events has overwhelming probability; defining their intersection, we obtain an event of overwhelming probability on which our random draw has produced a matrix obeying CS1–CS3 with the given parameters. This proves Theorem 10. The proof actually shows that the probability of the complementary event decays exponentially fast in n.

A. Control of Minimal Eigenvalue

The following lemma allows us to choose positive constants so that condition CS1 holds with overwhelming probability.

Lemma 7.1: Consider sequences of problem sizes as in Theorem 10, and define the event that every submatrix formed from a small enough set of columns has the minimum eigenvalue of its Gram matrix bounded below by the stated threshold. Then, for sufficiently small constants, this event has overwhelming probability for large n.

The proof involves three ideas. The first idea is that the event of interest for Lemma 7.1 is representable in terms of events indexed by individual subsets of columns; our plan is to bound the probability of failure of every such event. The second idea is that for a specific subset we get large-deviations bounds on the minimum eigenvalue; this can be stated as in Lemma 7.2 below. The third and final idea is that bounds for individual subsets can control simultaneous behavior over all subsets. This is expressed as follows.

Lemma 7.3: Suppose we have events Ω_J, one for each subset J of the allowed cardinality, all obeying a failure bound of the form exp(−nβ) for some fixed β > 0. Pick the cardinality threshold small enough in relation to β. Then, for all sufficiently large n, the intersection of the Ω_J over all such subsets has failure probability at most exp(−nβ′) for some β′ > 0.

Our main goal of this subsection, Lemma 7.1, now follows by combining these three ideas. It remains only to prove Lemma 7.3. We note that, by Boole's inequality, the failure probability of the intersection is at most the number of subsets times the individual failure bound exp(−nβ). The number of subsets of the allowed cardinality grows at a rate controlled by the cardinality threshold, and is dominated by exp(nβ) as soon as that threshold is small enough; so, for sufficiently large n, we get the desired conclusion.
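The minimal-eigenvalue control can be glimpsed empirically. The following sketch assumes NumPy; the sizes, the number of trials, and the acceptance threshold are illustrative, not the constants of the lemmas.

```python
# Empirical peek at CS1-type control: sample columns uniformly on the
# sphere, draw random k-subsets, and record the worst minimum singular
# value observed. It stays comfortably bounded away from zero.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 400, 1600, 20
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)       # columns uniform on the sphere S^{n-1}

worst = min(
    np.linalg.svd(Phi[:, rng.choice(m, size=k, replace=False)],
                  compute_uv=False)[-1]
    for _ in range(50)
)
```

The lemmas upgrade this kind of per-subset observation to a uniform statement over all small subsets simultaneously, via the union bound of Lemma 7.3.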
Lemma 7.2: For, let denote the event that the minimum eigenvalue exceeds. Then there is so that for sufficiently small all uniformly in. This was derived in [11] in [35], using the concentration of measure property of singular values of rom matrices, e.g., see Szarek s work [41], [42]. B. Spherical Sections Property We now show that condition CS2 can be made overwhelmingly likely for large by choice of sufficiently small but still positive. Our approach derives from [11], which applied an important result from Banach space theory, the almost spherical sections phenomenon. This says that slicing the unit ball in a Banach space by intersection with an appropriate finite-dimensional linear subspace will result in a slice that is effectively spherical [43], [44]. We develop a quantitative refinement of this principle for the norm in, showing that, with overwhelming probability, every operator for affords a spherical section of the ball. The basic argument we use originates from work of Milman, Kashin, others [44], [10], [45]; we refine an argument in Pisier [19], as in [11], draw inferences that may be novel. We conclude that not only do almost-spherical sections exist, but they are so ubiquitous that every with will generate them. Definition 7.1: Let. We say that offers an -isometry between if (VII.1)

The following lemma shows that condition CS2 is a generic property of matrices.

Lemma 7.4: Consider the event that every subset J with |J| below the indicated threshold offers an ε-isometry between the ℓ2 and ℓ1 norms. For each ε, there is a choice of constants so that this event has overwhelming probability for large n.

To prove this, we first need a lemma about individual subsets, proven in [11].

Lemma 7.5: Fix ε. Choose constants so that (VII.2) and (VII.3) hold, and let the indicated quantity denote the difference between the two sides. For a subset J, let Ω_J denote the event that J furnishes an ε-isometry. Then the failure probability of Ω_J decays exponentially as n grows.

Now note that the event of interest for Lemma 7.4 is the simultaneous occurrence of the Ω_J; to finish, apply the individual-subset Lemma 7.5 together with the combining principle in Lemma 7.3.

C. Quotient Norm Inequalities

We now show that, for sufficiently small positive constants, nearly all large n-by-m matrices have property CS3. Our argument borrows heavily from [11], which the reader may find helpful. We here make no attempt to provide intuition or to compare with closely related notions in the local theory of Banach spaces (e.g., Milman's Quotient of a Subspace Theorem [19]).

Let J be any collection of indices in {1, ..., m}; the corresponding coordinate subspace is a linear subspace of Rᵐ, and on this subspace a subset of possible sign patterns can be realized, i.e., sequences of ±1's generated by vectors supported in J. CS3 will follow if we can show that, for every such sign pattern, some approximation to it satisfies the required dual-feasibility bounds.

Lemma 7.6: Uniform Sign-Pattern Embedding. Fix ε. Then, setting the constants as in (VII.4) and for sufficiently small thresholds, there is an event whose probability tends to one. On this event, for each subset J with |J| below the threshold and for each sign pattern on J, there is a vector obeying (VII.5) and (VII.6). In words, a small multiple of any sign pattern almost lives in the dual ball.

Before proving this result, we indicate how it gives the property CS3; namely, how the stated hypotheses imply (VII.7). Consider the convex optimization problem (VII.8). This can be written as a linear program, by the same sort of construction as given for (IV.1). By the duality theorem for linear programming, the value of the primal program is at least the value of the dual. Lemma 7.6 gives us a supply of dual-feasible vectors and hence a lower bound on the dual program: taking the vector furnished by the lemma, we can find a dual-feasible point obeying the required bound. Picking the constants appropriately and taking into account the spherical sections theorem, for sufficiently large n we obtain the desired inequality, and (VII.7) follows.

1) Proof of Uniform Sign-Pattern Embedding: The proof of Lemma 7.6 follows closely a similar result in [11] that considered a closely related setting. Our idea here is to adapt that argument to the present setting, with changes reflecting the different choice of constants and the different sparsity bound. We leave out large parts of the argument, as they are identical to the corresponding parts in [11]. The bulk of our effort goes to produce the following lemma, which demonstrates approximate embedding of a single sign pattern in the dual ball.

Lemma 7.7: Individual Sign-Pattern Embedding. Let a subset J and a sign pattern on J be given, with the constants as in the statement of Lemma 7.6. There is an iterative algorithm, described below, producing a vector as output which obeys (VII.9). Let the columns of Φ be i.i.d. uniform on the sphere; there is an event, described below, having probability controlled by (VII.10), for a bound which can be given explicitly in terms of the constants. On this event, (VII.11) holds.

Lemma 7.7 will be proven in a section of its own. We now show that it implies Lemma 7.6.
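The duality step used above is easy to check numerically. This sketch assumes NumPy, with a generic random matrix and vectors: any w with the ℓ∞ norm of Φᵀw at most 1 is dual feasible, and its pairing with the data lower-bounds the ℓ1 norm of every feasible point.

```python
# Weak LP duality behind the argument: if ||Phi^T w||_inf <= 1, then
#   <w, y> = <Phi^T w, x> <= ||Phi^T w||_inf * ||x||_1 <= ||x||_1
# for every x with Phi x = y, so w certifies a lower bound on min ||x||_1.
import numpy as np

rng = np.random.default_rng(5)
n, m = 30, 80
Phi = rng.standard_normal((n, m))
x = rng.standard_normal(m)                  # any feasible point
y = Phi @ x

w = rng.standard_normal(n)
w /= np.linalg.norm(Phi.T @ w, np.inf)      # rescale into the dual-feasible set

lower_bound = float(w @ y)                  # certified lower bound on min ||x||_1
```

Lemma 7.6 supplies dual-feasible vectors of exactly this kind, adapted to a given sign pattern, which is what makes the primal lower bound in the CS3 argument work.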

We recall a standard implication of so-called Vapnik–Chervonenkis theory [46]: the total number of sign patterns generated by such operators obeys a polynomial bound. Now, with the constant furnished by Lemma 7.7 positive, pick parameters accordingly and define subevents indexed by the individual subset/sign-pattern combinations, each being the instance of the event (called such in the statement of Lemma 7.7) generated by a specific combination. On the intersection of these subevents, every sign pattern associated with any qualifying subset is almost dual feasible. By the union bound, the failure probability is at most the number of sign patterns times the individual failure probability, which tends to zero exponentially rapidly. This proves Lemma 7.6.

D. Proof of Individual Sign-Pattern Embedding

1) An Embedding Algorithm: The companion paper [11] introduced an algorithm to create a dual-feasible point starting from a nearby almost-feasible point. It worked as follows. Let the active set be the collection of indices where the current iterate exceeds half the forbidden level. Apply the least-squares projector to kill those indices, and continue this process, producing successive iterates with stage-dependent thresholds successively closer to the forbidden level. If the active set is empty, the process terminates and we set the output to the current iterate. Termination must occur by a fixed stage. At termination, the output satisfies the required bounds; notice that if both stated conditions hold, then the output is definitely dual feasible. The only question is how close to the target sign pattern it is.

2) Analysis Framework: Also in [11], bounds were developed for two key descriptors of the algorithm trajectory. We adapt the arguments deployed there, defining bounds of the indicated form, where the remaining constants will be defined later. We will show that, with the constants chosen in conjunction with (VII.12), the required estimates hold, and the lemma follows.

3) Large Deviations: Define the events describing the successive iterates and active sets of the algorithm. Set the auxiliary quantities accordingly, and note that the key quantity depends only quite weakly on the dimension. Recall that the event of Lemma 7.7 is defined in terms of these quantities. Lemma 7.1 implicitly defined a quantity lower-bounding the minimum eigenvalue of

every Gram matrix arising from a subset of the relevant cardinality. Pick the constants so that, with this choice, when the corresponding event occurs the needed eigenvalue bounds hold. The term of most concern is at the extreme subset size; the other terms are always better. The bound in fact does not depend on the particular subset. In [11], an analysis framework was developed in which a family of i.i.d. random variables was introduced, and it was shown that the descriptors of the algorithm trajectory are dominated by sums of such variables; a similar analysis holds for the companion descriptors. That paper also stated two simple large-deviations bounds.

Lemma 7.8: Let the variables in question be i.i.d. of the indicated type; then the stated exponential tail bounds hold.

Applying this, we note that the event of interest, stated in terms of the algorithm's descriptors, is equivalent to an event stated in terms of standard random variables, and the required probability bound follows.

VIII. CONCLUSION

A. Summary

We have described an abstract framework for compressed sensing of objects which can be represented as vectors. We assume the object of interest is a priori compressible, so that its coefficients in a known basis or frame obey an ℓp or weak-ℓp bound. Starting from an n-by-m matrix Φ with n < m satisfying conditions CS1–CS3, and with Ψ the matrix of the orthonormal basis or tight frame underlying the compressibility, we define the information operator. Starting from the n-tuple of measured information, we reconstruct an approximation to the object by solving an ℓ1 minimization problem. The proposed reconstruction rule uses convex optimization and is computationally tractable. Also, the needed matrices satisfying CS1–CS3 can be constructed by random sampling from a uniform distribution on the columns.

We give error bounds showing that, despite the apparent undersampling (n ≪ m), good accuracy reconstruction is possible for compressible objects, and we explain the near-optimality of these bounds using ideas from Optimal Recovery and Information-Based Complexity. We even show that the results are stable under small measurement errors in the data. Potential applications are sketched related to imaging and spectroscopy.

B.
Alternative Formulation

We remark that the CS1–CS3 conditions are not the only way to obtain our results. Our proof of Theorem 9 really shows the following.

Theorem 11: Suppose that an n-by-m matrix Φ obeys the following conditions, with given positive constants.

A1: The maximal concentration (defined in Section IV-B) obeys (VIII.1).

A2: The width (defined in Section II) obeys (VIII.2).

Then, for some constant and all suitable problem instances, the solution of the ℓ1 minimization problem obeys the stated error estimate.

In short, a different approach might exhibit operators with good widths over ℓ1 balls only, and low concentration on thin sets. Another way to see that the conditions CS1–CS3 can no doubt be approached differently is to compare the results in [11] and [35]; the second paper proves results which partially overlap those in the first, by using a different technique.

C. The Partial Fourier Ensemble

We briefly discuss two recent articles which do not fit in the n-widths tradition followed here, and so were not easy to cite earlier with due prominence. First, and closest to our viewpoint, is the breakthrough paper of Candès, Romberg, and Tao [13]. This was discussed in Section IV-B; the result of [13] showed that ℓ1 minimization can be used to exactly recover sparse sequences from the Fourier transform at randomly chosen frequencies, whenever the sequence has sufficiently few nonzeros relative to the number of sampled frequencies. Second is the article of Gilbert et al. [12], which showed that a different nonlinear reconstruction algorithm can be used to recover approximations to a vector which are nearly as good as the best N-term approximation in ℓ2 norm, using random but nonuniform samples in the frequency domain; the parameter involved is (it seems) an upper bound on a norm of the object.

These papers both point to the partial Fourier ensemble, i.e., the collection of matrices made by sampling n rows out of the m-by-m Fourier matrix, as concrete examples of Φ working within the CS framework; that is, generating near-optimal subspaces for Gel'fand widths, and allowing ℓ1 minimization to reconstruct from such information for all 0 < p ≤ 1. Now [13] (in effect) proves that, in the partial Fourier ensemble with uniform measure, the maximal concentration condition A1 (VIII.1) holds with overwhelming probability for large n (for appropriate constants).
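The partial Fourier ensemble lends itself to a small experiment. This sketch assumes NumPy and SciPy; the sizes are illustrative, and the real-valued linear-programming reformulation is an implementation convenience, not the algorithm of either paper.

```python
# Recover a sparse real vector from a few randomly chosen DFT frequencies
# by l1 minimization (complex constraints split into real and imaginary
# parts, then solved as a linear program).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, k = 64, 30, 3                           # signal length, frequencies, sparsity
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

F = np.fft.fft(np.eye(m)) / np.sqrt(m)        # unitary DFT matrix
rows = rng.choice(m, size=n, replace=False)   # n random frequencies
A = np.vstack([F[rows].real, F[rows].imag])   # real-valued reformulation
y = A @ x0

res = linprog(np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:m] - res.x[m:]
err = np.linalg.norm(x_hat - x0)
```

With a handful of nonzeros and a few dozen sampled frequencies, recovery is typically exact, in line with the qualitative content of [13].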
On the other hand, the results in [12] seem to show that condition A2 (VIII.2) holds in the partial Fourier ensemble with overwhelming probability for large n, when it is sampled with a certain nonuniform probability measure. Although the two papers [13], [12] refer to different random ensembles of partial Fourier matrices, both reinforce the idea that interesting and relatively concrete families of operators can be developed for compressed sensing applications. In fact, Candès has informed us of some recent results he obtained with Tao [47] indicating that, modulo polylog factors, A2 holds for the uniformly sampled partial Fourier ensemble. This seems a very significant advance.

Note Added in Proof: In the months since the paper was written, several groups have conducted numerical experiments on synthetic and real data for the method described here and related methods. They have explored applicability to important sensor problems and studied applications issues such as stability in the presence of noise. The reader may wish to consult the forthcoming Special Issue on Sparse Representation of the EURASIP journal Applied Signal Processing, or look for papers presented at a special session of ICASSP 2005, the workshop on sparse representation held in May 2005 at the University of Maryland Center for Scientific Computing and Applied Mathematics, or the Spars05 workshop held in November 2005 at the Université de Rennes.

A referee has pointed out that compressed sensing is in some respects similar to problems arising in data stream processing, where one wants to learn basic properties (e.g., moments, histograms) of a data stream without storing the stream. In short, one wants to make relatively few measurements while inferring relatively much detail. The notions of "iceberg queries" in large databases and "heavy hitters" in data streams may provide points of entry into that literature.
ACKNOWLEDGMENT

In spring 2004, Emmanuel Candès told the present author about his ideas for using the partial Fourier ensemble in "undersampled imaging"; some of these were published in [13]; see also the presentation [14]. More recently, Candès informed the present author of the results in [47] referred to above. It is a pleasure to acknowledge the inspiring nature of these conversations. The author would also like to thank Anna Gilbert for telling him about her work [12] on finding the B-best Fourier coefficients by nonadaptive sampling, and to thank Emmanuel Candès for conversations clarifying Gilbert's work. Thanks to the referees for numerous suggestions which helped to clarify the exposition and argumentation. Anna Gilbert offered helpful pointers to the data stream processing literature.

REFERENCES

[1] D. L. Donoho, M. Vetterli, R. A. DeVore, and I. C. Daubechies, "Data compression and harmonic analysis," IEEE Trans. Inf. Theory, vol. 44, no. 6, Oct. 1998.
[2] Y. Meyer, Wavelets and Operators. Cambridge, U.K.: Cambridge Univ. Press, 1992.
[3] D. L. Donoho, "Unconditional bases are optimal bases for data compression and for statistical estimation," Appl. Comput. Harmonic Anal., vol. 1, 1993.
[4] ——, "Sparse components of images and optimal atomic decomposition," Constructive Approx., vol. 17, 2001.
[5] A. Pinkus, "n-widths and optimal recovery in approximation theory," in Proc. Symp. Applied Mathematics, vol. 36, C. de Boor, Ed., Providence, RI, 1986.
[6] J. F. Traub and H. Woźniakowski, A General Theory of Optimal Algorithms. New York: Academic, 1980.
[7] D. L. Donoho, "Unconditional bases and bit-level compression," Appl. Comput. Harmonic Anal., vol. 3, 1996.
[8] A. Pinkus, n-Widths in Approximation Theory. New York: Springer-Verlag, 1985.
[9] A. Y. Garnaev and E. D. Gluskin, "On widths of the Euclidean ball" (in English), Sov. Math. Dokl., vol. 30, 1984.
[10] B. S. Kashin, "Diameters of certain finite-dimensional sets in classes of smooth functions," Izv. Akad. Nauk SSSR, Ser. Mat., vol. 41, no. 2, 1977.


LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

1 2 The calibration problem was discussed in details during lecture 3. 3 Once the camera is calibrated (intrinsics are known) and the transformation from the world reference system to the camera reference

Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

This material is posted here with permission of the IEEE Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services Internal

The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

Manifold Learning Examples PCA, LLE and ISOMAP Dan Ventura October 14, 28 Abstract We try to give a helpful concrete example that demonstrates how to use PCA, LLE and Isomap, attempts to provide some intuition

Lecture 4. LaSalle s Invariance Principle We begin with a motivating eample. Eample 4.1 (nonlinear pendulum dynamics with friction) Figure 4.1: Pendulum Dynamics of a pendulum with friction can be written

.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

LECTURE NOTES: FINITE ELEMENT METHOD AXEL MÅLQVIST. Motivation The finite element method has two main strengths... Geometry. Very complex geometries can be used. This is probably the main reason why finite

3.7 Non-autonomous linear systems of ODE. General theory Now I will study the ODE in the form ẋ = A(t)x + g(t), x(t) R k, A, g C(I), (3.1) where now the matrix A is time dependent and continuous on some