Weak topology

This article is about the weak topology on a normed vector space. For the weak topology induced by a general family of maps, see initial topology. For the weak topology generated by a cover of a space, see coherent topology.

One may call subsets of a topological vector space weakly closed (respectively, weakly compact, etc.) if they are closed (respectively, compact, etc.) with respect to the weak topology. Likewise, functions are sometimes called weakly continuous (respectively, weakly differentiable, weakly analytic, etc.) if they are continuous (respectively, differentiable, analytic, etc.) with respect to the weak topology.

We may define a possibly different topology on X using the continuous (or topological) dual space X*. The topological dual space consists of all linear functionals from X into the base field K that are continuous with respect to the given topology. The weak topology on X is the initial topology with respect to the family X*. In other words, it is the coarsest topology on X such that each element of X* remains a continuous function. In order to distinguish the weak topology from the original topology on X, the original topology is often called the strong topology.

A subbase for the weak topology is the collection of sets of the form φ−1(U) where φ ∈ X* and U is an open subset of the base field K. In other words, a subset of X is open in the weak topology if and only if it can be written as a union of (possibly infinitely many) sets, each of which is an intersection of finitely many sets of the form φ−1(U).
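As a numerical sketch (not part of the article), a basic weakly open neighborhood of a point x0 determined by finitely many functionals φ1, …, φk and a tolerance ε is the finite intersection of preimages {x : |φi(x) − φi(x0)| < ε}; the helper name `in_basic_weak_nbhd` and the particular functionals below are illustrative choices, assuming X = R² with two coordinate-based linear functionals.

```python
# Sketch: membership in a basic weakly open neighborhood in R^2.
# A basic weak neighborhood of x0 given by functionals phi_1,...,phi_k and eps is
#   N = { x : |phi_i(x) - phi_i(x0)| < eps for all i },
# i.e. a finite intersection of preimages phi_i^{-1}(U_i) of open sets U_i in K.

def in_basic_weak_nbhd(x, x0, functionals, eps):
    """True if x lies in the basic weak neighborhood of x0."""
    return all(abs(phi(x) - phi(x0)) < eps for phi in functionals)

# Two continuous linear functionals on R^2 (hypothetical example choices).
phi1 = lambda p: p[0]
phi2 = lambda p: p[0] + p[1]

x0 = (1.0, 2.0)
assert in_basic_weak_nbhd((1.05, 1.98), x0, [phi1, phi2], eps=0.2)
assert not in_basic_weak_nbhd((2.0, 2.0), x0, [phi1, phi2], eps=0.2)
```

Unions of such finite intersections are exactly the weakly open sets described above.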

More generally, if F is a subset of the algebraic dual space, then the initial topology of X with respect to F, denoted by σ(X,F), is the weak topology with respect to F. If one takes F to be the whole continuous dual space of X, then the weak topology with respect to F coincides with the weak topology defined above.

If the field K has an absolute value |⋅|, then the weak topology σ(X,F) is induced by the family of seminorms

‖x‖f := |f(x)|

for all f∈F and x∈X. In particular, weak topologies are locally convex.
From this point of view, the weak topology is the coarsest polar topology; see weak topology (polar topology) for details. Specifically, if F is a vector space of linear functionals on X which separates points of X, then the continuous dual of X with respect to the topology σ(X,F) is precisely equal to F (Rudin 1991, Theorem 3.10).

If X is a normed space, then the dual space X* is itself a normed vector space by using the norm ‖φ‖ = sup‖x‖≤1 |φ(x)|. This norm gives rise to a topology, called the strong topology, on X*. This is the topology of uniform convergence. The uniform and strong topologies are generally different for other spaces of linear maps; see below.
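The dual norm can be illustrated with a small numerical sketch (not from the article), assuming X = R³ with the Euclidean norm, where every functional has the form φ(x) = ⟨x, v⟩ and its dual norm equals the Euclidean norm of v; the vector `v` below is an arbitrary example choice.

```python
# Sketch: ||phi|| = sup over the unit ball of |phi(x)|, for phi(x) = <x, v> on R^3.
# By Cauchy-Schwarz, samples on the unit sphere never exceed ||v||, and the
# maximizer x = v/||v|| attains it, so the sup equals ||v|| (= 5 here).
import math
import random

v = [3.0, 0.0, 4.0]
phi = lambda x: sum(a * b for a, b in zip(x, v))
norm = lambda x: math.sqrt(sum(a * a for a in x))

random.seed(0)
best = 0.0
for _ in range(2000):
    x = [random.gauss(0, 1) for _ in range(3)]
    x = [a / norm(x) for a in x]          # project onto the unit sphere
    best = max(best, abs(phi(x)))

assert best <= norm(v) + 1e-12            # sampling never beats ||v||
assert abs(phi([a / norm(v) for a in v]) - norm(v)) < 1e-12   # attained at v/||v||
```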

Each x in X induces a linear functional Tx on X*, defined by Tx(φ) = φ(x); taken together, these define the canonical map T : X → X** into the double dual. Thus T : X → X** is an injective linear mapping, though not necessarily surjective (spaces for which this canonical embedding is surjective are called reflexive). The weak-* topology on X* is the weak topology induced by the image of T: T(X) ⊂ X**. In other words, it is the coarsest topology such that the maps Tx from X* to the base field R or C remain continuous.

A net φλ in X* is convergent to φ in the weak-* topology if it converges pointwise:

φλ(x) → φ(x)

for all x in X. In particular, a sequence φn ∈ X* converges to φ provided that

φn(x) → φ(x)

for all x in X. In this case, one writes

φn →w* φ

as n → ∞.

The weak-* topology is sometimes called the topology of simple convergence or the topology of pointwise convergence. Indeed, it coincides with the topology of pointwise convergence of linear functionals.
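A standard concrete instance (a sketch, not taken from the article) lives on X = C[0,1] with the sup norm: the Dirac functional δt(f) = f(t) is continuous and linear, and the sequence δ1/n converges to δ0 in the weak-* topology, since f(1/n) → f(0) for every continuous f, even though ‖δ1/n − δ0‖ = 2 for every n, so there is no norm convergence. The test functions below are arbitrary choices.

```python
# Sketch: weak-* convergence of Dirac functionals delta_{1/n} -> delta_0 on C[0,1].
# Pointwise: delta_{1/n}(f) = f(1/n) -> f(0) for every continuous f.
import math

def delta(t):
    """The evaluation (Dirac) functional at t."""
    return lambda f: f(t)

for f in (math.exp, math.cos, lambda x: x * x + 1.0):
    gaps = [abs(delta(1.0 / n)(f) - delta(0.0)(f)) for n in (1, 10, 100, 1000)]
    assert gaps[-1] < 1e-2              # the pointwise gap shrinks toward 0
    assert gaps[-1] <= gaps[0] + 1e-12  # and is smaller than at n = 1
```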

By definition, the weak* topology is weaker than the weak topology on X*. An important fact about the weak* topology is the Banach–Alaoglu theorem: if X is normed, then the closed unit ball in X* is weak*-compact (more generally, the polar in X* of a neighborhood of 0 in X is weak*-compact). Moreover, the closed unit ball in a normed space X is compact in the weak topology if and only if X is reflexive.

In more generality, let F be a locally compact valued field (e.g., the reals, the complex numbers, or any of the p-adic number systems). Let X be a normed topological vector space over F, with a norm compatible with the absolute value on F. Then in X*, the topological dual space of continuous F-valued linear functionals on X, all norm-closed balls are compact in the weak-* topology.

If a normed space X is separable, then the weak-* topology is metrizable on the norm-bounded subsets of X*. If X is a Banach space, the weak-* topology is not metrizable on all of X* unless X is finite-dimensional.[1]

For example, a sequence of functions ψk in L2 converges weakly to ψ provided that

∫ ψk f dx → ∫ ψ f dx

for all functions f∈L2 (or, more typically, all f in a dense subset of L2 such as a space of test functions, if the sequence {ψk} is bounded). For given test functions, the relevant notion of convergence only corresponds to the topology used in C.
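A classical example of this, sketched numerically below (under the assumption of the textbook example, not a construction from this article), is ψn(x) = sin(nx) on (0, 2π): it converges weakly to 0 in L² even though ‖ψn‖ = √π for every n. The pairing ∫ sin(nx) f(x) dx is approximated by a midpoint Riemann sum for the test function f(x) = x, whose exact pairing is −2π/n.

```python
# Sketch: sin(n x) converges weakly to 0 in L^2(0, 2*pi).
# The pairing with the test function f(x) = x is exactly -2*pi/n, so its
# magnitude decays as n grows, while ||sin(n x)||_{L^2} stays constant.
import math

def pairing(n, f, points=40000):
    """Midpoint Riemann sum approximating integral_0^{2 pi} sin(n x) f(x) dx."""
    a, b = 0.0, 2.0 * math.pi
    h = (b - a) / points
    return sum(math.sin(n * (a + (k + 0.5) * h)) * f(a + (k + 0.5) * h)
               for k in range(points)) * h

f = lambda x: x
vals = [abs(pairing(n, f)) for n in (1, 4, 16, 64)]
assert vals == sorted(vals, reverse=True)          # |<psi_n, f>| decays in n
assert abs(pairing(64, f) + 2.0 * math.pi / 64.0) < 1e-3   # matches -2*pi/n
```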

One normally obtains spaces of distributions by forming the strong dual of a space of test functions (such as the compactly supported smooth functions on Rn). In an alternative construction of such spaces, one can take the weak dual of a space of test functions inside a Hilbert space such as L2. Thus one is led to consider the idea of a rigged Hilbert space.

If X and Y are topological vector spaces, the space L(X,Y) of continuous linear operators f : X → Y may carry a variety of different possible topologies. The naming of such topologies depends on the kind of topology one is using on the target space Y to define operator convergence (Yosida 1980, IV.7 Topologies of linear maps). There are, in general, a vast array of possible operator topologies on L(X,Y), whose naming is not entirely intuitive.

For example, the strong operator topology on L(X,Y) is the topology of pointwise convergence. If Y is a normed space, then this topology is defined by the seminorms indexed by x∈X:

f ↦ ‖f(x)‖Y.

More generally, if a family of seminorms Q defines the topology on Y, then the seminorms pq,x on L(X,Y) defining the strong topology are given by

pq,x(f) = q(f(x)),

for q ∈ Q and x ∈ X.
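The gap between this pointwise (strong operator) topology and the norm topology on operators can be illustrated numerically. The following sketch (an assumption-laden finite truncation of ℓ², not a construction from the article) uses the truncation projections Pn, which keep the first n coordinates and zero the rest: Pn x → x for each fixed x, so Pn → I in the strong operator topology, yet ‖I − Pn‖ = 1 for every n, as witnessed by the unit vector e(n+1).

```python
# Sketch: SOT convergence without norm convergence, via truncation projections
# P_n on a long finite slice of l^2 (x_k = 1/k is square-summable).
import math

M = 100000
x = [1.0 / k for k in range(1, M + 1)]

def tail_norm(n):
    """||P_n x - x||: the l^2 norm of the coordinates of x beyond position n."""
    return math.sqrt(sum(c * c for c in x[n:]))

tails = [tail_norm(n) for n in (10, 100, 1000, 10000)]
assert all(a > b for a, b in zip(tails, tails[1:]))   # P_n x -> x for this x
assert tails[-1] < 0.02

def apply_I_minus_Pn(n, v):
    """(I - P_n) v: zero out the first n coordinates of v."""
    return [0.0] * n + v[n:]

n = 1000
e = [0.0] * M
e[n] = 1.0                                # the unit vector e_{n+1} (0-based index n)
gap = math.sqrt(sum(c * c for c in apply_I_minus_Pn(n, e)))
assert abs(gap - 1.0) < 1e-12             # so ||I - P_n|| >= 1 for every n
```

So every seminorm f ↦ ‖f(x)‖ tends to 0 along I − Pn, while the operator norm does not.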

1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope, Mathematicians seek out patterns and use them to formulate new conjectures. Mathematicians resolve the truth or falsity of conjectures by mathematical proof, when mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry, rigorous arguments first appeared in Greek mathematics, most notably in Euclids Elements. Galileo Galilei said, The universe cannot be read until we have learned the language and it is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth, carl Friedrich Gauss referred to mathematics as the Queen of the Sciences. Benjamin Peirce called mathematics the science that draws necessary conclusions, David Hilbert said of mathematics, We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules, rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise. Albert Einstein stated that as far as the laws of mathematics refer to reality, they are not certain, Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics, Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, the history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns, in Babylonian mathematics elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing and numeral systems have many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today, the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, in Greece, the word for mathematics came to have the narrower and more technical meaning mathematical study even in Classical times

2.
Hilbert space
–
The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of algebra and calculus from the two-dimensional Euclidean plane. A Hilbert space is a vector space possessing the structure of an inner product that allows length. Furthermore, Hilbert spaces are complete, there are limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces, the earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis —and ergodic theory, john von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis, geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space, at a deeper level, perpendicular projection onto a subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be specified by its coordinates with respect to a set of coordinate axes. When that set of axes is countably infinite, this means that the Hilbert space can also usefully be thought of in terms of the space of sequences that are square-summable. The latter space is often in the literature referred to as the Hilbert space. One of the most familiar examples of a Hilbert space is the Euclidean space consisting of vectors, denoted by ℝ3. 
The dot product takes two vectors x and y, and produces a real number x·y, If x and y are represented in Cartesian coordinates, then the dot product is defined by ⋅ = x 1 y 1 + x 2 y 2 + x 3 y 3. The dot product satisfies the properties, It is symmetric in x and y, x · y = y · x. It is linear in its first argument, · y = ax1 · y + bx2 · y for any scalars a, b, and vectors x1, x2, and y. It is positive definite, for all x, x · x ≥0, with equality if. An operation on pairs of vectors that, like the dot product, a vector space equipped with such an inner product is known as a inner product space. Every finite-dimensional inner product space is also a Hilbert space, multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist

3.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i2 = −1. In this expression, a is the part and b is the imaginary part of the complex number. If z = a + b i, then ℜ z = a, ℑ z = b, Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point in the complex plane, a complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have applications in many fields, including physics, chemistry, biology, economics, electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers and he called them fictitious during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to equations that have no solutions in real numbers. For example, the equation 2 = −9 has no real solution, Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the unit i where i2 = −1. According to the theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi, for example, −3.5 + 2i is a complex number. The real number a is called the part of the complex number a + bi. By this convention the imaginary part does not include the unit, hence b. The real part of a number z is denoted by Re or ℜ. For example, Re ⁡ = −3.5 Im ⁡ =2, hence, in terms of its real and imaginary parts, a complex number z is equal to Re ⁡ + Im ⁡ ⋅ i. 
This expression is known as the Cartesian form of z. A real number a can be regarded as a number a + 0i whose imaginary part is 0

4.
Weak convergence (Hilbert space)
–
In mathematics, weak convergence in a Hilbert space is convergence of a sequence of points in the weak topology. The notation x n ⇀ x is used to denote this kind of convergence. If a sequence converges strongly, then it converges weakly as well, since every closed and bounded set is weakly relatively compact, every bounded sequence x n in a Hilbert space H contains a weakly convergent subsequence. Note that closed and bounded sets are not in general weakly compact in Hilbert spaces, however, bounded and weakly closed sets are weakly compact so as a consequence every convex bounded closed set is weakly compact. As a consequence of the principle of uniform boundedness, every weakly convergent sequence is bounded. The norm is weakly lower-semicontinuous, if x n converges weakly to x, then ∥ x ∥ ≤ lim inf n → ∞ ∥ x n ∥, for example, infinite orthonormal sequences converge weakly to zero, as demonstrated below. If the Hilbert space is finite-dimensional, i. e. a Euclidean space, then the concepts of weak convergence and strong convergence are the same. The Hilbert space L2 is the space of the functions on the interval equipped with the inner product defined by ⟨ f, g ⟩ = ∫02 π f ⋅ g d x. Although f n has an number of 0s in as n goes to infinity. Note that f n does not converge to 0 in the L ∞ or L2 norms and this dissimilarity is one of the reasons why this type of convergence is considered to be weak. Consider a sequence e n which was constructed to be orthonormal and we claim that if the sequence is infinite, then it converges weakly to zero. A simple proof is as follows, for x ∈ H, we have ∑ n | ⟨ e n, x ⟩ |2 ≤ ∥ x ∥2 where equality holds when is a Hilbert space basis. Therefore | ⟨ e n, x ⟩ |2 →0 i. e. ⟨ e n, x ⟩ →0. The Banach–Saks theorem states that every bounded sequence x n contains a subsequence x n k, the definition of weak convergence can be extended to Banach spaces. 
If B is a Hilbert space, then, by the Riesz representation theorem, any such f has the form f = ⟨ ⋅, y ⟩ for some y in B, so one obtains the Hilbert space definition of weak convergence

5.
Banach space
–
In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn, Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces, the vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors. All norms on a vector space are equivalent. Every finite-dimensional normed space over R or C is a Banach space, if X and Y are normed spaces over the same ground field K, the set of all continuous K-linear maps T, X → Y is denoted by B. In infinite-dimensional spaces, not all maps are continuous. For Y a Banach space, the space B is a Banach space with respect to this norm, if X is a Banach space, the space B = B forms a unital Banach algebra, the multiplication operation is given by the composition of linear maps. If X and Y are normed spaces, they are isomorphic normed spaces if there exists a linear bijection T, X → Y such that T, if one of the two spaces X or Y is complete then so is the other space. Two normed spaces X and Y are isometrically isomorphic if in addition, T is an isometry, the Banach–Mazur distance d between two isomorphic but not isometric spaces X and Y gives a measure of how much the two spaces X and Y differ. Every normed space X can be embedded in a Banach space. More precisely, there is a Banach space Y and an isometric mapping T, X → Y such that T is dense in Y. If Z is another Banach space such that there is an isomorphism from X onto a dense subset of Z. 
This Banach space Y is the completion of the normed space X, the underlying metric space for Y is the same as the metric completion of X, with the vector space operations extended from X to Y. The completion of X is often denoted by X ^, the cartesian product X × Y of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are used, such as ∥ ∥1 = ∥ x ∥ + ∥ y ∥, ∥ ∥ ∞ = max. In this sense, the product X × Y is complete if and only if the two factors are complete. If M is a linear subspace of a normed space X, there is a natural norm on the quotient space X / M

6.
Field (mathematics)
–
In mathematics, a field is a set on which are defined addition, subtraction, multiplication, and division, which behave as they do when applied to rational and real numbers. A field is thus an algebraic structure, which is widely used in algebra, number theory. The best known fields are the field of numbers. In addition, the field of numbers is widely used, not only in mathematics. Finite fields are used in most cryptographic protocols used for computer security, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Formally, a field is a set together with two operations the addition and the multiplication, which have the properties, called axioms of fields. An operation is a mapping that associates an element of the set to every pair of its elements, the result of the addition of a and b is called the sum of a and b and denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b, associativity of addition and multiplication For all a, b and c in F, one has a + = + c and a · = · c. Commutativity of addition and multiplication For all a and b in F one has a + b = b + a and a · b = b · a. Existence of additive and multiplicative identity elements There exists an element 0 in F, called the identity, such that for all a in F. There is an element 1, different from 0 and called the identity, such that for all a in F. Existence of additive inverses and multiplicative inverses For every a in F, there exists an element in F, denoted −a, such that a + =0. For every a ≠0 in F, there exists an element in F, denoted a−1, 1/a, or 1/a, distributivity of multiplication over addition For all a, b and c in F, one has a · = +. The elements 0 and 1 being required to be distinct, a field has, at least, for every a in F, one has − a = ⋅ a. Thus, the inverse of every element is known as soon as one knows the additive inverse of 1. 
A subtraction and a division are defined in every field by a − b = a +, a subfield E of a field F is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. It is straightforward to verify that a subfield is indeed a field, two groups are associated to every field. The field itself is a group under addition, when considering this group structure rather the field structure, one talks of the additive group of the field

7.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function, a continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the concepts of topology. The introductory portion of this focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the general case of functions between two metric spaces. In order theory, especially in theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article, as an example, consider the function h, which describes the height of a growing flower at time t. By contrast, if M denotes the amount of money in an account at time t, then the function jumps at each point in time when money is deposited or withdrawn. A form of the definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined infinitely small quantities in terms of quantities. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s but the work wasnt published until the 1930s, all three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of continuity in 1872. This is not a definition of continuity since the function f =1 x is continuous on its whole domain of R ∖ A function is continuous at a point if it does not have a hole or jump. A “hole” or “jump” in the graph of a function if the value of the function at a point c differs from its limiting value along points that are nearby. 
Such a point is called a discontinuity, a function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain. Otherwise, a function is discontinuous, at the points where the value of the function differs from its limiting value, there are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient definition can be used to determine whether a function is continuous or not. In the definitions below, f, I → R. is a function defined on a subset I of the set R of real numbers and this subset I is referred to as the domain of f

8.
Initial topology
–
In general topology and related areas of mathematics, the initial topology on a set X, with respect to a family of functions on X, is the coarsest topology on X that makes those functions continuous. The subspace topology and product topology constructions are both special cases of initial topologies, indeed, the initial topology construction can be viewed as a generalization of these. The dual construction is called the final topology, explicitly, the initial topology may be described as the topology generated by sets of the form f i −1, where U is an open set in Y i. The sets f i −1 are often called cylinder sets, if I contains exactly one element, all the open sets of are cylinder sets. Several topological constructions can be regarded as special cases of the initial topology, the subspace topology is the initial topology on the subspace with respect to the inclusion map. The product topology is the initial topology with respect to the family of projection maps, the inverse limit of any inverse system of spaces and continuous maps is the set-theoretic inverse limit together with the initial topology determined by the canonical morphisms. The weak topology on a convex space is the initial topology with respect to the continuous linear forms of its dual space. Given a family of topologies on a fixed set X the initial topology on X with respect to the functions idi and that is, the initial topology τ is the topology generated by the union of the topologies. A topological space is regular if and only if it has the initial topology with respect to its family of real-valued continuous functions. Every topological space X has the initial topology with respect to the family of functions from X to the Sierpiński space. The initial topology on X can be characterized by the characteristic property, A function g from some space Z to X is continuous if. Note that, despite looking quite similar, this is not a universal property, a categorical description is given below. 
By the universal property of the topology, we know that any family of continuous maps fi. This map is known as the evaluation map, a family of maps is said to separate points in X if for all x ≠ y in X there exists some i such that fi ≠ fi. Clearly, the family separates points if and only if the evaluation map f is injective. The evaluation map f will be an embedding if and only if X has the initial topology determined by the maps. If a space X comes equipped with a topology, it is useful to know whether or not the topology on X is the initial topology induced by some family of maps on X. This section gives a sufficient condition, a family of continuous maps separates points from closed sets if and only if the cylinder sets f i −1, for U open in Yi, form a base for the topology on X

9.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a tool of calculus. For example, the derivative of the position of an object with respect to time is the objects velocity. The derivative of a function of a variable at a chosen input value. The tangent line is the best linear approximation of the function near that input value, for this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of real variables. In this generalization, the derivative is reinterpreted as a transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables and it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation, the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration, differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative, the derivative of a function y = f of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x, If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point. 
The simplest case, apart from the case of a constant function, is when y is a linear function of x. This formula is true because y + Δ y = f = m + b = m x + m Δ x + b = y + m Δ x. Thus, since y + Δ y = y + m Δ x and this gives an exact value for the slope of a line. If the function f is not linear, however, then the change in y divided by the change in x varies, differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the value of the ratio of the differences Δy / Δx as Δx becomes infinitely small

10.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers. The operations of addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space and they represent physical quantities such as forces, any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used. This is particularly the case of Banach spaces and Hilbert spaces, historically, the first ideas leading to vector spaces can be traced back as far as the 17th centurys analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science and engineering, furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques, Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra. 
The concept of space will first be explained by describing two particular examples, The first example of a vector space consists of arrows in a fixed plane. This is used in physics to describe forces or velocities, given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w, when a is negative, av is defined as the arrow pointing in the opposite direction, instead. Such a pair is written as, the sum of two such pairs and multiplication of a pair with a number is defined as follows, + = and a =. The first example above reduces to one if the arrows are represented by the pair of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below, elements of V are commonly called vectors. Elements of F are commonly called scalars, the second operation, called scalar multiplication takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars

Absolute value
In mathematics, the absolute value or modulus |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x (in which case −x is positive), and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers. The absolute value is closely related to the notions of magnitude and distance.

The term absolute value has been used in this sense from at least 1806 in French and 1857 in English. The notation |x|, with a vertical bar on each side, was introduced by Karl Weierstrass in 1841. Other names for absolute value include numerical value and magnitude. In programming languages and computational software packages, the absolute value of x is generally represented by abs(x), or a similar expression. Thus, care must be taken to interpret vertical bars as an absolute value sign only when the argument is an object for which the notion of an absolute value is defined.

For any real number x, the absolute value or modulus of x is denoted by |x| and is defined as

|x| = x, if x ≥ 0; and |x| = −x, if x < 0.

As can be seen from the definition, the absolute value of x is always either positive or zero, never negative. Indeed, the notion of a distance function in mathematics can be seen to be a generalisation of the absolute value of the difference. Since the square root notation without sign represents the positive square root, it follows that |x| = √(x²). This identity is sometimes used as a definition of the absolute value of real numbers.

The absolute value has the four fundamental properties: non-negativity (|x| ≥ 0), positive-definiteness (|x| = 0 if and only if x = 0), multiplicativity (|xy| = |x| |y|), and subadditivity, the triangle inequality (|x + y| ≤ |x| + |y|). The first three properties are readily apparent from the definition. To see that subadditivity holds, choose ε = ±1 so that ε(x + y) ≥ 0; then |x + y| = ε(x + y) = εx + εy ≤ |x| + |y|. Some additional useful properties are given below.
These properties are either implied by or equivalent to the four fundamental properties above. For example, absolute value is used to define the absolute difference |x − y|, the standard metric on the real numbers. Since the complex numbers are not ordered, the piecewise definition given above cannot be directly generalised to a complex number; instead, the identity |x| = √(x²) generalises: for a complex number z = a + bi, the absolute value (or modulus) is defined as |z| = √(a² + b²).
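The four fundamental properties and the complex modulus can be checked numerically. A minimal sketch, using Python's built-in `abs` (the particular sample values are my own choice):

```python
import math

# Check the four fundamental properties of |x| on a few sample reals.
for x in (-3.5, 0.0, 2.0):
    for y in (-1.0, 4.0):
        assert abs(x) >= 0                    # non-negativity
        assert (abs(x) == 0) == (x == 0)      # positive-definiteness
        assert abs(x * y) == abs(x) * abs(y)  # multiplicativity
        assert abs(x + y) <= abs(x) + abs(y)  # triangle inequality
    assert abs(x) == math.sqrt(x * x)         # the identity |x| = sqrt(x^2)

# For complex numbers, abs returns the modulus sqrt(a^2 + b^2).
z = complex(3, -4)
print(abs(z))  # 5.0, since sqrt(3^2 + (-4)^2) = 5
```

Note that `abs` on a `complex` argument already implements the generalised definition |z| = √(a² + b²), while the piecewise real definition would be meaningless there.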

A tiling with squares whose sides are successive Fibonacci numbers in length.

The plot of a convergent sequence (an) is shown in blue. From the graph we can see that the sequence is converging to the limit zero as n increases.

The plot of a Cauchy sequence (xn), shown in blue, plotted as xn versus n. In the graph the sequence appears to be converging to a limit, as the distance between consecutive terms in the sequence gets smaller as n increases. In the real numbers every Cauchy sequence converges to some limit.
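The behaviour described in the caption can be sketched with a concrete sequence. This is an illustrative example of my own, using xn = 1/n: the gaps between consecutive terms shrink as n grows, and in the real numbers the sequence converges (here, to 0). Shrinking consecutive gaps alone do not guarantee the Cauchy property in general; this particular sequence happens to be Cauchy.

```python
# Terms x_1, ..., x_100 of the sequence x_n = 1/n.
x = [1.0 / n for n in range(1, 101)]

# Distances between consecutive terms: |x_{n+1} - x_n| = 1/(n(n+1)).
gaps = [abs(x[i + 1] - x[i]) for i in range(len(x) - 1)]

# The gaps are strictly decreasing, matching the picture in the plot.
assert all(gaps[i] > gaps[i + 1] for i in range(len(gaps) - 1))

print(x[-1])  # 0.01 -- the terms are approaching the limit 0
```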

In mathematics, a function f from a set X to a set Y is surjective (or onto), or a surjection, if for every element y in Y there is at least one element x in X such that f(x) = y.

A surjective function from domain X to codomain Y. The function is surjective because every point in the codomain is the value of f(x) for at least one point x in the domain.

A non-surjective function from domain X to codomain Y. The smaller oval inside Y is the image (also called range) of f. This function is not surjective, because the image does not fill the whole codomain. In other words, Y is colored in a two-step process: First, for every x in X, the point f(x) is colored yellow; Second, all the rest of the points in Y, that are not yellow, are colored blue. The function f is surjective only if there are no blue points.

Interpretation for surjective functions in the Cartesian plane, defined by the mapping f : X → Y, where y = f(x), X is the domain of the function and Y is its range. Every element in the range is mapped onto from an element in the domain by the rule f. That is, every y in Y is mapped from an element x in X, and more than one x can map to the same y. Left: only one domain is shown, which makes f surjective. Right: two possible domains X1 and X2 are shown.

Non-surjective functions in the Cartesian plane. Although some parts of the function are surjective, where elements y in Y do have a value x in X such that y = f(x), some parts are not. Left: There is y0 in Y, but there is no x0 in X such that y0 = f(x0). Right: There are y1, y2 and y3 in Y, but there are no x1, x2, and x3 in X such that y1 = f(x1), y2 = f(x2), and y3 = f(x3).
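For finite sets, the definition of surjectivity can be checked directly. A minimal sketch of my own (the helper `is_surjective` is not from the text): f is surjective onto the codomain Y exactly when its image {f(x) : x in X} covers all of Y.

```python
def is_surjective(f, domain, codomain):
    """True iff every element of the codomain is f(x) for some x in the domain."""
    return {f(x) for x in domain} == set(codomain)

X = {-2, -1, 0, 1, 2}
# Surjective: the image of x -> x^2 on X is exactly {0, 1, 4}.
print(is_surjective(lambda x: x * x, X, {0, 1, 4}))     # True
# Not surjective: no x in X satisfies x^2 = 2, so 2 is never hit.
print(is_surjective(lambda x: x * x, X, {0, 1, 2, 4}))  # False
```

The second call mirrors the non-surjective captions above: an element of the codomain (here 2) has no preimage, so the image does not fill the whole codomain.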