While there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses, a number of processors do have specialized ways to operate on 128-bit chunks of data. The IBM System/370 could be considered the first simple 128-bit computer, as it used 128-bit floating-point registers. Most modern CPUs feature single-instruction multiple-data (SIMD) instruction sets (Streaming SIMD Extensions, AltiVec, etc.) in which 128-bit vector registers are used to store several smaller numbers, such as four 32-bit floating-point numbers. A single instruction can then operate on all these values in parallel. However, these processors do not operate on individual numbers that are 128 binary digits in length; only their registers have a size of 128 bits.

The DEC VAX supported operations on 128-bit integer ('O' or octaword) and 128-bit floating-point ('H-float' or HFLOAT) data types. Support for these operations was an upgrade option rather than a standard feature. Since the VAX's registers were 32 bits wide, a 128-bit operation used four consecutive registers or four longwords in memory.

In the same way that compilers emulate, for example, 64-bit integer arithmetic on architectures whose registers are narrower than 64 bits, some compilers also support 128-bit integer arithmetic. For example, the GCC C compiler, in version 4.6 and later, provides a 128-bit integer type, __int128, for some architectures.[1] For the C programming language, this is a compiler-specific extension, as C11 itself does not guarantee support for 128-bit integers.
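The limb-based emulation such compilers perform can be sketched as follows, here in Python for illustration (the function name and the high/low limb layout are this sketch's own conventions, not GCC's internals): each 128-bit operand is split into two 64-bit halves, and a carry from the low halves propagates into the high halves.

```python
# Sketch of 128-bit addition emulated with 64-bit "limbs", as a compiler
# might lower it on a 64-bit machine. Illustrative only.
MASK64 = (1 << 64) - 1

def add128(a_hi, a_lo, b_hi, b_lo):
    """Add two 128-bit values given as (high, low) pairs of 64-bit limbs."""
    total_lo = a_lo + b_lo
    lo = total_lo & MASK64            # low 64 bits of the sum
    carry = 1 if total_lo > MASK64 else 0
    hi = (a_hi + b_hi + carry) & MASK64  # carry propagates into the high limb
    return hi, lo

# Example: adding 1 to 2^64 - 1 carries into the high limb.
hi, lo = add128(0, MASK64, 0, 1)
print(hi, lo)  # -> 1 0
```

A real compiler emits the analogous add-with-carry instruction pair; the Python version only shows the data flow.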

A 128-bit register can store 2^128 (over 3.40 × 10^38) different values. The range of integer values that can be stored in 128 bits depends on the integer representation used. With the two most common representations, the range is 0 through 340,282,366,920,938,463,463,374,607,431,768,211,455 (2^128 − 1) for representation as an (unsigned) binary number, and −170,141,183,460,469,231,731,687,303,715,884,105,728 (−2^127) through 170,141,183,460,469,231,731,687,303,715,884,105,727 (2^127 − 1) for representation as two's complement.
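These bounds follow directly from the bit width; they can be checked with Python's arbitrary-precision integers:

```python
# Ranges of a 128-bit integer under the two common representations.
unsigned_max = 2**128 - 1      # unsigned binary
signed_min = -(2**127)         # two's complement minimum
signed_max = 2**127 - 1        # two's complement maximum

print(unsigned_max)  # 340282366920938463463374607431768211455
print(signed_min)    # -170141183460469231731687303715884105728
print(signed_max)    # 170141183460469231731687303715884105727
```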

128-bit processors could be used for addressing directly up to 2^128 (over 3.40 × 10^38) bytes, which would greatly exceed the total data stored on Earth as of 2010, estimated to be around 1.2 zettabytes (1.42 × 10^21 bytes).[3]

The AS/400 virtual instruction set defines all pointers as 128-bit. This gets translated to the hardware's real instruction set as required, allowing the underlying hardware to change without needing to recompile the software. Past hardware was 48-bit CISC, while current hardware is 64-bit PowerPC. Because pointers are defined to be 128-bit, future hardware may be 128-bit without software incompatibility.

1.
Central processing unit
–
The computer industry has used the term central processing unit at least since the early 1960s. The form, design and implementation of CPUs have changed over the course of their history. Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip; an IC that contains a CPU may also contain memory and peripheral interfaces. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel, and there also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be rewired to perform different tasks. Since the term CPU is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC; it was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the machine for a new task; with von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities.
This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers, and both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design, using punched paper tape rather than electronic memory. Relays and vacuum tubes were used as switching elements; a useful computer requires thousands or tens of thousands of switching devices, and the overall speed of a system depends on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.

2.
VAX
–
VAX is a discontinued instruction set architecture developed by Digital Equipment Corporation (DEC) in the mid-1970s. The VAX-11/780, introduced on October 25, 1977, was the first of a range of popular and influential computers implementing the architecture. A 32-bit system with a CISC architecture based on DEC's earlier PDP-11, the VAX was designed to extend or replace DEC's various PDP ISAs. The VAX architecture's primary features were virtual addressing and its orthogonal instruction set. Later versions offloaded the compatibility mode and some of the less-used CISC instructions to emulation in the system software. The VAX instruction set was designed to be powerful and orthogonal; when it was introduced, many programs were written in assembly language, so having a programmer-friendly instruction set was important. In time, as more programs were written in higher-level languages, the instruction set became less visible. One unusual aspect of the VAX instruction set is the presence of register masks at the start of each subprogram. These are arbitrary bit patterns that specify, when control is passed to the subprogram, which registers are to be preserved. Since register masks are a form of data embedded within the executable code, they can complicate optimization techniques applied to machine code. The native VAX operating system is Digital's VAX/VMS; the VAX architecture and the OpenVMS operating system were engineered concurrently to take maximum advantage of each other, as was the initial implementation of the VAXcluster facility. Other VAX operating systems have included various releases of BSD UNIX up to 4.3BSD, Ultrix-32, and VAXELN; more recently, NetBSD and OpenBSD support various VAX models, and some work has been done on porting Linux to the VAX architecture. The first VAX model sold was the VAX-11/780, introduced on October 25, 1977 at the Digital Equipment Corporation's Annual Meeting of Shareholders. Bill Strecker, C. Gordon Bell's doctoral student at Carnegie Mellon University, was responsible for the architecture.
Many different models with different prices, performance levels, and capacities were subsequently created, and VAX superminicomputers were very popular in the early 1980s. For a while the VAX-11/780 was used as a standard in CPU benchmarks. It was initially described as a one-MIPS machine, but the actual number of instructions executed in one second was about 500,000, which led to complaints of marketing exaggeration. The result was the definition of a "VAX MIPS", the speed of a VAX-11/780; within the Digital community the term VUP (VAX Unit of Performance) was the more common term, because MIPS do not compare well across different architectures. The related term "cluster VUPs" was informally used to describe the performance of a VAXcluster. The VAX-11/780 included a subordinate stand-alone LSI-11 computer that performed microcode load and booting; this was dropped from subsequent VAX models. Enterprising VAX-11/780 users could therefore run three different Digital Equipment Corporation operating systems: VMS on the VAX processor, and either RSX-11M or RT-11 on the LSI-11. The VAX went through many different implementations; the original VAX-11/780 was implemented in TTL and filled a cabinet with a single CPU.

3.
ICL 2900 Series
–
The ICL 2900 Series was a range of mainframe computer systems announced by the UK manufacturer ICL on 9 October 1974. The company had started development, under the name New Range, immediately on its formation in 1968. Several options were considered; these included enhancements to either ICT's 1900 Series or the English Electric System 4, and a development based on J. K. Iliffe's Basic Language Machine. The option finally selected was the so-called Synthetic Option, a new design starting with a clean sheet of paper. As the name implies, the design was influenced by many sources, including ICL's own earlier machines. The design of Burroughs mainframes was influential, although ICL rejected the concept of optimising the design for one high-level language; the Multics system provided other ideas, notably in the area of protection. However, the biggest single influence was probably the MU5 machine developed at Manchester University. The 2900 Series architecture uses the concept of a virtual machine as the set of resources available to a program. The concept of a virtual machine in the 2900 Series architecture should not be confused with the way the term is used in other environments; because each program runs in its own virtual machine, the concept may be likened to a process in other operating systems. The most obvious resource in a virtual machine is the virtual store. Other resources include peripherals, files, network connections, and so on. Within a virtual machine, code can run at up to sixteen different levels of protection, called access levels. System calls thus involve a change of access level, but not an expensive call to invoke code in a different virtual machine. Every code module executes at an access level, and can invoke the functions offered by lower-level code; the architecture thus offers a built-in encapsulation mechanism to ensure system integrity. Segments of memory can be shared between virtual machines.
For example, global memory segments are used for database lock tables, and hardware semaphore instructions are available to synchronise access to such segments. The 2900 architecture supports a hardware-based call stack, providing an efficient vehicle for executing high-level language programs. This was a forward-looking decision at the time, since it was expected that the dominant programming languages would initially be COBOL and FORTRAN. The architecture provides built-in mechanisms for making procedure calls using the stack, and special-purpose registers for addressing the top of the stack. Off-stack data is typically addressed via a descriptor: a 64-bit structure containing a 32-bit virtual address plus 32 bits of control information. The 32-bit virtual address comprises a 14-bit segment number and an 18-bit displacement within the segment.
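The descriptor layout just described can be sketched as a pair of pack/unpack helpers, here in Python. Only the field widths (32 control bits, a 14-bit segment number, an 18-bit displacement) come from the text; the exact bit ordering and the function names are this sketch's own assumptions, not the real 2900 encoding.

```python
# Illustrative packing of a 2900-style 64-bit descriptor:
# [ 32 bits control | 14 bits segment | 18 bits displacement ]
def pack_descriptor(control, segment, displacement):
    assert control < 2**32 and segment < 2**14 and displacement < 2**18
    virtual_address = (segment << 18) | displacement   # 32-bit virtual address
    return (control << 32) | virtual_address           # 64-bit descriptor

def unpack_descriptor(desc):
    control = desc >> 32
    virtual_address = desc & 0xFFFFFFFF
    segment = virtual_address >> 18
    displacement = virtual_address & (2**18 - 1)
    return control, segment, displacement

d = pack_descriptor(0xDEADBEEF, 0x2A, 0x12345)
print(unpack_descriptor(d))  # -> (3735928559, 42, 74565)
```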

4.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 the base, and −4 the exponent. The term floating point refers to the fact that a number's radix point can "float": it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; since the 1990s, however, the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits, and there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the string can be of any length. If the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point; so a fixed-point scheme might be to use a string of 8 decimal digits with the point in the middle. The scaling factor, as a power of ten, is then indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base, together with a signed integer exponent.
The digit string is referred to as the significand (or mantissa); the length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. The signed integer exponent modifies the magnitude of the number. Using base-10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers. Symbolically, this value is s ÷ b^(p−1) × b^e, where s is the significand, p is the precision, b is the base, and e is the exponent.
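The decimal example above can be checked against the formula s ÷ b^(p−1) × b^e directly, here using Python's exact rational arithmetic so no rounding obscures the result:

```python
# Verify that significand 1528535047 with base 10, precision 10 and
# exponent 5 represents 152853.5047, per the formula s / b**(p-1) * b**e.
from fractions import Fraction

s, p, b, e = 1528535047, 10, 10, 5
value = Fraction(s, b**(p - 1)) * b**e   # exact rational: 1528535047 / 10**4

print(float(value))  # -> 152853.5047
```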

5.
Binary number
–
The base-2 system is a positional notation with a radix of 2. Because of its straightforward implementation in digital electronic circuitry using logic gates, it is used internally by almost all modern computers. Each digit is referred to as a bit. The modern binary number system was devised by Gottfried Leibniz in 1679 and appears in his article Explication de l'Arithmétique Binaire. Systems related to binary numbers have appeared earlier in multiple cultures, including ancient Egypt and China; Leibniz was specifically inspired by the Chinese I Ching. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions and Horus-Eye fractions, and the method used for ancient Egyptian multiplication is also closely related to binary numbers. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus. The I Ching dates from the 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique, and it is based on the taoistic duality of yin and yang. Eight trigrams and a set of 64 hexagrams, analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China. The Song Dynasty scholar Shao Yong rearranged the hexagrams in a format that resembles modern binary numbers. The Indian scholar Pingala developed a binary system for describing prosody, using binary numbers in the form of short and long syllables; Pingala's Hindu classic titled Chandaḥśāstra describes the formation of a matrix in order to give a unique value to each meter. The binary representations in Pingala's system increase towards the right. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa, and sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá, as well as in medieval Western geomancy.
The base-2 system utilized in geomancy had long been applied in sub-Saharan Africa. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own beliefs as a Christian. Binary numerals were central to Leibniz's theology: he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing, a concept that, in his words, "is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power". In 1854, British mathematician George Boole published a paper detailing an algebraic system of logic that would become known as Boolean algebra.

6.
Bus (computing)
–
In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers. The expression covers all related hardware components and software, including communication protocols. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop or daisy-chain topology, or connected by switched hubs, as in the case of USB. An early computer might contain a hand-wired CPU of vacuum tubes and a drum for main memory; in such systems, buses of one form or another move data between all of these devices. In most traditional computer architectures, the CPU and main memory tend to be tightly coupled, and in most cases they share signalling characteristics. The bus connecting the CPU and memory is one of the defining characteristics of the system, and is often referred to simply as the system bus. It is possible to allow peripherals to communicate with memory in the same fashion; this is commonly accomplished through some sort of standardized electrical connector, several of these forming the expansion bus or local bus. However, as the differences between the CPU and peripherals vary widely, some solution is generally needed to ensure that peripherals do not slow overall system performance. Many CPUs feature a second set of pins similar to those for communicating with memory; others use smart controllers to place the data directly in memory. Most modern systems combine both solutions, where appropriate. As the number of potential peripherals grew, using an expansion card for every peripheral became increasingly untenable. This has led to the introduction of bus systems designed specifically to support multiple peripherals; common examples are the SATA ports in modern computers, which allow a number of hard drives to be connected without the need for a card. However, these systems are generally too expensive to implement in low-end devices.
This has led to the development of a number of low-performance bus systems for these solutions; all such examples may be referred to as peripheral buses, although this terminology is not universal. In modern systems the performance difference between the CPU and main memory has grown so great that increasing amounts of high-speed memory are built directly into the CPU, known as a cache. System buses are used to communicate with most other peripherals, through adaptors; such systems are more similar to multicomputers, communicating over a bus rather than a network. In these cases, expansion buses are entirely separate and no longer share any architecture with their host CPU; what would formerly have been a system bus is now often known as a front-side bus.

7.
IBM System/370
–
The IBM System/370 was a model range of IBM mainframe computers announced on June 30, 1970 as the successors to the System/360 family. Among its new features was 128-bit floating-point arithmetic on all models; a Dynamic Address Translation (DAT) option was not announced until 1972. The original System/370 line underwent several architectural improvements during its roughly 20-year lifetime. The first System/370 machines, the Model 155 and the Model 165, incorporated only minor changes to the System/360 architecture. These changes included 13 new instructions, among which were MOVE LONG and COMPARE LOGICAL LONG, permitting operations on up to 2^24 − 1 bytes, versus the 256 bytes of the System/360's MVC and CLC instructions. They did not include support for virtual memory. In 1972, a very significant change was made when support for virtual memory was introduced with IBM's "System/370 Advanced Function" announcement; IBM had initially chosen to exclude virtual storage from the S/370 line. The S/370-145 had an associative memory used by the microcode for the DOS compatibility feature from its first shipments in June 1971, and the same hardware was later used by the microcode for DAT. The 145's microcode architecture simplified the addition of virtual memory, allowing this capability to be present in early 145s without the extensive modifications needed in other models. (The Reference and Change bits of the storage-protection keys, moreover, were already labeled on the console rollers.) Existing S/370-145 customers were happy to learn that they did not have to purchase a hardware upgrade in order to run DOS/VS or OS/VS1. The 155 and 165, by contrast, required a costly field-installed DAT upgrade; after installation, these models were known as the S/370-155-II and S/370-165-II. IBM wanted customers to upgrade their 155 and 165 systems to the widely sold S/370-158 and -168, which led to the original S/370-155 and S/370-165 models being described as "boat anchors". Later architectural changes primarily involved expansions in memory, both physical memory and virtual address space, to enable larger workloads and meet client demands for more storage. This was the trend as Moore's Law eroded the unit cost of memory.
As with all IBM mainframe development, preserving backward compatibility was paramount. In October 1981, the 3033 and 3081 processors added extended real addressing, which allowed 26-bit addressing for physical storage; this capability appeared later on other systems, such as the 4381 and 3090. The cross-memory services capability, which facilitated movement of data between address spaces, was actually available just prior to the S/370-XA architecture, on the 3031, 3032, and 3033 processors. As described above, the S/370 product line underwent a major architectural change with S/370-XA. The evolution of S/370 addressing was always complicated by the basic S/360 instruction set design and its large installed code base, which relied on a 24-bit logical address. Most shops thus continued to run their 24-bit applications in a higher-performance 31-bit world. This evolutionary implementation had the characteristic of solving the most urgent problems first: relief for real memory addressing was needed sooner than relief for virtual memory addressing. IBM's choice of 31-bit addressing for 370-XA involved various factors. The System/360 Model 67 had included a full 32-bit addressing mode, but this feature was not carried forward to the System/370 series, which began with only 24-bit addressing. When IBM later expanded the S/370 address space in S/370-XA, several reasons are cited for the choice of 31 bits: in particular, the standard subroutine calling convention marked the final parameter word by setting its high bit; the interaction between 32-bit addresses and two instructions that treated their arguments as signed numbers; and input from key initial Model 67 sites, which had debated the alternatives during the initial system design period and had recommended 31 bits. The following table summarizes the major S/370 series and models; the second column lists the principal architecture associated with each series.
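The calling-convention constraint mentioned above, that the high bit of the final parameter word served as an end-of-list flag, is why only 31 of the 32 bits in an address word were safely usable. A small sketch (illustrative values and names only, not real S/370 code):

```python
# Why 31-bit addressing: the top bit of a 32-bit parameter word is
# reserved as an "end of parameter list" flag, leaving 31 bits of address.
LAST_PARAM_FLAG = 1 << 31
ADDR_MASK = (1 << 31) - 1      # 31 usable address bits -> 2 GiB of addressability

def mark_last(addr):
    """Flag a parameter word as the final one in the list."""
    return addr | LAST_PARAM_FLAG

def extract_addr(word):
    """Recover the address, ignoring the end-of-list flag bit."""
    return word & ADDR_MASK

word = mark_last(0x00FFFFFF)   # a legacy 24-bit address, flagged as last
print(hex(extract_addr(word)), bool(word & LAST_PARAM_FLAG))  # -> 0xffffff True
```

A full 32-bit address would collide with the flag bit, which is one of the cited reasons S/370-XA stopped at 31 bits.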

8.
Application software
–
An application program is a computer program designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, a media player, or an aeronautical flight simulator. The collective noun application software refers to all applications collectively; this contrasts with system software, which is mainly involved with running the computer. Applications may be bundled with the computer and its system software or published separately. Apps built for mobile platforms are called mobile apps. In information technology, an application is a computer program designed to help people perform an activity; an application thus differs from an operating system or a utility. Depending on the activity for which it was designed, an application can manipulate text, numbers, or graphics. Some application packages focus on a single task, such as word processing; others, called integrated software, include several applications. User-written software tailors systems to meet the user's specific needs. User-written software includes templates, word processor macros, scientific simulations, and graphics; even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. The delineation between system software, such as operating systems, and application software is not exact, however, and is occasionally the object of controversy; the GNU/Linux naming controversy is, in part, a dispute of this kind. The above definitions may exclude some applications that may exist on some computers in large organizations; for an alternative definition of an app, see Application Portfolio Management. The word "application", once used as an adjective, is not restricted to the "of or pertaining to application software" meaning. Sometimes a new and popular application arises which only runs on one platform; this is called a killer application or killer app.
There are many different ways to divide up the types of application software. Web apps have greatly increased in popularity for some uses, but the advantages of installed applications make them unlikely to disappear soon, if ever; furthermore, the two can be complementary, and even integrated. Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general purpose; vertical applications are niche products, designed for a particular type of industry or business, or a department within an organization. Integrated suites of software will try to handle every aspect possible of, for example, manufacturing or banking systems, or accounting.