GDB offers extensive facilities for tracing and altering the execution of computer programs. The user can monitor and modify the values of programs' internal variables, and even call functions independently of the program's normal behavior.

GDB is still actively developed. New features include support for scripting in Python as of version 7.0[6] and in GNU Guile as of version 7.8.[7] Version 7.0 also introduced support for "reversible debugging", which allows a debugging session to step backward, much like rewinding a crashed program to see what happened.[8]
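On targets that support process recording, a reversible-debugging session might look like the following sketch (commands as introduced in GDB 7.0; output omitted):

```
(gdb) break main
(gdb) run
(gdb) record            # start recording execution
(gdb) next              # step forward as usual
(gdb) next
(gdb) reverse-next      # step back over the line just executed
(gdb) reverse-continue  # run backward to the previous breakpoint
```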

GDB offers a "remote" mode, often used when debugging embedded systems, in which GDB runs on one machine while the program being debugged runs on another. GDB communicates with a remote "stub" that understands the GDB protocol over a serial device or TCP/IP.[9] A stub program can be created by linking against the stub files provided with GDB, which implement the target side of the communication protocol.[10] Alternatively, gdbserver can be used to debug the program remotely without changing it in any way.
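A minimal sketch of a gdbserver session, assuming the target machine is reachable as target-host and gdbserver listens on port 2345 (both placeholders):

```
target$ gdbserver :2345 ./example
host$ gdb ./example
(gdb) target remote target-host:2345
(gdb) continue
```

The host-side GDB needs its own copy of the binary (with debug information) to resolve symbols; the target only needs the executable itself.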

The same mode is also used by KGDB to debug a running Linux kernel at the source level with GDB. With KGDB, kernel developers can debug a kernel in much the same way as they debug application programs: they can place breakpoints in kernel code, step through the code and observe variables. On architectures that provide hardware debugging registers, watchpoints can be set which trigger when specified memory addresses are executed or accessed. KGDB requires an additional machine connected to the machine being debugged using a serial cable or Ethernet. On FreeBSD, it is also possible to debug over FireWire using direct memory access (DMA).[11]
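In GDB's command syntax (the same commands apply when connected through KGDB; my_counter is a hypothetical variable used only for illustration):

```
(gdb) watch my_counter     # stop when my_counter is written
(gdb) rwatch my_counter    # stop when my_counter is read
(gdb) awatch my_counter    # stop on any access
```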

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

size_t
foo_len (const char *s)
{
  return strlen (s);
}

int
main (int argc, char *argv[])
{
  const char *a = NULL;

  printf ("size of a = %zu\n", foo_len (a));

  exit (0);
}

Using the GCC compiler on Linux, the code above must be compiled with the -g flag in order to include appropriate debug information in the generated binary, making it possible to inspect it using GDB. Assuming that the file containing the code above is named example.c, the command for the compilation could be:

$ gcc example.c -g -o example

And the binary can now be run:

$ ./example
Segmentation fault

Since the example code, when executed, generates a segmentation fault, GDB can be used to inspect the problem.

The problem is on line 8 and occurs when calling the function strlen, because its argument, s, is NULL. Depending on the implementation of strlen (inlined or not), the exact output differs.
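An illustrative session (addresses, frame details, and the exact line numbers vary with the platform and C library) might look like:

```
$ gdb ./example
(gdb) run
Starting program: /path/to/example

Program received signal SIGSEGV, Segmentation fault.
0x... in strlen () from /lib/libc.so.6
(gdb) backtrace
#0  0x... in strlen () from /lib/libc.so.6
#1  0x... in foo_len (s=0x0) at example.c:8
#2  0x... in main (argc=1, argv=...) at example.c:16
```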

To fix the problem, the variable a (in the function main) must contain a valid string. Here is a fixed version of the code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

size_t
foo_len (const char *s)
{
  return strlen (s);
}

int
main (int argc, char *argv[])
{
  const char *a = "This is a test string";

  printf ("size of a = %zu\n", foo_len (a));

  exit (0);
}
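The article's fix keeps foo_len's contract strict: the caller must pass a valid string. An alternative, not part of the original example and shown here only as a sketch, is a defensive variant of foo_len that treats a NULL pointer as an empty string:

```c
#include <stddef.h>
#include <string.h>

/* Defensive sketch: return 0 for a NULL pointer instead of
   letting strlen dereference it and crash. */
size_t
foo_len (const char *s)
{
  return s ? strlen (s) : 0;
}
```

Whether to guard like this or to treat NULL as a caller bug that should crash loudly is a design choice; crashing early often makes the underlying bug easier to find.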

Recompiling and running the executable again now gives a correct result.
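For example ("This is a test string" is 21 characters long):

```
$ gcc example.c -g -o example
$ ./example
size of a = 21
```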

"Richard Stallman lecture at the Royal Institute of Technology, Sweden (1986-10-30)". Retrieved 2006-09-21. "Then after GNU Emacs was reasonably stable, which took all in all about a year and a half, I started getting back to other parts of the system. I developed a debugger which I called GDB, which is a symbolic debugger for C code, which recently entered distribution. Now this debugger is to a large extent in the spirit of DBX, which is a debugger that comes with Berkeley Unix."

1.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles which are used with similar meanings are programmer, software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are more apparent, even more so that developers become systems architects, those who design the multi-leveled architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of one of the phases above. In smaller development environments, a few people or even an individual might handle the complete process. The word software was coined as a prank as early as 1953, before this time, computers were programmed either by customers, or the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide products and services was Computer Usage Company in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities, universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers, some were distributed freely between users of a particular machine for no charge. Others were done on a basis, and other firms such as Computer Sciences Corporation started to grow. The computer/hardware makers started bundling operating systems, systems software and programming environments with their machines, new software was built for microcomputers, so other manufacturers including IBM, followed DECs example quickly, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the computer in the mid-1970s. In the following years, it created a growing market for games, applications. 
DOS, Microsofts first operating system product, was the dominant operating system at the time, by 2014 the role of cloud developer had been defined, in this context, one definition of a developer in general was published, Developers make software for the world to use. The job of a developer is to crank out code -- fresh code for new products, code fixes for maintenance, code for business logic, bus factor Software Developer description from the US Department of Labor

2.
GNU Project
–
The GNU Project /ɡnuː/ is a free-software, mass-collaboration project, first announced on September 27,1983 by Richard Stallman at MIT. GNU software guarantees these freedom-rights legally, and is free software. In order to ensure that the software of a computer grants its users all freedom rights, even the most fundamental and important part. Stallman decided to call this operating system GNU, basing its design on that of Unix, development was initiated in January 1984. In 1991, the kernel Linux appeared, developed outside of the GNU project by Linus Torvalds, combined with the operating system utilities already developed by the GNU project, it allowed for the first operating system that was free software, known as Linux or GNU/Linux. The projects current work includes development, awareness building, political campaigning and sharing of the new material. Richard Stallman announced his intent to start coding the GNU Project in a Usenet message in September 1983, when the GNU project first started they had an Emacs text editor with Lisp for writing editor commands, a source level debugger, a yacc-compatible parser generator, and a linker. The GNU system required its own C compiler and tools to be free software, by June 1987, the project had accumulated and developed free software for an assembler, an almost finished portable optimizing C compiler, an editor, and various Unix utilities. They had a kernel that needed more updates. Once the kernel and the compiler were finished, GNU was able to be used for program development, the main goal was to create many other applications to be like the Unix system. GNU was able to run Unix programs but was not identical to it, GNU incorporated longer file names, file version numbers, and a crashproof file system. The GNU Manifesto was written to support and participation from others for the project. 
Programmers were encouraged to take part in any aspect of the project that interested them, people could donate funds, computer parts, or even their own time to write code and programs for the project. The origins and development of most aspects of the GNU Project are shared in a narrative in the Emacs help system. It is the detailed history as at their web site. The GNU Manifesto was written by Richard Stallman to gain support, to implement these freedoms, users needed full access to code. Although most of the GNU Projects output is technical in nature, it was launched as a social, ethical, as well as producing software and licenses, the GNU Project has published a number of writings, the majority of which were authored by Richard Stallman. The GNU project uses software that is free for users to copy, edit and it is free in the sense that users can change the software to fit individual needs

3.
Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for their hardware development, a test was the verification of a new product before public announcement. B test was the verification before releasing the product to be manufactured, C test was the final test before general availability of the product. Martin Belsky, a manager on some of IBMs earlier software projects claimed to have invented the terminology, IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of beta test to refer to testing done by customers was not done in IBM, rather, IBM used the term field test. Pre-alpha refers to all activities performed during the project before formal testing. These activities can include requirements analysis, software design, software development, in typical open source development, there are several types of pre-alpha versions. Milestone versions include specific sets of functions and are released as soon as the functionality is complete, the alpha phase of the release life cycle is the first phase to begin software testing. In this phase, developers generally test the software using white-box techniques, additional validation is then performed using black-box or gray-box techniques, by another testing team. Moving to black-box testing inside the organization is known as alpha release, alpha software can be unstable and could cause crashes or data loss. Alpha software may not contain all of the features that are planned for the final version, in general, external availability of alpha software is uncommon in proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a freeze, indicating that no more features will be added to the software. 
At this time, the software is said to be feature complete, Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. Software in the stage is also known as betaware. Beta phase generally begins when the software is complete but likely to contain a number of known or unknown bugs. Software in the phase will generally have many more bugs in it than completed software, as well as speed/performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing, the process of delivering a beta version to the users is called beta release and this is typically the first time that the software is available outside of the organization that developed it. Beta version software is useful for demonstrations and previews within an organization

4.
Repository (version control)
–
In revision control systems, a repository is an on-disk data structure which stores metadata for a set of files and/or directory structure. Some of the metadata that a repository contains includes, among other things, a set of references to commit objects, called heads. The main purpose of a repository is to store a set of files and these differences in methodology have generally led to diverse uses of revision control by different groups, depending on their needs. Software repository Codebase Forge Comparison of source code hosting facilities

5.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989, C is an imperative procedural language. Therefore, C was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming, a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code. The language has become available on a wide range of platforms. In C, all code is contained within subroutines, which are called functions. Function parameters are passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values, C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the characteristics, There is a small, fixed number of keywords, including a full set of flow of control primitives, for, if/else, while, switch. User-defined names are not distinguished from keywords by any kind of sigil, There are a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, function return values can be ignored when not needed. Typing is static, but weakly enforced, all data has a type, C has no define keyword, instead, a statement beginning with the name of a type is taken as a declaration. There is no function keyword, instead, a function is indicated by the parentheses of an argument list, user-defined and compound types are possible. Heterogeneous aggregate data types allow related data elements to be accessed and assigned as a unit, array indexing is a secondary notation, defined in terms of pointer arithmetic. 
Unlike structs, arrays are not first-class objects, they cannot be assigned or compared using single built-in operators, There is no array keyword, in use or definition, instead, square brackets indicate arrays syntactically, for example month. Enumerated types are possible with the enum keyword and they are not tagged, and are freely interconvertible with integers. Strings are not a data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers

6.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require a system to function. Operating systems are found on many devices that contain a computer – from cellular phones, the dominant desktop operating system is Microsoft Windows with a market share of around 83. 3%. MacOS by Apple Inc. is in place, and the varieties of Linux is in third position. Linux distributions are dominant in the server and supercomputing sectors, other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run one program at a time. Multi-tasking may be characterized in preemptive and co-operative types, in preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, e. g. Solaris, Linux, cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking, 32-bit versions of both Windows NT and Win9x, used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem, a distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing, distributed computations are carried out on more than one machine. When computers in a work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses, embedded operating systems are designed to be used in embedded computer systems. 
They are designed to operate on small machines like PDAs with less autonomy and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design, Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is a system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing

7.
Unix-like
–
A Unix-like operating system is one that behaves in a manner similar to a Unix system, while not necessarily conforming to or being certified to any version of the Single UNIX Specification. A Unix-like application is one that behaves like the corresponding Unix command or shell, there is no standard for defining the term, and some difference of opinion is possible as to the degree to which a given operating system or application is Unix-like. The Open Group owns the UNIX trademark and administers the Single UNIX Specification and they do not approve of the construction Unix-like, and consider it a misuse of their trademark. Other parties frequently treat Unix as a genericized trademark, in 2007, Wayne R. Gray sued to dispute the status of UNIX as a trademark, but lost his case, and lost again on appeal, with the court upholding the trademark and its ownership. Unix-like systems started to appear in the late 1970s and early 1980s, many proprietary versions, such as Idris, UNOS, Coherent, and UniFlex, aimed to provide businesses with the functionality available to academic users of UNIX. These largely displaced the proprietary clones, growing incompatibility among these systems led to the creation of interoperability standards, including POSIX and the Single UNIX Specification. Various free, low-cost, and unrestricted substitutes for UNIX emerged in the 1980s and 1990s, including 4. 4BSD, Linux, some of these have in turn been the basis for commercial Unix-like systems, such as BSD/OS and OS X. The various BSD variants are notable in that they are in fact descendants of UNIX, however, the BSD code base has evolved since then, replacing all of the AT&T code. Since the BSD variants are not certified as compliant with the Single UNIX Specification, dennis Ritchie, one of the original creators of Unix, expressed his opinion that Unix-like systems such as Linux are de facto Unix systems. Eric S. 
Raymond and Rob Landley have suggested there are three kinds of Unix-like systems, Genetic UNIX Those systems with a historical connection to the AT&T codebase. Most commercial UNIX systems fall into this category, so do the BSD systems, which are descendants of work done at the University of California, Berkeley in the late 1970s and early 1980s. Some of these systems have no original AT&T code but can trace their ancestry to AT&T designs. Trademark or branded UNIX These systems‍—‌largely commercial in nature‍—‌have been determined by the Open Group to meet the Single UNIX Specification and are allowed to carry the UNIX name, many ancient UNIX systems no longer meet this definition. Around 2001, Linux was given the opportunity to get a certification including free help from the POSIX chair Andrew Josey for the price of one dollar. Some non-Unix-like operating systems provide a Unix-like compatibility layer, with degrees of Unix-like functionality. IBM z/OSs UNIX System Services is sufficiently complete to be certified as trademark UNIX, cygwin and MSYS both provide a GNU environment on top of the Microsoft Windows user API, sufficient for most common open source software to be compiled and run. Subsystem for Unix-based Applications provides Unix-like functionality as a Windows NT subsystem, Windows Subsystem for Linux provides a Linux-compatible kernel interface developed by Microsoft and containing no Linux code, with Ubuntu user-mode binaries running on top of it

8.
Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of families of operating systems, each of which cater to a certain sector of the computing industry with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedded and Windows Phone, defunct Windows families include Windows 9x, Windows 10 Mobile is an active product, unrelated to the defunct family Windows Mobile. Microsoft introduced an operating environment named Windows on November 20,1985, Microsoft Windows came to dominate the worlds personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on their innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system, however, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold and this comparison however may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets, smartphones, the most recent versions for server computers is Windows Server 2016. A specialized version of Windows runs on the Xbox One game console, Microsoft, the developer of Windows, has registered several trademarks each of which denote a family of Windows operating systems that target a specific sector of the computing industry. It now consists of three operating system subfamilies that are released almost at the time and share the same kernel. Windows, The operating system for personal computers, tablets. 
The latest version is Windows 10, the main competitor of this family is macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server, The operating system for server computers, the latest version is Windows Server 2016. Unlike its clients sibling, it has adopted a strong naming scheme, the main competitor of this family is Linux. Windows PE, A lightweight version of its Windows sibling meant to operate as an operating system, used for installing Windows on bare-metal computers. The latest version is Windows PE10.0.10586.0, Windows Embedded, Initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. The following Windows families are no longer being developed, Windows 9x, Microsoft now caters to the consumers market with Windows NT. Windows Mobile, The predecessor to Windows Phone, it was a mobile operating system

9.
Debugger
–
A debugger or debugging tool is a computer program that is used to test and debug other programs. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact, a trap occurs when the program cannot normally continue because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU or attempted to access unavailable or protected memory, if it is a low-level debugger or a machine-language debugger it shows the line in the disassembly. Typically, debuggers offer a query processor, a symbol resolver, an interpreter. Some debuggers have the ability to modify program state while it is running and it may also be possible to continue execution at a different location in the program to bypass a crash or logical error. It often also makes it useful as a verification tool, fault coverage. Most mainstream debugging engines, such as gdb and dbx, provide console-based command line interfaces, debugger front-ends are popular extensions to debugger engines that provide IDE integration, program animation, and visualization features. Some debuggers include a feature called reverse debugging, also known as historical debugging or backwards debugging and these debuggers make it possible to step a programs execution backwards in time. Microsoft Visual Studio offers IntelliTrace reverse debugging for C#, Visual Basic. NET, and some other languages, reverse debuggers also exist for C, C++, Java, Python, Perl, and other languages. Some are open source, some are commercial software. Some reverse debuggers slow down the target by orders of magnitude, reverse debugging is very useful for certain types of problems, but is still not commonly used yet. Some debuggers operate on a specific language while others can handle multiple languages transparently. 
Some debuggers also incorporate memory protection to avoid storage violations such as buffer overflow and this may be extremely important in transaction processing environments where memory is dynamically allocated from memory pools on a task by task basis. Most modern microprocessors have at least one of features in their CPU design to make debugging easier, Hardware support for single-stepping a program. In-system programming allows an external hardware debugger to reprogram a system under test, many systems with such ISP support also have other hardware debug support. Hardware support for code and data breakpoints, such as address comparators and data value comparators or, with more work involved. JTAG access to hardware debug interfaces such as those on ARM architecture processors or using the Nexus command set, processors used in embedded systems typically have extensive JTAG debug support. Micro controllers with as few as six pins need to use low pin-count substitutes for JTAG, such as BDM, Spy-Bi-Wire, debugWIRE, for example, uses bidirectional signaling on the RESET pin

10.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law all software is copyright protected, in code as also object code form. The only exception is software in the public domain, most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free, unlicensed software outside the copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected. Examples for this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without specified license. As voluntarily handing software into the domain is problematic in some international law domains, there are also licenses granting PD-like rights. Therefore, the owner of a copy of software is legally entitled to use that copy of software. Hence, if the end-user of software is the owner of the respective copy, as many proprietary licenses only enumerate the rights that the user already has under 17 U. S. C. §117, and yet proclaim to take away from the user. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of software with the software publisher. The form of the relationship if it is a lease or a purchase, for example UMG v. Augusto or Vernor v. Autodesk. The ownership of goods, like software applications and video games, is challenged by licensed. The Swiss based company UsedSoft innovated the resale of business software and this feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher. 
Therefore, it is typical of EULAs to include terms which define the uses of the software, the most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license. In other words, without acceptance of the license, the end-user may not use the software at all, one example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level, Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number at the same time. Another license model is licensing per dongle which allows the owner of the dongle to use the program on any computer, Licensing per server, CPU or points, regardless the number of users, is common practice as well as site or company licenses

11.
GNU General Public License
–
The GNU General Public License is a widely used free software license, which guarantees end users the freedom to run, study, share and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project. The GPL is a license, which means that derivative work can only be distributed under the same license terms. This is in distinction to permissive free licenses, of which the BSD licenses. GPL was the first copyleft license for general use, historically, the GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free software licensed under the GPL include the Linux kernel. In 2007, the version of the license was released to address some perceived problems with the second version that were discovered during its long-time usage. To keep the license up to date, the GPL license includes an optional any later version clause, developers can omit it when licensing their software, for instance the Linux kernel is licensed under GPLv2 without the any later version clause. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU project, the original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler. These licenses contained similar provisions to the modern GPL, but were specific to each program, rendering them incompatible, Stallmans goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991, version 3 was developed to attempt to address these concerns and was officially released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, the first problem was that distributors may publish binary files only—executable, but not readable or modifiable by humans. 
To prevent this, GPLv1 stated that any vendor distributing binaries must also make the source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license, the union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, according to Richard Stallman, the major change in GPLv2 was the Liberty or Death clause, as he calls it – Section 7. The section says that licensees may distribute a GPL-covered work only if they can all of the licenses obligations. In other words, the obligations of the license may not be severed due to conflicting obligations and this provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users freedom under the license

12.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a language is usually split into the two components of syntax and semantics. Some languages are defined by a specification document while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The theory of computation classifies languages by the computations they are capable of expressing; all Turing complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term "computer language" is sometimes used interchangeably with "programming language".

13.
Ada (programming language)
–
Ada is a structured, statically typed, imperative, wide-spectrum, and object-oriented high-level computer programming language, extended from Pascal and other languages. It has built-in language support for design-by-contract, extremely strong typing, and explicit concurrency, offering tasks, synchronous message passing, and protected objects. Ada improves code safety and maintainability by using the compiler to find errors at compile time rather than leaving them to surface at run time. Ada is an international standard; the current version is defined by ISO/IEC 8652:2012. Ada was named after Ada Lovelace, who is credited with being the first computer programmer. Ada was originally targeted at embedded and real-time systems; the Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming. Features of Ada include strong typing, modularity mechanisms, run-time checking, parallel processing, and exception handling; Ada 95 added support for object-oriented programming, including dynamic dispatch. The syntax of Ada minimizes choices of ways to perform basic operations: Ada uses the basic arithmetical operators +, -, *, and /, but avoids using other symbols. Code blocks are delimited by words such as declare, begin, and end; in the case of conditional blocks this avoids a dangling else that could pair with the wrong nested if statement in other languages like C or Java. Ada is designed for development of large software systems. Ada packages can be compiled separately, and Ada package specifications can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early, during the design phase; for example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detection of common software errors either at compile time or at run time.
As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc., and can provide warnings and useful suggestions on how to fix the error. Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, and array access errors; these checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is widely used in critical systems, where any anomaly might lead to very serious consequences, e.g. accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, and the military. Ada's dynamic memory management is high-level and type-safe. Ada does not have generic or untyped pointers, nor does it implicitly declare any pointer type; instead, all dynamic memory allocation and deallocation must take place through explicitly declared access types.

14.
Free Pascal
–
Free Pascal Compiler is a compiler for the closely related programming language dialects Pascal and Object Pascal. It is free software released under the GNU General Public License. The dialect is selected on a per-unit basis, and more than one dialect can be used to produce one program. It follows a "write once, compile anywhere" philosophy and is available for many CPU architectures; it supports integrated assembly language and an internal assembler in several dialects. Separate projects exist to facilitate developing cross-platform graphical user interface applications. Initially, Free Pascal adopted the de facto standard dialect of Pascal programmers, Borland Pascal, but later on adopted Delphi. From version 2.0 on, Delphi 7 compatibility has been implemented or improved. A small effort has been made to support some of the Apple Pascal syntax to ease interfacing to the Classic Mac OS and macOS, since the Apple dialect implements some standard Pascal features that Turbo Pascal and Delphi omit. The 2.2.x release series did not significantly change the dialect objectives beyond Delphi 7; instead it aimed for closer compatibility. The project still lacks the Delphi functionality of compiler-supported exporting of classes from shared libraries, which is useful, for example, for Lazarus, which implements packages of components. As of 2011, several Delphi 2006-specific features were added in the development branch; the development branch also features an Objective-Pascal extension for Objective-C interfacing. As of version 2.7.1, Free Pascal implemented basic ISO Pascal mode, though many features, such as the Get and Put procedures, were still absent. As of version 3.0.0, ISO Pascal mode is fairly complete; it has been able to compile standardpascal.org's P5 with no changes.
Free Pascal emerged when Borland clarified that Borland Pascal development for DOS would stop with version 7, to be replaced by a Windows-only product. Originally, the compiler was a 16-bit DOS executable compiled by Turbo Pascal. After two years, the compiler was able to compile itself and became a 32-bit executable. The initial 32-bit compiler was published on the Internet, and the first contributors joined the project. Later, a Linux port was made by Michael van Canneyt, and the DOS port was adapted for use in OS/2 using the Eberhard Mattes eXtender, which made OS/2 the second supported compiling target. Apart from the work of Florian Klämpfl as original author, Daniël Mantione contributed significantly to make this happen and provided the port of the run-time library to OS/2. The compiler improved gradually, and the DOS version migrated to the GO32v2 extender; this release was also ported to systems using Motorola 68000 family processors. With release 0.99.8 the Win32 target was added, stabilizing for a non-beta release began, and version 1.0 was released in July 2000. The 1.0.x series was widely used in business and education.

15.
Fortran
–
Fortran is a general-purpose, imperative programming language that is especially suited to numeric computation and scientific computing. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining compatibility with prior versions. The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all capitals; the capitalization has been dropped in referring to newer versions beginning with Fortran 90, and the official language standards now refer to the language as "Fortran" rather than all-caps "FORTRAN". In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus's historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system. John Backus said during a 1979 interview with Think, the IBM employee magazine, that much of his work came from a dislike of writing programs by hand, which led him to work on a system that made writing them easier. A draft specification for The IBM Mathematical Formula Translating System was completed by mid-1954; the first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.
By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The arithmetic IF statement was similar to a three-way branch instruction on the IBM 704. However, the 704 branch instructions all contained only one destination address; an optimizing compiler like FORTRAN would most likely select the more compact and usually faster Transfer instructions instead of the Compare. Also, the Compare considered −0 and +0 to be different values, while the Transfer Zero and Transfer Plus considered them to be the same. The FREQUENCY statement in FORTRAN was used originally to give branch probabilities for the three branch cases of the arithmetic IF statement; the Monte Carlo technique is documented in Backus et al. Many years later, the FREQUENCY statement had no effect on the code and was treated as a comment statement, since the compilers no longer did this kind of compile-time simulation. A similar fate has befallen compiler hints in other programming languages. The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and emitting an error code; that code could be looked up by the programmer in an error messages table in the operator's manual, providing a brief description of the problem. Before the development of disk files, text editors and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards.

16.
Java (programming language)
–
Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented, and specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere": Java applications are typically compiled to bytecode that can run on any Java virtual machine regardless of computer architecture. As of 2016, Java is one of the most popular programming languages in use, particularly for client-server web applications. Java was originally developed by James Gosling at Sun Microsystems and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++. The original and reference implementation Java compilers, virtual machines, and class libraries were originally released by Sun under proprietary licences. As of May 2007, in compliance with the specifications of the Java Community Process, Sun had relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java, GNU Classpath, and IcedTea-Web. James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language project in June 1991. Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time. The language was initially called Oak after an oak tree that stood outside Gosling's office; later the project went by the name Green and was finally renamed Java, from Java coffee. Gosling designed Java with a C/C++-style syntax that system and application programmers would find familiar. Sun Microsystems released the first public implementation as Java 1.0 in 1995. It promised "Write Once, Run Anywhere", providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions.
Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular, while mostly remaining outside of browsers. In January 2016, Oracle announced that Java runtime environments based on JDK 9 will discontinue the browser plugin. The Java 1.0 compiler was re-written in Java by Arthur van Hoff to comply strictly with the Java 1.0 language specification. With the advent of Java 2, new versions had multiple configurations built for different types of platforms: J2EE included technologies and APIs for enterprise applications typically run in server environments, while the desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE. In 1997, Sun Microsystems approached the ISO/IEC JTC1 standards body and later Ecma International to formalize Java, but it soon withdrew from the process; Java remains a de facto standard, controlled through the Java Community Process. At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System. On November 13, 2006, Sun released much of its Java virtual machine as free and open-source software, under the terms of the GNU General Public License. Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as an evangelist. This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside the Android SDK. Java software runs on everything from laptops to data centers, game consoles to scientific supercomputers. On April 2, 2010, James Gosling resigned from Oracle. There were five primary goals in the creation of the Java language: it must be simple, object-oriented, and familiar; it must be robust and secure.

17.
Richard Stallman
–
Richard Matthew Stallman, often known by his initials, rms, is an American software freedom activist and programmer. He campaigns for software to be distributed in a manner such that its users receive the freedoms to use, study, distribute, and modify that software. Software that ensures these freedoms is termed free software. Stallman launched the GNU Project, founded the Free Software Foundation, developed the GNU Compiler Collection and GNU Emacs, and wrote the GNU General Public License. Stallman launched the GNU Project in September 1983 to create a Unix-like computer operating system composed entirely of free software; with this, he also launched the free software movement. In October 1985 he founded the Free Software Foundation, and in 1989 he co-founded the League for Programming Freedom. He has campaigned against what he sees as obstacles to software freedom; these have included software license agreements, non-disclosure agreements, activation keys, dongles, copy restriction, and proprietary formats. As of 2016, he has received fifteen honorary doctorates and professorships. Stallman was born to Alice Lippman, a teacher, and Daniel Stallman. He was interested in computers at a young age; when Stallman was a pre-teen at a summer camp, he read manuals for the IBM 7094. From 1967 to 1969, Stallman attended a Columbia University Saturday program for high school students. Stallman was also a laboratory assistant in the biology department at Rockefeller University. Although he was interested in mathematics and physics, his professor at Rockefeller thought he showed promise as a biologist. His first experience with computers was at the IBM New York Scientific Center when he was in high school; he was hired for the summer in 1970, following his senior year of high school. He completed his assigned task after a couple of weeks and spent the rest of the summer writing a text editor in APL. As a first-year student at Harvard University in fall 1970, Stallman was known for his performance in Math 55. He was happy: "For the first time in my life, I felt I had found a home at Harvard." Stallman graduated from Harvard magna cum laude, earning a bachelor's degree in physics in 1974.
Stallman considered staying on at Harvard, but instead he decided to enroll as a graduate student at MIT. He pursued a doctorate in physics for one year, but left that program to focus on his programming at the MIT AI Laboratory.

18.
GNU
–
GNU (/ɡnuː/) is an operating system and an extensive collection of computer software. GNU is composed wholly of free software, most of which is licensed under the GNU Project's own GPL. GNU is a recursive acronym for "GNU's Not Unix", chosen because GNU's design is Unix-like but differs from Unix by being free software and containing no Unix code. The GNU project includes an operating system kernel, GNU Hurd, which was the original focus of the Free Software Foundation. However, non-GNU kernels, most famously Linux, can also be used with GNU software, and since the Hurd kernel is the least mature part of GNU, the combination of GNU software and the Linux kernel is commonly known as Linux. Richard Stallman, the founder of the project, views GNU as a means to a social end. Software development began on January 5, 1984, when Stallman quit his job at the MIT Artificial Intelligence Laboratory so that they could not claim ownership or interfere with distributing GNU components as free software. Richard Stallman chose the name by using various plays on words, including the song "The Gnu". The goal was to bring a wholly free software operating system into existence; this philosophy was published as the GNU Manifesto in March 1985. It was decided that development would be started using C and Lisp as system programming languages. At the time, Unix was already a popular proprietary operating system, and its design was modular, so it could be reimplemented piece by piece. In October 1985, Stallman set up the Free Software Foundation. In the late 1980s and 1990s, the FSF hired software developers to write the software needed for GNU. As GNU gained prominence, interested businesses began contributing to development or selling GNU software and technical support. The most prominent and successful of these was Cygnus Solutions, now part of Red Hat. GNU developers have contributed to Linux ports of GNU applications and utilities, which are now also widely used on other operating systems such as BSD variants, Solaris and macOS.
Many GNU programs have been ported to other operating systems, including proprietary platforms such as Microsoft Windows, and GNU programs have been shown to be more reliable than their proprietary Unix counterparts. As of November 2015, there were a total of 466 GNU packages hosted on the official GNU development site. With the April 30, 2015 release of the Debian GNU/Hurd 2015 distro, GNU OS now provides the components to assemble a system that users can install. This includes the GNU Hurd kernel, which is currently in a pre-production state; the Hurd status page states that it may not be ready for production use, as there are still some bugs and missing features, but that it should be a good base for further development. Because the Hurd is not ready for production use, in practice the operating systems most people run as "GNU" are Linux distributions.

19.
GNU Emacs
–
GNU Emacs is the most popular and most ported Emacs text editor. It was created by GNU Project founder Richard Stallman. In common with other varieties of Emacs, GNU Emacs is extensible using a Turing complete programming language. GNU Emacs has been called "the most powerful text editor available today". With proper support from the underlying system, GNU Emacs is able to display files in multiple character sets, and has been able to simultaneously display most human languages since at least 1999. Throughout its history, GNU Emacs has been a central component of the GNU project. GNU Emacs is sometimes abbreviated as GNUMACS, especially to differentiate it from other Emacs variants. The tag line for GNU Emacs is "the extensible self-documenting text editor". In 1976, Stallman wrote the first Emacs, and in 1984 began work on GNU Emacs. GNU Emacs was initially based on Gosling Emacs, but Stallman's replacement of its Mocklisp interpreter with a true Lisp interpreter required that nearly all of its code be rewritten. This became the first program released by the nascent GNU Project. GNU Emacs is written in C and provides Emacs Lisp, also implemented in C, as an extension language. Version 13, the first public release, was made on March 20, 1985; the first widely distributed version of GNU Emacs was version 15.34, released later in 1985. Early versions of GNU Emacs were numbered as 1.x.x; the 1 was dropped after version 1.12, as it was thought that the major number would never change, and thus the major version skipped from 1 to 13. A new third version number was added to represent changes made by user sites. In the current numbering scheme, a number with two components signifies a release version, with development versions having three components. GNU Emacs was later ported to Unix; it offered more features than Gosling Emacs, in particular a full-featured Lisp as its extension language, and soon replaced Gosling Emacs as the de facto Unix Emacs editor.
Markus Hess exploited a security flaw in GNU Emacs' email subsystem in his 1986 cracking spree. The project has since adopted a public development mailing list and anonymous CVS access; development took place in a single CVS trunk until 2008. Richard Stallman has remained the principal maintainer of GNU Emacs, but he has stepped back from the role at times; Stefan Monnier and Chong Yidong have overseen maintenance since 2008. On September 21, 2015, Monnier announced that he would be stepping down as maintainer effective with the feature freeze of Emacs 25. Older versions of the GNU Emacs documentation appeared under a license that required the inclusion of certain text in any modified copy; in the GNU Emacs user's manual, for example, this included instructions for obtaining GNU Emacs. The XEmacs manuals, which were inherited from older GNU Emacs manuals when the fork occurred, have the same license. The FSF requires that copyright for code contributed to GNU Emacs be assigned to it; bug fixes and minor code contributions of fewer than 10 lines are exempt. This policy is in place so that the FSF can defend the software in court if its copyleft license is violated. In 2011 it was noticed that GNU Emacs had been violating the GPL for two years; Richard Stallman described this incident as "a very bad mistake", which was promptly fixed, and no lawsuit was filed.

20.
Free software
–
The right to study and modify software entails availability of the software's source code to its users. This right is conditional on the person actually having a copy of the software. Richard Stallman used the existing term "free software" when he launched the GNU Project, a collaborative effort to create a freedom-respecting operating system, and the Free Software Foundation. The FSF's Free Software Definition states that users of free software are "free" because they do not need to ask for permission to use the software. Free software thus differs from proprietary software, such as Microsoft Office, Google Docs, Sheets, and Slides, or iWork from Apple, which users cannot study or change, and from freeware, which is a category of freedom-restricting proprietary software that does not require payment for use. For computer programs that are covered by copyright law, software freedom is achieved with a software license. Software that is not covered by copyright law, such as software in the public domain, is free if the source code is also in the public domain. Proprietary software, including freeware, uses restrictive software licences or EULAs and usually does not provide users with the source code. Users are thus prevented from changing the software, and this results in the user relying on the publisher to provide updates, help, and support. This situation is called vendor lock-in. Users often may not reverse engineer, modify, or redistribute proprietary software. Other legal and technical aspects, such as patents and digital rights management, may restrict users in exercising their rights. Free software may be developed collaboratively by volunteer computer programmers or by corporations as part of a commercial activity. From the 1950s up until the early 1970s, it was normal for computer users to have the software freedoms associated with free software, which was typically public domain software. Software was commonly shared by individuals who used computers and by manufacturers who welcomed the fact that people were making software that made their hardware useful.
Organizations of users and suppliers, for example SHARE, were formed to facilitate the exchange of software. As software was often written in an interpreted language such as BASIC, the source code had to be distributed to use these programs. Software was also shared and distributed as printed source code in computer magazines and books. In United States v. IBM, filed January 17, 1969, the government charged that bundled software was anticompetitive. While some software might always be free, there would henceforth be a growing amount of software produced primarily for sale. In the 1970s and early 1980s, the software industry began using technical measures to prevent computer users from being able to study or adapt the software as they saw fit. In 1980, copyright law was extended to computer programs. Software development for the GNU operating system began in January 1984, and the Free Software Foundation was founded in October 1985.

21.
BSD
–
Berkeley Software Distribution (BSD) is a Unix operating system derivative developed and distributed by the Computer Systems Research Group of the University of California, Berkeley, from 1977 to 1995. Today the term "BSD" is often used non-specifically to refer to any of the BSD descendants, which together form a branch of the family of Unix-like operating systems. Operating systems derived from the original BSD code remain actively developed and widely used. Historically, BSD has been considered a branch of Unix, "Berkeley Unix", because it shared the initial codebase and design with the original AT&T Unix. In the 1980s, BSD was widely adopted by vendors of workstation-class systems in the form of proprietary Unix variants such as DEC Ultrix and Sun Microsystems' SunOS; this can be attributed to the ease with which it could be licensed. Notable BSD descendants include FreeBSD, OpenBSD, NetBSD, Darwin, and PC-BSD. The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. Also in 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor. He helped to install Version 6 Unix and started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies of 1BSD were sent out; some 75 copies of 2BSD were sent out by Bill Joy. 2.9BSD from 1983 included code from 4.1cBSD; the most recent release, 2.11BSD, was first issued in 1992.
As of 2008, maintenance updates from volunteers were still continuing. A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. 3BSD was also alternatively called Virtual VAX/UNIX or VMUNIX, and BSD kernel images were normally called /vmunix until 4.4BSD. 4BSD offered a number of enhancements over 3BSD, notably job control in the previously released csh, delivermail, reliable signals, and the Curses programming library. In a 1985 review of BSD releases, John Quarterman et al. noted that many installations inside the Bell System ran 4.1BSD. 4.1BSD was a response to criticisms of BSD's performance relative to the dominant VAX operating system, VMS; the 4.1BSD kernel was systematically tuned up by Bill Joy until it could perform as well as VMS on several benchmarks. Back at Bell Labs, 4.1cBSD became the basis of the 8th Edition of Research Unix. To guide the design of 4.2BSD, a steering committee was formed, which met from April 1981 to June 1983. Apart from the Fast File System, several features from outside contributors were accepted, including disk quotas and job control. Sun Microsystems provided testing on its Motorola 68000 machines prior to release, and the official 4.2BSD release came in August 1983. On a lighter note, it marked the debut of BSD's daemon mascot, in a drawing by John Lasseter that appeared on the cover of the printed manuals distributed by USENIX.

22.
John Gilmore (activist)
–
John Gilmore is one of the founders of the Electronic Frontier Foundation, the Cypherpunks mailing list, and Cygnus Solutions. He created the alt.* hierarchy in Usenet and is a contributor to the GNU Project. An outspoken civil libertarian, Gilmore has sued the Federal Aviation Administration and the Department of Justice, and he was the plaintiff in the prominent case Gilmore v. Gonzales, challenging secret travel-restriction laws. He is also an advocate for drug policy reform. He co-authored the Bootstrap Protocol in 1985, which evolved into DHCP, the primary way local networks assign devices an IP address. As the fifth employee of Sun Microsystems and founder of Cygnus Support, he became wealthy enough to retire early and pursue other interests. Outside of the GNU Project, he founded the FreeS/WAN project, an implementation of IPsec, and he sponsored the EFF's Deep Crack DES cracker and the Micropolis city building game based on SimCity; he is a proponent of opportunistic encryption. Gilmore co-authored the Bootstrap Protocol with Bill Croft in 1985; the Bootstrap Protocol evolved into DHCP, the method by which Ethernet and wireless networks typically assign devices an IP address. Gilmore owns the domain name toad.com, which is one of the 100 oldest active .com domains; it was registered on August 18, 1987, and he runs the mail server at toad.com as an open mail relay. In October 2002, Gilmore's ISP, Verio, cut off his Internet access for running an open relay. Many people contend that open relays make it too easy to send spam; Gilmore protests that his mail server was programmed to be useless to spammers and other senders of mass email. Gilmore famously stated of Internet censorship that "The Net interprets censorship as damage and routes around it." He unsuccessfully challenged the constitutionality of secret laws regarding travel security policies in Gilmore v. Gonzales.
He is a member of the boards of MAPS and the Marijuana Policy Project, and he received the Free Software Foundation's Advancement of Free Software award for 2009.

23.
Free Software Foundation
–
The FSF was incorporated in Massachusetts, USA, where it is also based. From its founding until the mid-1990s, the FSF's funds were used to employ software developers to write free software for the GNU Project. Since the mid-1990s, the FSF's employees and volunteers have worked on legal and structural issues for the free software movement. Consistent with its goals, the FSF aims to use free software on its own computers. The Free Software Foundation was founded in 1985 as a non-profit corporation supporting free software development. It continued existing GNU projects, such as the sale of manuals and tapes, and employed developers of the free software system. Since then, it has continued these activities, as well as advocating for the free software movement. The FSF is also the steward of several free software licenses, meaning it publishes them and has the ability to make revisions as needed. The FSF holds the copyrights on many pieces of the GNU system; as holder of these copyrights, it has the authority to enforce the copyleft requirements of the GNU General Public License (GPL) when copyright infringement occurs on that software. From 1991 until 2001, GPL enforcement was done informally, usually by Stallman himself, often with assistance from the FSF's lawyer; typically, GPL violations during this time were cleared up by short email exchanges between Stallman and the violator. In the interest of promoting copyleft assertiveness by software companies to the level the FSF was already practicing, in 2004 Harald Welte launched gpl-violations.org. In late 2001, Bradley M. Kuhn, with the assistance of Moglen and David Turner, formalized the FSF's enforcement efforts; from 2002 to 2004, high-profile GPL enforcement cases, such as those against Linksys and OpenTV, became frequent. GPL enforcement and educational campaigns on GPL compliance were a focus of the FSF's efforts during this period. In March 2003, SCO filed suit against IBM, alleging that IBM's contributions to free software, including the FSF's GNU system, violated SCO's rights.
While the FSF was never a party to the lawsuit, it was subpoenaed on November 5, 2003. During 2003 and 2004, the FSF put substantial advocacy effort into responding to the lawsuit and quelling its negative impact on the adoption and promotion of free software. From 2003 to 2005, the FSF held legal seminars to explain the GPL, usually taught by Bradley M. Kuhn and Daniel Ravicher; these seminars offered CLE credit and were the first effort to give formal legal education on the GPL. In 2007, the FSF published the third version of the GNU General Public License (GPLv3) after significant outside input. In December 2008, the FSF filed a lawsuit against Cisco for using GPL-licensed components shipped with Linksys products; Cisco had been notified of the licensing issue in 2003 but repeatedly disregarded its obligations under the GPL. The original purpose of the FSF was to promote the ideals of free software, and the organization developed the GNU operating system as an example of this. The GNU General Public License is a widely used license for software projects.

24.
Computer program
–
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function, and typically executes a program's instructions in a central processing unit. A computer program is written by a computer programmer in a programming language. From the program in its form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a program may be executed with the aid of an interpreter. A part of a program that performs a well-defined task is known as an algorithm. A collection of programs, libraries, and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software or system software. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be woven and repeated by arranging the cards. In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the device were borrowed from the textile industry, in which yarn was brought from the store to be milled. The device would have had a store—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the store would then have been transferred to the mill. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than £17,000 of the British government's money, the thousands of cogged wheels and gears never fully worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea; the memoir covered the Analytical Engine.
The translation contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation that can be performed on a Turing-complete computing machine. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm.

25.
DEC Alpha
–
Alpha was implemented in microprocessors originally developed and fabricated by DEC. These microprocessors were most prominently used in a variety of DEC workstations and servers; several third-party vendors also produced Alpha systems, including PC form-factor motherboards. Operating systems that supported Alpha included OpenVMS, Tru64 UNIX, Windows NT, Linux, BSD UNIX, and Plan 9 from Bell Labs, as well as the L4Ka::Pistachio microkernel. The Alpha architecture was sold, along with most parts of DEC, to Compaq in 1998. Alpha was born out of an earlier RISC project named PRISM. PRISM was intended to be a flexible design, supporting both Unix-like applications and Digital's existing VMS programs from the VAX after minor conversion. A new Unix-like operating system known as Mica would run applications natively; during development, the Palo Alto design team was working on a Unix-only workstation that originally included the PRISM. DEC management doubted the need to produce a new architecture to replace their existing VAX and DECstation lines, and the project was cancelled. By the time of cancellation, however, second-generation RISC chips were offering much better price/performance ratios than the VAX lineup, and it was clear a third generation would completely outperform the VAX in all ways, not just on cost. Another study was started to see if a new RISC architecture could be defined that could support the VMS operating system. The new design used most of the basic PRISM concepts, but was re-tuned to allow VMS and VMS programs to run at speed with no conversion at all. The decision was made to upgrade the design to a full 64-bit implementation from PRISM's 32-bit. Eventually that new architecture became Alpha. The primary Alpha instruction set architects were Richard L. Sites and Richard T. Witek. The PRISM's Epicode was developed into the Alpha's PALcode, providing an abstracted interface to platform- and processor-specific features. The main contribution of Alpha to the industry, and the main reason for its performance, was not so much the architecture but rather its implementation.
At that time, the industry was dominated by automated design tools. The chip designers at Digital continued pursuing sophisticated manual circuit design in order to deal with the overly complex VAX architecture, and the Alpha chips caused a renaissance of custom circuit design within the microprocessor design community. Originally, the Alpha processors were designated the DECchip 21x64 series: the first two digits (21) signify the 21st century, and the last two digits (64) signify 64 bits. The Alpha was designed as 64-bit from the start and there is no 32-bit version; the middle digit corresponded to the generation of the Alpha architecture. The first few generations of the Alpha chips were some of the most innovative of their time. The first version, the Alpha 21064 or EV4, was the first CMOS microprocessor whose operating frequency rivalled higher-powered ECL minicomputers and mainframes.

26.
ARM architecture
–
ARM, originally Acorn RISC Machine, later Advanced RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. ARM Holdings also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products. A RISC-based computer design approach means processors require fewer transistors than typical complex instruction set computing (CISC) x86 processors in most personal computers. This approach reduces costs, heat, and power use; these characteristics are desirable for light, portable, battery-powered devices—including smartphones, laptops and tablet computers, and other embedded systems. For supercomputers, which consume large amounts of electricity, ARM could also be a power-efficient solution. ARM Holdings periodically releases updates to its architectures and core designs; some older cores can also provide hardware execution of Java bytecodes. The ARMv8-A architecture, announced in October 2011, adds support for a 64-bit address space. With over 100 billion ARM processors produced as of 2017, ARM is the most widely used instruction set architecture in terms of quantity produced. Current offerings include the widely used Cortex cores and older classic cores. The British computer manufacturer Acorn Computers first developed the Acorn RISC Machine architecture in the 1980s to use in its personal computers, and its first ARM-based products were coprocessor modules for the BBC Micro series of computers. According to Sophie Wilson, all the tested processors at that time performed about the same, with about a 4 Mbit/second bandwidth. After testing all available processors and finding them lacking, Acorn decided it needed a new architecture; inspired by white papers on the Berkeley RISC project, Acorn considered designing its own processor.
Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a 6502 second processor. This convinced Acorn's engineers they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a team to implement Wilson's model in hardware. The official Acorn RISC Machine project started in October 1983. They chose VLSI Technology as the silicon partner, as it was a source of ROMs and custom chips for Acorn. Wilson and Furber led the design, and they implemented it with a similar efficiency ethos as the 6502. A key design goal was achieving low-latency input/output handling like the 6502; the 6502's memory access architecture had let developers produce fast machines without costly direct memory access hardware. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. Wilson subsequently rewrote BBC BASIC in ARM assembly language, and the in-depth knowledge gained from designing the instruction set enabled the code to be very dense. The original aim of a principally ARM-based computer was achieved in 1987 with the release of the Acorn Archimedes. In 1992, Acorn once more won the Queen's Award for Technology for the ARM. The ARM2 featured a 32-bit data bus, a 26-bit address space, and 27 32-bit registers.

27.
Atmel AVR
–
AVR is a family of microcontrollers developed by Atmel beginning in 1996. These are modified Harvard architecture 8-bit RISC single-chip microcontrollers. AVR microcontrollers find many applications as embedded systems; they are also used in the Arduino line of open-source board designs. The AVR architecture was conceived by two students at the Norwegian Institute of Technology, Alf-Egil Bogen and Vegard Wollan. The original AVR MCU was developed at a local ASIC house in Trondheim, Norway, called Nordic VLSI at the time, now Nordic Semiconductor. It was known as a μRISC and was available as a silicon IP/building block from Nordic VLSI. When the technology was sold to Atmel from Nordic VLSI, the architecture was further developed by Bogen and Wollan at Atmel Norway. The designers worked closely with compiler writers at IAR Systems to ensure that the AVR instruction set provided efficient compilation of high-level languages. Atmel says that the name AVR is not an acronym and does not stand for anything in particular; the creators of the AVR give no definitive answer as to what the term AVR stands for. However, it is commonly accepted that AVR stands for Alf and Vegard's RISC processor. Note that the use of AVR here generally refers to the 8-bit RISC line of Atmel AVR microcontrollers. Among the first of the AVR line was the AT90S8515, which in a 40-pin DIP package has the same pinout as an 8051 microcontroller, including the external multiplexed address and data bus. The polarity of the RESET line was opposite, but other than that the pinout was identical. The AVR 8-bit microcontroller architecture was introduced in 1997; by 2003, Atmel had shipped 500 million AVR flash microcontrollers. The Arduino platform for simple electronics projects was released in 2005. AVRs are generally classified into several lines, such as tinyAVR (the ATtiny series) and megaAVR (the ATmega series). The 32-bit AVR32 is a different architecture, unrelated to the 8-bit AVR.
It has a 32-bit data path, SIMD and DSP instructions, along with other audio- and video-processing features; its instruction set is similar to other RISC cores, but it is not compatible with the original AVR. Flash, EEPROM, and SRAM are all integrated onto a single chip; some devices have a parallel external bus option to allow adding additional data memory or memory-mapped devices. Almost all devices have serial interfaces, which can be used to connect larger serial EEPROMs or flash chips. Program instructions are stored in non-volatile flash memory. Although the MCUs are 8-bit, each instruction takes one or two 16-bit words. The size of the program memory is usually indicated in the naming of the device itself. There is no provision for off-chip program memory; all code executed by the AVR core must reside in the on-chip flash.

28.
Hitachi H8
–
The H8 family of largely CISC microcontrollers is unrelated to the higher-performance SuperH family of 32-bit RISC-like microcontrollers. It has been supported in the Linux kernel since version 4.2. Built-in ROM and flash memory tends to range from 16 KB to 1024 KB, and RAM from 512 B to 512 KB. The basic architecture of the H8 is patterned after the DEC PDP-11, with eight 16-bit registers. Several companies provide compilers for the H8 family, and there is a complete GCC port, including a simulator. There are also various hardware emulators available. The family is continued with the H8SX 32-bit controllers. H8S parts may be found in digital cameras, some ThinkPad notebooks, printer controllers, smart cards, chess computers, and music synthesizers. The LEGO Mindstorms RCX, an advanced robot toy/educational tool, uses the H8/300. Namco employed an H8/3002 as a processor for various games it made in the late 1990s. H8 is referenced in the Muse song Space Dementia.

29.
System/370
–
The IBM System/370 was a model range of IBM mainframe computers announced on June 30, 1970 as the successors to the System/360 family. The line offered 128-bit floating-point arithmetic on all models; a Dynamic Address Translation (DAT) option was not announced until 1972. The original System/370 line underwent several architectural improvements during its roughly 20-year lifetime. The first System/370 machines were the Model 155 and the Model 165. Their changes included 13 new instructions, among which were MOVE LONG and COMPARE LOGICAL LONG, permitting operations on up to 2^24−1 bytes. They did not include support for virtual memory. In 1972, a very significant change was made when support for virtual memory was introduced with IBM's System/370 Advanced Function announcement; IBM had initially chosen to exclude virtual storage from the S/370 line. The S/370-145 had an associative memory used by the microcode for the DOS compatibility feature from its first shipments in June 1971; the same hardware was used by the microcode for DAT. The 145 microcode architecture simplified the addition of virtual memory, allowing this capability to be present in early 145s without the extensive modifications needed in other models. The Reference and Change bits of the Storage-protection Keys, however, were labeled on the rollers. Existing S/370-145 customers were happy to learn that they did not have to purchase a hardware upgrade in order to run DOS/VS or OS/VS1. After installation of the upgrade, the 155 and 165 were known as the S/370-155-II and S/370-165-II; IBM wanted customers to upgrade their 155 and 165 systems to the widely sold S/370-158 and -168. This led to the original S/370-155 and S/370-165 models being described as boat anchors. Later architectural changes primarily involved expansions in memory—both physical memory and virtual address space—to enable larger workloads and meet client demands for more storage. This was the trend as Moore's Law eroded the unit cost of memory.
As with all IBM mainframe development, preserving backward compatibility was paramount. In October 1981, the 3033 and 3081 processors added extended real addressing, which allowed 26-bit addressing for physical storage. This capability appeared later on other systems, such as the 4381 and 3090. The cross-memory services capability, which facilitated movement of data between address spaces, was actually available just prior to the S/370-XA architecture on the 3031, 3032, and 3033 processors. As described above, the S/370 product line underwent a major architectural change with S/370-XA. The evolution of S/370 addressing was always complicated by the basic S/360 instruction set design and its large installed code base, which relied on a 24-bit logical address. Most shops thus continued to run their 24-bit applications in a higher-performance 31-bit world. This evolutionary implementation had the characteristic of solving the most urgent problems first, relief for real memory addressing being needed sooner than virtual memory addressing. IBM's choice of 31-bit addressing for 370-XA involved various factors. The System/360 Model 67 had included a full 32-bit addressing mode, but this feature was not carried forward to the System/370 series, which began with only 24-bit addressing. When IBM later expanded the S/370 address space in S/370-XA, several reasons are cited for the choice of 31 bits: in particular, the standard subroutine calling convention marked the final parameter word by setting its high bit; there was interaction between 32-bit addresses and two instructions that treated their arguments as signed numbers; and there was input from key initial Model 67 sites, which had debated the alternatives during the initial system design period and had recommended 31 bits. The following table summarizes the major S/370 series and models; the second column lists the principal architecture associated with each series.

30.
System 390
–
The introduction covered new architecture, new hardware, and new software. The newly introduced ESA/390 architecture brought with it MVS/ESA, VM/ESA, and VSE/ESA. These systems followed the IBM 3090, and with over a decade of follow-ons, new models were offered on an ongoing basis. 18 models were announced on September 5, 1990 for the ES/9000, featuring ESCON fiber-optic channels. Two of the models could be configured with as much as 9 gigabytes of main memory. Optional vector facilities were available on 14 of the 18 models; the number of vector processors could be 1, 2, 3, 4, or 6. Six models were air-cooled, of which 4 were rack-mounted. Water-cooled ES/9000 models included the ES/9021-900, -820, -720, -620, -580, -500, and -340. The line was introduced as part of IBM's move towards lights-out operation and increased control of multiple system configurations. ESA/390 was introduced in September 1990 and was IBM's last 31-bit-address/32-bit-data mainframe computing design, copied by Amdahl and Hitachi. It was the successor of Enterprise Systems Architecture/370 and, in turn, was succeeded by the 64-bit z/Architecture in 2000. Machines supporting the architecture have been sold under the System/390 brand from the beginning of the 1990s. The 9672 implementations of System/390 were the first high-end IBM mainframe architecture implemented with CMOS CPU electronics rather than the traditional bipolar logic. The architecture employs a channel I/O subsystem in the System/360 tradition, and it also includes a standard set of CCW opcodes that new equipment is expected to provide. The architecture maintains problem-state backward compatibility with the 24-bit-address/32-bit-data System/360; however, the I/O subsystem is based on System/370 Extended Architecture, not on the original S/370 I/O instructions. Real memory is byte-addressable only, and virtual storage addressing is limited to 31 bits; total system memory, however, is not limited to 31 bits.
While the virtual storage of a single address space cannot exceed 2 GB, each address space can have Dataspaces associated with it. While Central Storage is limited to 2 GB, additional memory can be configured as Expanded Storage. With Expanded Storage, 4 KB pages can be moved between Central Storage and Expanded Storage; Expanded Storage can be used for a number of things, such as ultra-fast paging, disk caching, and virtual disks within the VM/CMS operating system. Under Linux/390 this memory cannot be used for caching; instead, it is supported by a block device driver, allowing it to be used as ultra-fast swap space. In addition, a machine may be divided into Logical Partitions. An important capability, forming a Parallel Sysplex, was added to the architecture in 1994. Some PC-based IBM-compatible mainframes which provide ESA/390 processors in smaller machines have been released over time. The Hercules emulator is a portable ESA/390 and z/Architecture machine emulator which supports enough devices to boot many ESA/390 operating systems; since it is written in pure C, it has been ported to many platforms. A commercial emulation product for IBM xSeries with higher execution speed is also available. The manuals Specific I/O-Device Commands in Enterprise Systems Architecture/390 and Common I/O-Device Commands document the standard commands. The ESA/370 architecture was introduced with the IBM 3090 mainframe, and the ESA/390 architecture was introduced with the IBM ES/9000 family of mainframes.

31.
X86
–
x86 is a family of backward-compatible instruction set architectures based on the Intel 8086 CPU and its Intel 8088 variant. The term x86 came into being because the names of several successors to Intel's 8086 processor end in 86. Many additions and extensions have been added to the x86 instruction set over the years, almost consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA, and many other companies; there are also open implementations. In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 usually represented any 8086-compatible CPU. Today, however, x86 usually implies binary compatibility also with the 32-bit instruction set of the 80386. An 8086 system could include optional coprocessors such as the 8087 and 8089. There were also the terms iRMX, iSBC, and iSBX, all together under the heading Microsystem 80; however, this naming scheme was quite temporary, lasting for a few years during the early 1980s. Today, x86 is ubiquitous in both stationary and portable computers, and is also used in midrange computers, workstations, and servers. A large amount of software, including operating systems such as DOS, Windows, Linux, BSD, Solaris, and macOS, functions with x86-based hardware. There have been attempts, including by Intel itself, to end the market dominance of the inelegant x86 architecture, which descended directly from the first simple 8-bit microprocessors. Examples of this are the iAPX 432, the Intel i960, and the Intel i860; however, the continuous refinement of x86 microarchitectures, circuitry, and semiconductor manufacturing has made it hard to replace x86 in many segments. The table below lists processor models and model series implementing variations of the x86 instruction set; each line item is characterized by significantly improved or commercially successful processor microarchitecture designs.
Such x86 implementations are seldom simple copies but often employ different internal microarchitectures, as well as different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors. Other companies which designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek. Some early versions of these microprocessors had heat dissipation problems. AMD later managed to establish itself as a serious contender with the K6 set of processors, which gave way to the very successful Athlon and Opteron. There were also other contenders, such as Centaur Technology and Rise Technology; VIA Technologies' energy-efficient C3 and C7 processors, which were designed by Centaur, have been sold for many years. Centaur's newest design, the VIA Nano, is their first processor with superscalar and speculative execution. It was, perhaps interestingly, introduced at about the same time as Intel's first in-order processor since the P5 Pentium, the Intel Atom. The instruction set architecture has twice been extended to a larger word size. In 1999–2003, AMD extended this 32-bit architecture to 64 bits and referred to it as x86-64 in early documents; Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64.

32.
X86-64
–
x86-64 is the 64-bit version of the x86 instruction set. It supports vastly larger amounts of virtual memory and physical memory than is possible on its 32-bit predecessors. x86-64 also provides 64-bit general-purpose registers and numerous other enhancements, and it is fully backward compatible with 16-bit and 32-bit x86 code. The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel, and VIA. The AMD K8 processor was the first to implement the architecture; this was the first significant addition to the x86 architecture designed by a company other than Intel. Intel was forced to follow suit and introduced a modified NetBurst family which was fully software-compatible with AMD's design. VIA Technologies introduced x86-64 in their VIA Isaiah architecture, with the VIA Nano. The x86-64 specification is distinct from the Intel Itanium architecture, which is not compatible on the native instruction set level with the x86 architecture. AMD64 was created as an alternative to the radically different IA-64 architecture. The first AMD64-based processor, the Opteron, was released in April 2003. AMD's processors implementing the AMD64 architecture include Opteron, Athlon 64, Athlon 64 X2, Athlon 64 FX, Athlon II, Turion 64, Turion 64 X2, Sempron, Phenom, Phenom II, FX, Fusion, and Ryzen. The primary defining characteristic of AMD64 is the availability of 64-bit general-purpose processor registers and 64-bit integer arithmetic and logical operations. The designers took the opportunity to make other improvements as well; some of the most significant changes are described below. Pushes and pops on the stack default to 8-byte strides, and pointers are 8 bytes wide. Additional registers: in addition to increasing the size of the general-purpose registers, the number of named general-purpose registers is increased from eight to 16, though AMD64 still has fewer registers than many common RISC instruction sets or VLIW-like machines such as the IA-64.
However, an AMD64 implementation may have far more internal registers than the number of architectural registers exposed by the instruction set. Additional XMM registers: similarly, the number of 128-bit XMM registers is also increased from 8 to 16. Larger virtual address space: the AMD64 architecture defines a 64-bit virtual address format, of which current implementations use 48 bits; this allows up to 256 TB of virtual address space. The architecture definition allows this limit to be raised in future implementations to the full 64 bits. This is compared to just 4 GB for the x86. This means that very large files can be operated on by mapping the entire file into the process's address space, rather than having to map regions of the file into and out of the address space. Larger physical address space: the original implementation of the AMD64 architecture implemented 40-bit physical addresses; current implementations extend this to 48-bit physical addresses and can therefore address up to 256 TB of RAM. The architecture permits extending this to 52 bits in the future. For comparison, 32-bit x86 processors are limited to 64 GB of RAM in Physical Address Extension mode, or 4 GB of RAM without PAE mode. Any implementation therefore allows the same physical address limit as under long mode.

33.
Itanium
–
Itanium is a family of 64-bit Intel microprocessors that implement the Intel Itanium architecture (formerly called IA-64). Intel markets the processors for enterprise servers and high-performance computing systems. The Itanium architecture originated at Hewlett-Packard (HP) and was later jointly developed by HP and Intel. Itanium-based systems have been produced by HP and several other manufacturers. In 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, Power Architecture, and SPARC. The currently shipping Itanium processor generation, Poulson, was released on November 8, 2012. In February 2017, Intel began releasing its successor, Kittson, to test customers. As Intel has not provided a roadmap beyond Kittson, and Hewlett-Packard is the only remaining major Itanium vendor, press and analysts have speculated that it will be the last Itanium generation. In 1989, HP determined that Reduced Instruction Set Computing (RISC) architectures were approaching a processing limit at one instruction per cycle. HP researchers investigated a new architecture, later named Explicitly Parallel Instruction Computing (EPIC), that allows the processor to execute multiple instructions in each clock cycle. EPIC implements a form of very long instruction word (VLIW) architecture, in which a single instruction word contains multiple instructions. Intel was willing to undertake a very large development effort on IA-64 in the expectation that the resulting microprocessor would be used by the majority of enterprise systems manufacturers. HP and Intel initiated a joint development effort with a goal of delivering the first product, Merced. Compaq and Silicon Graphics decided to abandon further development of the Alpha and MIPS architectures, respectively. Several groups developed operating systems for the architecture, including Microsoft Windows, OpenVMS, Linux, and UNIX variants such as HP-UX, Solaris, Tru64 UNIX, and Monterey/64.
By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought. Technical difficulties included the very high transistor counts needed to support the wide instruction words and the large caches. There were also structural problems within the project, as the two parts of the joint team used different methodologies and had different priorities. Since Merced was the first EPIC processor, the development effort encountered more unanticipated problems than the team was accustomed to. In addition, the EPIC concept depends on compiler capabilities that had never been implemented before, so more research was needed. Intel announced the name of the processor, Itanium, on October 4, 1999. Within hours, the name Itanic had been coined on a Usenet newsgroup, a reference to the Titanic. By the time Itanium was released in June 2001, its performance was not superior to competing RISC and CISC processors. Itanium competed at the low end with servers based on x86 processors; Intel repositioned Itanium to focus on high-end business and HPC computing, attempting to duplicate x86's successful horizontal market. POWER and SPARC remained strong, while the 32-bit x86 architecture continued to grow into the enterprise space. Only a few thousand systems using the original Merced Itanium processor were sold, due to relatively poor performance, high cost, and limited software availability.

34.
Motorola 68000
–
The Motorola 68000 is a 32-bit CISC microprocessor with a 16-bit external data bus, designed and marketed by Motorola Semiconductor Products Sector. After 38 years in production, the 68000 architecture is still in use. The 68000 grew out of the MACSS project, begun in 1976 to develop an entirely new architecture without backward compatibility. It would be a higher-powered sibling complementing the existing 8-bit 6800 line rather than a compatible successor. In the end, the 68000 did retain a bus protocol compatibility mode for existing 6800 peripheral devices, and a version with an 8-bit data bus was produced. However, the designers focused on the future, or forward compatibility. For instance, the CPU registers are 32 bits wide, though few self-contained structures in the processor itself operate on 32 bits at a time. The MACSS team drew heavily on the influence of minicomputer processor design, such as the PDP-11 and VAX systems. In the mid-1970s, the 8-bit microprocessor manufacturers raced to introduce the 16-bit generation. National Semiconductor had been first with its IMP-16 and PACE processors in 1973–1975; Intel had worked on their advanced 16/32-bit Intel iAPX 432 since 1975 and their Intel 8086 since 1976. Arriving late to the 16-bit arena afforded the new processor more transistors and 32-bit macroinstructions. The original MC68000 was fabricated using an HMOS process with a 3.5 µm feature size. It was formally introduced in September 1979, and initial samples were released in February 1980. Initial speed grades were 4, 6, and 8 MHz; 10 MHz chips became available during 1981, and 12.5 MHz chips by June 1982. The 16.67 MHz 12F version of the MC68000, the fastest version of the original HMOS chip, was not produced until the late 1980s. Tom Gunter, retired Corporate Vice President at Motorola, is known as the Father of the 68000.
The 68000 was used in Microsoft Xenix systems as well as an early NetWare Unix-based server. The 68000 was also used in the first generation of desktop laser printers, including the original Apple Inc. LaserWriter and the HP LaserJet. In 1982, the 68000 received an update to its ISA allowing it to support virtual memory and to conform to the Popek and Goldberg virtualization requirements. The updated chip was called the 68010. A further extended version which exposed 31 bits of the address bus was also produced, in small quantities, as the 68012. To support lower-cost systems and control applications with smaller memory sizes, Motorola introduced the 8-bit compatible MC68008. This was a 68000 with an 8-bit data bus and a narrower address bus. After 1982, Motorola devoted more attention to the 68020 and 88000 projects. Several other companies were second-source manufacturers of the HMOS 68000. These included Hitachi, who shrank the feature size to 2.7 µm for their 12.5 MHz version, as well as Mostek, Rockwell, Signetics, and Thomson/SGS-Thomson. Toshiba was also a maker of the CMOS 68HC000.

35.
MIPS architecture
–
MIPS is a reduced instruction set computer (RISC) instruction set architecture developed by MIPS Technologies. The early MIPS architectures were 32-bit, with 64-bit versions added later. Multiple revisions of the MIPS instruction set exist, including MIPS I, MIPS II, MIPS III, MIPS IV, MIPS V, MIPS32, and MIPS64. The current revisions are MIPS32 and MIPS64, which define a control register set as well as the instruction set. Computer architecture courses in universities and technical schools often study the MIPS architecture, and the architecture greatly influenced later RISC architectures such as Alpha. It used to be popular in supercomputers, but all such systems have since dropped off the TOP500 list. Until late 2006, MIPS processors were used in many of SGI's computer products. MIPS implementations were also used by Digital Equipment Corporation, NEC, Pyramid Technology, and Siemens Nixdorf. In the mid-to-late 1990s, it was estimated that one in three RISC microprocessors produced was a MIPS implementation. Windows NT supported MIPS until the release of Windows NT 4.0 SP3 in 1997. MIPS is a modular architecture supporting up to four coprocessors. In MIPS terminology, COP0 is the System Control Coprocessor and COP1 is an optional FPU. For example, in the original PlayStation game console, COP0 is the System Control Coprocessor and COP2 is the Geometry Transformation Engine. In the PlayStation 2 game console, COP0 is a Toshiba R5900 chip and COP1 is an FPU. MIPS is a load-store architecture, meaning it only performs arithmetic and logic operations between CPU registers, requiring load/store instructions to access memory. Processors based upon the MIPS instruction set have been in production since 1988, and over time several enhancements of the instruction set were made. The different revisions which have been introduced are MIPS I, MIPS II, MIPS III, MIPS IV, and MIPS V.
Each revision is a superset of its predecessors. When MIPS Technologies was spun out of Silicon Graphics in 1998, it refocused on the embedded market. At that time, the strict superset property was found to be a problem, and the architecture definition was changed to define a 32-bit MIPS32 and a 64-bit MIPS64. MIPS I was introduced in 1985 with the R2000, and MIPS II in 1990 with the R6000. MIPS III, introduced in 1992 in the R4000, adds 64-bit registers and integer instructions and a floating-point square root instruction. MIPS IV is the fourth version of the architecture. It is a superset of MIPS III and is compatible with all existing versions of MIPS; the first implementation of MIPS IV was the R8000, which was introduced in 1994.
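The load-store discipline described above can be sketched in miniature: arithmetic happens only between registers, and memory is touched only by explicit loads and stores. The register numbers, the tiny word-addressed "memory", and the helper names below are all invented for this toy model.

```c
#include <stdint.h>

/* Toy model of a load-store machine in the MIPS style: arithmetic only
   between registers, memory reached only via explicit lw/sw. */
static uint32_t reg[32];     /* MIPS has 32 general-purpose registers */
static uint32_t mem[64];     /* word-addressed toy memory */

void lw(int rt, int base, int offset)  { reg[rt] = mem[(reg[base] + offset) / 4]; }
void sw(int rt, int base, int offset)  { mem[(reg[base] + offset) / 4] = reg[rt]; }
void addu(int rd, int rs, int rt)      { reg[rd] = reg[rs] + reg[rt]; }

/* c = a + b, scheduled the way a load-store compiler must:
   two loads, one register-to-register add, one store. */
void add_words(uint32_t addr_a, uint32_t addr_b, uint32_t addr_c) {
    reg[1] = addr_a; lw(8, 1, 0);      /* lw   $t0, 0($1) */
    reg[1] = addr_b; lw(9, 1, 0);      /* lw   $t1, 0($1) */
    addu(10, 8, 9);                    /* addu $t2, $t0, $t1 */
    reg[1] = addr_c; sw(10, 1, 0);     /* sw   $t2, 0($1) */
}
```

Contrast this with a CISC design such as the VAX, where a single instruction could take both operands directly from memory.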

36.
PA-RISC
–
PA-RISC is an instruction set architecture developed by Hewlett-Packard. As the name implies, it is a reduced instruction set computer (RISC) architecture. The design is also referred to as HP/PA, for Hewlett-Packard Precision Architecture. The architecture was introduced on 26 February 1986, when the HP 3000 Series 930 and HP 9000 Model 840 computers were launched featuring the first implementation. PA-RISC has been succeeded by the Itanium ISA, jointly developed by HP and Intel, and HP stopped selling PA-RISC-based HP 9000 systems at the end of 2008. In the late 1980s, HP was building four series of computers, all based on CISC CPUs. One line was the IBM PC compatible Intel i286-based Vectra series. HP planned to use PA-RISC to move all of their non-PC-compatible machines to a single RISC CPU family. Precision Architecture was introduced in 1986 with thirty-two 32-bit integer registers and sixteen 64-bit floating-point registers. The number of floating-point registers was doubled in the 1.1 version to 32 once it became apparent that 16 were inadequate. The architects included Allen Baum, Hans Jeans, Michael J. Mahon, Ruby Bei-Loh Lee, Russel Kao, Steve Muchnick, Terrence C. Miller, David Fotland, and William S. Worley. The first implementation was the TS1, a central processing unit built from discrete transistor-transistor logic devices. Later implementations were multi-chip VLSI designs fabricated in NMOS and CMOS processes. They were first used in a new series of HP 3000 machines in the late 1980s – the 930 and 950, commonly known at the time as Spectrum systems. The HP 9000 machines were soon upgraded with the PA-RISC processor as well, running the HP-UX version of UNIX. Other operating systems ported to the PA-RISC architecture include Linux, OpenBSD, and NetBSD. An interesting aspect of the PA-RISC line is that most of its generations have no Level 2 cache.
Instead, large Level 1 caches are used, formerly as separate chips connected by a bus; only the PA-7100LC and PA-7300LC had L2 caches. Another innovation of the PA-RISC was the addition of vectorized instructions in the form of MAX (Multimedia Acceleration eXtensions). The Precision RISC Organization, an industry group led by HP, was founded in 1992 to promote the PA-RISC architecture. Members included Hitachi, Redbrick Software, Allegro Consultants, Mitsubishi, NEC, and OKI. The ISA was extended in 1996 to 64 bits, with this revision named PA-RISC 2.0. The first PA-RISC 2.0 implementation was the PA-8000, which was introduced in January 1996.
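The idea behind MAX-style vectorized instructions is subword parallelism: several narrow values packed into one register, processed by one word-wide operation. The carry-masking trick below is a generic SIMD-within-a-register idiom illustrating that idea in C, not MAX's actual encoding or mnemonics.

```c
#include <stdint.h>

/* SIMD-within-a-register sketch in the spirit of PA-RISC's MAX: add two
   16-bit lanes packed into one 32-bit word with a single word-wide add.
   Generic SWAR idiom, not the real MAX instruction encoding. */
uint32_t hadd2(uint32_t a, uint32_t b) {
    /* Add the low 15 bits of each lane so no carry can cross the lane
       boundary, then fold the lanes' top bits back in with XOR. */
    uint32_t sum = (a & 0x7FFF7FFFu) + (b & 0x7FFF7FFFu);
    return sum ^ ((a ^ b) & 0x80008000u);
}
```

Each lane wraps modulo 2^16 independently, exactly what packed-pixel or audio code wants from a parallel halfword add.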

37.
PowerPC
–
PowerPC is a RISC instruction set architecture created by the 1991 Apple–IBM–Motorola alliance, known as AIM. PowerPC was the cornerstone of AIM's PReP and Common Hardware Reference Platform initiatives in the 1990s. It has since become niche in personal computers, but remains popular as an embedded and high-performance processor, and its use in game consoles and embedded applications provided a wide array of uses. In addition, PowerPC CPUs are still used in AmigaOne and third-party AmigaOS 4 personal computers. The history of RISC began with IBM's 801 research project, on which John Cocke was the lead developer and where he developed the concepts of RISC in 1975–78. 801-based microprocessors were used in a number of IBM embedded products, and the RT was a rapid design implementing the RISC architecture. The result was the POWER instruction set architecture, introduced with the RISC System/6000 in early 1990. The original POWER microprocessor, one of the first superscalar RISC implementations, was a high-performance, multi-chip design. IBM soon realized that a single-chip microprocessor was needed in order to scale its RS/6000 line from lower-end to high-end machines, and work on a one-chip POWER microprocessor, designated the RSC, began. In early 1991, IBM realized its design could potentially become a high-volume microprocessor used across the industry. IBM approached Apple with the goal of collaborating on the development of a family of single-chip microprocessors based on the POWER architecture, and this three-way collaboration became known as the AIM alliance, for Apple, IBM, Motorola. In 1991, the PowerPC was just one facet of an alliance among these three companies. The PowerPC chip was one of several joint ventures involving the three, in their efforts to counter the growing Microsoft–Intel dominance of personal computing. For Motorola, POWER looked like an unbelievable deal: it allowed them to sell a widely tested and powerful RISC CPU for little design cash of their own.
It also maintained ties with an important customer, Apple, and seemed to offer the possibility of adding IBM too. At this point Motorola already had its own RISC design in the form of the 88000, which was doing poorly in the market. Motorola was doing well with their 68000 family, and the majority of the funding was focused on this, so the 88000 effort was somewhat starved for resources. However, the 88000 was already in production; Data General was shipping 88000 machines, and the 88000 had also achieved a number of embedded design wins in telecom applications. The result of these various requirements was the PowerPC specification. The differences between the earlier POWER instruction set and PowerPC are outlined in Appendix E of the manual for PowerPC ISA v.2.02. When the first PowerPC products reached the market, they were met with enthusiasm. In addition to Apple, both IBM and the Motorola Computer Group offered systems built around the processors. Microsoft released Windows NT 3.51 for the architecture, which was used in Motorola's PowerPC servers, and Sun Microsystems offered a version of its Solaris OS.

38.
SuperH
–
SuperH is a 32-bit reduced instruction set computing (RISC) instruction set architecture developed by Hitachi and currently produced by Renesas. It is implemented by microcontrollers and microprocessors for embedded systems. As of 2015, many of the original patents for the SuperH architecture are expiring, and the SH-2 CPU has been reimplemented as open source hardware under the name J2. The SuperH processor core family was first developed by Hitachi in the early 1990s; Hitachi has developed a complete group of upward-compatible instruction set CPU cores. The SH-1 and the SH-2 were used in the Sega Saturn. These cores have 16-bit instructions for better code density than 32-bit instructions, which was a great benefit at the time, due to the high cost of main memory. A few years later the SH-3 core was added to the SH CPU family; new features included another interrupt concept and a memory management unit. The SH-3 core also got a DSP extension, then called SH-3-DSP, with extended data paths for efficient DSP processing, special accumulators, and a dedicated MAC-type DSP engine; this core unified the DSP and RISC processor worlds. A derivative was also used with the original SH-2 core. Between 1994 and 1996, 35.1 million SuperH devices were shipped worldwide. For the Dreamcast, Hitachi developed the SH-4 architecture; superscalar instruction execution and a vector floating point unit were the highlights of this architecture. SH-4 based standard chips were introduced around 1998. The SH-3 and SH-4 architectures support both big-endian and little-endian byte ordering. Hitachi and STMicroelectronics started collaborating as early as 1997 on the design of the SH-4. In 2003, Hitachi and Mitsubishi Electric formed a joint venture called Renesas Technology, with Hitachi controlling 55% of it. In 2004, Renesas Technology bought STMicroelectronics's share of ownership in SuperH, Inc. Renesas Technology later became Renesas Electronics, following their merger with NEC Electronics.
The SH-5 design supported two modes of operation. SHcompact mode is equivalent to the user-mode instructions of the SH-4 instruction set. SHmedia mode is different, using 32-bit instructions with sixty-four 64-bit integer registers. In SHmedia mode the destination of a branch is loaded into a branch register separately from the branch instruction. This allows the processor to prefetch instructions for a branch without having to snoop the instruction stream. SH-5 differs, however, in that its backward compatibility mode is the 16-bit encoding rather than the 32-bit encoding. The evolution of the SuperH architecture still continues; the last of the SH-2 patents expired in 2014. At LinuxCon Japan 2015, j-core developers presented a cleanroom reimplementation of the SH-2 ISA with extensions; subsequently, a design walkthrough was presented at ELC 2016. The open source BSD-licensed VHDL code for the J2 core has been proven on Xilinx FPGAs and on ASICs manufactured on TSMC's 180 nm process, and additional instructions are easy to add.

39.
SPARC
–
The Scalable Processor Architecture (SPARC) is a reduced instruction set computing (RISC) instruction set architecture originally developed by Sun Microsystems. Since the establishment of SPARC International, Inc. in 1989, the architecture has been managed by that organization, which is also responsible for licensing and promoting the SPARC architecture, managing SPARC trademarks, and providing conformance testing. As a result, SPARC is fully open and non-proprietary. Later, SPARC processors were used in SMP and CC-NUMA servers produced by Sun, Solbourne, and Fujitsu, among others, and designed for 64-bit operation. As of April 2017, the latest commercial high-end SPARC processors are Fujitsu's SPARC64 XII and SPARC64 XIfx. The SPARC architecture was heavily influenced by the earlier RISC designs, including the RISC I and II from the University of California, Berkeley, and the IBM 801. These original RISC designs were minimalist, including as few features or op-codes as possible, and this made them similar to the MIPS architecture in many ways, including the lack of instructions such as multiply or divide. Another feature of SPARC influenced by this early RISC movement is the delay slot. The SPARC processor usually contains as many as 160 general-purpose registers; according to the Oracle SPARC Architecture 2015 specification, an implementation may contain from 72 to 640 general-purpose 64-bit registers. At any point, only 32 of them are visible to software: 8 are a set of global registers, and the other 24 form what is called a register window, which slides at function call/return. Each window has 8 local registers and shares 8 registers with each of the adjacent windows. The shared registers are used for passing function parameters and returning values, and the local registers are used for retaining local values across function calls. Other architectures that include similar register file features include the Intel i960 and IA-64. The architecture has gone through several revisions.
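The window overlap described above can be modeled in a few lines: 8 globals are always visible, a 24-register window (8 in, 8 local, 8 out) slides by 16 on call and return, so the caller's "out" registers become the callee's "in" registers. The register-file size and the helper names are toy choices for this sketch; real SPARC hardware wraps the window pointer and traps to spill windows when the file overflows, which is omitted here.

```c
/* Toy model of SPARC's overlapping register windows. */
enum { NWINDOWED = 128, SLIDE = 16 };

static int g[8];               /* global registers, always visible */
static int r[NWINDOWED];       /* windowed register file (no wraparound here) */
static int cwp = 0;            /* current window pointer (index into r[]) */

int *in_reg(int i)    { return &r[cwp + i]; }        /* %i0..%i7 */
int *local_reg(int i) { return &r[cwp + 8 + i]; }    /* %l0..%l7 */
int *out_reg(int i)   { return &r[cwp + 16 + i]; }   /* %o0..%o7 */

void save_window(void)    { cwp += SLIDE; }   /* on function call */
void restore_window(void) { cwp -= SLIDE; }   /* on function return */
```

Because the slide is 16, not 24, the caller's out registers and the callee's in registers are the same storage: parameters are "passed" without copying anything.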
It gained hardware multiply and divide functionality in Version 8, and 64-bit addressing and data were added to the Version 9 SPARC specification published in 1994. In SPARC Version 8, the floating-point register file has 16 double-precision registers. Each of them can be used as two single-precision registers, providing a total of 32 single-precision registers. An odd-even number pair of double-precision registers can be used as a quad-precision register. SPARC Version 9 added 16 more double-precision registers, but these additional registers cannot be accessed as single-precision registers. No SPARC CPU implements quad-precision operations in hardware as of 2004. Tagged add and subtract instructions perform adds and subtracts on values, checking that the bottom two bits of both operands are 0 and reporting overflow if they are not. This can be useful in the implementation of the run time for ML, Lisp, and similar languages. The endianness of the 32-bit SPARC V8 architecture is purely big-endian; little-endian access is used for accessing data from inherently little-endian devices. There have been three major revisions of the architecture.
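The tagged-arithmetic check above is easy to express in C. In a Lisp- or ML-style runtime, a small integer ("fixnum") carries a 00 tag in its bottom two bits; the function below, whose name and shape are invented for this sketch, performs the add only when both operands carry that tag, mirroring what SPARC's tagged add reports as a tag overflow.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of SPARC-style tagged add: operands are fixnums only if their
   bottom two bits are 00. A fixnum n is encoded as n << 2. */
bool tagged_add(int32_t a, int32_t b, int32_t *out) {
    if ((a | b) & 0x3)          /* either operand has nonzero tag bits? */
        return false;           /* not two fixnums: the "tag overflow" case */
    *out = a + b;               /* tags are 00, so the sum stays tagged */
    return true;
}
```

The payoff is that the common case, adding two small integers, needs no separate type check before the arithmetic: one instruction does both.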

40.
VAX
–
VAX is a discontinued instruction set architecture developed by Digital Equipment Corporation in the mid-1970s. The VAX-11/780, introduced on October 25, 1977, was the first of a range of popular and influential VAX systems. A 32-bit system with a CISC architecture based on DEC's earlier PDP-11, VAX was designed to extend or replace DEC's various PDP ISAs. The VAX architecture's primary features were virtual addressing and its instruction set. Later versions offloaded the compatibility mode and some of the less-used CISC instructions to emulation in the system software. The VAX instruction set was designed to be powerful and orthogonal; when it was introduced, many programs were written in assembly language, so having a programmer-friendly instruction set was important. In time, as more programs were written in higher-level languages, the instruction set became less visible. One unusual aspect of the VAX instruction set is the presence of register masks at the start of each subprogram. These are arbitrary bit patterns that specify, when control is passed to the subprogram, which registers are to be preserved. Since register masks are a form of data embedded within the executable code, they can complicate optimization techniques that are applied to machine code. The native VAX operating system is Digital's VAX/VMS. The VAX architecture and OpenVMS operating system were engineered concurrently to take maximum advantage of each other, as was the initial implementation of the VAXcluster facility. Other VAX operating systems have included various releases of BSD UNIX up to 4.3BSD, Ultrix-32, and VAXELN; more recently, NetBSD and OpenBSD support various VAX models, and some work has been done on porting Linux to the VAX architecture. The first VAX model sold was the VAX-11/780, introduced on October 25, 1977 at Digital Equipment Corporation's Annual Meeting of Shareholders. Bill Strecker, C. Gordon Bell's doctoral student at Carnegie Mellon University, was responsible for the architecture.
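The entry-mask mechanism described above can be sketched as data-driven register saving: the call sequence reads the mask at the subprogram's entry point and pushes exactly the registers whose bits are set. The array sizes and function names below are invented for the illustration; only the idea of a per-subprogram save mask comes from the architecture.

```c
#include <stdint.h>

/* Sketch of a VAX-style entry mask: bit n set means "register n must be
   preserved", so the call mechanism pushes it before entering the body. */
enum { NREGS = 12 };            /* toy register file R0..R11 */

int saved_count(uint16_t mask) {          /* how many registers get pushed */
    int n = 0;
    for (int i = 0; i < NREGS; i++)
        if (mask & (1u << i)) n++;
    return n;
}

void save_registers(uint16_t mask, const int32_t reg[NREGS],
                    int32_t stack[], int *sp) {
    for (int i = 0; i < NREGS; i++)       /* push each register named in the mask */
        if (mask & (1u << i))
            stack[(*sp)++] = reg[i];
}
```

Because the mask sits in the instruction stream as plain data, a tool walking the machine code must know to skip it, which is exactly the optimization-complicating property the text mentions.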
Many different models with different prices, performance levels, and capacities were subsequently created, and VAX superminicomputers were very popular in the early 1980s. For a while the VAX-11/780 was used as a baseline in CPU benchmarks; the actual number of instructions it executed in one second was about 500,000, which led to complaints of marketing exaggeration. The result was the definition of a "VAX MIPS", the speed of a VAX-11/780. Within the Digital community the term VUP (VAX Unit of Performance) was the more common term, because MIPS do not compare well across different architectures. The related term cluster VUPs was informally used to describe the aggregate performance of a VAXcluster. The VAX-11/780 included a subordinate stand-alone LSI-11 computer that performed microcode load and booting; this was dropped from subsequent VAX models. Enterprising VAX-11/780 users could therefore run three different Digital Equipment Corporation operating systems: VMS on the VAX processor, and either RSX-11M or RT-11 on the LSI-11. The VAX went through many different implementations. The original VAX-11/780 was implemented in TTL and filled a cabinet with a single CPU.

41.
A29K
–
The AMD Am29000, often simply called the 29k, is a popular family of 32-bit RISC microprocessors and microcontrollers developed and fabricated by Advanced Micro Devices. They were, for a time, the most popular RISC chips on the market. In late 1995 AMD dropped development of the 29k because the design team was transferred to support the PC side of the business. What remained of AMD's embedded business was realigned towards the embedded 186 family of 80186 derivatives; the majority of AMD's resources were then concentrated on their high-performance desktop x86 clones, using many of the ideas and individual parts of the latest 29k to produce the AMD K5. The 29000 evolved from the same Berkeley RISC design that led to the Sun SPARC. One trick used in all of the Berkeley-derived designs is the concept of register windows: the basic idea is to use a large set of registers as a stack, loading local data into a set of registers during a call and marking them dead when the procedure returns. Values being returned from the routines would be placed in the global page. In the original Berkeley design, SPARC, and i960, the windows were fixed in size: a routine using only one local variable would still use up eight registers on the SPARC. It was here that the 29000 differed from these earlier designs, in that it used a variable window size to improve usage; in this example only two registers would be used, one for the variable and another for the return address. It also added more registers, including 128 registers for the procedure stack; in comparison, the SPARC had 128 registers in total, and the global set was a standard window of eight. The 29000 also extended the register window stack with an in-memory stack: when the windows filled, calls would be pushed off the end of the register stack into memory and restored as required when the routines returned. Generally the 29000's register usage was more advanced than competing designs based on the Berkeley concepts.
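The spill-to-memory behavior described above can be modeled as a register file that overflows into a backing stack. Everything below except the 128-register file size is invented for the sketch, and the linear shifting is a deliberate simplification: real hardware treats the register file as a ring and spills whole cache-line-sized chunks.

```c
#include <stdint.h>

/* Toy model of 29000-style variable-size register windows backed by an
   in-memory stack: each call allocates only the registers it needs, and
   when the 128-register file fills, the oldest words spill to memory. */
enum { RFILE = 128 };

static int32_t rfile[RFILE];
static int32_t spill_mem[1024];
static int rf_top = 0;          /* registers currently in use */
static int spilled = 0;         /* words pushed out to memory */

void call_alloc(int nregs) {    /* allocate a frame of nregs registers */
    while (rf_top + nregs > RFILE) {      /* no room: spill oldest word */
        spill_mem[spilled++] = rfile[0];
        for (int i = 1; i < rf_top; i++)  /* naive shift; real hw uses a ring */
            rfile[i - 1] = rfile[i];
        rf_top--;
    }
    rf_top += nregs;
}

void ret_free(int nregs) {      /* free the frame, refilling from memory */
    rf_top -= nregs;
    while (spilled > 0 && rf_top < RFILE) {
        for (int i = rf_top; i > 0; i--)  /* make room at the bottom */
            rfile[i] = rfile[i - 1];
        rfile[0] = spill_mem[--spilled];
        rf_top++;
    }
}
```

The variable `nregs` is the point of the design: a leaf routine with one local pays for two registers, not a fixed window of eight.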
Another difference is that the 29000 included no special-purpose condition code register; any register could be used for this purpose, allowing the conditions to be easily saved, at the expense of complicating some code. The branch target cache mitigated branch delays by storing four instructions from the target side of the branch. The first 29000 was released in 1988, including a built-in MMU; the 29005 was a cut-down version. The line was upgraded with the 29030 and 29035, which included 8 KB or 4 KB of instruction cache, respectively. Another update integrated the FPU on-die and added a 4 KB data cache to produce the 29040. The final general-purpose version was the 29050, which also has much better floating point performance than previous 29k microprocessors. Several portions of the 29050 design were used as the basis for the K5 series of x86-compatible processors.

42.
ETRAX CRIS
–
The ETRAX CRIS is a series of CPUs for use in embedded systems, produced since 1993. The name is an acronym of the chips' features: Ethernet, Token Ring, AXis; CRIS is short for Code Reduced Instruction Set. Token Ring support has been taken out of the latest chips, as it has become obsolete. The TGA, developed in 1986, was a communications transceiver for the AS/400 architecture. The first chip with an embedded microcontroller was the CGA-1, which contained both IBM 3270 and AS/400 communications. It also had a small microcontroller and various I/Os, including serial. The CGA-1 chip was designed by Martin Gren, and the bug-fixed CGA-2 by Martin Gren and Staffan Göransson. In 1993 the ETRAX-4 introduced 10 Mbit/s Ethernet and Token Ring controllers and had improved performance over previous models, along with a SCSI controller. The ETRAX 100 features a 10/100 Mbit/s Ethernet controller, along with ATA. In 2000, the ETRAX 100LX design added an MMU, as well as USB, synchronous serial, and SDRAM support, and boosted the CPU performance up to 100 MIPS. Since it has an MMU, it can run the Linux kernel without modifications. A related system-on-a-chip is an ETRAX 100LX plus flash memory, SDRAM, and an Ethernet PHYceiver; two versions were commercialized, the ETRAX 100LX MCM 2+8 and the ETRAX 100LX MCM 4+16. Designed in 2005, and with full Linux 2.6 support, the next generation features: a 200 MIPS, 32-bit RISC CRIS CPU core with a 5-stage pipeline, 16 kB data and 16 kB instruction caches; two 10/100 Mbit/s Ethernet controllers; a crypto accelerator supporting AES, DES, Triple DES, and SHA-1; 128 kB of on-chip RAM; and a microprogrammable I/O processor supporting PC-Card, CardBus, PCI, USB FS/HS host, USB FS device, SCSI, and ATA. The device comes in a 256-pin Plastic Ball Grid Array package, and an SDK is provided by Axis on the development site. Several hardware manufacturers offer developer boards featuring an ETRAX chip.

Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles which are used with similar meanings are programmer, software analyst. According to developer Eric Sink, the differences between system design, software developme

1.
Mistory software developer group

GNU Project
–
The GNU Project /ɡnuː/ is a free-software, mass-collaboration project, first announced on September 27, 1983 by Richard Stallman at MIT. GNU software guarantees these freedom-rights legally, and is free software. In order to ensure that the software of a computer grants its users all freedom rights, even the most fundamental and important part. Stal

1.
GNU mascot, by Aurelio A. Heckert (derived from a more detailed version by Etienne Suvasa)

Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for their hardware development: an A test was the verification of a new product before public announcement, a B test was the verification before releasing the product to be manufactured, and a C test was the final test before general availability.

1.
Software release life cycle map

Repository (version control)
–
In revision control systems, a repository is an on-disk data structure which stores metadata for a set of files and/or directory structure. Some of the metadata that a repository contains includes, among other things, a set of references to commit objects, called heads. The main purpose of a repository is to store a set of files and these differenc

1.
Local only

C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language. Therefore, C was useful for applications that had formerly been coded in assembly language. Desp

2.
The C Programming Language (often referred to as "K&R"), the seminal book on C

Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer – from cellular phones, the dominant desktop operating system is

1.
OS/360 was used on most IBM mainframe computers beginning in 1966, including computers used by the Apollo program.

3.
The first server for the World Wide Web ran on NeXTSTEP, based on BSD

Unix-like
–
A Unix-like operating system is one that behaves in a manner similar to a Unix system, while not necessarily conforming to or being certified to any version of the Single UNIX Specification. A Unix-like application is one that behaves like the corresponding Unix command or shell, there is no standard for defining the term, and some difference of op

1.
Richard Stallman, founder of the GNU Project and the free software movement

2.
Evolution of Unix and Unix-like systems, starting in 1969

Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of families of operating systems, each of which cater to a certain sector of the computing industry with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedde

1.
Screenshot of Windows 10, showing the Action Center and Start Menu

Debugger
–
A debugger or debugging tool is a computer program that is used to test and debug other programs. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact. A trap occurs when the program cannot normally continue because of a programming bug or invalid data. For example, the program might have tried to use an ins

1.
Winpdb debugging itself

Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law all software is copyright protected, in source code as well as object code form. The only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for

1.
Diagram of software under various licenses

GNU General Public License
–
The GNU General Public License is a widely used free software license, which guarantees end users the freedom to run, study, share and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project. The GPL is a copyleft license, which means that derivative work can only be distributed under the same lic

1.
Richard Stallman at the launch of the first draft of the GNU GPLv3. MIT, Cambridge, Massachusetts, USA. To his right is Columbia Law Professor Eben Moglen, chairman of the Software Freedom Law Center

2.
GNU GPLv3 Logo

Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of differen

1.
The Manchester Mark 1 ran programs written in Autocode from 1952.

2.
A selection of textbooks that teach programming, in languages both popular and obscure. These are only a few of the thousands of programming languages and dialects that have been designed in history.

Free Pascal
–
Free Pascal Compiler is a compiler for the closely related programming language dialects, Pascal and Object Pascal. It is free software released under the GNU General Public License, the dialect is selected on a per-unit basis, and more than one dialect can be used to produce one program. It follows a write once, compile anywhere philosophy, and is

1.
The Free Pascal IDE for Linux. The computer was being prepared for use in the 2002 National Olympiad in Informatics, China

2.
FPC in Cygwin

3.
Dialects

Fortran
–
Fortran is a general-purpose, imperative programming language that is especially suited to numeric computation and scientific computing. It is a language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining c

1.
The Fortran Automatic Coding System for the IBM 704 (15 October 1956), the first Programmer's Reference Manual for Fortran

3.
FORTRAN-77 program with compiler output, written on a CDC 175 at RWTH Aachen University, Germany, in 1987

Java (programming language)
–
Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented, and specifically designed to have as few implementation dependencies as possible. It is intended to let application developers write once, run anywhere. Java applications are typically compiled to bytecode that can run on any Java virtual machi

1.
James Gosling, the creator of Java (2008)

2.
Java

3.
Java Control Panel, version 7
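As a small illustration of the compile-to-bytecode model described above (the class and method names here are illustrative, not taken from any particular source):

```java
// Minimal sketch of Java's "write once, run anywhere" model: javac
// compiles this source file to platform-neutral bytecode
// (Greeting.class), which any conforming Java virtual machine can
// then execute, regardless of the underlying hardware.
public class Greeting {
    // A pure method, so the behavior is easy to verify in isolation.
    static String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet("world"));
    }
}
```

Running `javac Greeting.java` followed by `java Greeting` prints the greeting; the intermediate `.class` file is the bytecode the summary above refers to.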

Richard Stallman
–
Richard Matthew Stallman, often known by his initials, rms, is an American software freedom activist and programmer. He campaigns for software to be distributed in a manner such that its users receive the freedoms to use, study, distribute, and modify that software. Software that ensures these freedoms is termed free software. Stallman launched the GNU Project and founded the Free Software Foundation.

4.
Richard Stallman giving a speech on "Free Software and your freedom" at the biennale du design of Saint-Étienne (2008)

GNU
–
GNU /ɡnuː/ is an operating system and an extensive collection of computer software. GNU is composed wholly of free software, most of which is licensed under the GNU Project's own GPL. GNU is a recursive acronym for "GNU's Not Unix", chosen because GNU's design is Unix-like but differs from Unix by being free software. The GNU project includes an operating system kernel, GNU Hurd.

1.
Richard Stallman, founder of the GNU project

2.
GNU

GNU Emacs
–
GNU Emacs is the most popular and most ported Emacs text editor. It was created by GNU Project founder Richard Stallman. In common with other varieties of Emacs, GNU Emacs is extensible using a Turing-complete programming language. GNU Emacs has been called the most powerful text editor available today; with proper support from the underlying system, it can display files in multiple character sets.

1.
Richard Stallman, founder of the GNU Project and author of GNU Emacs

2.
GNU Emacs 24.3.1 on GNOME 3

Free software
–
The right to study and modify software entails availability of the software's source code to its users. This right is conditional on the person actually having a copy of the software. Richard Stallman used the existing term free software when he launched the GNU Project—a collaborative effort to create a freedom-respecting operating system—and the Free Software Foundation.

1.
Richard Stallman, founder of the Free Software Movement (2009)

2.
Trisquel, an operating system composed entirely of free software

3.
Creating a 3D car racing game using the free/open-source Blender Game Engine

4.
Of the world's five hundred fastest supercomputers, 480 (96%) use the Linux kernel. The world's second fastest computer is the Oak Ridge National Laboratory 's Titan supercomputer (illustrated), which uses the Cray Linux Environment.

BSD
–
Berkeley Software Distribution (BSD) is a Unix operating system derivative developed and distributed by the Computer Systems Research Group of the University of California, Berkeley, from 1977 to 1995. Today the term BSD is often used non-specifically to refer to any of the BSD descendants, which together form a branch of the family of Unix-like operating systems.

1.
BSD

2.
The DEC VT100 terminal, widely used for Unix timesharing

3.
The VAX-11/780, a typical minicomputer used for early BSD timesharing systems

4.
VAX-11/780 internals

John Gilmore (activist)
–
John Gilmore is one of the founders of the Electronic Frontier Foundation, the Cypherpunks mailing list, and Cygnus Solutions. He created the alt.* hierarchy in Usenet and is a contributor to the GNU Project. An outspoken civil libertarian, Gilmore has sued the Federal Aviation Administration and the Department of Justice, and he was the plaintiff in Gilmore v. Gonzales.

1.
Gilmore in 2014

Free Software Foundation
–
The FSF was incorporated in Massachusetts, USA, where it is also based. From its founding until the mid-1990s, the FSF's funds were used to employ software developers to write free software for the GNU Project. Since the mid-1990s, the FSF's employees and volunteers have worked mostly on legal and structural issues for the free software movement. Consistent with its goals, the FSF aims to use only free software on its own computers.

1.
gNewSense is a distribution officially supported by the FSF

Computer program
–
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function and typically executes a program's instructions in a central processing unit. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code, a form consisting of instructions that the computer can directly execute.

1.
Lovelace's diagram from Note G, the first published computer algorithm

2.
Switches for manual input on a Data General Nova 3, manufactured in the mid 1970s

3.
In the 1950s, computer programs were stored on perforated paper tape

4.
The microcontroller on the right of this USB flash drive is controlled with embedded firmware.

DEC Alpha
–
Alpha was implemented in microprocessors originally developed and fabricated by DEC. These microprocessors were most prominently used in a variety of DEC workstations and servers; several third-party vendors also produced Alpha systems, including PC form factor motherboards. Operating systems that supported Alpha included OpenVMS, Tru64 UNIX, and Windows NT.

1.
DEC Alpha AXP 21064 Microprocessor die photo

2.
Package for DEC Alpha AXP 21064 Microprocessor

3.
Alpha AXP 21064 bare die mounted on a business card with some statistics

4.
Compaq Alpha 21264C.

ARM architecture
–
ARM, originally Acorn RISC Machine, later Advanced RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. Arm Holdings also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products.

Atmel AVR
–
AVR is a family of microcontrollers developed by Atmel beginning in 1996. These are modified Harvard architecture 8-bit RISC single-chip microcontrollers. AVR microcontrollers find many applications as embedded systems; they are also used in the Arduino line of open source board designs. The AVR architecture was conceived by two students at the Norwegian Institute of Technology.

1.
Atmel ATmega8 in 28-pin narrow DIP

2.
Atmel ATxmega128A1 in 100-pin TQFP package

3.
Atmel STK500 development board

4.
AVRISP mkII

Hitachi H8
–
The H8 is a family of largely CISC microcontrollers, unrelated to the higher-performance SuperH family of 32-bit RISC-like microcontrollers. It has been supported in the Linux kernel since version 4.2. Built-in ROM and flash memory tends to range from 16 KB to 1024 KB, and RAM from 512 B to 512 KB. The basic architecture of the H8 is patterned after the DEC PDP-11.

1.
Hitachi H8/323

System/370
–
The IBM System/370 was a model range of IBM mainframe computers announced on June 30, 1970, as the successors to the System/360 family. Improvements over the System/360 included a Dynamic Address Translation option (not announced until 1972) and 128-bit floating-point arithmetic on all models. The original System/370 line underwent several architectural improvements during its roughly 20-year lifetime.

1.
System/370-145 system console.

System 390
–
The introduction covered new architecture, new hardware, and new software. The newly introduced ESA/390 architecture brought with it MVS/ESA, VM/ESA, and VSE/ESA. These systems followed the IBM 3090; with over a decade of follow-ons, new models were offered on an ongoing basis. Eighteen models were announced on September 5, 1990, for the ES/9000, along with ESCON fiber-optic channels.

X86
–
x86 is a family of backward-compatible instruction set architectures based on the Intel 8086 CPU and its Intel 8088 variant. The term x86 came into being because the names of several successors to Intel's 8086 processor end in "86". Many additions and extensions have been added to the x86 instruction set over the years, almost consistently with full backward compatibility.

X86-64
–
x86-64 is the 64-bit version of the x86 instruction set. It supports vastly larger amounts of virtual memory and physical memory than is possible on its 32-bit predecessors. x86-64 also provides 64-bit general-purpose registers and numerous other enhancements, and it is fully backward compatible with 16-bit and 32-bit x86 code. The original specification was created by AMD and released in 2000.

1.
Opteron, the first CPU to introduce the x86-64 extensions in 2003

Itanium
–
Itanium is a family of 64-bit Intel microprocessors that implement the Intel Itanium architecture. Intel markets the processors for servers and high-performance computing systems. The Itanium architecture originated at Hewlett-Packard and was jointly developed by HP and Intel. Itanium-based systems have been produced by HP and several other manufacturers.

1.
Itanium 2 processor

2.
HP zx6000 system board with dual Itanium 2 processors

3.
HP zx6000, an Itanium 2-based Unix workstation

4.
Itanium processor

Motorola 68000
–
The Motorola 68000 is a 32-bit CISC microprocessor with a 16-bit external data bus, designed and marketed by Motorola Semiconductor Products Sector. After 38 years in production, the 68000 architecture is still in use. The 68000 grew out of the MACSS project, begun in 1976 to develop an entirely new architecture without backward compatibility.

1.
Pre-release XC68000 chip manufactured in 1979.

2.
Die of Motorola 68000.

3.
Motorola MC68000 (CLCC package)

4.
Motorola MC68000 (PLCC package)

MIPS architecture
–
MIPS is a reduced instruction set computer (RISC) instruction set architecture developed by MIPS Technologies. The early MIPS architectures were 32-bit, with 64-bit versions added later. Multiple revisions of the MIPS instruction set exist, including MIPS I, MIPS II, MIPS III, MIPS IV, MIPS V, MIPS32, and MIPS64. The current revisions are MIPS32 and MIPS64.

1.
Bottom-side view of package of R4700 Orion with the exposed silicon chip, fabricated by IDT, designed by Quantum Effect Devices.

PA-RISC
–
PA-RISC is an instruction set architecture developed by Hewlett-Packard. As the name implies, it is a reduced instruction set computer (RISC) architecture. The design is also referred to as HP/PA, for Hewlett-Packard Precision Architecture. The architecture was introduced on 26 February 1986, when the HP 3000 Series 930 and HP 9000 Model 840 computers were launched.

1.
HP PA-RISC 7300LC Microprocessor

2.
HP 9000 C110 PA-RISC workstation booting Debian GNU / Linux

PowerPC
–
PowerPC is a RISC instruction set architecture created by the 1991 Apple–IBM–Motorola alliance, known as AIM. PowerPC was the cornerstone of AIM's PReP and Common Hardware Reference Platform initiatives in the 1990s. It has since become a niche architecture in personal computers but remains popular for embedded and high-performance processors, and it has also been used in several video game consoles.

1.
IBM PowerPC 601 microprocessor

2.
IBM PowerPC 604e 200 MHz

3.
Custom PowerPC CPU from the Nintendo Wii video game console

4.
The Freescale XPC855T Service Processor of a Sun SunFire V20z

SuperH
–
SuperH is a 32-bit reduced instruction set computing (RISC) instruction set architecture developed by Hitachi and currently produced by Renesas. It is implemented by microcontrollers and microprocessors for embedded systems. As of 2015, many of the original patents for the SuperH architecture are expiring, and the SH-2 CPU has been reimplemented as open-source hardware.

1.
SH-2 on Sega 32X and Sega Saturn

2.
Renesas SH-3 CPU

3.
Renesas SH-2 CPU

4.
Renesas SH-4 CPU

SPARC
–
The Scalable Processor Architecture (SPARC) is a reduced instruction set computing (RISC) instruction set architecture originally developed by Sun Microsystems. Since the establishment of SPARC International, Inc. in 1989, the architecture has been developed by its members; SPARC International is also responsible for licensing and promoting the SPARC architecture, managing SPARC trademarks, and providing conformance testing.

1.
Sun UltraSPARC II Microprocessor

2.
SPARC

VAX
–
VAX is a discontinued instruction set architecture developed by Digital Equipment Corporation in the mid-1970s. The VAX-11/780, introduced on October 25, 1977, was the first of a range of popular computers implementing the architecture, a 32-bit system with a CISC design based on DEC's earlier PDP-11. VAX was designed to extend or replace DEC's various PDP ISAs.

1.
DEC VAX

2.
VAX 8350 front view with cover removed

3.
K 1840, VAX-11/780 clone, 1988, Technical Collections Dresden

4.
DEC VAX 11/780-5 computer.

A29K
–
The AMD Am29000, often simply called the 29k, is a family of 32-bit RISC microprocessors and microcontrollers developed and fabricated by Advanced Micro Devices. They were, for a time, the most popular RISC chips on the market. In late 1995, AMD dropped development of the 29k because the design team was transferred to support the PC side of the business.

1.
AMD 29000 Microprocessor

2.
AMD 29030.

3.
AMD 29040

ETRAX CRIS
–
The ETRAX CRIS is a series of CPUs used in embedded systems since 1993. The name is an acronym of the chip's features, including Ethernet and Token Ring. Token Ring support has been taken out of the latest chips as it has become obsolete. The TGA, developed in 1986, was a communications transceiver for the AS/400 architecture and the first chip with an embedded microcontroller.
