Although keeping time can be done without an RTC,[1] using one has benefits:

Low power consumption[2] (important when running from alternate power)

Frees the main system for time-critical tasks

Sometimes more accurate than other methods

A GPS receiver can shorten its startup time by comparing the current time, according to its RTC, with the time at which it last had a valid signal.[3] If it has been less than a few hours, then the previous ephemeris is still usable.
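The warm-start decision described above can be sketched in Python. This is an illustrative model only: the function and constant names are hypothetical, and the four-hour validity window is an assumed figure standing in for "less than a few hours"; real receivers use the fit interval encoded in the broadcast ephemeris itself.

```python
from datetime import datetime, timedelta

# Assumed validity window for a cached ephemeris (illustrative value).
EPHEMERIS_VALIDITY = timedelta(hours=4)

def can_warm_start(rtc_now: datetime, last_fix_time: datetime) -> bool:
    """Decide whether the previously stored ephemeris is still usable,
    based on elapsed time as reported by the receiver's RTC."""
    elapsed = rtc_now - last_fix_time
    # A negative elapsed time means the RTC was reset; fall back to cold start.
    return timedelta(0) <= elapsed < EPHEMERIS_VALIDITY
```

If the check fails, the receiver must download a fresh ephemeris from the satellite signal, which is what lengthens a cold start.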

RTCs often have an alternate source of power, so they can continue to keep time while the primary source of power is off or unavailable. This alternate power source is normally a lithium battery in older systems, but some newer systems use a supercapacitor,[4][5] because supercapacitors are rechargeable and can be soldered in place. The alternate power source can also supply power to battery-backed RAM.[6]

Some modern computers receive clock information by digital radio and use it to synchronize with official time standards. There are two common methods: most cell phone protocols (e.g. LTE) directly provide the current local time, and if an Internet connection is available, a computer may use the Network Time Protocol. Computers used as local time servers occasionally use GPS[13] or the ultra-low frequency digital radio transmissions broadcast by a national standards organization (i.e. a radio clock[14]).
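The Network Time Protocol method can be sketched with a minimal SNTP client. This is a simplified illustration, not a production implementation: it sends a bare SNTPv3 request, reads only the server's transmit timestamp, and omits round-trip delay compensation and error handling; the server name is an example.

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800

def parse_ntp_transmit_time(packet: bytes) -> float:
    """Extract the server's transmit timestamp (bytes 40-47 of the
    48-byte SNTP response) and convert it to Unix time."""
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

def sntp_query(server: str = "pool.ntp.org") -> float:
    """Send a minimal SNTP client request and return the server's Unix time."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        # First byte 0x1b: LI=0, version=3, mode=3 (client); rest zeroed.
        sock.sendto(b"\x1b" + 47 * b"\x00", (server, 123))
        packet, _ = sock.recvfrom(48)
    return parse_ntp_transmit_time(packet)
```

The operating system would then set its software clock from the returned value, typically slewing rather than stepping it.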

Some older computer designs, such as Novas and PDP-8s,[15] used a real-time clock that was notable for its high accuracy, simplicity, flexibility and low cost. The computer's power supply produces a pulse at logic voltages for either each half-wave or each zero crossing of the AC mains. A wire carries the pulse to an interrupt input. The interrupt handler software counts cycles, seconds, etc., and in this way can provide a complete clock and calendar.
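The counting scheme such an interrupt handler performs can be sketched as follows. This is a behavioral model, not PDP-8 code: the class and method names are invented for illustration, and a real handler would run in response to hardware interrupts rather than method calls.

```python
class MainsClock:
    """Software clock driven by mains-frequency interrupt pulses.
    Each pulse advances a tick counter; rollover cascades into
    seconds, minutes, and hours."""

    def __init__(self, ticks_per_second: int = 60):
        # 60 for 60 Hz mains with one pulse per cycle; 50 for 50 Hz.
        self.ticks_per_second = ticks_per_second
        self.ticks = 0
        self.seconds = 0
        self.minutes = 0
        self.hours = 0

    def on_mains_tick(self) -> None:
        """Interrupt handler: called once per mains cycle."""
        self.ticks += 1
        if self.ticks == self.ticks_per_second:
            self.ticks = 0
            self.seconds += 1
            if self.seconds == 60:
                self.seconds = 0
                self.minutes += 1
                if self.minutes == 60:
                    self.minutes = 0
                    self.hours = (self.hours + 1) % 24
```

The same cascade, extended with day and month counters, yields the full calendar mentioned above.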

The clock also usually formed the basis of computers' software timing chains; e.g. it was usually the timer used to switch tasks in an operating system. Counting timers in modern computers provide similar features at lower precision, and may trace their requirements to this type of clock. (In the PDP-8, for example, the mains-based clock, model DK8EA, came first and was later followed by a crystal-based clock, the DK8EC.)

A software-based clock must be set each time its computer is turned on. Originally this was done by computer operators. When the Internet became commonplace, network time protocols were used to automatically set clocks of this type.

In Europe, North America and some other grids, this type of RTC works because the frequency of the AC mains is adjusted to have long-term accuracy as good as that of the national standard clocks. That is, in those grids this RTC is more accurate than a quartz clock and less costly.

This design of RTC is not practical in portable computers, or in grids (e.g. in South Asia) that do not regulate the frequency of the AC mains. Setting the clock can also be inconvenient without Internet access.

Some motherboards are made without real-time clocks. The real-time clock is omitted either to save money (as in the Raspberry Pi system architecture) or because a real-time clock is not needed at all (as in the Arduino system architecture).

1.
Computer
–
A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, such computers are used as control systems for a very wide variety of industrial and consumer devices. The Internet is run on computers and it millions of other computers. Since ancient times, simple manual devices like the abacus aided people in doing calculations, early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century, the first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers has increased continuously and dramatically since then, conventionally, a modern computer consists of at least one processing element, typically a central processing unit, and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing, peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source and this usage of the term referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century, from the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, one who calculates, the Online Etymology Dictionary states that the use of the term to mean calculating machine is from 1897. The Online Etymology Dictionary indicates that the use of the term. 
1945 under this name, theoretical from 1937, as Turing machine, devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick, later record keeping aids throughout the Fertile Crescent included calculi which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example, the abacus was initially used for arithmetic tasks. The Roman abacus was developed from used in Babylonia as early as 2400 BC. Since then, many forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, the Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions and it was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC

2.
Clock
–
A clock is an instrument to measure, keep, and indicate time. The word clock is derived from the Celtic words clagan and clocca meaning bell, a silent instrument missing such a striking mechanism has traditionally been known as a timepiece. In general usage today a clock refers to any device for measuring and displaying the time, Watches and other timepieces that can be carried on ones person are often distinguished from clocks. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units, the day, the lunar month. Devices operating on several physical processes have been used over the millennia, a sundial shows the time by displaying the position of a shadow on a flat surface. There is a range of duration timers, an example being the hourglass. Water clocks, along with the sundials, are possibly the oldest time-measuring instruments, spring-driven clocks appeared during the 15th century. During the 15th and 16th centuries, clockmaking flourished, the next development in accuracy occurred after 1656 with the invention of the pendulum clock. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation, the electric clock was patented in 1840. The development of electronics in the 20th century led to clocks with no clockwork parts at all, the timekeeping element in every modern clock is a harmonic oscillator, a physical object that vibrates or oscillates at a particular frequency. This object can be a pendulum, a fork, a quartz crystal. Analog clocks usually indicate time using angles, Digital clocks display a numeric representation of time. Two numeric display formats are used on digital clocks, 24-hour notation. Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays, for convenience, distance, telephony or blindness, auditory clocks present the time as sounds. 
There are also clocks for the blind that have displays that can be read by using the sense of touch, some of these are similar to normal analog displays, but are constructed so the hands can be felt without damaging them. The evolution of the technology of clocks continues today, the study of timekeeping is known as horology. The apparent position of the Sun in the sky moves over the course of a day, shadows cast by stationary objects move correspondingly, so their positions can be used to indicate the time of day. A sundial shows the time by displaying the position of a shadow on a flat surface, sundials can be horizontal, vertical, or in other orientations

3.
Integrated circuit
–
An integrated circuit or monolithic integrated circuit is a set of electronic circuits on one small flat piece of semiconductor material, normally silicon. The ICs mass production capability, reliability and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of using discrete transistors. ICs are now used in all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other home appliances are now inextricable parts of the structure of modern societies, made possible by the small size. These advances, roughly following Moores law, allow a computer chip of 2016 to have millions of times the capacity, ICs have two main advantages over discrete circuits, cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time, furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the ICs components switch quickly and consume little power because of their small size, the main disadvantage of ICs is the high cost to design them and fabricate the required photomasks. This high initial cost means ICs are only practical when high production volumes are anticipated, Circuits meeting this definition can be constructed using many different technologies, including thin-film transistor, thick film technology, or hybrid integrated circuit. However, in general usage integrated circuit has come to refer to the single-piece circuit construction originally known as a integrated circuit. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent, an immediate commercial use of his patent has not been reported. The idea of the circuit was conceived by Geoffrey Dummer. 
Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington and he gave many symposia publicly to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956. A precursor idea to the IC was to create small ceramic squares, Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby, however, as the project was gaining momentum, Kilby came up with a new, revolutionary design, the IC. In his patent application of 6 February 1959, Kilby described his new device as a body of semiconductor material … wherein all the components of the circuit are completely integrated. The first customer for the new invention was the US Air Force, Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. His work was named an IEEE Milestone in 2009, half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an integrated circuit that solved many practical problems Kilbys had not. Noyces design was made of silicon, whereas Kilbys chip was made of germanium, Noyce credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation, a key concept behind the IC

4.
Time
–
Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future. Time is often referred to as the dimension, along with the three spatial dimensions. Time has long been an important subject of study in religion, philosophy, and science, nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems. Two contrasting viewpoints on time divide prominent philosophers, one view is that time is part of the fundamental structure of the universe—a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is referred to as Newtonian time. This second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, Time in physics is unambiguously operationally defined as what a clock reads. Time is one of the seven fundamental physical quantities in both the International System of Units and International System of Quantities, Time is used to define other quantities—such as velocity—so defining time in terms of such quantities would result in circularity of definition. The operational definition leaves aside the question there is something called time, apart from the counting activity just mentioned, that flows. Investigations of a single continuum called spacetime bring questions about space into questions about time, questions that have their roots in the works of early students of natural philosophy. Furthermore, it may be there is a subjective component to time. Temporal measurement has occupied scientists and technologists, and was a motivation in navigation. 
Periodic events and periodic motion have long served as standards for units of time, examples include the apparent motion of the sun across the sky, the phases of the moon, the swing of a pendulum, and the beat of a heart. Currently, the unit of time, the second, is defined by measuring the electronic transition frequency of caesium atoms. Time is also of significant social importance, having economic value as well as value, due to an awareness of the limited time in each day. In day-to-day life, the clock is consulted for periods less than a day whereas the calendar is consulted for periods longer than a day, increasingly, personal electronic devices display both calendars and clocks simultaneously. The number that marks the occurrence of an event as to hour or date is obtained by counting from a fiducial epoch—a central reference point. Artifacts from the Paleolithic suggest that the moon was used to time as early as 6,000 years ago. Lunar calendars were among the first to appear, either 12 or 13 lunar months, without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months

5.
Personal computer
–
A personal computer is a multi-purpose electronic computer whose size, capabilities, and price make it feasible for individual use. PCs are intended to be operated directly by a end-user, rather than by an expert or technician. In the 2010s, PCs are typically connected to the Internet, allowing access to the World Wide Web, personal computers may be connected to a local area network, either by a cable or a wireless connection. In the 2010s, a PC may be, a multi-component desktop computer, designed for use in a location a laptop computer, designed for easy portability or a tablet computer. In the 2010s, PCs run using a system, such as Microsoft Windows, Linux. The very earliest microcomputers, equipped with a front panel, required hand-loading of a program to load programs from external storage. Before long, automatic booting from permanent read-only memory became universal, in the 2010s, users have access to a wide range of commercial software, free software and free and open-source software, which are provided in ready-to-run or ready-to-compile form. Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the computer market, first with MS-DOS. Alternatives to Microsofts Windows operating systems occupy a minority share of the industry and these include Apples OS X and free open-source Unix-like operating systems such as Linux and Berkeley Software Distribution. Advanced Micro Devices provides the alternative to Intels processors. PC is an initialism for personal computer, some PCs, including the OLPC XOs, are equipped with x86 or x64 processors but not designed to run Microsoft Windows. PC is used in contrast with Mac, an Apple Macintosh computer and this sense of the word is used in the Get a Mac advertisement campaign that ran between 2006 and 2009, as well as its rival, Im a PC campaign, that appeared in 2008. 
Since Apples transition to Intel processors starting 2005, all Macintosh computers are now PCs, the “brain” may one day come down to our level and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far, in the history of computing there were many examples of computers designed to be used by one person, as opposed to terminals connected to mainframe computers. Using the narrow definition of operated by one person, the first personal computer was the ENIAC which became operational in 1946 and it did not meet further definitions of affordable or easy to use. An example of an early single-user computer was the LGP-30, created in 1956 by Stan Frankel and used for science and it came with a retail price of $47, 000—equivalent to about $414,000 today. Introduced at the 1965 New York Worlds Fair, the Programma 101 was a programmable calculator described in advertisements as a desktop computer. It was manufactured by the Italian company Olivetti and invented by the Italian engineer Pier Giorgio Perotto, the Soviet MIR series of computers was developed from 1965 to 1969 in a group headed by Victor Glushkov

6.
Server (computing)
–
In computing, a server is a computer program or a device that provides functionality for other programs or devices, called clients. This architecture is called the model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called services, such as sharing data or resources among multiple clients, a single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, designating a computer as server-class hardware implies that it is specialized for running servers on it. The use of the server in computing comes from queueing theory, where it dates to the mid 20th century, being notably used in Kendall. In earlier papers, such as the Erlang, more terms such as operators are used. In computing, server dates at least to RFC5, one of the earliest documents describing ARPANET, the use of serving also dates to early documents, such as RFC4, contrasting serving-host with using-host. The Jargon File defines server in the sense of a process performing service for requests, usually remote, with the 1981 version reading. A kind of DAEMON which performs a service for the requester, strictly speaking, the term server refers to a computer program or process. Through metonymy, it refers to a used to running one or several server programs. On a network, such a device is called a host, in addition to server, the words serve and service are frequently used, though servicer and servant are not. The word service may refer to either the form of functionality. Alternatively, it may refer to a program that turns a computer into a server. Originally used as servers serve users, in the sense of obey, today one says that servers serve data. 
For instance, web servers serve web pages to users or service their requests, the server is part of the client–server model, in this model, a server serves data for clients. The nature of communication between a client and server is request and response and this is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process is a server, thus any general purpose computer connected to a network can host servers

7.
Embedded system
–
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a device often including hardware. Embedded systems control many devices in use today. Ninety-eight percent of all microprocessors are manufactured as components of embedded systems, examples of properties of typically embedded computers when compared with general-purpose counterparts are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the price of limited processing resources, which make them more difficult to program. For example, intelligent techniques can be designed to power consumption of embedded systems. Modern embedded systems are based on microcontrollers, but ordinary microprocessors are also common. In either case, the processor used may be ranging from general purpose to those specialised in certain class of computations. A common standard class of dedicated processors is the signal processor. Since the embedded system is dedicated to tasks, design engineers can optimize it to reduce the size and cost of the product and increase the reliability. Some embedded systems are mass-produced, benefiting from economies of scale, complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure. One of the very first recognizably modern embedded systems was the Apollo Guidance Computer, an early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that was the first high-volume use of integrated circuits. Since these early applications in the 1960s, embedded systems have come down in price and there has been a rise in processing power. 
An early microprocessor for example, the Intel 4004, was designed for calculators and other systems but still required external memory. By the early 1980s, memory, input and output system components had been integrated into the chip as the processor forming a microcontroller. Microcontrollers find applications where a computer would be too costly. A comparatively low-cost microcontroller may be programmed to fulfill the role as a large number of separate components

8.
Single-board computer
–
A single-board computer is a complete computer built on a single circuit board, with microprocessor, memory, input/output and other features required of a functional computer. Single-board computers were made as demonstration or development systems, for educational systems, many types of home computers or portable computers integrate all their functions onto a single printed circuit board. Unlike a desktop computer, single board computers often do not rely on expansion slots for peripheral functions or expansion. Single board computers have been using a wide range of microprocessors. Simple designs, such as built by hobbyists, often use static RAM. Other types, such as servers, would perform similar to a server computer. A computer-on-module is a type of single-board computer made to plug into a board, baseboard. The first true single-board computer called the dyna-micro was based on the Intel C8080A, and also used Intels first EPROM, the C1702A. The dyna-micro was re-branded by E&L Instruments of Derby, CT in 1976 as the MMD-1 and was famous as the example microcomputer in the very popular 8080 BugBook series of the time. SBCs also figured heavily in the history of home computers, for example in the Acorn Electron. Other typical early single board computers like the KIM-1 were often shipped without enclosure, which had to be added by the owner, other examples are the Ferguson Big Board, as the PC market became more prevalent, fewer SBCs were being used in computers. The main components were assembled on a motherboard, and peripheral components such as ports, disk drive controllers. Plug-in cards are now more commonly high performance graphics cards, high end RAID controllers, Single board computers were made possible by increasing density of integrated circuits. 
A single-board configuration reduces a systems overall cost, by reducing the number of circuit boards required, by putting all the functions on one board, a smaller overall system can be obtained, for example, as in notebook computers. Connectors are a frequent source of reliability problems, so a single-board system eliminates these problems, Single board computers are now commonly defined across two distinct architectures, no slots and slot support. Embedded SBCs are units providing all the required I/O with no provision for plug-in cards, applications are typically gaming, kiosk, and machine control automation. The term Single Board Computer now generally applies to an architecture where the board computer is plugged into a backplane to provide for I/O cards. In the case of PC104, the bus is not a backplane in the traditional sense but is a series of pin connectors allowing I/O boards to be stacked

9.
Signal (electrical engineering)
–
A signal as referred to in communication systems, signal processing, and electrical engineering is a function that conveys information about the behavior or attributes of some phenomenon. The IEEE Transactions on Signal Processing states that the signal includes audio, video, speech, image, communication, geophysical, sonar, radar. Typically, signals are provided by a sensor, and often the form of a signal is converted to another form of energy using a transducer. For example, a microphone converts a signal to a voltage waveform. The formal study of the content of signals is the field of information theory. The information in a signal is accompanied by noise. The term noise usually means an undesirable random disturbance, but is extended to include unwanted signals conflicting with the desired signal. The prevention of noise is covered in part under the heading of signal integrity, the separation of desired signals from a background is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances. Engineering disciplines such as electrical engineering have led the way in the design, study, and implementation of systems involving transmission, storage, definitions specific to sub-fields are common. For example, in theory, a signal is a codified message, that is. In the context of signal processing, arbitrary binary data streams are not considered as signals, in a communication system, a transmitter encodes a message to a signal, which is carried to a receiver by the communications channel. For example, the words Mary had a little lamb might be the message spoken into a telephone, the telephone transmitter converts the sounds into an electrical voltage signal. The signal is transmitted to the telephone by wires, at the receiver it is reconverted into sounds. 
In telephone networks, signalling, for example common-channel signaling, refers to number and other digital control information rather than the actual voice signal. Signals can be categorized in various ways, the most common distinction is between discrete and continuous spaces that the functions are defined over, for example discrete and continuous time domains. Discrete-time signals are often referred to as series in other fields. Continuous-time signals are often referred to as continuous signals even when the functions are not continuous. A second important distinction is between discrete-valued and continuous-valued, particularly in digital signal processing a digital signal is sometimes defined as a sequence of discrete values, that may or may not be derived from an underlying continuous-valued physical process

10.
Digital electronics
–
Digital electronics or digital circuits are electronics that handle digital signals rather than by continuous ranges as used in analog electronics. All levels within a band of values represent the information state. In most cases, the number of states is two, and they are represented by two voltage bands, one near a reference value, and the other a value near the supply voltage. These correspond to the false and true values of the Boolean domain respectively, Digital techniques are useful because it is easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values. Digital electronic circuits are made from large assemblies of logic gates. The binary number system was refined by Gottfried Wilhelm Leibniz and he established that by using the binary system. Digital logic as we know it was the brain-child of George Boole, Boole died young, but his ideas lived on. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits, eventually, vacuum tubes replaced relays for logic operations. Lee De Forests modification, in 1907, of the Fleming valve can be used as an AND logic gate, ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus. Walther Bothe, inventor of the circuit, got part of the 1954 Nobel Prize in physics. Mechanical analog computers started appearing in the first century and were used in the medieval era for astronomical calculations. In World War II, mechanical computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic computers were developed. Originally they were the size of a room, consuming as much power as several hundred modern personal computers. The Z3 was a computer designed by Konrad Zuse, finished in 1941. 
It was the worlds first working programmable, fully automatic digital computer and its operation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, the bipolar junction transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the generation of computers

11.
Global Positioning System
–
The Global Positioning System is a space-based radionavigation system owned by the United States government and operated by the United States Air Force. The GPS system operates independently of any telephonic or internet reception, the GPS system provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, however, the US government can selectively deny access to the system, as happened to the Indian military in 1999 during the Kargil War. The U. S. Department of Defense developed the system and it became fully operational in 1995. Roger L. Easton of the Naval Research Laboratory, Ivan A, getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. Announcements from Vice President Al Gore and the White House in 1998 initiated these changes, in 2000, the U. S. Congress authorized the modernization effort, GPS III. In addition to GPS, other systems are in use or under development, mainly because of a denial of access. The Russian Global Navigation Satellite System was developed contemporaneously with GPS, GLONASS can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within two meters. There are also the European Union Galileo positioning system and Chinas BeiDou Navigation Satellite System, special and general relativity predict that the clocks on the GPS satellites would be seen by the Earths observers to run 38 microseconds faster per day than the clocks on the Earth. The GPS calculated positions would quickly drift into error, accumulating to 10 kilometers per day, the relativistic time effect of the GPS clocks running faster than the clocks on earth was corrected for in the design of GPS. 
When the Soviet Union launched the first man-made satellite, Sputnik 1, in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins's Applied Physics Laboratory (APL), decided to monitor Sputnik's radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit; the Director of the APL gave them access to their UNIVAC to do the heavy calculations required. The next spring, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem, pinpointing the user's location given that of the satellite, and this led them and APL to develop the TRANSIT system in 1959; ARPA also played a role in TRANSIT. The first satellite navigation system, TRANSIT, used by the United States Navy, was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required by GPS. In the 1970s, the ground-based OMEGA navigation system, based on comparison of signal transmissions from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify the cost of GPS in the view of the United States Congress. This deterrent effect is why GPS was funded, and it is also the reason for the ultra-secrecy at that time.

12.
Ephemeris
–
In astronomy and celestial navigation, an ephemeris gives the positions of naturally occurring astronomical objects, as well as artificial satellites, in the sky at a given time or times. Historically, positions were given as printed tables of values, listed at regular intervals of date. Modern ephemerides are often computed electronically from mathematical models of the motion of astronomical objects; the astronomical position calculated from an ephemeris is given in the spherical polar coordinate system of right ascension and declination. Ephemerides are used in navigation and astronomy; they are also used by some astrologers. Notable historical ephemerides include: 1st millennium BC — ephemerides in Babylonian astronomy. 13th century — the Zīj-i Īlkhānī, compiled at the Maragheh observatory in Persia. 13th century — the Alfonsine Tables, compiled in Spain to correct anomalies in the Tables of Toledo, remaining the standard European ephemeris until the Prutenic Tables almost 300 years later. 1531 — the work of Johannes Stöffler, published posthumously at Tübingen. 1551 — the Prutenic Tables of Erasmus Reinhold, based on Copernicus's theories. 1554 — Johannes Stadius published Ephemerides novae et auctae, the first major ephemeris computed according to Copernicus's heliocentric model; one of the users of Stadius's tables was Tycho Brahe. 1627 — the Rudolphine Tables of Johannes Kepler, based on elliptical planetary motion, became the new standard. 1679 — La Connaissance des Temps ou calendrier et éphémérides du lever & coucher du Soleil, de la Lune & des autres planètes, first published yearly by Jean Picard and still extant. According to Gingerich, the patterns of errors in such tables are as distinctive as fingerprints. Typically, modern ephemerides cover several centuries, past and future; nevertheless, there are secular phenomena which cannot adequately be considered by ephemerides.
The greatest uncertainties in the positions of planets are caused by the perturbations of asteroids, most of whose masses and orbits are poorly known. Reflecting the continuing influx of new data and observations, NASA's Jet Propulsion Laboratory has revised its published ephemerides nearly every year for the past 20 years. Solar system ephemerides are essential for the navigation of spacecraft and for all kinds of observations of the planets, their natural satellites, and stars. The equinox of the coordinate system must be given. It is, in all cases, either the actual equinox of date or that of one of the standard equinoxes, typically J2000.0 or B1950.0. Star maps almost always use one of the standard equinoxes. Ephemerides of the planet Saturn also sometimes contain the apparent inclination of its rings.
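As a small illustration of working with ephemeris coordinates, the angular separation between two positions given in right ascension and declination follows from the spherical law of cosines. This is a sketch: the function name and sample coordinates are invented, not drawn from any real ephemeris.

```python
import math

# Angular separation between two positions given in right ascension /
# declination (both in degrees), via the spherical law of cosines.
def angular_separation(ra1, dec1, ra2, dec2):
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_d = (math.sin(dec1) * math.sin(dec2)
             + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp to [-1, 1] to guard against floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

# Two points on the celestial equator, 90 degrees of RA apart.
print(angular_separation(0.0, 0.0, 90.0, 0.0))
```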

13.
Lithium battery
–
Lithium batteries are batteries that have lithium as an anode. These types of batteries are also referred to as lithium-metal batteries, and they stand apart from other batteries in their high charge density and high cost per unit. The term lithium battery refers to a family of different lithium-metal chemistries, comprising many types of cathodes and electrolytes; the battery requires from 0.15 to 0.3 kg of lithium per kWh. The most common type of lithium cell used in consumer applications uses metallic lithium as the anode and manganese dioxide as the cathode. Another type of lithium cell having a large energy density is the lithium-thionyl chloride cell, which contains a liquid mixture of thionyl chloride and lithium tetrachloroaluminate; a porous carbon material serves as a current collector which receives electrons from the external circuit. Lithium-thionyl chloride batteries are well suited to extremely low-current applications where long life is necessary. In other chemistries, the liquid organic electrolyte is a solution of an ion-forming inorganic lithium compound in a mixture of a high-permittivity solvent and a low-viscosity solvent. Lithium batteries find application in many long-life, critical devices, such as pacemakers; these devices use specialized lithium-iodide batteries designed to last 15 or more years. For other, less critical applications such as toys, an expensive lithium battery may not be cost-effective. Lithium batteries can be used in place of ordinary alkaline cells in many devices, such as clocks; although they are more costly, lithium cells will provide much longer life, thereby minimizing battery replacement. However, attention must be given to the higher voltage developed by the lithium cells before using them as a drop-in replacement in devices that normally use ordinary zinc cells.
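The 0.15–0.3 kg-per-kWh figure above can be turned into a quick mass estimate for a small cell. This is only a sketch: the ~0.7 Wh CR2032-class capacity used below is an assumed round number, not a datasheet value.

```python
# Estimate lithium content from the 0.15-0.3 kg/kWh rule of thumb above.
def lithium_mass_g(capacity_wh, kg_per_kwh=0.3):
    # Wh -> kWh (divide by 1000), kg/kWh -> kg, then kg -> g (times 1000).
    return capacity_wh / 1000 * kg_per_kwh * 1000

# A 3 V, ~0.22 Ah coin cell stores roughly 0.7 Wh (assumed figure).
print(f"{lithium_mass_g(0.7):.2f} g of lithium, worst case")
```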
Lithium batteries also prove valuable in oceanographic applications; while lithium battery packs are considerably more expensive than standard oceanographic packs, they hold up to three times the capacity of alkaline packs, and the high cost of servicing remote oceanographic instrumentation often justifies this higher cost. Lithium batteries are available in many shapes and sizes, a common variety being the 3-volt manganese coin type, typically 20 mm in diameter and 1.6–4 mm thick. The heavy electrical demands of many devices make lithium batteries a particularly attractive option; in particular, lithium batteries can easily support the brief, heavy current demands of devices such as digital cameras, and they maintain a higher voltage for a longer period than alkaline cells. Lithium primary batteries account for 28% of all battery sales in Japan, while in the EU only 0.5% of all battery sales, including secondary types, are lithium primaries. The computer industry's drive to increase battery capacity can test the limits of sensitive components such as the membrane separator, a polyethylene or polypropylene film that is only 20–25 µm thick. The energy density of lithium batteries has more than doubled since they were introduced in 1991; when the battery is made to contain more material, the separator can undergo stress. Lithium batteries can provide extremely high currents and can discharge very rapidly when short-circuited. Although this is useful in applications where high currents are required, a too-rapid discharge of a lithium battery can result in overheating, rupture, and even explosion.

14.
Supercapacitor
–
A supercapacitor is a high-capacity capacitor with capacitance values much higher than other capacitors, bridging the gap between electrolytic capacitors and rechargeable batteries. Smaller units are used as backup power for static random-access memory. Supercapacitors do not use the solid dielectric of ordinary capacitors; the separation of charge is of the order of a few ångströms. Electrochemical pseudocapacitors use metal-oxide or conducting-polymer electrodes with a high amount of electrochemical pseudocapacitance in addition to the double-layer capacitance; pseudocapacitance is achieved by Faradaic electron charge transfer with redox reactions, intercalation, or electrosorption. Hybrid capacitors, such as the lithium-ion capacitor, use electrodes with differing characteristics, one exhibiting mostly electrostatic capacitance and the other mostly electrochemical capacitance. Supercapacitors are polarized by design, with asymmetric electrodes or, for symmetric electrodes, by a potential applied during manufacture. In the early 1950s, General Electric engineers began experimenting with porous carbon electrodes in the design of capacitors, drawing on the design of fuel cells; activated charcoal is an electrical conductor that is an extremely porous, spongy form of carbon with a high specific surface area. In 1957, H. Becker developed a low-voltage electrolytic capacitor with porous carbon electrodes; he believed that the energy was stored as a charge in the carbon pores, as in the pores of the etched foils of electrolytic capacitors. General Electric did not immediately pursue this work. In 1966, researchers at Standard Oil of Ohio developed another version of the component, as an "electrical energy storage apparatus", while working on experimental fuel cell designs. The nature of the energy storage was not described in this patent. Even in 1970, the electrochemical capacitor patented by Donald L. Boos was registered as a capacitor with activated carbon electrodes.
Early electrochemical capacitors used two aluminum foils covered with activated carbon (the electrodes), which were soaked in an electrolyte and separated by a thin porous insulator. This design gave a capacitor with a capacitance on the order of one farad, significantly higher than electrolytic capacitors of the same dimensions, and this basic mechanical design remains the basis of most electrochemical capacitors. SOHIO did not commercialize their invention, licensing the technology to NEC, who finally marketed the results as "supercapacitors" in 1971, to provide backup power for computer memory. Between 1975 and 1980, Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991 he described the difference between supercapacitor and battery behavior in electrochemical energy storage, and in 1999 he coined the term supercapacitor to explain the increased capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions; the working mechanisms of pseudocapacitors are redox reactions, intercalation, and electrosorption. With his research, Conway greatly expanded the knowledge of electrochemical capacitors. The market expanded slowly; that changed around 1978, as Panasonic marketed its Goldcaps brand, and this product became a successful energy source for memory backup applications. In 1987, ELNA Dynacaps entered the market. First-generation EDLCs had relatively high internal resistance that limited the discharge current.
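The stored energy of the one-farad parts described above follows directly from E = ½CV², which is why they suit memory and RTC backup. A short sketch comparing such a part with a large electrolytic capacitor; the component values are illustrative, not from any datasheet.

```python
# Energy stored in a capacitor: E = 1/2 * C * V^2 (joules).
def stored_energy_j(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v ** 2

supercap = stored_energy_j(1.0, 5.5)        # 1 F backup supercapacitor
electrolytic = stored_energy_j(470e-6, 16)  # a large electrolytic, for scale
print(f"{supercap:.3f} J vs {electrolytic:.3f} J")
```

Even at a low working voltage, the supercapacitor holds several hundred times the energy of the electrolytic part.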

15.
Soldering
–
Soldering is a process in which two or more items are joined together by melting and putting a filler metal (solder) into the joint, the filler metal having a lower melting point than the adjoining metal. Soldering differs from welding in that soldering does not involve melting the work pieces; in brazing, the filler metal melts at a higher temperature, but the work piece metal still does not melt. In the past, nearly all solders contained lead, but environmental and health concerns have increasingly dictated the use of lead-free alloys for electronics. There is evidence that soldering was employed as early as 5000 years ago in Mesopotamia; soldering and brazing are thought to have originated very early in the history of metal-working, probably before 4000 BC. Sumerian swords from ~3000 BC were assembled using hard soldering. Soldering was historically used to make jewelry items, cooking ware and tools, as well as for other uses such as assembling stained glass. Soldering is used in plumbing, electronics, and metalwork from flashing to jewelry. Jewelry components, machine tools and some refrigeration and plumbing components are often assembled and repaired by the higher-temperature silver soldering process; small mechanical parts are often soldered or brazed as well. Soldering is also used to join lead came and copper foil in stained glass work. Electronic soldering connects electrical wiring and electronic components to printed circuit boards. Soldering filler materials are available in many different alloys for differing applications. In electronics assembly, the alloy of 63% tin and 37% lead has been the alloy of choice; other alloys are used for plumbing, mechanical assembly, and other applications. A eutectic formulation has advantages when applied to soldering: the liquidus and solidus temperatures are the same, so there is no plastic phase, and it has the lowest possible melting point.
Having the lowest possible melting point minimizes heat stress on electronic components during soldering, and having no plastic phase allows for quicker wetting as the solder heats up and quicker setup as the solder cools. A non-eutectic formulation must remain still as the temperature drops through the liquidus and solidus temperatures; any movement during the plastic phase may result in cracks, giving an unreliable joint. Common solder formulations are based on tin and lead. Lead-free solders are suggested anywhere young children may come into contact with solder, or for outdoor use where rain and other precipitation may wash the lead into the groundwater. Unfortunately, most lead-free solders are not eutectic formulations, melting at around 250 °C. Alloying silver with other metals changes the melting point, adhesion and wetting characteristics, and tensile strength; of all the brazing alloys, silver solders have the greatest strength. Specialty alloys are available with properties such as higher strength, the ability to solder aluminum, better electrical conductivity, and higher corrosion resistance. The purpose of flux is to facilitate the soldering process: one of the obstacles to a successful solder joint is an impurity at the site of the joint, for example dirt, oil or oxidation. The impurities can be removed by mechanical cleaning or by chemical means, but the elevated temperatures required to melt the solder encourage the work piece to re-oxidize; this effect is accelerated as the soldering temperatures increase and can completely prevent the solder from joining to the workpiece. One of the earliest forms of flux was charcoal, which acts as a reducing agent; some fluxes go beyond the simple prevention of oxidation and also provide some form of chemical cleaning.

16.
Nonvolatile BIOS memory
–
Nonvolatile BIOS memory refers to a small memory on PC motherboards that is used to store BIOS settings. It is traditionally called CMOS RAM because it uses a volatile, low-power complementary metal-oxide-semiconductor (CMOS) SRAM powered by a small CMOS battery when system power is off; the typical NVRAM capacity is 256 bytes. The CMOS RAM and the real-time clock have been integrated as a part of the southbridge chipset. The memory battery is generally a CR2032 lithium coin cell. This cell has a life of about three years when the power supply unit (PSU) is unplugged or when the PSU power switch is turned off; other common battery types can last significantly longer or shorter periods. Higher temperatures and longer power-off time will shorten battery cell life. When replacing the battery cell, the system time and CMOS BIOS settings may revert to default values. An unwanted BIOS reset may be avoided by replacing the battery cell with the PSU power switch turned on: on ATX motherboards, turning on the PSU power switch supplies 5 V standby power to the motherboard to keep the CMOS memory energized while the computer is turned off.
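The quoted three-year life is consistent with a simple capacity-over-current estimate. This is a sketch: 220 mAh is a typical approximate CR2032 capacity, the ~8 µA standby drain is an assumption, and self-discharge is ignored.

```python
# Rough battery-life estimate for a CMOS/RTC backup cell.
def life_years(capacity_mah, draw_ua):
    hours = capacity_mah * 1000 / draw_ua  # mAh -> uAh, then divide by uA
    return hours / (24 * 365)

# At ~8 uA of combined RTC + SRAM standby drain (assumed), life is a few
# years, in line with the figure quoted above.
print(f"{life_years(220, 8):.1f} years")
```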

17.
Crystal oscillator
–
A crystal oscillator is an electronic oscillator circuit that uses the mechanical resonance of a vibrating crystal of piezoelectric material to create an electrical signal with a precise frequency. Quartz crystals are manufactured for frequencies from a few tens of kilohertz to hundreds of megahertz; more than two billion crystals are manufactured annually. Most are used for devices such as wristwatches, clocks, radios, and computers, and quartz crystals are also found inside test and measurement equipment, such as counters and signal generators. More precisely, a crystal oscillator is an oscillator circuit that uses a piezoelectric resonator: "crystal" is the term used in electronics for the frequency-determining component, and a more accurate term for it is piezoelectric resonator. Crystals are also used in other types of electronic circuits, such as crystal filters. Piezoelectric resonators are sold as components for use in crystal oscillator circuits; an example is shown in the picture, and they are also often incorporated in a single package with the crystal oscillator circuit, shown on the right-hand side. Piezoelectricity was discovered by Jacques and Pierre Curie in 1880. Paul Langevin first investigated quartz resonators for use in sonar during World War I, and Cady built the first quartz oscillator in 1921. Other early innovators in quartz crystal oscillators include G. W. Pierce. Quartz crystal oscillators were developed for high-stability frequency references during the 1920s and 1930s. Prior to crystals, radio stations controlled their frequency with tuned circuits; since broadcast stations were assigned frequencies only 10 kHz apart, interference between adjacent stations due to frequency drift was a common problem. In 1928, Warren Marrison of Bell Telephone Laboratories developed the first quartz-crystal clock. With accuracies of up to 1 second in 30 years, quartz clocks replaced precision pendulum clocks as the world's most accurate timekeepers until atomic clocks were developed in the 1950s.
Using the early work at Bell Labs, AT&T eventually established their Frequency Control Products division, later spun off, and a number of firms started producing quartz crystals for electronic use during this time. Using what are now considered primitive methods, about 100,000 crystal units were produced in the United States during 1939. Through World War II, crystals were made from natural quartz crystal, virtually all from Brazil; by the 1970s virtually all crystals used in electronics were synthetic. Although crystal oscillators still most commonly use quartz crystals, devices using other materials are becoming more common, such as ceramic resonators. A crystal is a solid in which the constituent atoms, molecules, or ions are packed in a regularly ordered, repeating pattern. Almost any object made of an elastic material could be used like a crystal, with appropriate transducers, since all objects have natural resonant frequencies of vibration.

18.
Utility frequency
–
The utility frequency, line frequency or mains frequency is the nominal frequency of the oscillations of alternating current in an electric power grid transmitted from a power station to the end-user. In large parts of the world this is 50 Hz, although in the Americas and parts of Asia it is typically 60 Hz; current usage by country or region is given in the list of mains power around the world. During the development of electric power systems in the late 19th and early 20th centuries, many different frequencies were used, and large investment in equipment at one frequency made standardization a slow process. However, as of the turn of the 21st century, places that now use the 50 Hz frequency tend to use 220–240 V, and those that now use 60 Hz tend to use 100–127 V. Both frequencies coexist today with no technical reason to prefer one over the other. Unless specified by the manufacturer to operate on both 50 and 60 Hz, appliances may not operate efficiently or even safely if used on anything other than the intended frequency. In practice, the frequency of the grid varies around the nominal frequency, reducing when the grid is heavily loaded. However, most utilities will adjust the frequency of the grid over the course of the day to ensure a constant number of cycles occur, and this is used by some clocks to accurately maintain their time. Several factors influence the choice of frequency in an AC system: lighting, motors, transformers, generators and transmission lines all have characteristics which depend on the power frequency. All of these factors interact and make selection of a power frequency a matter of considerable importance; the best frequency is a compromise between contradictory requirements. When large central generating stations became practical, the choice of frequency was made based on the nature of the intended load; eventually improvements in machine design allowed a single frequency to be used both for lighting and motor loads.
A unified system improved the economics of electricity production, since system load was more uniform during the course of a day. The first applications of commercial electric power were incandescent lighting and commutator-type electric motors. Both devices operate well on DC, but DC could not be easily changed in voltage. Commutator-type motors do not operate well on high-frequency AC, because the rapid changes of current are opposed by the inductance of the motor field. Though commutator-type universal motors are common in AC household appliances and power tools, they are small motors, less than 1 kW. The induction motor was found to work well on frequencies around 50 to 60 Hz, and once AC electric motors became common, it was important to standardize frequency for compatibility with the customers' equipment. Generators operated by slow-speed reciprocating engines will produce lower frequencies, for a given number of poles, than those operated by, for example, a high-speed steam turbine. For very slow prime mover speeds, it would be costly to build a generator with enough poles to provide a high AC frequency. As well, synchronizing two generators to the same speed was found to be easier at lower speeds.
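The cycle-counting scheme that mains-synchronized clocks rely on (the same idea as the line-frequency clocks described in the introduction) can be sketched in a few lines. The 50 Hz figure and the class name are assumptions for illustration.

```python
# Sketch of timekeeping by counting mains cycles, as line-frequency
# clocks do. Nominal grid frequency assumed to be 50 Hz here.
MAINS_HZ = 50

class LineClock:
    def __init__(self):
        self.cycles = 0

    def on_cycle(self):
        # Called once per full AC cycle (e.g. from a zero-crossing interrupt).
        self.cycles += 1

    def seconds(self):
        return self.cycles // MAINS_HZ

clk = LineClock()
for _ in range(MAINS_HZ * 90):  # simulate 90 seconds of mains cycles
    clk.on_cycle()
print(clk.seconds())
```

Because utilities steer the grid toward a constant number of cycles per day, such a clock stays accurate over the long term even though the instantaneous frequency wanders.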

19.
Quartz clock
–
A quartz clock is a clock that uses an electronic oscillator regulated by a quartz crystal to keep time. This crystal oscillator creates a signal with very precise frequency, so that quartz clocks are at least an order of magnitude more accurate than mechanical clocks. Generally, some form of digital logic counts the cycles of this signal and provides a numeric display, usually in units of hours, minutes, and seconds. The first quartz clock was built in 1927 by Warren Marrison. Chemically, quartz is a compound called silicon dioxide. Many materials can be formed into plates that will resonate; however, quartz is also a piezoelectric material: when a quartz crystal is subject to mechanical stress, such as bending, it accumulates electrical charge across some planes. In a reverse effect, if charges are placed across the crystal plane, quartz crystals will bend. Since quartz can be directly driven by an electric signal, no additional speaker or microphone is required to use it in a resonator. Similar crystals are used in low-end phonograph cartridges: the movement of the stylus flexes a quartz crystal, which produces a small voltage. Quartz microphones are still available, though not common. Quartz has a further advantage in that its size does not change much as temperature fluctuates; fused quartz is used for laboratory equipment that must not change shape along with the temperature. A quartz plate's resonance frequency, based on its size, will not significantly rise or fall as the temperature changes; similarly, since its resonator does not change shape, a quartz clock will remain relatively accurate as the temperature changes. In the early 20th century, radio engineers sought a precise, stable source of radio frequencies; when Walter Guyton Cady found that quartz can resonate with less equipment and better temperature stability, steel resonators disappeared within a few years. Later, scientists at NIST discovered that a crystal oscillator could be more accurate than a pendulum clock.
The electronic circuit is an oscillator: an amplifier whose output passes through the quartz resonator. The resonator acts as an electronic filter, eliminating all but the single frequency of interest, and the output of the resonator feeds back to the input of the amplifier. When the circuit starts up, even a single shot of noise can cascade to bring the oscillator to the desired frequency; if the amplifier were too perfect, the oscillator would not start. The frequency at which the crystal oscillates depends on its shape, size, and the crystal plane on which the quartz is cut; the positions at which electrodes are placed can slightly change the tuning. If the crystal is accurately shaped and positioned, it will oscillate at a desired frequency. In nearly all quartz watches, the frequency is 32,768 Hz. This frequency is a power of two, just high enough so most people cannot hear it, yet low enough to permit inexpensive counters to derive a 1-second pulse. A 15-bit binary digital counter driven by the frequency will overflow once per second, and the pulse-per-second output can be used to drive many kinds of clocks.
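The 32,768 Hz / 15-bit relationship above can be demonstrated directly. A minimal sketch; the function name is invented.

```python
# A 15-bit counter has 2**15 = 32,768 states, so driven at 32,768 Hz it
# overflows exactly once per second.
CRYSTAL_HZ = 32_768
assert CRYSTAL_HZ == 2 ** 15  # power of two -> a plain binary counter works

def seconds_elapsed(ticks):
    # Each counter overflow marks one elapsed second.
    return ticks // CRYSTAL_HZ

print(seconds_elapsed(CRYSTAL_HZ * 60))  # one minute of crystal ticks
```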

20.
Celestial navigation
–
Celestial navigation uses "sights", or angular measurements taken between a celestial body and the visible horizon. The sun is most commonly used, but navigators can also use the moon, a planet, or one of a number of navigational stars. Celestial navigation is the use of angular measurements between celestial bodies and the visible horizon to locate one's position on the globe, on land as well as at sea. At a given time, any celestial body is located directly over one point on the Earth's surface; the latitude and longitude of that point is known as the celestial body's geographic position (GP). The measured angle between the celestial body and the visible horizon is directly related to the distance between the celestial body's GP and the observer's position, which yields a circular line of position on the chart. Sights on two celestial bodies give two such lines, intersecting at the observer's position. Most navigators will use sights of three to five stars, if they're available, since that will result in only one common intersection and minimize the chance for error. That premise is the basis for the most commonly used method of celestial navigation, the altitude-intercept method. There are several other methods of celestial navigation which will also provide position finding using sextant observations, such as the noon sight and the more archaic lunar distance method. Joshua Slocum used the lunar distance method during the first ever recorded single-handed circumnavigation of the world. Unlike the altitude-intercept method, the noon sight and lunar distance methods do not require accurate knowledge of time. An example illustrating the concept behind the method for determining one's position is shown to the right. In the adjacent image, the two circles on the map represent lines of position for the Sun and Moon at 1200 GMT on October 29, 2005. At this time, a navigator on a ship at sea measured the Moon to be 56 degrees above the horizon using a sextant; ten minutes later, the Sun was observed to be 40 degrees above the horizon.
Lines of position were calculated and plotted for each of these observations. Since both the Sun and Moon were observed at their respective angles from the same location, the navigator would have to be located at one of the two locations where the circles cross. In this case the navigator is either located on the Atlantic Ocean, about 350 nautical miles west of Madeira, or in South America, about 90 nautical miles southwest of Asunción, Paraguay. In most cases, determining which of the two intersections is the correct one is obvious to the observer because they are often thousands of miles apart; as it is unlikely that the ship is sailing across South America, the position in the Atlantic is the correct one. Note that the lines of position in the figure are distorted because of the map's projection; they would be circular if plotted on a globe. Accurate angle measurement evolved over the years; one simple method is to hold the hand above the horizon with one's arm stretched out. The need for more accurate measurements led to the development of a number of increasingly accurate instruments, including the kamal, the astrolabe, the octant, and the sextant. Navigators measure distance on the globe in degrees, arcminutes and arcseconds. A nautical mile is defined as 1852 meters, but is also one minute of angle along a meridian on the Earth.
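The arcminute-to-nautical-mile identity above is what makes the circles of position computable. A sketch: the helper name is invented, and the 56° Moon altitude is taken from the example above.

```python
# One arcminute along a meridian is one nautical mile (1852 m), so the
# radius of a circle of position (distance from the body's geographic
# position) equals the zenith distance expressed in arcminutes.
def circle_radius_nm(observed_altitude_deg):
    zenith_distance_deg = 90.0 - observed_altitude_deg
    return zenith_distance_deg * 60  # degrees -> arcminutes = nautical miles

print(circle_radius_nm(56))  # the Moon sight from the example above
```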

21.
Marine chronometer
–
A marine chronometer is a timepiece that is precise and accurate enough to be used as a portable time standard; it can therefore be used to determine longitude by means of celestial navigation. Timepieces made in Switzerland may display the word "chronometer" only if certified by the COSC. To determine a position on the Earth's surface, it is necessary and sufficient to know the latitude, longitude, and altitude; altitude considerations can, of course, be ignored for vessels operating at sea level. Until the mid-1750s, accurate navigation at sea out of sight of land was an unsolved problem, due to the difficulty in calculating longitude. Navigators could determine their latitude by measuring the sun's angle at noon or, in the Northern Hemisphere, the angle of Polaris above the horizon. To find their longitude, however, they needed a time standard that would work aboard a ship. Observation of regular celestial motions, such as Galileo's method based on observing Jupiter's natural satellites, was not possible at sea due to the ship's motion. The lunar distances method, initially proposed by Johannes Werner in 1514, was developed in parallel with the marine chronometer; the Dutch scientist Gemma Frisius was the first to propose the use of a chronometer to determine longitude, in 1530. The purpose of a chronometer is to measure accurately the time of a known fixed location, for example Greenwich Mean Time (GMT). This is particularly important for navigation: knowing GMT at local noon allows a navigator to use the time difference between the ship's position and the Greenwich meridian to determine the ship's longitude. The creation of a timepiece which would work reliably at sea was difficult. Christiaan Huygens, following his invention of the pendulum clock in 1656, made the first attempt at a marine chronometer in 1673 in France, under the sponsorship of Jean-Baptiste Colbert.
He obtained a patent for his invention from Colbert, but his clock remained imprecise at sea. The first published use of the term was in 1684 in Arcanum Navarchicum, a theoretical work by Kiel professor Matthias Wasmuth. This was followed by a theoretical description of a chronometer in works published by the English scientist William Derham in 1713. Attempts to construct a working marine chronometer were begun by Jeremy Thacker in England in 1714. In the same year, the British government offered a prize for a method of determining longitude at sea. John Harrison's first two sea timepieces, H1 and H2, used a system of linked balances, but he realised that they had a sensitivity to centrifugal force. However, H3's circular balances still proved too inaccurate, and he abandoned the large machines. Harrison solved the precision problems with his much smaller H4 chronometer design in 1761; H4 looked much like a large, five-inch diameter pocket watch. In 1761, Harrison submitted H4 for the £20,000 longitude prize.
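The longitude calculation the chronometer enables is simple arithmetic: the Earth rotates 15 degrees per hour, so the difference between GMT and local apparent time converts directly to longitude. A sketch; the function name and the sign convention (west negative) are choices made here.

```python
# Earth turns 360 degrees in 24 hours -> 15 degrees of longitude per hour.
DEG_PER_HOUR = 15.0

def longitude_deg(gmt_at_local_noon_hours):
    # Local noon occurring after 12:00 GMT means the ship is west of
    # Greenwich; west longitudes are negative in this convention.
    return (12.0 - gmt_at_local_noon_hours) * DEG_PER_HOUR

print(longitude_deg(15.0))  # local noon at 15:00 GMT: 45 degrees west
```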

22.
Epson
–
Seiko Epson Corporation, or simply Epson, is a Japanese electronics company and one of the world's largest manufacturers of computer printers and information- and imaging-related equipment. It is one of three core companies of the Seiko Group, a name traditionally known for manufacturing Seiko timepieces since its founding. In 1968 the company moved its UK headquarters to Audenshaw, Manchester, after acquiring the Jones Sewing Machine Company. Daiwa Kogyo was supported by an investment from the Hattori family and began as a manufacturer of watch parts for Daini Seikosha; the company started operation in a 2,500-square-foot renovated miso storehouse with 22 employees. In 1943, Daini Seikosha established a factory in Suwa for manufacturing Seiko watches with Daiwa Kogyo. In 1959, the Suwa Factory of Daini Seikosha was split off and merged with Daiwa Kogyo to form Suwa Seikosha Co., Ltd, the forerunner of the Seiko Epson Corporation. The company has developed many timepiece technologies; the watches made by the company are sold through the Seiko Watch Corporation, a subsidiary of Seiko Holdings Corporation. In 1961, Suwa Seikosha established a company called Shinshu Seiki Co. as a subsidiary to supply parts for Seiko watches. In September 1968, Shinshu Seiki launched the world's first mini-printer, the EP-101. In June 1975, the name Epson was coined for the next generation of printers based on the EP-101, which had been released to the public, and in April of the same year Epson America Inc. was established to sell printers for Shinshu Seiki Co. In June 1978, the TX-80, an eighty-column dot-matrix printer, was released to the market, and was mainly used as a system printer for the Commodore PET computer. After two years of development, an improved model, the MX-80, was launched in October 1980; it was soon described in the company's advertising as the best-selling printer in the United States. In November 1985, Suwa Seikosha Co., Ltd. and the Epson Corporation merged to form the Seiko Epson Corporation.
Shortly after, in 1994, Epson released the first high-resolution color inkjet printer; newer models of the Stylus series employed Epson's special DURABrite ink. Epson also produced two hard drives, the HD850 and the HD860 (MFM interface); the specifications are given in Winn L. Rosch's Hardware Bible, 3rd edition (Sams Publishing). In 1994 Epson started outsourcing sales reps to help sell its products in retail stores in the United States, and launched the Epson Weekend Warrior sales program, whose purpose was to improve sales, improve retail sales reps' knowledge of Epson products, and address Epson customer service in a retail environment. Reps were assigned weekend shifts, typically around 12–20 hours a week. Epson ran the Weekend Warrior program first with TMG Marketing, later with Keystone Marketing Inc., then Mosaic, and now Campaigners Inc. The Mosaic contract expired on June 24, 2007, and Epson is now represented by Campaigners. The sales reps of Campaigners, Inc. are not outsourced, as Epson hired rack jobbers to ensure its retail customers displayed products properly; this frees the regular sales force to concentrate on profitable sales solutions to VARs and system integrators. In June 2003, the company became public following its listing on the first section of the Tokyo Stock Exchange

23.
Intersil
–
Intersil is a subsidiary of Renesas. The present company was formed in August 1999 through the spin-off of the semiconductor business of Harris Corporation. The original Intersil, Inc. was founded in 1967 by Jean Hoerni to develop digital watch processors and was originally funded by SSIH, a Swiss watch company. In 1988 Intersil was taken over by Harris Semiconductor, which had offered the IM6100 as a second source; Harris combined these activities with the semiconductor divisions of Radiation Incorporated, General Electric and RCA, which it had taken over before. In 1999 Harris spun off its semiconductor division, and Intersil Corporation was created with the largest IPO in American semiconductor industry history. The second Intersil Corporation is a different company from the original Intersil. Besides digital circuits like microprocessors and memories, such as the 1k-bit CMOS RAM IM6508 and the CMOS EPROMs IM6604/IM6654, Intersil designed famous analogue ICs like the ICM8038 waveform generator. A creation of Intersil is the PRISM line of Wi-Fi hardware; that group of products was sold to GlobespanVirata in 2003. Intersil is the manufacturer of the RCA 1802 microprocessor, a CPU traditionally used in space applications. The company, under CEO Dave Bell, then began the expansion of a catalog analog business and completed a series of acquisitions, two of which are still part of the portfolio: Zilker Labs digital power devices and Techwell automotive, security and surveillance products. In 2012, with revenue in decline, the board of directors removed Dave Bell. In March 2013, the board appointed Necip Sayiner, the architect of Silicon Labs' turnaround; Sayiner concentrated the company's efforts on power management and select target markets. The company was able to return to profitability in 2013, and in early 2014 re-launched as a power-management company, with products to improve power efficiency and extend battery life. 
Renesas acquired Intersil on February 24, 2017. Intersil develops and markets power management and precision analog technology for applications in the industrial, infrastructure, mobile, automotive and aerospace markets.

24.
Integrated Device Technology
–
Integrated Device Technology (IDT) markets its products primarily to original equipment manufacturers. Founded in 1980, the company began as a provider of complementary metal-oxide semiconductors. The company is focused on three areas: communications infrastructure, high-performance computing, and advanced power management. The computing segment markets its products to the enterprise and data-center markets; its computing products are designed for desktop, notebook, sub-notebook, storage, and server applications. The consumer segment provides products for digital TVs, smartphones, and gaming consoles through touch controllers, timing products, multi-port memory, audio, and power management devices. IDT's first product was the first low-power, high-speed CMOS-based 6116 static random-access memory device, released in 1981; subsequent achievements include the first dual-port memory, pioneering work in embedded RISC processors, leadership in network search engines, and the first flow-control management device. In 1993, IDT entered the PC clock market with a family of devices focusing on computer platforms, and planned to expand its market by producing a suite of PC clock devices serving next-generation notebooks. In the early 2000s IDT introduced its first integrated microprocessor, the RC32334, the first in a family of integrated processors targeted at communication applications; a year later IDT introduced the industry's first network search engine. In 2003, IDT announced its re-entry into the PC clock market, shifting its initial PC clock family to products serving current-generation desktop, notebook, and server platforms. In 2004, IDT continued to expand its business by acquiring ZettaCom and Internet Machines Corporation; rather than continue to evolve ZettaCom's full line of existing physical-layer switching and traffic management chips, IDT converted ZettaCom's operation into a new serial switching division. 
In July 2009, IDT and Micron Technology entered an alliance to develop PCI Express solid-state drive technologies for the server and storage markets; during this alliance, IDT and Micron co-developed enterprise flash controllers with a PCIe host interface optimized for Micron's flash devices and future-generation RealSSD solid-state drives. In 2001, IDT acquired Newave Inc., a Chinese semiconductor firm, to accelerate its investment in the growing Asian semiconductor industry; Newave became a subsidiary of IDT through a cash-for-stock merger. In April 2001 IDT acquired Solidum Systems, an Ottawa-based fabless semiconductor company. In April 2004, IDT acquired ZettaCom, a serial switching and bridging semiconductor company, for $35 million; this enabled IDT to be one of the few communications IC suppliers to participate in the standards-based Advanced Switching initiative spearheaded by Intel. IDT made two acquisitions in 2005. In June, IDT acquired Integrated Circuit Systems for about $1.7 billion in cash; the acquisition provided a platform for growth within the communications, computing, and consumer markets. In October, IDT acquired Freescale Semiconductor's timing solutions business for $35 million, a transaction originally initiated by Integrated Circuit Systems Inc. before it was acquired by IDT earlier that year. In July 2006, IDT acquired the PC Audio division of Austin-based SigmaTel for $80 million, including the design, marketing and manufacturing rights and software products

25.
Maxim Integrated Products
–
Maxim Integrated is an American, publicly traded company that designs, manufactures, and sells analog and mixed-signal integrated circuits. Maxim Integrated develops integrated circuits for the automotive, industrial, communications, and consumer markets. Headquartered in San Jose, California, the company has design centers, manufacturing facilities, and sales offices throughout the world. In fiscal year 2015, it had US$2.31 billion in sales and 8,800 employees; Maxim is a Fortune 1000 company listed on the NASDAQ-100, Russell 1000, and MSCI US indices. Maxim was founded in April 1983; its nine initial team members had a variety of experience in semiconductor design and sales. Based on a business plan, they obtained US$9 million in venture capital to establish the company. In the first year, the company developed 24 second-source products; after that, Maxim designed proprietary products that offered greater differentiation and higher profits. Maxim recorded its first profitable year in 1987, with the help of a product called the MAX232. Annual revenue reached $500 million in fiscal year 1998 and in fiscal 2011 totaled over $2.47 billion. In 1990 Maxim purchased its first wafer fabrication facility in Sunnyvale, California. In 1994 it acquired Tektronix's Semiconductor Division in Beaverton, Oregon, giving Maxim high-speed bipolar processes for wireless RF. In 1997 it purchased an additional wafer fab from IC Works in San Jose, California, to increase fab capacity. In 2001 it acquired Dallas Semiconductor in Dallas, Texas, to gain expertise in digital and mixed-signal CMOS design. In 2003 it purchased a submicrometre CMOS fab from Philips in San Antonio, Texas, to ramp up capacity and support processes down to the 0.25-micrometre level. In 2007 it purchased a 0.18-micrometre fab from Atmel in Irving, Texas, and in 2008 it acquired Mobilygen in Santa Clara, California, to add H.264 video-compression technology to its portfolio. 
In 2009 Maxim acquired Innova Card, headquartered in La Ciotat, France, as well as two product lines from Zilog, Inc., including the Secure Transactions product line featuring the Zatara family. In 2010 it acquired privately held Teridian Semiconductor Corporation, a semiconductor company located in Irvine, California, for approximately $315 million in cash. Also in 2010, Maxim acquired the technology and employees of Trinity Convergence Limited, which was part of the ecosystem bringing Skype video conferencing to the LCD TV market, and acquired Phyworks, a supplier of optical transceiver chips for the communications market. In 2011 Maxim acquired SensorDynamics, a company that develops proprietary sensors, and in 2012 it acquired Genasic Design Systems Ltd, a fabless RF chip company that makes chips for LTE applications

26.
NXP Semiconductors
–
NXP Semiconductors N.V. is a Dutch global semiconductor manufacturer headquartered in Eindhoven, Netherlands. The company employs approximately 45,000 people in more than 35 countries. NXP reported revenue of $6.1 billion in 2015, including one month of revenue contribution from the recently merged Freescale Semiconductor. On October 27, 2016, it was announced that Qualcomm would buy NXP. NXP said it was the fifth-largest non-memory semiconductor supplier in 2016, and the leading semiconductor supplier for the secure identification, automotive and digital networking industries. The company was founded in 1953 as part of the electronics firm Philips, with manufacturing and development in Nijmegen. Known then as Philips Semiconductors, the company was sold to a consortium of private-equity investors in 2006, at which point its name was changed to NXP. On August 6, 2010, NXP completed its initial public offering, and on December 23, 2013, NXP Semiconductors was added to the NASDAQ-100. Finally, on March 2, 2015, it was announced that NXP Semiconductors would merge with chip designer and manufacturer Freescale Semiconductor in a $40 billion deal; the merger closed on December 7, 2015. NXP Semiconductors provides mixed-signal and standard products based on its expertise in security, identification, automotive, networking, radio frequency and analog signal processing. For example, to protect against potential hackers, NXP offers gateways to automotive manufacturers that can isolate communication with each network within a car independently. NXP is the co-inventor of near-field communication (NFC) technology along with Sony and supplies NFC chip sets that enable mobile phones to be used to pay for goods. In addition, NXP manufactures automotive chips for in-vehicle networking, passive keyless entry and immobilization, and car radios. 
NXP invented the I²C interface over 30 years ago and remains a supplier of products using it. NXP is also a volume supplier of standard logic devices, and celebrated its 50 years in logic in March 2012. NXP owns over 9,000 issued or pending patents. Silicon Valley-based Signetics, the first company in the world established expressly to make and sell integrated circuits and the inventor of the 555 timer IC, was acquired by Philips in 1975. In 1987, Philips-Signetics, a unit of Philips, was ranked Europe's largest semiconductor maker, with sales of $1.36 billion in 1986. Philips acquired VLSI Technology in June 1999; at the time, the acquisition made Philips the world's sixth-largest semiconductor company. In December 2005, Philips announced its intention to separate its semiconductor division, Philips Semiconductors. The new company name, NXP, was announced on August 31, 2006, and the newly independent NXP was ranked as one of the world's top 10 semiconductor companies. At the time, CEO Frans van Houten emphasized the importance of NXP in enabling vibrant media technologies in mobile phones, digital TVs, portable music players and other consumer electronics devices. In April 2008, NXP announced it would acquire the set-top box business of Conexant to complement its existing Home business unit. In October 2009, NXP announced that it would sell its Home business unit to Trident Microsystems
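Many real-time clock chips, the subject of this article, are accessed over NXP's I²C bus and keep their time registers (seconds, minutes, hours, and so on) in binary-coded decimal. A minimal sketch of the conversion, with illustrative helper names not taken from any particular driver:

```python
def bcd_to_int(b: int) -> int:
    """Decode one BCD byte (e.g. 0x59 as read from an RTC register) to 59."""
    return (b >> 4) * 10 + (b & 0x0F)

def int_to_bcd(n: int) -> int:
    """Encode a value 0-99 (e.g. 59) as one BCD byte (0x59) for writing back."""
    if not 0 <= n <= 99:
        raise ValueError("a single BCD byte holds only 0-99")
    return ((n // 10) << 4) | (n % 10)
```

A driver would apply these per register after reading the raw bytes over the bus; the bus transfer itself is omitted here since it depends on the host's I²C interface.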

27.
Texas Instruments
–
Texas Instruments Inc. is an American technology company that designs and manufactures semiconductors, which it sells to electronics designers and manufacturers globally. Headquartered in Dallas, Texas, United States, TI is one of the top ten semiconductor companies worldwide. Texas Instruments's focus is on developing analog chips and embedded processors, which account for more than 85% of its revenue. TI also produces DLP (digital light processing) technology and education-technology products including calculators and microcontrollers; to date, TI has more than 43,000 patents worldwide. TI produced the world's first commercial silicon transistor in 1954, and Jack Kilby invented the integrated circuit in 1958 while working at TI's Central Research Labs. TI also invented the hand-held calculator in 1967 and introduced the first single-chip microcontroller in 1970. In 1987, TI invented the digital light processing device, which serves as the foundation for the company's award-winning DLP technology and DLP Cinema. In 1990, TI came out with the popular TI-81 calculator, which made it a leader in the calculator industry. In 1997, its defense business was sold to Raytheon, which allowed TI to strengthen its focus on digital solutions. Texas Instruments was founded by Cecil H. Green, J. Erik Jonsson and Eugene McDermott; McDermott was one of the original founders of Geophysical Service Inc. (GSI) in 1930, and McDermott, Green, and Jonsson were GSI employees who purchased the company in 1941. In November 1945, Patrick Haggerty was hired as general manager of the Laboratory and Manufacturing division, which focused on electronic equipment. By 1951, the L&M division, with its contracts, was growing faster than GSI's geophysical division. The company was reorganized and initially renamed General Instruments Inc.; because there already existed a firm named General Instrument, the company was renamed Texas Instruments that same year. 
From 1956 to 1961, Fred Agnich of Dallas, later a Republican member of the Texas House of Representatives, was the Texas Instruments president. Geophysical Service, Inc. became a subsidiary of Texas Instruments, and early in 1988 most of GSI was sold to the Halliburton Company. In 1930, J. Clarence Karcher and Eugene McDermott had founded Geophysical Service, an early provider of seismic exploration services to the petroleum industry. In 1939, the company reorganized as Coronado Corp., an oil company with Geophysical Service Inc. as a subsidiary. On December 6, 1941, McDermott, along with three other GSI employees, J. Erik Jonsson, Cecil H. Green, and H. B., bought the company. During World War II, GSI expanded its services to include electronics for the U.S. Army Signal Corps. In 1951, the company changed its name to Texas Instruments, GSI becoming a wholly owned subsidiary of the new company. Texas Instruments continued to manufacture equipment for use in the seismic industry, and finally sold GSI to Halliburton in 1988, at which point GSI ceased to exist as a separate entity. Texas Instruments entered the electronics market in 1942 with submarine-detection equipment. During the early 1980s, Texas Instruments instituted a quality program which included Juran training, as well as promoting statistical process control, Taguchi methods and Design for Six Sigma

28.
STMicroelectronics
–
STMicroelectronics is a French-Italian multinational electronics and semiconductor manufacturer headquartered in Geneva, Switzerland. It is commonly called ST, and it is Europe's largest semiconductor chip maker by revenue. While STMicroelectronics' corporate headquarters and the headquarters for the EMEA region are based in Geneva, the holding company, STMicroelectronics N.V., is registered in Amsterdam, Netherlands. The company's US headquarters is in Coppell, Texas; headquarters for the Asia-Pacific region is in Singapore, while Japan and Korea operations are headquartered in Tokyo and the headquarters for the Greater China region is in Shanghai. STMicroelectronics was formed in 1987 by the merger of semiconductor companies SGS Microelettronica of Italy and Thomson Semiconducteurs, the semiconductor arm of France's Thomson; at the time of the merger the company was known as SGS-THOMSON. SGS Microelettronica and Thomson Semiconducteurs were both long-established semiconductor companies. Thomson Semiconducteurs was created in 1982 by the French government's widespread nationalisation of industries, and it included the semiconductor activities of the French electronics company Thomson; Mostek, a US company founded in 1969 by some ex-employees of Texas Instruments; and Eurotechnique, founded in 1979 in Rousset, Bouches-du-Rhône, as a joint venture between Saint-Gobain of France and US-based National Semiconductor. After its creation by merger in 1987, SGS-Thomson was ranked 14th among the top 20 semiconductor suppliers, with sales of around US$850 million. In 1994 it acquired Canada-based Nortel's semiconductor activities, and in 2002 Alcatel's Microelectronics division, along with the incorporation of smaller companies such as the UK company Synad Ltd. Genesis Microchip is known for its strength in video processing technology and has centres located in Santa Clara, Toronto, and Taipei City, Taiwan (R.O.C.). On December 8, 1994, the company completed its public offering on the Paris Stock Exchange. 
Owner Thomson SA sold its stake in the company in 1998, when the company also listed on the Borsa Italiana in Milan. In 2002, Motorola and TSMC joined ST and Philips in a new technology partnership, the Crolles2 Alliance, created with a new 300 mm (12-inch) wafer manufacturing facility located in Crolles. By 2005, STMicroelectronics was ranked fifth, behind Intel, Samsung, Texas Instruments and Toshiba, but ahead of Infineon, Renesas, NEC and NXP; the company was the largest European semiconductor supplier, ahead of Infineon and NXP. Early in 2007, NXP and Freescale decided to stop their participation in the Crolles2 Alliance; under the terms of the agreement the Alliance came to an end on December 31, 2007. On May 22, 2007, ST and Intel created a joint venture in flash memory called Numonyx; this new company merged ST and Intel flash-memory activities, and the joint venture began operating on August 20, 2008. On February 10, 2009, ST-Ericsson, a joint venture bringing together ST-NXP Wireless and Ericsson Mobile Platforms, was created. In 2011, STMicroelectronics announced the creation of a joint lab with the Sant'Anna School of Advanced Studies

29.
Ricoh
–
The Ricoh Company, Ltd. is a Japanese multinational imaging and electronics company. It was founded by the RIKEN zaibatsu on 6 February 1936 as Riken Sensitized Paper; Ricoh's headquarters are located in the Ricoh Building in Chūō, Tokyo. In the late 1990s through early 2000s, the company grew to become the largest copier manufacturer in the world; during this time, Ricoh acquired Savin, Gestetner, Lanier, Rex-Rotary, Monroe, Nashuatec, IKON and, most recently, the IBM Printing Systems Division / Infoprint Solutions Company. Although the Monroe brand was discontinued, products continue to be marketed worldwide under the other brand names. In 2006, Ricoh acquired the European operations of Danka for $210 million; these operations continue as a stand-alone business unit under the Infotec brand. Before relocating to Chūō, Ricoh was first headquartered in Minato, Tokyo; in 2006 its headquarters moved to the Ricoh Building, a 25-story building in the Ginza area of Chūō. Throughout the late 1980s and early 1990s, Ricoh was the primary manufacturer of Pitney Bowes copiers. It has also manufactured copiers for Toshiba, fax machines for AT&T Corporation and Omnifax, and the Ricoh 2A03 8-bit processor used in the Nintendo Entertainment System. In 2003 Ricoh bought naming rights to the CNE Coliseum in Toronto, and in 2004 it acquired Hitachi Printing Solutions, Ltd, creating a new company, Ricoh Printing Systems, Ltd. In 2005 Ricoh bought the rights to a stadium/entertainment complex. In November 2006, Ricoh announced the integration of the office of Ricoh Europe B.V. in Amstelveen, Netherlands, with NRG's European headquarters in London. This was completed on April 1, with the former NRG HQ in London becoming the Strategic HQ; this mirrors a similar process which took place in the US with Lanier and Ricoh USA. 
This integration was the first step toward integration within each country in Europe. On August 27, 2008, Ricoh announced its intention to acquire IKON Office Solutions for $1.6 billion, and later that year, on November 1, Ricoh completed the acquisition. In May 2011, Ricoh announced a cut of 10,000 jobs worldwide by March 2014, from the then-current 40,000 workers in Japan and 68,900 overseas; the company would also shift 15,000 workers to areas with more growth potential. Japanese optical glass maker Hoya Corporation said on July 1, 2011, that it would sell its Pentax camera business to Ricoh; on July 29, 2011, Hoya transferred its Pentax imaging systems business to a newly established subsidiary called Pentax Imaging Corporation. On October 1, 2011, Ricoh acquired all shares of Pentax Imaging Corp. and renamed the new subsidiary Pentax Ricoh Imaging Company, Ltd; on August 1, 2013, the name was changed to Ricoh Imaging Company Ltd. On January 8, 2016, Ricoh India stated it had partnered with Siemens to offer digital lifecycle-management software, and on July 19, 2016, Ricoh India admitted to an estimated ₹1,123 crore accounting fraud

30.
Motorola
–
Motorola was an American multinational telecommunications company founded on September 25, 1928, and based in Schaumburg, Illinois. After having lost $4.3 billion from 2007 to 2009, the company was divided into two independent public companies, Motorola Mobility and Motorola Solutions; Motorola Solutions is generally considered the direct successor to Motorola, as the reorganization was structured with Motorola Mobility being spun off. Motorola Mobility was acquired by Lenovo in 2014. Motorola designed and sold wireless network equipment such as cellular transmission base stations and signal amplifiers. Its business and government customers consisted mainly of wireless voice and broadband systems; these businesses are now part of Motorola Solutions. Google sold Motorola Home to the Arris Group in December 2012 for US$2.35 billion. Motorola's wireless telephone handset division, also known as the Personal Communication Sector prior to 2004, was a pioneer in cellular telephones: it pioneered the mobile phone with the DynaTAC and staged a resurgence by the mid-2000s with the RAZR, but lost market share in the second half of that decade. Later it focused on smartphones using Google's open-source Android mobile operating system; the first phone to use the newest version of the OS, Android 2.0, was released on November 2, 2009 as the Motorola Droid. The handset division was spun off into the independent Motorola Mobility. On May 22, 2012, Google CEO Larry Page announced that Google had closed its deal to acquire Motorola Mobility. On January 29, 2014, Page announced the sale of Motorola Mobility to Lenovo pending closure of the deal, and on October 30, 2014, Lenovo finalized its purchase of Motorola Mobility from Google. Galvin Manufacturing Corporation set up shop in a section of a rented building. The company had $565 in working capital and five employees; the first week's payroll was $63. 
The company's first products were battery eliminators, devices that enabled battery-powered radios to operate on household electricity; due to advances in radio technology, battery eliminators soon became obsolete. Paul Galvin learned that some radio technicians were installing sets in cars; his team was successful, and Galvin was able to demonstrate a working model of the radio at the June 1930 Radio Manufacturers Association convention in Atlantic City, New Jersey, bringing home enough orders to keep the company in business. The company sold its first Motorola-branded radio on June 23, 1930, to H. C. Wall of Fort Wayne, Indiana, for $30. The Motorola brand name became so well known that Galvin Manufacturing Corporation later changed its name to Motorola, Inc. Galvin Manufacturing Corporation began selling Motorola car-radio receivers to police departments; the company's first public-safety customers included the Village of River Forest, Village of Bellwood Police Department, City of Evanston Police, Illinois State Highway Police, and Cook County Police. In the same year, the company built up its research and development program with Dan Noble, a pioneer in FM radio and semiconductor technologies. The company produced the hand-held AM SCR-536 radio during World War II, which was vital to Allied communication

31.
Motherboard
–
A motherboard is the main printed circuit board found in general-purpose microcomputers and other expandable systems. It holds, and allows communication between, many of the electronic components of a system, such as the central processing unit and memory. In very old designs, copper wires were the discrete connections between card-connector pins, but printed circuit boards soon became standard practice: the central processing unit, memory, and peripherals were housed on individual printed circuit boards, which were plugged into a backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system. During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard. Business PCs, workstations, and servers were more likely to need expansion cards, either for more robust functions or for higher speeds; laptop and notebook computers developed in the 1990s integrated the most common peripherals. This even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century; memory, processors, network controllers, power source, and storage would be integrated into some systems. A motherboard provides the connections by which the other components of the system communicate. Unlike a backplane, it contains the central processing unit and hosts other subsystems. A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components; this chipset determines, to an extent, the features and capabilities of the motherboard. Modern motherboards include sockets in which one or more microprocessors may be installed. 
In the case of CPUs in ball grid array packages, such as the VIA C3, the CPU is soldered directly to the motherboard rather than socketed. As of 2007, some graphics cards require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply. Motherboards also provide connectors for hard drives, typically SATA only; disk drives also connect to the power supply. Additionally, nearly all motherboards include logic and connectors to support commonly used devices, such as USB for mice and keyboards. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard, for example on the Apple II, and rarely on IBM-compatible computers such as the IBM PCjr; additional peripherals such as disk controllers and serial ports were provided as expansion cards. Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat. Motherboards are produced in a variety of sizes and shapes called computer form factors; the motherboards used in IBM-compatible systems are designed to fit various case sizes. As of 2007, most desktop computer motherboards use the ATX standard form factor, even those found in Macintosh and Sun computers. A case's motherboard and PSU form factors must all match, though some smaller form-factor motherboards of the same family will fit larger cases

32.
Silkscreen
–
Screen printing is a printing technique whereby a mesh is used to transfer ink onto a substrate, except in areas made impermeable to the ink by a blocking stencil. A blade or squeegee is moved across the screen to fill the open mesh apertures with ink; the ink wets the substrate and is pulled out of the mesh apertures as the screen springs back after the blade has passed. Screen printing is also a method of printmaking in which a design is imposed on a screen of polyester or other fine mesh: ink is forced into the mesh openings by the blade or squeegee and, by wetting the substrate, remains on the substrate as the screen lifts away. It is also known as silk-screen printing, screen printing or serigraphy. One color is printed at a time, so several screens can be used to produce a multicoloured image or design. There are various terms for what is essentially the same technique: traditionally the process was called screen printing or silkscreen printing because silk was used in the process prior to the invention of polyester mesh. Currently, synthetic threads are used in the screen printing process. The most popular mesh in general use is made of polyester; there are special-use mesh materials of nylon and stainless steel available to the screen printer, and there are also different mesh sizes, which will determine the outcome. Screen printing first appeared in a recognizable form in China during the Song Dynasty. It was then adapted by other Asian countries like Japan, and was furthered by the creation of newer methods. Roy Beck, Charles Peter and Edward Owens studied and experimented with chromic acid salt sensitized emulsions for photo-reactive stencils. Commercial screen printing now uses sensitizers far safer and less toxic than bichromates; currently there is a large selection of pre-sensitized and user-mixed sensitized emulsion chemicals for creating photo-reactive stencils. Serigraphy is a word formed from the Latin sēricum (silk) and the Greek graphein (to write or draw). 
The Printers' National Environmental Assistance Center says screen printing is arguably the most versatile of all printing processes. Credit is generally given to the artist Andy Warhol for popularising screen printing as an artistic technique, identified as serigraphy, in the United States. Sister Mary Corita Kent gained fame for her vibrant serigraphs during the 1960s and 1970s; her works were rainbow-colored and contained words that were political and fostered peace, love and caring. The American entrepreneur, artist and inventor Michael Vasilantone started to use and develop a multicolour garment screen printing machine; Vasilantone later filed for a patent on his invention in 1967, granted number 3,427,964 on February 18, 1969. The original machine was manufactured to print logos and team information on bowling garments. The Vasilantone patent was licensed by multiple manufacturers, and the resulting production and boom in printed T-shirts made this garment screen printing machine popular

33.
Southbridge (computing)
–
The southbridge is one of the two chips in the core logic chipset on a personal computer motherboard, the other being the northbridge. The southbridge typically implements the slower capabilities of the motherboard in a northbridge/southbridge chipset computer architecture. In systems with Intel chipsets, the southbridge is named I/O Controller Hub, while AMD has named its southbridge Fusion Controller Hub since the introduction of its Fusion APUs. The southbridge can usually be distinguished from the northbridge by not being connected to the CPU; rather, the northbridge ties the southbridge to the CPU. Through the use of controller integrated channel circuitry, the northbridge can directly link signals from the I/O units to the CPU for data control. The southbridge eventually became redundant and was replaced by the Platform Controller Hub architecture introduced with the Intel 5 Series chipset in 2008. All southbridge features and remaining I/O functions are managed by the PCH, which is connected to the CPU via the Direct Media Interface. A southbridge chipset handles all of a computer's I/O functions, such as USB, audio, serial, the system BIOS, the ISA bus, and the interrupt controller. Traditionally, the interface between a northbridge and southbridge was the PCI bus; the main bridging interfaces used now are DMI and UMI. The name is derived from representing the architecture in the fashion of a map and was first described as such with the introduction of the PCI Local Bus Architecture in 1991. At Intel, the authors of the PCI specification viewed the PCI local bus as being at the centre of the PC platform architecture. The northbridge extends to the north of the PCI bus backbone in support of CPU, memory and cache; likewise, the southbridge extends to the south of the PCI bus backbone and bridges to less performance-critical I/O capabilities such as the disk interface, audio, etc.
The CPU is located at the top of the map at due north and is connected to the chipset via a fast bridge located north of other system devices as drawn. The northbridge is connected to the rest of the chipset via a bridge located south of other system devices as drawn. Although the current PC platform architecture has replaced the PCI bus backbone with faster I/O backbones, the functionality found in a contemporary southbridge includes: the PCI bus (support includes the traditional PCI specification, but may also include PCI-X); ISA support, which remains a part of the modern southbridge, though ISA slots are no longer provided on more recent motherboards; the LPC bridge, which provides a data and control path to the super I/O; the SPI bus, a simple serial bus mostly used for firmware flash storage access; the SMBus, used to communicate with devices on the motherboard; the DMA controller, which allows ISA or LPC devices direct access to main memory without needing help from the CPU; and interrupt controllers such as the 8259A and/or I/O APIC.

34.
Microcontroller
–
A microcontroller is a small computer on a single integrated circuit. In modern terminology, it is a system on a chip or SoC; a microcontroller contains one or more CPUs along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is also included on chip. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general purpose applications consisting of various discrete chips. Mixed signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz for low power consumption; other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor, with higher clock speeds. The first microprocessor was the 4-bit Intel 4004 released in 1971, followed by the Intel 8008; however, both processors required external chips to implement a working system, raising total system cost and making it impossible to economically computerize appliances. One book credits TI engineers Gary Boone and Michael Cochran with the creation of the first microcontroller in 1971. The result of their work was the TMS1000, which became available in 1974. It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems. Intel later produced a microcontroller that likewise combined RAM and ROM on the same chip; this chip would find its way into over one billion PC keyboards. At that time Intel's President, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history. Most microcontrollers at this time had concurrent variants. One had an erasable EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light, often used for prototyping. The PROM variant was the same type of memory as the EPROM, but because its package had no window, it could not be erased.
The erasable versions required ceramic packages with quartz windows, making them more expensive than the OTP versions. In 1993, Atmel introduced the first microcontroller using Flash memory; other companies rapidly followed suit, with both memory types. Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under US$0.25 in quantity in 2009; nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors. In the future, MRAM could potentially be used in microcontrollers, as it has infinite endurance. In 2002, about 55% of all CPUs sold in the world were 8-bit microcontrollers and microprocessors.

35.
Peripheral
–
A peripheral is an ancillary device used to put information into and get information out of a computer. Touchscreens are an example that combines different devices into a single hardware component that can be used both as an input and an output device. A peripheral device is defined as any auxiliary device, such as a computer mouse or keyboard, that connects to and works with the computer. Other examples of peripherals are image scanners, tape drives, microphones, loudspeakers and webcams. Common input peripherals include keyboards, computer mice, graphic tablets, touchscreens, barcode readers, image scanners, microphones, webcams, game controllers, light pens, and digital cameras. Common output peripherals include computer displays, printers and projectors. See also: computer hardware, controller, display device, expansion card, punched card input/output, punched tape, video game accessory.

36.
LTE (telecommunication)
–
In telecommunication, Long-Term Evolution (LTE) is a standard for high-speed wireless communication for mobile phones and data terminals, based on the GSM/EDGE and UMTS/HSPA technologies. It increases capacity and speed using a different radio interface together with core network improvements. The standard is developed by the 3GPP and is specified in its Release 8 document series, with minor enhancements described in Release 9. LTE is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. The different LTE frequencies and bands used in different countries mean that only multi-band phones are able to use LTE in all countries where it is supported. LTE is commonly marketed as 4G LTE, but it does not meet the technical criteria of a 4G wireless service, as specified in the 3GPP Release 8 and 9 document series. The requirements were set forth by the ITU-R organization in the IMT Advanced specification. The LTE Advanced standard formally satisfies the ITU-R requirements to be considered IMT-Advanced; to differentiate LTE Advanced and WiMAX-Advanced from current 4G technologies, ITU has defined them as "True 4G". LTE stands for Long Term Evolution and is a trademark owned by ETSI for the wireless data communications technology. However, other nations and companies do play an active role in the LTE project. The goal of LTE was to increase the capacity and speed of wireless data networks using new DSP techniques. A further goal was the redesign and simplification of the network architecture to an IP-based system with significantly reduced transfer latency compared to the 3G architecture. The LTE wireless interface is incompatible with 2G and 3G networks. LTE was first proposed by NTT DoCoMo of Japan in 2004, and studies on the new standard officially commenced in 2005. Initially, CDMA operators planned to upgrade to rival standards called UMB and WiMAX. The evolution of LTE is LTE Advanced, which was standardized in March 2011.
Services were expected to commence in 2013, and a further evolution known as LTE Advanced Pro was approved in 2015. The LTE specification provides downlink peak rates of 300 Mbit/s and uplink peak rates of 75 Mbit/s. LTE has the ability to manage fast-moving mobiles and supports multicast and broadcast streams. LTE supports scalable carrier bandwidths from 1.4 MHz to 20 MHz, and the simpler architecture results in lower operating costs. In 2004, NTT DoCoMo of Japan proposed LTE as the international standard. In September 2006, Siemens Networks, in collaboration with Nomor Research, showed the first live emulation of an LTE network to the media and investors; as live applications, two users streaming an HDTV video in the downlink and playing a game in the uplink were demonstrated.

37.
Network Time Protocol
–
Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware. NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time. It uses a modified version of Marzullo's algorithm to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet; asymmetric routes and network congestion can cause errors of 100 ms or more. The protocol is usually described in terms of a client-server model. Implementations send and receive timestamps using the User Datagram Protocol on port number 123. They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment. The current protocol is version 4 (NTPv4), which is a proposed standard as documented in RFC5905. It is backward compatible with version 3, specified in RFC1305. The technology was later described in the 1981 Internet Engineering Note 173, and a public protocol was developed from it that was documented in RFC778. Other related network tools were available both then and now; they include the Daytime and Time protocols for recording the time of events, as well as the ICMP Timestamp and IP Timestamp option. In 1985, NTPv0 was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations were documented in RFC958. In 1988, a much more complete specification of the NTPv1 protocol, with associated algorithms, was published in RFC1059.
It drew on the results and clock filter algorithm documented in RFC956 and was the first version to describe the client-server and peer-to-peer modes. In 1989, RFC1119 was published, defining NTPv2 by means of a state machine; it introduced a management protocol and cryptographic authentication scheme which have both survived into NTPv4. The design of NTP was criticized by the DTSS community for lacking formal correctness principles; their alternative design included Marzullo's algorithm, a modified version of which was promptly added to NTP. The bulk of the algorithms from this era have also survived into NTPv4. In 1992, RFC1305 defined NTPv3. In subsequent years, as new features were added and algorithm improvements were made, it became apparent that a new protocol version was required. In 2010, RFC5905 was published containing a proposed specification for NTPv4, but the protocol has significantly moved on since then, and as of 2014 an updated RFC had yet to be published. Following the retirement of Mills from the University of Delaware, the reference implementation is currently maintained as an open source project led by Harlan Stenn. NTP uses a hierarchical, semi-layered system of time sources. Each level of this hierarchy is termed a stratum and is assigned a number starting with zero at the top.
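The round-trip delay and clock offset calculations mentioned above can be sketched from the four timestamps a client and server exchange: the client's transmit time t0, the server's receive time t1, the server's transmit time t2, and the client's receive time t3. This is an illustrative sketch, not code from any NTP implementation; the example timestamps are invented.

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Clock offset and round-trip delay from the four NTP timestamps
    (t0 = client transmit, t1 = server receive, t2 = server transmit,
    t3 = client receive), all in seconds."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # how far ahead the server's clock is
    delay = (t3 - t0) - (t2 - t1)            # time actually spent on the network
    return offset, delay

# A server whose clock runs 0.5 s ahead, with 0.1 s of latency each way
# and 0.1 s of server processing time between receive and transmit:
offset, delay = ntp_offset_delay(10.0, 10.6, 10.7, 10.3)
# offset is approximately 0.5 s, delay approximately 0.2 s
```

The offset formula implicitly assumes the outbound and return network paths take equal time; the asymmetric routes noted above violate that assumption and produce errors the client cannot detect.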

38.
GPS
–
The Global Positioning System (GPS) is a space-based radionavigation system owned by the United States government and operated by the United States Air Force. The GPS system operates independently of any telephonic or internet reception and provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system and maintains it; however, the US government can selectively deny access to the system, as happened to the Indian military in 1999 during the Kargil War. The U.S. Department of Defense developed the system, and it became fully operational in 1995. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. Announcements from Vice President Al Gore and the White House in 1998 initiated changes to the system, and in 2000 the U.S. Congress authorized the modernization effort, GPS III. In addition to GPS, other systems are in use or under development, mainly because of the possibility of a denial of access. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS; GLONASS reception can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within two meters. There are also the European Union's Galileo positioning system and China's BeiDou Navigation Satellite System. Special and general relativity predict that the clocks on the GPS satellites would be seen by Earth's observers to run 38 microseconds faster per day than clocks on the Earth. Left uncorrected, the GPS calculated positions would quickly drift into error, accumulating to about 10 kilometers per day; the relativistic time effect of the GPS clocks running faster than clocks on earth was therefore corrected for in the design of GPS.
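The scale of that relativistic effect can be checked with a quick back-of-the-envelope calculation: a 38 µs/day clock error, multiplied by the speed of light, corresponds to roughly the 10 km/day of position drift quoted above.

```python
# Back-of-the-envelope check of the relativistic drift figure.
c = 299_792_458.0        # speed of light, m/s
drift_per_day = 38e-6    # net relativistic clock drift, seconds per day

range_error_m = drift_per_day * c
# roughly 11.4 km of accumulated pseudorange error per day,
# consistent with the "about 10 kilometers per day" figure above
```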
After the Soviet Union launched the first man-made satellite, Sputnik 1, in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins's Applied Physics Laboratory (APL), decided to monitor Sputnik's radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit; the Director of the APL gave them access to their UNIVAC to do the heavy calculations required. The next spring, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location. This led them and APL to develop the TRANSIT system. In 1959, ARPA also played a role in TRANSIT. TRANSIT, the first satellite navigation system, used by the United States Navy, was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required by GPS. In the 1970s, the ground-based OMEGA navigation system, based on comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded, and it is also the reason for the ultra secrecy at that time.

39.
Radio clock
–
A radio clock or radio-controlled clock is a clock that is automatically synchronized by a time code transmitted by a radio transmitter connected to a time standard such as an atomic clock. Such a clock may be synchronized to the time sent by a single transmitter, such as many national or regional time transmitters, or may use multiple transmitters. Such systems may be used to set clocks automatically or for any purpose where accurate time is needed. A radio controlled clock will contain a time base oscillator to maintain timekeeping if the radio signal is momentarily unavailable. Other radio controlled clocks use the signals transmitted by dedicated transmitters in the shortwave bands. Systems using dedicated time signal stations can achieve accuracy of a few tens of milliseconds. GPS satellite navigation receivers also internally generate accurate time information from the satellite signals. General purpose or consumer grade GPS may have an offset of up to one second between the internally calculated time, which is itself much more accurate than one second, and the time displayed on the screen. Other broadcast services may include timekeeping information of varying accuracy within their signals. Radio clocks depend on coded time signals from radio stations. The stations vary in broadcast frequency and in location. In general, each station has its own format for the time code. Many other countries can receive these signals, but success depends on the time of day, atmospheric conditions, and interference from intervening buildings. Reception is generally better if the clock is placed near a window facing the transmitter. There is also a transit delay of approximately 1 ms for every 300 km the receiver is from the transmitter. A number of manufacturers and retailers sell radio clocks that receive coded time signals from a radio station; one of the first radio clocks was offered by Heathkit in late 1983.
Their model GC-1000 "Most Accurate Clock" received shortwave time signals from radio station WWV in Fort Collins, Colorado, and automatically switched between WWV's 5, 10, and 15 MHz frequencies to find the strongest signal as conditions changed through the day and year. It kept time during periods of poor reception with a quartz-crystal oscillator. This oscillator was disciplined, meaning that the microprocessor-based clock used the highly accurate time signal received from WWV to trim the crystal oscillator; the timekeeping between updates was thus more accurate than the crystal alone could have achieved. Time down to the tenth of a second was shown on an LED display. The GC-1000 originally sold for US$250 in kit form, US$400 preassembled, and was considered impressive at the time; Heath Company was granted a patent for its design. In the 2000s, radio-based atomic clocks became common in retail stores; as of 2010, prices started at around US$15 in many countries. Clocks may have additional features such as indoor thermometers and weather station functionality.
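The transit delay noted above, about 1 ms per 300 km, follows directly from the speed of radio propagation. A rough sketch, with an invented example distance:

```python
def transit_delay_ms(distance_km):
    """Approximate radio propagation delay in milliseconds: signals
    travel at about 300,000 km/s, i.e. roughly 1 ms per 300 km."""
    return distance_km / 300.0

# e.g. a receiver 1,500 km from the transmitter sees about 5 ms of delay
print(transit_delay_ms(1500))  # 5.0
```

A clock that corrects for this delay needs to know, at least roughly, its distance from the transmitter.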

40.
Data General Nova
–
The Data General Nova was a popular 16-bit minicomputer released by the American company Data General in 1968. The Nova was packaged into a rack-mount case and had enough power to do most simple computing tasks. The Nova became popular in science laboratories around the world. It was succeeded by the Data General Eclipse, which was similar in most ways but added virtual memory support and other features required by modern operating systems. Edson de Castro, one of the founders, had been the product manager at Digital Equipment Corporation for their pioneering PDP-8; the fourth founder, Herbert Richman, had been a salesman for Fairchild Semiconductor and knew the others through his contacts with Digital Equipment. The Nova's construction on large printed circuit boards greatly reduced costs over the traditional wire-wrapping technique, and the larger-board construction also made the Nova more reliable, which made it especially attractive for industrial or lab settings. Fairchild Semiconductor provided the new medium-scale integration chips used throughout the system. The Nova was one of the first 16-bit minicomputers and was a leader in moving to word lengths that were multiples of the 8-bit byte in that market. DG released the Nova in 1969 at a price of US$3,995; the basic model was not very useful out of the box. Starting in 1969, Data General shipped a total of 50,000 Novas at $8,000 each. The Nova's biggest competition was from the new DEC PDP-11 computer series. The Nova became popular in scientific and laboratory uses. The Nova 1200 executed core memory access instructions (LDA and STA) in 2.55 microseconds; use of read-only memory saved 0.4 μs. Accumulator instructions took 1.55 μs; MUL took 2.55 μs, DIV 3.75 μs, and ISZ 3.15–4.5 μs. On the later Eclipse MV/6000, LDA and STA took 0.44 μs; ADD, etc. took 0.33 μs; MUL 2.2 μs, DIV 3.19 μs, ISZ 1.32 μs, FAD 5.17 μs, and FMMD 11.66 μs.
Bob Supnik's SimH project – includes a basic Nova emulator in a user-modifiable package
The portable C compiler – includes a NOVA target
A portable PDP-8 and DG Nova cross-assembler
Carl Friend's Minicomputer Museum – describes the Nova instruction set in detail

41.
PDP-8
–
The 12-bit PDP-8, produced by Digital Equipment Corporation (DEC), was the first successful commercial minicomputer. DEC introduced it on March 22, 1965, priced at $18,500, and eventually sold more than 50,000 systems. The PDP-8 was the first computer to be sold for under $20,000, and it was the first widely sold computer in the DEC PDP series of computers. The chief engineer who designed the initial version of the PDP-8 was Edson de Castro. The earliest PDP-8 model uses diode–transistor logic, packaged on flip chip cards, and is about the size of a small household refrigerator. It was followed in 1966 by the PDP-8/S, available in desktop and rack-mount models. Using a one-bit serial arithmetic logic unit implementation allowed the PDP-8/S to be smaller and less expensive, though slower, than the original PDP-8. The PDP-8/S was about 20% of the cost and about 10% of the performance of the PDP-8; the only mass storage peripheral available for the PDP-8/S was the DF32 disk. Later systems returned to a faster, fully parallel implementation but used less costly transistor-transistor logic MSI logic. Most surviving PDP-8s are from this era; the PDP-8/E is common, and well-regarded because so many types of I/O devices were available for it. It was often configured as a general-purpose computer. The last commercial PDP-8 models, introduced in 1979, were called CMOS-8s. They use custom complementary metal-oxide-semiconductor microprocessors. They were not priced competitively, and the offering failed; the IBM PC in 1981 cemented the doom of the CMOS-8s. Intersil sold the integrated circuits commercially through to 1982 as the Intersil 6100 family. By virtue of their CMOS technology they had low power requirements and were used in some embedded military systems. The PDP-8 combined low cost, simplicity, expandability, and careful engineering for value. Its greatest historical significance was that the PDP-8's low cost and high volume made a computer available to many new people for many new uses.
Its continuing significance is as an example of value-engineered computer design. The low complexity brought other costs: it made programming cumbersome, as is seen in the examples in this article and in the discussion of pages and fields. Some ambitious programming projects failed to fit in memory or developed design defects that could not be solved. As design advances reduced the costs of logic and memory, the programmer's time became relatively more important, and subsequent computer designs emphasized ease of programming, typically using larger instruction sets. Eventually, most machine-language programming came to be generated by compilers and report generators. The PDP-8 used ideas from several 12-bit predecessors, most notably the LINC designed by W. A. Clark. The architecture has a simple programmed I/O bus, plus a DMA channel.
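The "pages" mentioned above are the PDP-8's 128-word memory pages: a memory-reference instruction carries only a 7-bit offset plus one bit selecting either page zero or the page containing the instruction, which is part of what made programming cumbersome. A sketch of the effective-address calculation, ignoring the indirect bit:

```python
def effective_address(pc, instruction):
    """Effective address of a PDP-8 memory-reference instruction,
    ignoring the indirect bit. The 7-bit offset addresses either
    page zero or the 128-word page containing the instruction."""
    offset = instruction & 0o177          # low 7 bits of the 12-bit word
    current_page_bit = (instruction >> 7) & 1
    if current_page_bit:
        return (pc & 0o7600) | offset     # same page as the instruction
    return offset                         # page zero

# An instruction at address 0o2345 referencing offset 0o012 on its own page:
print(oct(effective_address(0o2345, (1 << 7) | 0o012)))  # 0o2212
```

Data outside the current page or page zero had to be reached through an extra level of indirection, one source of the cumbersomeness described above.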

43.
Raspberry Pi
–
The original model became far more popular than anticipated, selling outside of its target market for uses such as robotics. Peripherals are not included with the Raspberry Pi; some accessories, however, have been included in several official and unofficial bundles. According to the Raspberry Pi Foundation, over 5 million Raspberry Pis had been sold by February 2015, and by 9 September 2016 they had sold 10 million. Several generations of Raspberry Pis have been released; the first generation was released in February 2012. It was followed by a simpler and less expensive Model A. In 2014, the foundation released a board with an improved design, the Raspberry Pi 1 Model B+. These boards are approximately credit-card sized and represent the standard mainline form factor; improved A+ and B+ models were released a year later. The Raspberry Pi 2, which added more RAM, was released in February 2015. The Raspberry Pi 3 Model B, released in February 2016, is bundled with on-board WiFi, Bluetooth and USB boot capabilities. As of January 2017, the Raspberry Pi 3 Model B is the newest mainline Raspberry Pi. Raspberry Pi boards are priced between US$5 and US$35. On 28 February 2017, the Raspberry Pi Zero W was launched, which is identical to the Raspberry Pi Zero apart from its added wireless connectivity. All models feature a Broadcom system on a chip, which includes an ARM compatible central processing unit and an on-chip graphics processing unit. CPU speed ranges from 700 MHz to 1.2 GHz for the Pi 3. Secure Digital cards are used to store the operating system and program memory in either the SDHC or MicroSDHC sizes. Most boards have between one and four USB slots, HDMI and composite video output, and a 3.5 mm phone jack for audio; lower level output is provided by a number of GPIO pins which support common protocols like I²C. The B-models have an 8P8C Ethernet port, and the Pi 3 and Pi Zero W have on-board Wi-Fi 802.11n and Bluetooth.
The Foundation provides Raspbian, a Debian-based Linux distribution, for download, as well as third-party operating systems including Ubuntu, Windows 10 IoT Core and RISC OS. It promotes Python and Scratch as the main programming languages, with support for many other languages. The default firmware is closed source, while an unofficial open source alternative is available. The Raspberry Pi hardware has evolved through several versions that feature variations in memory capacity and peripheral-device support. This block diagram depicts Models A, B, A+, and B+. Model A, A+, and the Pi Zero lack the Ethernet and USB hub components; the Ethernet adapter is connected to an additional USB port. In Model A, A+, and the Pi Zero, the USB port is connected directly to the system on a chip. On the Pi 1 Model B+ and later models, the USB/Ethernet chip contains a five-port USB hub. On the Pi Zero, the USB port is also connected directly to the SoC, but it uses a micro USB port.

44.
List of Arduino boards and compatible systems
–
This is a non-exhaustive list of Arduino boards and compatible systems. The official policy document on the use of the Arduino name emphasizes that the project is open to incorporating work by others into the official product. As a result of the naming conventions of the Arduino, derivative boards have been released under other names; the name Freeduino is not trademarked and is free to use for any purpose. Several commercially released Arduino-compatible products have avoided the Arduino name by using "-duino" name variants. The following boards are fully or almost fully compatible with both the Arduino hardware and software, including being able to accept shield daughterboards. Special purpose Arduino-compatible boards add additional hardware optimised for a specific application; it is rather like having an Arduino and a shield on a single board. Some are shield compatible, others are not. Some boards are compatible with the Arduino software but do not accept standard shields; they have different connectors for power and I/O, such as a series of pins on the underside of the board for use with breadboards for prototyping, or more specific connectors. One of the important choices made by Arduino-compatible board designers is whether or not to include USB circuitry on the board; that circuitry can instead be placed in the cable between the development PC and the board, making each instance of the board less expensive. For many Arduino tasks, the USB circuitry is redundant once the device has been programmed. Some non-ATmega boards accept Arduino shield daughter boards; their microcontrollers are not compatible with the official Arduino IDE, but they do provide their own version of the Arduino IDE. Other boards accept Arduino shield daughter boards but do not use microcontrollers compatible with the Arduino IDE, nor do they provide an implementation of the Arduino IDE. See also: Category 5 cable. Media related to Arduino compatibles is available at Wikimedia Commons.

Computer
–
A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, makes them applicable to a very wide range of tasks; such computers are used as control systems for a very wide variety of industrial and consumer devices. The Internet is run on computers.

1.
Computer

2.
Suanpan (the number represented on this abacus is 6,302,715,408)

Clock
–
A clock is an instrument to measure, keep, and indicate time. The word clock is derived from the Celtic words clagan and clocca meaning bell, a silent instrument missing such a striking mechanism has traditionally been known as a timepiece. In general usage today a clock refers to any device for measuring and displaying the time, Watches and other

1.
The Swiss railway clock.

2.
The Shepherd gate clock at the Royal Observatory, Greenwich.

3.
Simple horizontal sundial.

4.
The flow of sand in an hourglass can be used to keep track of elapsed time.

Integrated circuit
–
An integrated circuit or monolithic integrated circuit is a set of electronic circuits on one small flat piece of semiconductor material, normally silicon. The ICs mass production capability, reliability and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of using discrete transistors. ICs are now u

1.
Erasable programmable read-only memory integrated circuits. These packages have a transparent window that shows the die inside. The window allows the memory to be erased by exposing the chip to ultraviolet light.

2.
Integrated circuit from an EPROM memory microchip showing the memory blocks, the supporting circuitry and the fine silver wires which connect the integrated circuit die to the legs of the packaging.

3.
Jack Kilby 's original integrated circuit

4.
The die from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip

Time
–
Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future. Time is often referred to as the dimension, along with the three spatial dimensions. Time has long been an important subject of study in religion, philosophy, and science, nevertheless,

1.
The flow of sand in an hourglass can be used to keep track of elapsed time. It also concretely represents the present as being between the past and the future.

3.
Horizontal sundial in Taganrog

4.
A contemporary quartz watch

Personal computer
–
A personal computer is a multi-purpose electronic computer whose size, capabilities, and price make it feasible for individual use. PCs are intended to be operated directly by a end-user, rather than by an expert or technician. In the 2010s, PCs are typically connected to the Internet, allowing access to the World Wide Web, personal computers may b

1.
Commodore PET in 1983 (at American Museum of Science and Energy)

3.
IBM Personal Computer XT in 1988

4.
The 8-bit PMD 85 personal computer produced in 1985-1990 by the Tesla company in the former socialist Czechoslovakia. This computer was produced locally (in Piešťany) due to a lack of foreign currency with which to buy systems from the West.

Server (computing)
–
In computing, a server is a computer program or a device that provides functionality for other programs or devices, called clients. This architecture is called the model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called services, such as sharing data or r

1.
Wikimedia Foundation servers

2.
A rack-mountable server. Top cover removed to reveal the internal components.

Embedded system
–
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a device often including hardware. Embedded systems control many devices in use today. Ninety-eight percent of all microprocessors are manufactured as components of

1.
Picture of the internals of an ADSL modem / router. A modern example of an embedded system. Labelled parts include a microprocessor (4), RAM (6), and flash memory (7).

Single-board computer
–
A single-board computer is a complete computer built on a single circuit board, with microprocessor, memory, input/output and other features required of a functional computer. Single-board computers were made as demonstration or development systems, for educational systems, many types of home computers or portable computers integrate all their func

2.
A socket 3 based 486 SBC with power supply and flatscreen

3.
Close up of SBC

4.
Major components on an PICMG 1.3 SBC

Signal (electrical engineering)
–
A signal as referred to in communication systems, signal processing, and electrical engineering is a function that conveys information about the behavior or attributes of some phenomenon. The IEEE Transactions on Signal Processing states that the signal includes audio, video, speech, image, communication, geophysical, sonar, radar. Typically, signa

1.
A digital signal has two or more distinguishable waveforms, in this example, high voltage and low voltages, each of which can be mapped onto a digit. Characteristically, noise can be removed from digital signals provided it is not too large.

Digital electronics
–
Digital electronics or digital circuits are electronics that handle digital signals rather than by continuous ranges as used in analog electronics. All levels within a band of values represent the information state. In most cases, the number of states is two, and they are represented by two voltage bands, one near a reference value, and the other a

1.
A binary clock, hand-wired on breadboards

2.
A digital signal has two or more distinguishable waveforms, in this example, high voltage and low voltages, each of which can be mapped onto a digit.

3.
An industrial digital controller

4.
Intel 80486DX2 microprocessor

Global Positioning System
–
The Global Positioning System is a space-based radionavigation system owned by the United States government and operated by the United States Air Force. The GPS system operates independently of any telephonic or internet reception, the GPS system provides critical positioning capabilities to military, civil, and commercial users around the world. T

Ephemeris
–
In astronomy and celestial navigation, an ephemeris gives the positions of naturally occurring astronomical objects as well as artificial satellites in the sky at a given time or times. Historically, positions were given as printed tables of values, given at intervals of date. Modern ephemerides are often computed electronically from mathematical m

Lithium battery
–
Lithium batteries are batteries that have lithium as an anode. These types of batteries are also referred to as lithium-metal batteries and they stand apart from other batteries in their high charge density and high cost per unit. The term lithium battery refers to a family of different lithium-metal chemistries, comprising many types of cathodes a

1.
CR2032 lithium button cell battery

2.
Lithium 9 volt, AA, and AAA sizes. The top unit has three lithium-manganese dioxide cells internally, the bottom two are lithium-iron disulfide single cells physically and electrically compatible with 1.5 volt zinc batteries.

4.
This article is about disposable lithium batteries. It is not to be confused with Lithium-ion battery.

Supercapacitor
–
A supercapacitor is a high-capacity capacitor with capacitance values much higher than other capacitors that bridge the gap between electrolytic capacitors and rechargeable batteries. Smaller units are used as backup for static random-access memory. Supercapacitors do not use the solid dielectric of ordinary capacitors. The separation of charge is

1.
A range of supercapacitors

2.
Typical button capacitor for PCB mounting used for memory backup

3.
Flat style used for mobile components

4.
Radial style of a lithium-ion capacitor for PCB mounting used for industrial applications

Soldering
–
Soldering, is a process in which two or more items are joined together by melting and putting a filler metal into the joint, the filler metal having a lower melting point than the adjoining metal. Soldering differs from welding in that soldering does not involve melting the work pieces, in brazing, the filler metal melts at a higher temperature, bu

1.
Desoldering a contact from a wire

2.
Small figurine being created by soldering

3.
Soldering of an SMD capacitor

4.
A tube of multicore electronics solder used for manual soldering

Nonvolatile BIOS memory
–
Nonvolatile BIOS memory refers to a small memory on PC motherboards that is used to store BIOS settings. It is traditionally called CMOS RAM because it uses a volatile, low-power complementary metal-oxide-semiconductor SRAM powered by a small CMOS battery when system, the typical NVRAM capacity is 256 bytes. The CMOS RAM and the real-time clock hav

1.
CMOS battery in a Pico ITX motherboard

2.
Type CR2032 button cell, most common CMOS battery.

Crystal oscillator
–
A crystal oscillator is an electronic oscillator circuit that uses the mechanical resonance of a vibrating crystal of piezoelectric material to create an electrical signal with a precise frequency. Quartz crystals are manufactured for frequencies from a few tens of kilohertz to hundreds of megahertz, more than two billion crystals are manufactured

1.
A miniature 16 MHz quartz crystal enclosed in a hermetically sealed HC-49/S package, used as the resonator in a crystal oscillator.

3.
100 kHz crystal oscillators at the US National Bureau of Standards that served as the frequency standard for the United States in 1929

4.
Very early Bell Labs crystals from Vectron International Collection
