Network processors are typically software-programmable devices, with generic characteristics similar to the general-purpose central processing units commonly used in many different types of equipment and products.

Network processors have evolved into ICs with specific functions. This evolution has resulted in more complex and more flexible ICs being created. The newer circuits are programmable and thus allow a single hardware IC design to undertake a number of different functions, when the appropriate software is installed.

Network processors are used in the manufacture of many different types of network equipment.

Many designs use specialized microcoded engines to accomplish packet-processing tasks more efficiently.

With the advent of multicore architectures, network processors can be used for higher layer (L4-L7) processing.

Additionally, traffic management, which is a critical element in L2-L3 network processing and used to be executed by a variety of co-processors, has become an integral part of the network processor architecture, and a substantial part of its silicon area ("real estate") is devoted to the integrated traffic manager.[1] Modern network processors are also equipped with low-latency, high-throughput, on-chip interconnection networks optimized for the exchange of small messages among cores (a few data words). Such networks can be used as an alternative facility for efficient inter-core communication, alongside the standard use of shared memory.[2]
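The idea of cores exchanging small messages through dedicated channels, rather than polling shared memory, can be illustrated with a toy model. This is a sketch only: real network processors expose vendor-specific hardware message rings, not a Python API, and all names here are illustrative.

```python
import queue
import threading

# Toy model: each "core" owns an inbox queue; cores exchange small
# fixed-size messages instead of coordinating through shared memory.
NUM_CORES = 4
inboxes = [queue.Queue() for _ in range(NUM_CORES)]

def send(dst_core, msg):
    """Deliver a small message (a few words) to another core's inbox."""
    inboxes[dst_core].put(msg)

def core_worker(core_id, results):
    # Each core waits for a message, records it, and forwards an
    # incremented token to the next core, ring-style.
    msg = inboxes[core_id].get()
    results[core_id] = msg
    send((core_id + 1) % NUM_CORES, msg + 1)

results = [None] * NUM_CORES
threads = [threading.Thread(target=core_worker, args=(i, results))
           for i in range(NUM_CORES)]
for t in threads:
    t.start()
send(0, 100)          # inject the first message into core 0's inbox
for t in threads:
    t.join()
# results now holds [100, 101, 102, 103]: each core saw its message
```

The point of the hardware analogue is latency: a small-message interconnect avoids the cache-coherence traffic that shared-memory handoffs incur.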

Using the generic function of the network processor, a software program implements an application that the network processor executes, resulting in the piece of physical equipment performing a task or providing a service. Some of the application types typically implemented as software running on network processors are:[3]

Quality of service (QoS) enforcement - identifying different types or classes of packets and providing preferential treatment for some types or classes of packets at the expense of others.

Access Control functions - determining whether a specific packet or stream of packets should be allowed to traverse the piece of network equipment.

Encryption of data streams - built-in hardware-based encryption engines allow individual data flows to be encrypted by the processor.
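The QoS enforcement described above amounts to classifying each packet and scheduling the important classes first. A minimal sketch of strict-priority scheduling follows; the class names and priority values are hypothetical, not from any real device.

```python
import heapq

# Toy QoS scheduler: packets are classified into priority classes and a
# strict-priority queue always dequeues the most important class first.
PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}  # lower = better

class QosQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tiebreaker preserves FIFO order within a class

    def enqueue(self, pkt_class, payload):
        heapq.heappush(self._heap, (PRIORITY[pkt_class], self._seq, payload))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue("best_effort", "web-1")
q.enqueue("voice", "call-1")
q.enqueue("video", "stream-1")
q.enqueue("voice", "call-2")

order = [q.dequeue() for _ in range(4)]
# order is ["call-1", "call-2", "stream-1", "web-1"]: voice drains first
```

Real traffic managers use weighted or deficit-round-robin variants to avoid starving the lowest class; strict priority is just the simplest instance of the idea.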

Intel - Intel ceased all development of network processors in 2006, but its market share still grew in 2007 and 2008, peaking at 38%, thanks to previously developed products. Netronome currently holds the license to develop and manufacture IXP processors with more than 16 cores.[4]

1.
Integrated circuit
–
An integrated circuit or monolithic integrated circuit is a set of electronic circuits on one small flat piece of semiconductor material, normally silicon. The IC's mass-production capability, reliability and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other home appliances are now inextricable parts of the structure of modern societies, made possible by the small size and low cost of ICs. These advances, roughly following Moore's law, have allowed chip capacity to grow by a factor of millions. ICs have two main advantages over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time; furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume little power because of their small size. The main disadvantage of ICs is the high cost of designing them and fabricating the required photomasks. This high initial cost means ICs are only practical when high production volumes are anticipated. Circuits meeting this definition can be constructed using many different technologies, including thin-film transistors, thick-film technology, or hybrid integrated circuits. However, in general usage "integrated circuit" has come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent; an immediate commercial use of his patent has not been reported. The idea of the integrated circuit was conceived by Geoffrey Dummer.
Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, gave many public symposia to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956. A precursor idea to the IC was to create small ceramic squares; components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby. However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the circuit are completely integrated". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit, and his work was named an IEEE Milestone in 2009. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an integrated circuit that solved many practical problems Kilby's had not. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation, a key concept behind the IC.

2.
Computer network
–
A computer network or data network is a telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media; the best-known computer network is the Internet. Network devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other. Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, and the network's size, topology and organizational intent. In most cases, application-specific communications protocols are layered over other more general communications protocols, and this formidable collection of information technology requires skilled network management to keep it all running reliably. The chronology of significant computer-network developments begins in the late 1950s. In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes. Licklider developed a working group he called the Intergalactic Computer Network. In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network. In 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network.
This was a precursor to the ARPANET, of which Roberts became program manager. Also in 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks", and in 1979 Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s; by 1998, Ethernet supported transmission speeds of a gigabit. Subsequently, higher speeds of up to 100 Gbit/s were added. The ability of Ethernet to scale easily is a contributing factor to its continued use. Providing access to information on shared storage devices is an important feature of many networks; a network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.

3.
Software
–
Computer software, or simply software, is that part of a computer system that consists of data or computer instructions, in contrast to the physical hardware from which the system is built. In computer science and software engineering, computer software is all information processed by computer systems: programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other; neither can be used on its own. At the lowest level, executable code consists of machine language instructions specific to an individual processor—typically a central processing unit. A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also cause something to appear on a display of the computer system—a state change which should be visible to the user. The processor carries out the instructions in the order they are provided, unless it is instructed to jump to a different instruction. The majority of software is written in high-level programming languages that are easier and more efficient for programmers to use, partly because they are closer to natural languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. An outline for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. However, neither the Analytical Engine nor any software for it were ever created. The first theory about software—prior to the creation of computers as we know them today—was proposed by Alan Turing in his 1935 essay "Computable numbers with an application to the Entscheidungsproblem".
This eventually led to the creation of the academic fields of computer science and software engineering; computer science is more theoretical, whereas software engineering focuses on practical concerns. However, prior to 1946, software as we now understand it—programs stored in the memory of stored-program digital computers—did not yet exist. The first electronic computing devices were instead rewired in order to reprogram them. On virtually all computer platforms, software can be grouped into a few broad categories. There are many different types of software, because the range of tasks that can be performed with a modern computer is so large—see list of software. System software includes operating systems, which are collections of software that manage resources and provide common services for other software that runs on top of them. Supervisory programs, boot loaders, shells and window systems are parts of operating systems. In practice, an operating system comes bundled with additional software so that a user can potentially do some work with a computer that only has an operating system. Device drivers operate or control a particular type of device that is attached to a computer. Utilities are computer programs designed to assist users in the maintenance and care of their computers.
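The software entry above describes instructions executing in order unless a jump transfers control. That fetch-decode-execute cycle can be sketched in a few lines; the three-operation instruction set here is entirely hypothetical, chosen only to make the control flow visible.

```python
# Minimal fetch-decode-execute loop: instructions run in sequence unless
# a jump changes the instruction pointer (hypothetical 3-op machine).
def run(program):
    acc, ip = 0, 0                 # accumulator and instruction pointer
    while ip < len(program):
        op, arg = program[ip]      # fetch and decode
        if op == "add":            # execute: change internal state
            acc += arg
            ip += 1
        elif op == "jmp":          # execute: transfer control
            ip = arg
        elif op == "halt":
            break
    return acc

# add 5, add 7, then jump over the final add straight to halt
prog = [("add", 5), ("add", 7), ("jmp", 4), ("add", 100), ("halt", 0)]
result = run(prog)                 # the jump skips ("add", 100)
```

Real machine languages encode the same structure as binary opcodes, but the loop, the state change per instruction, and the jump are the same in kind.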

4.
Central processing unit
–
The computer industry has used the term "central processing unit" at least since the early 1960s. The form, design and implementation of CPUs have changed over the course of their history, but their fundamental operation remains much the same. Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks. Since the term "CPU" is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC"; it was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. Early CPUs were custom designs used as part of a larger computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities.
This standardization began in the era of discrete-transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines; modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design, using punched paper tape rather than electronic memory. Relays and vacuum tubes were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.

5.
Telecommunications network
–
A telecommunications network is a collection of terminal nodes and links which are connected so as to enable telecommunication between the terminals. The transmission links connect the nodes together; the nodes use circuit switching, message switching or packet switching to pass the signal through the correct links and nodes to reach the correct destination terminal. Each terminal in the network usually has an address so messages or connections can be routed to the correct recipients. The collection of addresses in the network is called the address space. For example, businesses need a greater telecommunications network if they plan to expand their company. With Internet, computer, and telephone networks, businesses can allocate their resources efficiently. These core types of networks are discussed below. Computer network: a computer network consists of computers and devices connected to one another, and information can be transferred from one device to the next. For example, an office filled with computers can share files across each separate device. Computer networks can range from a local area network to a wide area network; the difference between these types of networks is their size. These types of computer networks work at certain speeds, also known as broadband. The Internet network connects computers worldwide. Internet network: access to the network allows users to use many resources. Over time the Internet network will replace books, enabling users to discover information almost instantly and apply concepts to different situations. The Internet can be used for recreational, governmental and educational purposes; businesses in particular use the Internet network for research or to service customers and clients. Telephone network: the network connects people to one another. This network can be used in a variety of ways; many businesses use the telephone network to route calls and/or service their customers.
Some businesses use a telephone network on a greater scale through a private branch exchange: a system where a specific business focuses on routing and servicing calls for another business. The majority of the time, however, the telephone network is used around the world for recreational purposes. In general, every telecommunications network conceptually consists of three parts, or planes: the data plane carries the network's users' traffic, the actual payload; the control plane carries control information; and the management plane carries the operations and administration traffic required for network management. The management plane is sometimes considered a part of the control plane. The data network is used throughout the world to connect individuals.

6.
Packet switching
–
Packet switching increases network efficiency and robustness, and enables technological convergence of many applications operating on the same network. Packets are composed of a header and payload; information in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by application software. This concept contrasted and contradicted the then-established principles of pre-allocation of network bandwidth, and the new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in the late 1960s. Packet-mode communication may be implemented with or without intermediate forwarding nodes; in the case of a shared physical medium, the packets may be delivered according to a multiple access scheme. In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment radar defense system. Planners sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first-strike advantage to enemies. Baran's report P-2626 described a general architecture for a large-scale, distributed, survivable communications network. Baran's work was known to Robert Taylor and J. C. R. Licklider at the Information Processing Technology Office, who advocated wide area networks. Starting in 1965, Donald Davies at the National Physical Laboratory, UK, independently developed the same message routing methodology as developed by Baran. He called it packet switching, a more accessible name than Baran's, and he gave a talk on the proposal in 1966, after which a person from the Ministry of Defence told him about Baran's work. A member of Davies' team met Lawrence Roberts at the 1967 ACM Symposium on Operating System Principles. Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits.
In 1966, Davies proposed that a network should be built at the laboratory to serve the needs of NPL; the NPL Data Communications Network entered service in 1970. In 1974, Vint Cerf and Bob Kahn published the specifications for the Transmission Control Protocol. Packet switching may be classified into connectionless packet switching, also known as datagram switching, and connection-oriented packet switching, also known as virtual circuit switching. Examples of connectionless protocols are Ethernet, the Internet Protocol, and the User Datagram Protocol; connection-oriented protocols include X.25, Frame Relay, Multiprotocol Label Switching, and the Transmission Control Protocol. In connectionless mode each packet includes complete addressing information, and the packets are routed individually, sometimes resulting in different paths and out-of-order delivery. Each packet is labeled with a destination address and a source address. It may also be labeled with the sequence number of the packet. At the destination, the original message/data is reassembled in the correct order. Connection-oriented transmission requires a setup phase in each involved node before any packet is transferred, to establish the parameters of communication. The packets include a connection identifier rather than full address information; connection parameters are negotiated between the endpoints so that packets are delivered in order and with error checking. The signaling protocols used allow the application to specify its requirements and discover link parameters. Acceptable values for service parameters may be negotiated. Routing a packet requires the node to look up the connection identifier in a table.
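Connectionless delivery means packets can arrive out of order, so the receiver must reassemble them by sequence number. A minimal sketch, with an invented packet representation of (sequence number, payload) pairs:

```python
# Connectionless delivery: each packet carries full addressing plus a
# sequence number; the receiver reorders out-of-order arrivals.
def reassemble(packets):
    """packets: list of (seq, payload) tuples, possibly out of order."""
    return b"".join(payload for _, payload in sorted(packets))

# Packets arrive over different paths, out of order.
arrived = [(2, b"lo "), (0, b"he"), (3, b"net"), (1, b"l")]
message = reassemble(arrived)   # original byte stream restored
```

A real transport such as TCP additionally handles duplicate and missing segments (retransmission, acknowledgements); sorting by sequence number is only the reordering step.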

7.
Television
–
Television or TV is a telecommunication medium used for transmitting moving images, in monochrome or in color, in two or three dimensions, and with sound. The term can refer to a television set or a television program. Television is a medium for entertainment, education, news, politics and gossip. Television became available in experimental forms in the late 1920s. After World War II, an improved form of black-and-white TV broadcasting became popular in the United States and Britain, and television sets became commonplace in homes and businesses. During the 1950s, television was the primary medium for influencing public opinion. In the mid-1960s, color broadcasting was introduced in the US. For many reasons, the storage of television and video programming now also occurs on the cloud. At the end of the first decade of the 2000s, digital television transmissions greatly increased in popularity. Another development was the move from standard-definition television to high-definition television, which provides a resolution that is substantially higher; HDTV may be transmitted in various formats, including 1080p and 1080i. In 2013, 79% of the world's households owned a television set. Most TV sets sold in the 2000s were flat-panel, mainly LEDs; major manufacturers announced the discontinuation of CRT, DLP, plasma, and even fluorescent-backlit LCDs by the mid-2010s. In the near future, LEDs are expected to be gradually replaced by OLEDs. Also, major manufacturers announced in the mid-2010s that they would increasingly produce smart TVs. Smart TVs with integrated Internet and Web 2.0 functions became the dominant form of television by the late 2010s. Television signals were initially distributed only as terrestrial television, using high-powered radio-frequency transmitters to broadcast the signal to individual television receivers. Alternatively, television signals are distributed by cable or optical fiber and satellite systems.
Until the early 2000s, these were transmitted as analog signals. A standard television set is composed of multiple internal electronic circuits, including a tuner for receiving and decoding broadcast signals; a visual display device which lacks a tuner is correctly called a video monitor rather than a television. The word television comes from Ancient Greek τῆλε, meaning "far", and Latin visio, meaning "sight". The Anglicised version of the term is first attested in 1907; it was either formed in English or borrowed from French télévision. In the 19th century and early 20th century, other proposals for the name of a technology for sending pictures over distance included telephote. The abbreviation TV is from 1948; the use of the term to mean a television set dates from 1941.

8.
Radio
–
When radio waves strike an electrical conductor, the oscillating fields induce an alternating current in the conductor; the information in the waves can be extracted and transformed back into its original form. Radio systems need a transmitter to modulate some property of the energy produced to impress a signal on it, for example using amplitude modulation or angle modulation. Radio systems also need an antenna to convert electric currents into radio waves; an antenna can be used for both transmitting and receiving. The electrical resonance of tuned circuits in radios allows individual stations to be selected; the electromagnetic wave is intercepted by a tuned receiving antenna. Radio frequencies occupy the range from 3 kHz to 300 GHz, and a radio communication system sends signals by radio. The term "radio" is derived from the Latin word radius, meaning spoke of a wheel or beam of light; however, the term would not be widely adopted at first. The switch to "radio" in place of "wireless" took place slowly and unevenly in the English-speaking world, and the United States Navy played a role in this shift. Although its translation of the 1906 Berlin Convention used the terms "wireless telegraph" and "wireless telegram", the term "radio" started to become preferred by the general public in the 1920s with the introduction of broadcasting. Radio systems used for communication have the following elements. With more than 100 years of development, each process is implemented by a wide range of methods, specialised for different communications purposes. Each system contains a transmitter, which consists of a source of electrical energy; the transmitter contains a system to modulate some property of the energy produced to impress a signal on it. This modulation might be as simple as turning the energy on and off, or altering more subtle properties such as amplitude, frequency or phase. Amplitude modulation of a carrier wave works by varying the strength of the transmitted signal in proportion to the information being sent.
For example, changes in signal strength can be used to reflect the sounds to be reproduced by a speaker. Amplitude modulation was the method used for the first audio radio transmissions. Frequency modulation varies the frequency of the carrier: the instantaneous frequency of the carrier is directly proportional to the instantaneous value of the input signal. FM has the "capture effect", whereby a receiver only receives the strongest signal. Digital data can be sent by shifting the carrier's frequency among a set of discrete values, a technique known as frequency-shift keying. FM is commonly used at very high frequency (VHF) radio frequencies for high-fidelity broadcasts of music; analog TV sound is also broadcast using FM. Angle modulation alters the phase of the carrier wave to transmit a signal.
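The amplitude-modulation relationship described above can be written directly: the transmitted amplitude is the carrier scaled by the message. A minimal numeric sketch, with an assumed carrier frequency and a 50% modulation depth chosen purely for illustration:

```python
import math

# Toy amplitude modulation: the carrier's strength varies in proportion
# to the message, so a peak in the message raises the signal envelope.
def am_sample(t, f_carrier, message):
    m = message(t)                    # message value, assumed in [-1, 1]
    # (1 + depth * m) is the envelope; 0.5 is an illustrative depth
    return (1.0 + 0.5 * m) * math.cos(2 * math.pi * f_carrier * t)

# Sample at t = 0 (carrier phase cos(0) = 1) so only the envelope shows.
peak = am_sample(0.0, 1000.0, lambda t: 1.0)     # message at its maximum
trough = am_sample(0.0, 1000.0, lambda t: -1.0)  # message at its minimum
# peak is 1.5 and trough is 0.5: the envelope tracks the message
```

Frequency modulation, by contrast, would leave the amplitude fixed and add the message into the cosine's argument, which is why FM receivers can ignore amplitude noise.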

9.
Computer hardware
–
Computer hardware is the collection of physical components that constitute a computer system. By contrast, software consists of instructions that can be stored and run by hardware; hardware is directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system. The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus; this is referred to as the Von Neumann bottleneck and often limits the performance of the system. For the third consecutive year, U.S. business-to-business channel sales increased. The impressive growth was the fastest sales increase since the end of the recession; sales growth accelerated in the second half of the year, peaking in the fourth quarter with a 6.9 percent increase over the fourth quarter of 2012. There are a number of different types of computer system in use today. The personal computer, also known as the PC, is one of the most common types of computer due to its versatility; laptops are generally very similar, although they may use lower-power or reduced-size components, and thus offer lower performance. The computer case is a plastic or metal enclosure that houses most of the components; a case can be either big or small, but the form factor of the motherboard for which it is designed matters more. A power supply unit converts alternating current electric power to low-voltage DC power for the components of the computer; laptops are capable of running from a battery, normally for a period of hours. The motherboard is the main component of a computer. The CPU is usually cooled by a heatsink and fan, or a water-cooling system; most newer CPUs include an on-die graphics processing unit.
The clock speed of a CPU governs how fast it executes instructions; many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the northbridge, mediates communication between the CPU and the other components of the system, including main memory. Random-access memory (RAM) stores the code and data that are being actively accessed by the CPU; for example, when a web browser is opened on the computer it takes up memory. RAM usually comes on DIMMs in sizes of 2 GB, 4 GB, and 8 GB, but can be much larger. Read-only memory (ROM) stores the BIOS that runs when the computer is powered on or otherwise begins execution; the BIOS includes boot firmware and power management firmware.

10.
Router (computing)
–
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic-directing functions on the Internet; a data packet is typically forwarded from one router to another router through the networks that constitute the internetwork until it reaches its destination node. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet header to determine its ultimate destination; then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of routers are home and small office routers that simply pass IP packets between the home computers and the Internet; an example of such a router is the cable or DSL router. Though routers are typically dedicated hardware devices, software-based routers also exist. When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router may have interfaces for different physical types of connections, such as copper cables or fibre optic. Its firmware can also support different networking communications protocol standards; each network interface is used by this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. The network prefixes recorded in the routing table do not necessarily map directly to the physical interface connections. A router routes between subnets using internal pre-configured directives, called static routes; static and dynamic routes are stored in the Routing Information Base (RIB).
The control-plane logic then strips non-essential directives from the RIB and builds a Forwarding Information Base (FIB) to be used by the forwarding plane. In the forwarding plane, the router forwards data packets between incoming and outgoing interface connections, routing each packet to the correct network using information contained in the packet header together with data recorded by the routing control plane. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' networks; the largest routers interconnect the various ISPs or are used in large enterprise networks, while smaller routers usually provide connectivity for typical home and office networks. Other networking solutions may be provided by a backbone Wireless Distribution System (WDS), which avoids the cost of introducing networking cables into buildings.
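The table lookup described above selects the most specific prefix that matches a packet's destination, falling back to a default route when nothing else matches. The following is a minimal sketch of such a longest-prefix-match lookup over a forwarding table; the prefixes and next-hop names are invented for illustration, not taken from any real configuration.

```python
import ipaddress

# Illustrative forwarding information base: prefix -> next hop.
FIB = {
    ipaddress.ip_network("0.0.0.0/0"): "isp-uplink",    # default route
    ipaddress.ip_network("10.0.0.0/8"): "core-switch",
    ipaddress.ip_network("10.1.2.0/24"): "branch-link",
}

def next_hop(dst: str) -> str:
    """Return the next hop for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return FIB[best]
```

Note that a packet to 10.1.2.7 matches all three prefixes, but the /24 route is chosen because it is the most specific; real routers implement this lookup in specialized data structures such as tries rather than a linear scan.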

11.
Software router
–
A software router performs the packet-forwarding functions of a router in software running on general-purpose hardware, rather than in a dedicated hardware device. As noted in the router entry above, software-based routers exist alongside hardware routers and carry out the same core functions: consulting a routing table or routing policy to direct each incoming packet toward the next network on its journey, and exchanging destination information with other routers via dynamic routing protocols.

12.
Network switch
–
A network switch is a computer networking device that connects devices on a computer network, using packet switching to receive, process, and forward data to the destination device. Unlike less advanced network hubs, a network switch forwards data only to the one or more devices that need to receive it. A network switch is a multiport network bridge that uses hardware addresses to process and forward data at the data link layer of the OSI model. Switches for Ethernet are the most common form, and the first Ethernet switch was introduced by Kalpana in 1990; switches also exist for other types of networks, including Fibre Channel, Asynchronous Transfer Mode, and InfiniBand. A switch is a device in a network that electrically and logically connects together other devices. Multiple data cables are plugged into a switch to enable communication between different networked devices; switches manage the flow of data across a network by transmitting a received network packet only to the one or more devices for which the packet is intended. Each networked device connected to a switch can be identified by its network address, and this maximizes the security and efficiency of the network. Because broadcasts are still forwarded to all connected devices, the newly formed network segment continues to be a broadcast domain. An Ethernet switch operates at the data link layer of the OSI model to create a separate collision domain for each switch port. In full-duplex mode, each switch port can simultaneously transmit and receive. With a repeater hub, by contrast, only a single transmission could take place at a time for all ports combined, so they would all share the bandwidth and run in half duplex, and the necessary arbitration would result in collisions, requiring retransmissions. The network switch plays an important role in most modern Ethernet local area networks.
Mid-to-large sized LANs contain a number of linked managed switches. In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology; user devices may also include a telephone interface for Voice over IP (VoIP). Segmentation involves the use of a bridge or a switch to split a larger collision domain into smaller ones in order to reduce collision probability; in the extreme case, each device is located on a dedicated switch port. In contrast to an Ethernet hub, there is a separate collision domain on each of the switch ports, and this allows computers to have dedicated bandwidth on point-to-point connections to the network and to run in full duplex without collisions; full-duplex mode has one transmitter and one receiver per collision domain. Switches may operate at one or more layers of the OSI model, including the data link layer; a device that operates simultaneously at more than one of these layers is known as a multilayer switch. In some switches, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, RapidIO, and ATM. This connectivity can be at any of the layers mentioned; while layer-2 functionality is adequate for bandwidth-shifting within one technology, interconnecting technologies such as Ethernet and token ring is performed more easily at layer 3 or via routing.
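The forwarding behaviour described above, sending a frame only to the device that needs it, rests on the switch learning which port each hardware address lives behind. A minimal sketch of that MAC-learning logic, with a simplified frame model invented for illustration:

```python
# A toy learning switch: it records the port each source MAC address was
# seen on, forwards a frame only to the learned port for its destination,
# and floods to all other ports when the destination is unknown.
class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}  # MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port              # learn source location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]           # forward to one port
        return [p for p in self.ports if p != in_port] # flood when unknown
```

The first frame toward an unknown destination is flooded out every port except the one it arrived on; once the destination has sent any frame of its own, subsequent traffic toward it goes out a single port.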

13.
Firewall (computing)
–
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier between a trusted, secure internal network and another outside network, such as the Internet. Firewalls are often categorized as either network firewalls or host-based firewalls. Network firewalls filter traffic between two or more networks; they are either software appliances running on general-purpose hardware or hardware-based firewall computer appliances. Host-based firewalls provide a layer of software on one host that controls network traffic in and out of that single machine. Firewall appliances may also offer other functionality to the internal network they protect, such as acting as a DHCP or VPN server for that network. The term firewall originally referred to a wall intended to confine a fire or potential fire within a building; later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. Firewall technology emerged in the late 1980s, when the Internet was a new technology in terms of its global use. The Morris Worm, which spread itself through multiple vulnerabilities in the machines of the time, hit institutions including Berkeley, UC San Diego, Lawrence Livermore, and Stanford; although it was not malicious in intent, it was the first large attack on Internet security. The first type of firewall was the packet filter, which looks at the network addresses and ports of the packet. The first paper published on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what is now a highly involved technical internet security feature. Packet filters act by inspecting the packets which are transferred between computers on the Internet.
If a packet does not match the packet filter's set of filtering rules, it is dropped or rejected; conversely, if the packet matches one or more of the programmed filters, it is allowed to pass. This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic; instead, it filters each packet based only on information contained in the packet itself. As the packet passes through the firewall, the filter judges it on a protocol/port-number basis. For example, if a rule in the firewall exists to block telnet access, the firewall will block packets destined for TCP port 23, the port on which a telnet server listens. From 1989 to 1990, three colleagues from AT&T Bell Laboratories (Dave Presotto, Janardan Sharma, and Kshitij Nigam) developed the second generation of firewalls, calling them circuit-level gateways. Second-generation firewalls perform the work of their predecessors but operate up to layer 4 of the OSI model. This is achieved by retaining packets until enough information is available to make a judgement about the state of the connection; though static rules are still used, these rules can now contain connection state as one of their test criteria. Certain denial-of-service attacks bombard the firewall with thousands of fake connection packets in an attempt to overwhelm it by filling its connection-state memory. Marcus Ranum, Wei Xu, and Peter Churchyard developed an application firewall known as the Firewall Toolkit (FWTK); in June 1994, Wei Xu extended the FWTK with the enhancement of an IP filter.
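The first-generation, stateless filtering described above can be sketched as an ordered rule list that judges each packet in isolation. The rule fields and actions below are simplified assumptions, not the syntax of any real firewall:

```python
# A toy stateless packet filter: each packet is matched against an
# ordered rule list; the first matching rule decides, and unmatched
# packets fall through to a default action.
RULES = [
    {"proto": "tcp", "dst_port": 23, "action": "drop"},   # block telnet
    {"proto": "tcp", "dst_port": 80, "action": "allow"},  # allow HTTP
]
DEFAULT_ACTION = "drop"

def filter_packet(packet: dict) -> str:
    """Return 'allow' or 'drop' based only on this packet's own fields."""
    for rule in RULES:
        if (packet["proto"] == rule["proto"]
                and packet["dst_port"] == rule["dst_port"]):
            return rule["action"]
    return DEFAULT_ACTION
```

Because the filter consults nothing but the packet itself, it cannot tell whether a packet belongs to an established connection; that is exactly the limitation the second-generation, stateful firewalls addressed.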

14.
Session border controller
–
Early deployments of SBCs were focused on the borders between two service-provider networks in a peering environment. This role has now expanded to include significant deployments between a service provider's access network and a backbone network to provide service to residential and/or enterprise customers. The term session refers to a communication between two parties; in the context of telephony, this would be a call, comprising the call signaling and one or more media streams, and together these streams make up a session. It is the job of a session border controller to exert influence over the flows of sessions. The term border refers to a point of demarcation between one part of a network and another. As a simple example, at the edge of a corporate network, a firewall demarcates the local network from the rest of the Internet. A more complex example is that of a corporation where different departments have different security needs for each location; in this case, filtering routers or other network elements are used to control the flow of data streams. It is the job of a session border controller to assist policy administrators in managing the flow of session data across these borders. The term controller refers to the influence that session border controllers have on the data streams that comprise sessions. With the advent of WebRTC, some SBCs have also assumed the role of a SIP-to-WebRTC gateway; in such a case, the SBC acts as a gateway between WebRTC applications and SIP endpoints. In technical terms, when used within the SIP protocol, the SBC is defined as being a user agent. The effect of this behavior is that not only the signaling traffic but also the media traffic can pass through the SBC. Conversely, without an SBC, the media traffic travels directly between the VoIP phones, without the in-network call signaling elements having control over its path. In other cases, the SBC simply modifies the stream of call control data involved in each call, perhaps limiting the kinds of calls that can be conducted or changing the codec choices.
In order to show how an SBC works, one can compare a simple call-establishment sequence with a call-establishment sequence involving an SBC. In the simplest session-establishment sequence, with only one proxy between the user agents, the proxy's task is to identify the callee's location and forward the request to it. The proxy also adds a Via header with its own address to indicate the path that the response should traverse. The proxy does not change any dialog-identification information present in the message, such as the tag in the From header, the Call-ID, or the CSeq, and proxies also do not alter any information in the SIP message bodies. Note that during the session-initiation phase, the user agents exchange SIP messages with SDP bodies that include the addresses at which the agents expect to receive the media traffic.
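The proxy behaviour just described, prepending a Via header while leaving dialog-identifying fields and the body untouched, can be sketched as follows. The dictionary representation of a SIP message and the address values are assumptions made for illustration; real SIP messages are text with many more headers.

```python
# A toy model of a SIP proxy forwarding a request: it prepends a Via
# header carrying its own address but does not touch the From tag,
# Call-ID, CSeq, or the SDP body.
def proxy_forward(request: dict, proxy_addr: str) -> dict:
    forwarded = dict(request)  # shallow copy; original message unchanged
    forwarded["Via"] = [f"SIP/2.0/UDP {proxy_addr}"] + request["Via"]
    return forwarded
```

The response later walks back along the accumulated Via list in reverse, which is how each proxy on the path gets to see the reply without changing the dialog itself.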

15.
Queue (abstract data type)
–
This makes the queue a first-in-first-out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed; equivalently, once a new element is added, all elements that were added before it must be removed before the new element can be removed. Often a peek or front operation is also provided, returning the value of the front element without dequeuing it. A queue is an example of a linear data structure, or more abstractly a sequential collection. Queues provide services in computer science, transport, and operations research, where various entities such as data, objects, persons, or events are stored; in these contexts, the queue performs the function of a buffer. Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure, or in object-oriented languages as classes. Common implementations are circular buffers and linked lists. Theoretically, one characteristic of a queue is that it does not have a specific capacity: regardless of how many elements are already contained, a new element can always be added. A queue can also be empty, at which point removing an element is impossible until a new element has been added again. Fixed-length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the queue: the simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n will turn the array into a circle. The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement, or come with libraries for, dynamic lists; such data structures may have no fixed capacity limit besides memory constraints.
Queue overflow results from trying to add an element to a full queue, and queue underflow happens when trying to remove an element from an empty queue; a bounded queue is a queue limited to a fixed number of items. There are several efficient implementations of FIFO queues; an efficient implementation is one that can perform the operations of enqueuing and dequeuing in O(1) time. A doubly linked list has O(1) insertion and deletion at both ends, so it is a natural choice for queues. A regular singly linked list has efficient insertion and deletion at only one end; however, a small modification, keeping a pointer to the last node in addition to the first one, enables it to implement an efficient queue. A deque can also be implemented using a dynamic array. Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue (deque). C++'s Standard Template Library provides a queue templated class which is restricted to only push/pop operations; since J2SE 5.0, Java's library contains a Queue interface that specifies queue operations, with implementing classes including LinkedList and ArrayDeque.
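The circular-array trick described above, computing head and tail indices modulo the array size, can be sketched as a bounded FIFO queue. The class name and error choices are illustrative:

```python
# A bounded FIFO queue over a fixed-size array using modular index
# arithmetic, so elements are never copied toward the head.
class CircularQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0    # index of the front element
        self.count = 0   # number of stored elements

    def enqueue(self, item):
        if self.count == len(self.buf):
            raise OverflowError("queue overflow")
        self.buf[(self.head + self.count) % len(self.buf)] = item
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue underflow")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

    def peek(self):
        if self.count == 0:
            raise IndexError("queue underflow")
        return self.buf[self.head]
```

Both enqueue and dequeue run in O(1) time; the head index simply drifts around the circle as elements are removed.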

16.
Parallel computing
–
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture; specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance; a theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law. Traditionally, computer software has been written for serial computation: to solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer; only one instruction may execute at a time, and after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, or specialized hardware. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction.
Maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, the power consumption P of a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is the voltage, and F is the processor frequency, so increases in frequency increase the amount of power used in a processor. Moore's law is the observation that the number of transistors in a microprocessor doubles every 18 to 24 months, and it has continued to hold despite power consumption issues and repeated predictions of its end. With the end of frequency scaling, these additional transistors can be used to add extra hardware for parallel computing. Optimally, the speedup from parallelization would be linear: doubling the number of processing elements should halve the runtime. However, very few parallel algorithms achieve optimal speedup.
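The two relationships above can be written directly as functions. With a parallelizable fraction p of a program and n processing elements, Amdahl's law bounds the speedup at 1 / ((1 − p) + p/n); the dynamic power equation is transcribed as given in the text. A small sketch:

```python
# Amdahl's law: the serial fraction (1 - p) limits overall speedup no
# matter how many processing elements n are added.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Dynamic power P = C * V^2 * F from the text: capacitance switched per
# cycle, voltage squared, and clock frequency.
def dynamic_power(c: float, v: float, f: float) -> float:
    return c * v ** 2 * f
```

Even with half of a program parallelizable, the speedup can never reach 2 regardless of core count, which is why the serial fraction dominates the analysis.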

17.
Microcode
–
Microcode is a technique that imposes an interpreter between the hardware and the architectural level of a computer. As such, the microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal state-machine sequencing in many digital processing elements. Microcode typically resides in special high-speed memory and translates machine instructions into sequences of detailed circuit-level operations; it separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming, and the microcode in a particular processor implementation is sometimes called a microprogram. Some hardware vendors, especially IBM, use the term microcode as a synonym for firmware. When compared to normal application programs, the elements composing a microprogram exist on a lower conceptual level; to avoid confusion, each microprogram-related element is differentiated by the micro prefix: microinstruction, microassembler, microprogrammer, microarchitecture. Engineers normally write the microcode during the design phase of a processor, storing it in a read-only memory or programmable logic array structure. However, machines also exist that have some or all microcode stored in SRAM or flash memory; this is traditionally denoted as writeable control store in the context of computers, which can be either read-only or read-write memory. Complex digital processors may also use more than one control unit in order to delegate sub-tasks that must be performed essentially asynchronously in parallel. A high-level programmer, or even an assembly programmer, does not normally see or change microcode. Microprograms consist of series of microinstructions, which control the CPU at a very fundamental level of hardware circuitry; microinstructions are often wide, e.g. 128 bits on a 360/85 with an emulator feature. Microcode was originally developed as a simpler method of developing the control logic for a computer. Initially, CPU instruction sets were hardwired: each step needed to fetch, decode, and execute the machine instructions was controlled directly by combinational logic and rather minimal sequential state-machine circuitry. Microcode simplified the job by allowing much of the processor's behaviour to be defined via microprogram routines rather than dedicated circuitry. Even late in the design process, microcode could easily be changed, whereas hard-wired CPU designs were very cumbersome to change; thus, this greatly facilitated CPU design. The IBM Future Systems project and the Data General Fountainhead Processor are examples of this. During the 1970s, CPU speeds grew more quickly than memory speeds; numerous techniques such as memory block transfer and memory pre-fetch were used to alleviate this, and high-level machine instructions, made possible by microcode, helped further, as fewer, more complex machine instructions require less memory bandwidth. For example, an operation on a string can be done as a single machine instruction. Architectures with instruction sets implemented by complex microprograms included the IBM System/360; the approach of increasingly complex microcode-implemented instruction sets was later called CISC. An alternate approach, used in many microprocessors, is to use PLAs or ROMs mainly for instruction decoding.
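The idea of a control store that expands each machine instruction into a sequence of micro-operations can be illustrated with a toy model. The opcodes and micro-op names below are entirely invented; a real control store holds wide binary microinstructions, not strings.

```python
# Toy microcoded control unit: a control store maps each machine-level
# opcode to the sequence of micro-operations that implements it.
CONTROL_STORE = {
    "LOAD":  ["fetch_addr", "read_mem", "write_reg"],
    "ADD":   ["read_reg_a", "read_reg_b", "alu_add", "write_reg"],
    "STORE": ["fetch_addr", "read_reg", "write_mem"],
}

def execute(program):
    """Expand a machine-instruction stream into its micro-operation trace."""
    trace = []
    for opcode in program:
        trace.extend(CONTROL_STORE[opcode])
    return trace
```

Changing what an instruction does then amounts to editing its entry in the control store, which is exactly why microcoded designs were easier to revise late in development than hard-wired control logic.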

18.
Multi-core processor
–
A multi-core processor is a single computing component with two or more independent processing units, called cores, which read and execute program instructions. The instructions are ordinary CPU instructions, but the multiple cores can run multiple instructions at the same time. Manufacturers typically integrate the cores onto a single integrated circuit die, or onto multiple dies in a single chip package. A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely: for example, cores may or may not share caches, and they may implement message-passing or shared-memory inter-core communication methods. Common network topologies to interconnect cores include bus, ring, and two-dimensional mesh. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW or superscalar. Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing, and graphics. The improvement in performance gained by the use of a multi-core processor depends very much on the algorithms used. In particular, possible gains are limited by the fraction of the software that can run in parallel simultaneously on multiple cores; most applications, however, are not accelerated as much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem. The parallelization of software is a significant ongoing topic of research. The terms multi-core and dual-core most commonly refer to some sort of central processing unit, but are sometimes also applied to digital signal processors and systems on a chip. This article uses the terms multi-core and dual-core for CPUs manufactured on the same integrated circuit.
In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units. The terms many-core and massively multi-core are sometimes used to describe multi-core architectures with a high number of cores. Some systems use many soft microprocessor cores placed on a single FPGA; each core can be considered a semiconductor intellectual property core as well as a CPU core. While manufacturing technology improves, reducing the size of individual gates, physical limits remain, and these physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space and the demand for increased TLP led to the development of multi-core CPUs. Several business motives drive the development of multi-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit, which reduced the cost per device on the IC; alternatively, for the same circuit area, more transistors could be used in the design.
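Thread-level parallelism, as discussed above, means breaking a problem into independent parts that can execute concurrently. A minimal sketch using Python's standard thread pool; threads are used here purely for simplicity of demonstration, since truly CPU-bound Python work would typically use processes instead to occupy multiple cores.

```python
# TLP sketch: split a summation into independent chunks and map them
# across a pool of workers, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

Because the chunks share no state, no synchronization is needed beyond collecting the results, which is the easy case; the text's point is that most real applications are not so cleanly divisible.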

19.
OSI model
–
Its goal is the interoperability of diverse communication systems with standard protocols. The model partitions a communication system into abstraction layers; the original version of the model defined seven layers. A layer serves the layer above it and is served by the layer below it, and two instances at the same layer are visualized as connected by a horizontal connection in that layer. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO); ISO and the CCITT each developed a document that defined similar networking models. In 1983, these two documents were merged to form a standard called The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, the OSI Reference Model, or simply the OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT as standard X.200. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Services. Various aspects of OSI design evolved from experiences with the ARPANET, NPLNET, EIN, and CYCLADES networks; the new design was documented in ISO 7498 and its various addenda. In this model, a communication system was divided into layers. Within each layer, one or more entities implement its functionality; each entity interacts directly only with the layer immediately beneath it and provides facilities for use by the layer above it. Protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions abstractly described the functionality provided to an (N)-layer by an (N-1)-layer. The OSI standards documents are available from the ITU-T as the X.200 series of recommendations, and some of the specifications were also available as part of the ITU-T X series.
The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO. The recommendation X.200 describes seven layers, labelled 1 to 7; layer 1 is the lowest layer in this model. At each level N, two entities at the communicating devices exchange protocol data units (PDUs) by means of a layer N protocol; each PDU contains a payload called the service data unit (SDU). Data processing by two communicating OSI-compatible devices proceeds as follows: the data to be transmitted is composed at the topmost layer of the transmitting device into a protocol data unit. The PDU is passed to layer N-1, where it is known as the service data unit; at layer N-1 the SDU is concatenated with a header, a footer, or both, producing a layer N-1 PDU.
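The encapsulation step just described can be sketched directly: at each layer, the PDU from above becomes this layer's SDU and is wrapped with a header to form a new PDU. The bracketed string headers and layer names below are illustrative stand-ins for real binary headers.

```python
# Toy OSI-style encapsulation: walking down the stack wraps the payload
# in one header per layer; walking back up strips them in reverse order.
def encapsulate(payload: str, layers) -> str:
    pdu = payload
    for name in layers:              # from the top layer down
        pdu = f"[{name}-hdr]{pdu}"   # this layer's SDU + header -> new PDU
    return pdu

def decapsulate(pdu: str, layers) -> str:
    for name in reversed(layers):    # from the bottom layer up
        prefix = f"[{name}-hdr]"
        assert pdu.startswith(prefix), "malformed PDU"
        pdu = pdu[len(prefix):]
    return pdu
```

The receiving device performs the mirror-image process, each layer removing its peer's header before handing the remaining SDU upward.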

20.
Call stack
–
In computer science, a call stack is a stack data structure that stores information about the active subroutines of a computer program. This kind of stack is also known as an execution stack, program stack, control stack, run-time stack, or machine stack. Although maintenance of the call stack is important for the proper functioning of most software, the details are normally hidden and automatic in high-level programming languages. Many computer instruction sets provide special instructions for manipulating stacks. A call stack is used for several related purposes, but the main reason for having one is to keep track of the point to which each active subroutine should return control when it finishes executing. An active subroutine is one that has been called but has yet to complete execution, after which control should be handed back to the point of call. Such activations of subroutines may be nested to any level, hence the stack structure. If, for example, a subroutine DrawSquare calls a subroutine DrawLine from four different places, DrawLine must know where to return when it finishes; to accomplish this, the address following the call instruction, the return address, is pushed onto the call stack with each call. If a called subroutine calls on yet another subroutine, it will push another return address onto the call stack. If the pushing consumes all of the space allocated for the call stack, an error called a stack overflow occurs, generally causing the program to crash. Adding a subroutine's entry to the call stack is sometimes called winding; conversely, removing entries is called unwinding. There is usually exactly one call stack associated with a running program. In high-level programming languages, the specifics of the call stack are usually hidden from the programmer; they are given access only to a set of functions. This is an example of abstraction. Most assembly languages, on the other hand, require programmers to be involved with manipulating the stack. The actual details of the stack in a programming language depend upon the compiler and operating system. As noted above, the primary purpose of a call stack is to store the return addresses.
When a subroutine is called, the location of the instruction at which the caller can later resume needs to be saved somewhere. Using a stack to save the return address has important advantages over alternative calling conventions. One is that each task can have its own stack, and thus the subroutine can be reentrant. Another benefit is that recursion is automatically supported: when a function calls itself recursively, a return address needs to be stored for each activation of the function so that it can later be used to return from that activation, and stack structures provide this capability automatically. A subroutine frequently also needs space for its local variables; it is often convenient to allocate that space by simply moving the top of the stack enough to provide it. This is very fast when compared to dynamic memory allocation, which uses the heap space. Note that each separate activation of a subroutine gets its own separate space in the stack for locals.
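The return-address bookkeeping described above can be modelled with an explicit stack of frames. The frame fields and the DrawSquare/DrawLine labels follow the example in the text; the representation itself is an illustrative assumption, since real stacks hold raw addresses and saved registers.

```python
# Toy call stack: each call pushes a frame recording where control should
# resume; each return pops the most recent frame (LIFO order), which is
# what makes nested and recursive activations unwind correctly.
call_stack = []

def call(func_name, return_to):
    call_stack.append({"function": func_name, "return_to": return_to})

def ret():
    frame = call_stack.pop()   # the most recent activation finishes first
    return frame["return_to"]
```

Simulating DrawSquare calling DrawLine shows the LIFO discipline: DrawLine's return address is popped first, then DrawSquare's, and the stack is empty once both have returned.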

21.
Encryption
–
In cryptography, encryption is the process of encoding a message or information in such a way that only authorized parties can access it. Encryption does not of itself prevent interference, but denies the intelligible content to a would-be interceptor. In an encryption scheme, the intended information or message, referred to as plaintext, is encrypted using an encryption algorithm, generating ciphertext that can be read only if decrypted. For technical reasons, an encryption scheme usually uses an encryption key generated by an algorithm. It is, in principle, possible to decrypt the message without possessing the key, but, for a well-designed encryption scheme, considerable computational resources and skill are required; an authorized recipient can easily decrypt the message with the key provided by the originator. In symmetric-key schemes, the encryption and decryption keys are the same, so the communicating parties must have the same key before they can achieve secure communication. In public-key encryption schemes, the encryption key is published for anyone to use; however, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a document in 1973. Encryption has long been used by militaries and governments to facilitate secret communication, and it is now commonly used in protecting information within many kinds of civilian systems. Encryption can be used to protect data at rest, such as information stored on computers. In recent years, there have been reports of confidential data, such as customers' personal records, being exposed through loss or theft. Encrypting such files at rest helps protect them should physical security measures fail. In response to the encryption of data at rest, cyber-adversaries have developed new types of attacks. There have also been reports of data in transit being intercepted in recent years.
Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security can be a challenging problem: a single error in design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption; see, e.g., traffic analysis, TEMPEST, or Trojan horse.
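The symmetric-key idea above, where one shared key both encrypts and decrypts, can be sketched with a deliberately toy XOR cipher. This is NOT secure encryption; it only illustrates that it is the key, not secrecy of the algorithm, that denies the plaintext to an interceptor:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key; applying the same key twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

plaintext = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)   # unintelligible without the key
recovered = xor_cipher(ciphertext, key)   # the same key decrypts
assert recovered == plaintext
```

Real symmetric schemes such as AES follow the same encrypt/decrypt-with-one-key pattern, but are designed so that recovering the plaintext without the key requires infeasible computation.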

22.
Active networking
–
Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network. An active network architecture is composed of execution environments, a network operating system capable of supporting one or more execution environments, and hardware capable of routing or switching as well as executing code within active packets. Network processors are one means of implementing active networking concepts; active networks have also been implemented as overlay networks. Active networking allows the possibility of highly tailored and rapid changes to the underlying network operation. This enables such ideas as sending code along with packets of information, allowing the data to change its form to match the channel characteristics; the smallest program that can generate a sequence of data can be found in the definition of Kolmogorov complexity. The use of genetic algorithms within the network to compose network services is also enabled by active networking. Active networking relates to other networking paradigms primarily on the basis of where computation occurs: active networking places computation within packets traveling through the network, whereas software-defined networking decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination. Active network research addresses the nature of how best to incorporate extremely dynamic capability within networks. To do this, active network research must address the problem of optimally allocating computation versus communication within communication networks. A similar problem related to the compression of code as a measure of complexity is addressed via algorithmic information theory. One of the challenges of active networking has been the inability of information theory to mathematically model the active network paradigm and enable active network engineering.
This is due to the nature of the network, in which communication packets contain code that dynamically changes the operation of the network. Fundamental advances in theory are required in order to understand such networks as the limit in reduction of component size is reached with current technology; more on this can be found in the literature on nanoscale networking.
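As a concrete (and entirely hypothetical) sketch of the active-packet idea, the classes below model a packet that carries a small program which each node executes before forwarding, letting the data change its form to match link characteristics. None of the names here come from a real protocol:

```python
class ActivePacket:
    """A packet that carries code (a function) along with its payload."""
    def __init__(self, payload, program):
        self.payload = payload
        self.program = program

class ActiveNode:
    """A node that executes a packet's carried program before forwarding it."""
    def __init__(self, name):
        self.name = name
    def forward(self, packet):
        packet.payload = packet.program(self.name, packet.payload)
        return packet

# Illustrative carried program: truncate the payload when crossing a narrow link.
def adapt_to_link(node_name, payload):
    return payload[:4] if node_name == "narrow-link" else payload

pkt = ActivePacket("HELLO-WORLD", adapt_to_link)
for node in (ActiveNode("core"), ActiveNode("narrow-link")):
    pkt = node.forward(pkt)
print(pkt.payload)  # "HELL"
```

The contrast with software-defined networking is visible even in this toy: here the adaptation logic travels inside the packet itself rather than living in a separate control plane.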

23.
Computer engineering
–
Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering, software design, and hardware-software integration, and the field focuses not only on how computer systems themselves work but also on how they integrate into larger systems. Computer engineers are also suited for robotics research, which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors. Some institutions require engineering students to complete one or two years of general engineering before declaring computer engineering as their primary focus. The first computer engineering program in the United States was established at Case Western Reserve University in 1972, and as of 2015 there were 238 ABET-accredited computer engineering programs in the US. In Europe, accreditation of computer engineering schools is done by a variety of agencies that are part of the EQANIE network. Both computer engineering and electronic engineering programs include analog and digital circuit design in their curriculum, and as with most engineering disciplines, a sound knowledge of mathematics and science is necessary for computer engineers. There are two major specialties in computer engineering: software and hardware. Computer software engineers develop, design, and test software. They construct and maintain computer programs, as well as set up networks such as intranets for companies. Software engineers can design or code new applications to meet the needs of a business or individual, and some work independently as freelancers, selling their software products or applications to an enterprise or individual. Most computer hardware engineers research, develop, design, and test various computer equipment, which can range from circuit boards and microprocessors to routers; some update existing computer equipment to be more efficient and work with newer software.
Most computer hardware engineers work in laboratories and high-tech manufacturing firms, and some also work for the federal government. According to the BLS, 95% of computer hardware engineers work in metropolitan areas, and approximately 33% of them work more than 40 hours a week. The median salary for employed qualified computer hardware engineers was $100,920 per year, or $48.52 per hour, and computer hardware engineers held 83,300 jobs in 2012 in the USA. There are many specialty areas in the field of computer engineering; examples include work on wireless communications, multi-antenna systems, optical transmission, and digital watermarking. Those focusing on communications and wireless networks work on advancements in telecommunications systems and networks; modulation and error-control coding, high-speed network design, interference suppression and modulation, design and analysis of fault-tolerant systems, and storage and transmission schemes are all a part of this specialty.

24.
Internet
–
The Internet is the global system of interconnected computer networks that use the Internet protocol suite to link devices worldwide. The origins of the Internet date back to research commissioned by the United States federal government in the 1960s to build robust, fault-tolerant communication via computer networks. The primary precursor network, the ARPANET, initially served as a backbone for the interconnection of regional academic and military networks in the 1980s. Although the Internet was widely used by academia since the 1980s, Internet use grew rapidly in the West from the mid-1990s and from the late 1990s in the developing world; in the two decades since then, Internet use has grown 100-fold, measured for the period of one year. Newspaper, book, and other print publishing are adapting to website technology, or are being reshaped into blogging, web feeds, and online news aggregators. The entertainment industry was initially the fastest growing segment on the Internet. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking, and business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. The term Internet, when used to refer to the global system of interconnected Internet Protocol networks, is a proper noun, though in common use and in the media it is often not capitalized. Some guides specify that the word should be capitalized when used as a noun. The Internet is also often referred to as the Net, as a short form of network. Historically, as early as 1849, the word internetted was used uncapitalized as an adjective, and the designers of early computer networks used internet both as a noun and as a verb, in shorthand form of internetwork or internetworking, meaning interconnecting computer networks.
The terms Internet and World Wide Web are often used interchangeably in everyday speech; however, the World Wide Web, or the Web, is only one of a large number of Internet services. The Web is a collection of interconnected documents and other web resources, linked by hyperlinks. The term Interweb is a portmanteau of Internet and World Wide Web typically used sarcastically to parody a technically unsavvy user. The ARPANET project led to the development of protocols for internetworking. The third ARPANET site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department, and in an early sign of growth, fifteen sites were connected to the young ARPANET by the end of 1971. These early years were documented in the 1972 film Computer Networks. Early international collaborations on the ARPANET were rare; European developers were concerned with developing the X.25 networks. In December 1974, RFC 675, by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking, and later RFCs repeated this use. Access to the ARPANET was expanded in 1981 when the National Science Foundation funded the Computer Science Network, and in 1982 the Internet Protocol Suite was standardized, which permitted worldwide proliferation of interconnected networks. Commercial Internet service providers emerged in the late 1980s and early 1990s, and the ARPANET was decommissioned in 1990.

25.
Queueing theory
–
Queueing theory is the mathematical study of waiting lines, or queues. In queueing theory, a model is constructed so that queue lengths and waiting times can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the Copenhagen telephone exchange. The ideas have since seen applications including telecommunication, traffic engineering, and computing. The spelling queueing over queuing is typically encountered in the research field; in fact, one of the journals of the profession is named Queueing Systems. Many theorems in queueing theory can be proved by reducing queues to mathematical systems known as Markov chains. Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory in 1909. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917. In an M/G/1 queue, the G stands for general and indicates an arbitrary probability distribution. The M/G/1 model was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin. After the 1940s, queueing theory became an area of research interest to mathematicians. In 1953, David George Kendall solved the GI/M/k queue and introduced the modern notation for queues. In 1957, Pollaczek studied the GI/G/1 queue using an integral equation, and John Kingman later gave a formula for the mean waiting time in a G/G/1 queue, Kingman's formula. The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service times to be analyzed, while problems such as performance metrics for the M/G/k queue remain open.
Under the last in, first out principle, the server also serves customers one at a time, but the customer who arrived most recently is served first. Under processor sharing, service capacity is shared equally between customers. Under a priority discipline, customers with high priority are served first; priority queues can be of two types, non-preemptive and preemptive, and no work is lost in either model. In a queueing network, when a customer is serviced at one node it can join another node and queue for service, or leave the network. For a network of m nodes, the state of the system can be described by a vector (x1, ..., xm), where xi represents the number of customers at each node. This result was extended to the BCMP network, which allows very general service time distributions and service regimes; the normalizing constant can be calculated with Buzen's algorithm, proposed in 1973. Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes.
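As a small example of the predictions such models yield, the standard closed-form results for the single-server M/M/1 queue (Poisson arrivals at rate lam, exponential service at rate mu) can be computed directly. The function name below is illustrative, not from any library:

```python
def mm1_metrics(lam: float, mu: float):
    """Return (utilization, mean number in system, mean time in system) for an M/M/1 queue."""
    if lam >= mu:
        raise ValueError("queue is unstable unless arrival rate < service rate")
    rho = lam / mu              # server utilization
    L = rho / (1.0 - rho)       # mean number of customers in the system
    W = 1.0 / (mu - lam)        # mean time in system (Little's law: L = lam * W)
    return rho, L, W

rho, L, W = mm1_metrics(lam=5.0, mu=10.0)
print(rho, L, W)  # 0.5 1.0 0.2
```

By Little's law, L = lam * W, which the numbers above satisfy (1.0 = 5.0 * 0.2); this is exactly the kind of resource-sizing prediction that makes queueing theory useful in operations research.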

26.
Agere Systems
–
Agere Systems Inc. was an integrated circuit components company based in Allentown, Pennsylvania, USA. Spun out of Lucent Technologies in 2002, Agere was merged into LSI Corporation in 2007, and LSI was in turn acquired by Avago Technologies in 2014. Agere was incorporated on August 1, 2000 as a subsidiary of Lucent Technologies. The name Agere was that of a Texas-based electronics company that Lucent had acquired in 2000, although the pronunciations of the company names are different: the Texas company's name was pronounced with three syllables, /eɪˈɡɪərˌʌ/, while this company's name was pronounced with two syllables and a hard g, /eɪˈɡɪər/. Apart from the office in Allentown, the company also maintained offices and facilities in: Reading, Pennsylvania, USA (the Reading Works facility, formerly Lucent/AT&T); Orlando, Florida, USA (the Orlando Plant, Agere's newest wholly owned wafer fabrication facility, opened in 1984 by AT&T and known for a time in the late 1990s as Cirent Semiconductor while operated as a joint venture with Cirrus Logic Corporation; the Orlando Plant was also home to Bell Labs' Advanced Research and Development Facility); Dallas, Texas, USA (Agere Optoelectronics South, formerly Hermann Technologies); Whitefield, India, located in the city of Bangalore and involved in ASIC design and software development; Raanana, Israel (an office based on Modem-Art, a developer of advanced technology for 3G/UMTS mobile devices); Nieuwegein, Netherlands (a former NCR / AT&T / Lucent Technologies division, known under the name WCND, active in the development of Wi-Fi technology); and Ascot, Berkshire, UK (an R&D and engineering site developing processor technology for GSM/GPRS/EDGE). Microsoft was sued by Agere for theft of key technology used in Internet telephony.
The allegations concerned meetings between Agere and Microsoft in 2002 and 2003, where the companies discussed selling Agere's stereophonic acoustic echo cancellation technology to Microsoft; this technology is used to improve the sound of telephone and teleconference communications over the Internet.

27.
Alcatel-Lucent
–
Alcatel-Lucent S.A. was a French global telecommunications equipment company, headquartered in Boulogne-Billancourt, France. As of January 2016, the company is part of Nokia. The company focused on fixed, mobile, and converged networking hardware, IP technologies, software, and services, with operations in more than 130 countries. Alcatel-Lucent's chief executive officer was Michel Combes and the chairman of the board was Philippe Camus. Camus joined the company in 2008, alongside Ben Verwaayen as CEO, after Alcatel-Lucent's first CEO Patricia Russo. For 2010, the company had revenues of €16 billion and a net loss of €334 million; for 2011, revenues were €15 billion and the net loss €1.1 billion; for 2012, revenues were €14.4 billion and the net loss €1.4 billion. On October 1, 2014, it announced that it had closed the sale of its subsidiary Alcatel-Lucent Enterprise to China Huaxin Post & Telecommunication Economy Development Center. Alcatel-Lucent was formed when Alcatel merged with Lucent Technologies on December 1, 2006; however, the predecessors of the company had been a part of the industry since the late 19th century. The company has roots in two early telecommunications companies: La Compagnie Générale d'Electricité (CGE) and the Western Electric Manufacturing Company. Western Electric began in 1869 when Elisha Gray and Enos N. Barton started a manufacturing firm based in Cleveland, Ohio; by 1880, the company had relocated to Chicago, Illinois. CGE would become a leader in digital communications and would also be known for producing the TGV high-speed trains in France. Bell Telephone Laboratories was created in 1925 from the consolidation of the R&D organizations of Western Electric; Bell Labs researchers have won 7 Nobel Prizes. Also in 1925, Western Electric sold its International Western Electric Company subsidiary to ITT Corporation, and CGE purchased the telecommunications part of ITT in the mid-1980s.
AT&T re-entered the European telecommunications market in 1984 following the Bell System divestiture. Philips promoted the joint venture in part because its PRX public switching technology was ageing and it sought a partner to help fund the development costs of digital switching. In 1987, AT&T increased its holding to 60%, and in 1990 it purchased the remainder of the Philips holding. In 1998, Alcatel Alsthom shifted its focus to the telecommunications industry, spinning off its Alsthom activities and changing the company's name to Alcatel. AT&T spun off Lucent Technologies in April 1996 with a public offering. In April 2004, TCL Corporation and Alcatel announced the creation of a mobile phone manufacturing joint venture. Facing intense competition in the telecommunications industry, Alcatel and Lucent Technologies merged on November 30, 2006. On April 5, 2006, Alcatel announced that it would swap its shares of Alcatel Alenia Space and Telespazio for €673 million and a 12.1% stake in Thales, increasing Alcatel's stake in Thales to 20.8%.

28.
Altera
–
Altera Corporation is an Intel-owned American manufacturer of programmable logic devices (PLDs), reconfigurable complex digital circuits; the company is a wholly owned subsidiary of Intel. Altera released its first PLD in 1984. Altera's main products are the Stratix, Arria, and Cyclone series FPGAs, the MAX series CPLDs, Quartus II design software, and Enpirion PowerSoC DC-DC power solutions. Altera and Intel announced on June 1, 2015 that Intel would acquire Altera in a transaction valued at approximately $16.7 billion, and as of December 28, 2015, the acquisition had been completed. Cyclone series FPGAs and SoC FPGAs are the company's lowest cost, lowest power FPGAs, with variants offering integrated transceivers up to 5 Gbit/s; Arria FPGAs have integrated transceivers up to 10 Gbit/s. Since December 2012, the company has been shipping SoC FPGA devices, which integrate FPGAs with full hard processor systems based around ARM processors on a single device; according to Altera, the fully depleted silicon-on-insulator chip manufacturing process is beneficial for FPGAs. In May 2013, Altera acquired embedded power chipmaker Enpirion for $134m in cash; since that time, Enpirion has been incorporated into Altera as its own product offering within the Altera portfolio. The Enpirion products are power system-on-a-chip DC-DC converters that enable greater power densities; unlike converters made from discrete components, Enpirion DC-DC converters are simulated, characterized, validated, and production qualified at delivery. Previously, Altera offered a publicly available ASIC design flow based on HardCopy ASICs; this design flow reduced design security risks as well as costs for higher volume production. Design engineers could prototype their designs in Stratix series FPGAs, and the unique design flow makes hardware/software co-design and co-verification possible.
The flow has been benchmarked to deliver systems to market 9 to 12 months faster on average: design engineers can employ a single RTL description, set of intellectual property cores, and the Quartus II design software for both FPGA and ASIC implementations, while Altera's HardCopy Design Center manages test insertion. BaySand provides a service similar to Altera's HardCopy called MetalCopy. MetalCopy is based on BaySand's Metal Configurable Standard Cells and BaySand's ASIC design methodology flow; it consists of taking a design from RTL to ASIC, including synthesis, physical layout, timing closure, test insertion, built-in test, formal verification, and static timing analysis. BaySand currently offers MetalCopy for all FPGA designs, including 28 nm and 14 nm FPGA devices. Altera and its partners offer an array of intellectual property cores that serve as building blocks design engineers can drop into their system designs to perform specific functions; IP cores eliminate some of the tasks of creating every block in a design from scratch. Altera offers a portfolio with a broad selection of soft processor cores, including the ARM Cortex-M1, and one hard IP processor core, the ARM Cortex-A9. All of Altera's devices are supported by a common design environment.

29.
AMD
–
While initially it manufactured its own processors, the company became fabless after GlobalFoundries was spun off in 2009. AMD's main products include microprocessors, motherboard chipsets, embedded processors, and graphics processors for servers, workstations, and personal computers. AMD is the second-largest supplier of, and only significant rival to, Intel in the market for x86-based microprocessors, and since acquiring ATI in 2006, AMD and its competitor Nvidia have dominated the discrete graphics processing unit (GPU) market. Advanced Micro Devices was formally incorporated on May 1, 1969, by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor. In September 1969, AMD moved from its location in Santa Clara to Sunnyvale. To immediately secure a customer base, AMD initially became a second-source supplier of microchips designed by Fairchild. AMD first focused on producing logic chips: in November 1969, the company manufactured its first product, the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, and its best-selling product in 1971 was the Am2505, the fastest multiplier available. In 1971, AMD entered the RAM chip market, beginning with the Am3101; that year AMD also greatly increased the sales volume of its linear integrated circuits, and by year end the company's total annual sales reached $4.6 million. AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as the Am14/1506 and Am14/1507 dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products, of which 49 were proprietary, including the Am9102. Intel had created the first microprocessor, its 4-bit 4004, in 1971, and by 1975 AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080.
In 1977, AMD entered into a joint venture with Siemens, which purchased 20% of AMD's stock, giving AMD an infusion of cash to increase its product lines. When the two companies' visions for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the U.S. division in 1979, and AMD closed its Advanced Micro Computers subsidiary in late 1981, after switching focus to manufacturing second-source Intel x86 microprocessors. Total sales in fiscal year 1978 topped $100 million. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation. Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981, under which the technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD-Intel cross-licensing agreement through 1995; it included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. AMD also continued its successful concentration on proprietary bipolar chips, and in 1983 it introduced INT.STD.1000, the highest manufacturing quality standard in the industry.

30.
Analog Devices
–
In 2012, Analog Devices led the worldwide data converter market with a 48.5% share, according to analyst firm Databeans. The company manufactures analog, mixed-signal, and digital signal processing integrated circuits used in electronic equipment. These technologies are used to convert, condition, and process real-world phenomena such as light, sound, temperature, and motion. The company was founded by two MIT graduates, Ray Stata and Matthew Lorber, in 1965. The same year, the company released its first product, the model 101 op amp, and in 1967 the company published the first issue of its technical magazine, Analog Dialogue. In 1969, Analog Devices filed for a public offering and became a publicly traded company; ten years later, the company was listed on the New York Stock Exchange. In 1973, the company was the first to launch laser-trimmed wafers and the first CMOS digital-to-analog converter. By 1996, the company reported over $1 billion in revenue, and that same year Jerald Fishman was named President and CEO, a position he held until his death in 2013. In 2000, ADI's sales grew by over 75% to $2.578 billion and the company acquired five companies, including BCO Technologies PLC. In January 2008, ON Semiconductor completed the acquisition of the CPU Voltage and PC Thermal Monitoring Business from ADI for $184 million. By 2004, ADI had a customer base of 60,000. In July 2016, Analog and Linear Technology agreed that Analog would acquire Linear in an approximately $14.8 billion cash and stock deal. Analog Devices is headquartered in Norwood, Massachusetts, with regional headquarters located in Shanghai, China; Munich, Germany; and Limerick, Ireland. Analog Devices has fabrication plants located in the United States and in the Republic of Ireland, and the company's testing facility is located in the Philippines.
Design centers are located in Australia, Canada, China, England, Germany, India, Israel, Japan, Scotland, and Spain. Raymond Stata is a founder of Analog Devices and was responsible for the business strategy and product roadmap. After founding the company in 1965, Stata served as CEO from 1973 to 1996 and has served as chairman of the board of directors since 1973. In addition, Stata is a trustee of the Massachusetts Institute of Technology; he received the EE Times Lifetime Achievement award in 2008 and served as chairman of the Semiconductor Industry Association for the year 2011. Vincent Roche became President and CEO of Analog Devices in May 2013; he first joined the company in 1988 as a director in Limerick. Barrie Gilbert was named the first Technology Fellow of Analog Devices in 1979; in addition, Gilbert is an IEEE Life Fellow and holds over 65 patents.

31.
Applied Micro Circuits Corporation
–
Applied Micro Circuits Corporation is a fabless semiconductor company designing network and embedded Power Architecture processors, ARM server processors, and optical transport and storage products. In 2004, AMCC bought assets, IP, and engineers concerning the PowerPC 400 microprocessors from IBM for $227 million; the deal also included access to IBM's SoC design methodology and advanced CMOS process technology. In 2009, AppliedMicro changed its branding from AMCC to AppliedMicro, and in 2011, AppliedMicro became the first company to implement the ARMv8-A architecture with its X-Gene Platform. A silicon implementation of X-Gene was first exhibited publicly in June 2013. In April 2016, information about the forthcoming X-Gene 3 server chips was made available, with a release schedule for 2017 and a projected performance improvement over the X-Gene 2. In November 2016, Macom announced that it would purchase AppliedMicro. AppliedMicro has a sponsor-level membership of Power.org and is one of its original members; it is also an executive member of the Ethernet Alliance and a member of the Open Compute Project. The Processor Products group designs and markets embedded microcontrollers as well as server, packet, and storage processors. It includes the network processors of the former MMC Networks along with IBM PowerPC 4xx series microcontrollers; since purchasing the IBM PowerPC 400 family, AppliedMicro has developed the 460 series with the 440 CPU, as well as multicore Power Architecture devices. In January 2008, the AppliedMicro PowerPC 405EX was awarded Product of the Year 2007, and in October 2011, AppliedMicro announced its X-Gene Platform, an ARM 64-bit solution aimed at cloud and enterprise servers. The Connectivity Products group of AppliedMicro designs, manufactures, and markets physical layer devices and framers/mappers. Throughout the years, AppliedMicro has acquired smaller companies to enter new markets.
In 2005, the company paid $60 million to settle a lawsuit brought on behalf of investors against the company and certain of its officers. The suit had charged the company with issuing a series of materially false and misleading statements concerning the company's operations and prospects for Q4 2001. Under the terms of the settlement, the company and the defendants denied any wrongdoing; about half of the amount of the settlement was covered by insurance.

32.
Qualcomm Atheros
–
Qualcomm Atheros is a developer of semiconductors for network communications, particularly wireless chipsets. Founded under the name T-Span Systems in 1998 by experts in signal processing and VLSI design from Stanford University, the company was renamed Atheros Communications in 2000, and it completed an initial public offering in February 2004, trading on NASDAQ under the symbol ATHR. On January 5, 2011, it was announced that Qualcomm had agreed to a takeover of the company at a valuation of US$3.7 billion; when the acquisition was completed on May 24, 2011, Atheros became a subsidiary of Qualcomm operating under the name Qualcomm Atheros. Qualcomm Atheros chipsets for the IEEE 802.11 standard of wireless networking are used by over 30 different wireless device manufacturers. The company's first office was a converted house on Encina Avenue, Palo Alto, adjacent to a car wash and Town & Country Village. In September 1999, the company moved to an office at 3145 Porter Drive, Building A, and in 2000, when T-Span Systems was renamed Atheros Communications, it moved to a larger office at 529 Almanor Avenue. Atheros publicly demonstrated its inaugural chipset, the world's first WLAN implemented in CMOS technology, and in 2002 it launched the first dual-band wireless solution, the AR5001X 802.11a/b. In 2002, Dr. Craig H. Barratt joined Atheros as VP Technology; Craig was promoted to CEO of Atheros in March 2003, a position he retained until Atheros' acquisition by Qualcomm. In 2003, the company shipped its 10-millionth wireless chip, and in 2004, Atheros unveiled a number of products, including the first video chipset for mainstream HDTV-quality wireless connectivity. In 2005, Atheros introduced the industry's first MIMO-enabled WLAN chip, as well as the ROCm family of high-performance, low-power WLAN solutions for mobile handsets, and in 2006, Atheros launched its XSPAN solutions, which featured a single-chip, triple-radio solution for 802.11n.
In that same year, Atheros began to collaborate with Qualcomm on a 3G/Wi-Fi solution for CDMA. In 2008, Atheros launched the Align 1-stream 802.11n solutions for PCs and networking equipment, and in 2010, Atheros shipped its 500-millionth WLAN chipset and 100-millionth Align 1-stream chipset and released the first HomePlug AV chipset with a 500 Mbit/s PHY rate. On February 12, 2004, Atheros completed its public offering on the NASDAQ exchange, trading under the symbol ATHR. Shares opened at $14 per share with 9 million shares offered; prices on the first day ranged up to $18.45 and closed at $17.60 per share. At the time, Atheros had approximately 170 employees. In January 2011, Qualcomm agreed to acquire Atheros at $45 per share in cash, subject to regulatory approvals. In May 2011, Qualcomm completed its acquisition of Atheros Communications for a total of US$3.7 billion, and Atheros became a subsidiary of Qualcomm under the name Qualcomm Atheros. After the acquisition, the division unveiled the WCN3660 Combo Chip, which integrated dual-band Wi-Fi and Bluetooth; Qualcomm Atheros also launched the Skifta media shifting application for Android and released the first HomePlug Green PHY solution at the end of the year.

33.
Broadcom
–
Broadcom Corporation was an American fabless semiconductor company that made products for the wireless and broadband communication industry. It was acquired by Avago Technologies in 2016 and operates as a subsidiary of the merged entity, Broadcom Limited. The division is headquartered in Irvine, California. Broadcom Corporation was founded in 1991 by the professor-student pair of Henry Samueli and Henry Nicholas from UCLA. In 1995 the company moved from its Westwood, Los Angeles office to Irvine, and in 1998 Broadcom became a public company on the NASDAQ exchange; it employs approximately 11,750 people worldwide in more than 15 countries. Broadcom is among Gartner's top 10 semiconductor vendors by revenue, and it first landed on the Fortune 500 in 2009. In 2012, Broadcom's total revenue was $8.01 billion; in 2013, Broadcom stood at No. 327 on the Fortune 500, having climbed 17 places from its 2012 ranking of No. 344. On May 28, 2015, chip maker Avago Technologies Ltd. agreed to buy Broadcom Corp. for $37 billion in cash; at the closing, completed on February 1, 2016, Broadcom shareholders held 32% of the new Singapore-based company, named Broadcom Limited. Hock Tan, Avago's President and CEO, became CEO of the combined company; Dr. Samueli became Chief Technology Officer and a member of the combined company's board, and Dr. Nicholas serves in an advisory role within the new company. The merged entity, Broadcom Limited, inherited the ticker symbol AVGO, and the BRCM ticker symbol was retired. In May 2016, Cypress Semiconductor announced that it would acquire Broadcom Corporation's full portfolio of IoT products for $550 million. Under the deal, Cypress acquired Broadcom's IoT products and intellectual property for Wi-Fi, Bluetooth and ZigBee connectivity, as well as Broadcom's WICED platform and SDK for developers.
Broadcom's product line spans computer and telecommunication networking, with products for enterprise and metropolitan high-speed networks, including transceiver and processor ICs for Ethernet and wireless LANs, cable modems, digital subscriber line, servers, home networking devices and cellular phones. It is also known for a series of high-speed encryption co-processors, which offload this processor-intensive work to a dedicated chip; this has many practical benefits for e-commerce and for PGP or GPG secure communications. Major customers include Apple, Hewlett-Packard, Motorola, IBM, Dell, Asus, Lenovo, Linksys, Logitech, Nintendo, Nokia Siemens Networks, Nortel, TiVo and Tenda. In September 2011, Broadcom shut down its digital TV operations; it also shut down its Blu-ray chip business, with the closure of these businesses beginning on September 19, 2011. On June 2, 2014, Broadcom announced its intention to exit the cellular baseband business. Vendors have included Broadcom NICs in their products; for example, the Dell M610 blade server has two embedded Gigabit NetXtreme 5709 NICs. The latest member of the Trident switch family is the Trident II XGS, which can support up to 32 x 40G ports or 104 x 10G ports on a single chip. Broadcom Crystal HD performs video acceleration, and Broadcom provided the Bluetooth connectivity for the Wii's controller.

34.
BroadLight
–
BroadLight is a fabless semiconductor company which designs, manufactures and markets fiber-access and embedded-processor systems-on-a-chip. Founded in 2000 by Ran Dror, David Levi, Haim Ben Amram, Didi Ivancovsky and Raanan Ivry, it is headquartered in Herzliya, Israel, with sales offices in the United States, China, Taiwan and Korea. BroadLight operates in the broadband telecommunication operators' market, with products for broadband fixed and mobile networks for homes. BroadLight's most notable innovations are the introduction of the control and data processing planes; the integration of the network processor enables flexibility via a programmable networking engine. Since 2004, BroadLight has been awarded multiple patents in the area of fiber access and CPE network processing. In April 2012, Broadcom Corporation acquired BroadLight for $230M in a cash transaction. The company was venture-capital funded by Azure Capital Partners, Benchmark Capital, Delta Ventures, Israel Seed Partners, Motorola Ventures, Star Ventures and Cipio Partners. The Chairman of the Board of Directors is Anthony T

Xilinx, Inc. (ZY-lingks) is an American technology company, primarily a supplier of programmable logic devices. It is …


The Spartan-3 platform was the industry’s first 90nm FPGA, delivering more functionality and bandwidth per dollar than was previously possible, setting new standards in the programmable logic industry.

Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue …

Queue networks are systems in which single queues are connected by a routing network. In diagrams of such networks, servers are typically represented by circles, queues by a series of rectangles, and the routing network by arrows. In the study of queue networks one typically tries to obtain the equilibrium distribution of the network, although in many applications the study of the transient state is fundamental.
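For an open Jackson network, the equilibrium distribution can be computed directly: the per-node arrival rates are obtained from the traffic equations, and the joint equilibrium distribution then takes product form, with each node behaving like an independent M/M/1 queue. The sketch below illustrates this for a hypothetical two-node network; all rates and routing probabilities are invented for the example, not taken from the text.

```python
# Sketch: equilibrium distribution of a small open Jackson network.
# All parameters (gamma, routing, mu) are hypothetical example values.

def solve_traffic(gamma, routing, iters=1000):
    """Fixed-point iteration for the traffic equations:
    lambda_i = gamma_i + sum_j lambda_j * r[j][i]."""
    lam = list(gamma)
    for _ in range(iters):
        lam = [gamma[i] + sum(lam[j] * routing[j][i]
                              for j in range(len(gamma)))
               for i in range(len(gamma))]
    return lam

gamma = [1.0, 0.5]        # external arrival rates into nodes 0 and 1
routing = [[0.0, 0.5],    # r[j][i]: probability of moving from node j to i
           [0.2, 0.0]]    # leftover probability means the job leaves
mu = [2.0, 2.0]           # service rates of the two nodes

lam = solve_traffic(gamma, routing)
rho = [l / m for l, m in zip(lam, mu)]   # utilisation; rho_i < 1 => stable

def pi(n):
    """Product-form equilibrium probability of joint state n = (n_0, n_1):
    each node contributes an M/M/1 marginal (1 - rho_i) * rho_i ** n_i."""
    p = 1.0
    for i, k in enumerate(n):
        p *= (1 - rho[i]) * rho[i] ** k
    return p

mean_len = [r / (1 - r) for r in rho]    # mean number in each node
```

Here the traffic equations give lambda = (11/9, 10/9), so both utilisations stay below 1 and the network is stable; `pi((0, 0))` is the probability the whole network is empty. The fixed-point iteration converges because the routing matrix is substochastic (an open network), though a direct linear solve would work equally well.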