10G-PON

10G-PON (also known as XG-PON) is a 2010 computer networking standard for data links, capable of delivering shared Internet access rates up to 10 Gbit/s (gigabits per second) over existing dark fibre. It is the ITU-T's next-generation standard following on from G-PON, or Gigabit-capable PON. Optical fibre is shared by many subscribers in a network known as FTTx in a way that centralises most of the telecommunications equipment, often displacing copper phone lines that connect premises to the phone exchange. Passive optical network (PON) architecture has become a cost-effective way to meet performance demands in access networks, and sometimes also in large optical local networks for "fibre-to-the-desk".

Passive optical networks are used for the "fibre-to-the-home" or "fibre-to-the-premises" last mile, with splitters that connect each central transmitter to many subscribers. The 10 Gbit/s shared capacity is the downstream speed broadcast to all users connected to the same PON, while the 2.5 Gbit/s upstream channel uses multiplexing techniques to prevent data frames from different users interfering with each other. Each user has a network device that converts the optical signals to the signals used in building wiring, such as Ethernet and wired analogue plain old telephone service.

As demand for network speed continues to grow, new and faster technologies are spawned from existing standards. 10G-PON is the next-generation ultra-fast capability for G-PON providers, designed to coexist with installed G-PON user equipment on the same network; it is an example of Nielsen's law, which predicts that the bandwidth demanded by high-end users grows by roughly 50% every year. The ITU-T completed parts of the standard in 2010. 10G-PON may initially find uses in connecting fibre nodes within multi-tenant units and commercial buildings.

Triple-play services of video, data and voice over IP are often cited as driving user demand for heavier usage of broadband that justifies PON investment. While RF overlay has been popular in some countries and minimises congestion caused by video services, the convergence of HDTV and IPTV and the growth of internet cloud services could create demand for bandwidth that exceeds the capacity of gigabit services in future. Teleworking and video conferencing are other applications demanding triple-play capability.

Examples of bandwidth-intensive applications include IPTV, video-conferencing, interactive video, online interactive gaming, peer-to-peer networking, karaoke-on-demand, IP video surveillance, and cloud applications where remote storage and computing resources provide online service on demand to users with thin-client local systems.[1] Cloud applications could take advantage of in-country content hosting, and 10G-PON may encourage rapid development of innovative services that become feasible as users move to faster connections.

Business continuity systems may also take advantage of 10G-PON to enable cost-effective real-time backup/recovery/replication of critical business systems across multiple sites. Other businesses may just need to connect several sites as a virtual private network, effectively a virtual office, or may have e-commerce services that require business partners to have sufficient connectivity for constant database access.

Many of these applications are already growing in both popularity and demand for bandwidth.

Symmetric 10G-PON is also proposed as XG-PON2 with 10 Gbit/s upstream, but would require more expensive burst-mode lasers on optical network terminals (ONTs) to deliver the upstream transmission speed. Another symmetric 10G-PON standard is XGS-PON (ITU-T G.9807.1, approved 2016-06-22).

Framing is "G-PON like" but uses different wavelengths from G-PON (separated with a WDM filter)[3] so that subscribers can be upgraded to 10G-PON incrementally while existing G-PON users continue on the original OLT; the G-PON standard is G.984.[4] This compares with the IEEE 802.3av standard for 10G-EPON, based on Ethernet, which has standardised upstream rates of both 1 Gbit/s and 10 Gbit/s.[1] The 10 Gigabit PON wavelengths (1577 nm down / 1270 nm up) differ from those of GPON and EPON (1490 nm down / 1310 nm up), allowing it to coexist on the same fibre with either of the Gigabit PONs.[5]
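The wavelength coexistence described above can be illustrated with a minimal sketch. The centre wavelengths come from the text; the 20 nm guard spacing is an assumption chosen purely for illustration, not a figure from the standards:

```python
# Nominal centre wavelengths (nm) for the plans mentioned above.
PLANS = {
    "G-PON":  {"down": 1490, "up": 1310},
    "XG-PON": {"down": 1577, "up": 1270},
}

def bands_overlap(a, b, guard_nm=20):
    """Rough check that two wavelength plans keep every carrier at
    least guard_nm apart, so a WDM filter can separate them."""
    waves_a = PLANS[a].values()
    waves_b = PLANS[b].values()
    return any(abs(x - y) < guard_nm for x in waves_a for y in waves_b)

print(bands_overlap("G-PON", "XG-PON"))  # → False: the plans can coexist
```

Because every XG-PON carrier sits well clear of both G-PON carriers, a passive WDM filter at the splitter or ONT can route each generation to the right receiver on the same fibre.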

G.987.1: General requirements of 10G-PON systems (approved 2010-01-13). Includes examples of services, user network interfaces (UNIs) and service node interfaces (SNIs), as well as the principal deployment configurations that are requested by network operators.

The ONU receives the downstream data from the Internet or private networks, and also uses time slots allocated by the OLT to send the upstream traffic in burst-mode. TDMA time slots prevent collisions with upstream traffic from other users sharing the same physical PON.
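The TDMA idea above can be sketched as a toy scheduler. The 125 µs frame length and the per-ONU demands are illustrative assumptions, not values taken from the standard:

```python
# Hypothetical sketch of TDMA upstream scheduling: the OLT hands each
# ONU a non-overlapping (start, length) grant within one upstream
# frame, so bursts from different users never collide on the fibre.
FRAME_US = 125.0  # assumed frame length in microseconds

def allocate(demands_us):
    """Greedy in-order allocation of upstream time slots."""
    grants, t = {}, 0.0
    for onu, need in demands_us.items():
        if t + need > FRAME_US:
            break  # no room left; this ONU waits for the next frame
        grants[onu] = (t, need)
        t += need
    return grants

grants = allocate({"onu1": 40.0, "onu2": 30.0, "onu3": 50.0})
# Grants are back-to-back, so no two bursts overlap in time:
slots = sorted(grants.values())
assert all(s2 >= s1 + d1 for (s1, d1), (s2, _) in zip(slots, slots[1:]))
```

Real dynamic bandwidth allocation is far more elaborate (it weighs queue reports and service classes), but the invariant is the same: upstream grants never overlap.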

The OLT (Optical Line Terminal) connects the PON to aggregated backhaul uplinks, allocates time slots for ONUs and ONTs to transmit upstream data, and transmits shared downstream data in broadcast mode over the PON to users. Since 10G-PON is designed to coexist with G-PON devices, migration to a 10G-PON capability could be done by upgrading the OLT and then migrating individual ONUs as needed.

Normally the OLT is on a card that slots into a chassis at the Central Office (CO), which uses special uplink cards for Ethernet backhaul to the telecommunications provider's network and internet. Uplink cards on access equipment will likely use multiple Ethernet interfaces, although it remains to be seen what uplink speeds manufacturers will offer to support 10G-PON access. Locating OLTs in outside plant cabinets may be an option for reach extension, as a way to minimise the number of central offices covering low-population-density areas.

The ITU and IEEE are planning for convergence of their 10 Gbit/s specifications at the physical layer, which would allow shared chips, optics and hardware platforms, thus driving cost reductions for hardware manufacturers.[6]

"An Optical Distribution Network (ODN) being installed today will likely need to support four or more generations of PON over its expected 30–40 year life... The fibre should enable maximum flexibility to support any potential new PON technology, be protected with proven, reliable cabling that is easy to install, and be joined by advanced, low-labour and low-loss connectivity. At only about 8%, the cost of the ODN materials (fibre, cable, and connectivity) comprises a surprisingly small portion of the total network cost."[5]

In an effort to extend the reach with support for 128 splits, the standard supports a range of optical budgets from 29 dB to 31 dB. A draft update to the standard is expected to further extend this to 33 dB and 35 dB budget classifications. A PON with a 35 dB optical budget could span 25 km or more and be shared/split among 128 subscribers.[7]
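A back-of-envelope check makes the budget-class figures above plausible. The splitter loss follows from the 1:128 split; the fibre attenuation, excess splitter loss and connector loss below are illustrative assumptions, not figures from the standard:

```python
import math

def pon_loss_db(km, splits, fibre_db_per_km=0.35, excess_db=3.5,
                connectors_db=1.0):
    """Rough end-to-end loss of a PON link (assumed component losses)."""
    split_db = 10 * math.log10(splits)  # ideal 1:N power-splitter loss
    return km * fibre_db_per_km + split_db + excess_db + connectors_db

loss = pon_loss_db(km=25, splits=128)
print(round(loss, 1), loss <= 35)  # → 34.3 True
```

A 1:128 split alone costs about 21 dB, so with roughly 25 km of fibre the link lands just inside a 35 dB budget class, while the 29–31 dB classes would force a shorter reach or a smaller split.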

Some ONTs can receive a broad range of optical spectrum, from 1480 nm to 1580 nm, making the 10G-PON downstream signal visible to G-PON receivers. As a result, such ONTs must block the unwanted downstream signal with a wavelength blocking filter (WBF), a small passive optical device.[7]
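The WBF's role can be sketched as a simple passband check. The receiver range comes from the text; the filter passband below is an assumed illustrative value, not a specified one:

```python
# Hypothetical sketch: a G-PON ONT photodiode responds across roughly
# 1480–1580 nm, so a wavelength blocking filter (WBF) must reject the
# XG-PON downstream carrier at 1577 nm while passing G-PON at 1490 nm.
RECEIVER_NM = (1480, 1580)   # broad photodiode sensitivity (from text)
WBF_PASS_NM = (1480, 1500)   # assumed passband around G-PON downstream

def reaches_receiver(wavelength_nm, with_wbf=True):
    lo, hi = WBF_PASS_NM if with_wbf else RECEIVER_NM
    return lo <= wavelength_nm <= hi

assert reaches_receiver(1490)                   # G-PON downstream passes
assert not reaches_receiver(1577)               # XG-PON downstream blocked
assert reaches_receiver(1577, with_wbf=False)   # unfiltered, it gets through
```

Without the filter, the broadcast 10G-PON signal would appear as interference at every legacy G-PON receiver sharing the fibre.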

In October 2010, Portugal Telecom reported a successful field trial of 10G-PON, transmitting 3D-TV content using XG-PON1 capabilities.[8]

Verizon also successfully completed a field trial of the pre-standard XG-PON2 (symmetric 10G-PON), capable of delivering a 10 Gbit/s broadband connection both downstream and upstream. In October 2010, at a Verizon business customer's premises in Taunton, Massachusetts, the XG-PON2 trial used the same optical fibre that provides that business with its existing FiOS network connection and services.

BT in the UK announced on 23 November 2012 that it is providing a trial 10 Gbit/s broadband service to a business customer in Cornwall using XG-PON technology.[9]

1.
Computer network
–
A computer network or data network is a telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with other using a data link. The connections between nodes are established using either cable media or wireless media, the best-known computer network is the Internet. Network computer devices that originate, route and terminate the data are called network nodes, nodes can include hosts such as personal computers, phones, servers as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device, Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the networks size, topology and organizational intent. In most cases, application-specific communications protocols are layered over other more general communications protocols and this formidable collection of information technology requires skilled network management to keep it all running reliably. The chronology of significant computer-network developments includes, In the late 1950s, in 1960, the commercial airline reservation system semi-automatic business research environment went online with two connected mainframes. Licklider developed a group he called the Intergalactic Computer Network. In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of computer systems. The same year, at Massachusetts Institute of Technology, a group supported by General Electric and Bell Labs used a computer to route. Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network, in 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network. 
This was an precursor to the ARPANET, of which Roberts became program manager. Also in 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control, in 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In July 1976, Robert Metcalfe and David Boggs published their paper Ethernet, Distributed Packet Switching for Local Computer Networks, in 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s, by 1998, Ethernet supported transmission speeds of a Gigabit. Subsequently, higher speeds of up to 100 Gbit/s were added, the ability of Ethernet to scale easily is a contributing factor to its continued use. Providing access to information on shared storage devices is an important feature of many networks, a network allows sharing of files, data, and other types of information giving authorized users the ability to access information stored on other computers on the network

2.
OSI model
–
Its goal is the interoperability of diverse communication systems with standard protocols. The model partitions a communication system into abstraction layers, the original version of the model defined seven layers. A layer serves the layer above it and is served by the layer below it, two instances at the same layer are visualized as connected by a horizontal connection in that layer. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization and these two international standards bodies each developed a document that defined similar networking models. In 1983, these two documents were merged to form a standard called The Basic Reference Model for Open Systems Interconnection, the standard is usually referred to as the Open Systems Interconnection Reference Model, the OSI Reference Model, or simply the OSI model. It was published in 1984 by both the ISO, as standard ISO7498, and the renamed CCITT as standard X.200. OSI had two components, an abstract model of networking, called the Basic Reference Model or seven-layer model. The concept of a model was provided by the work of Charles Bachman at Honeywell Information Services. Various aspects of OSI design evolved from experiences with the ARPANET, NPLNET, EIN, CYCLADES network, the new design was documented in ISO7498 and its various addenda. In this model, a system was divided into layers. Within each layer, one or more entities implement its functionality, each entity interacted directly only with the layer immediately beneath it, and provided facilities for use by the layer above it. Protocols enable an entity in one host to interact with an entity at the same layer in another host. Service definitions abstractly described the functionality provided to an -layer by an layer, the OSI standards documents are available from the ITU-T as the X. 200-series of recommendations. Some of the specifications were also available as part of the ITU-T X series. 
The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO, the recommendation X.200 describes seven layers, labeled 1 to 7. Layer 1 is the lowest layer in this model, at each level N, two entities at the communicating devices exchange protocol data units by means of a layer N protocol. Each PDU contains a payload, called the service data unit, data processing by two communicating OSI-compatible devices is done as such, The data to be transmitted is composed at the topmost layer of the transmitting device into a protocol data unit. The PDU is passed to layer N-1, where it is known as the service data unit, at layer N-1 the SDU is concatenated with a header, a footer, or both, producing a layer N-1 PDU

3.
Internet access
–
Internet access is the process that enables individuals and organisations to connect to the Internet using computer terminals, computers, mobile devices, sometimes via computer networks. Once connected to the Internet, users can access Internet services, such as email, Internet service providers offer Internet access through various technologies that offer a wide range of data signaling rates. Consumer use of the Internet first became popular through dial-up Internet access in the 1990s, by the first decade of the 21st century, many consumers in developed nations used faster, broadband Internet access technologies. By 2014 this was almost ubiquitous worldwide, with an average connection speed exceeding 4 Mbit/s. Use by a wider audience came in 1995 when restrictions on the use of the Internet to carry commercial traffic were lifted. LANs typically operated at 10 Mbit/s, while modem data-rates grew from 1200 bit/s in the early 1980s, initially, dial-up connections were made from terminals or computers running terminal emulation software to terminal servers on LANs. These dial-up connections did not support use of the Internet protocols. Broadband connections are made using a computers built in Ethernet networking capabilities. Most broadband services provide a continuous always on connection, there is no dial-in process required, made broadband Internet access a public policy issue. In 2000, most Internet access to homes was provided using dial-up, while many businesses, in 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the number of subscriptions were roughly equal at 130 million each, the broadband technologies in widest use are ADSL and cable Internet access. Newer technologies include VDSL and optical fibre extended closer to the subscriber in telephone and cable plants. 
In areas not served by ADSL or cable, some community organizations, Wireless and satellite Internet are often used in rural, undeveloped, or other hard to serve areas where wired Internet is not readily available. Newer technologies being deployed for fixed and mobile broadband access include WiMAX, LTE, starting in roughly 2006, mobile broadband access is increasingly available at the consumer level using 3G and 4G technologies such as HSPA, EV-DO, HSPA+, and LTE. Some libraries provide stations for physically connecting users laptops to local area networks, Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers, various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals, usually fee based and these services may be free to all, free to customers only, or fee-based. A Wi-Fi hotspot need not be limited to a location since multiple ones combined can cover a whole campus or park

4.
Dark fibre
–
A dark fibre or unlit fibre is an unused optical fibre, available for use in fibre-optic communication. In common vernacular, dark fibre may sometimes still be called if it has been lit by a fibre lessee. A dark fibre network or simply dark network is a privately operated optical fiber network that is run directly by its operator over dark fibre leased or purchased from another supplier and this is in contrast to purchasing bandwidth or leased line capacity on an existing network. Dark fibre networks may be used for networking, or as Internet access or infrastructure. Much of the cost of installing cables is in the engineering work required. This includes planning and routing, obtaining permissions, creating ducts and channels for the cables and this work usually accounts for more than 60% of the cost of developing fibre networks. For example, in Amsterdams city-wide installation of a network, roughly 80% of the costs involved were labour. Many fibre optic cable owners such as railroads or power utilities have always added additional fibres for lease to other carriers and this was based on the assumption that telecoms traffic, particularly data traffic, would continue to grow exponentially for the foreseeable future. The availability of wavelength-division multiplexing further reduced the demand for fibre by increasing the capacity that could be placed on a single fibre by a factor of as much as 100, as a result, the wholesale price of data traffic collapsed. A number of companies filed for bankruptcy protection as a result. Global Crossing and Worldcom are two examples in the US. According to Gerry Butters, the head of Lucents Optical Networking Group at Bell Labs. This progress in the ability to carry data over fiber reduced the need for more fibres, just as with the Railway Mania, the misfortune of one market sector became the good fortune of another, and this overcapacity created a new telecommunications market sector. 
Competitive local carriers were not required to sell dark fibre, and many do not and this increases the reach of their networks in places where their competitor has a presence, in exchange for provision of fibre capacity on places where that competitor has no presence. This is a known in the industry as coopetition. Meanwhile, other companies arose specialising as dark fibre providers, dark fibre became more available when there was enormous overcapacity after the boom years of the late 1990s through 2001. The market for dark fibre tightened up with the return of investment to light up existing fibre. In the last decade, many education institutions have bought up large quantities of existing fibre optics sitting dormant

5.
Passive optical network
–
A PON consists of an optical line terminal at the service providers central office and a number of optical network units or optical network terminals, near end users. A PON reduces the amount of fiber and central office equipment required compared with point-to-point architectures, a passive optical network is a form of fiber-optic access network. In most cases, downstream signals are broadcast to all premises sharing multiple fibers, upstream signals are combined using a multiple access protocol, usually time division multiple access. The Society of Cable Telecommunications Engineers also specified radio frequency over glass for carrying signals over an optical network. Starting in 1995, work on fiber to the home architectures was done by the Full Service Access Network working group, formed by major telecommunications service providers, the International Telecommunications Union did further work, and standardized on two generations of PON. The older ITU-T G.983 standard was based on Asynchronous Transfer Mode, a typical APON/BPON provides 622 megabits per second of downstream bandwidth and 155 Mbit/s of upstream traffic, although the standard accommodates higher rates. The ITU-T G. Again, the standards permit several choices of bit rate, but the industry has converged on 2.488 gigabits per second of downstream bandwidth, GPON Encapsulation Method allows very efficient packaging of user traffic with frame segmentation. By mid-2008, Verizon had installed over 800,000 lines, british Telecom, BSNL, Saudi Telecom Company, Etisalat, and AT&T were in advanced trials in Britain, India, Saudi Arabia, the UAE, and the USA, respectively. GPON networks have now been deployed in numerous networks across the globe, G.987 defined 10G-PON with 10 Gbit/s downstream and 2.5 Gbit/s upstream – framing is G-PON like and designed to coexist with GPON devices on the same network. 
The chief information officer of the United States Department of the Army issued a directive to adopt the technology by fiscal year 2013 and it is marketed to the US military by companies such as Telos Corporation. In 2004, the Ethernet PON standard 802. 3ah-2004 was ratified as part of the Ethernet in the first mile project of the IEEE802.3, EPON uses standard 802.3 Ethernet frames with symmetric 1 gigabit per second upstream and downstream rates. EPON is applicable for data-centric networks, as well as voice, data. 10 Gbit/s EPON or 10G-EPON was ratified as an amendment IEEE802. 3av to IEEE802.3, the upstream channel can support simultaneous operation of IEEE802. 3av and 1 Gbit/s 802. 3ah simultaneously on a single shared channel. There are currently over 40 million installed EPON ports making it the most widely deployed PON technology globally, EPON is also the foundation for cable operators’ business services as part of the DOCSIS Provisioning of EPON specifications. A PON takes advantage of wavelength division multiplexing, using one wavelength for downstream traffic, BPON, EPON, GEPON, and GPON have the same basic wavelength plan and use the 1490 nanometer wavelength for downstream traffic and 1310 nm wavelength for upstream traffic. 1550 nm is reserved for optional overlay services, typically RF video, as with bit rate, the standards describe several optical budgets, most common is 28 dB of loss budget for both BPON and GPON, but products have been announced using less expensive optics as well. 28 dB corresponds to about 20 km with a 32-way split, forward error correction may provide for another 2–3 dB of loss budget on GPON systems. As optics improve, the 28 dB budget will likely increase, although both the GPON and EPON protocols permit large split ratios, in practice most PONs are deployed with a split ratio of 1,32 or smaller

6.
Optical fiber
–
An optical fiber or optical fibre is a flexible, transparent fiber made by drawing glass or plastic to a diameter slightly thicker than that of a human hair. Fibers are also used for illumination, and are wrapped in bundles so that they may be used to carry images, thus allowing viewing in confined spaces, as in the case of a fiberscope. Specially designed fibers are used for a variety of other applications, some of them being fiber optic sensors. Optical fibers typically include a transparent core surrounded by a transparent cladding material with an index of refraction. Light is kept in the core by the phenomenon of internal reflection which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters, being able to join optical fibers with low loss is important in fiber optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the cores. For applications that demand a permanent connection a fusion splice is common, in this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors, the field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. The term was coined by Indian physicist Narinder Singh Kapany who is acknowledged as the father of fiber optics. 
Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon, John Tyndall included a demonstration of it in his public lectures in London,12 years later. When the ray passes from water to air it is bent from the perpendicular. If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, unpigmented human hairs have also been shown to act as an optical fiber. Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century, image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. The principle was first used for medical examinations by Heinrich Lamm in the following decade

7.
Fiber to the x
–
Fiber to the x is a generic term for any broadband network architecture using optical fiber to provide all or part of the local loop used for last mile telecommunications. As fiber optic cables are able to much more data than copper cables, especially over long distances. FTTX is a generalization for several configurations of fibre deployment, arranged into two groups, FTTP/FTTH/FTTB and FTTC/N, the telecommunications industry differentiates between several distinct FTTX configurations. The terms in most widespread use today are, FTTP, This term is used either as a term for both FTTH and FTTB, or where the fiber network includes both homes and small businesses. FTTH, Fiber reaches the boundary of the space, such as a box on the outside wall of a home. Passive optical networks and point-to-point Ethernet are architectures that deliver services over FTTH networks directly from an operators central office. FTTD, Fiber connection is installed from the computer room to a terminal or fiber media converter near the users desk. FTTO, Fiber connection is installed from the main computer room/core switch to a special mini-switch located at the user´s workstation or service points and this mini-switch provides Ethernet services to end user devices via standard twisted pair patch cords. The switches are located all over the building, but managed from one central point. FTTE and FTTZ are not considered part of the FTTX group of technologies, FTTF This is very similar to FTTB. In a fiber to the front yard scenario, each fiber node serves a single subscriber and this allows for multi-gigabit speeds using XG-fast technology. The fiber node may be reverse-powered by the subscriber modem, FTTN is often an interim step toward full FTTH and is typically used to deliver advanced triple-play telecommunications services. 
FTTC is occasionally ambiguously called FTTP, leading to confusion with the distinct fiber-to-the-premises system, the FTTH Councils do not have formal definitions for FTTC and FTTN. While fiber optic cables can carry data at speeds over long distances, copper cables used in traditional telephone lines. For example, the form of gigabit Ethernet runs over relatively economical category 5e, category 6 or augmented category 6 unshielded twisted-pair copper cabling. However,1 Gbit/s ethernet over fiber can easily reach tens of kilometres, therefore, FTTP has been selected by every major communications provider in the world to carry data over long 1 Gbit/s symmetrical connections directly to consumer homes. FTTP configurations that bring fiber directly into the building can offer the highest speeds since the segments can use standard ethernet or coaxial cable. Google Fiber provides speed of 1 Gbit/s, still, the type and length of employed fibers chosen, e. g. multimode vs. single-mode, are critical for applicability for future connections of over 1 Gbit/s

8.
Telecommunication
–
Telecommunication is the transmission of signs, signals, messages, writings, images and sounds or intelligence of any nature by wire, radio, optical or other electromagnetic systems. Telecommunication occurs when the exchange of information between communication participants includes the use of technology and it is transmitted either electrically over physical media, such as cables, or via electromagnetic radiation. Such transmission paths are divided into communication channels which afford the advantages of multiplexing. The term is used in its plural form, telecommunications. Early means of communicating over a distance included visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags, other examples of pre-modern long-distance communication included audio messages such as coded drumbeats, lung-blown horns, and loud whistles. Zworykin, John Logie Baird and Philo Farnsworth, the word telecommunication is a compound of the Greek prefix tele, meaning distant, far off, or afar, and the Latin communicare, meaning to share. Its modern use is adapted from the French, because its use was recorded in 1904 by the French engineer. Communication was first used as an English word in the late 14th century, in the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could pass a single bit of information. One notable instance of their use was during the Spanish Armada, in 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system between Lille and Paris. However semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres, as a result of competition from the electrical telegraph, the last commercial line was abandoned in 1880. Homing pigeons have occasionally used throughout history by different cultures. 
Pigeon post is thought to have Persians roots and was used by the Romans to aid their military, frontinus said that Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons, in the early 19th century, the Dutch government used the system in Java and Sumatra. And in 1849, Paul Julius Reuter started a service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed. Sir Charles Wheatstone and Sir William Fothergill Cooke invented the telegraph in 1837. Also, the first commercial electrical telegraph is purported to have constructed by Wheatstone and Cooke. Both inventors viewed their device as an improvement to the electromagnetic telegraph not as a new device, samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837

9.
Access network
–
An access network is a type of telecommunications network which connects subscribers to their immediate service provider. It is contrasted with the core network, which connects local providers to each other. The access network may be divided between the feeder plant or distribution network, and the drop plant or edge network. An access network, also referred to as outside plant, comprises the series of wires and cables running between the subscriber and the local exchange; the local exchange contains banks of automated switching equipment to direct a call or connection to the consumer. The access network is one of the oldest assets a telecoms operator owns, and it is constantly evolving, growing as new customers are connected. This makes the access network one of the most complex networks in the world to maintain. In 2007–2008 many telecommunication operators experienced increasing problems maintaining the quality of the records which describe the network; in 2006, according to an independent Yankee Group report, operators globally experienced profit leakage in excess of €15 billion each year. The access network is perhaps the most valuable asset an operator owns. Access networks consist largely of pairs of wires, each traveling in a direct path between the exchange and the customer. In some instances, these wires may even be aluminum, the use of which was common in the 1960s and 1970s following an increase in the cost of copper. As it happened, the increase was temporary, but the effect of this decision is still felt today because the aluminum wires oxidize. Operators offered additional services such as xDSL-based broadband and IPTV to guarantee profit, but the access network is again the main barrier to achieving these profits, since operators worldwide have accurate records of only 40% to 60% of the network. Access networks around the world are evolving to incorporate more and more optical fiber technology. The process of communicating with a network begins with an access attempt, and an access attempt itself begins with the issuance of an access request by an access originator. 
Access failure can be the result of access outage, user blocking, incorrect access, or access denial. Access denial can include: access failure caused by the issuing of a system blocking signal by a communications system that does not have a call-originator camp-on feature; and access failure caused by exceeding the maximum access time and nominal system access time fraction during an access attempt. Although some access charges are billed directly to interexchange carriers, a significant percentage of all charges are paid by the local end users. Faster PON standards generally support a larger split ratio of users per PON.
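As an illustration of the bandwidth arithmetic behind split ratios, the sketch below divides a PON's shared downstream capacity evenly across its subscribers. The line rates and split ratios used are illustrative assumptions, not figures taken from any particular standard.

```python
def per_subscriber_mbps(line_rate_gbps: float, split_ratio: int) -> float:
    """Average downstream capacity per subscriber, in Mbit/s, if all
    subscribers on the PON draw traffic equally."""
    return line_rate_gbps * 1000 / split_ratio

# G-PON-class downstream (~2.5 Gbit/s) shared over an assumed 1:32 split
gpon = per_subscriber_mbps(2.5, 32)     # 78.125 Mbit/s average
# 10G-PON-class downstream (10 Gbit/s) shared over an assumed 1:64 split
xgpon = per_subscriber_mbps(10.0, 64)   # 156.25 Mbit/s average

print(f"G-PON 1:32  -> {gpon:.1f} Mbit/s per subscriber")
print(f"XG-PON 1:64 -> {xgpon:.1f} Mbit/s per subscriber")
```

Even with twice as many subscribers per splitter, the tenfold line rate of 10G-PON roughly doubles the average capacity available to each user in this sketch.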

10.
Last mile
–
The last mile is the portion of the telecommunications network chain that physically reaches the end-user's premises. The word mile is used metaphorically; the length of the last-mile link may be more or less than a mile. The last mile of a network to the user is, conversely, the first mile from the premises to the outside world when the user is sending data. The last mile is typically the speed bottleneck in communication networks; this is because retail telecommunication networks have the topology of trees, with relatively few high-capacity trunk communication channels branching out to feed many final-mile leaves. Various approaches attempt to resolve, or at least mitigate, the problems involved in providing enhanced services over the last mile: one example is fixed wireless access, where a wireless network is used instead of wires to connect a stationary terminal to the wireline network. Other solutions being developed as alternatives to the last mile of standard incumbent local exchange carriers include WiMAX and broadband over power lines. An ISDN30 connection can carry 30 simultaneous telephone calls and many direct dial telephone numbers; when leaving the telephone exchange, the ISDN30 cable may be buried in the ground, usually in ducting, at very little depth. Loss of the last-mile link therefore means the non-delivery of calls, and any business with ISDN30-type connectivity must anticipate such failure in its business continuity planning. There are many options, as documented in customer proprietary network information: if the cable is damaged between one telephone exchange and the customer premises, most of the calls can be delivered from a surviving route to the customer. 
Diverse routing is where the carrier can provide more than one route to supply ISDN30 connectivity from the exchange, or exchanges. Carrier diversions are usually limited to all of the ISDN30 direct dial telephone numbers being delivered to one single number; in the UK, Teamphone offers this service in association with British Telecom. By not being located in the exchanges, the Teamphone version can offer an all-or-nothing diversion service if required. Such services are generally carrier-independent, and a number of companies offer them, including several in the UK and AirNorth Communications in the United States. Hosted numbers is where carriers or specialist companies host the customer's numbers within their own or the carrier's networks; when a diversion service is required, the calls can be routed to alternative numbers. Both carriers and specialist companies offer this type of service in the UK. As demand has escalated, particularly fueled by the widespread adoption of the Internet, the need for economical high-speed access by end-users located at millions of locations has ballooned as well. As requirements have changed, the systems and networks that were initially pressed into service for this purpose have proven to be inadequate, although a number of approaches have been tried to date. Since the integral of the rate of information transfer with respect to time is information quantity, delivering information at a given rate implies a corresponding minimum energy per bit. The problem of sending any given amount of information across a channel can therefore be viewed in terms of sending sufficient Information-Carrying Energy (ICE); for this reason the concept of an ICE pipe or conduit is relevant and useful for examining existing systems. 
The distribution of information to a number of widely separated end-users can be compared to the distribution of many other resources.
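The energy-per-bit view can be made concrete with a little arithmetic: if information quantity is the integral of transfer rate over time, then sustaining a given bit rate with a given transmit power fixes the energy delivered per bit. The figures below are purely illustrative assumptions, not measurements of any real system.

```python
def energy_per_bit_joules(power_watts: float, bit_rate_bps: float) -> float:
    """Energy delivered per bit when a transmitter at `power_watts`
    sustains a data rate of `bit_rate_bps`."""
    return power_watts / bit_rate_bps

# A hypothetical 1 mW optical transmitter running at 10 Gbit/s:
e = energy_per_bit_joules(1e-3, 10e9)
print(f"{e:.1e} J/bit")  # 1.0e-13 J/bit, i.e. 0.1 picojoule per bit
```

Doubling the bit rate at fixed power halves the energy available per bit, which is one way of seeing why faster last-mile links demand either more power or more efficient signalling.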

11.
Ethernet
–
Ethernet /ˈiːθərnɛt/ is a family of computer networking technologies commonly used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3; over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair. Over the course of its history, Ethernet data transfer rates have increased from the original 2.94 megabits per second to the latest 100 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. As per the OSI model, Ethernet provides services up to and including the data link layer. Since its commercial release, Ethernet has retained a good degree of backward compatibility. Features such as the 48-bit MAC address and Ethernet frame format have influenced other networking protocols. The primary alternative for some uses of contemporary LANs is Wi-Fi, a wireless protocol standardized as IEEE 802.11. Ethernet was developed at Xerox PARC between 1973 and 1974; it was inspired by ALOHAnet, which Robert Metcalfe had studied as part of his PhD dissertation. In 1975, Xerox filed a patent application listing Metcalfe, David Boggs and Chuck Thacker among the inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. Metcalfe left Xerox in June 1979 to form 3Com, and he convinced Digital Equipment Corporation, Intel, and Xerox to work together to promote Ethernet as a standard. The so-called DIX standard, for Digital/Intel/Xerox, specified 10 Mbit/s Ethernet with 48-bit destination and source addresses; it was published on September 30, 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". 
Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with two largely proprietary systems, Token Ring and Token Bus; in the process, 3Com became a major company. 3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981; an Ethernet adapter card for the IBM PC was released in 1982, and by 1985 3Com had sold 100,000. Parallel-port-based Ethernet adapters were produced for a time, with drivers for DOS. By the early 1990s, Ethernet became so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations. This process was sped up with the introduction of 10BASE-T and its relatively small modular connector. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements; in addition to computers, Ethernet is now used to interconnect appliances and other personal devices.
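The 48-bit addresses and frame format mentioned above occupy the first 14 bytes of an Ethernet II frame: destination MAC, source MAC, then a 16-bit EtherType. A minimal parsing sketch follows; the sample frame bytes are invented for illustration.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split the 14-byte Ethernet II header into its three fields."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst, src = frame[0:6], frame[6:12]
    # EtherType is big-endian ("network byte order") per the standard
    (ethertype,) = struct.unpack("!H", frame[12:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype

# A made-up broadcast frame carrying an IPv4 payload (EtherType 0x0800):
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
dst, src, etype = parse_ethernet_header(frame)
print(dst, src, hex(etype))  # ff:ff:ff:ff:ff:ff 00:11:22:33:44:55 0x800
```

The EtherType field is what lets a receiver hand the payload to the right upper-layer protocol, one of the frame-format features that later networking protocols borrowed.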

12.
Jakob Nielsen (usability consultant)
–
Jakob Nielsen is a Danish web usability consultant. He holds a Ph.D. in human–computer interaction from the Technical University of Denmark in Copenhagen. Nielsen's earlier affiliations include Bellcore, the Technical University of Denmark, and the IBM User Interface Institute at the Thomas J. Watson Research Center. From 1994 to 1998, he was a Sun Microsystems Distinguished Engineer; he was hired to make heavy-duty enterprise software easier to use, since large-scale applications had been the focus of most of his projects at the phone company and IBM. Nielsen, however, ended up spending most of his time at Sun defining the field of web usability, and he was the usability lead for several rounds of Sun's website and intranet designs. Nielsen is on the board of Morgan Kaufmann Publishers' book series in Interactive Technologies. He writes a fortnightly newsletter, Alertbox, on web design matters and has published books on the subject of web design. Nielsen founded the discount usability engineering movement for fast and cheap improvements of user interfaces and has invented several usability methods; he holds 79 United States patents, mainly on ways of making the Web easier to use. Nielsen gave his name to Nielsen's Law, which states that network connection speeds for high-end home users increase 50% per year, doubling every 21 months. As a corollary, he noted that, since this growth rate is slower than that predicted by Moore's Law of processor power, user experience would remain bandwidth-bound. Nielsen has been quoted in the computing and mainstream press for his criticism of Windows 8's user interface. In an interview with .net magazine, Nielsen explained that he wrote his guidelines from a usability perspective. In 2010, Nielsen was listed by Bloomberg Businessweek among the 28 "World's Most Influential Designers".
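Nielsen's 50%-per-year growth figure and the 21-month doubling time quoted above are consistent, as a quick calculation shows:

```python
import math

annual_growth = 1.5  # Nielsen's Law: +50% per year for high-end connections
doubling_years = math.log(2) / math.log(annual_growth)
doubling_months = 12 * doubling_years

print(f"doubles every {doubling_years:.2f} years (~{doubling_months:.0f} months)")
# doubles every 1.71 years (~21 months)
```

Moore's Law, by comparison, is usually quoted as a doubling every 18 to 24 months of transistor count; Nielsen's point was that bandwidth growth per year (50%) lags the roughly 60% annual growth implied by the faster readings of Moore's Law, so the network remains the bottleneck.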

13.
Triple play (telecommunications)
–
Triple play service is the provisioning, over a single broadband connection, of broadband Internet access, television and telephone. Triple play focuses on supplier convergence rather than on solving technical issues or establishing a common standard, though standards like G.hn might deliver all services on a common technology. Calls made at home are routed over the IP network and paid at a flat monthly rate. No interruption or authorization for the shift is required; soft handoff takes place automatically as many times as the caller enters or leaves the range. One operator's 10 Mb SDSL network enabled it to deliver voice, video, and data services to subscribers' homes using an approach known as Point-to-Point Protocol over Ethernet; this FTTH architecture brought the operator the best ARPU in the industry for a number of consecutive years. Triple play has also been deployed outside the United States, notably in Ecuador, Pakistan, India and Japan; other triple-play deployments include Deutsche Telekom, Telecom Italia, Swisscom, Telekom Austria, and Telus. Cable providers want to compete with telcos for local voice service, while incumbent telcos want to deliver television service but want to block competition for voice service from cable operators. Both industries cloak their demands for favorable treatment in claims that their positions favor the public interest. Regulators in South Carolina and Nebraska had been allowing local telcos to block Time Warner Cable from offering local service in their states. In the other direction, also in March 2007, the FCC limited the powers of municipalities and states over telcos that want to compete with cable TV companies. All three Republican members of the FCC voted for this decision, while both Democratic members voted against it, and one predicted that either the U.S. Congress or the courts would overturn it. For telephone local exchange carriers, triple play is delivered using a combination of optical fiber and digital subscriber line technologies; this configuration uses fiber communications to reach distant locations and uses DSL over an existing POTS twisted-pair cable as last-mile access to the subscriber's home. 
Subscriber homes can be in a single-family or multi-dwelling-unit environment. Using DSL over twisted pair, television content is delivered using IPTV, where the content is streamed to the subscriber in an MPEG-2 transport format; on an HFC network, television may be a mixture of analog and digital television signals. A set-top box is used at the home to allow the subscriber to control viewing. Access to the Internet is provided through ATM or DOCSIS, typically presented as an Ethernet port to the subscriber. Voice service can be provided using a traditional plain old telephone service interface as part of the legacy telephone network, or can be delivered using voice over IP; in an HFC network, voice is delivered using VoIP. This is particularly common in greenfield developments, where the capital expenditure is reduced by deploying one network to deliver all services. Over such a short distance, DSL can deliver much higher bitrates than is possible running DSL over the whole local loop from the nearest central office.

14.
IPTV
–
Unlike downloaded media, IPTV offers the ability to stream the source media continuously. As a result, a client media player can begin playing the data almost immediately; this is known as streaming media. Although IPTV uses the Internet protocol, it is not limited to television streamed from the Internet. IPTV in the telecommunications arena is notable for its ongoing standardisation process. Historically, many different definitions of IPTV have appeared, including elementary streams over IP networks and transport streams over IP networks. IPTV services may include, for example, live TV, video on demand and interactive TV. These services are delivered across an access-agnostic, packet-switched network that employs the IP protocol to transport the audio, video and control signals. The term IPTV first appeared in 1995 with the founding of Precept Software by Judith Estrin and Bill Carrico. Precept developed an Internet video product named IP/TV; the software was written primarily by Steve Casner, Karl Auerbach, and Cha Chee Kuan. Precept was acquired by Cisco Systems in 1998. Internet radio company AudioNet started the first continuous live webcasts with content from WFAA-TV in January 1998 and KCTU-LP on January 10, 1998. The operator Kingston added a VoD service in October 2001 with Yes TV; Kingston was one of the first companies in the world to introduce IPTV and IP VoD over ADSL as a commercial service. The service became the reference for various changes to UK Government regulations. In 2006, the KIT service was discontinued, subscribers having declined from a peak of 10,000 to 4,000. In 1999, NBTel was the first to commercially deploy Internet protocol television over DSL in Canada, using the Alcatel 7350 DSLAM; the service was marketed under the brand VibeVision in New Brunswick, and later expanded into Nova Scotia in early 2000 after the formation of Aliant. 
iMagicTV was later sold to Alcatel. In 2002, SaskTel was the second in Canada to commercially deploy Internet Protocol video over DSL, using the Lucent Stinger DSL platform. In 2005, SureWest Communications was the first North American company to offer high-definition television channels over an IPTV service. Also in 2005, Bredbandsbolaget launched its IPTV service as the first service provider in Sweden; as of January 2009 it was no longer the biggest supplier, TeliaSonera having gained more customers. In 2007, TPG became the first internet service provider in Australia to launch IPTV. Complementary to its ADSL2+ package, this was, and still is, free of charge to customers on eligible plans, and it now offers over 45 local free-to-air channels. By 2010, iiNet and Telstra had launched IPTV services in conjunction with internet plans, but with extra fees. In 2008, PTCL launched IPTV under the name of PTCL Smart TV in Pakistan. An IPTV service called Prism was also launched in U.S. markets after successful test marketing in Florida. During the 2014 Winter Olympics, shortest path bridging was used to deliver 36 IPTV HD Olympic channels. In 2016, KCTV introduced a set-top box called Manbang, claiming to provide video-on-demand services in North Korea via quasi-internet protocol television. According to KCTV, viewers can use the service not only in Pyongyang; demand for the equipment is said to be particularly high in Sinuiju, with several hundred users in the region.

15.
Cloud computing
–
Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources. Cloud computing relies on sharing of resources to achieve coherence and economy of scale. Advocates claim that cloud computing allows companies to avoid up-front infrastructure costs, and that it enables organizations to focus on their core businesses instead of spending time and money on computer infrastructure. Cloud providers typically use a "pay as you go" model; this can lead to unexpectedly high charges if administrators do not adapt to the pricing model. Companies can scale up as computing needs increase and then scale down again as demands decrease. The origin of the term cloud computing is unclear. The word cloud was used as a metaphor for the Internet, and later to depict the Internet in computer network diagrams; with this simplification, the implication is that the specifics of how the end points of a network are connected are not relevant for the purposes of understanding the diagram. The cloud symbol was used to represent networks of computing equipment in the original ARPANET by as early as 1977, and the term cloud has since been used to refer to platforms for distributed computing. References to cloud computing in its modern sense appeared as early as 1996, with the earliest known mention in a Compaq internal document. The popularization of the term can be traced to 2006, when Amazon.com introduced its Elastic Compute Cloud. During the 1960s, the initial concepts of time-sharing became popularized via remote job entry (RJE); this terminology was mostly associated with large vendors such as IBM and DEC. 
Full time-sharing solutions were available by the early 1970s on such platforms as Multics and Cambridge CTSS; yet the data center model, in which users submitted jobs to operators to run on IBM mainframes, was overwhelmingly predominant. By switching traffic as they saw fit to balance server use, network providers could use overall bandwidth more effectively, and they began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extended this boundary to cover all servers as well as the network infrastructure. As computers became more diffused, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing. They experimented with algorithms to optimize the infrastructure, platform, and applications, to prioritize CPUs, and to increase efficiency for end users. Since 2000, cloud computing has come into existence. In August 2006, Amazon introduced its Elastic Compute Cloud. Microsoft Azure was announced as Azure in October 2008 and was released on 1 February 2010 as Windows Azure, before being renamed Microsoft Azure on 25 March 2014.
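The "pay as you go" pricing described above rewards scaling down when demand drops. The sketch below contrasts a fleet left running all month with the same fleet run only during working hours; the hourly rate and hour counts are invented illustrative values, not any provider's actual pricing.

```python
def monthly_cost(instances: int, hourly_rate: float, hours: float = 730) -> float:
    """Pay-as-you-go bill: instances x hourly rate x hours billed.
    730 is roughly the number of hours in a month."""
    return instances * hourly_rate * hours

# Ten hypothetical instances at an assumed $0.10/hour:
always_on = monthly_cost(10, 0.10)        # left running all month
scaled    = monthly_cost(10, 0.10, 160)   # run only ~8h/day on weekdays

print(f"always on: ${always_on:.2f}, scaled down: ${scaled:.2f}")
```

This is the "unexpectedly high charges" failure mode in miniature: an administrator who treats cloud instances like owned hardware, never shutting them off, pays for every idle hour.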

16.
Telecommuting
–
Telecommuting is a work arrangement in which employees do not commute or travel to a central place of work, such as an office building, warehouse or store. Although the concepts of telecommuting and telework are closely related, there is a difference between the two: all types of technology-assisted work conducted outside of a centrally located work space are regarded as telework, whereas telecommuting refers more specifically to work undertaken at a location that reduces commuting time. Telecommuters often maintain an office and usually work from an alternative work site from one to three days a week. In the 1990s, telecommuting became the subject of pop-culture attention. In 1995, the motto that work is something you do, not somewhere you travel to, emerged; variations of this motto include "Work is something we DO, not a place that we GO" and "Work is what we do, not where we are." Telecommuting has been adopted by a range of businesses, governments and other organizations. Organizations may use telecommuting to reduce costs, and some adopt it to improve workers' quality of life, as teleworking typically reduces commuting time and time stuck in traffic jams. Teleworking may also make it easier for workers to balance their work responsibilities with family roles, and some organizations adopt teleworking for environmental reasons, as telework can reduce congestion and air pollution by reducing the number of cars on the roads. Telecommuting is also called remote work, telework, or teleworking; a person who telecommutes is known as a telecommuter, teleworker, and sometimes as a home-sourced or work-at-home employee. Many telecommuters work from home, while others, sometimes called nomad workers, work at coffee shops or other locations. The terms telecommuting and telework were coined by Jack Nilles in 1973. The number of people reported to have worked from home on their primary job in 2010 has been put at 9.4 million. 
Very few companies employ large numbers of home-based full-time staff; the call center industry is one notable exception, with several U.S. call centers employing thousands of home-based workers. For many employees, the option to work from home is available as an employee benefit, and studies show that at-home workers are willing to earn up to 30% less and experience heightened productivity. In 2009, the United States Office of Personnel Management reported that approximately 103,000 federal employees were teleworking; however, fewer than 14,000 were teleworking three or more days per week. On December 9, 2010, the U.S. Telework Enhancement Act of 2010 became law; telework allows employees, for example, to better manage their work and family obligations, and thus helps retain a more resilient Federal workforce better able to meet agency goals. Study results from the 2013 Regus Global Economic Indicator were published in September 2013; the study engaged over 26,000 business managers across 90 countries, with 55% of respondents stating that the effective management of remote workers is an attainable goal. Forrester Research's US Telecommuting Forecast reports that 34 million Americans work from home, and the number is expected to reach a staggering 63 million, or 43% of the U.S. workforce, by 2016. Cisco reports that the company has generated annual savings of $277 million in productivity by allowing employees to telecommute.

17.
Videotelephony
–
Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations, for communication between people in real time. A videophone is a telephone with a display, capable of simultaneous video and audio communication. Videoconferencing implies the use of this technology for a group or organizational meeting rather than for individuals, while telepresence may refer either to a high-quality videotelephony system or to meetup technology that goes beyond video into robotics. Videoconferencing has also been called visual collaboration and is a type of groupware. It is also used in commercial and corporate settings to facilitate meetings and conferences. Simple analog videophone communication could be established as early as the invention of the television; such an antecedent usually consisted of two closed-circuit television systems connected via cable or radio. An example of that was the German Reich Postzentralamt video telephone network serving Berlin. The development of the crucial video technology first started in the latter half of the 1920s in the United Kingdom and the United States, spurred notably by John Logie Baird and AT&T's Bell Labs. This occurred in part, at least with AT&T, to serve as an adjunct supplementing the use of the telephone, and a number of organizations believed that videotelephony would be superior to plain voice communications. However, video technology was to be deployed in analog television broadcasting long before it could become practical, or popular, for videophones. During the first manned space flights, NASA used two radio-frequency video links, one in each direction. TV channels routinely use this type of videotelephony when reporting from distant locations. The news media were to become regular users of mobile links to satellites using specially equipped trucks, and much later via special satellite videophones in a briefcase. 
This technique was very expensive, though, and could not be used for applications such as telemedicine and distance education. Videotelephony developed in parallel with conventional voice telephone systems from the mid-to-late 20th century; only in the late 20th century, with the advent of powerful video codecs combined with high-speed Internet broadband and ISDN service, did videotelephony become a practical technology for regular use. In the 1980s, digital telephony transmission networks became possible, such as ISDN networks, assuring a minimum bit rate for compressed video. During this time, there was also research into other forms of digital video communication. Many of these technologies, such as the media space, are not as widely used today as videoconferencing, but were still an important area of research. The first dedicated systems started to appear on the market as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an initial public offering in November 1984; the company also secured a patent for a codec for full-motion videoconferencing. In 1992, CU-SeeMe was developed at Cornell by Tim Dorcey et al., and in 1995 the first public videoconference between North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town.
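The bandwidth problem that video codecs solved can be seen with rough numbers: raw digital video is enormous compared with what ISDN or even early broadband could carry. All figures below are illustrative assumptions, not specifications.

```python
# Uncompressed 1080p video at 24 bits per pixel and 30 frames/s (assumed):
width, height, bits_per_pixel, fps = 1920, 1080, 24, 30
raw_bps = width * height * bits_per_pixel * fps

# A plausible compressed target bit rate for a similar picture (assumed):
compressed_bps = 5_000_000

print(f"raw: {raw_bps / 1e9:.2f} Gbit/s")                       # raw: 1.49 Gbit/s
print(f"compression needed: {raw_bps / compressed_bps:.0f}:1")  # ~299:1
```

Even a basic-rate ISDN line at 128 kbit/s is four orders of magnitude short of the raw figure, which is why practical videotelephony had to wait for codecs that could discard most of the redundancy in the signal.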

18.
Virtual private network
–
A virtual private network (VPN) is a virtualized extension of a private network across a public network, such as the Internet. It enables users to send and receive data across shared or public networks as if their computing devices were connected to the private network. Applications running across the VPN may therefore benefit from the functionality, security and management of the private network. VPNs may allow employees to securely access a corporate intranet while located outside the office, and they are used to connect geographically separated offices of an organization. However, some Internet sites block access to known VPN technology to prevent the circumvention of their geo-restrictions. A VPN is created by establishing a virtual point-to-point connection through the use of dedicated connections, virtual tunneling protocols, or traffic encryption. A VPN available from the public Internet can provide some of the benefits of a wide area network; from a user perspective, the resources available within the private network can be accessed remotely. Designers have developed VPN variants, such as Virtual Private LAN Service; some such networks are not considered true VPNs because they merely passively secure the data being transmitted by the creation of logical data streams. VPNs can be either remote-access or site-to-site, and a VPN can also be used to interconnect two similar networks over a dissimilar middle network, for example, two IPv6 networks over an IPv4 network. VPN systems may be classified by the protocols used to tunnel the traffic and by the tunnel's termination point location. To prevent disclosure of private information, VPNs typically allow only authenticated remote access, using tunneling protocols and encryption techniques. IPsec, a standards-based security protocol, is widely used, including with IPv4 and the Layer 2 Tunneling Protocol. Its design meets most security goals: authentication, integrity and confidentiality. IPsec uses encryption, encapsulating an IP packet inside an IPsec packet. 
De-encapsulation happens at the end of the tunnel, where the original IP packet is decrypted and forwarded to its intended destination. Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic or secure an individual connection. A number of vendors provide remote-access VPN capabilities through SSL; an SSL VPN can connect from locations where IPsec runs into trouble with Network Address Translation and firewall rules. Datagram Transport Layer Security is used in Cisco AnyConnect VPN and in OpenConnect VPN to solve the issues SSL/TLS has with tunneling over UDP. Microsoft Point-to-Point Encryption works with the Point-to-Point Tunneling Protocol and in several compatible implementations on other platforms. Microsoft Secure Socket Tunneling Protocol tunnels Point-to-Point Protocol or Layer 2 Tunneling Protocol traffic through an SSL 3.0 channel. Ragula Systems Development Company owns the registered trademark MPVPN. As for Secure Shell VPNs, OpenSSH offers VPN tunneling to secure remote connections to a network or to inter-network links; the OpenSSH server provides a limited number of concurrent tunnels, and the VPN feature itself does not support personal authentication. Tunnel endpoints must be authenticated before secure VPN tunnels can be established. User-created remote-access VPNs may use passwords, biometrics, two-factor authentication or other cryptographic methods, while network-to-network tunnels often use passwords or digital certificates.
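The encapsulation and de-encapsulation steps described above can be sketched in miniature: an inner packet is encrypted and wrapped in a new outer packet addressed to the tunnel endpoint, which unwraps and decrypts it. The toy XOR "cipher" below stands in for real cryptography, and nothing here is wire-compatible with IPsec or any actual VPN protocol; it only illustrates the tunnel concept.

```python
def toy_encrypt(data: bytes, key: int) -> bytes:
    """Stand-in for a real cipher: XOR every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

def encapsulate(inner_packet: bytes, tunnel_src: str, tunnel_dst: str, key: int) -> dict:
    """Wrap an encrypted inner packet in an outer 'header' addressed
    to the far tunnel endpoint (public addresses are illustrative)."""
    return {"src": tunnel_src, "dst": tunnel_dst,
            "payload": toy_encrypt(inner_packet, key)}

def decapsulate(outer: dict, key: int) -> bytes:
    """Recover the original inner packet; XOR is its own inverse."""
    return toy_encrypt(outer["payload"], key)

# Private-network traffic crossing the public Internet via the tunnel:
inner = b"private LAN packet: 10.0.0.5 -> 10.0.1.9"
outer = encapsulate(inner, "203.0.113.1", "198.51.100.7", key=0x5A)

assert outer["payload"] != inner            # contents hidden in transit
assert decapsulate(outer, 0x5A) == inner    # restored at the far end
```

Observers on the public network see only the outer addresses of the two tunnel endpoints, not the private addresses or contents of the inner packet, which is the essential privacy property a VPN tunnel provides.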

19.
E-commerce
–
E-commerce is the activity of buying or selling online. Modern electronic commerce typically uses the World Wide Web for at least one part of the transaction's life cycle, although it may also use other technologies such as e-mail. 1979: Michael Aldrich demonstrates the first online shopping system. 1981: Thomson Holidays UK installs the first business-to-business online shopping system. 1982: Minitel is introduced nationwide in France by France Télécom. 1983: The California State Assembly holds its first hearing on electronic commerce in Volcano, California; testifying are CPUC, MCI Mail, Prodigy, CompuServe and Volcano Telephone. 1989: In May, Sequoia Data Corp. introduces Compumarket, the first Internet-based system for e-commerce; sellers and buyers could post items for sale and buyers could search the database. 1990: Tim Berners-Lee writes the first web browser, WorldWideWeb, using a NeXT computer. 1992: Book Stacks Unlimited in Cleveland opens a sales website selling books online with credit card processing. 1993: Paget Press releases edition No. 3 of the first app store, The Electronic AppWrapper. 1994: Netscape 1.0 is introduced in late 1994 with SSL encryption that made transactions secure; Ipswitch IMail Server becomes the first software available online for sale; and Ten Summoner's Tales by Sting becomes the first secure online purchase, through NetMarket. 1995: The US National Science Foundation lifts its former strict prohibition of commercial enterprise on the Internet; an online shopping service launched that year features W H Smith, Tesco, Virgin Megastores/Our Price, Great Universal Stores, Interflora, Dixons Retail, Past Times, PC World and Innovations; Jeff Bezos launches Amazon.com; the first commercial-free 24-hour, Internet-only radio station, Radio HK, begins broadcasting; and eBay is founded by computer programmer Pierre Omidyar as AuctionWeb. 
1996: IndiaMART B2B marketplace is established in India; ECPlaza B2B marketplace is established in Korea. 1996: A group of SysOps in Australia implements electronic commerce using Excalibur BBS with replicated storefronts. 1998: Electronic postal stamps can be purchased and downloaded for printing from the Web. 1999: Alibaba Group is established in China; Business.com is sold for US$7.5 million to eCompanies; the peer-to-peer file-sharing software Napster launches; ATG Stores launches to sell items for the home online. 2000: The Complete Idiot's Guide to E-commerce is released on Amazon. 2001: Alibaba.com achieves profitability in December.

20.
Laser
–
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term laser originated as an acronym for "light amplification by stimulated emission of radiation". The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow. A laser differs from other sources of light in that it emits light coherently. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography, and also allows a laser beam to stay narrow over great distances. Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum, i.e. a single color of light; temporal coherence can be used to produce pulses of light as short as a femtosecond. Lasers are distinguished from other light sources by their coherence. Spatial coherence is typically expressed through the output being a narrow beam; laser beams can be focused to very tiny spots, achieving a very high irradiance, or they can have very low divergence in order to concentrate their power at a great distance. Temporal coherence implies a polarized wave at a single frequency whose phase is correlated over a great distance along the beam; by contrast, a beam produced by a thermal or other incoherent light source has an amplitude and phase that vary randomly with respect to time and position. Lasers are characterized according to their wavelength in a vacuum. Most "single wavelength" lasers actually produce radiation in several modes having slightly differing frequencies, often not in a single polarization.
Although temporal coherence implies monochromaticity, there are lasers that emit a broad spectrum of light or emit different wavelengths of light simultaneously, and there are lasers that are not single spatial mode and consequently have light beams that diverge more than is required by the diffraction limit. However, all such devices are classified as lasers based on their method of producing light. Lasers are employed in applications where light of the required spatial or temporal coherence could not be produced using simpler technologies. The word laser started as an acronym for "light amplification by stimulated emission of radiation"; in the early technical literature, especially at Bell Telephone Laboratories, the laser was called an optical maser, a term that is now obsolete. A laser that produces light by itself is technically an optical oscillator rather than an optical amplifier as suggested by the acronym; it has been noted that the acronym LOSER, for "light oscillation by stimulated emission of radiation", would have been more correct. With the widespread use of the original acronym as a common noun, optical amplifiers have come to be referred to as laser amplifiers. The back-formed verb "to lase" is frequently used in the field, meaning to produce laser light, especially in reference to the gain medium of a laser. Further use of the words laser and maser in an extended sense, not referring to laser technology or devices, can be seen in usages such as astrophysical maser.
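The diffraction limit mentioned above can be made concrete with a back-of-the-envelope calculation. The sketch below assumes an ideal Gaussian beam, for which the far-field divergence half-angle is approximately the wavelength divided by pi times the beam waist radius; the 632.8 nm wavelength (a He-Ne laser line) and the 1 mm waist are illustrative values, not taken from the text.

```python
from math import pi

def divergence_half_angle(wavelength_m: float, waist_m: float) -> float:
    """Far-field divergence half-angle (radians) of an ideal Gaussian beam:
    theta ~= lambda / (pi * w0)."""
    return wavelength_m / (pi * waist_m)

# Illustrative numbers: a 632.8 nm He-Ne beam with a 1 mm waist radius
theta = divergence_half_angle(632.8e-9, 1e-3)
print(f"{theta:.2e} rad")  # about 0.2 milliradians
```

This is why, as the text notes, a laser beam can stay narrow over great distances: at 0.2 mrad the beam radius grows by only about 20 cm per kilometre.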

21.
Network interface device
–
In telecommunications, a network interface device (NID) is a device that serves as the demarcation point between the carrier's local loop and the customer's premises wiring. Outdoor telephone NIDs also provide the subscriber access to the station wiring and serve as a convenient test point for verification of loop integrity. Generically, an NID may also be called a network interface unit, telephone network interface, or system network interface; Australia's National Broadband Network uses the term network termination device, or NTD. A smartjack is a type of NID with capabilities beyond simple electrical connection, and an optical network terminal is a type of NID used with fiber-to-the-premises applications. The simplest NIDs are essentially just a set of wiring terminals, typically taking the form of a small, weather-proof box: the telephone line from the telephone company enters the NID and is connected to one side, and the customer connects their wiring to the other side. A single NID enclosure may contain termination for a single line or multiple lines. In its role as the demarcation point, the NID separates the telephone company's equipment from the customer's wiring: the telephone company owns the NID itself and all wiring up to it, and anything past the NID is the customer's responsibility. To facilitate this, there is typically a test jack inside the NID; accessing the test jack disconnects the customer premises wiring from the public switched telephone network and allows the customer to plug a known-good telephone into the jack to isolate trouble. If the telephone works at the test jack, the problem is in the customer wiring; if it does not, the line is faulty. Most NIDs also include circuit protectors, which are surge protectors for a telephone line; they protect customer wiring, equipment, and personnel from any transient energy on the line. Simple NIDs contain no digital logic; they are "dumb" devices.
They have no capabilities beyond wiring termination and circuit protection. Several types of NIDs provide more than just a terminal for the connection of wiring; such NIDs are colloquially called smartjacks or intelligent network interface devices as an indication of their intelligence, as opposed to a simple NID. Smartjacks are typically used for more complicated types of telecommunications service; plain old telephone service lines generally cannot be equipped with smartjacks. Despite the name, most smartjacks are much more than a telephone jack. One common form for a smartjack is a circuit board with a face plate on one edge.

22.
Wavelength-division multiplexing
–
Wavelength-division multiplexing (WDM) multiplexes a number of optical carrier signals onto a single optical fiber by using different wavelengths of laser light. This technique enables bidirectional communications over one strand of fiber, as well as multiplication of capacity. The name is purely a convention, because wavelength and frequency communicate the same information. A WDM system uses a multiplexer at the transmitter to join the several signals together, and a demultiplexer at the receiver to split them apart. With the right type of fiber it is possible to have a device that does both simultaneously and can function as an optical add-drop multiplexer; the optical filtering devices used have conventionally been etalons. As there are three different WDM types, one of which is itself called WDM, the notation xWDM is used when discussing the technology as such. The concept was first published in 1978, and by 1980 WDM systems were being realized in the laboratory. The first WDM systems combined only two signals; modern systems can handle 160 signals and can expand a basic 100 Gbit/s system over a single fiber pair to over 16 Tbit/s, and systems of 320 channels also exist. WDM systems are popular with telecommunications companies because they allow them to expand the capacity of the network without laying more fiber. By using WDM and optical amplifiers, they can accommodate several generations of development in their optical infrastructure without having to overhaul the backbone network: the capacity of a given link can be expanded simply by upgrading the multiplexers and demultiplexers at each end. This is often done by use of optical-to-electrical-to-optical translation at the edge of the transport network. Most WDM systems operate on single-mode fiber optical cables, which have a core diameter of 9 µm; certain forms of WDM can also be used in multi-mode fiber cables, which have core diameters of 50 or 62.5 µm. Early WDM systems were expensive and complicated to run; however, recent standardization and better understanding of the dynamics of WDM systems have made WDM less expensive to deploy.
Optical receivers, in contrast to laser sources, tend to be wideband devices; therefore, the demultiplexer must provide the wavelength selectivity of the receiver in the WDM system. WDM systems are divided into three different wavelength patterns: normal, coarse, and dense. Normal WDM uses the two normal wavelengths, 1310 and 1550 nm, on one fiber. Coarse WDM (CWDM) provides up to 16 channels across multiple transmission windows of silica fibers. Dense wavelength-division multiplexing (DWDM) uses the C-band transmission window but with denser channel spacing. Channel plans vary, but a typical DWDM system would use 40 channels at 100 GHz spacing or 80 channels with 50 GHz spacing, and some technologies are capable of 12.5 GHz spacing. New amplification options enable the extension of the usable wavelengths to the L-band.
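As a sketch of such a channel plan, the snippet below lays out a 40-channel grid at 100 GHz spacing and converts each channel frequency to its vacuum wavelength. The 193.1 THz anchor frequency follows the ITU-T G.694.1 DWDM grid convention; the channel count and direction of numbering here are illustrative, not a specific vendor's plan.

```python
# Speed of light in vacuum, m/s
C = 299_792_458

def dwdm_grid(channels: int = 40, spacing_ghz: float = 100.0,
              anchor_thz: float = 193.1):
    """Return (frequency THz, wavelength nm) pairs for a DWDM channel plan.
    Channels are laid out upward from the ITU-T G.694.1 anchor frequency."""
    grid = []
    for n in range(channels):
        f_thz = anchor_thz + n * spacing_ghz / 1000.0
        wavelength_nm = C / (f_thz * 1e12) * 1e9  # lambda = c / f
        grid.append((round(f_thz, 4), round(wavelength_nm, 3)))
    return grid

grid = dwdm_grid()
print(grid[0])  # the anchor channel sits near 1552.5 nm, in the C band
```

The same function with `spacing_ghz=50.0` and `channels=80` gives the denser plan mentioned above; only the multiplexer and demultiplexer need to know the grid, which is why capacity upgrades leave the fiber untouched.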

23.
Institute of Electrical and Electronics Engineers
–
The Institute of Electrical and Electronics Engineers (IEEE) is a professional association with its corporate office in New York City and its operations center in Piscataway, New Jersey. It was formed in 1963 from the amalgamation of the American Institute of Electrical Engineers and the Institute of Radio Engineers. Today, it is the world's largest association of technical professionals, with more than 400,000 members in chapters around the world. Its objectives are the educational and technical advancement of electrical and electronic engineering, telecommunications, and computer engineering. IEEE stands for the Institute of Electrical and Electronics Engineers, and the association is chartered under this full legal name; however, because its membership has long been composed of engineers and scientists from many fields, the organization no longer goes by the full name, except on legal business documents. The IEEE is dedicated to advancing technological innovation and excellence. It has about 430,000 members in about 160 countries, slightly less than half of whom reside in the United States. The major interests of the AIEE were wire communications and light and power; the IRE concerned mostly radio engineering, and was formed from two smaller organizations, the Society of Wireless and Telegraph Engineers and the Wireless Institute. After World War II, the two became increasingly competitive, and in 1961 the leadership of both the IRE and the AIEE resolved to consolidate the two organizations. They merged as the IEEE on January 1, 1963. The IEEE is incorporated under the Not-for-Profit Corporation Law of the state of New York. The IEEE serves as a publisher of scientific journals and an organizer of conferences and workshops, and develops and participates in activities such as accreditation of electrical engineering programs in institutes of higher learning.
The IEEE logo is a design which illustrates the right-hand grip rule embedded in Benjamin Franklin's kite. IEEE has a dual complementary regional and technical structure, with organizational units based on geography and on technical focus, and it manages a separate organizational unit which recommends policies and implements programs specifically intended to benefit the members, the profession, and the public in the United States. The IEEE includes 39 technical societies, organized around specialized technical fields, and the IEEE Standards Association is in charge of the standardization activities of the IEEE. The IEEE History Center became a contributing organization to the Engineering and Technology History Wiki (ETHW), an effort by various engineering societies to serve as a formal repository of topic articles, oral histories, first-hand histories, and landmarks and milestones. The IEEE History Center is annexed to Stevens Institute of Technology in Hoboken, NJ. In 2016, the IEEE acquired GlobalSpec, adding the for-profit provision of engineering data to its organizational portfolio.

24.
Demarcation point
–
In telephony, the demarcation point is the point at which the public switched telephone network ends and connects with the customer's on-premises wiring. It is the dividing line which determines whether the carrier or the customer/subscriber is responsible for installation and maintenance of wiring and equipment. The demarcation point varies between countries and has changed over time; it is sometimes abbreviated as demarc, DMARC, or similar. The term MPOE (minimum point of entry) is synonymous, with the implication that the demarcation occurs as soon as possible upon entering the customer premises. A network interface device often serves as the demarcation point. Historically, AT&T owned the local loop, including the telephone wiring within the customer premises and the customer telephone equipment; a similar arrangement existed with smaller, regional telephone companies such as GTE. The point where the carrier's and customer's portions meet is called the demarcation point. The demarcation point varies with building type and service level. In its simplest form, the demarcation point is a junction block where telephone extensions join to connect to the network; this junction block usually includes a lightning arrester. In multi-line installations such as businesses or apartment buildings, the demarcation point may be a punch-down block. In most places this hardware existed before deregulation; the modern demarcation point is the network interface device or intelligent network interface device, also known as a smartjack. The NID is the telco's property and may be outdoors or indoors; it is usually placed for easy access by a technician. The demarcation point has a user-accessible RJ-11 jack, which is connected directly to the network. In most cases, everything from the central office to and including the demarcation point is owned by the carrier. Demarcation points on houses built prior to the Bell System divestiture usually do not contain a test jack; they only contained a spark-gap surge protector, a grounding post, and a mount point to connect a single telephone line.
The second wire pair was left unconnected and was kept as a spare in case the first pair was damaged. Demarcs that handle both telephony and fiber-optic Internet lines often do not look like typical residential NIDs, and in many places several customers share one central demarc in a commercial or strip-mall setting. Usually a demarc will be located indoors if it is serving more than a single customer; outdoor ones provide easier access without disturbing other tenants, but call for weatherproofing and punching through a wall for each new addition of wires and service. Typically, indoor demarcs can be identified by a patch panel of telephone wires on the wall next to a series of boxes with RJ48 jacks for T-1 lines.

25.
Residential gateway
–
In telecommunications networking, a residential gateway allows the connection of a local area network (LAN) to a wide area network (WAN). The WAN can be a larger computer network or the Internet, and WAN connectivity may be provided through DSL, a cable modem, or a mobile phone network. The term residential gateway was originally used to distinguish the inexpensive networking devices designated for use in the home from the devices used in corporate LAN environments. In recent years, however, residential gateways have gained many of the capabilities of corporate gateways, and many home LANs are now able to provide most of the functions of small corporate LANs. As a part of the carrier network, the home gateway supports remote control, detection, and configuration. A modem by itself provides none of the functions of a router; it merely allows ATM or Ethernet or PPP traffic to be transmitted across telephone lines, cable wires, optical fibers, or wireless radio frequencies. On the receiving end is another modem that re-converts the transmission back into digital data packets. This allows network bridging using telephone, cable, optical, and radio media. The modem also provides handshake protocols, so that the devices on each end of the connection are able to recognize each other; however, a modem generally provides few other network functions. A USB modem plugs into a single PC and allows a connection of that single PC to a WAN; if properly configured, the PC can also function as the router for a home LAN. An internal modem can be installed on a single PC, also allowing that single PC to connect to a WAN; again, the PC can be configured to function as a router for a home LAN. A wireless access point can function in a similar fashion to a modem: it can allow a connection from a home LAN to a WAN. However, many modems now incorporate the features mentioned below and thus are appropriately described as residential gateways; a residential gateway may also have an internal modem, most commonly for a DSL or cable ISP.
It may also provide functions such as dynamic DNS. Most routers are self-contained components, using internally stored firmware; they are generally OS-independent, i.e. they can be accessed with any operating system. Wireless routers perform the same functions as a router, but also allow connectivity for wireless devices with the LAN. The majority of known router vulnerabilities have been present in the web administration consoles of the routers, allowing unauthorised control either via default passwords or vendor backdoors. See also the Home Gateway Initiative, a group of broadband providers proposing specifications for residential gateways.

26.
Firewall (computing)
–
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier between a trusted, secure internal network and another outside network, such as the Internet, that is assumed not to be trusted. Firewalls are often categorized as either network firewalls or host-based firewalls. Network firewalls filter traffic between two or more networks; they are either software appliances running on general-purpose hardware, or hardware-based firewall computer appliances. Host-based firewalls provide a layer of software on one host that controls network traffic in and out of that single machine. Firewall appliances may also offer other functionality to the internal network they protect, such as acting as a DHCP or VPN server for that network. The term firewall originally referred to a wall intended to confine a fire or potential fire within a building; later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. Firewall technology emerged in the late 1980s, when the Internet was a fairly new technology in terms of its global use. An early driver was the Morris Worm, which spread itself through multiple vulnerabilities in the machines of the time and hit Berkeley, UC San Diego, Lawrence Livermore, and Stanford. Although it was not malicious in intent, the Morris Worm was the first large-scale attack on Internet security. The first type of firewall was the packet filter, which looks at the network addresses and ports of the packet. The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what is now a highly involved and technical Internet security feature. Packet filters act by inspecting the packets which are transferred between computers on the Internet.
If a packet does not match the packet filter's set of filtering rules, the filter drops (silently discards) the packet or rejects it (discards it and notifies the sender); conversely, if the packet matches one or more of the programmed filters, the packet is allowed to pass. This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic; instead, it filters each packet based only on information contained in the packet itself. When the packet passes through the firewall, it is filtered on a protocol and port-number basis: for example, a rule in the firewall may exist to block telnet access. From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways. Second-generation firewalls perform the work of their predecessors but also operate up to layer 4 of the OSI model. This is achieved by retaining packets until enough information is available to make a judgement about the connection's state; though static rules are still used, these rules can now contain connection state as one of their test criteria. Certain denial-of-service attacks bombard the firewall with thousands of fake connection packets in an attempt to overwhelm it by filling its connection state memory. Marcus Ranum, Wei Xu, and Peter Churchyard developed an application firewall known as Firewall Toolkit (FWTK), and in June 1994, Wei Xu extended the FWTK with the enhancement of IP filtering.
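A first-generation packet filter of the kind described above can be sketched in a few lines: each packet is judged in isolation against a static rule list, with no notion of connection state. The rule fields, the example ports, and the default-deny policy below are illustrative assumptions, not any real firewall's API.

```python
# Illustrative static rule list: first matching rule wins.
RULES = [
    {"action": "block", "proto": "tcp", "dst_port": 23},  # block telnet
    {"action": "allow", "proto": "tcp", "dst_port": 80},  # allow HTTP
]

def filter_packet(packet: dict, rules=RULES, default: str = "block") -> str:
    """Stateless packet filtering: decide allow/block from the packet's own
    header fields only, ignoring whether it belongs to an existing stream."""
    for rule in rules:
        if (rule["proto"] == packet["proto"]
                and rule["dst_port"] == packet["dst_port"]):
            return rule["action"]
    return default  # default-deny policy for unmatched packets

print(filter_packet({"proto": "tcp", "dst_port": 23}))  # block
print(filter_packet({"proto": "tcp", "dst_port": 80}))  # allow
```

The second-generation (stateful) firewalls described next would additionally track each connection's state and use it as a rule criterion; this sketch deliberately omits that.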

27.
Router (computing)
–
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic-directing functions on the Internet: a data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet header; then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of routers are home and small-office routers that simply pass IP packets between the home computers and the Internet; an example is the owner's cable or DSL router. Though routers are typically dedicated hardware devices, software-based routers also exist. When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a dynamic routing protocol; each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router may have interfaces for different physical types of network connections, such as copper cables or fibre optic, and its firmware can support different networking communications protocol standards. Each network interface is used by this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. The network prefixes recorded in the routing table do not necessarily map directly to the physical interface connections. A router can also direct packets using internal pre-configured directives, called static routes; static and dynamic routes are stored in the Routing Information Base (RIB).
The control-plane logic then strips non-essential directives from the RIB and builds a Forwarding Information Base (FIB) to be used by the forwarding plane. Forwarding plane: the router forwards data packets between incoming and outgoing interface connections, routing them to the correct network using information that the packet header contains together with data recorded in the routing table by the control plane. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' networks. The largest routers interconnect the various ISPs, or may be used in large enterprise networks; smaller routers usually provide connectivity for typical home and office networks. Other networking solutions may be provided by a backbone Wireless Distribution System (WDS), which avoids the costs of introducing networking cables into buildings.
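The routing-table lookup described above can be sketched with a longest-prefix match: among all routes that contain the destination address, the most specific (longest) prefix wins. The routes and next-hop names below are hypothetical, using only Python's standard-library ipaddress module.

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop). A /0 default route
# matches everything; more specific prefixes override it.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "isp-uplink"),
    (ipaddress.ip_network("10.0.0.0/8"), "core-switch"),
    (ipaddress.ip_network("10.1.2.0/24"), "office-subnet"),
]

def next_hop(dst: str) -> str:
    """Longest-prefix match: pick the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.55"))  # office-subnet (the /24 wins over /8 and /0)
print(next_hop("10.9.9.9"))   # core-switch
print(next_hop("8.8.8.8"))    # isp-uplink (only the default route matches)
```

In a real router this lookup runs against the FIB built by the control plane, usually in hardware with a trie or TCAM rather than a linear scan.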

28.
Cable television
–
Cable television is a system of delivering television programming to consumers via signals transmitted through cables. This contrasts with broadcast television, in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television. FM radio programming, high-speed Internet, telephone services, and similar non-television services may also be provided through these cables. Analog television was standard in the 20th century, but since the 2000s cable systems have been upgraded to digital cable operation. A cable channel is a television network available via cable television; alternative terms include non-broadcast channel or programming service, the latter being mainly used in legal contexts. Examples of cable/satellite channels and cable networks available in many countries are HBO, MTV, Cartoon Network, E!, and Eurosport. The abbreviation CATV is often used for cable television; it originally stood for Community Access Television or Community Antenna Television. In areas where over-the-air TV reception was limited by distance from transmitters or mountainous terrain, large community antennas were constructed, and cable was run from them to individual homes. The origins of cable broadcasting for radio are even older, as radio programming was distributed by cable in some European cities as far back as 1924. Cable television has gone through a series of steps of evolution in the United States and Canada. Particularly in Canada, communities with their own broadcast signals were fertile cable markets, as viewers wanted to receive American signals. Early systems carried only a maximum of seven channels, using channels 2, 4, 5 or 6, 7, 9, 11 and 13, as the equipment was unable to confine the signal discreetly within the assigned channel bandwidth. The reason channels 4 and 5, along with 6 and 7, could be used together was the 4 MHz gap between 4 and 5 and the nearly 90 MHz gap between 6 and 7. Even though eight channels are listed, systems of the era maximized at 7 channels.
As equipment improved, all channels could be utilized, except where a local VHF television station broadcast, as local broadcast channels were not usable for signals deemed to be a priority. Later, the cable operators began to carry FM radio stations and encouraged subscribers to connect their FM stereo sets to cable; before stereo and bilingual TV sound became common, pay-TV channel sound was added to the FM stereo cable line-ups. About this time, operators expanded beyond the 12-channel dial to use the midband and superband VHF channels adjacent to the high band (channels 7–13) of North American television frequencies. Some operators, as in Cornwall, Ontario, used a dual distribution network with channels 2–13 on each of the two cables. During the 1980s, United States regulations not unlike public, educational, and government access created the beginning of cable-originated live television programming. These stations evolved partially into today's over-the-air digital subchannels carried by a main broadcast TV station; many live local programs with local interests were subsequently created all over the United States in most major television markets in the early 1980s. This evolved into today's many cable-only broadcasts of diverse programming, including cable-only produced television movies and miniseries. Cable specialty channels, starting with channels oriented to show movies and large sporting or performance events, diversified further, and narrowcasting became common. By the late 1980s, cable-only signals outnumbered broadcast signals on cable systems. By the mid-1980s in Canada, cable operators were allowed by the regulator to enter into distribution contracts with cable networks on their own. By the 1990s, tiers became common, with customers able to subscribe to different tiers to obtain different selections of additional channels above the basic selection; by subscribing to additional tiers, customers could get specialty channels, movie channels, and foreign channels.

29.
G.hn
–
G.hn is a specification for home networking with data rates up to 1 Gbit/s and operation over three types of legacy wires: telephone wiring, coaxial cables, and power lines. A single G.hn semiconductor device is able to network over any of the supported home wire types; some benefits of a multi-wire standard are lower equipment development costs and lower deployment costs for service providers. It was developed under the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T) and is promoted by the HomeGrid Forum. ITU-T Recommendation G.9960, which received approval on October 9, 2009, specified the physical layers and the architecture of G.hn; the data link layer was approved on June 11, 2010. Key promoters CEPCA, HomePNA, and UPA, creators of two of the earlier interfaces, united behind the latest version of the standard in February 2009. The ITU-T extended the technology with multiple-input multiple-output (MIMO) capability to increase data rates; the work on MIMO for G.hn at ITU-T is under the G.9963 standard. G.hn specifies a single physical layer based on fast Fourier transform orthogonal frequency-division multiplexing (OFDM) modulation. G.hn includes the capability to notch specific frequency bands to avoid interference with amateur radio bands and other licensed radio services, and it includes mechanisms to avoid interference with legacy home networking technologies. OFDM systems split the transmitted signal into multiple orthogonal sub-carriers; in G.hn each of the sub-carriers is modulated using QAM, and the maximum QAM constellation supported by G.hn is 4096-QAM. There are two types of transmission opportunities (TXOPs). Contention-Free Transmission Opportunities (CFTXOP) have a fixed duration, are allocated to a specific pair of transmitter and receiver, and are used for implementing TDMA channel access for specific applications that require quality-of-service guarantees. Shared Transmission Opportunities (STXOP) are shared among multiple devices in the network.
STXOPs are divided into time slots (TS), of which there are two types. Contention-Free Time Slots (CFTS) are used for implementing implicit token-passing channel access: in G.hn, a series of consecutive CFTS is allocated to a number of devices, with the allocation performed by the domain master and broadcast to all nodes in the network. There are pre-defined rules that specify which device can transmit after another device has finished using the channel; as all devices know who is next, there is no need to explicitly send a token between devices. The process of passing the token is implicit and ensures there are no collisions during channel access. Contention-Based Time Slots (CBTS) are used for implementing CSMA/CARP channel access; in general, CSMA systems cannot completely avoid collisions, so CBTS are only useful for applications that do not have strict quality-of-service requirements. Although most elements of G.hn are common to all three media supported by the standard, G.hn includes media-specific optimizations for each medium. Some of these parameters include the OFDM carrier spacing (195.31 kHz in coaxial cable, 48.82 kHz in phone lines, 24.41 kHz in power lines) and the FEC rates: G.hn's FEC can operate with code rates 1/2, 2/3, 5/6, 16/18, and 20/21.
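Two of the numeric relationships above can be checked directly: the raw bits each QAM-modulated sub-carrier carries per symbol (log2 of the constellation size), and the ratios between the media-specific carrier spacings. The sketch below uses only figures quoted in the text; nothing about the G.hn framing itself is implied.

```python
from math import log2

def bits_per_symbol(constellation_points: int) -> int:
    """Raw bits carried per sub-carrier per OFDM symbol for M-QAM."""
    return int(log2(constellation_points))

# The maximum constellation in G.hn is 4096-QAM:
print(bits_per_symbol(4096))  # 12 bits per sub-carrier per symbol

# Media-specific OFDM carrier spacings quoted above (kHz):
spacings_khz = {"coax": 195.31, "phone": 48.82, "power": 24.41}
# Note the power-of-two ratios: noisier media get narrower sub-carriers.
print(round(spacings_khz["phone"] / spacings_khz["power"], 1))  # 2.0
print(round(spacings_khz["coax"] / spacings_khz["phone"], 1))   # 4.0
```

The narrower spacing on power lines reflects their harsher channel: longer OFDM symbols tolerate more delay spread and impulsive noise at the cost of per-symbol throughput.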

30.
Time-division multiple access
–
Time-division multiple access (TDMA) is a channel access method for shared-medium networks. It allows several users to share the same frequency channel by dividing the signal into different time slots. The users transmit in rapid succession, one after the other, each using its own time slot; this allows multiple stations to share the same transmission medium while using only a part of its channel capacity. It is also used extensively in satellite systems and combat-net radio systems; for dynamic TDMA packet-mode communication, see below. TDMA is a type of time-division multiplexing, with the special point that instead of having one transmitter connected to one receiver, there are multiple transmitters. GSM, D-AMPS, PDC, iDEN, and PHS are examples of TDMA cellular systems; GSM combines TDMA with frequency hopping and wideband transmission to minimize common types of interference. In the GSM system, the synchronization of the mobile phones is achieved by sending timing advance commands from the base station, which instruct the mobile phone to transmit earlier; this compensates for the propagation delay resulting from the finite speed of radio waves. The mobile phone is not allowed to transmit for its entire time slot; as the transmission drifts toward the edge of its allocation, the mobile network adjusts the timing advance to re-synchronize the transmission. Initial synchronization of a phone requires even more care: before a mobile transmits, there is no way to know the offset required. For this reason, a time slot has to be dedicated to mobiles attempting to contact the network (the random access channel, or RACH, referenced below). The mobile attempts to broadcast at the beginning of the time slot. If the mobile is located next to the base station, there will be no time delay and this will succeed. If, however, the phone is at just less than 35 km from the base station, the time delay will mean its broadcast arrives at the very end of the time slot; in that case, the mobile will be instructed to broadcast its messages starting nearly a whole time slot earlier than would be expected otherwise. Finally, if the mobile is beyond the 35 km cell range in GSM, then the RACH will arrive in a neighbouring time slot.
It is this feature, rather than limitations of power, that limits the range of a GSM cell to 35 km when no special extension techniques are used. In G.hn, a master device allocates Contention-Free Transmission Opportunities (CFTXOPs) to the other, slave devices in the network; only one device can use a CFTXOP at a time, thus avoiding collisions. The FlexRay protocol, a wired network used for safety-critical communication in modern cars, also uses the TDMA method for data transmission control.
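As a back-of-the-envelope check on the 35 km figure, the arithmetic can be sketched in a few lines of Python. The constants are standard GSM values (a bit period of 48/13 µs and a 6-bit timing advance field, so TA ranges over 0–63); the helper name is ours, not from any particular library:

```python
# Sketch (not from the article): why GSM timing advance caps the cell
# radius near 35 km. One TA step equals one bit period of *round-trip*
# delay, so each step is worth about 553 m of one-way distance.
C = 299_792_458              # speed of light in m/s
BIT_PERIOD = 48 / 13 / 1e6   # GSM bit period in seconds (~3.69 us)

def timing_advance(distance_m: float) -> int:
    """Round-trip propagation delay expressed in whole bit periods."""
    return round((2 * distance_m / C) / BIT_PERIOD)

metres_per_step = C * BIT_PERIOD / 2        # ~553.5 m per TA step
max_range_km = 63 * metres_per_step / 1e3   # 6-bit TA field -> ~34.9 km
```

With the maximum TA value of 63, the reachable one-way distance works out to just under 35 km, matching the cell-range limit described above.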

31.
Cellular network
–
A cellular network or mobile network is a communication network where the last link is wireless. The network is distributed over land areas called cells, each served by at least one fixed-location transceiver; this base station provides the cell with the network coverage used for transmission of voice, data and other services. A cell typically uses a different set of frequencies from neighboring cells to avoid interference; joined together, these cells provide radio coverage over a wide geographic area. This allows mobile phones and mobile computing devices to be connected to the public switched telephone network. Private cellular networks can be used for research or for large organizations and fleets. Each cell is assigned multiple frequencies, which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the frequencies are not reused in adjacent neighboring cells, as that would cause co-channel interference. With a single transmitter, only one transmission can be carried on any given frequency at a time. Unfortunately, there is inevitably some level of interference from the signal of other cells that use the same frequency; this means that, in a standard FDMA system, there must be at least a one-cell gap between cells that reuse the same frequency. In the simple case of a taxi company, each radio had a manually operated channel selector knob to tune to different frequencies. As the drivers moved around, they would change from channel to channel; the drivers knew which frequency covered approximately what area, and when they did not receive a signal from the transmitter, they would try other channels until they found one that worked. The taxi drivers would speak only one at a time, when invited by the base station operator.
With TDMA, the transmitting and receiving time slots used by different users in each cell are different from each other; with FDMA, the transmitting and receiving frequencies used by different users in each cell are different from each other. In a simple system, such as the taxi dispatch radio, the driver manually tuned to a frequency of a chosen cell to obtain a strong signal. The principle of CDMA is more complex, but achieves the same result. Time-division multiple access is used in combination with either FDMA or CDMA in a number of systems to give multiple channels within the coverage area of a single cell. The key characteristic of a cellular network is the ability to re-use frequencies to increase both coverage and capacity. The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance D is calculated as D = R√(3N), where R is the cell radius and N is the reuse factor (the number of cells per cluster). Cells may vary in radius from 1 to 30 kilometres. The boundaries of adjacent cells can overlap, and large cells can be divided into smaller cells. The frequency reuse factor is the rate at which the same frequency can be used in the network.
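The reuse-distance formula quoted above is simple enough to illustrate directly. The following is a sketch (the function name and example figures are ours, chosen for illustration):

```python
import math

# Sketch: frequency reuse distance D = R * sqrt(3 * N), where R is the
# cell radius and N is the reuse factor (cells per cluster).
def reuse_distance(radius_km: float, reuse_factor: int) -> float:
    return radius_km * math.sqrt(3 * reuse_factor)

# Example: 2 km cells with a reuse factor of 7 -> co-channel cells must
# be roughly 9.2 km apart.
d = reuse_distance(2.0, 7)
```

A larger reuse factor spaces co-channel cells further apart (less interference) at the cost of fewer channels available per cell.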

32.
Backhaul (telecommunications)
–
In a hierarchical telecommunications network, the backhaul portion comprises the intermediate links between the core (backbone) network and the small subnetworks at the edge. In contracts pertaining to such networks, backhaul is the obligation to carry packets to and from that global network. A non-technical business definition of backhaul is the commercial wholesale bandwidth provider who offers quality of service guarantees to the retailer. Sometimes middle-mile networks exist between the customer's own LAN and those exchanges; these serve retail networks, which in turn connect buildings and bill customers directly. See national broadband plans from around the world, many of which were motivated by the perceived need to break the monopoly of incumbent commercial providers; the US plan, for instance, specifies that all community anchor institutions should be connected by fiber optics before the end of 2020. Other examples include: connecting wireless base stations to the base station controllers; connecting DSLAMs to the nearest ATM or Ethernet aggregation node; connecting a large company's site to a metro Ethernet network; and connecting a submarine cable system landing point with the main terrestrial telecommunications network of the country that the cable serves. The choice of backhaul technology must take account of parameters such as capacity, cost and reach. Generally, backhaul solutions can largely be categorised into wired or wireless. Wired is usually an expensive solution and often impossible to deploy in remote areas; for wireless, as data rates increase, the range of network coverage is reduced. Mesh networks are unique enablers that can reduce this cost due to their flexible architecture: with mesh networking, access points are connected wirelessly and exchange data frames with each other to forward traffic to and from a gateway point. Since a mesh requires no costly cable construction for its backhaul network, mesh technology can extend the coverage of service areas easily and flexibly.
For further cost reduction, a large-scale, high-capacity mesh is desirable. For instance, Kyushu University's Mimo-Mesh Project, based in Fukuoka City, Fukuoka Prefecture, Japan, has developed and put into use new technology for building high-capacity mesh infrastructure. A key component is IPT (intermittent periodic transmit), a proprietary packet-forwarding scheme designed to reduce radio interference in the path of mesh networks. That network uses a wireless multi-hop relay of up to 11 access points while delivering high bandwidth to end users; actual throughput is double that of standard mesh network systems using conventional packet forwarding. Latency, as in all multi-hop relays, suffers, but not to the degree that it compromises voice over IP communications. Many common wireless mesh network hotspot solutions are supported in open-source router firmware, including DD-WRT, OpenWRT and derivatives; Sputnik Agent, Hotspot System, Chillispot and the ad-supported AnchorFree are four examples that work even with lower-end routers like the WRT54G. The IEEE 802.21 standard specifies basic capabilities for such systems, including 802.11u unknown-user authentication and 802.11s ad hoc wireless mesh networking support.

33.
Telephone exchange
–
A telephone exchange is a telecommunications system used in the public switched telephone network or in large enterprises. In historical perspective, telecommunication terms have been used with different semantics over time; the term telephone exchange is often used synonymously with central office, a Bell System term. Often, a central office is defined as a building used to house the inside plant equipment of potentially several telephone exchanges, and the area served has also been referred to as the exchange area. Central office locations may also be identified in North America as wire centers. All central offices within a larger region, typically aggregated by state, were assigned a common numbering plan area code. For corporate or enterprise use, a telephone exchange is often referred to as a private branch exchange (PBX); smaller installations might deploy a PBX or key telephone system in the office of a receptionist. Telephone exchanges made it possible for subscribers to call each other at homes, businesses, or public spaces, making telephony an available and comfortable communication tool for everyday use. One of the first to propose a telephone exchange was the Hungarian Tivadar Puskás in 1877, while he was working for Thomas Edison. The first experimental telephone exchange was based on the ideas of Puskás; the world's first commercial telephone exchange opened on November 12, 1877 in Friedrichsberg, close to Berlin. George W. Coy designed and built the first commercial US telephone exchange, which opened in New Haven. The switchboard was built from carriage bolts, handles from teapot lids and bustle wire. Charles Glidden is also credited with establishing an exchange in Lowell, MA, with 50 subscribers in 1878. In Europe, other early telephone exchanges were based in London and Manchester; Belgium had its first International Bell exchange a year later. In 1887 Puskás introduced the multiplex switchboard; later exchanges consisted of one to several hundred plug boards staffed by switchboard operators.
Each operator sat in front of a panel containing banks of ¼-inch tip-ring-sleeve jacks. In front of the jack panel lay a horizontal panel containing two rows of patch cords, each pair connected to a cord circuit. When a calling party lifted the receiver, the loop current lit a signal lamp near the jack. The operator responded by inserting the rear cord into the jack and switching her headset into the circuit to ask for the number. For a local call, the operator inserted the front cord of the pair into the called party's local jack.

34.
Outside plant
–
In the United States, the DOD defines outside plant as the communications equipment located between a main distribution frame and a user end instrument. The CATV industry divides its fixed assets between head end, or inside plant, and outside plant; the electrical power industry also uses the term outside plant to refer to electric power distribution systems. Network connections between devices such as computers, printers, and phones require a physical infrastructure to carry them. Typically, this infrastructure will consist of: cables from wall outlets and jacks running to communications closets; cables connecting one communications closet to another, sometimes referred to as riser cable; racks containing telecommunications hardware, such as switches and routers; cables connecting one building to another; exterior communications cabinets containing hardware outside of buildings; and radio transceivers used inside or outside buildings, such as wireless access points, together with associated hardware such as antennas and towers. The portion of this infrastructure contained within a building is the inside plant; where the two meet in a given structure is the demarcation point. Outside plant cabling, whether copper or fiber, is installed as aerial cable between poles, in an underground conduit system, or by direct burial. Hardware associated with the outside plant must be either protected from the elements or constructed with materials suitable for exposure to the elements. Installation of the outside plant elements often requires construction of significant physical infrastructure. In older large installations, cabling is sometimes protected by air pressure systems designed to prevent water infiltration; while this is no longer the favored approach, the cost of replacing the older cabling with sealed cabling is often prohibitively expensive. The cabling used in the outside plant must also be protected from electrical disturbances caused by lightning or by voltage surges due to electrical shorts or induction.
One or more twisted pairs, called a drop wire, run from the subscriber premises to a nearby distribution cable; these distribution cables contain fifty or more twisted pairs. Secondary feeder lines run to a cabinet containing a distribution frame called a Serving Area Interface (SAI). The SAI is connected to the main distribution frame, located at a telephone exchange or other switching facility. An SAI may also contain a digital subscriber line access multiplexer supporting DSL service. Active equipment can then be connected to the line in order to provide service, but this is not considered part of the outside plant. The environment can play a role in the quality and lifespan of equipment used in the outside plant.

35.
Transverse mode
–
A transverse mode of electromagnetic radiation is a particular electromagnetic field pattern of radiation measured in a plane perpendicular to the propagation direction of the beam. Transverse modes occur in radio waves and microwaves confined to a waveguide, and also in light waves in an optical fiber and in a laser's optical resonator. Transverse modes occur because of boundary conditions imposed on the wave by the waveguide; for this reason, the modes supported by a waveguide are quantized. The allowed modes can be found by solving Maxwell's equations for the boundary conditions of a given waveguide. Unguided electromagnetic waves in free space, or in a bulk isotropic dielectric, can be described as a superposition of plane waves. However, in any sort of waveguide where boundary conditions are imposed by a physical structure, a wave of a particular frequency can be described in terms of transverse modes, and these modes generally have different propagation constants. When two or more modes have an identical propagation constant along the waveguide, there is more than one modal decomposition possible for describing a wave with that propagation constant. Modes in waveguides can be classified as follows: transverse electromagnetic (TEM) modes, with neither electric nor magnetic field in the direction of propagation; transverse electric (TE) modes, with no electric field in the direction of propagation, sometimes called H modes because there is only a magnetic field along the direction of propagation; transverse magnetic (TM) modes, with no magnetic field in the direction of propagation, sometimes called E modes because there is only an electric field along the direction of propagation; and hybrid modes, with non-zero electric and magnetic fields in the direction of propagation. Hollow metallic waveguides filled with a homogeneous, isotropic material support TE and TM modes but not the TEM mode. In coaxial cable, energy is normally transported in the fundamental TEM mode; the TEM mode is usually assumed for most other electrical conductor line formats as well.
In an optical fiber or other dielectric waveguide, modes are generally of the hybrid type. In circular waveguides, circular modes exist, where m is the number of full-wave patterns along the circumference and n is the number of half-wave patterns along the diameter. In a laser with cylindrical symmetry, the transverse mode patterns are described by a combination of a Gaussian beam profile with a Laguerre polynomial; the modes are denoted TEMpl, where p and l are integers labeling the radial and angular mode orders, respectively.

36.
Beam splitter
–
A beam splitter is an optical device that splits a beam of light in two. It is a crucial part of most interferometers. In its most common form, a cube, it is made from two triangular glass prisms which are glued together at their base using polyester, epoxy, or urethane-based adhesives. The thickness of the resin layer is adjusted such that half of the light incident through one port is reflected and the other half transmitted. Polarizing beam splitters, such as the Wollaston prism, use birefringent materials. Another design is the use of a half-silvered mirror: a sheet of glass or plastic with a transparently thin coating of metal, now usually aluminium deposited from aluminium vapor. The thickness of the deposit is controlled so that part of the light incident at a 45-degree angle and not absorbed by the coating is transmitted, and the remainder reflected. A very thin half-silvered mirror used in photography is often called a pellicle mirror. To reduce loss of light due to absorption by the reflective coating, so-called Swiss-cheese beam-splitter mirrors have been used; originally, these were sheets of highly polished metal perforated with holes to obtain the desired ratio of reflection to transmission. Instead of a metallic coating, a dichroic optical coating may be used; depending on its characteristics, the ratio of reflection to transmission will vary as a function of the wavelength of the incident light. Dichroic mirrors are used in some ellipsoidal reflector spotlights to split off unwanted infrared radiation, and as output couplers in laser construction. A third version of the beam splitter is a dichroic mirrored prism assembly which uses dichroic optical coatings to divide an incoming light beam into a number of spectrally distinct output beams. Such a device was used in color television cameras and the three-strip Technicolor movie camera; it is currently used in modern three-CCD cameras. Beam splitters with single-mode fiber for PON networks use the single-mode behavior to split the beam.
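Power splitting in a PON comes at a quantifiable cost: an ideal 1×N splitter divides the optical power N ways, so each branch incurs at least 10·log10(N) dB of loss. The following sketch uses that standard relation (the function name is ours; real devices add a small excess loss on top):

```python
import math

# Sketch: minimum splitting loss of an ideal 1xN PON power splitter.
# Real splitters add a small "excess loss" beyond this ideal figure.
def splitting_loss_db(n_branches: int) -> float:
    return 10 * math.log10(n_branches)

loss_1x32 = splitting_loss_db(32)  # a 1x32 split costs a bit over 15 dB
```

This is why the optical power budget, not just fiber attenuation, limits how many subscribers can share one PON feeder.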
The split is done by physically splicing two fibers together as an X. A beam splitter that consists of a glass plate with a reflective dielectric coating on one side gives a phase shift of 0 or π, depending on the side from which it is incident: transmitted waves have no phase shift; reflected waves entering from the reflective side are phase-shifted by π, whereas reflected waves entering from the glass side have no phase shift. This is the behavior at a transition from a lower- to a higher-index medium; it does not apply to partial reflection by conductive (metallic) coatings, where other phase shifts occur in all paths. Consider a classical lossless beam splitter with electric fields Ea and Eb incident at its two inputs. The two output fields Ec and Ed are linearly related to the inputs through a 2×2 beam-splitter transfer matrix, whose elements r and t are the reflectance and transmittance along a particular path through the beam splitter. Expanding, each r and t can be written as a complex number having an amplitude and a phase factor.
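For a lossless beam splitter, the 2×2 transfer matrix must be unitary, which is what enforces |r|² + |t|² = 1 and the π phase relation between the reflected paths. A minimal numerical check, assuming the common symmetric 50:50 convention that puts a π/2 phase shift on reflection (the matrix layout and helper are illustrative, not from the article):

```python
# Sketch: verify unitarity of a symmetric 50:50 beam-splitter matrix.
r = 1j / 2 ** 0.5  # reflection amplitude, carries a pi/2 phase shift
t = 1 / 2 ** 0.5   # transmission amplitude

M = [[t, r],
     [r, t]]       # maps inputs (Ea, Eb) to outputs (Ec, Ed)

def is_unitary(m, tol=1e-12):
    """Check M @ M^dagger == identity for a 2x2 complex matrix."""
    for i in range(2):
        for j in range(2):
            s = sum(m[i][k] * m[j][k].conjugate() for k in range(2))
            if abs(s - (1.0 if i == j else 0.0)) > tol:
                return False
    return True
```

Unitarity expresses energy conservation: whatever power enters the two input ports must leave through the two output ports.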

37.
Distribution frame
–
In telecommunications, a distribution frame is a passive device which terminates cables, allowing arbitrary interconnections to be made. For example, the Main Distribution Frame (MDF) located at a central office terminates the cables leading to subscribers on the one hand, and the cables leading to the switching equipment on the other. Service is provided to a subscriber by manually wiring a twisted-pair jumper between the subscriber's line and the relevant DSL or POTS line circuit. Connections can either be soldered or made using terminal blocks. In broadcasting, because the frame may carry live broadcast signals, it may be considered part of the airchain. In data communication, a distribution frame houses data switches. In major installations, audio distribution frames can have as many as 10,000 incoming and outgoing separate copper wires; telephone signals do not use a separate earth ground wire, but some urban exchanges have about 250,000 wires on their MDF. Installing and rewiring these jumpers is a labour-intensive task, leading to attempts in the industry to devise so-called active distribution frames, or Automated Main Distribution Frames; the principal issue standing in the way of their widespread adoption is cost. Newer digital mixing consoles can act as control points for a distribution frame or router, which can handle audio from multiple studios at the same time. Multiple smaller frames, such as one for each studio, can be linked together with fibre optics; this has the advantage of not having to route dozens of feeds through walls to a single point. See also: Main distribution frame, Intermediate distribution frame, Wiring closet, Patch panel, Splicebox.

38.
Point-to-multipoint communication
–
Point-to-multipoint is often abbreviated as P2MP, PTMP, or PMP. Point-to-multipoint telecommunications is most typically used in wireless Internet and IP telephony via gigahertz radio frequencies. P2MP systems have been designed as both single- and bi-directional systems: a central antenna or antenna array broadcasts to several receiving antennas. Point-to-multipoint is the most popular approach for wireless communications that have a large number of nodes, end destinations or end users. It generally assumes there is a central base station to which remote Subscriber Units (SUs) or Customer Premises Equipment (CPE) connect. Connections between the base station and subscriber units can be either line-of-sight or, for lower-frequency radio systems, non-line-of-sight where link budgets permit; generally, lower frequencies can offer non-line-of-sight connections. Various software planning tools can be used to determine the feasibility of potential connections using topographic data as well as link budget simulation. Often point-to-multipoint links are installed to reduce the cost of infrastructure and increase the number of CPEs. Point-to-multipoint wireless networks employing directional antennas are affected by the hidden node problem when they employ a CSMA/CA medium access control protocol; the negative impact of the hidden node problem can be mitigated using a TDMA-based protocol or a polling protocol rather than CSMA/CA. The telecommunications signal in a point-to-multipoint system is typically bi-directional, either time-division multiple access or channelized. Systems using frequency-division duplexing offer full-duplex connections between base station and remote sites, while time-division duplex systems offer half-duplex connections. Point-to-multipoint systems can be implemented in licensed, semi-licensed or unlicensed frequency bands, depending on the specific application.
The base station may have a single omnidirectional antenna or multiple sector antennas, which are used to increase both range and capacity. See also: All-to-all communication, Multipoint microwave distribution system, Point-to-point, Wireless Communications, Wireless Access Point, List of emerging technologies, Backhaul.

Older Ethernet equipment. Clockwise from top-left: An Ethernet transceiver with an in-line 10BASE2 adapter, a similar model transceiver with a 10BASE5 adapter, an AUI cable, a different style of transceiver with 10BASE2 BNC T-connector, two 10BASE5 end fittings (N connectors), an orange "vampire tap" installation tool (which includes a specialized drill bit at one end and a socket wrench at the other), and an early model 10BASE5 transceiver (H4000) manufactured by DEC. The short length of yellow 10BASE5 cable has one end fitted with an N connector and the other end prepared to have an N connector shell installed; the half-black, half-grey rectangular object through which the cable passes is an installed vampire tap.

A laser beam used for welding.

Laser beams in fog, reflected on a car windshield

A helium–neon laser demonstration at the Kastler-Brossel Laboratory at Univ. Paris 6. The pink-orange glow running through the center of the tube is from the electric discharge which produces incoherent light, just as in a neon tube. This glowing plasma is excited and then acts as the gain medium through which the internal beam passes, as it is reflected between the two mirrors. Laser output through the front mirror can be seen to produce a tiny (about 1 mm in diameter) intense spot on the screen, to the right. Although it is a deep and pure red color, spots of laser light are so intense that cameras are typically overexposed and distort their color.

A coaxial cable used to carry cable television onto subscribers' premises

The bottom product is a set-top box, an electronic device which cable subscribers use to connect the cable signal to their television set.

A cable television distribution box (left) in the basement of a building in Germany, with a splitter (right) which supplies the signal to separate cables which go to different rooms

Diagram of a modern hybrid fiber-coaxial cable television system. At the regional headend, the TV channels are sent multiplexed on a light beam which travels through optical fiber trunklines, which fan out from distribution hubs to optical nodes in local communities. Here the light signal from the fiber is translated to a radio frequency electrical signal, which is distributed through coaxial cable to individual subscriber homes.

Old and new style demarcation points in a Canadian home built in 1945. A DSL splitter has been plugged into the modern demarc (on the right). One line passes through a DSL filter before going to the old demarc, and from there to the remainder of the house.

20th century Bell Canada demarcation point for a single phone line. The spark-gap surge protectors on the left have hex heads for easy removal. Near center, a post is connected to the house's ground. The other two connection points are for tip and ring. A test jack can be attached nearby so the home-owner or tenant can verify whether the inside wiring is at fault. Modern Bell Canada demarcation points are network interface devices

A schematic illustrating how FTTX architectures vary with regard to the distance between the optical fiber and the end user. The building on the left is the central office; the building on the right is one of the buildings served by the central office. Dotted rectangles represent separate living or office spaces within the same building.