The interior of a cathode-ray tube for use in an oscilloscope. 1. Deflection voltage electrode; 2. Electron gun; 3. Electron beam; 4. Focusing coil; 5. Phosphor-coated inner side of the screen

A Tektronix model 475A portable analog oscilloscope, a typical instrument of the late 1970s

A modern Siglent SHS800 handheld digital storage oscilloscope (DSO) using an LCD for its display

An oscilloscope displaying capacitor discharge

An oscilloscope, previously called an oscillograph,[1][2] and informally known as a scope or o-scope, CRO (for cathode-ray oscilloscope), or DSO (for the more modern digital storage oscilloscope), is a type of electronic test instrument that allows observation of varying signal voltages, usually as a two-dimensional plot of one or more signals as a function of time. Other signals (such as sound or vibration) can be converted to voltages and displayed.

Oscilloscopes are used to observe the change of an electrical signal over time, such that voltage and time describe a shape which is continuously graphed against a calibrated scale. The observed waveform can be analyzed for such properties as amplitude, frequency, rise time, time interval, distortion and others. Modern digital instruments may calculate and display these properties directly. Originally, calculation of these values required manually measuring the waveform against the scales built into the screen of the instrument.[3]

The oscilloscope can be adjusted so that repetitive signals can be observed as a continuous shape on the screen. A storage oscilloscope allows single events to be captured by the instrument and displayed for a relatively long time, allowing observation of events too fast to be directly perceptible.

Oscilloscopes are used in the sciences, medicine, engineering, automotive and the telecommunications industry. General-purpose instruments are used for maintenance of electronic equipment and laboratory work. Special-purpose oscilloscopes may be used for such purposes as analyzing an automotive ignition system or to display the waveform of the heartbeat as an electrocardiogram.

Early oscilloscopes used cathode ray tubes (CRTs) as their display element (hence they were commonly referred to as CROs) and linear amplifiers for signal processing. Storage oscilloscopes used special storage CRTs to maintain a steady display of a single brief signal. CROs were later largely superseded by digital storage oscilloscopes (DSOs) with thin panel displays, fast analog-to-digital converters and digital signal processors. DSOs without integrated displays (sometimes known as digitisers) are available at lower cost and use a general-purpose digital computer to process and display waveforms.

The basic oscilloscope, as shown in the illustration, is typically divided into four sections: the display, vertical controls, horizontal controls and trigger controls. The display is usually a CRT (historically) or LCD panel which is laid out with both horizontal and vertical reference lines referred to as the graticule. CRT displays are additionally equipped with three controls: focus, intensity, and beam finder.

The vertical section controls the amplitude of the displayed signal. This section carries a Volts-per-Division (Volts/Div) selector knob, an AC/DC/Ground selector switch and the vertical (primary) input for the instrument. Additionally, this section is typically equipped with the vertical beam position knob.

The horizontal section controls the time base or "sweep" of the instrument. The primary control is the Seconds-per-Division (Sec/Div) selector switch. Also included is a horizontal input for plotting dual X-Y axis signals. The horizontal beam position knob is generally located in this section.

The trigger section controls the start event of the sweep. The trigger can be set to automatically restart after each sweep or it can be configured to respond to an internal or external event. The principal controls of this section will be the source and coupling selector switches. An external trigger input (EXT Input) and level adjustment will also be included.

In addition to the basic instrument, most oscilloscopes are supplied with a probe as shown. The probe connects to any input on the instrument and typically contains a resistor of nine times the oscilloscope's input impedance. This results in a 10:1 (10X) attenuation factor, which helps to isolate the capacitive load presented by the probe cable from the signal being measured. Some probes have a switch allowing the operator to bypass the resistor when appropriate.[3]

Most modern oscilloscopes are lightweight, portable instruments that are compact enough to be easily carried by a single person. In addition to the portable units, the market offers a number of miniature battery-powered instruments for field service applications. Laboratory grade oscilloscopes, especially older units which use vacuum tubes, are generally bench-top devices or may be mounted into dedicated carts. Special-purpose oscilloscopes may be rack-mounted or permanently mounted into a custom instrument housing.

The signal to be measured is fed to one of the input connectors, which is usually a coaxial connector such as a BNC or UHF type. Binding posts or banana plugs may be used for lower frequencies.
If the signal source has its own coaxial connector, then a simple coaxial cable is used; otherwise, a specialized cable called a "scope probe", supplied with the oscilloscope, is used. In general, for routine use, an open wire test lead for connecting to the point being observed is not satisfactory, and a probe is generally necessary.
General-purpose oscilloscopes usually present an input impedance of 1 megohm in parallel with a small but known capacitance such as 20 picofarads.[4] This allows the use of standard oscilloscope probes.[5] Scopes for use with very high frequencies may have 50‑ohm inputs, which must be either connected directly to a 50‑ohm signal source or used with Z0 or active probes.
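The practical effect of a 1 megohm input shunted by about 20 pF can be sketched with a short calculation (the frequencies below are chosen purely for illustration):

```python
import math

def parallel_rc_impedance(r_ohms, c_farads, f_hz):
    """Magnitude of the impedance of a resistor in parallel with a capacitor."""
    xc = 1.0 / (2 * math.pi * f_hz * c_farads)   # capacitive reactance
    # |Z| of R in parallel with the reactance jXc
    return (r_ohms * xc) / math.sqrt(r_ohms**2 + xc**2)

# Typical general-purpose scope input: 1 megohm in parallel with 20 pF
for f in (1e3, 1e6, 100e6):
    z = parallel_rc_impedance(1e6, 20e-12, f)
    print(f"{f:>12.0f} Hz: |Z| = {z:,.0f} ohms")
```

At audio frequencies the input looks like the full 1 megohm, but at 100 MHz the 20 pF dominates and the input impedance falls to under 100 ohms, which is why very-high-frequency scopes use 50‑ohm inputs instead.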

Less-frequently-used inputs include one (or two) for triggering the sweep, horizontal deflection for X‑Y mode displays, and trace brightening/darkening, sometimes called z‑axis inputs.

Open wire test leads (flying leads) are likely to pick up interference, so they are not suitable for low level signals. Furthermore, the leads have a high inductance, so they are not suitable for high frequencies. Using a shielded cable (i.e., coaxial cable) is better for low level signals. Coaxial cable also has lower inductance, but it has higher capacitance: a typical 50 ohm cable has about 90 pF per meter. Consequently, a one-meter direct (1X) coaxial probe will load a circuit with a capacitance of about 110 pF and a resistance of 1 megohm.

To minimize loading, attenuator probes (e.g., 10X probes) are used. A typical probe uses a 9 megohm series resistor shunted by a low-value capacitor to make an RC compensated divider with the cable capacitance and scope input. The RC time constants are adjusted to match. For example, the 9 megohm series resistor is shunted by a 12.2 pF capacitor for a time constant of 110 microseconds. The cable capacitance of 90 pF in parallel with the scope input of 20 pF and 1 megohm (total capacitance 110 pF) also gives a time constant of 110 microseconds. In practice, there will be an adjustment so the operator can precisely match the low frequency time constant (called compensating the probe). Matching the time constants makes the attenuation independent of frequency. At low frequencies (where the resistance of R is much less than the reactance of C), the circuit looks like a resistive divider; at high frequencies (resistance much greater than reactance), the circuit looks like a capacitive divider.[6]
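The component values quoted above can be checked directly; a minimal sketch of the arithmetic:

```python
# Check the 10X probe compensation numbers from the text: a 9 megohm series
# resistor shunted by 12.2 pF, driving a load of 1 megohm in parallel with
# 110 pF (90 pF cable capacitance + 20 pF scope input).
R1, C1 = 9e6, 12.2e-12       # probe series arm
R2, C2 = 1e6, 110e-12        # cable + scope input side

tau1 = R1 * C1               # time constant of the series arm
tau2 = R2 * C2               # time constant of the input side
print(f"tau1 = {tau1*1e6:.1f} us, tau2 = {tau2*1e6:.1f} us")

# With the time constants matched, attenuation is independent of frequency:
atten_lf = R2 / (R1 + R2)    # resistive divider at low frequency
atten_hf = C1 / (C1 + C2)    # capacitive divider at high frequency
print(f"LF attenuation {atten_lf:.3f}, HF attenuation {atten_hf:.3f}")
```

Both time constants come out near 110 microseconds, and both the resistive and capacitive dividers give an attenuation of about 0.1, i.e. the 10X ratio.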

The result is a frequency compensated probe for modest frequencies that presents a load of about 10 megohms shunted by 12 pF. Although such a probe is an improvement, it does not work when the time scale shrinks to several cable transit times (transit time is typically 5 ns). In that time frame, the cable looks like its characteristic impedance, and there will be reflections from the transmission line mismatch at the scope input and the probe that causes ringing.[7] The modern scope probe uses lossy low capacitance transmission lines and sophisticated frequency shaping networks to make the 10X probe perform well at several hundred megahertz. Consequently, there are other adjustments for completing the compensation.[8][9]

Probes with 10:1 attenuation are by far the most common; for large signals (and slightly-less capacitive loading), 100:1 probes may be used. There are also probes that contain switches to select 10:1 or direct (1:1) ratios, but this setting has significant capacitance (tens of pF) at the probe tip, because the whole cable's capacitance is now directly connected.

Most oscilloscopes allow for probe attenuation factors, displaying the effective sensitivity at the probe tip. Historically, some auto-sensing circuitry used indicator lamps behind translucent windows in the panel to illuminate different parts of the sensitivity scale. To do so, the probe connectors (modified BNCs) had an extra contact to define the probe's attenuation. (A certain value of resistor, connected to ground, "encodes" the attenuation.) Because probes wear out, and because the auto-sensing circuitry is not compatible between different makes of oscilloscope, auto-sensing probe scaling is not foolproof. Likewise, manually setting the probe attenuation is prone to user error; it is a common mistake to have the probe scaling set incorrectly, and the resulting reading is then wrong by a factor of 10.

There are special high voltage probes which also form compensated attenuators with the oscilloscope input; the probe body is physically large, and some require partly filling a canister surrounding the series resistor with volatile liquid fluorocarbon to displace air. At the oscilloscope end is a box with several waveform-trimming adjustments. For safety, a barrier disc keeps one's fingers distant from the point being examined. Maximum voltage is in the low tens of kV. (Observing a high voltage ramp can create a staircase waveform with steps at different points every repetition, until the probe tip is in contact. Until then, a tiny arc charges the probe tip, and its capacitance holds the voltage (open circuit). As the voltage continues to climb, another tiny arc charges the tip further.)

There are also current probes, with cores that surround the conductor carrying the current to be examined. One type has a hole for the conductor and requires that the wire be passed through the hole; these are for semi-permanent or permanent mounting. Other types, intended for testing, have a two-part core that permits them to be placed around a wire. Inside the probe, a coil wound around the core provides a current into an appropriate load, and the voltage across that load is proportional to the current. However, this type of probe can sense AC only.

A more-sophisticated probe includes a magnetic flux sensor (Hall effect sensor) in the magnetic circuit. The probe connects to an amplifier, which feeds (low frequency) current into the coil to cancel the sensed field; the magnitude of that current provides the low-frequency part of the current waveform, right down to DC. The coil still picks up high frequencies. There is a combining network akin to a loudspeaker crossover network.

This control adjusts CRT focus to obtain the sharpest, most-detailed trace. In practice, focus needs to be adjusted slightly when observing quite-different signals, so it needs to be an external control. The adjustment is performed by the focusing anode in the cathode-ray tube. Flat-panel displays do not need focus adjustments and therefore do not include this control.

This adjusts trace brightness. Slow traces on CRT oscilloscopes need less, and fast ones, especially if not often repeated, require more brightness. On flat panels, however, trace brightness is essentially independent of sweep speed, because the internal signal processing effectively synthesizes the display from the digitized data.

This control may also be called "Shape" or "Spot Shape". It adjusts the relative voltages on two of the CRT anodes such that a displayed spot changes from an ellipse in one plane, through a circular spot, to an ellipse at 90 degrees to the first. This control may be absent from simpler oscilloscope designs or may even be an internal adjustment. It is not necessary with flat-panel displays.

Modern oscilloscopes have direct-coupled deflection amplifiers, which means the trace could be deflected off-screen. The beam might also be blanked without the operator knowing it. To help restore a visible display, the beam-finder circuit overrides any blanking and limits the beam deflection to the visible portion of the screen. Beam-finder circuits often distort the trace while activated.

The graticule is a grid of squares that serve as reference marks for measuring the displayed trace. These markings, whether located directly on the screen or on a removable plastic filter, usually consist of a 1 cm grid with closer tick marks (often at 2 mm) on the centre vertical and horizontal axis. One expects to see ten major divisions across the screen; the number of vertical major divisions varies. Comparing the grid markings with the waveform permits one to measure both voltage (vertical axis) and time (horizontal axis). Frequency can also be determined by measuring the waveform period and calculating its reciprocal.
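A frequency reading taken from the graticule can be sketched as follows (the division count and timebase setting here are hypothetical):

```python
# Hypothetical graticule reading: one full cycle of the waveform spans
# 4.0 horizontal divisions with the timebase set to 1 ms/div.
divisions_per_cycle = 4.0
seconds_per_division = 1e-3

period = divisions_per_cycle * seconds_per_division   # 4 ms
frequency = 1.0 / period                              # reciprocal of the period
print(f"T = {period*1e3:.1f} ms, f = {frequency:.0f} Hz")
```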

On old and lower-cost CRT oscilloscopes the graticule is a sheet of plastic, often with light-diffusing markings and concealed lamps at the edge of the graticule. The lamps had a brightness control. Higher-cost instruments have the graticule marked on the inside face of the CRT, to eliminate parallax errors; better ones also had adjustable edge illumination with diffusing markings. (Diffusing markings appear bright.) Digital oscilloscopes, however, generate the graticule markings on the display in the same way as the trace.

External graticules also protect the glass face of the CRT from accidental impact. Some CRT oscilloscopes with internal graticules have an unmarked tinted sheet plastic light filter to enhance trace contrast; this also serves to protect the faceplate of the CRT.

Accuracy and resolution of measurements using a graticule is relatively limited; better instruments sometimes have movable bright markers on the trace that permit internal circuits to make more refined measurements.

Both calibrated vertical sensitivity and calibrated horizontal time are set in 1 - 2 - 5 - 10 steps. This leads, however, to some awkward interpretations of minor divisions.

Digital oscilloscopes generate the graticule digitally, which means that the scale can vary, and accuracy of readings is much improved.

Computer model of the effect of increasing the timebase (time/division) setting.

These select the horizontal speed of the CRT's spot as it creates the trace; this process is commonly referred to as the sweep. In all but the least-costly modern oscilloscopes, the sweep speed is selectable and calibrated in units of time per major graticule division. Quite a wide range of sweep speeds is generally provided, from seconds to as fast as picoseconds (in the fastest) per division. Usually, a continuously-variable control (often a knob in front of the calibrated selector knob) offers uncalibrated speeds, typically slower than calibrated. This control provides a range somewhat greater than that of consecutive calibrated steps, making any speed available between the extremes.

Found on some better analog oscilloscopes, this varies the time (holdoff) during which the sweep circuit ignores triggers. It provides a stable display of some repetitive events in which some triggers would create confusing displays. It is usually set to minimum, because a longer time decreases the number of sweeps per second, resulting in a dimmer trace. See Holdoff for a more detailed description.

To accommodate a wide range of input amplitudes, a switch selects calibrated sensitivity of the vertical deflection. Another control, often in front of the calibrated-selector knob, offers a continuously-variable sensitivity over a limited range from calibrated to less-sensitive settings.

Often the observed signal is offset by a steady component, and only the changes are of interest. A switch (AC position) connects a capacitor in series with the input that passes only the changes (provided that they are not too slow; "slow" would mean visible). However, when the signal has a fixed offset of interest, or changes quite slowly, the input is connected directly (DC switch position). Most oscilloscopes offer the DC input option. For convenience, to see where zero volts input currently shows on the screen, many oscilloscopes have a third switch position (GND) that disconnects the input and grounds it. Often, in this case, the user centers the trace with the Vertical Position control.

Better oscilloscopes have a polarity selector. Normally, a positive input moves the trace upward, but this permits inverting—positive deflects the trace downward.

This control is found only on more elaborate oscilloscopes; it offers adjustable sensitivity for external horizontal inputs. It is only active when the instrument is in X-Y mode, that is, when the internal horizontal sweep is not in use.

The vertical position control moves the whole displayed trace up and down. It is used to set the no-input trace exactly on the center line of the graticule, but also permits offsetting vertically by a limited amount. With direct coupling, adjustment of this control can compensate for a limited DC component of an input.

Computer model of the effect of increasing the horizontal position (X offset) control.

The horizontal position control moves the display sideways. It usually sets the left end of the trace at the left edge of the graticule, but it can displace the whole trace when desired. This control also moves the X-Y mode traces sideways in some instruments, and can compensate for a limited DC component as for vertical position.

Each input channel usually has its own set of sensitivity, coupling, and position controls, although some four-trace oscilloscopes have only minimal controls for their third and fourth channels.

Dual-trace oscilloscopes have a mode switch to select either channel alone, both channels, or (in some) an X‑Y display, which uses the second channel for X deflection. When both channels are displayed, the type of channel switching can be selected on some oscilloscopes; on others, the type depends upon timebase setting. If manually selectable, channel switching can be free-running (asynchronous), or between consecutive sweeps. Some Philips dual-trace analog oscilloscopes had a fast analog multiplier, and provided a display of the product of the input channels.

Multiple-trace oscilloscopes have a switch for each channel to enable or disable display of that trace's signal.

These include controls for the delayed-sweep timebase, which is calibrated, and often also variable. The slowest speed is several steps faster than the slowest main sweep speed, although the fastest is generally the same. A calibrated multiturn delay time control offers wide range, high resolution delay settings; it spans the full duration of the main sweep, and its reading corresponds to graticule divisions (but with much finer precision). Its accuracy is also superior to that of the display.

A switch selects among display modes: main sweep only (with a brightened region showing when the delayed sweep is advancing), delayed sweep only, or (on some) a combination mode.

Good CRT oscilloscopes include a delayed-sweep intensity control, to allow for the dimmer trace of a much-faster delayed sweep that nevertheless occurs only once per main sweep. Such oscilloscopes also are likely to have a trace separation control for multiplexed display of both the main and delayed sweeps together.

A switch selects the Trigger Source. It can be an external input, one of the vertical channels of a dual or multiple-trace oscilloscope, or the AC line (mains) frequency. Another switch enables or disables Auto trigger mode, or selects single sweep, if provided in the oscilloscope. Either a spring-return switch position or a pushbutton arms single sweeps.

A Level control varies the voltage on the waveform which generates a trigger, and the Slope switch selects positive-going or negative-going polarity at the selected trigger level.

Type 465 Tektronix oscilloscope. This was a popular analog oscilloscope, portable, and is a representative example.

To display events with unchanging or slowly (visibly) changing waveforms, but occurring at times that may not be evenly spaced, modern oscilloscopes have triggered sweeps. Compared to simpler oscilloscopes with sweep oscillators that are always running, triggered-sweep oscilloscopes are markedly more versatile.

A triggered sweep starts at a selected point on the signal, providing a stable display. In this way, triggering allows the display of periodic signals such as sine waves and square waves, as well as nonperiodic signals such as single pulses, or pulses that do not recur at a fixed rate.

With triggered sweeps, the scope will blank the beam and start to reset the sweep circuit each time the beam reaches the extreme right side of the screen. For a period of time, called holdoff (extendable by a front-panel control on some better oscilloscopes), the sweep circuit resets completely and ignores triggers. Once holdoff expires, the next trigger starts a sweep. The trigger event is usually the input waveform reaching some user-specified threshold voltage (trigger level) in the specified direction (going positive or going negative—trigger polarity).

In some cases, variable holdoff time can be useful to make the sweep ignore interfering triggers that occur before the events to be observed. For repetitive but complex waveforms, variable holdoff can create a stable display that cannot otherwise be achieved.

Trigger holdoff defines a certain period following a trigger during which the scope will not trigger again. This makes it easier to establish a stable view of a waveform with multiple edges which would otherwise cause another trigger.[10]

As an example, consider a repeating waveform that crosses the trigger level on three rising edges in each cycle. If the scope were simply set to trigger on every rising edge, it would trigger three times per cycle. With a fairly high-frequency signal, the three resulting traces would be superimposed on the display; and since each trigger comes from the same channel, all three would be drawn in the same color.

To make the scope trigger on only one edge per cycle, the holdoff is set slightly less than the period of the waveform. That prevents it from triggering more than once per cycle, but still allows it to trigger on the first edge of the next cycle.
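The holdoff rule can be sketched numerically; the waveform period and edge times below are invented for illustration:

```python
# Sketch of how holdoff suppresses extra triggers, assuming a hypothetical
# waveform with three threshold-crossing rising edges per 1 ms cycle.
period = 1e-3
edge_offsets = (0.0, 0.2e-3, 0.5e-3)   # rising-edge times within one cycle
edges = [n * period + off for n in range(4) for off in edge_offsets]

def triggers_with_holdoff(edge_times, holdoff):
    """Accept an edge only if at least `holdoff` has passed since the last accepted one."""
    accepted, last = [], None
    for t in edge_times:
        if last is None or t - last >= holdoff:
            accepted.append(t)
            last = t
    return accepted

# Holdoff slightly less than the period: exactly one trigger per cycle.
print(len(triggers_with_holdoff(edges, 0.9 * period)))   # 4 cycles -> 4 triggers
# With no holdoff, every edge triggers.
print(len(triggers_with_holdoff(edges, 0.0)))            # all 12 edges
```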

Triggered sweeps can display a blank screen if there are no triggers. To avoid this, these sweeps include a timing circuit that generates free-running triggers so a trace is always visible. Once triggers arrive, the timer stops providing pseudo-triggers. Automatic sweep mode can be de-selected when observing low repetition rates.

If the input signal is periodic, the sweep repetition rate can be adjusted to display a few cycles of the waveform. Early (tube) oscilloscopes and lowest-cost oscilloscopes have sweep oscillators that run continuously, and are uncalibrated. Such oscilloscopes are very simple, comparatively inexpensive, and were useful in radio servicing and some TV servicing. Measuring voltage or time is possible, but only with extra equipment, and is quite inconvenient. They are primarily qualitative instruments.

They have a few (widely spaced) frequency ranges, and relatively wide-range continuous frequency control within a given range. In use, the sweep frequency is set to slightly lower than some submultiple of the input frequency, to display typically at least two cycles of the input signal (so all details are visible). A very simple control feeds an adjustable amount of the vertical signal (or possibly, a related external signal) to the sweep oscillator. The signal triggers beam blanking and a sweep retrace sooner than it would occur free-running, and the display becomes stable.

Some oscilloscopes offer these: the sweep circuit is manually armed (typically by a pushbutton or equivalent). "Armed" means the circuit is ready to respond to a trigger. Once the sweep is complete, it resets and will not sweep again until re-armed. This mode, combined with an oscilloscope camera, captures single-shot events.

Types of trigger include:

external trigger, a pulse from an external source connected to a dedicated input on the scope.

edge trigger, an edge-detector that generates a pulse when the input signal crosses a specified threshold voltage in a specified direction. This is the most common type of trigger; the level control sets the threshold voltage, and the slope control selects the direction (negative- or positive-going). (The first sentence of the description also applies to the inputs of some digital logic circuits; those inputs have fixed threshold and polarity response.)

video trigger, a circuit that extracts synchronizing pulses from video formats such as PAL and NTSC and triggers the timebase on every line, a specified line, every field, or every frame. This circuit is typically found in a waveform monitor device, although some better oscilloscopes include this function.

delayed trigger, which waits a specified time after an edge trigger before starting the sweep. As described under delayed sweeps, a trigger delay circuit (typically the main sweep) extends this delay to a known and adjustable interval. In this way, the operator can examine a particular pulse in a long train of pulses.
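The edge-trigger rule described above can be sketched on sampled data (a simplified model, not an actual scope implementation):

```python
# Minimal edge-trigger sketch: report the sample indices where the signal
# crosses `level` in the chosen direction, mimicking the level and slope
# controls. The waveform values below are made up for illustration.
def edge_triggers(samples, level, rising=True):
    hits = []
    for i in range(1, len(samples)):
        prev, cur = samples[i - 1], samples[i]
        if rising and prev < level <= cur:        # positive-going crossing
            hits.append(i)
        elif not rising and prev > level >= cur:  # negative-going crossing
            hits.append(i)
    return hits

wave = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0, 0.5, 1.0]
print(edge_triggers(wave, 0.4, rising=True))    # rising crossings of 0.4 V
print(edge_triggers(wave, 0.4, rising=False))   # falling crossings of 0.4 V
```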

Some recent designs of oscilloscopes include more sophisticated triggering schemes; these are described toward the end of this article.

More sophisticated analog oscilloscopes contain a second timebase for a delayed sweep. A delayed sweep provides a very detailed look at some small selected portion of the main timebase. The main timebase serves as a controllable delay, after which the delayed timebase starts. This can start when the delay expires, or can be triggered (only) after the delay expires. Ordinarily, the delayed timebase is set for a faster sweep, sometimes much faster, such as 1000:1. At extreme ratios, jitter in the delays on consecutive main sweeps degrades the display, but delayed-sweep triggers can overcome that.

The display shows the vertical signal in one of several modes: the main timebase, or the delayed timebase only, or a combination thereof. When the delayed sweep is active, the main sweep trace brightens while the delayed sweep is advancing. In one combination mode, provided only on some oscilloscopes, the trace changes from the main sweep to the delayed sweep once the delayed sweep starts, although less of the delayed fast sweep is visible for longer delays. Another combination mode multiplexes (alternates) the main and delayed sweeps so that both appear at once; a trace separation control displaces them.

DSOs allow waveforms to be displayed in this way, without offering a delayed timebase as such.

Oscilloscopes with two vertical inputs, referred to as dual-trace oscilloscopes, are extremely useful and commonplace.
Using a single-beam CRT, they multiplex the inputs, usually switching between them fast enough to display two traces apparently at once. Less common are oscilloscopes with more traces; four inputs are common among these, but a few (Kikusui, for one) offered a display of the sweep trigger signal if desired. Some multi-trace oscilloscopes use the external trigger input as an optional vertical input, and some have third and fourth channels with only minimal controls. In all cases, the inputs, when independently displayed, are time-multiplexed, but dual-trace oscilloscopes often can add their inputs to display a real-time analog sum. (Inverting one channel provides a difference, provided that neither channel is overloaded. This difference mode can provide a moderate-performance differential input.)

Switching channels can be asynchronous, that is, free-running, with trace blanking while switching, or after each horizontal sweep is complete. Asynchronous switching is usually designated "Chopped", while sweep-synchronized is designated "Alt[ernate]". A given channel is alternately connected and disconnected, leading to the term "chopped". Multi-trace oscilloscopes also switch channels either in chopped or alternate modes.

In general, chopped mode is better for slower sweeps. It is possible for the internal chopping rate to be a multiple of the sweep repetition rate, creating blanks in the traces, but in practice this is rarely a problem; the gaps in one trace are overwritten by traces of the following sweep. A few oscilloscopes had a modulated chopping rate to avoid this occasional problem. Alternate mode, however, is better for faster sweeps.

True dual-beam CRT oscilloscopes did exist, but were not common. One type (Cossor, U.K.) had a beam-splitter plate in its CRT, and single-ended deflection following the splitter. Others had two complete electron guns, requiring tight control of axial (rotational) mechanical alignment in manufacturing the CRT. Beam-splitter types had horizontal deflection common to both vertical channels, but dual-gun oscilloscopes could have separate time bases, or use one time base for both channels. Multiple-gun CRTs (up to ten guns) were made in past decades. With ten guns, the envelope (bulb) was cylindrical throughout its length. (Also see "CRT Invention" in Oscilloscope history.)

In an analog oscilloscope, the vertical amplifier acquires the signal[s] to be displayed. In better oscilloscopes, it delays them by a fraction of a microsecond, and provides a signal large enough to deflect the CRT's beam. That deflection is at least somewhat beyond the edges of the graticule, and more typically some distance off-screen. The amplifier has to have low distortion to display its input accurately (it must be linear), and it has to recover quickly from overloads. As well, its time-domain response has to represent transients accurately—minimal overshoot, rounding, and tilt of a flat pulse top.

A vertical input goes to a frequency-compensated step attenuator to reduce large signals to prevent overload. The attenuator feeds a low-level stage (or a few), which in turn feed gain stages (and a delay-line driver if there is a delay). Following are more gain stages, up to the final output stage which develops a large signal swing (tens of volts, sometimes over 100 volts) for CRT electrostatic deflection.

In dual and multiple-trace oscilloscopes, an internal electronic switch selects the relatively low-level output of one channel's amplifiers and sends it to the following stages of the vertical amplifier, which is only a single channel, so to speak, from that point on.

In free-running ("chopped") mode, the oscillator (which may be simply a different operating mode of the switch driver) blanks the beam before switching, and unblanks it only after the switching transients have settled.

Part way through the amplifier is a feed to the sweep trigger circuits, for internal triggering from the signal. This feed would be from an individual channel's amplifier in a dual or multi-trace oscilloscope, the channel depending upon the setting of the trigger source selector.

This feed precedes the delay (if there is one), which allows the sweep circuit to unblank the CRT and start the forward sweep, so the CRT can show the triggering event. High-quality analog delays add a modest cost to an oscilloscope, and are omitted in oscilloscopes that are cost-sensitive.

The delay, itself, comes from a special cable with a pair of conductors wound around a flexible, magnetically soft core. The coiling provides distributed inductance, while a conductive layer close to the wires provides distributed capacitance. The combination is a wideband transmission line with considerable delay per unit length. Both ends of the delay cable require matched impedances to avoid reflections.

Most modern oscilloscopes have several inputs for voltages, and thus can be used to plot one varying voltage versus another. This is especially useful for graphing I-V curves (current versus voltage characteristics) for components such as diodes, as well as Lissajous patterns. Lissajous figures are an example of how an oscilloscope can be used to track phase differences between multiple input signals. This is very frequently used in broadcast engineering to plot the left and right stereophonic channels, to ensure that the stereo generator is calibrated properly. Historically, stable Lissajous figures were used to show that two sine waves had a relatively simple frequency relationship, a numerically small ratio. They also indicated the phase difference between two sine waves of the same frequency.
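The Lissajous relationship can be sketched numerically. The following minimal Python example (function name and parameters are illustrative, not from any instrument's API) generates the points two sine inputs would trace in X-Y mode; with equal frequencies and a 90-degree phase difference, the figure is a circle.

```python
import math

def lissajous(freq_x, freq_y, phase_rad, n=1000):
    """Sample the X-Y figure traced by two sine inputs whose
    frequencies are in a small integer ratio (illustrative sketch)."""
    points = []
    for i in range(n):
        t = 2 * math.pi * i / n
        x = math.sin(freq_x * t + phase_rad)  # horizontal input
        y = math.sin(freq_y * t)              # vertical input
        points.append((x, y))
    return points

# Equal frequencies with a 90-degree phase difference trace a circle.
circle = lissajous(1, 1, math.pi / 2)
```

A 2:1 frequency ratio instead traces a figure-eight-like pattern, which is how simple frequency relationships were read off the screen.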

The X-Y mode also allows the oscilloscope to be used as a vector monitor to display images or user interfaces. Many early games, such as Tennis for Two, used an oscilloscope as an output device.[11]

Complete loss of signal in an X-Y CRT display means that the beam strikes a small spot, which risks burning the phosphor. Older phosphors burned more easily. Some dedicated X-Y displays reduce beam current greatly, or blank the display entirely, if there are no inputs present.

As with all practical instruments, oscilloscopes do not respond equally to all possible input frequencies. The range of frequencies an oscilloscope can usefully display is referred to as its bandwidth. Bandwidth applies primarily to the Y-axis, although the X-axis sweeps have to be fast enough to show the highest-frequency waveforms.

The bandwidth is defined as the frequency at which the sensitivity is 0.707 of that at DC or the lowest AC frequency (a drop of 3 dB).[12] The oscilloscope's response drops off rapidly as the input frequency is raised above that point. Within the stated bandwidth the response is not necessarily exactly uniform (or "flat"), but should always fall within a +0 to −3 dB range. One source[12] states that there is a noticeable effect on the accuracy of voltage measurements at only 20 percent of the stated bandwidth. Some oscilloscopes' specifications do include a narrower tolerance range within the stated bandwidth.
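As a rough illustration of this roll-off, a first-order low-pass model (an assumption for illustration; real scope responses are higher-order) shows why amplitude readings already sag noticeably well below the stated bandwidth:

```python
import math

def relative_response(f_hz, f_bw_hz):
    """Magnitude response of a first-order low-pass model, normalized
    to 1 at DC: an idealized stand-in for a scope's roll-off."""
    return 1.0 / math.sqrt(1.0 + (f_hz / f_bw_hz) ** 2)

# At the stated bandwidth the response is 0.707 (a drop of 3 dB).
at_bw = relative_response(100e6, 100e6)
# At only 20% of the bandwidth, amplitude already reads about 2% low.
at_20pct = relative_response(20e6, 100e6)
```

In this idealized model a 100 MHz scope reads a 20 MHz sine wave about 2 percent low, consistent with the accuracy effect noted above.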

Probes also have bandwidth limits and must be chosen and used to properly handle the frequencies of interest. To achieve the flattest response, most probes must be "compensated" (an adjustment performed using a test signal from the oscilloscope) to allow for the reactance of the probe's cable.

Another related specification is rise time. This is the duration of the fastest pulse that can be resolved by the scope. It is related to the bandwidth approximately by:

    bandwidth (Hz) × rise time (s) ≈ 0.35

For example, an oscilloscope intended to resolve pulses with a rise time of 1 nanosecond would have a bandwidth of 350 MHz.
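This rule of thumb (bandwidth in Hz times rise time in seconds is approximately 0.35, valid for instruments with a roughly Gaussian response) can be captured in a pair of helper functions; the names are illustrative:

```python
def bandwidth_from_rise_time(t_rise_s):
    """Rule of thumb for Gaussian-response scopes:
    bandwidth (Hz) ~= 0.35 / rise time (s)."""
    return 0.35 / t_rise_s

def rise_time_from_bandwidth(bw_hz):
    """Inverse of the same rule of thumb."""
    return 0.35 / bw_hz

bw = bandwidth_from_rise_time(1e-9)   # 1 ns rise time: 350 MHz
```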

In analog instruments, the bandwidth of the oscilloscope is limited by the vertical amplifiers and the CRT or other display subsystem. In digital instruments, the sampling rate of the analog to digital converter (ADC) is a factor, but the stated analog bandwidth (and therefore the overall bandwidth of the instrument) is usually less than the ADC's Nyquist frequency. This is due to limitations in the analog signal amplifier, deliberate design of the anti-aliasing filter that precedes the ADC, or both.

For a digital oscilloscope, a rule of thumb is that the continuous sampling rate should be ten times the highest frequency desired to resolve; for example a 20 megasample/second rate would be applicable for measuring signals up to about 2 megahertz. This allows the anti-aliasing filter to be designed with a 3 dB down point of 2 MHz and an effective cutoff at 10 MHz (the Nyquist frequency), avoiding the artifacts of a very steep ("brick-wall") filter.
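The sizing described above can be sketched directly (a minimal example; the helper names are illustrative, not from any instrument's specification):

```python
def required_sample_rate(f_max_hz, factor=10):
    """Rule of thumb from the text: sample at about 10x the highest
    frequency to be resolved, leaving room for a gentle anti-aliasing
    filter between f_max and the Nyquist frequency."""
    return factor * f_max_hz

def nyquist_frequency(sample_rate_hz):
    """Highest frequency representable without aliasing."""
    return sample_rate_hz / 2

rate = required_sample_rate(2e6)   # 2 MHz signals -> 20 MS/s
nyq = nyquist_frequency(rate)      # Nyquist frequency: 10 MHz
```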

A sampling oscilloscope can display signals of considerably higher frequency than the sampling rate if the signals are exactly, or nearly, repetitive. It does this by taking one sample from each successive repetition of the input waveform, each sample being at an increased time interval from the trigger event. The waveform is then displayed from these collected samples. This mechanism is referred to as "equivalent-time sampling".[14] Some oscilloscopes can operate in either this mode or in the more traditional "real-time" mode at the operator's choice.
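Equivalent-time sampling can be illustrated with a small simulation. This sketch (assumed names, idealized trigger) takes one sample per repetition of the waveform, each delayed a little more after the trigger, so the effective sample spacing is far finer than one sample per cycle:

```python
import math

def equivalent_time_sample(waveform, period_s, step_s, n_samples):
    """Sketch of equivalent-time sampling: take one sample per
    repetition of a periodic waveform, each delayed a little more
    after the trigger, so the effective spacing is step_s."""
    samples = []
    for k in range(n_samples):
        # The k-th sample is taken k repetitions later, plus k steps.
        t = k * period_s + k * step_s
        samples.append(waveform(t))
    return samples

def sig(t):
    """A 100 MHz sine: period 10 ns."""
    return math.sin(2 * math.pi * 100e6 * t)

# One sample per cycle with 0.1 ns effective spacing: 100 samples
# span exactly one reconstructed cycle of the 100 MHz waveform.
recon = equivalent_time_sample(sig, 10e-9, 0.1e-9, 100)
```

Even though only one sample is taken per repetition, the collected set traces out the waveform as though it had been sampled at 10 GS/s.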

Some oscilloscopes have cursors, which are lines that can be moved about the screen to measure the time interval between two points, or the difference between two voltages. A few older oscilloscopes simply brightened the trace at movable locations. These cursors are more accurate than visual estimates referring to graticule lines.

Better quality general purpose oscilloscopes include a calibration signal for setting up the compensation of test probes; this is (often) a 1 kHz square-wave signal of a definite peak-to-peak voltage available at a test terminal on the front panel. Some better oscilloscopes also have a squared-off loop for checking and adjusting current probes.

Sometimes the event that the user wants to see may only happen occasionally.
To catch these events, some oscilloscopes, known as "storage scopes", preserve the most recent sweep on the screen. This was originally achieved by using a special CRT, a "storage tube", which would retain the image of even a very brief event for a long time.

Some digital oscilloscopes can sweep at speeds as slow as once per hour, emulating a strip chart recorder.
That is, the signal scrolls across the screen from right to left. Most oscilloscopes with this facility switch from a sweep to a strip-chart mode at about one sweep per ten seconds. Otherwise, the instrument would appear to be malfunctioning: it is collecting data, but the moving dot cannot be seen.

Current oscilloscopes use digital signal sampling in all but the simplest models. Samples feed fast analog-to-digital converters, after which all signal processing (and storage) is digital.

Many oscilloscopes have different plug-in modules for different purposes, e.g., high-sensitivity amplifiers of relatively narrow bandwidth, differential amplifiers, amplifiers with four or more channels, sampling plugins for repetitive signals of very high frequency, and special-purpose plugins, including audio/ultrasonic spectrum analyzers, and stable-offset-voltage direct-coupled channels with relatively high gain.

Lissajous figures on an oscilloscope, with a 90-degree phase difference between the x and y inputs.

One of the most frequent uses of scopes is troubleshooting malfunctioning electronic equipment. One of the advantages of a scope is that it can graphically show signals: where a voltmeter may show a totally unexpected voltage, a scope may reveal that the circuit is oscillating. In other cases the precise shape or timing of a pulse is important.

In a piece of electronic equipment, for example, the connections between stages (e.g. electronic mixers, electronic oscillators, amplifiers) may be 'probed' for the expected signal, using the scope as a simple signal tracer. If the expected signal is absent or incorrect, some preceding stage of the electronics is not operating correctly. Since most failures occur because of a single faulty component, each measurement can prove that half of the stages of a complex piece of equipment either work, or probably did not cause the fault.

Once the faulty stage is found, further probing can usually tell a skilled technician exactly which component has failed. Once the component is replaced, the unit can be restored to service, or at least the next fault can be isolated. This sort of troubleshooting is typical of radio and TV receivers, as well as audio amplifiers, but can apply to quite-different devices such as electronic motor drives.

Another use is to check newly designed circuitry. Very often a newly designed circuit will misbehave because of design errors, bad voltage levels, electrical noise etc. Digital electronics usually operate from a clock, so a dual-trace scope which shows both the clock signal and a test signal dependent upon the clock is useful. Storage scopes are helpful for "capturing" rare electronic events that cause defective operation.

First appearing in the 1970s for ignition system analysis, automotive oscilloscopes are becoming an important workshop tool for testing sensors and output signals on electronic engine management systems, braking and stability systems. Some oscilloscopes can trigger and decode serial bus messages, such as the CAN bus commonly used in automotive applications.

For work at high frequencies and with fast digital signals, the bandwidth of the vertical amplifiers and the sampling rate must be high enough. For general-purpose use, a bandwidth of at least 100 MHz is usually satisfactory; for audio-frequency applications, a much lower bandwidth is sufficient.
A useful sweep range is from one second to 100 nanoseconds, with appropriate triggering and (for analog instruments) sweep delay. A well-designed, stable trigger circuit is required for a steady display. The chief benefit of a quality oscilloscope is the quality of the trigger circuit.[citation needed]

Key selection criteria of a DSO (apart from input bandwidth) are the sample memory depth and sample rate. Early DSOs in the mid- to late 1990s only had a few KB of sample memory per channel. This is adequate for basic waveform display, but does not allow detailed examination of the waveform or inspection of long data packets for example. Even entry-level (<$500) modern DSOs now have 1 MB or more of sample memory per channel, and this has become the expected minimum in any modern DSO.[citation needed] Often this sample memory is shared between channels, and can sometimes only be fully available at lower sample rates. At the highest sample rates, the memory may be limited to a few tens of KB.[15]
Any modern "real-time" DSO typically has a sample rate of 5–10 times its input bandwidth, so a 100 MHz bandwidth DSO would have a 500 MS/s to 1 GS/s sample rate. The theoretical minimum sample rate required, using sin(x)/x interpolation, is 2.5 times the bandwidth.[16]
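The sin(x)/x reconstruction mentioned here is Whittaker–Shannon interpolation. A naive sketch (O(n) per reconstructed point, illustrative only) shows a sine sampled at 2.5 times its frequency being recovered between the sample points:

```python
import math

def sinc_interpolate(samples, dt, t):
    """Naive Whittaker-Shannon (sin(x)/x) reconstruction of a
    band-limited signal from samples spaced dt seconds apart."""
    total = 0.0
    for n, s in enumerate(samples):
        x = (t - n * dt) / dt
        if x == 0.0:
            total += s                      # sinc(0) = 1
        else:
            total += s * math.sin(math.pi * x) / (math.pi * x)
    return total

# A 1 Hz sine sampled at 2.5 S/s, i.e. 2.5x its frequency.
dt = 0.4
samples = [math.sin(2 * math.pi * n * dt) for n in range(501)]
# Reconstruct a value between sample points, far from record edges.
mid = sinc_interpolate(samples, dt, 100.2)
```

Production instruments use far more efficient filters, but the principle of recovering inter-sample values from a modest oversampling ratio is the same.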

Analog oscilloscopes have been almost totally displaced by digital storage scopes except for use at lower frequencies. Greatly increased sample rates have largely eliminated the display of incorrect signals, known as "aliasing", that was sometimes present in the first generation of digital scopes. The problem can still occur when, for example, viewing a short section of a repetitive waveform that repeats at intervals thousands of times longer than the section viewed (for example, a short synchronization pulse at the beginning of a particular television line), with an oscilloscope that cannot store the extremely large number of samples between one instance of the short section and the next.
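Aliasing is easy to demonstrate numerically: sampled at 10 S/s, a 9 Hz sine produces exactly the same sample values as an inverted 1 Hz sine, so the two inputs are indistinguishable after sampling (an illustrative sketch):

```python
import math

fs = 10.0               # sample rate, samples per second
f_high = 9.0            # above the 5 Hz Nyquist limit, so it aliases
f_alias = fs - f_high   # folds down to 1 Hz

# Sample both a 9 Hz sine and an inverted 1 Hz sine at 10 S/s.
high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(50)]
low = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(50)]
# The two sample records are identical; after sampling, the scope
# cannot distinguish the 9 Hz input from a 1 Hz one.
```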

The used test equipment market, particularly on-line auction venues, typically has a wide selection of older analog scopes available. However it is becoming more difficult to obtain replacement parts for these instruments, and repair services are generally unavailable from the original manufacturer. Used instruments are usually out of calibration, and recalibration by companies with the equipment and expertise usually costs more than the second-hand value of the instrument.[citation needed]

At the low end, an inexpensive hobby-grade single-channel DSO could be purchased for under $90 as of June 2011. These often have limited bandwidth and other facilities, but fulfill the basic functions of an oscilloscope.

The earliest and simplest type of oscilloscope consisted of a cathode ray tube, a vertical amplifier, a timebase, a horizontal amplifier and a power supply. These are now called "analog" scopes to distinguish them from the "digital" scopes that became common in the 1990s and 2000s.

Analog scopes do not necessarily include a calibrated reference grid for size measurement of waves, and they may not display waves in the traditional sense of a line segment sweeping from left to right. Instead, they could be used for signal analysis by feeding a reference signal into one axis and the signal to measure into the other axis. For an oscillating reference and measurement signal, this results in a complex looping pattern referred to as a Lissajous curve. The shape of the curve can be interpreted to identify properties of the measurement signal in relation to the reference signal, and is useful across a wide range of oscillation frequencies.

The dual-beam analog oscilloscope can display two signals simultaneously. A special dual-beam CRT generates and deflects two separate beams. Multi-trace analog oscilloscopes can simulate a dual-beam display with chop and alternate sweeps, but those features do not provide truly simultaneous displays: the single beam cannot switch quickly enough between traces to capture two fast transient events on different channels. A dual-beam oscilloscope avoids this limitation. (Real-time digital oscilloscopes offer the same benefits as a dual-beam oscilloscope without requiring a dual-beam display.)

Trace storage is an extra feature available on some analog scopes, which use direct-view storage CRTs. Storage allows a trace pattern that normally decays in a fraction of a second to remain on the screen for several minutes or longer. An electrical circuit can then be deliberately activated to store and erase the trace on the screen.

While analog devices make use of continually varying voltages, digital devices employ binary numbers which correspond to samples of the voltage. In the case of digital oscilloscopes, an analog-to-digital converter (ADC) is used to change the measured voltages into digital information.

A Siglent SDS1000 Series Oscilloscope. A modern low cost DSO.

The digital storage oscilloscope, or DSO for short, is now the preferred type for most industrial applications, although simple analog CROs are still used by hobbyists. It replaces the electrostatic storage method used in analog storage scopes with digital memory, which can store data as long as required without degradation and with uniform brightness. It also allows complex processing of the signal by high-speed digital signal processing circuits.[3]

A standard DSO is limited to capturing signals with a bandwidth of less than half the sampling rate of the ADC (called the Nyquist limit). There is a variation of the DSO called the digital sampling oscilloscope that can exceed this limit for certain types of signal, such as high-speed communications signals, where the waveform consists of repeating pulses. This type of DSO deliberately samples at a much lower frequency than the Nyquist limit and then uses signal processing to reconstruct a composite view of a typical pulse. A similar technique, with analog rather than digital samples, was used before the digital era in analog sampling oscilloscopes.[17][18]

A digital phosphor oscilloscope (DPO) uses color information to convey information about a signal. It may, for example, display infrequent signal data in blue to make it stand out. In a conventional analog scope, such a rare trace may not be visible.

A mixed-signal oscilloscope (or MSO) has two kinds of inputs: a small number of analog channels (typically two or four) and a larger number of digital channels (typically sixteen). It provides the ability to accurately time-correlate analog and digital channels, thus offering a distinct advantage over a separate oscilloscope and logic analyzer. Typically, digital channels may be grouped and displayed as a bus, with each bus value displayed at the bottom of the display in hex or binary. On most MSOs, the trigger can be set across both analog and digital channels.

A mixed-domain oscilloscope (MDO) has an additional RF input port that feeds a spectrum analyzer section. It links those traditionally separate instruments, so that events in the time domain (such as a specific serial data packet) can be time-correlated with events in the frequency domain (such as RF transmissions).

Many hand-held and bench oscilloscopes have the ground reference voltage common to all input channels. If more than one measurement channel is used at the same time, all the input signals must have the same voltage reference, and the shared default reference is the "earth". If there is no differential preamplifier or external signal isolator, this traditional desktop oscilloscope is not suitable for floating measurements. (Occasionally an oscilloscope user will break the ground pin in the power supply cord of a bench-top oscilloscope in an attempt to isolate the signal common from the earth ground. This practice is unreliable since the entire stray capacitance of the instrument cabinet will be connected into the circuit. Since it is also a hazard to break a safety ground connection, instruction manuals strongly advise against this practice.)

Siglent Isolation Oscilloscope SHS1000 Series

Some models of oscilloscope have isolated inputs, where the signal reference level terminals are not connected together. Each input channel can be used to make a "floating" measurement with an independent signal reference level. Measurements can be made without tying one side of the oscilloscope input to the circuit signal common or ground reference.

A new type of oscilloscope is emerging that consists of a specialized signal acquisition board (which can be an external USB or parallel port device, or an internal add-on PCI or ISA card). The user interface and signal processing software runs on the user's computer, rather than on an embedded computer as in the case of a conventional DSO.

A large number of instruments used in a variety of technical fields are really oscilloscopes with inputs, calibration, controls, display calibration, etc., specialized and optimized for a particular application. Examples of such oscilloscope-based instruments include waveform monitors for analyzing video levels in television productions and medical devices such as vital function monitors and electrocardiogram and electroencephalogram instruments. In automobile repair, an ignition analyzer is used to show the spark waveforms for each cylinder. All of these are essentially oscilloscopes, performing the basic task of showing the changes in one or more input signals over time in an X‑Y display.

Other instruments convert the results of their measurements to a repetitive electrical signal, and incorporate an oscilloscope as a display element. Such complex measurement systems include spectrum analyzers, transistor analyzers, and time domain reflectometers (TDRs). Unlike an oscilloscope, these instruments automatically generate stimulus or sweep a measurement parameter.

The Braun tube was known in 1897, and in 1899 Jonathan Zenneck equipped it with beam-forming plates and a magnetic field for sweeping the trace.[19] Early cathode ray tubes had been applied experimentally to laboratory measurements as early as the 1920s, but suffered from poor stability of the vacuum and the cathode emitters. V. K. Zworykin described a permanently sealed, high-vacuum cathode ray tube with a thermionic emitter in 1931. This stable and reproducible component allowed General Radio to manufacture an oscilloscope that was usable outside a laboratory setting.[3]
After World War II, surplus electronic parts became the basis of the revival of the Heathkit Corporation, and a $50 oscilloscope kit made from such parts was its first market success.

^ Sampling Oscilloscope Techniques (PDF), Tektronix, 1989, Technique Primer 47W-7209, archived (PDF) from the original on 3 March 2016, retrieved 11 October 2012: "In 1960 Tektronix made it possible to measure signals over 100 MHz with the introduction of the first analog sampling oscilloscope."

1.
Oscilloscope history
–
This article discusses the history and development of oscilloscope technology. The modern day digital oscilloscope grew out of multiple developments of analog oscilloscopes, the oscillograph started as a hand drawn chart which was later slightly automated. This then grew into galvanometer driven recorders and photographic recorders, eventually, the cathode ray tube came along and displaced the oscillograph, eventually taking over the majority of the market when advancements such as triggers were added to them. However, the lives on to a degree in pen chart recorders for electrical signals. By slowly advancing around the rotor, a standing wave can be drawn on graphing paper by recording the degrees of rotation. This process was first partially automated by Jules François Joubert with his method of wave form measurement. This consisted of a special single-contact commutator attached to the shaft of a spinning rotor, the contact point could be moved around the rotor following a precise degree indicator scale and the output appearing on a galvanometer, to be hand-graphed by the technician. This process could produce a very rough waveform approximation since it was formed over a period of several thousand wave cycles. The first automated oscillographs used a galvanometer to move a pen across a scroll or drum of paper, the device known as the Hospitalier Ondograph was based on this method of wave form measurement. This was done with the development of the moving-coil oscillograph by William Duddell which in modern times is referred to as a mirror galvanometer. This reduced the measurement device to a mirror that could move at high speeds to match the waveform. Although the measurements were more precise than the built-up paper recorders. In the 1920s, a tiny tilting mirror attached to a diaphragm at the apex of a horn provided good response up to a few kHz, perhaps even 10 kHz. 
A time base, unsynchronized, was provided by a spinning mirror polygon, even earlier, audio applied to a diaphragm on the gas feed to a flame made the flame height vary, and a spinning mirror polygon gave an early glimpse of waveforms. Moving-paper oscillographs using UV-sensitive paper and advanced mirror galvanometers provided multi-channel recordings in the mid-20th century, frequency response was into at least the low audio range. Cathode ray tubes were developed in the late 19th century, at that time, the tubes were intended primarily to demonstrate and explore the physics of electrons. Karl Ferdinand Braun invented the CRT oscilloscope as a curiosity in 1897. Braun tubes were laboratory apparatus, using a cold-cathode emitter and very high voltages, with only vertical deflection applied to the internal plates, the face of the tube was observed through a rotating mirror to provide a horizontal time base

2.
Oscilloscope types
–
This is a subdivision of the Oscilloscope article, discussing the various types and models of oscilloscopes in greater detail. While analog devices make use of varying voltages, digital devices employ binary numbers which correspond to samples of the voltage. In the case of digital oscilloscopes, a converter is used to change the measured voltages into digital information. Waveforms are taken as a series of samples, the samples are stored, accumulating until enough are taken in order to describe the waveform, which are then reassembled for display. Digital technology allows the information to be displayed with brightness, clarity, there are, however, limitations as with the performance of any oscilloscope. The highest frequency at which the oscilloscope can operate is determined by the bandwidth of the front-end components of the instrument. Digital oscilloscopes can be classified into three categories, digital storage oscilloscopes, digital phosphor oscilloscopes, and digital sampling oscilloscopes. Newer variants include PC-based oscilloscopes and mixed-signal oscilloscopes, the digital storage oscilloscope, or DSO for short, is now the preferred type for most industrial applications. Instead of storage-type cathode ray tubes, DSOs use digital memory, a digital storage oscilloscope also allows complex processing of the signal by high-speed digital signal processing circuits. The vertical input is digitized by an analog to digital converter to create a set that is stored in the memory of a microprocessor. The data set is processed and then sent to the display, which in early DSOs was a cathode ray tube, DSOs with color LCD displays are common. The data set can be sent over a LAN or a WAN for processing or archiving, the screen image can be directly recorded on paper by means of an attached printer or plotter, without the need for an oscilloscope camera. 
Digital storage also makes possible another type of oscilloscope, the equivalent-time sample oscilloscope, instead of taking consecutive samples after the trigger event, only one sample is taken. However, the oscilloscope is able to vary its timebase to precisely time its sample and this requires that either a clock or repeating pattern be provided. This type of oscilloscope is used for very high speed communication because it allows for a very high sample rate. Digital oscilloscopes are limited principally by the performance of the input circuitry, the duration of the sample window. A disadvantage of digital oscilloscopes is the refresh rate of the screen. On an analog oscilloscope, the user can get a sense of the trigger rate simply by looking at the steadiness of the CRT trace

3.
Cathode ray tube
–
The cathode ray tube is a vacuum tube that contains one or more electron guns and a phosphorescent screen, and is used to display images. It modulates, accelerates, and deflects electron beam onto the screen to create the images, the images may represent electrical waveforms, pictures, radar targets, or others. CRTs have also used as memory devices, in which case the visible light emitted from the fluorescent material is not intended to have significant meaning to a visual observer. In television sets and computer monitors, the front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three beams, one for each additive primary color with a video signal as a reference. A CRT is constructed from an envelope which is large, deep, fairly heavy. The interior of a CRT is evacuated to approximately 0.01 Pa to 133 nPa. evacuation being necessary to facilitate the flight of electrons from the gun to the tubes face. That it is evacuated makes handling an intact CRT potentially dangerous due to the risk of breaking the tube and causing a violent implosion that can hurl shards of glass at great velocity. As a matter of safety, the face is made of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions. Flat panel displays can also be made in large sizes, whereas 38 to 40 was about the largest size of a CRT television, flat panels are available in 60. Cathode rays were discovered by Johann Hittorf in 1869 in primitive Crookes tubes and he observed that some unknown rays were emitted from the cathode which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, the earliest version of the CRT was known as the Braun tube, invented by the German physicist Ferdinand Braun in 1897. 
It was a diode, a modification of the Crookes tube with a phosphor-coated screen. In 1907, Russian scientist Boris Rosing used a CRT in the end of an experimental video signal to form a picture. He managed to display simple geometric shapes onto the screen, which marked the first time that CRT technology was used for what is now known as television. The first cathode ray tube to use a hot cathode was developed by John B. Johnson and Harry Weiner Weinhart of Western Electric and it was named by inventor Vladimir K. Zworykin in 1929. RCA was granted a trademark for the term in 1932, it released the term to the public domain in 1950. The first commercially made electronic television sets with cathode ray tubes were manufactured by Telefunken in Germany in 1934, in oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with television and other large CRTs

4.
Tektronix
–
Tek is an American company best known for manufacturing test and measurement devices such as oscilloscopes, logic analyzers, and video and mobile test protocol equipment. Originally an independent company, it is now a subsidiary of Fortive, several charities are or were associated with Tektronix, including the Tektronix Foundation and the M. J. Murdock Charitable Trust in Vancouver, Washington. The company traces its roots to the revolution that immediately followed World War II. The company’s founders C. Howard Vollum and Melvin J. Jack Murdock invented the world’s first triggered oscilloscope in 1946 and this oscilloscope touted by Tektronix was the model 511. The model 511 was a triggering with sweep oscilloscope, the first oscilloscope with a true time-base was the Tektronix Model 513. The leading oscilloscope manufacturer at the time was DuMont Laboratories, DuMont pioneered the frequency-synch trigger and sweep. Tektronix was incorporated in 1946 with its headquarters at SE Foster Road and SE 59th Avenue in Portland, in 1947 there were 12 employees, and 250 in 1951. By 1950 the company building a manufacturing facility in Washington County, Oregon, at Barnes Road and the Sunset Highway. The company then moved its headquarters to this site, following an employee vote, a detailed story of Howard Vollum and Jack Murdock along with the products that made Tektronix a leading maker of oscilloscopes can be found at the Museum of Vintage Tektronix Equipment. In 1956, a piece of property in nearby Beaverton became available. Construction began in 1957 and on May 1,1959 Tektronix moved into its new Beaverton headquarters campus, in the late 1950s, Tektronix set a new trend in oscilloscope applications that would continue into the 1980s. This was the introduction of the plug-in oscilloscope, started with the 530 and 540 series oscilloscopes, the operator could switch in different horizontal sweep or vertical input plug-ins. 
This allowed the oscilloscope to be a flexible or adaptable test instrument, later Tektronix would add in plug-ins to have the scope operate as a spectrum analyzer, waveform sampler, cable tester and transistor curve tracer. The 530 and 540 series also ushered in the delayed trigger and this allows more stable triggering and better waveform reproduction. In 1961, Tektronix sold its first completely portable oscilloscope, the model 321 and this oscilloscope could run on AC line or on rechargeable batteries. It also brought the oscilloscope into the transistor age, a year and a half later the model 321A came out and that was all transistors. The 560 series introduced the rectangular CRT to oscilloscopes, in 1964 Tektronix made an oscilloscope breakthrough, the worlds first mass-produced analog storage oscilloscope the model 564. Hughes Aircraft Company is credited with the first working storage oscilloscope, in 1966, Tektronix brought out a line of high frequency full function oscilloscopes called the 400 series

5.
Liquid-crystal display
–
A liquid-crystal display is a flat-panel display or other electronically modulated optical device that uses the light-modulating properties of liquid crystals. Liquid crystals do not emit light directly, instead using a backlight or reflector to produce images in color or monochrome and they use the same basic technology, except that arbitrary images are made up of a large number of small pixels, while other displays have larger elements. LCDs are used in a range of applications including computer monitors, televisions, instrument panels, aircraft cockpit displays. Small LCD screens are common in consumer devices such as digital cameras, watches, calculators. LCD screens are used on consumer electronics products such as DVD players, video game devices. LCD screens have replaced heavy, bulky cathode ray tube displays in all applications. LCD screens are available in a range of screen sizes than CRT and plasma displays, with LCD screens available in sizes ranging from tiny digital watches to huge. Since LCD screens do not use phosphors, they do not suffer image burn-in when an image is displayed on a screen for a long time. LCDs are, however, susceptible to image persistence, the LCD screen is more energy-efficient and can be disposed of more safely than a CRT can. Its low electrical power consumption enables it to be used in battery-powered electronic equipment more efficiently than CRTs can be, by 2008, annual sales of televisions with LCD screens exceeded sales of CRT units worldwide, and the CRT became obsolete for most purposes. Without the liquid crystal between the filters, light passing through the first filter would be blocked by the second polarizer. Before an electric field is applied, the orientation of the molecules is determined by the alignment at the surfaces of electrodes. In a twisted nematic device, the surface alignment directions at the two electrodes are perpendicular to other, and so the molecules arrange themselves in a helical structure. 
This twist induces the rotation of the polarization of the incident light, allowing it to pass through the second filter. When a sufficiently large voltage is applied, the helix is broken and the polarization is no longer rotated; the light is then mainly polarized perpendicular to the second filter, and thus blocked, and the pixel appears black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, producing different levels of gray. Color LCD systems use the same technique, with color filters used to generate red, green, and blue subpixels. The optical effect of a TN device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, TN displays with low information content and no backlighting are usually operated between crossed polarizers such that they appear bright with no voltage applied. For displays that must appear dark when no image is displayed, different arrangements are used: for this purpose, TN LCDs are operated between parallel polarizers, whereas IPS LCDs feature crossed polarizers.

6.
Electronic test equipment
–
Electronic test equipment is used to create signals and capture responses from electronic devices under test (DUTs). In this way, the operation of the DUT can be proven, or faults in the device can be traced. The use of electronic test equipment is essential to any work on electronics systems. Automatic test equipment (ATE) often includes many of these instruments in real and simulated forms. Generally, more advanced test gear is necessary when developing circuits; designing a test system's switching configuration requires an understanding of the signals to be switched and the tests to be performed, as well as the switching hardware form factors available. The following items are used for the measurement of voltages, currents, and other electrical quantities: voltmeter, ohmmeter, ammeter (e.g. galvanometer or milliammeter), multimeter (e.g. VOM or DMM), and RLC meter. These systems are widely employed for incoming inspection, quality assurance, and production testing of electronic devices and subassemblies. The General Purpose Interface Bus (GPIB) is an IEEE-488 standard parallel interface used for attaching sensors and programmable instruments to a computer. GPIB is a digital 8-bit parallel communications interface capable of achieving data transfers of more than 8 Mbytes/s. It allows daisy-chaining of up to 14 instruments to a system using a 24-pin connector. It is one of the most common I/O interfaces present in instruments and is designed specifically for instrument-control applications. The IEEE-488 specifications standardized this bus and defined its electrical, mechanical, and functional specifications, while also defining its basic software communication rules. GPIB works best for applications in industrial settings that require a connection for instrument control. The original GPIB standard was developed in the late 1960s by Hewlett-Packard to connect and control the programmable instruments the company manufactured; the introduction of digital controllers and programmable test equipment created a need for a standard, high-speed interface for communication between instruments and controllers from various vendors.
This standard was revised in 1978 and 1990. The IEEE-488.2 specification includes the Standard Commands for Programmable Instrumentation (SCPI), which ensures compatibility and configurability among these instruments. The IEEE-488 bus has long been popular because it is simple to use and takes advantage of a large selection of programmable instruments. Large systems, however, have the following limitations: driver fanout capacity limits the system to 14 devices plus a controller; cable length limits the distance to two meters per device or 20 meters total, whichever is less, which imposes transmission problems on systems spread out in a room or on systems that require remote measurements; and primary addresses limit the system to 30 devices with primary addresses.
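The bus limits above can be checked mechanically. The following sketch is a hypothetical helper (not part of any GPIB library) that encodes the fanout, addressing, and cable-length constraints quoted in the text:

```python
# Illustrative sketch (hypothetical helper, not part of any GPIB library):
# checking a proposed GPIB setup against the bus limits described above.
def gpib_setup_ok(device_count, total_cable_m):
    fanout_ok = device_count <= 14                         # 14 devices + controller
    address_ok = device_count <= 30                        # 30 primary addresses
    cable_ok = total_cable_m <= min(2 * device_count, 20)  # 2 m/device, 20 m max
    return fanout_ok and address_ok and cable_ok

print(gpib_setup_ok(8, 12))   # True: within all three limits
print(gpib_setup_ok(15, 10))  # False: exceeds the 14-device fanout limit
print(gpib_setup_ok(5, 18))   # False: 18 m exceeds 2 m x 5 devices
```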

7.
Voltage
–
Voltage, electric potential difference, electric pressure or electric tension is the difference in electric potential energy between two points per unit electric charge. The voltage between two points is equal to the work done per unit of charge against a static electric field to move a test charge between the two points. It is measured in units of volts. Voltage can be caused by static electric fields, by electric current through a magnetic field, by time-varying magnetic fields, or some combination of these three. A voltmeter can be used to measure the voltage between two points in a system; often a reference potential such as the ground of the system is used as one of the points. A voltage may represent either a source of energy (electromotive force) or lost, used, or stored energy (potential drop). Given two points in space, x_A and x_B, voltage is the difference in electric potential between those two points. Electric potential must be distinguished from electric potential energy by noting that the potential is a per-unit-charge quantity. Like mechanical potential energy, the zero of electric potential can be chosen at any point, so the difference in potential, i.e. the voltage, is the quantity which is physically meaningful. The voltage from point A to point B is equal to the work which would have to be done, per unit charge, against or by the electric field to move the charge from A to B. The voltage between the two ends of a path is the total energy required to move a small electric charge along that path, divided by the magnitude of the charge. Mathematically this is expressed as the line integral of the electric field along that path. In the general case, both a static electric field and a dynamic electromagnetic field must be included in determining the voltage between two points. Historically this quantity has also been called tension and pressure. Pressure is now obsolete, but tension is still used, for example within the phrase "high tension" (HT), which is commonly used in thermionic-valve-based electronics.
Voltage is defined so that negatively charged objects are pulled towards higher voltages; therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage. Current can flow from lower voltage to higher voltage, but only when a source of energy is present to push it against the opposing electric field. This is the case within any electric power source; for example, inside a battery, chemical reactions provide the energy needed for ion current to flow from the negative to the positive terminal. The electric field is not the only factor determining charge flow in a material, and the electric potential of a material is not even a well-defined quantity, since it varies on the subatomic scale. A more convenient definition of voltage can be found instead in the concept of the Fermi level; in this case the voltage between two bodies is the thermodynamic work required to move a unit of charge between them.
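The work-per-unit-charge definition above lends itself to a small numeric sketch. The two helper functions and their inputs are illustrative assumptions, not from the article; the second one evaluates the line integral for the special case of a uniform field over a straight path, where it reduces to V = E·d:

```python
# Illustrative sketch: voltage as work per unit charge, V = W / q, and as the
# line integral of a uniform electric field over a straight path, V = E * d.
def voltage_from_work(work_joules, charge_coulombs):
    """Potential difference in volts between two points."""
    return work_joules / charge_coulombs

def voltage_uniform_field(e_field_v_per_m, distance_m):
    """Line integral of a uniform field E over a straight path of length d."""
    return e_field_v_per_m * distance_m

# Moving 2 C of charge takes 24 J of work against the field -> 12 V.
print(voltage_from_work(24.0, 2.0))        # 12.0
# A uniform 100 V/m field across a 0.05 m gap -> 5 V.
print(voltage_uniform_field(100.0, 0.05))  # 5.0
```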

8.
Waveform
–
A waveform is the shape and form of a signal, such as a wave moving in a physical medium or an abstract representation of one. In many cases the medium in which the wave propagates does not permit a direct observation of the true form; in these cases, the term waveform refers to the shape of a graph of the varying quantity against time. An instrument called an oscilloscope can be used to represent a wave as a repeating image on a screen. To be more specific, a waveform is depicted by a graph that shows the changes in a signal's amplitude over the duration of a recording, with the amplitude of the signal measured on the y-axis. Most audio programs show waveforms to give the user a visual aid of what has been recorded. If the waveform is of low or high height, the recording was most likely conducted under conditions with a low or high input volume, respectively. From this example, it follows that the shape represented by the waveform is affected by both the input signal and the conditions under which it is recorded. Common periodic waveforms include the following, where t is time, λ is the wavelength (period), a is the amplitude, and φ is the phase:

Sine wave: a sin(2π(t − φ)/λ). The amplitude of the waveform follows a trigonometric sine function with respect to time.
Square wave: a when ((t − φ) mod λ) < duty·λ, and −a otherwise, where duty is the fraction of the period spent at the high level. This waveform is commonly used to represent digital information. A square wave of constant period contains odd harmonics that decrease at −6 dB/octave.
Triangle wave: (2a/π) arcsin(sin(2π(t − φ)/λ)). It contains odd harmonics that decrease at −12 dB/octave.
Sawtooth wave: (2a/π) arctan(tan(2π(t − φ)/(2λ))). This looks like the teeth of a saw, and is found often in the time bases of display scanning. It is used as the starting point for subtractive synthesis, as a sawtooth wave of constant period contains odd and even harmonics that decrease at −6 dB/octave.

Other waveforms are often called composite waveforms and can often be described as a combination of a number of sinusoidal waves or other basis functions added together.
The Fourier series describes the decomposition of periodic waveforms, such that any periodic waveform can be formed by the sum of a set of fundamental and harmonic components. Finite-energy non-periodic waveforms can be analyzed into sinusoids by the Fourier transform.
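The four periodic waveforms listed above can be sketched directly from their closed forms. The function names and the parameterization by amplitude a, period lam, and phase phi are illustrative choices following the text:

```python
import math

# Illustrative sketch of the periodic waveforms described above, parameterized
# by amplitude a, period lam, and phase phi.
def sine(t, a=1.0, lam=1.0, phi=0.0):
    return a * math.sin(2 * math.pi * (t - phi) / lam)

def square(t, a=1.0, lam=1.0, phi=0.0, duty=0.5):
    return a if ((t - phi) % lam) < duty * lam else -a

def triangle(t, a=1.0, lam=1.0, phi=0.0):
    return (2 * a / math.pi) * math.asin(math.sin(2 * math.pi * (t - phi) / lam))

def sawtooth(t, a=1.0, lam=1.0, phi=0.0):
    return (2 * a / math.pi) * math.atan(math.tan(2 * math.pi * (t - phi) / (2 * lam)))

# Spot checks: a triangle wave peaks (~1.0) at a quarter period; a square wave
# sits at +a for the first half of each period and -a for the second half;
# a sawtooth ramps linearly (~0.5 at a quarter period).
print(triangle(0.25), square(0.1), square(0.6), sawtooth(0.25))
```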

9.
Amplitude
–
The amplitude of a periodic variable is a measure of its change over a single period. There are various definitions of amplitude, all of which are functions of the magnitude of the differences between the variable's extreme values. In older texts the phase of a periodic function is sometimes called the amplitude. Peak-to-peak amplitude is the change between peak (highest value) and trough (lowest value); with appropriate circuitry, peak-to-peak amplitudes of electric oscillations can be measured by meters or by viewing the waveform on an oscilloscope. Peak-to-peak is a straightforward measurement on an oscilloscope, the peaks of the waveform being easily identified and measured against the graticule. This remains a common way of specifying amplitude, but sometimes other measures of amplitude are more appropriate. Peak amplitude is used in audio system measurements, telecommunications and other areas where the measurand is a signal that swings above and below a reference value but is not sinusoidal. If the reference is zero, this is the maximum absolute value of the signal; if the reference is a mean value, it is the maximum absolute value of the difference from that reference. Semi-amplitude means half the peak-to-peak amplitude; some scientists use amplitude or peak amplitude to mean semi-amplitude, that is, half the peak-to-peak amplitude. Semi-amplitude is the most widely used measure of orbital wobble in astronomy. Root mean square (RMS) amplitude is the square root of the mean over time of the square of the waveform's value. For complicated waveforms, especially non-repeating signals like noise, the RMS amplitude is used because it is both unambiguous and has physical significance. For example, the power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude. For alternating current electric power, the universal practice is to specify RMS values of a sinusoidal waveform. One property of root mean square voltages and currents is that they produce the same heating effect as direct current in a given resistance. The peak-to-peak value is used, for example, when choosing rectifiers for power supplies, or when estimating the maximum voltage that insulation must withstand.
Some common voltmeters are calibrated for RMS amplitude, but actually respond to the average value of a rectified waveform. Many digital voltmeters and all moving-coil meters are in this category; the RMS calibration is only correct for a sine wave input, since the ratio between peak, average and RMS values depends on the waveform. If the wave shape being measured is greatly different from a sine wave, the reading of such a meter is in error. True RMS-responding meters were used in radio frequency measurements, where instruments measured the heating effect in a resistor to measure a current. The advent of microprocessor-controlled meters capable of calculating RMS by sampling the waveform has made true RMS measurement commonplace.
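The relationship between the amplitude measures above, and the reason an average-responding meter needs a sine-only correction factor, can be shown numerically. This is an illustrative sketch, sampling one full period of a unit sine: its RMS is 1/√2 ≈ 0.707 and its rectified average is 2/π ≈ 0.637, so the meter must scale readings by their ratio, about 1.111 (the "form factor" of a sine):

```python
import math

# Illustrative sketch: peak, peak-to-peak, RMS, and rectified-average
# amplitude of a sampled sine wave.
def amplitudes(samples):
    peak = max(abs(s) for s in samples)
    peak_to_peak = max(samples) - min(samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    rectified_avg = sum(abs(s) for s in samples) / len(samples)
    return peak, peak_to_peak, rms, rectified_avg

n = 10000  # one full period, finely sampled
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
peak, pp, rms, avg = amplitudes(sine)

print(round(rms, 3))        # 0.707 (i.e. 1/sqrt(2))
print(round(rms / avg, 3))  # 1.111 (sine form factor, pi / (2*sqrt(2)))
```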

10.
Frequency
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of time of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the time interval between beats) is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a simple harmonic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for the period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for a wave y = sin(θ(x)) = sin(kx), the wavenumber k is given by dθ/dx = k. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ.
When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Frequency can be measured by counting the number of occurrences of the event within a specific time period and dividing the count by the length of the period; for example, if 71 events occur within 15 seconds the frequency is 71/15 ≈ 4.73 Hz. This latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval and f is the measured frequency. This error decreases with increasing frequency, so it is a problem at low frequencies where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope.
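The counting method and its gating error can be sketched with the numbers from the example above. The helper names are illustrative assumptions:

```python
# Illustrative sketch: frequency by counting events over a gate time Tm, and
# the average "gating error" of half a count described above.
def measured_frequency(event_count, gate_time_s):
    return event_count / gate_time_s

def gating_error_hz(gate_time_s):
    # Average error of half a count spread over the timing interval Tm.
    return 1.0 / (2.0 * gate_time_s)

f = measured_frequency(71, 15.0)   # 71 events in 15 s
df = gating_error_hz(15.0)
print(round(f, 3))       # 4.733 Hz
print(round(df, 4))      # 0.0333 Hz
print(round(df / f, 5))  # fractional error 1/(2 f Tm); shrinks as f grows
```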

11.
Distortion
–
Distortion is the alteration of the original shape of something, such as an object, image, sound or waveform. Distortion is usually unwanted, and so engineers strive to eliminate or minimize it. In some situations, however, distortion may be desirable: the important signal-processing operation of heterodyning is based on nonlinear mixing of signals to cause intermodulation, and distortion is also used as a musical effect, particularly with electric guitars. The addition of noise or other outside signals is not deemed distortion; a quality measure that explicitly reflects both the noise and the distortion is the signal-to-noise-and-distortion ratio (SINAD). In a distortion-free system, the output is simply a scaled copy of the input; distortion occurs when the transfer function F is more complicated than this. If F is a linear function, for instance a filter whose gain and/or delay varies with frequency, the signal suffers linear distortion. Linear distortion does not introduce new frequency components to a signal. The behaviour of a signal passed through various distorting functions can be illustrated by a set of traces. The first trace shows the input, which is also the output from a non-distorting transfer function. A high-pass filter distorts the shape of a pulse waveform by reducing its low-frequency components; this is the cause of the "droop" seen on the top of the pulses, and this pulse distortion can be very significant when a train of pulses must pass through an AC-coupled amplifier. As a sine wave contains only one frequency, its shape is unaltered by such a filter. A low-pass filter rounds the pulses by removing the high-frequency components; all systems are low-pass to some extent. Note that the phase of the sine wave is different for the lowpass and the highpass cases, due to the phase distortion of the filters. A slightly non-linear transfer function, one that gently compresses the peaks of the sine wave, generates small amounts of low-order harmonics.
A hard-clipping transfer function generates high-order harmonics; parts of the transfer function are flat, which indicates that all information about the input signal has been lost in these regions. The idealized transfer function of an amplifier is a straight line with perfect gain; the true behavior of the system is usually different. Amplitude distortion is distortion occurring in a system, subsystem, or device when the output amplitude is not a linear function of the input amplitude under specified conditions. Harmonic distortion adds overtones that are whole-number multiples of a sound wave's frequencies. Nonlinearities that give rise to amplitude distortion in audio systems are most often measured in terms of the harmonics added to a pure sine wave fed to the system. The level at which harmonic distortion becomes audible depends on the nature of the distortion; different types of distortion are more audible than others even if the THD measurements are identical.
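The harmonics created by hard clipping can be measured directly. The sketch below is illustrative (names and clip level are assumptions): it clips a unit sine symmetrically at ±0.5 and extracts harmonic amplitudes by direct Fourier correlation, showing that symmetric clipping produces only odd harmonics:

```python
import math

# Illustrative sketch: hard-clip a sine wave and measure the harmonics it
# creates, using a direct Fourier correlation (no external libraries).
def clip(x, limit=0.5):
    return max(-limit, min(limit, x))

n = 4096  # samples over exactly one period
clipped = [clip(math.sin(2 * math.pi * k / n)) for k in range(n)]

def harmonic_amplitude(samples, h):
    # Amplitude of the h-th harmonic over one period of the sampled signal.
    c = sum(s * math.cos(2 * math.pi * h * k / n) for k, s in enumerate(samples))
    d = sum(s * math.sin(2 * math.pi * h * k / n) for k, s in enumerate(samples))
    return 2 * math.sqrt(c * c + d * d) / n

for h in (1, 2, 3, 5):
    print(h, round(harmonic_amplitude(clipped, h), 4))
# Even harmonics (h = 2) come out ~0 for symmetric clipping; odd harmonics
# (h = 3, 5, ...) are nonzero -- the new frequency components the text describes.
```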

12.
Flat panel display
–
Flat-panel displays are far lighter and thinner than traditional cathode-ray-tube (CRT) television sets and video displays and are usually less than 10 centimetres thick. Flat-panel displays can be divided into two display device categories: volatile and static. Volatile displays require that pixels be periodically electronically refreshed to retain their state; a volatile display only shows an image when it has battery or AC mains power. Static flat-panel displays rely on materials whose color states are bistable, and as such retain the text or images on the screen even when the power is off. As of 2016, flat-panel displays have almost completely replaced old CRT displays, and most 2010s-era flat-panel displays use LCD and/or LED technologies. Most LCD screens are back-lit to make them easier to read or view in bright environments. Flat-panel displays are thin and lightweight, provide better linearity, and are capable of higher resolution than typical consumer-grade TVs from earlier eras; the highest resolution for consumer-grade CRT TVs was 1080i. Many touchscreen-enabled flat-panel devices can display a virtual QWERTY or numeric keyboard on the screen, to enable the user to type words or numbers. In many instances, a multifunction monitor (MFM) also includes a TV tuner. The first engineering proposal for a flat-panel TV was by General Electric as a result of its work on radar monitors. The publication of its findings gave all the basics of future flat-panel TVs, but GE did not continue with the R&D required and never built a working flat panel at that time. The first production flat-panel display was the Aiken tube, developed in the early 1950s; this saw some use in military systems as a head-up display, but conventional technologies overtook its development. Attempts to commercialize the system for television use ran into continued problems.
The Philco Predicta featured a relatively flat cathode-ray-tube setup and would become the first commercially released "flat panel" upon its launch in 1958. The plasma display panel was invented in 1964 at the University of Illinois, according to The History of Plasma Display Panels. The first active-matrix addressed display was made by T. Peter Brody's Thin-Film Devices department at Westinghouse Electric Corporation in 1968. In 1977, James P. Mitchell prototyped and later demonstrated what was perhaps the earliest monochromatic flat-panel LED television display. As of 2012, 50% of global market share in flat-panel display production was held by Taiwanese manufacturers such as AU Optronics. Liquid-crystal displays are lightweight, compact, portable, cheap, more reliable, and easier on the eyes than cathode-ray-tube screens. LCD screens use a thin layer of liquid crystal, a liquid that exhibits crystalline properties, sandwiched between two conducting plates. The top plate has transparent electrodes deposited on it, and the back plate is illuminated so that the viewer can see the images on the screen. By applying controlled electrical signals across the plates, various segments of the liquid crystal can be activated; these segments can either transmit or block light, and an image is produced by passing light through selected segments of the liquid crystal to the viewer.

13.
Analog-to-digital converter
–
In electronics, an analog-to-digital converter (ADC) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. Typically the digital output is a two's-complement binary number that is proportional to the input. Due to the complexity and the need for precisely matched components, all but the most specialized ADCs are implemented as integrated circuits. A digital-to-analog converter (DAC) performs the reverse function: it converts a digital signal into an analog signal. The conversion involves quantization of the input, so it necessarily introduces a small amount of error. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input. The result is a sequence of digital values that have been converted from a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal. An ADC is defined by its bandwidth and its signal-to-noise ratio; the bandwidth of an ADC is characterized primarily by its sampling rate. The dynamic range of an ADC is influenced by many factors, including the resolution, linearity and accuracy, aliasing and jitter, and is often summarized in terms of its effective number of bits (ENOB); an ideal ADC has an ENOB equal to its resolution. ADCs are chosen to match the bandwidth and required signal-to-noise ratio of the signal to be quantized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then perfect reconstruction is possible given an ideal ADC and neglecting quantization error. The presence of quantization error limits the dynamic range of even an ideal ADC; however, if the dynamic range of the ADC exceeds that of the input signal, its effects may be neglected, resulting in an essentially perfect digital representation of the input. The resolution of the converter indicates the number of discrete values it can produce over the range of analog values. The resolution determines the magnitude of the quantization error and therefore determines the maximum possible average signal-to-noise ratio for an ideal ADC without the use of oversampling.
The values are stored electronically in binary form, so the resolution is usually expressed in bits. In consequence, the number of discrete values available, or levels, is assumed to be a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one in 256 different levels, since 2^8 = 256. The values can represent the ranges from 0 to 255 or from −128 to 127. Resolution can also be defined electrically and expressed in volts: the minimum change in input voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage.
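The bit-count arithmetic above can be sketched as an idealized conversion. The function names and the full-scale range of 5.12 V are illustrative assumptions, chosen so the LSB comes out to a round 20 mV for 8 bits:

```python
# Illustrative sketch: LSB voltage and ideal unsigned quantization for an
# N-bit ADC with a given full-scale input range.
def lsb_voltage(full_scale_v, bits):
    return full_scale_v / (2 ** bits)

def adc_code(voltage, full_scale_v, bits):
    """Ideal unsigned conversion, clamped to the available code range."""
    levels = 2 ** bits
    code = int(voltage / full_scale_v * levels)
    return max(0, min(levels - 1, code))

# An 8-bit ADC over a 5.12 V range: 256 levels, LSB = 20 mV.
print(lsb_voltage(5.12, 8))     # 0.02
print(adc_code(2.56, 5.12, 8))  # 128: mid-scale input lands on the middle code
print(adc_code(10.0, 5.12, 8))  # 255: over-range input clamps to the top code
```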

14.
Digital signal processor
–
A digital signal processor (DSP) is a specialized microprocessor with its architecture optimized for the operational needs of digital signal processing. The goal of DSPs is usually to measure, filter or compress continuous real-world analog signals. DSPs often use special memory architectures that are able to fetch multiple data items or instructions at the same time. Digital signal processing algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals are constantly converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP applications have constraints on latency; that is, for the system to work, the DSP operation must be completed within some fixed time, and deferred processing is not viable. A specialized digital signal processor, however, will tend to provide a lower-cost solution, with better performance and lower latency. Dedicated DSPs are also used in applications such as satellite payloads; for example, the SES-12 and SES-14 satellites from operator SES rely on on-board digital signal processing. The architecture of a digital signal processor is optimized specifically for digital signal processing; most also support some of the features of an applications processor or microcontroller. Some useful features for optimizing DSP algorithms are outlined below; sometimes various sticky bits or operation modes are available. DSPs can sometimes rely on supporting code to know about cache hierarchies and the associated delays; this is a tradeoff that allows for better performance. In addition, extensive use of DMA is employed. DSPs frequently use multi-tasking operating systems, but have no support for virtual memory or memory protection; operating systems that use virtual memory require more time for context switching among processes, which increases latency. Before single-chip DSPs appeared, designs were often built from bit-slice components; the AMD Am2901 bit-slice chip with its family of components was a very popular choice. There were reference designs from AMD, but very often the specifics of a particular design were application-specific.
These bit-slice architectures would sometimes include a peripheral multiplier chip; examples of these multipliers were a series from TRW including the TDC1008 and TDC1010, some of which included an accumulator, providing the requisite multiply-accumulate (MAC) function. In 1976, Richard Wiggins proposed the Speak & Spell concept to Paul Breedlove and Larry Brantingham; two years later, in 1978, they produced the first Speak & Spell, with the technological centerpiece being the TMS5100, the industry's first digital signal processor. It also set other milestones, being the first chip to use linear predictive coding to perform speech synthesis. In 1978, Intel released the 2920 as an "analog signal processor"; it had an on-chip ADC/DAC with an internal signal processor, but was not successful in the market. In 1979, AMI released the S2811; it was designed as a microprocessor peripheral, and it had to be initialized by the host. The S2811 was likewise not successful in the market. In 1980 the first stand-alone, complete DSPs, the NEC µPD7720 and AT&T DSP1, were presented at the International Solid-State Circuits Conference '80.
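The multiply-accumulate operation mentioned above is the core primitive of most DSP workloads. The following illustrative sketch (names assumed) applies it as a simple FIR moving-average filter, one MAC per filter tap per output sample, which is exactly the inner loop DSP hardware accelerates:

```python
# Illustrative sketch: the multiply-accumulate (MAC) loop at the heart of an
# FIR filter, the kind of kernel a DSP's hardware MAC unit accelerates.
def fir(samples, coeffs):
    out = []
    for n in range(len(samples)):
        acc = 0.0  # the accumulator a hardware MAC unit keeps in a register
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]  # one multiply-accumulate per tap
        out.append(acc)
    return out

# A 4-tap moving average smooths a step in the input signal.
signal = [0, 0, 0, 4, 4, 4, 4, 4]
print(fir(signal, [0.25, 0.25, 0.25, 0.25]))
# → [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 4.0]
```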

15.
Digital computer
–
A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, makes them applicable to a wide range of tasks; such computers are used as control systems for a very wide variety of industrial and consumer devices. The Internet is run on computers, and it connects millions of other computers. Since ancient times, simple manual devices like the abacus have aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century, and the first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers has increased continuously and dramatically since then. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations. Peripheral devices include input devices, output devices, and input/output devices that perform both functions; peripheral devices allow information to be retrieved from an external source. The earliest usage of the term "computer" referred to a person who carried out calculations or computations; the word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations. The Online Etymology Dictionary gives the first attested use of "computer" in the 1640s, meaning one who calculates, and states that the use of the term to mean calculating machine is from 1897. The Online Etymology Dictionary indicates that the modern use of the term, for a programmable digital electronic computer, dates from 1945 under this name (theoretical from 1937, as the Turing machine).

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick; later record-keeping aids throughout the Fertile Crescent included calculi, which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented; in a medieval European counting house, a checkered cloth would be placed on a table as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC.

16.
Vacuum tube
–
In electronics, a vacuum tube, an electron tube, or just a tube or valve, is a device that controls electric current between electrodes in an evacuated container. Vacuum tubes mostly rely on thermionic emission of electrons from a hot filament or a cathode heated by the filament; this type is called a thermionic tube or thermionic valve. A phototube, however, achieves electron emission through the photoelectric effect. The simplest vacuum tube, the diode, contains only a heater, a heated electron-emitting cathode, and a plate (anode). Current can only flow in one direction through the device between the two electrodes, as electrons emitted by the cathode travel through the tube and are collected by the anode. Adding one or more control grids within the tube allows the current between the cathode and anode to be controlled by the voltage on the grid or grids. Tubes with grids can be used for many purposes, including amplification, rectification, switching, oscillation, and display. In the 1940s the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, more efficient, more reliable, and more durable than tubes. Hence, from the mid-1950s solid-state devices such as transistors gradually replaced tubes; the cathode-ray tube remained the basis for televisions and video monitors until superseded in the 21st century. However, there are still a few applications for which tubes are preferred to semiconductors, for example the magnetron used in microwave ovens. One classification of vacuum tubes is by the number of active electrodes: a device with two active elements is a diode, usually used for rectification; devices with three elements are triodes, used for amplification and switching; additional electrodes create tetrodes, pentodes, and so forth, which have additional functions made possible by the additional controllable electrodes. X-ray tubes are also vacuum tubes.
Phototubes and photomultipliers also rely on electron flow through a vacuum, though in those cases electron emission from the cathode depends on energy from photons rather than thermionic emission. Since these sorts of vacuum tubes have functions other than electronic amplification and rectification, they are described in their own articles. A vacuum tube consists of two or more electrodes in a vacuum inside an airtight enclosure. Most tubes have glass envelopes, though ceramic and metal envelopes have been used. The electrodes are attached to leads which pass through the envelope via an airtight seal. Tubes were a frequent cause of failure in electronic equipment, and consumers were expected to be able to replace tubes themselves. In addition to the base terminals, some tubes had an electrode terminating at a top cap. The principal reason for doing this was to avoid leakage resistance through the tube base; the bases were commonly made with phenolic insulation, which performs poorly as an insulator in humid conditions. There was even a design that had two top-cap connections.
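The one-way conduction of the vacuum diode described above is what makes it a rectifier. As a rough illustration (an idealized diode model, not a physical simulation of a tube), passing a sampled sine through it yields half-wave rectification:

```python
import math

# Illustrative sketch: an ideal one-way valve, as a stand-in for the vacuum
# diode's behaviour. Positive half-cycles pass; negative ones are blocked.
def ideal_diode(v_in):
    return v_in if v_in > 0 else 0.0

samples = [math.sin(2 * math.pi * k / 8) for k in range(8)]
rectified = [round(ideal_diode(v), 3) for v in samples]
print(rectified)  # the negative half-cycle of the sine is clamped to zero
```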

17.
Rack-mounted
–
A 19-inch rack is a standardized frame or enclosure for mounting multiple electronic equipment modules. Each module has a front panel that is 19 inches wide; the 19-inch dimension includes the edges, or ears, that protrude on each side and allow the module to be fastened to the rack frame with screws. Common uses include server, audio, and scientific lab equipment. The height of the electronic modules is also standardized, as multiples of 1.75 inches, or one rack unit (U); the industry-standard rack cabinet is 42U tall. The term relay rack appeared first in the world of telephony, and by 1911 it was also being used in railroad signaling; there is little evidence that the dimensions of these early racks were standardized. The 19-inch rack format with rack units of 1.75 inches was established as a standard by AT&T around 1922 in order to reduce the space required for repeater equipment. The earliest repeaters, from 1914, were installed in ad-hoc fashion on shelves and in wooden boxes and cabinets; once serial production started, they were built into bespoke racks. The height of the different panels varies, but in all cases it is a whole multiple of 1¾ inches. The 19-inch rack format has remained constant while the technology mounted within it has changed considerably. Nineteen-inch racks, in two-post or four-post form, hold most equipment in modern data centers, ISP facilities, and professionally designed corporate server rooms; they allow dense hardware configurations without occupying excessive floor space or requiring shelving. Nineteen-inch racks are also often used to house professional audio and video equipment, including amplifiers, effects units, interfaces, headphone amplifiers, and even small-scale audio mixers. A third common use is for industrial power and control equipment. Typically, a piece of equipment being installed has a front panel height 1⁄32 inch less than the allotted number of Us.
Thus, a 1U rackmount computer is not 1.75 inches tall but 1.719 inches tall, and 2U equipment is 3.469 inches instead of 3.5 inches. This gap allows a little room above and below a piece of equipment so it can be removed without binding on the adjacent equipment. State-of-the-art rackmount cases are now also constructed of thermo-stamped composite and carbon fiber. Originally, the mounting holes were tapped with a particular screw thread. Tapping large numbers of holes that may never be used is expensive; nonetheless, tapped-hole racks are still in use, for example in telephone exchanges, network cabling panels, broadcast studios, and some government and military applications. The tapped-hole rack was first replaced by the clearance-hole rack, whose holes are large enough to permit a bolt to be freely inserted through without binding; bolts are fastened in place using cage nuts, which can simply be replaced in the event of a nut being stripped or a bolt breaking. Production of clearance-hole racks is less expensive because tapping the holes is eliminated and replaced with fewer, less expensive cage nuts. The next innovation in rack design has been the square-hole rack.
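The panel-height convention described above is simple arithmetic; as a minimal sketch (the function name is illustrative, not from any standard):

```python
def panel_height_inches(n_units: int) -> float:
    """Front-panel height of an n-U rackmount module: n * 1.75 inches
    minus the 1/32-inch clearance gap described above."""
    return n_units * 1.75 - 1 / 32

# A 1U panel works out to 1.71875 in (quoted as 1.719 in the text)
# and a 2U panel to 3.46875 in (quoted as 3.469):
print(panel_height_inches(1), panel_height_inches(2))
```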

18.
BNC connector
–
The BNC connector is a miniature quick-connect/disconnect radio-frequency connector used for coaxial cable. It features two bayonet lugs on the female connector; mating is fully achieved with a quarter turn of the coupling nut. BNC connectors are used with coaxial cable in radio, television, test instruments, and other radio-frequency electronic equipment. The BNC was commonly used for early computer networks, including ARCnet and the IBM PC Network. BNC connectors are made to match the characteristic impedance of cable at either 50 ohms or 75 ohms, and are usually applied at frequencies below 4 GHz and voltages below 500 volts. Similar connectors using the bayonet connection principle exist, and a threaded version is also available. The BNC was originally designed for military use and has gained wide acceptance in video applications. The BNC uses a slotted outer conductor and some plastic dielectric on each gender of connector; this dielectric causes increasing losses at higher frequencies, and above 4 GHz the slots may radiate signals, which limits the connector's useful range. Both 50 ohm and 75 ohm versions are available. The BNC connector is used for signal connections such as analog and serial digital interface video; it carries composite video on commercial video devices, and consumer electronics devices with RCA connector jacks can be used with BNC-only commercial video equipment by inserting an adapter. BNC connectors were also commonly used on 10BASE2 thin Ethernet network cables and network cards. BNC connections can also be found in recording studios, where digital recording equipment uses the connection for synchronization of various components via the transmission of word clock timing signals. Typically the male connector is fitted to a cable and the female to a panel on equipment. Cable connectors are often designed to be fitted by crimping using a special power or manual tool, with wire strippers that strip the outer jacket, shield braid, and inner dielectric to the correct lengths in one operation.
The connector was named the BNC (Bayonet Neill-Concelman) after its bayonet-mount locking mechanism and its inventors, Paul Neill and Carl Concelman. Neill worked at Bell Labs and also invented the N connector; Concelman worked at Amphenol and also invented the C connector. The backronym British Naval Connector has been applied to it, and another common but incorrect attribution is Berkeley Nucleonics Corporation. The basis for the development of the BNC connector was largely the work of Octavio M. Salati, a graduate of the Moore School of Electrical Engineering of the University of Pennsylvania. In 1945, while working at Hazeltine Electronics Corporation, he filed a patent for a connector for coaxial cables that would minimize wave reflection and loss; the patent was granted in 1951.

19.
UHF connector
–
The UHF connector is a World War II or earlier threaded RF connector design, from an era when UHF referred to frequencies over 30 MHz. It was originally intended for use as a video connector in radar applications and was developed on the basis of a banana plug. The connector was originally designed to carry signals at frequencies up to 300 MHz; the coupling shell has a 5⁄8-inch 24 tpi UNEF standard thread. The most popular cable plug and corresponding chassis-mount socket carry the old Signal Corps nomenclatures PL-259 and SO-239; these are also known as Navy types 49190 and 49194 respectively. PL-259, SO-239, and several other related military references refer to one specific mechanical design collectively known as the UHF connector. Similar connectors with an incompatible 16 mm diameter, 1 mm metric thread have been produced, but these are not standard UHF connectors by definition. UHF connectors have a non-constant surge impedance; for this reason, they are generally usable only through HF and the lower portion of what is now known as the VHF frequency range. UHF connectors can handle RF peak power levels over one kilowatt, based on their rating of 500 volts peak. The UHF connector is not weatherproof. In many applications, UHF connectors were replaced by designs with a more uniform surge impedance over the length of the connector, such as the N connector and the BNC connector. UHF connectors remain widely used in amateur radio and Citizens Band radio.

20.
Binding post
–
A binding post is a connector commonly used on electronic test equipment to terminate a single wire or test lead. Binding posts are also found on loudspeakers and audio amplifiers, as well as on other electrical equipment. A binding post consists of a central threaded metal rod and a cap that screws down on that rod. The cap is commonly insulated with plastic and color-coded: red commonly means an active or positive terminal, and black indicates an inactive or negative terminal. A wire may be attached as bare wire inserted through a transverse hole and clamped, wrapped around the metal post and clamped, or terminated in a lug with a 1/4-inch inner diameter placed around the post. The binding post was a commercial invention of the General Radio Corporation. Even so-called isolated binding posts are not sufficiently insulated to protect users from coming into contact with metal parts carrying voltage; as such, they are not suitable for carrying dangerous voltages. On several types of equipment it has become common to replace the traditional binding posts with safety banana jacks. The universal property of binding posts is lost here, since safety banana jacks can only be used with traditional and safety banana plugs. Traditional posts also impaired safety in another way: two wires or pin connectors could be inserted from opposite sides of two binding posts, and the tips of the wires or probes might inadvertently short together; holes are now normally aligned in such a fashion that such shorts cannot occur. In order to permit the use of double banana plugs, the distance between the positive and negative posts is standardized at 3/4 inch. The Fahnestock clip is an earlier device serving a similar purpose, now largely supplanted by binding posts.

21.
Banana connector
–
A banana connector is a single-wire electrical connector used for joining wires to equipment. The term 4 mm connector is also used, especially in Europe. The plug typically has a four-leafed spring tip that fits snugly into the jack, and the plugs are frequently used to terminate patch cords for electronic test equipment. Invention of the plug is claimed by two entities: the Hirschmann company claims it was invented by Richard Hirschmann in 1924, while a competing claim is made for the General Radio Company, which stated "1924, GenRad developed banana plug - replaces pin plugs", and that it was introduced in that country by GR in 1924. The original plug consists of a metal pin about 20 millimetres long. This pin length is common in Europe and other parts of the world, though other sizes have emerged, such as 15-millimetre pins; intermediate lengths of 11 to 25 millimetres are less common. The pin's diameter is nominally 4 millimetres. The pin has one or more lengthwise springs that bulge outwards slightly, giving the appearance of a banana. Taking the springs into account, the diameter of a banana plug is typically a bit larger than 4 mm when not plugged in; when inserted into a matching 4 mm socket, the springs press against the sides of the socket, improving the electrical contact and preventing the pin from falling out. The other end of the plug has a lug connector to which a length of flexible insulated equipment wire can be attached; an insulating plastic cover is usually fitted over this rear end of the connector. The rear end of a 4 mm plug often has a 4 mm hole drilled in it, either transversely or axially; this type is called a stackable 4 mm plug. For high-voltage use, a special sheathed version of the banana plug exists; it has an insulating sheath around both the male and female connectors to avoid accidental contact. The sheathed male plug will not work with an unsheathed female socket.
Individual banana plugs and jacks are commonly color-coded red and black. Dual banana plugs are often black, with some physical feature such as a molded ridge or thick tab marked "Gnd" indicating the relative polarity of the two plugs. Besides plugging into dedicated banana jacks, banana plugs may plug into five-way or universal binding posts on audio equipment. A number of widely used connectors are based on combining two or more banana plugs with a handle and other features for ease of use.

22.
Coaxial cable
–
Coaxial cable, or coax, is a type of cable that has an inner conductor surrounded by a tubular insulating layer, surrounded in turn by a tubular conducting shield; many coaxial cables also have an outer sheath or jacket. The term coaxial comes from the inner conductor and the outer shield sharing a geometric axis. Coaxial cable was invented by the English engineer and mathematician Oliver Heaviside. Coaxial cable is used as a transmission line for radio-frequency signals; its applications include feedlines connecting radio transmitters and receivers with their antennas, computer network connections, and digital audio. In an ideal coaxial cable the electromagnetic field carrying the signal exists only in the space between the inner and outer conductors; this allows coaxial cable runs to be installed next to metal objects such as gutters without the power losses that occur in other types of transmission lines. Coaxial cable also provides protection of the signal from external electromagnetic interference. The cable is protected by an outer insulating jacket. Normally, the shield is kept at ground potential and a signal-carrying voltage is applied to the center conductor. The advantage of the coaxial design is that electric and magnetic fields are restricted to the dielectric, with little leakage outside the shield; conversely, electric and magnetic fields outside the cable are largely kept from interfering with signals inside it. Larger-diameter cables and cables with multiple shields have less leakage. Common applications of coaxial cable include video and CATV distribution and RF and microwave transmission. The characteristic impedance of the cable is determined by the dielectric constant of the inner insulator and the radii of the inner and outer conductors. A controlled characteristic impedance is important because the source and load impedance should be matched to ensure maximum power transfer. Other important properties of coaxial cable include attenuation as a function of frequency, voltage handling capability, and shield quality.
Coaxial cable design choices affect physical size, frequency performance, attenuation, power-handling capability, flexibility, and strength. The inner conductor may be solid or stranded; stranded is more flexible. For better performance, the inner conductor may be silver-plated. Copper-plated steel wire is used as an inner conductor for cable used in the cable TV industry. The insulator surrounding the inner conductor may be solid plastic or a foam plastic, and the properties of this dielectric control some of the electrical properties of the cable. A common choice is a solid polyethylene insulator; in lower-loss cables, solid Teflon is also used as an insulator. Some coaxial lines use air as the dielectric and have spacers to keep the inner conductor from touching the shield. Many conventional coaxial cables use braided copper wire to form the shield. This allows the cable to be flexible, but it also means there are gaps in the shield layer, and the inner dimension of the shield varies slightly because the braid cannot be flat.
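The dependence of characteristic impedance on the conductor radii and dielectric constant noted above follows a standard closed-form expression for an ideal coaxial line; a minimal sketch, with illustrative values not taken from the text:

```python
import math

def coax_impedance_ohms(d_ratio: float, eps_r: float) -> float:
    """Characteristic impedance of an ideal coaxial line:
    Z0 = (60 / sqrt(eps_r)) * ln(D/d), where D is the inner diameter of
    the shield and d the outer diameter of the inner conductor."""
    return 60.0 / math.sqrt(eps_r) * math.log(d_ratio)

# Solid polyethylene (eps_r about 2.25) with a diameter ratio near 3.5
# yields roughly the common 50-ohm impedance:
print(round(coax_impedance_ohms(3.5, 2.25), 1))  # 50.1
```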

23.
Test probe
–
A test probe is a physical device used to connect test equipment to a device under test (DUT). Test probes range from simple, robust devices to complex probes that are sophisticated and expensive. Specific types include test prods, oscilloscope probes, and current probes. A test probe is often supplied as a test lead, which includes the probe, cable, and terminating connector. Voltage probes are used to measure voltages present on the DUT. To achieve high accuracy, the test instrument and its probe must not significantly affect the voltage being measured; this is accomplished by ensuring that the combination of instrument and probe exhibits a high impedance that will not load the DUT. For AC measurements, the reactive component of impedance may be more important than the resistive component. The handle allows a person to hold and guide the probe without influencing the measurement or being exposed to dangerous voltages that might cause electric shock. Within the probe body, the wire is connected to a rigid, pointed metal tip that contacts the DUT; some probes allow an alligator clip to be attached to the tip. Test leads are usually made with finely stranded wire to keep them flexible, of wire gauges sufficient to conduct a few amperes of electric current. The insulation is chosen to be flexible and to have a breakdown voltage higher than the voltmeter's maximum input voltage. The many fine strands and the thick insulation make the wire thicker than ordinary hookup wire. Two probes are used together to measure voltage, current, and two-terminal components such as resistors and capacitors. When making DC measurements it is necessary to know which probe is positive. Depending upon the accuracy required, simple probes can be used with signal frequencies ranging from DC to a few kilohertz.
When sensitive measurements must be made, shields, guards, and techniques such as four-terminal Kelvin sensing are used. Tweezer probes are a pair of simple probes fixed to a tweezer mechanism, operated with one hand, for measuring voltages or other electronic circuit parameters between closely spaced pins. Spring probes are spring-loaded pins used in electrical test fixtures to contact test points and component leads. Oscilloscopes display the instantaneous waveform of varying electrical quantities, unlike other instruments which give numerical values of relatively stable quantities. Scope probes fall into two categories: passive and active. Passive scope probes contain no active parts, such as transistors. Because of the high frequencies often involved, oscilloscopes do not normally use simple wires ("flying leads") to connect to the DUT: flying leads are likely to pick up interference, so they are not suitable for low-level signals, and their inductance makes them unsuitable for high-frequency signals. Instead, a specific scope probe is used, which employs a coaxial cable to transmit the signal from the tip of the probe to the oscilloscope. Although coaxial cable has lower inductance than flying leads, it has higher capacitance; consequently, a one-meter high-impedance direct coaxial probe may load the circuit with a capacitance of about 110 pF and a resistance of 1 megohm.
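The loading figures above can be put in perspective by computing the capacitive reactance of that 110 pF at a few frequencies; a small sketch:

```python
import math

def capacitive_reactance_ohms(f_hz: float, c_farads: float) -> float:
    """Magnitude of a capacitor's reactance: |Xc| = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

# For the ~110 pF of a one-metre direct coaxial probe: at audio
# frequencies the 1-megohm resistance dominates the loading, but by
# 10 MHz the capacitance has pulled the probe impedance below 150 ohms.
for f in (1e3, 1e6, 1e7):
    print(f"{f:>12,.0f} Hz -> {capacitive_reactance_ohms(f, 110e-12):>12,.0f} ohm")
```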

24.
Ohm
–
The ohm is the SI derived unit of electrical resistance, named after the German physicist Georg Simon Ohm. The definition of the ohm has been revised several times; today it is expressed in terms of the quantum Hall effect. In many cases the resistance of a conductor in ohms is approximately constant within a range of voltages and temperatures. In alternating-current circuits, electrical impedance is also measured in ohms. The siemens is the SI derived unit of electric conductance and admittance; also known as the mho, it is the reciprocal of resistance in ohms. The power dissipated by a resistor may be calculated from its resistance and the voltage or current involved. Non-linear resistors have a value that may vary depending on the applied voltage. The rapid rise of electrotechnology in the last half of the 19th century created a demand for a rational, coherent, and consistent system of units; telegraphers and other early users of electricity in the 19th century needed a practical standard unit of measurement for resistance. Two different methods of establishing a system of units can be chosen. In one, various artifacts, such as a length of wire or a standard electrochemical cell, are specified as producing defined quantities for resistance and voltage. The other method ensures coherence with the units of energy: defining a unit for resistance that is coherent with units of energy and time in effect also requires defining units for potential and current. Some early definitions of a unit of resistance were artifact-based; the absolute-units system, by contrast, related magnetic and electrostatic quantities to metric base units of mass, time, and length. These units had the advantage of simplifying the equations used in the solution of electromagnetic problems. However, the CGS units turned out to have impractical sizes for practical measurements, and various artifact standards were proposed as the definition of the unit of resistance.
In 1860 Werner Siemens published a suggestion for a reproducible resistance standard in Poggendorff's Annalen der Physik und Chemie. He proposed a column of pure mercury of one square millimetre cross-section and one metre in length: the Siemens mercury unit. However, this unit was not coherent with other units. One proposal was to devise a unit based on a mercury column that would be coherent, in effect adjusting the length to make the resistance one ohm. Not all users of units had the resources to carry out experiments to the required precision. The BAAS in 1861 appointed a committee including Maxwell and Thomson to report upon standards of electrical resistance. In the third report of the committee, in 1864, the resistance unit is referred to as "B.A. unit, or Ohmad"; by 1867 the unit is referred to simply as the ohm. The B.A. ohm was intended to be 10^9 CGS units, but owing to an error in calculations the definition was 1.3% too small. The error was significant for the preparation of working standards. On September 21, 1881, the Congrès internationale d'électriciens defined a practical unit, the ohm, for resistance, based on CGS units, using a mercury column at zero degrees Celsius.
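The power relation mentioned above reduces to one line of arithmetic; a minimal sketch of Ohm's law and resistor dissipation:

```python
def current_amps(v_volts: float, r_ohms: float) -> float:
    """Ohm's law: I = V / R."""
    return v_volts / r_ohms

def power_watts(v_volts: float, r_ohms: float) -> float:
    """Power dissipated by a resistor: P = V^2 / R (equivalently I^2 * R)."""
    return v_volts * v_volts / r_ohms

# 12 V across a 6-ohm resistor draws 2 A and dissipates 24 W:
print(current_amps(12.0, 6.0), power_watts(12.0, 6.0))
```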

25.
Hall effect
–
The Hall effect is the production of a voltage difference (the Hall voltage) across an electrical conductor, transverse to an electric current in the conductor and to a magnetic field perpendicular to the current. It was discovered by Edwin Hall in 1879, while he was working on his doctoral degree at Johns Hopkins University in Baltimore, Maryland. The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field. It is a characteristic of the material from which the conductor is made, since its value depends on the type, number, and properties of the charge carriers. The Hall effect is due to the nature of the current in a conductor: current consists of the movement of many small charge carriers, typically electrons, holes, ions, or all three. When a magnetic field is present, these charges experience a force, the Lorentz force. When such a magnetic field is absent, the charges follow approximately straight, line-of-sight paths between collisions with impurities, phonons, and so on. However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved, and moving charges accumulate on one face of the material. This leaves equal and opposite charges exposed on the other face. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the line-of-sight path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge. In classical electromagnetism electrons move in the direction opposite to the conventional current I. In some semiconductors it appears that holes are actually flowing, because the direction of the Hall voltage is opposite to that in the derivation below. In the derivation, the v_x term is the drift velocity of the current, which is assumed at this point to consist of holes, by convention. The v_x × B_z term is negative in the y direction by the right-hand rule.
In the steady state the Lorentz force balances the electric force, F = q(E + v × B) = 0, giving 0 = E_y − v_x B_z, where E_y is the electric field along the y-axis. (In wires, electrons rather than holes are flowing, so v_x → −v_x and q → −q.) The Hall coefficient is defined as R_H = E_y / (j_x B_z), where j_x is the current density of the carrier electrons. In SI units, this becomes R_H = E_y / (j_x B) = V_H t / (I B) = −1/(n e), where V_H is the Hall voltage, t is the thickness of the sample, and n is the charge-carrier density. As a result, the Hall effect is useful as a means to measure either the carrier density or the magnetic field. One very important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite direction. The Hall effect offered the first real proof that electric currents in metals are carried by moving electrons, not by protons. It also showed that in some substances it is appropriate to think of the current as positive "holes" moving rather than negative electrons. This confusion, however, can only be resolved by the modern quantum theory of transport in solids. Sample inhomogeneity can result in a spurious sign of the Hall effect; for example, a positive Hall effect has been observed in evidently n-type semiconductors.
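Rearranging V_H = I·B/(n·e·t) for n turns a Hall measurement into a carrier-density measurement, as noted above; a sketch with illustrative sample values of our own choosing:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, coulombs

def carrier_density_per_m3(i_amps: float, b_tesla: float,
                           t_metres: float, v_hall: float) -> float:
    """Carrier density n = I*B / (e * t * V_H), obtained by rearranging
    the Hall-voltage relation V_H = I*B / (n*e*t)."""
    return i_amps * b_tesla / (E_CHARGE * t_metres * v_hall)

# Illustrative numbers (not from the text): 1 A through a 0.1 mm thick
# strip in a 1 T field, with a measured Hall voltage of 0.74 microvolts,
# gives n of roughly 8.4e28 per cubic metre, close to the free-electron
# density of copper.
print(f"{carrier_density_per_m3(1.0, 1.0, 1e-4, 0.74e-6):.2e}")
```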

26.
Parallax
–
The term is derived from the Greek word παράλλαξις, meaning "alternation". Due to foreshortening, nearby objects show a larger parallax than more distant objects when observed from different positions. Astronomers use the principle of parallax to measure distances to the closer stars. Here, the parallax is the semi-angle of inclination between two sight-lines to the star, as observed when the Earth is on opposite sides of the Sun in its orbit. Parallax also affects optical instruments such as rifle scopes, binoculars, and microscopes. Many animals, including humans, have two eyes with overlapping visual fields that use parallax to gain depth perception; this process is known as stereopsis. In computer vision the effect is used for stereo vision, and there is a device called a parallax rangefinder that uses it to find range. A simple everyday example of parallax can be seen in the dashboards of motor vehicles that use a needle-style speedometer gauge. When viewed from directly in front, the speed may show exactly 60, but when viewed from the passenger seat the needle may appear to show a slightly different speed, due to the angle of viewing. As the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis, the process by which the brain exploits the parallax due to the different views from the eyes to gain depth perception. Animals also use motion parallax, in which they move to gain different viewpoints; for example, pigeons bob their heads up and down to see depth. Motion parallax is also exploited in wiggle stereoscopy, computer graphics which provide depth cues through viewpoint-shifting animation rather than through binocular vision. Parallax arises from a change in viewpoint due to motion of the observer, of the observed, or of both; what is essential is relative motion. By observing parallax, measuring angles, and using geometry, one can determine distance. Astronomers also use the word parallax as a synonym for distance measurement by other methods.
In a geostatic model, the movement of the star would have to be taken as real, with the star oscillating across the sky with respect to the background stars. The parsec is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is measured by observing the position of a star at different times of the year as the Earth moves through its orbit; its measurement was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets. The angles involved in these calculations are very small and thus difficult to measure: the nearest star to the Sun, Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec, approximately the angle subtended by an object 2 centimeters in diameter located 5.3 kilometers away. The fact that stellar parallax was so small that it was unobservable at the time was used as the main scientific argument against heliocentrism during the early modern age. In 1989, the satellite Hipparcos was launched primarily for obtaining improved parallaxes and proper motions for over 100,000 nearby stars, increasing the reach of the method tenfold.
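The parsec definition quoted above makes distance the simple reciprocal of parallax; a sketch applying it to the Proxima Centauri figure (the light-year conversion factor is standard, not from the text):

```python
def distance_parsecs(parallax_arcsec: float) -> float:
    """d = 1 / p: a star with annual parallax p (arcseconds) lies 1/p parsecs away."""
    return 1.0 / parallax_arcsec

# Proxima Centauri, parallax 0.7687 arcsec, comes out to about 1.30 pc,
# roughly 4.24 light-years (at ~3.2616 ly per parsec):
d = distance_parsecs(0.7687)
print(round(d, 3), "pc,", round(d * 3.2616, 2), "ly")
```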

27.
Video
–
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video systems vary greatly in display resolution and refresh rate, and video can be carried on a variety of media, including radio broadcast, tapes, DVDs, and computer files. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders; in 1951 the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic tape. Video recorders sold for $50,000 in 1956, but prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder decks and tapes into the consumer market. The use of digital techniques in video created digital video, which allowed higher quality and, eventually, much lower cost than earlier analog technology. After the invention of the DVD in 1997 and the Blu-ray Disc in 2006, sales of videotape declined. The advent of digital broadcasting and the subsequent digital television transition is in the process of relegating analog video to the status of a legacy technology in most parts of the world. The PAL and SECAM standards specify 25 frames/s, while NTSC standards specify 29.97 frames/s. Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate needed to achieve a comfortable illusion of a moving image is about sixteen frames per second. Video can be interlaced or progressive. In interlaced systems, analog display devices reproduce each frame as two successive fields of alternate lines, effectively doubling the frame rate as far as perceptible overall flicker is concerned. NTSC, PAL and SECAM are interlaced formats; abbreviated video resolution specifications often include an "i" to indicate interlacing. For example, the PAL video format is specified as 576i50, where 576 indicates the total number of horizontal scan lines, "i" indicates interlacing, and 50 indicates 50 fields per second.
In progressive-scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive-scan source material. Aspect ratio describes the proportions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1; high-definition televisions use an aspect ratio of 16:9. The aspect ratio of a full 35 mm film frame with soundtrack is 1.375:1. A 720 by 480 pixel NTSC DV image therefore displays with the 4:3 aspect ratio only if the pixels are "thin" (narrower than they are tall). The popularity of viewing video on mobile phones has led to the growth of vertical video.
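The thin-pixel point can be checked directly: the required pixel aspect ratio is the screen's width:height ratio divided by the frame's pixel-count ratio. A minimal sketch:

```python
from fractions import Fraction

def pixel_aspect(disp_w: int, disp_h: int, px_w: int, px_h: int) -> Fraction:
    """Width:height of each pixel needed for a px_w x px_h frame to
    exactly fill a disp_w:disp_h screen."""
    return Fraction(disp_w, disp_h) / Fraction(px_w, px_h)

# A 720x480 NTSC DV frame shown at 4:3 needs pixels 8/9 as wide as they
# are tall -- the "thin" pixels mentioned above:
print(pixel_aspect(4, 3, 720, 480))  # 8/9
```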

28.
PAL
–
Phase Alternating Line (PAL) is a colour-encoding system for analogue television, used in broadcast television systems in most countries broadcasting at 625 lines per frame and 50 fields per second. The other common colour-encoding systems are NTSC and SECAM. All the countries using PAL are currently in the process of conversion, or have already converted, to digital standards such as DVB, ISDB or DTMB. This page primarily discusses the PAL colour-encoding system; the articles on broadcast television systems and analogue television further describe frame rates, image resolution, and audio modulation. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of PAL. The goal was to provide a colour TV standard for the European picture frequency of 50 fields per second and to find a way to eliminate the problems with NTSC. PAL was developed by Walter Bruch at Telefunken in Hannover, Germany, with important input from Dr. Kruse. The format was patented by Telefunken in 1962, citing Bruch as inventor, and unveiled to members of the European Broadcasting Union on 3 January 1963. When asked why the system was named PAL and not Bruch, the inventor answered that a Bruch system would not have sold very well. The first broadcasts began in the United Kingdom in June 1967. The one BBC channel initially using the broadcast standard was BBC2, which had been the first UK TV service to introduce 625 lines in 1964. The Telefunken PALcolor 708T was the first commercial PAL TV set; it was followed by the Loewe-Farbfernseher S920 and F900. Telefunken was later bought by the French electronics manufacturer Thomson; Thomson also bought the Compagnie Générale de Télévision, where Henri de France developed SECAM, the first European standard for colour television.
The term PAL was often used informally and somewhat imprecisely to refer to the 625-line/50 Hz television system in general; accordingly, DVDs were labelled as PAL or NTSC even though technically the discs carry neither a PAL nor an NTSC composite signal (CCIR 625/50 and EIA 525/60 are the proper names for these scanning standards). Both the PAL and the NTSC systems use a quadrature-amplitude-modulated subcarrier carrying the chrominance information, added to the video signal to form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL and NTSC 4.43. The SECAM system, on the other hand, uses a frequency-modulation scheme on its two line-alternate colour subcarriers at 4.25000 and 4.40625 MHz. PAL alternates the phase of the colour information on successive lines, so that phase errors cancel between adjacent lines; early PAL receivers relied on the human eye to do that cancelling. The effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth reduced greatly compared to the luminance signal. The 4.43361875 MHz frequency of the colour subcarrier is a result of 283.75 colour-clock cycles per line plus a 25 Hz offset to avoid interference. Since the line frequency is 15625 Hz, the subcarrier frequency calculates as follows: 4.43361875 MHz = 283.75 × 15625 Hz + 25 Hz.
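The closing arithmetic can be verified directly:

```python
line_freq_hz = 15625        # 625 lines x 25 frames per second
cycles_per_line = 283.75    # colour-clock cycles per line
offset_hz = 25              # offset to avoid interference

subcarrier_hz = cycles_per_line * line_freq_hz + offset_hz
print(subcarrier_hz)  # 4433618.75, i.e. 4.43361875 MHz
```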

29.
NTSC
–
The first NTSC standard was developed in 1941 and had no provision for color. In 1953 a second NTSC standard was adopted, which allowed color television broadcasting compatible with the existing stock of black-and-white receivers. NTSC was the first widely adopted broadcast color system and remained dominant until 1997. North America, parts of Central America, and South Korea are adopting or have adopted the ATSC standards, while other countries are adopting or have adopted other standards instead of ATSC. After nearly 70 years, the majority of over-the-air NTSC transmissions in the United States ceased on January 1, 2010; the majority of NTSC transmissions ended in Japan on July 24, 2011, with the Japanese prefectures of Iwate, Miyagi, and Fukushima ending the next year. In March 1941, the committee issued a standard for black-and-white television that built upon a 1936 recommendation made by the Radio Manufacturers Association. Technical advancements of the vestigial sideband technique allowed the opportunity to increase the image resolution. The NTSC selected 525 scan lines as a compromise between RCA's 441-scan-line standard and Philco's and DuMont's desire to increase the number of lines to between 605 and 800. The standard recommended a frame rate of 30 frames per second; other standards in the final recommendation were an aspect ratio of 4:3 and frequency modulation for the sound signal. In January 1950, the committee was reconstituted to standardize color television, and in December 1953 it unanimously approved what is now called the NTSC color television standard. The compatible color standard retained full backward compatibility with existing black-and-white television sets: color information was added to the black-and-white image by introducing a color subcarrier of precisely 315/88 MHz. The accompanying changes amounted to 0.1 percent and were tolerated by existing television receivers. The FCC had briefly approved a different color standard, starting in October 1950.
However, this standard was incompatible with black-and-white broadcasts: it used a rotating color wheel, reduced the number of scan lines from 525 to 405, and increased the field rate from 60 to 144, but had an effective frame rate of only 24 frames per second. CBS rescinded its system in March 1953, and the FCC replaced it on December 17, 1953, with the NTSC color standard; later that year, the improved TK-41 became the standard camera used throughout much of the 1960s. The NTSC standard has been adopted by many countries, including most of the Americas. With the advent of digital television, analog broadcasts are being phased out. Most US NTSC broadcasters were required by the FCC to shut down their analog transmitters in 2009; low-power stations, Class A stations and translators were required to shut down by 2015. NTSC color encoding is used with the System M television signal. Each frame is composed of two fields, each consisting of 262.5 scan lines, for a total of 525 scan lines.
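The exact 315/88 MHz subcarrier determines the NTSC line and frame rates; a short sketch using exact rational arithmetic shows how the famous 29.97 frames per second falls out (the 455/2 subcarrier-to-line ratio is a standard NTSC relation, not stated in the text above):

```python
from fractions import Fraction

# The NTSC colour subcarrier is defined as exactly 315/88 MHz.
subcarrier_hz = Fraction(315, 88) * 1_000_000       # ~3.579545 MHz

# The subcarrier sits at 455/2 times the line frequency:
line_freq_hz = subcarrier_hz / Fraction(455, 2)     # ~15734.27 Hz

# 525 lines per frame gives the slightly-off-30 frame rate:
frame_rate = line_freq_hz / 525                     # 30000/1001 ~ 29.97 fps

print(float(subcarrier_hz), float(line_freq_hz), float(frame_rate))
```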

30.
Waveform monitor
–
A waveform monitor is a special type of oscilloscope used in television production applications. It is typically used to measure and display the level, or voltage, of a video signal; the level usually corresponds to the brightness, or luminance, of the part of the image being drawn onto a regular video screen at the same point in time. A waveform monitor can be used to display the overall brightness of a television picture, and it can also be used to visualize and observe special signals in the vertical blanking interval of a video signal, as well as the colorburst between each line of video. It is used to diagnose and troubleshoot a television studio, or the equipment located therein; to assist with installation of equipment into a television facility, or with the commissioning or certification of a facility; in manufacturing test and research and development applications; and for setting camera exposure in the case of video and digital cinema cameras. A waveform monitor is often used in conjunction with a vectorscope. Originally, these were separate devices; modern waveform monitors, however, include vectorscope functionality as a separate mode. Originally, waveform monitors were entirely analog devices: the video signal was filtered and amplified, and a sync stripper circuit was used to isolate the sync pulses and colorburst from the video signal. Early waveform monitors differed little from oscilloscopes, except for the specialized video trigger circuitry. Waveform monitors also permit the use of an external reference signal; in this mode the sync is taken from the reference input rather than from the displayed signal. With the advent of digital television and digital signal processing, the waveform monitor acquired many new features and capabilities. Modern waveform monitors, like other oscilloscopes, have largely abandoned old-style CRT technology as well; new waveform monitors are based on a rasterizer, a piece of graphics hardware that duplicates the behavior of a CRT vector display by generating a raster signal.
They may come with a liquid crystal display, or they may be sold without a display. See also: color suite, non-linear editing system, linear video editing, control room, television studio production control room, television production, software waveform monitor.

31.
Multiplexing
–
In telecommunications and computer networks, multiplexing is a method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share an expensive resource; for example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s and is now applied widely in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910. The multiplexed signal is transmitted over a communication channel such as a cable; the multiplexing divides the capacity of the channel into several logical channels. A reverse process, known as demultiplexing, extracts the original channels on the receiver end. A device that performs the multiplexing is called a multiplexer, and a device that performs the reverse process is called a demultiplexer. Multiple variable-bit-rate digital bit streams may be transferred efficiently over a fixed-bandwidth channel by means of statistical multiplexing, an asynchronous time-domain multiplexing which is a form of time-division multiplexing. Digital bit streams can be transferred over an analog channel by means of code-division multiplexing techniques such as frequency-hopping spread spectrum and direct-sequence spread spectrum. In wired communication, space-division multiplexing, also known as space-division multiple access, is the use of separate point-to-point electrical conductors for each transmitted channel; in wireless communication, space-division multiplexing is achieved with multiple antenna elements forming a phased array antenna. Examples are multiple-input and multiple-output, single-input and multiple-output, and multiple-input and single-output multiplexing. Different antennas give different multipath propagation signatures, making it possible for digital signal processing techniques to separate the signals from each other.
These techniques may also be utilized for space diversity or beamforming rather than multiplexing. Frequency-division multiplexing (FDM) is inherently an analog technology: FDM combines several signals into one medium by sending them in several distinct frequency ranges over a single medium. In FDM the signals are electrical signals. One of the most common applications for FDM is traditional radio and television broadcasting from terrestrial, mobile or satellite stations, as well as cable television. Only one cable reaches a customer's residential area, but the provider can send multiple television channels or signals simultaneously over that cable to all subscribers without interference; receivers must tune to the appropriate frequency to access the desired signal. A variant technology, called wavelength-division multiplexing, is used in optical communications. Time-division multiplexing (TDM) is a digital technology which uses time, instead of space or frequency, to separate the different data streams. TDM involves sequencing groups of a few bits or bytes from each individual input stream, one after the other; if done sufficiently quickly, the receiving devices will not detect that some of the circuit time was used to serve another logical communication path. Consider an application requiring four terminals at an airport to reach a central computer. Each terminal communicates at 2400 baud, so rather than acquire four individual circuits to carry such low-speed transmissions, the airline has installed a pair of multiplexers. A pair of 9600-baud modems and one dedicated analog communications circuit from the ticket desk back to the airline data center are also installed.
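The round-robin slot assignment behind the four-terminal example can be sketched in a few lines; this is a minimal byte-interleaved TDM illustration, not a real protocol implementation:

```python
# Byte-interleaved time-division multiplexing: each input stream gets a
# fixed, repeating time slot on the shared link.

def tdm_mux(streams):
    """Interleave one byte from each input stream per time slot."""
    frames = []
    for chunk in zip(*streams):        # one slot per terminal, round robin
        frames.extend(chunk)
    return frames

def tdm_demux(frames, n_channels):
    """Recover the original streams from their fixed slot positions."""
    return [frames[i::n_channels] for i in range(n_channels)]

# Four terminals sharing one link, as in the airline example above.
terminals = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
muxed = tdm_mux(terminals)
recovered = tdm_demux(muxed, 4)
print([bytes(ch) for ch in recovered])   # the four original streams
```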

32.
24-hour clock
–
The 24-hour clock is the convention of timekeeping in which the day runs from midnight to midnight and is divided into 24 hours, indicated by the hours passed since midnight, from 0 to 23. This system is the most commonly used time notation in the world today. In the practice of medicine, the 24-hour clock is generally used in documentation of care, as it prevents any ambiguity as to when events occurred in a patient's medical history. It is popularly referred to as military time in the United States. In the case of a leap second, the value of the seconds field may extend to 60. A leading zero is added for numbers under 10; this zero is optional for the hours, but very commonly used in computer applications, where many specifications require it. Where subsecond resolution is required, the seconds can be a decimal fraction. The most commonly used separator symbol between hours, minutes and seconds is the colon, which is also the separator used in ISO 8601. In the past, some European countries used the dot on the line as a separator, and in some contexts no separator is used, with times written as, for example, 2359. In East Asia, time notation was 24-hour before westernization in modern times; clocks were changed to the 12-dual-hour style when they were shipped to China in the Qing dynasty. In the 24-hour time notation, the day begins at midnight, 00:00, and the last minute of the day begins at 23:59. Where convenient, the notation 24:00 may also be used to refer to midnight at the end of a given date; that is, 24:00 of one day is the same time as 00:00 of the following day. The notation 24:00 mainly serves to refer to the end of a day in a time interval. A typical usage is giving opening hours ending at midnight; similarly, some railway timetables show 00:00 as departure time and 24:00 as arrival time. Legal contracts often run from the start date at 00:00 until the end date at 24:00.
While the 24-hour notation unambiguously distinguishes between midnight at the start (00:00) and end (24:00) of any date, there is no commonly accepted distinction among users of the 12-hour notation. Sometimes the use of 00:00 is also avoided. In variance with this, the correspondence manual for the U.S. Navy and U.S. Marine Corps formerly specified 0001 to 2400; the manual was updated in June 2015 to use 0000 to 2359. Time-of-day notations beyond 24:00 are not commonly used and not covered by the relevant standards. In most countries, computers by default show the time in 24-hour notation; for example, Microsoft Windows and macOS activate the 12-hour notation by default only if a computer is in a handful of specific language and region settings.
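The conventions above (hours 0 to 23, leading zeros, colon separator, midnight written 00:00) can be illustrated with a small conversion sketch; the helper name is illustrative, not from any standard library:

```python
# Convert 12-hour notation to the 24-hour clock described above.

def to_24h(hour12, minute, meridiem):
    """hour12 in 1..12; meridiem is 'AM' or 'PM'."""
    hour = hour12 % 12                 # 12 AM maps to hour 0
    if meridiem == "PM":
        hour += 12                     # 12 PM stays 12, 11 PM becomes 23
    return f"{hour:02d}:{minute:02d}"  # leading zeros, colon separator

print(to_24h(12, 0, "AM"))   # 00:00  (midnight starts the day)
print(to_24h(11, 59, "PM"))  # 23:59  (the last minute of the day)
```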

33.
Vector monitor
–
A vector monitor or vector display is a display device used for computer graphics up through the 1970s. It is a type of CRT similar to that of an early oscilloscope. In a vector display, the image is composed of drawn lines rather than a grid of glowing pixels as in raster graphics; the electron beam follows an arbitrary path tracing the connected sloped lines, and skips over dark areas of the image without visiting their points. Some refresh vector displays use a normal phosphor that fades rapidly and needs constant refreshing 30-40 times per second to show a stable image; these displays, such as the Imlac PDS-1, require some local memory to hold the vector endpoint data. Storage tube displays, such as the popular Tektronix 4010, use a phosphor that continues glowing for many minutes and do not require any local memory. In the 1970s, both types of vector display were much more affordable than bitmap raster graphics displays, when a megapixel of computer memory was still very expensive. Today, raster displays have replaced all uses of vector displays. Vector displays do not suffer from the artifacts of aliasing, but they are limited in that they can display only a shape's outline, and text is crudely drawn from short strokes. Refresh vector displays are limited in how many lines or how much text can be shown without refresh flicker, and irregular beam motion is slower than the steady beam motion of raster displays: beam deflections are typically driven by magnetic coils, and those coils resist rapid changes to their current. Notable among vector displays are Tektronix large-screen computer terminals that use direct-view storage CRTs. Storage means that the display, once written, will persist for several minutes, but that image cannot be easily changed: like an Etch A Sketch, any deletion or movement requires erasing the entire screen with a green flash.
Animation with this type of monitor is not practical. Vector displays were used for head-up displays in fighter aircraft because of the brighter traces that can be achieved by moving the electron beam more slowly across the phosphors; brightness is critical in this application because the display must be visible to the pilot in direct sunlight. Vector monitors were also used by some late-1970s to mid-1980s arcade games such as Asteroids and Tempest. Atari used the term Quadrascan to describe the technology as used in its arcade games.

34.
Resistor ladder
–
A resistor ladder is an electrical circuit made from repeating units of resistors. Two configurations are discussed below: a string resistor ladder and an R–2R ladder. An R–2R ladder is a simple and inexpensive way to perform digital-to-analog conversion. In a string ladder, the resistors act as voltage dividers between the reference voltages; each tap of the string generates a different voltage, which can be compared with another voltage. Often a voltage is converted to a current, enabling the use of an R–2R ladder network; an advantage is that higher resolutions can be reached using the same number of components. A basic R–2R resistor ladder network is shown in Figure 1. Bits a(n−1) through a(0) are driven from digital logic gates. Ideally, the bit inputs are switched between V = 0 and V = Vref; the R–2R network causes these digital bits to be weighted in their contribution to the output voltage Vout, depending on which bits are set to 1 and which to 0. The actual value of Vref will depend on the type of technology used to generate the digital signals. The network is fast and has a fixed output impedance R. The R–2R ladder operates as a string of current dividers whose output accuracy is solely dependent on how well each resistor is matched to the others. Small inaccuracies in the MSB resistors can entirely overwhelm the contribution of the LSB resistors; this may result in non-monotonic behavior at major crossings, such as from binary 01111 to 10000. Depending on the type of logic gates used and the design of the logic circuits, transient glitches may appear at such transitions; these can be filtered with capacitance at the output node. Finally, the 2R resistance is in series with the digital-output impedance, so high-output-impedance gates may be unsuitable in some cases. Further, to avoid problems at the binary 10000-to-01111 transition, the sum of the inaccuracies in the lower bits must be significantly less than R/32. The required accuracy doubles with each additional bit.
Within integrated circuits, high-accuracy R–2R networks may be printed directly onto a single substrate using thin-film technology. Even so, they must often be laser-trimmed to achieve the required precision; such on-chip resistor ladders for digital-to-analog converters achieving 16-bit accuracy have been demonstrated. For a 10-bit converter, even using 0.1% precision resistors would not guarantee monotonicity of output. That said, high-resolution R–2R ladders formed from discrete components are sometimes used, with the nonlinearity corrected in software; one example of this approach can be seen in the Korad 3005 power supply. It is not necessary that each rung of the R–2R ladder use the same resistor values.
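The binary weighting described above reduces, for an ideal ladder, to Vout = Vref × code / 2^n. A minimal sketch of that ideal transfer function (the function name is illustrative):

```python
# Ideal R-2R ladder transfer function: each bit contributes a
# binary-weighted fraction of Vref to the output voltage.

def r2r_output(bits, vref):
    """bits[0] is the MSB a(n-1); returns the ideal output voltage."""
    n = len(bits)
    code = sum(bit << (n - 1 - i) for i, bit in enumerate(bits))
    return vref * code / (1 << n)      # Vout = Vref * code / 2^n

# 4-bit example with Vref = 5 V:
print(r2r_output([1, 0, 0, 0], 5.0))   # 2.5     (MSB alone gives Vref/2)
print(r2r_output([1, 1, 1, 1], 5.0))   # 4.6875  (full scale = Vref * 15/16)
```

This also makes the monotonicity concern in the text concrete: the step from code 01111 to 10000 swaps the four LSB contributions for the single MSB contribution, so an MSB resistor error larger than the combined LSB weights makes the output go backwards.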

35.
Electric current
–
An electric current is a flow of electric charge. In electric circuits this charge is often carried by moving electrons in a wire. It can also be carried by ions in an electrolyte, or by both ions and electrons, such as in an ionised gas. The SI unit for measuring an electric current is the ampere. Electric current is measured using a device called an ammeter. Electric currents cause Joule heating, which creates light in incandescent light bulbs; they also create magnetic fields, which are used in motors, inductors and generators. The particles that carry the charge in an electric current are called charge carriers. In metals, one or more electrons from each atom are loosely bound to the atom; these conduction electrons are the charge carriers in metal conductors. The conventional symbol for current is I, which originates from the French phrase intensité de courant (current intensity); current intensity is often referred to simply as current. The I symbol was used by André-Marie Ampère, after whom the unit of current is named, in formulating the eponymous Ampère's force law. The notation travelled from France to Great Britain, where it became standard. In other materials, notably the semiconductors, the charge carriers can be positive or negative; positive and negative charge carriers may even be present at the same time. A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, a convention is needed: the direction of conventional current is arbitrarily defined as the same direction as positive charges flow. If the current flows in the opposite direction, the variable I has a negative value. When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown.
Consequently, the reference directions of currents are often assigned arbitrarily.

36.
Diode
–
In electronics, a diode is a two-terminal electronic component that conducts primarily in one direction: it has low resistance to current in one direction and high resistance in the other. A semiconductor diode, the most common type today, is a crystalline piece of semiconductor material with a p–n junction connected to two electrical terminals. A vacuum tube diode has two electrodes, a plate and a heated cathode. Semiconductor diodes were the first semiconductor electronic devices. The discovery of crystals' rectifying abilities was made by German physicist Ferdinand Braun in 1874, and the first semiconductor diodes, called cat's-whisker diodes, developed around 1906, were made of mineral crystals such as galena. Today, most diodes are made of silicon, but other materials such as selenium and germanium are sometimes used. The most common function of a diode is to allow an electric current to pass in one direction; thus, the diode can be viewed as an electronic version of a check valve. However, diodes can have more complicated behavior than this simple on–off action. Semiconductor diodes begin conducting electricity only if a certain threshold voltage, or cut-in voltage, is present in the forward direction. The voltage drop across a forward-biased diode varies only a little with the current, and is a function of temperature. A semiconductor diode's current–voltage characteristic can be tailored by selecting the semiconductor materials and the doping impurities introduced into the materials during manufacture; these techniques are used to create special-purpose diodes that perform many different functions. Tunnel, Gunn and IMPATT diodes exhibit negative resistance, which is useful in microwave circuits. Diodes, both vacuum and semiconductor, can be used as shot-noise generators. Thermionic diodes and solid-state diodes were developed separately, at approximately the same time, in the early 1900s. Until the 1950s, vacuum tube diodes were used more frequently in radios because the early point-contact semiconductor diodes were less stable.
In 1873, Frederick Guthrie discovered the basic principle of operation of thermionic diodes: a positively charged electroscope could be discharged by bringing a piece of white-hot metal close to it. The same did not apply to a negatively charged electroscope, indicating that current flow was possible in only one direction. Thomas Edison independently rediscovered the principle on February 13, 1880; at the time, Edison was investigating why the filaments of his carbon-filament light bulbs nearly always burned out at the positive-connected end.
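The exponential current–voltage characteristic mentioned above is commonly modelled by the Shockley diode equation; the saturation current and ideality factor used below are illustrative values, not taken from the text:

```python
import math

# Shockley diode equation: I = Is * (exp(V / (n*Vt)) - 1).
# Is (saturation current) and n (ideality factor) are assumed example values.

def diode_current(v, i_s=1e-12, n=1.0, v_t=0.02585):
    """Diode current in amps; Vt ~ 25.85 mV at room temperature."""
    return i_s * (math.exp(v / (n * v_t)) - 1)

# Forward bias conducts strongly past the ~0.6-0.7 V threshold;
# reverse bias leaks only the tiny saturation current.
print(diode_current(0.7))     # large forward current for this Is
print(diode_current(-5.0))    # ~ -Is, i.e. about -1e-12 A of leakage
```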

37.
Lissajous curve
–
This family of curves was investigated by Nathaniel Bowditch in 1815, and later in more detail by Jules Antoine Lissajous in 1857. The appearance of the figure is highly sensitive to the ratio a/b: for a ratio of 1, the figure is an ellipse, with special cases including circles and lines. Another simple Lissajous figure is the parabola. Other ratios produce more complicated curves, which are closed only if a/b is rational. The visual form of these curves is often suggestive of a three-dimensional knot. Visually, the ratio a/b determines the number of lobes of the figure: for example, a ratio of 3/1 or 1/3 produces a figure with three major lobes, and similarly, a ratio of 5/4 produces a figure with five horizontal lobes and four vertical lobes. Rational ratios produce closed or still figures, while irrational ratios produce figures that appear to rotate. The ratio A/B determines the relative width-to-height ratio of the curve; for example, a ratio of 2/1 produces a figure that is twice as wide as it is high. Finally, the value of δ determines the apparent rotation angle of the figure, viewed as if it were actually a three-dimensional curve. For example, δ = 0 produces x and y components that are exactly in phase; in contrast, any non-zero δ produces a figure that appears to be rotated, either as a left–right or an up–down rotation. Lissajous figures where a = 1, b = N and δ = ((N − 1)/N)·(π/2) are Chebyshev polynomials of the first kind of degree N. The animation shows the curve adaptation with continuously increasing a/b fraction from 0 to 1 in steps of 0.01. Below are examples of Lissajous figures with δ = π/2, an odd natural number a, and a natural number b. Prior to modern electronic equipment, Lissajous curves could be generated mechanically by means of a harmonograph; Lissajous curves can also be generated using an oscilloscope.
An octopus circuit can be used to demonstrate the waveform images on an oscilloscope: two phase-shifted sinusoid inputs are applied to the oscilloscope in X-Y mode, and the phase relationship between the signals is presented as a Lissajous figure. In the professional audio world, this method is used for real-time analysis of the phase relationship between the left and right channels of a stereo audio signal; on larger, more sophisticated audio mixing consoles, an oscilloscope may be built in for this purpose. A purely mechanical application of a Lissajous curve with a = 1, b = 2 is in the mechanism of the Mars Light type of oscillating beam lamps popular with railroads in the mid-1900s; the beam in some versions traces out a lopsided figure-8 pattern on its side. When the input to an LTI system is sinusoidal, the output is sinusoidal with the same frequency, but it may have a different amplitude and some phase shift. The figure below summarizes how the Lissajous figure changes over different phase shifts; the phase shifts are all negative so that delay semantics can be used with a causal LTI system, and the arrows show the direction of rotation of the Lissajous figure. A Lissajous curve is also used in experimental tests to determine whether a device may be properly categorized as a memristor.
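The parametric form behind these figures, x = A·sin(at + δ), y = B·sin(bt), is easy to sample; a minimal sketch for the 3/1 ratio mentioned above (three major lobes):

```python
import math

# Sample points of the Lissajous curve x = A*sin(a*t + d), y = B*sin(b*t).

def lissajous(a, b, delta, n=1000, A=1.0, B=1.0):
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        pts.append((A * math.sin(a * t + delta), B * math.sin(b * t)))
    return pts

# a/b = 3/1 with delta = pi/2 gives a closed three-lobed figure.
curve = lissajous(a=3, b=1, delta=math.pi / 2)
print(curve[0])    # (1.0, 0.0), since sin(pi/2) = 1 and sin(0) = 0
```

Because a/b is rational here, the sampled curve closes on itself, matching the closed-figure condition stated in the text; an irrational ratio would never return to its starting point.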

38.
Phase (waves)
–
Phase is the position of a point in time on a waveform cycle. A complete cycle is defined as the interval required for the waveform to return to its initial value. The graphic to the right shows how one cycle constitutes 360° of phase; it also shows how phase is sometimes expressed in radians, where one radian of phase equals approximately 57.3°. Phase can also be an expression of relative displacement between two corresponding features of two waveforms having the same frequency. In sinusoidal functions or in waves, phase has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin, sometimes called phase offset or phase difference; another usage is the fraction of the cycle that has elapsed relative to the origin. A phase shift is any change that occurs in the phase of one quantity. The symbol φ is sometimes referred to as a phase shift or phase offset because it represents a shift from zero phase. For infinitely long sinusoids, a change in φ is the same as a shift in time: if x(t) = A·cos(2πft + φ) is delayed by 1/4 of its cycle, it becomes x(t − T/4) = A·cos(2πf(t − T/4) + φ) = A·cos(2πft + φ − π/2), whose phase is now φ − π/2; it has been shifted by π/2 radians. Phase difference is the difference, expressed in degrees or time, between two waves having the same frequency and referenced to the same point in time. Two oscillators that have the same frequency and no phase difference are said to be in phase; two oscillators that have the same frequency and different phases have a phase difference. The amount by which such oscillators are out of phase with each other can be expressed in degrees from 0° to 360°. If the phase difference is 180 degrees, then the two oscillators are said to be in antiphase. If two interacting waves meet at a point where they are in antiphase, then destructive interference will occur.
It is common for waves of electromagnetic, acoustic or other energy to become superposed in their transmission medium; when that happens, the phase difference determines whether they reinforce or weaken each other. Complete cancellation is possible for waves with equal amplitudes. Time is sometimes used to express position within the cycle of an oscillation. A phase difference is analogous to two athletes running around a track at the same speed and in the same direction but starting at different positions on the track. They pass a given point at different instants in time, but the time difference between them is a constant, the same for every pass, since they are at the same speed and in the same direction. If they were running at different speeds, the phase difference would be undefined.
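The delay-to-phase relation used above (a quarter-cycle delay corresponds to π/2 radians) generalizes to any delay: a sinusoid of frequency f delayed by dt seconds is shifted by 2πf·dt radians. A minimal sketch:

```python
import math

# Phase shift (radians) produced by delaying a sinusoid of frequency
# f_hz by delay_s seconds: phi = 2*pi*f*dt.

def phase_shift_radians(f_hz, delay_s):
    return 2 * math.pi * f_hz * delay_s

f = 100.0                       # Hz (illustrative frequency)
quarter_cycle = 1 / (4 * f)     # delay by 1/4 of the period
half_cycle = 1 / (2 * f)        # delay by 1/2 of the period

print(phase_shift_radians(f, quarter_cycle))                 # pi/2 ~ 1.5708
print(math.degrees(phase_shift_radians(f, half_cycle)))      # 180.0, antiphase
```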

39.
Stereophonic
–
Stereophonic sound or, more commonly, stereo, is a method of sound reproduction that creates an illusion of multi-directional audible perspective. Thus the term applies to so-called quadraphonic and surround-sound systems as well as the more common two-channel systems. It is often contrasted with monophonic, or mono, sound, where audio is heard as coming from one position. In the 2000s, stereo sound is common in entertainment systems such as broadcast radio and TV, recorded music and the cinema. The word stereophonic derives from the Greek στερεός (firm, solid) + φωνή (sound, tone, voice) and was coined in 1927 by Western Electric. Two kinds of stereo are distinguished. The first is true or natural stereo, in which a live sound is captured by an array of microphones; the signal is then reproduced over multiple loudspeakers to recreate, as closely as possible, the live sound. The second is artificial or pan-pot stereo, in which a single-channel sound is reproduced over multiple loudspeakers. By varying the relative amplitude of the signal sent to each speaker, an artificial direction can be suggested; the control used to vary this relative amplitude is known as a pan-pot. By combining multiple pan-potted mono signals together, a complete, yet entirely artificial, sound field can be created. In technical usage, true stereo means sound recording and sound reproduction that uses stereographic projection to encode the relative positions of objects and events recorded. During two-channel stereo recording, two microphones are placed in strategically chosen locations relative to the sound source, with both recording simultaneously. The two recorded channels will be similar, but each will have distinct time-of-arrival and sound-pressure-level information. During playback, the listener's brain uses those subtle differences in timing and sound level to triangulate the positions of the recorded objects. Stereo recordings often cannot be played on monophonic systems without a significant loss of fidelity.
This phenomenon is known as phase cancellation. An early two-channel telephonic process was commercialized in France from 1890 to 1932 as the Théâtrophone, and in England from 1895 to 1925 as the Electrophone; both were services available via coin-operated receivers at hotels and cafés. Modern stereophonic technology was invented in the 1930s by British engineer Alan Blumlein at EMI, who patented stereo records, stereo films, and also surround sound. In early 1931, Blumlein and his wife were at a local cinema; Blumlein declared to his wife that he had found a way to make the sound follow the actor across the screen. The genesis of these ideas is uncertain, but he explained them to Isaac Shoenberg in the late summer of 1931. His earliest notes on the subject are dated 25 September 1931; his patent application was dated 14 December 1931, and was accepted on 14 June 1933 as UK patent number 394,325. The patent covered many ideas in stereo, some of which are used today. These discs used the two walls of the groove at right angles in order to carry the two channels. Much of the development work on this system for cinematic use did not reach completion until 1935; in Blumlein's short test films, his original intent of having the sound follow the actor was fully realised.
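The pan-pot idea described above can be sketched in a few lines: one mono sample is sent to both channels with amplitudes that suggest a direction. Constant-power panning is used here as one common choice; it is an assumption for illustration, not something the text mandates:

```python
import math

# Constant-power pan-pot sketch (assumed technique): the left/right gains
# follow cos/sin of the pan angle so perceived loudness stays roughly even.

def pan(sample, position):
    """position: 0.0 = hard left, 0.5 = centre, 1.0 = hard right."""
    angle = position * math.pi / 2
    return (sample * math.cos(angle), sample * math.sin(angle))

print(pan(1.0, 0.0))                     # (1.0, 0.0): hard left
left, right = pan(1.0, 0.5)
print(round(left, 3), round(right, 3))   # 0.707 0.707 at centre
```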

40.
Signal generator
–
A signal generator is an electronic device that generates repeating or non-repeating electronic signals in either the analog or the digital domain. It is generally used in designing, testing, troubleshooting, and repairing electronic or electroacoustic devices. There are many different types of signal generator, with different purposes and applications and at varying levels of expense; these types include function generators, RF and microwave signal generators, pitch generators, arbitrary waveform generators, and digital pattern generators. In general, no one device is suitable for all possible applications. Traditionally, signal generators have been embedded hardware units, but since the age of multimedia PCs, software signal generators running on general-purpose computers have also become available. A function generator is a device which produces simple repetitive waveforms. Such devices contain an electronic oscillator, a circuit that is capable of creating a repetitive waveform. The most common waveform is a sine wave, but sawtooth, step, square, and triangular waveform oscillators are commonly available, as are arbitrary waveform generators. An arbitrary waveform generator (AWG) is a signal generator that generates arbitrary waveforms within published limits of frequency range and accuracy. Unlike a function generator, which produces a small set of specific waveforms, an AWG allows the user to specify the source waveform. An AWG is generally more expensive than a function generator and often has less bandwidth. AWGs are used in higher-end design and test applications. New high-speed DACs provide up to 16-bit resolution at sample rates in excess of 1 GS/s; these devices provide the foundation for an AWG with the bandwidth and dynamic range to address modern radio and communication applications. Example applications include commercial wireless standards such as Wi-Fi, WiMAX and LTE; broad modulation bandwidth also allows multi-carrier signal generation, necessary for testing receiver adjacent-channel rejection.
RF and microwave signal generators normally have similar features and capabilities. RF signal generators typically range from a few kHz to 6 GHz, while microwave signal generators cover a much wider frequency range, from less than 1 MHz to at least 20 GHz; some models go as high as 70 GHz with a direct coaxial output. RF and microwave signal generators can be classified further as analog or vector signal generators. Analog signal generators based on a sine-wave oscillator were common before the inception of digital electronics, when there was a sharp distinction in purpose and design between radio-frequency and audio-frequency signal generators. RF signal generators are capable of producing CW (continuous wave) tones, and the output frequency can usually be tuned anywhere in their frequency range. Many models offer various types of modulation, either as standard equipment or as an optional capability to the base unit; this can include AM, FM, ΦM (phase modulation) and pulse modulation. Another common feature is a built-in attenuator, which makes it possible to vary the signal's output power. Depending on the manufacturer and model, output powers can range from −135 to +30 dBm. A wide range of output power is desirable, since different applications require different amounts of signal power.
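The dBm figures above are absolute power levels referenced to 1 mW, and the conversion is a one-liner. These are illustrative helper functions, not part of any instrument's API:

```python
import math

def dbm_to_watts(dbm):
    """Power in watts for a level in dBm (decibels relative to 1 milliwatt)."""
    return 1e-3 * 10 ** (dbm / 10.0)

def watts_to_dbm(watts):
    """Level in dBm for a power in watts."""
    return 10.0 * math.log10(watts / 1e-3)

# The +30 dBm upper end quoted above is 1 W; -135 dBm is about 3.2e-17 W.
```

The huge span between those two extremes is why a calibrated built-in attenuator, rather than a simple volume control, is the standard way to set output power.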

41.
Calibration
–
Calibration in measurement technology and metrology is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy. Such a standard could be another measurement device of known accuracy. Strictly, the term calibration means just the act of comparison and does not include any subsequent adjustment. The calibration standard is normally traceable to a national standard held by a National Metrology Institute. This definition states that the calibration process is purely a comparison, but introduces the concept of measurement uncertainty in relating the accuracies of the device under test and the standard. The increasing need for known accuracy and uncertainty, and the need for internationally consistent and comparable standards, has led many countries to establish a National Metrology Institute (NMI), which maintains primary standards of measurement used to provide traceability to customers' instruments by calibration. The NMI supports the metrological infrastructure in that country by establishing an unbroken chain of comparisons; examples of National Metrology Institutes are NPL in the UK, NIST in the United States, PTB in Germany, and many others. Calibration may be done by national standards laboratories operated by the government or by private firms offering metrology services. Quality management systems call for an effective metrology system, which includes formal, periodic, and documented calibration of all measuring instruments; ISO 9000 and ISO 17025 standards require that these activities are performed to a high level. To communicate the quality of a calibration, the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level, evaluated through careful uncertainty analysis. Sometimes a DFS (departure from specification) is required to operate machinery in a degraded state; whenever this happens, it must be in writing and authorized by a manager with the assistance of a calibration technician.
Measuring devices and instruments are categorized according to the quantities they are designed to measure. These categorizations vary internationally (e.g. NIST 150-2G in the U.S.), and the standard instrument for each test device varies accordingly, e.g. a dead-weight tester for pressure gauge calibration and a dry-block temperature tester for temperature gauge calibration. Calibration is often assumed to include adjusting the instrument to match the standard; this is the perception of the instrument's end-user. However, very few instruments can be adjusted to exactly match the standards they are compared to; for the vast majority of calibrations, the process is actually the comparison of an unknown to a known. The calibration process begins with the design of the instrument that needs to be calibrated. The design has to be able to hold a calibration through its calibration interval; in other words, the design has to be capable of measurements that are within engineering tolerance when used within the stated environmental conditions over some reasonable period of time. Having a design with these characteristics increases the likelihood of the measuring instruments performing as expected.
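The "comparison of an unknown to a known" boils down to computing an error against the standard and checking it against the engineering tolerance. A minimal sketch, with a hypothetical helper and invented example readings:

```python
def calibration_check(device_reading, standard_value, tolerance):
    """Compare a device under test against a standard of known accuracy.

    Returns the signed error and whether the device is within tolerance.
    """
    error = device_reading - standard_value
    return error, abs(error) <= tolerance

# Example (invented numbers): a pressure gauge reading 101.8 kPa against a
# dead-weight tester at 101.3 kPa, with a +/-1.0 kPa tolerance, passes.
err, ok = calibration_check(101.8, 101.3, 1.0)
```

A real calibration report would also carry the traceable uncertainty of the standard itself, which this sketch omits.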

42.
Tennis for Two
–
Tennis for Two is a sports video game developed in 1958 by American physicist William Higinbotham, which simulates a game of tennis and was one of the first games developed in the early history of video games. Higinbotham designed the game, displayed on an oscilloscope and played with two custom aluminum controllers, in a few hours, after which he and technician Robert V. Dvorak built it over three weeks. The game was popular during the three-day exhibition at which it debuted, with players lining up to see it, and it was shown again the following year with a larger oscilloscope screen. It was then dismantled and largely forgotten until the late 1970s; since then, it has been celebrated as one of the earliest video games, and Brookhaven has made recreations of the original device. In 1958, Higinbotham worked at the Brookhaven National Laboratory in Upton, New York. He had a bachelor's degree in physics from Williams College and had previously worked as a technician in the physics department at Cornell University while unsuccessfully pursuing a PhD there. He served as the head of a division of the Manhattan Project from 1943 to 1945 and began working at Brookhaven in 1947. Once a year, the government research facility held an exhibition for the public, with one day each for high school students, college students, and the general public. Higinbotham designed a game that used an oscilloscope to display the path of a simulated ball on a tennis court viewed from the side. The attached computer calculated the path of the ball and reversed its path when it hit the ground; the game also simulated the ball hitting the net if it did not achieve a high enough arc, as well as changes in velocity due to drag from air resistance. Two aluminum controllers were attached to the computer, each consisting of a button and a knob: pressing the button hit the ball, and turning the knob controlled the angle of the shot. Originally, Higinbotham considered having a knob to control the velocity of the shot.
The device was designed in a few hours with the help of colleague Dave Potter and was assembled over three weeks with the help of technician Robert V. Dvorak. Excluding the oscilloscope and controllers, the game's circuitry took up approximately the space of a microwave oven. Tennis for Two was first shown on October 18, 1958. The game was rendered as a horizontal line, representing the tennis court, and a short vertical line in the center, representing the tennis net. The first player would press the button on their controller to send the ball, a point of light, over the net, and it would either hit the net, reach the other side of the court, or fly out of bounds; the second player could then hit the ball back with their controller while it was on their side. Hundreds of visitors lined up to play the new game during its debut. Higinbotham later claimed that the high schoolers liked it best: "you couldn't pull them away from it". Due to its popularity, an upgraded version was shown the following year, with enhancements including a larger screen.

43.
Bandwidth (signal processing)
–
Bandwidth is the difference between the upper and lower frequencies in a continuous set of frequencies. It is typically measured in hertz and may refer to passband bandwidth or sometimes to baseband bandwidth. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter or a communication channel. In the case of a low-pass filter or baseband signal, the bandwidth is equal to its upper cutoff frequency. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband or modulated to some higher frequency. Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal, and an FM radio receiver's tuner spans a limited range of frequencies. A government agency may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere; in this sense, each transmitter owns a slice of bandwidth. For different applications there are different precise definitions, which differ for signals and for systems. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful definition refers to the frequencies beyond which the frequency response is small: small could mean less than 3 dB below the maximum value, or more rarely 10 dB below, or it could mean below a certain absolute value. As with any definition of the width of a function, many definitions are suitable for different purposes; in some contexts, the signal bandwidth in hertz refers to the frequency range in which the signal's spectral density is nonzero or above a small threshold value.
That definition is used in calculations of the lowest sampling rate that will satisfy the sampling theorem. The threshold value is often defined relative to the maximum value, and is most commonly the 3 dB point, that is, the point where the spectral density is half its maximum value. The word bandwidth applies to signals as described above, but it can also apply to systems; to say that a system has a certain bandwidth means that the system can process signals of that bandwidth, or that the system reduces the bandwidth of a white-noise input to that bandwidth. If the maximum gain is 0 dB, the 3 dB bandwidth is the range of frequencies where the gain is more than −3 dB, which is also the range where the gain is above 70.7% of the maximum amplitude gain.
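For a sampled frequency response, the −3 dB bandwidth can be read off directly. The sketch below is an illustrative helper over a toy band-pass response (the data points are invented): it keeps every frequency whose gain is within 3 dB of the peak.

```python
def half_power_band(freqs_hz, gains_db):
    """Return (low, high, bandwidth) of the band within 3 dB of the peak gain."""
    peak = max(gains_db)
    passband = [f for f, g in zip(freqs_hz, gains_db) if g >= peak - 3.0]
    return min(passband), max(passband), max(passband) - min(passband)

# Toy band-pass response: gain peaks at 300 Hz and rolls off on both sides.
freqs = [100, 200, 300, 400, 500]
gains = [-10.0, -2.0, 0.0, -2.5, -9.0]
low, high, bw = half_power_band(freqs, gains)
```

The 70.7% amplitude figure quoted above is just 10^(−3/20) ≈ 0.707, the amplitude ratio that corresponds to a 3 dB drop in power.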

44.
Decibel
–
The decibel is a logarithmic unit used to express the ratio of two values of a physical quantity. One of these values is often a reference value, in which case the decibel is used to express the level of the other value relative to this reference. When used in this way, the decibel symbol is often qualified with a suffix that indicates the reference quantity that has been used or some other property of the quantity being measured; for example, dBm indicates a power level referenced to one milliwatt. There are two different scales used when expressing a ratio in decibels, depending on the nature of the quantities. When expressing power quantities, the number of decibels is ten times the logarithm to base 10 of the ratio of the two power quantities; that is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing field quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The difference between the scales reflects the square-law relationship between power and field amplitude, and the two decibel scales are defined so that comparisons can be made between related power and field quantities when they are expressed in decibels. The definition of the decibel is based on the measurement of power in telephony of the early 20th century in the Bell System in the United States. One decibel is one tenth of one bel, named in honor of Alexander Graham Bell. Today, the decibel is used for a wide variety of measurements in science and engineering, most prominently in acoustics, electronics, and control theory; in electronics, the gains of amplifiers and the attenuation of signals are often expressed in decibels. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. The unit for loss was originally Miles of Standard Cable (MSC); the standard telephone cable implied was a cable having uniformly distributed resistance of 88 ohms per loop mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile.
1 TU (transmission unit) was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power level; the definition was conveniently chosen such that 1 TU approximated 1 MSC. In 1928, the Bell System renamed the TU the decibel, one tenth of a newly defined unit for the base-10 logarithm of the power ratio, named the bel in honor of the telecommunications pioneer Alexander Graham Bell. The bel itself is seldom used, as the decibel was the proposed working unit. The decibel is recognized by international bodies such as the International Electrotechnical Commission (IEC). The term field quantity is deprecated by ISO 80000-1, which favors root-power quantity, and in spite of their widespread use, suffixes are not recognized by the IEC or ISO. The ISO standard 80000-3:2006 defines the relevant quantities; the decibel is one-tenth of a bel: 1 dB = 0.1 B.
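The two scales described above differ only in the multiplier: 10 for power ratios and 20 for field (amplitude) ratios. A minimal illustration (helper functions written here for clarity, not a standard API):

```python
import math

def power_db(p, p_ref):
    """Decibels for a power ratio: 10 * log10(P / P_ref)."""
    return 10.0 * math.log10(p / p_ref)

def field_db(a, a_ref):
    """Decibels for a field (amplitude) ratio: 20 * log10(A / A_ref)."""
    return 20.0 * math.log10(a / a_ref)

# A factor-of-10 change: 10 dB for power, 20 dB for amplitude, as in the text.
```

Because power is proportional to amplitude squared, the two functions agree for any related pair of quantities: doubling an amplitude (about 6 dB) quadruples the power (also about 6 dB).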

A multimeter with a built-in clamp facility. Pushing the large button at the bottom opens the lower jaw of the clamp, allowing the clamp to be placed around a conductor (wire). Depending on the sensor, some can measure both AC and DC current.


Manual calibration - US serviceman calibrating a temperature gauge. The device under test is on his left and the test standard on his right.

Automatic calibration - A U.S. serviceman using a 3666C auto pressure calibrator

An instrument rack with tamper-indicating seals

An example of a weighing scale with a ½ ounce calibration error at zero. This is a "zeroing error" which is inherently indicated, and can normally be adjusted by the user, but may be due to the string and rubber band in this case

The Hall effect is the production of a voltage difference (the Hall voltage) across an electrical conductor, transverse …

Hall effect measurement setup for electrons. Initially, the electrons follow the curved arrow, due to the magnetic force. At some distance from the current-introducing contacts, electrons pile up on the left side and deplete from the right side, which creates an electric field ξy in the direction of the assigned VH. VH is negative for some semiconductors where "holes" appear to flow. In steady state, ξy will be strong enough to exactly cancel out the magnetic force, thus the electrons follow the straight arrow (dashed).

Hall effect current sensor with internal integrated circuit amplifier. 8 mm opening. Zero current output voltage is midway between the supply voltages that maintain a 4 to 8 volt differential. Non-zero current response is proportional to the voltage supplied and is linear to 60 amperes for this particular (25 A) device.

Diagram of Hall effect current transducer integrated into ferrite ring.

Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal …

Modern frequency counter

Image: Resonant reed frequency meter

Image: Frequency meter (częstościomierz) indicating 49.9 Hz

As time elapses—here moving left to right on the horizontal axis—the five sinusoidal waves vary, or cycle, regularly at different rates. The red wave (top) has the lowest frequency (i.e., cycles at the slowest rate) while the purple wave (bottom) has the highest frequency (cycles at the fastest rate).

Flat-panel displays are electronic viewing technologies used to enable people to see content (still images, moving …

While flat-panel TVs have existed in research labs since 1964, they did not become the main display technology until the early 2000s, when the technologies became affordable. They are much thinner and lighter than the televisions and monitors of the early 1950s to mid-2000s, which typically used heavy, bulky cathode-ray tube (CRT) picture tubes. The flat-panel TV depicted here is from 2008.

Amazon's Kindle Keyboard e-reader displaying a page of an e-book. The Kindle's image of the book's text will remain onscreen even if the battery runs out, as it is a static screen technology. Without power, however, the user cannot change to a new page.

Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of …

Contax III rangefinder camera with macro photography setting. Because the viewfinder is on top of the lens and because of the close proximity of the subject, goggles are fitted in front of the rangefinder and a dedicated viewfinder is installed to compensate for parallax.

Image: The sun, street light and parallax

Parallax is an angle subtended by a line on a point. In the upper diagram, the earth in its orbit sweeps the parallax angle subtended on the sun. The lower diagram shows an equal angle swept by the sun in a geostatic model. A similar diagram can be drawn for a star except that the angle of parallax would be minuscule.

Phase is the position of a point in time (an instant) on a waveform cycle. A complete cycle is defined as the interval …

Left: the real part of a plane wave moving from top to bottom. Right: the same wave after a central section underwent a phase shift, for example, by passing through a glass of different thickness than the other parts.

Illustration of phase shift. The horizontal axis represents an angle (phase) that is increasing with time.


Baseband bandwidth. Here the bandwidth equals the upper frequency.

A graph of a band-pass filter's gain magnitude, illustrating the concept of −3 dB bandwidth at a gain of approximately 0.707. The frequency axis of this symbolic diagram can be linear or logarithmically scaled.