A person with a mass of 100 kilograms who climbs a 3-meter-high ladder in 5 seconds is doing work at a rate of about 600 watts. Mass times the acceleration due to gravity times height, divided by the time taken to lift the object to that height, gives the rate of doing work, or power.[notes 1]
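The worked figure above (100 kg raised 3 m in 5 s) can be checked with a short calculation. This is an illustrative sketch; the function name `lifting_power` is chosen here and does not come from the text.

```python
def lifting_power(mass_kg, height_m, time_s, g=9.8):
    """Average power P = m*g*h / t needed to raise a mass at constant speed."""
    energy_j = mass_kg * g * height_m  # work done against gravity: 2940 J
    return energy_j / time_s

print(lifting_power(100, 3, 5))  # 588.0 W, i.e. roughly 600 W
```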

A laborer over the course of an 8-hour day can sustain an average output of about 75 watts; higher power levels can be achieved for short intervals and by athletes.[2]

The femtowatt (fW) is equal to one quadrillionth (10−15) of a watt. Technologically important powers measured in femtowatts are typically found in reference to radio and radar receivers. For example, meaningful FM tuner performance figures for sensitivity, quieting, and signal-to-noise require that the RF power applied to the antenna input be specified. These input levels are often stated in dBf (decibels referenced to 1 femtowatt); 1 femtowatt corresponds to 0.2739 microvolt across a 75-ohm load or 0.5477 microvolt across a 300-ohm load, so the specification takes the RF input impedance of the tuner into account.
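The dBf and microvolt figures quoted above follow from P = V²/R. A minimal sketch (the function names are illustrative, not from the text):

```python
import math

def dbf_to_watts(dbf):
    """Decibels referenced to 1 femtowatt -> watts."""
    return 1e-15 * 10 ** (dbf / 10)

def volts_across(power_w, impedance_ohm):
    """RMS voltage dissipating power_w in a resistive load: V = sqrt(P * R)."""
    return math.sqrt(power_w * impedance_ohm)

# 0 dBf = 1 fW, matching the figures quoted above:
print(volts_across(dbf_to_watts(0), 75))   # ~2.739e-07 V (0.2739 microvolt)
print(volts_across(dbf_to_watts(0), 300))  # ~5.477e-07 V (0.5477 microvolt)
```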

The picowatt (pW), not to be confused with the much larger petawatt (PW), is equal to one trillionth (10−12) of a watt. Powers measured in picowatts are typical of radio and radar receivers, acoustics, and radio astronomy.

The microwatt (µW) is equal to one millionth (10−6) of a watt. Powers measured in microwatts are typical of medical instrumentation systems such as the EEG and the ECG, of a wide variety of scientific and engineering instruments, and of radio and radar receivers. Compact solar cells for devices such as calculators and watches are typically measured in microwatts.[3]

The milliwatt (mW) is equal to one thousandth (10−3) of a watt. A typical laser pointer outputs about five milliwatts of light power, whereas a typical hearing aid uses less than one milliwatt.[4] Audio signals and other electronic signal levels are often measured in dBm, referenced to one milliwatt.

The kilowatt (kW) is equal to one thousand (103) watts. This unit is typically used to express the output power of engines and the power of electric motors, tools, machines, and heaters. It is also a common unit for the electromagnetic power output of broadcast radio and television transmitters.

One kilowatt is approximately equal to 1.34 horsepower. A small electric heater with one heating element can use 1.0 kilowatt. The average electric power consumption of a household in the United States is about one kilowatt.[notes 2][5]
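The kilowatt-to-horsepower conversion above can be sketched directly, using the mechanical horsepower of about 745.7 W (the helper name is illustrative):

```python
HP_IN_WATTS = 745.7  # one mechanical horsepower, to four significant figures

def kw_to_hp(kw):
    """Convert kilowatts to mechanical horsepower."""
    return kw * 1000 / HP_IN_WATTS

print(round(kw_to_hp(1.0), 2))  # 1.34, matching the approximation above
```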

Kilowatts of light power are also measured in the output pulses of some lasers.

A surface area of one square meter on Earth typically receives about one kilowatt of sunlight from the sun (the solar irradiance), on a clear day at midday close to the equator.[6]

The megawatt (MW) is equal to one million (106) watts. Many events or machines produce or sustain the conversion of energy on this scale, including large electric motors; large warships such as aircraft carriers, cruisers, and submarines; large server farms or data centers; and some scientific research equipment, such as supercolliders and the output pulses of very large lasers. A large residential or commercial building may use several megawatts in electric power and heat. On railways, modern high-powered electric locomotives typically have a peak power output of 5 or 6 MW, although some produce much more. The Eurostar, for example, uses more than 12 MW, while heavy diesel-electric locomotives typically produce or use 3 to 5 MW. U.S. nuclear power plants have net summer capacities between about 500 and 1300 MW.[7]

The gigawatt (GW) is equal to one billion (109) watts, or 1000 megawatts. This unit is often used for large power plants or power grids. For example, by the end of 2010, power shortages in China's Shanxi province were expected to increase to 5–6 GW,[8] and the installed capacity of wind power in Germany was 25.8 GW.[9] The largest unit (out of four) of the Belgian Doel Nuclear Power Station has a peak output of 1.04 GW.[10] HVDC converters have been built with power ratings of up to 2 GW.[11]

The terawatt (TW) is equal to one trillion (1012) watts. The total power used by humans worldwide is commonly measured in terawatts (see primary energy). The most powerful lasers from the mid-1960s to the mid-1990s produced power in terawatts, but only for nanosecond time frames. The average lightning strike peaks at 1 terawatt, but these strikes last only about 30 microseconds.

The petawatt (PW) is equal to one quadrillion (1015) watts and can be produced by the current generation of lasers for time scales on the order of picoseconds (10−12 s). One such laser is Lawrence Livermore's Nova laser, which achieved a power output of 1.25 PW (1.25×1015 W) by a process called chirped pulse amplification. The duration of the pulse was roughly 0.5 ps (5×10−13 s), giving a total energy of 600 J.[12] Another example is the Laser for Fast Ignition Experiments (LFEX) at the Institute of Laser Engineering (ILE), Osaka University, which achieved a power output of 2 PW for a duration of approximately 1 ps.[13][14]

In the electric power industry, megawatt electrical (MWe)[16][17] refers by convention to the electric power produced by a generator, while megawatt thermal or thermal megawatt (MWt or MWth)[18] refers to the thermal power produced by the plant. For example, the Embalse nuclear power plant in Argentina uses a fission reactor to generate 2109 MWt (i.e. heat), which creates steam to drive a turbine, which generates 648 MWe (i.e. electricity). Other SI prefixes are sometimes used, for example gigawatt electrical (GWe). The International Bureau of Weights and Measures, which maintains the SI standard, states that further information about a quantity should not be attached to the unit symbol but instead to the quantity symbol (i.e., Pthermal = 270 W rather than P = 270 Wth), and so these units are non-SI.[19] In compliance with SI, the energy company DONG Energy uses the unit megawatt for produced electrical power and the equivalent unit megajoule/s for delivered heating power in a combined heat and power station such as Avedøre Power Station.[20]
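The Embalse figures quoted above imply a thermal-to-electric conversion efficiency of roughly 31%; a quick check (the helper name is illustrative):

```python
def thermal_efficiency(mwe, mwt):
    """Fraction of reactor heat output converted to electrical output."""
    return mwe / mwt

# Embalse: 648 MWe from 2109 MWt, per the figures above
print(f"{thermal_efficiency(648, 2109):.1%}")  # 30.7%
```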

Radio stations usually report the power of their transmitters in units of watts, referring to the effective radiated power. This is the relative power of the transmission when it is directed towards the horizon for maximum geographic coverage, rather than broadcast uniformly in all directions.

The terms power and energy are frequently confused. Power is the rate at which energy is generated or consumed and hence is measured in units (e.g. watts) that represent energy per unit time.

For example, when a light bulb with a power rating of 100 W is turned on for one hour, the energy used is 100 watt hours (W·h), 0.1 kilowatt hour, or 360 kJ. This same amount of energy would light a 40-watt bulb for 2.5 hours, or a 50-watt bulb for 2 hours. A power station would be rated in multiples of watts (for example, the Three Gorges Dam is rated at approximately 22 gigawatts), but its annual energy sales or output would be in multiples of watt hours. Major energy production or consumption is often expressed as terawatt hours for a given period, often a calendar year or financial year. One terawatt hour is equal to a sustained power of approximately 114 megawatts for a period of one year.
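The energy-versus-power arithmetic above can be verified in a few lines (the function names are illustrative):

```python
def energy_watt_hours(power_w, hours):
    """Energy = power x time."""
    return power_w * hours

def watt_hours_to_kj(wh):
    """1 Wh = 3600 J = 3.6 kJ."""
    return wh * 3.6

print(energy_watt_hours(100, 1))       # 100 Wh for the 100 W bulb
print(watt_hours_to_kj(100))           # 360.0 kJ
print(energy_watt_hours(40, 2.5))      # also 100 Wh: a 40 W bulb for 2.5 h
# One terawatt hour spread over a year, as sustained power in megawatts:
print(round(1e12 / (365 * 24) / 1e6))  # 114
```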

The watt second is a unit of energy, equal to the joule. One kilowatt hour is 3,600,000 watt seconds. The watt second is used, for example, to rate the energy storage of flash lamps used in photography, although the term joule is generally employed.[citation needed]

Invented and incorrect terms such as watts per hour (W/h) are often misused when watts would be correct.[21] Watts per hour would properly refer to a change of power per hour. The unit might be useful, in an obscure case, to characterize the ramp-up behavior of power plants whose output can change only slowly. For example, a power plant that changes its power output from 0 MW to 1 MW in 15 minutes has a ramp-up rate of 4 MW/h.
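The ramp-rate example above is a simple rate calculation (the helper name is illustrative):

```python
def ramp_rate_mw_per_hour(delta_mw, minutes):
    """Average change in output power per hour."""
    return delta_mw / (minutes / 60)

print(ramp_rate_mw_per_hour(1, 15))  # 4.0 MW/h, as in the example above
```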

^The energy used in climbing the ladder is given by mgh. Setting m = 100 kg, g = 9.8 m/s2 and h = 3 m gives 2940 J. Dividing this by the time taken (5 s) gives a power of 588 W.

^Average household electric power consumption is 1.19 kW in the US, 0.53 kW in the UK. In India it is 0.13 kW (urban) and 0.03 kW (rural) – computed from GJ figures quoted by Nakagami, Murakoshi and Iwafune.

1.
System of measurement
–
A system of measurement is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated, and defined for the purposes of science and commerce. Systems of measurement in modern use include the metric system, the imperial system, and United States customary units. The French Revolution gave rise to the metric system, which has since spread around the world. In most systems, length, mass, and time are base quantities; later scientific developments showed that either electric charge or electric current could be added to extend the set of base quantities, from which many other metrological units can be easily defined. Other quantities, such as power and speed, are derived from this set. Such arrangements were satisfactory in their own contexts, and the preference for a more universal and consistent system spread only gradually with the growth of science. Changing a measurement system has substantial financial and cultural costs, which must be offset against the advantages of using a more rational system. Nevertheless, pressure built up, including from scientists and engineers, for conversion to a more rational system. The unifying characteristic is that each system rests on some standard definition; eventually cubits and strides gave way to customary units that met the needs of merchants and scientists. In the metric system and other recent systems, a single basic unit is used for each base quantity, and secondary units are often derived from the basic units by multiplying by powers of ten; thus the basic unit of length is the metre, and a distance of 1.234 m is 1,234 millimetres. Metrication is complete or nearly complete in almost all countries, though US customary units are heavily used in the United States and to some degree in Liberia, and traditional Burmese units of measurement are used in Burma. U.S. units are used in limited contexts in Canada due to the large volume of trade, and there is also considerable use of Imperial weights and measures, despite de jure Canadian conversion to metric. In the United States, metric units are used almost universally in science, widely in the military, and partially in industry, but customary units predominate in household use. At retail stores, the liter is a commonly used unit for volume, especially on bottles of beverages. Some other standard non-SI units are still in use, such as nautical miles and knots in aviation. Metric systems of units have evolved since the adoption of the first well-defined system in France in 1795; during this evolution their use has spread throughout the world, first to non-English-speaking countries, and then to English-speaking countries.

System of measurement
–
A baby bottle that measures in three measurement systems—imperial (UK), US customary, and metric.

2.
SI base unit
–
The International System of Units defines seven units of measure as a basic set from which all other SI units can be derived. The SI base units form a set of mutually independent dimensions, as required by the dimensional analysis commonly employed in science. Units named after people take a capitalized symbol: thus the kelvin, named after Lord Kelvin, has the symbol K, and the ampere, named after André-Marie Ampère, has the symbol A. Many other units, such as the litre, are not part of the SI. The definitions of the units have been modified several times since the Metre Convention in 1875. Since the redefinition of the metre in 1960, the kilogram has been the only unit that is directly defined in terms of a physical artifact; the mole, the ampere, and the candela are linked through their definitions to the mass of the platinum–iridium cylinder stored in a vault near Paris. It has long been an objective in metrology to define the kilogram in terms of a fundamental constant; two possibilities have attracted particular attention, the Planck constant and the Avogadro constant. The 23rd CGPM decided to postpone any formal change until the next General Conference in 2011.

SI base unit
–
The seven SI base units and the interdependency of their definitions: for example, to extract the definition of the metre from the speed of light, the definition of the second must be known while the ampere and candela are both dependent on the definition of energy which in turn is defined in terms of length, mass and time.

3.
Kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimeter of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to that of 1 dm3 of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name, and it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant. The decision was originally deferred until 2014; in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme replaced gravet; the French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States.
In the United Kingdom both spellings are used, with kilogram having become by far the more common. UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometer. In 1935 this system was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement; this led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property: it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, since objects in microgravity still retain their mass and inertia, the ratio of the force of gravity on two objects, measured by a scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.

Kilogram
–
A domestic-quality one-kilogram weight made of cast iron (the credit card is for scale). The shape follows OIML recommendation R52 for cast-iron hexagonal weights
Kilogram
–
Measurement of weight – gravitational attraction of the measurand causes a distortion of the spring
Kilogram
–
Measurement of mass – the gravitational force on the measurand is balanced against the gravitational force on the weights.
Kilogram
–
The Arago kilogram, an exact copy of the "Kilogramme des Archives" commissioned in 1821 by the US under supervision of French physicist François Arago that served as the US's first kilogram standard of mass until 1889, when the US converted to primary metric standards and received its current kilogram prototypes, K4 and K20.

4.
Metre
–
The metre or meter is the base unit of length in the International System of Units. The metre is defined as the length of the path travelled by light in a vacuum in 1/299792458 of a second. The metre was originally defined in 1793 as one ten-millionth of the distance from the equator to the North Pole. In 1799, it was redefined in terms of a prototype metre bar. In 1960, the metre was redefined in terms of a number of wavelengths of a certain emission line of krypton-86, and in 1983 the current definition was adopted. The imperial inch is defined as 0.0254 metres; one metre is about 3 3⁄8 inches longer than a yard. Metre is the standard spelling of the metric unit for length in nearly all English-speaking nations except the United States and the Philippines, which use meter. Measuring devices are spelled -meter in all variants of English; the suffix -meter has the same Greek origin as the unit of length. This range of uses is found in Latin, French, and English. In 1668 the English cleric and philosopher John Wilkins proposed in an essay a decimal-based unit of length; as a result of the French Revolution, the French Academy of Sciences later charged a commission with determining a single scale for all measures. Wilkins proposed using Christopher Wren's suggestion of defining the metre using a pendulum with a length which produced a half-period of one second; Christiaan Huygens had observed that length to be 38 Rijnland inches or 39.26 English inches, the equivalent of what is now known to be 997 mm. No official action was taken regarding this suggestion. In the 18th century, there were two approaches to the definition of the unit of length. One favoured Wilkins's approach, defining the metre in terms of the length of a pendulum which produced a half-period of one second.
The other approach was to define the metre as one ten-millionth of the length of a quadrant along the Earth's meridian, that is, the distance from the Equator to the North Pole. This means that the quadrant would have been defined as exactly 10000000 metres at that time. To establish a universally accepted foundation for the definition of the metre, more measurements of this meridian were needed. This portion of the meridian, assumed to be the same length as the Paris meridian, was to serve as the basis for the length of the half meridian connecting the North Pole with the Equator.

5.
International System of Units
–
The International System of Units is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative begun in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram to an international body; in 1921, the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, and kelvin; in 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre, and grave for the units of length, area, capacity, and mass.
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass and length. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram, which replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the prime purpose was periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the definitive version of all official documents published by or on behalf of the CGPM is the French-language version.

6.
Joule
–
The joule, symbol J, is a derived unit of energy in the International System of Units. It is equal to the energy transferred to an object when a force of one newton acts on that object in the direction of its motion through a distance of one metre. It is also the energy dissipated as heat when a current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule. One joule can also be defined as the work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb volt; this relationship can be used to define the volt. It is likewise the work required to produce one watt of power for one second, or one watt second; this relationship can be used to define the watt. As with every International System of Units unit named for a person, the first letter of its symbol is upper case; note that degree Celsius conforms to this rule because the d is lowercase (based on The International System of Units, section 5.2). The CGPM has given the unit of energy the name joule. The use of newton metres for torque and joules for energy is helpful to avoid misunderstandings and miscommunications. The distinction may also be seen in the fact that energy is a scalar, the dot product of a force vector and a displacement vector. By contrast, torque is a vector, the cross product of a distance vector and a force vector. Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is torque, and θ is the angle swept. Since radians are dimensionless, it follows that torque and energy have the same dimensions. One joule in everyday life represents approximately: the energy required to lift a medium-size tomato 1 m vertically from the surface of the Earth, the energy released when that same tomato falls back down to the ground, or the energy required to accelerate a 1 kg mass at 1 m·s−2 through a 1 m distance in space.
It is also roughly the heat required to raise the temperature of 1 g of water by 0.24 °C, the typical energy released as heat by a person at rest every 1/60 s, the kinetic energy of a 50 kg human moving very slowly, the kinetic energy of a 56 g tennis ball moving at 6 m/s, the kinetic energy of an object with mass 1 kg moving at √2 ≈ 1.4 m/s, or the amount of electricity required to light a 1 W LED for 1 s. The joule is also a watt second, and the unit for electricity sales to homes is the kW·h. For additional examples, see Orders of magnitude. The zeptojoule is equal to one sextillionth of one joule; 160 zeptojoules is approximately one electronvolt. The nanojoule is equal to one billionth of one joule; one nanojoule is about 1/160 of the kinetic energy of a flying mosquito.
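Several of the everyday one-joule examples above can be checked numerically; a sketch, assuming a medium tomato of roughly 100 g (the function names are illustrative):

```python
def lift_energy(mass_kg, height_m, g=9.8):
    """Potential energy gained: E = m*g*h."""
    return mass_kg * g * height_m

def kinetic_energy(mass_kg, speed_ms):
    """E = (1/2) * m * v^2."""
    return 0.5 * mass_kg * speed_ms ** 2

print(lift_energy(0.1, 1))          # ~0.98 J: lifting a ~100 g tomato by 1 m
print(kinetic_energy(0.056, 6))     # ~1.01 J: a 56 g tennis ball at 6 m/s
print(kinetic_energy(1, 2 ** 0.5))  # ~1.0 J: 1 kg moving at sqrt(2) m/s
```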

7.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion. Velocity is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector quantity; both magnitude and direction are needed to define it. The scalar absolute value of velocity is called speed, a coherent derived quantity measured in the SI system in metres per second (m/s). For example, 5 metres per second is a scalar, whereas 5 metres per second east is a vector. If there is a change in speed, direction, or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction; constant direction constrains the object to motion in a straight path, thus a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed but a changing direction; hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified; however, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The big difference can be noticed when we consider movement around a circle: the average velocity is calculated by considering only the displacement between the starting and end points, while the average speed considers the total distance traveled. Velocity is defined as the rate of change of position with respect to time. Average velocity can be calculated as v̄ = Δx/Δt. The average velocity is always less than or equal to the average speed of an object.
This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In the one-dimensional case, the area under a velocity vs. time graph is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t); in the figure, this corresponds to the area under the curve labeled s. The derivative of the position with respect to time gives the change in position divided by the change in time, and although velocity is defined as the rate of change of position, it is common to start instead with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the tangent to the curve of a v(t) graph at that point. In other words, acceleration is defined as the derivative of velocity with respect to time; from there, we can obtain an expression for velocity as the area under an acceleration vs. time graph.
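The average-velocity definition v̄ = Δx/Δt, and the circular-track point above, can be sketched as follows (the function name is illustrative):

```python
def average_velocity(x_start, x_end, t_start, t_end):
    """v-bar = delta-x / delta-t: displacement over elapsed time."""
    return (x_end - x_start) / (t_end - t_start)

# One full lap of a track returns to the start: displacement is zero,
# so average velocity is zero even though the average speed was not.
print(average_velocity(0.0, 0.0, 0.0, 60.0))    # 0.0 m/s
print(average_velocity(0.0, 120.0, 0.0, 60.0))  # 2.0 m/s
```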

Velocity
–
As a change of direction occurs while the cars turn on the curved track, their velocity is not constant.

8.
Newton (unit)
–
The newton is the International System of Units derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics; see below for the conversion factors. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM, in resolution 7, adopted the name newton for this force; the MKS system then became the blueprint for today's SI system of units, and the newton thus became the standard unit of force in le Système International d'Unités. This SI unit is named after Isaac Newton; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that degree Celsius conforms to this rule because the d is lowercase (based on The International System of Units, section 5.2). Newton's second law of motion states that F = ma, where F is the applied force, m is the mass of the object receiving the force, and a is its acceleration. The newton is therefore 1 N = 1 kg·m/s2, where N stands for newton, kg for kilogram, m for metre, and s for second. In dimensional analysis, F = MLT−2, where F is force, M is mass, L is length, and T is time. At average gravity on Earth, a kilogram mass exerts a force of about 9.8 newtons; an average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter jet engine are both around 130 kN. One kilonewton, 1 kN, is 102.0 kgf (1 kN = 102 kg × 9.81 m/s2), so, for example, a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for the rated values of fasteners and Earth anchors, working loads in tension and in shear, the thrust of rocket engines and launch vehicles, and the clamping forces of the various moulds in injection moulding machines used to manufacture plastic parts.
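Newton's second law and the kilonewton figures above can be checked in a few lines (the helper name is illustrative):

```python
def force_newtons(mass_kg, accel_ms2):
    """Newton's second law: F = m * a."""
    return mass_kg * accel_ms2

print(round(force_newtons(1, 9.81), 2))  # 9.81 N: weight of 1 kg at standard gravity
print(round(1000 / 9.81))                # 102 kg supported per kilonewton
```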

9.
Electromagnetism
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force usually exhibits electromagnetic fields such as fields, magnetic fields. The other three fundamental interactions are the interaction, the weak interaction, and gravitation. The word electromagnetism is a form of two Greek terms, ἤλεκτρον, ēlektron, amber, and μαγνῆτις λίθος magnētis lithos, which means magnesian stone. The electromagnetic force plays a role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of forces between individual atoms and molecules in matter, and is a manifestation of the electromagnetic force. Electrons are bound by the force to atomic nuclei, and their orbital shapes. The electromagnetic force governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms, there are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described as electric potential, although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and electromagnetic force are unified as a single electroweak force. In the history of the universe, during the epoch the unified force broke into the two separate forces as the universe cooled. Originally, electricity and magnetism were considered to be two separate forces, Magnetic poles attract or repel one another in a manner similar to positive and negative charges and always exist as pairs, every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding magnetic field outside the wire. Its direction depends on the direction of the current in the wire. 
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or when a magnet is moved towards or away from it. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation: as he was setting up his materials, he noticed a compass needle deflect away from magnetic north when the electric current from the battery he was using was switched on. At the time of discovery, Ørsted did not suggest any explanation of the phenomenon; however, three months later he began more intensive investigations.

10.
Potential difference
–
Voltage, also called electric potential difference, electric pressure or electric tension, is the difference in electric potential energy between two points per unit electric charge. The voltage between two points is equal to the work done per unit of charge against a static electric field to move a test charge between the two points. It is measured in units of volts. Voltage can be caused by static electric fields, by electric current through a magnetic field, by time-varying magnetic fields, or by some combination of these three. A voltmeter can be used to measure the voltage between two points in a system; often a common reference potential, such as the ground of the system, is used as one of the points. A voltage may represent either a source of energy (electromotive force) or lost, used, or stored energy (potential drop).

Given two points in space, x_A and x_B, voltage is the difference in electric potential between those two points. Electric potential must be distinguished from electric potential energy by noting that the potential is a per-unit-charge quantity. Like mechanical potential energy, the zero of electric potential can be chosen at any point, so the difference in potential, i.e. the voltage, is the quantity which is physically meaningful. The voltage from point A to point B is equal to the work which would have to be done, per unit charge, against or by the electric field to move the charge from A to B. The voltage between the two ends of a path is the energy required to move a small electric charge along that path, divided by the magnitude of the charge. Mathematically, this is expressed as the line integral of the electric field along that path. In the general case, both a static electric field and a dynamic electromagnetic field must be included in determining the voltage between two points.

Historically, this quantity has also been called "tension" and "pressure". Pressure is now obsolete, but tension is still used, for example within the phrase "high tension", which is commonly used in thermionic-valve-based electronics.
Voltage is defined so that negatively charged objects are pulled towards higher voltages; therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage. Current can flow from lower voltage to higher voltage, but only when a source of energy is present to push it against the opposing electric field. This is the case within any electric power source: for example, inside a battery, chemical reactions provide the energy needed for ion current to flow from the negative to the positive terminal.

The electric field is not the only factor determining charge flow in a material, and the electric potential of a material is not even a well-defined quantity, since it varies on the subatomic scale. A more convenient definition of voltage can be found instead in the concept of the Fermi level: in this case the voltage between two bodies is the thermodynamic work required to move a unit of charge between them.
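The statement that voltage is the line integral of the electric field can be made concrete with a small numeric sketch. The function names and the uniform-field example are illustrative assumptions, not from the source.

```python
# Sketch: voltage as work per unit charge, V = integral of E(x) dx from A to B.
# Assumes a one-dimensional field E(x) for simplicity.


def voltage_uniform_field(e_field: float, distance: float) -> float:
    """Potential difference across `distance` in a uniform field (V = E * d)."""
    return e_field * distance


def voltage_numeric(e_of_x, x_a: float, x_b: float, steps: int = 10000) -> float:
    """Midpoint-rule integration of E(x) dx from x_a to x_b."""
    dx = (x_b - x_a) / steps
    return sum(e_of_x(x_a + (i + 0.5) * dx) * dx for i in range(steps))


# A 100 V/m uniform field over 0.5 m gives 50 V either way:
print(voltage_uniform_field(100.0, 0.5))           # 50.0
print(voltage_numeric(lambda x: 100.0, 0.0, 0.5))  # ~50.0
```

For non-uniform fields only the integral form applies, which is why the article stresses the line-integral definition rather than a simple product.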

11.
Conversion of units
–
Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors. The process of conversion depends on the situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as: the precision and accuracy of measurement and the associated uncertainty of measurement; the statistical confidence interval or tolerance interval of the initial measurement; the number of significant figures of the measurement; the intended use of the measurement, including the engineering tolerances; and historical definitions of the units and their derivatives used in old measurements, e.g. international foot vs. US survey foot.

Some conversions from one system of units to another need to be exact; this is sometimes called soft conversion. It does not involve changing the configuration of the item being measured. By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system, and it sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used.

A conversion factor is used to change the units of a quantity without changing its value. The unity bracket method of unit conversion consists of a fraction in which the denominator is equal to the numerator, but expressed in different units. Because of the identity property of multiplication, the value of a number will not change as long as it is multiplied by one; so, as long as the numerator and denominator of the fraction are equivalent, they will not affect the value of the measured quantity. There are many applications that offer thousands of units with conversions between them.
For example, the free software movement offers the command-line utility GNU units for Linux. This article gives lists of conversion factors for each of a number of physical quantities, which are listed in the index. For each physical quantity, a number of different units are shown. Conversions between units in the metric system can be discerned from their prefixes and are thus not listed in this article; exceptions are made if the unit is commonly known by another name. Within each table, the units are listed alphabetically and the SI units are highlighted. Notes: see Weight for detail of the mass/weight distinction and conversion.
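The unity bracket method described above can be sketched in a few lines. The example quantity (miles per hour to metres per second) and the function name are illustrative assumptions; the conversion factors themselves are the exact international definitions.

```python
# Sketch of the "unity bracket" method: multiply by fractions equal to one.
# Example: convert 65 miles per hour to metres per second.

MILE_IN_METRES = 1609.344    # exact international mile
HOUR_IN_SECONDS = 3600.0     # exact


def mph_to_mps(mph: float) -> float:
    # 65 mi/h x (1609.344 m / 1 mi) x (1 h / 3600 s)
    # Each bracketed fraction equals one, so the value is unchanged.
    return mph * MILE_IN_METRES / HOUR_IN_SECONDS


print(mph_to_mps(65.0))  # ~29.06 m/s
```

Because each bracket is numerically one, the chain of multiplications only relabels the units; this is the identity-of-multiplication argument made in the text.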

12.
Steam engine
–
A steam engine is a heat engine that performs mechanical work using steam as its working fluid. Steam engines are external combustion engines, where the working fluid is separated from the combustion products. Non-combustion heat sources such as solar power, nuclear power or geothermal energy may also be used. The ideal thermodynamic cycle used to analyze this process is called the Rankine cycle. In the cycle, water is heated and transforms into steam within a boiler operating at a high pressure. When expanded through pistons or turbines, mechanical work is done. The reduced-pressure steam is then exhausted to the atmosphere, or condensed and pumped back into the boiler. Specialized devices such as steam hammers and steam pile drivers depend on the steam pressure supplied from a separate boiler.

The use of boiling water to produce mechanical motion goes back over 2000 years. The Spanish inventor Jerónimo de Ayanz y Beaumont obtained the first patent for a steam engine in 1606. In 1698 Thomas Savery patented a steam pump that used steam in direct contact with the water being pumped: Savery's steam pump used condensing steam to create a vacuum and draw water into a chamber, and then applied pressurized steam to further pump the water. Thomas Newcomen's atmospheric engine was the first commercial steam engine using a piston. In 1781 James Watt patented a steam engine that produced continuous rotary motion. Watt's ten-horsepower engines enabled a wide range of manufacturing machinery to be powered; the engines could be sited anywhere that water and coal or wood fuel could be obtained. By 1883, engines that could provide 10,000 hp had become feasible. The stationary steam engine was a key component of the Industrial Revolution.

The aeolipile described by Hero of Alexandria in the 1st century AD is considered to be the first recorded steam engine.
Torque was produced by steam jets exiting the turbine. In the Spanish Empire, the inventor Jerónimo de Ayanz y Beaumont obtained a patent for a steam engine in 1606. Thomas Savery, in 1698, patented the first practical engine, which worked by atmospheric pressure and had no piston or moving parts, only taps. It was a kind of thermic syphon, in which steam was admitted to an empty container and then condensed. The vacuum thus created was used to raise water from the sump at the bottom of the mine.

13.
Hydroelectricity
–
Hydroelectricity is electricity produced from hydropower. In 2015, hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity. Hydropower is produced in 150 countries, with the Asia-Pacific region generating 33 percent of global hydropower in 2013. China is the largest hydroelectricity producer, with 920 TWh of production in 2013, representing 16.9 percent of its domestic electricity use. The cost of hydroelectricity is relatively low, making it a competitive source of renewable electricity. A hydro station consumes no water, unlike coal or gas plants, and the average cost of electricity from a hydro station larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. With a dam and reservoir it is also a flexible source of electricity, since the amount produced by the station can be changed up or down very quickly to adapt to changing energy demands. Once a hydroelectric complex is constructed, the project produces no direct waste.

Hydropower has been used since ancient times to grind flour and perform other tasks. In the mid-1770s, French engineer Bernard Forest de Bélidor published Architecture Hydraulique, which described vertical- and horizontal-axis hydraulic machines. By the late 19th century, the electrical generator had been developed and could now be coupled with hydraulics; the growing demand arising from the Industrial Revolution would drive development as well. In 1878 the world's first hydroelectric power scheme was developed at Cragside in Northumberland, England by William George Armstrong. It was used to power an arc lamp in his art gallery. The old Schoelkopf Power Station No. 1, near Niagara Falls on the U.S. side, began to produce electricity in 1881. The first Edison hydroelectric power station, the Vulcan Street Plant, began operating on September 30, 1882, in Appleton, Wisconsin. By 1886 there were 45 hydroelectric power stations in the U.S. and Canada, and by 1889 there were 200 in the U.S. alone.

At the beginning of the 20th century, many small hydroelectric power stations were being constructed by commercial companies in mountains near metropolitan areas. Grenoble, France held the International Exhibition of Hydropower and Tourism, with over one million visitors. By 1920, when 40% of the power produced in the United States was hydroelectric, the Federal Power Act was enacted into law. The Act created the Federal Power Commission to regulate hydroelectric power stations on federal land. As the power stations became larger, their associated dams developed additional purposes, including flood control, irrigation and navigation. Federal funding became necessary for large-scale development, and federally owned corporations, such as the Tennessee Valley Authority, were created. Hydroelectric power stations continued to become larger throughout the 20th century; hydropower was referred to as "white coal" for its power and plenty. Hoover Dam's initial 1,345 MW power station was the world's largest hydroelectric power station in 1936.

Image: The Three Gorges Dam in Central China is the world's largest power-producing facility of any kind.
Image: Museum hydroelectric power plant "Under the Town" in Serbia, built in 1900.
Image: Turbine row at Los Nihuiles Power Station in Mendoza, Argentina.

14.
Decibel
–
The decibel (dB) is a logarithmic unit used to express the ratio of two values of a physical quantity. One of these values is often a reference value, in which case the decibel is used to express the level of the other value relative to this reference. When used in this way, the decibel symbol is often qualified with a suffix that indicates the reference quantity that has been used or some other property of the quantity being measured; for example, dBm indicates a power level referenced to one milliwatt.

There are two different scales used when expressing a ratio in decibels, depending on the nature of the quantities. When expressing power quantities, the number of decibels is ten times the logarithm to base 10 of the ratio of the two power quantities; that is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing field quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The difference between the scales reflects the fact that in linear media power is proportional to the square of the field amplitude; the two scales are defined this way so that related power and field quantities have the same level when expressed in decibels.

The definition of the decibel is based on the measurement of power in telephony in the early 20th century in the Bell System in the United States. One decibel is one tenth of one bel, named in honor of Alexander Graham Bell. Today, the decibel is used for a wide variety of measurements in science and engineering, most prominently in acoustics, electronics, and control theory; in electronics, the gains of amplifiers and the attenuation of signals are often expressed in decibels. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. The unit for loss was originally Miles of Standard Cable (MSC); the standard telephone cable implied was a cable having uniformly distributed resistance of 88 ohms per loop mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile.
1 TU (transmission unit) was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power level; the definition was conveniently chosen such that 1 TU approximated 1 MSC. In 1928, the Bell System renamed the TU the decibel, one tenth of a newly defined unit for the base-10 logarithm of the power ratio, which was named the bel in honor of the telecommunications pioneer Alexander Graham Bell. The bel itself is seldom used, as the decibel was the proposed working unit. The decibel is recognized by international bodies such as the International Electrotechnical Commission (IEC). The term "field quantity" is deprecated by ISO 80000-1, which favors "root-power quantity". In spite of their widespread use, suffixes (as in dBm) are not recognized by the IEC or ISO. The ISO standard 80000-3:2006 defines the decibel as one-tenth of a bel: 1 dB = 0.1 B.
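The two decibel scales above can be written out directly. This is a minimal sketch; the function names are illustrative, and the last line assumes the usual relation P = V²/R to connect a femtowatt reference to a voltage across a 75-ohm load.

```python
import math

# Sketch: decibel relations for power and field (root-power) quantities.


def db_power(ratio: float) -> float:
    """Level of a power ratio: 10 * log10(P / P_ref)."""
    return 10.0 * math.log10(ratio)


def db_field(ratio: float) -> float:
    """Level of a field-quantity ratio: 20 * log10(V / V_ref)."""
    return 20.0 * math.log10(ratio)


def dbm(power_watts: float) -> float:
    """Power level referenced to one milliwatt."""
    return db_power(power_watts / 1e-3)


print(db_power(10.0))          # a 10x power change is 10.0 dB
print(db_field(10.0))          # a 10x amplitude change is 20.0 dB
print(dbm(1.0))                # 1 W is 30.0 dBm
print(math.sqrt(1e-15 * 75))   # 0 dBf across 75 ohms: ~0.274 microvolt
```

The final print shows why the two scales agree: 1 femtowatt dissipated in 75 ohms corresponds to about 0.2739 microvolts, and doubling that voltage (+6 dB on the field scale) quadruples the power (+6 dB on the power scale).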

15.
Radio astronomy
–
Radio astronomy is a subfield of astronomy that studies celestial objects at radio frequencies. The first detection of radio waves from an astronomical object was in 1932. Subsequent observations have identified a number of different sources of radio emission; these include stars and galaxies, as well as entirely new classes of objects, such as radio galaxies, quasars, pulsars, and masers. The discovery of the cosmic microwave background radiation, regarded as evidence for the Big Bang theory, was made through radio astronomy.

Before Jansky observed the Milky Way in the 1930s, physicists speculated that radio waves could be observed from astronomical sources. In the 1860s, James Clerk Maxwell's equations had shown that electromagnetic radiation is associated with electricity and magnetism, and could exist at any wavelength. Early attempts to detect such emission were unable to detect anything due to limitations of the instruments. The discovery of the radio-reflecting ionosphere in 1902 led physicists to conclude that the layer would bounce any astronomical radio transmission back into space, making it undetectable.

Karl Jansky made the discovery of the first astronomical radio source serendipitously in the early 1930s. As an engineer with Bell Telephone Laboratories, he was investigating static that interfered with short-wave transatlantic voice transmissions. Using a large directional antenna, Jansky noticed that his analog pen-and-paper recording system kept recording a repeating signal of unknown origin. Since the signal peaked about every 24 hours, Jansky originally suspected the source of the interference was the Sun crossing the view of his directional antenna. Continued analysis showed that the source was not following the 24-hour daily cycle of the Sun exactly, and he concluded that since the Sun (and therefore other stars) were not large emitters of radio noise, the strange radio interference might be generated by interstellar gas and dust in the galaxy.
Jansky announced his discovery in 1933. He wanted to investigate the radio waves from the Milky Way in further detail, but Bell Labs reassigned him to another project, so he did no further work in the field of astronomy. His pioneering efforts in the field have been recognized by the naming of the fundamental unit of flux density, the jansky, in his honor. Grote Reber was inspired by Jansky's work, and built a radio telescope 9 m in diameter in his backyard in 1937. He began by repeating Jansky's observations, and then conducted the first sky survey at radio frequencies. On February 27, 1942, James Stanley Hey, a British Army research officer, made the first detection of radio waves emitted by the Sun. Later that year George Clark Southworth, at Bell Labs like Jansky, also detected radio waves from the Sun. Both researchers were bound by wartime security surrounding radar, so Reber, who was not, published his 1944 findings first. Several other people independently discovered solar radio waves, including E. Schott in Denmark. At Cambridge University, where ionospheric research had taken place during World War II, J. A. Ratcliffe and other scientists established radio astronomy research, and this early research soon branched out into the observation of celestial radio sources. Martin Ryle and Antony Hewish at the Cavendish Astrophysics Group developed the technique of Earth-rotation aperture synthesis, and the radio astronomy group in Cambridge went on to found the Mullard Radio Astronomy Observatory near Cambridge in the 1950s.

16.
EEG
–
Electroencephalography (EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. It is typically noninvasive, with the electrodes placed along the scalp. EEG measures voltage fluctuations resulting from ionic current within the neurons of the brain. In clinical contexts, EEG refers to the recording of the brain's spontaneous electrical activity over a period of time. Diagnostic applications generally focus on the spectral content of EEG, that is, the type of neural oscillations that can be observed in the recordings. EEG is most often used to diagnose epilepsy, which causes abnormalities in EEG readings; it is also used to diagnose sleep disorders, coma, encephalopathies, and brain death. Despite limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis. Derivatives of the EEG technique include evoked potentials, which involve averaging the EEG activity time-locked to the presentation of a stimulus of some sort.

The history of EEG is detailed by Barbara E. Swartz in Electroencephalography and Clinical Neurophysiology. In 1890, Polish physiologist Adolf Beck published an investigation of the spontaneous electrical activity of the brains of rabbits. Beck started experiments on the electrical brain activity of animals: he placed electrodes directly on the surface of the brain to test for sensory stimulation, and his observation of fluctuating brain activity led him to the conclusion of brain waves. In 1912, Russian physiologist Vladimir Vladimirovich Pravdich-Neminsky published the first animal EEG, and in 1914, Napoleon Cybulski and Jelenska-Macieszyna photographed EEG recordings of experimentally induced seizures. German physiologist and psychiatrist Hans Berger recorded the first human EEG in 1924; his discoveries were first confirmed by British scientists Edgar Douglas Adrian and B. H. C. Matthews in 1934 and developed by them.
In 1934, Fisher and Lowenback first demonstrated epileptiform spikes. In 1935, Gibbs, Davis and Lennox described interictal spike waves and the three-cycles-per-second pattern of clinical absence seizures, which began the field of clinical electroencephalography. Subsequently, in 1936, Gibbs and Jasper reported the interictal spike as the signature of epilepsy. The same year, the first EEG laboratory opened at Massachusetts General Hospital. Franklin Offner, professor of biophysics at Northwestern University, developed a prototype of the EEG machine that incorporated a piezoelectric inkwriter called a Crystograph. In 1947, the American EEG Society was founded and the first International EEG Congress was held; in 1953, Aserinsky and Kleitman described REM sleep. In the 1950s, William Grey Walter developed an adjunct to EEG called EEG topography. This enjoyed a brief period of popularity in the 1980s and seemed especially promising for psychiatry, but it was never accepted by neurologists and remains primarily a research tool. A routine clinical EEG recording typically lasts 20–30 minutes and usually involves recording from scalp electrodes.

17.
Hearing aid
–
A hearing aid or deaf aid is a device designed to improve hearing. Hearing aids are classified as medical devices in most countries; small audio amplifiers such as PSAPs (personal sound amplification products) or other plain sound-reinforcing systems cannot be sold as "hearing aids". Earlier devices, such as ear trumpets or ear horns, were passive amplification cones designed to gather sound energy. Modern devices are computerised electroacoustic systems that transform environmental sound to make it more intelligible or comfortable, according to audiometrical and cognitive rules. Such sound processing can be considerable, such as highlighting a spatial region, shifting frequencies, and cancelling noise and wind. Modern hearing aids require configuration to match the hearing loss, physical features, and lifestyle of the wearer. This process is called "fitting" and is performed by audiologists; the amount of benefit a hearing aid delivers depends in large part on the quality of its fitting. Devices similar to hearing aids include the bone-anchored hearing aid.

Hearing aids are incapable of truly correcting a hearing loss; they are an aid to make sounds more accessible. Several issues minimize their effectiveness. When the primary auditory cortex does not receive regular stimulation, cell loss increases as the degree of hearing loss increases. Damage to the cells of the inner ear results in sensorineural hearing loss, which often manifests as a reduced ability to understand speech. The occlusion effect is another common complaint, though if the aids are worn regularly, most people become acclimated after a few weeks.
If the effect persists, an audiologist or hearing instrument specialist can sometimes further tune the hearing aid. The compression effect is a further issue: the amplification needed to make quiet sounds audible would, if applied to loud sounds, damage the inner ear. Louder sounds are therefore reduced in gain, giving a smaller audible volume range. Hearing protection is also provided by an overall cap on the sound pressure; also of protective value is impulse-noise suppression, available in some high-end aids. The initial fitting appointment is rarely sufficient, and multiple follow-up visits are often necessary. Most audiologists or hearing instrument specialists will recommend an up-to-date audiogram at the time of purchase.

There are several ways of evaluating how well a hearing aid compensates for hearing loss. One approach is audiometry, which measures a subject's hearing levels in laboratory conditions; the threshold of audibility for various sounds and intensities is measured in a variety of conditions. Although audiometric tests may attempt to mimic real-world conditions, the patient's own everyday experiences may differ. An alternative approach is self-report assessment, where the patient reports their experience with the hearing aid. Real-ear measurements are an assessment of the characteristics of hearing aid amplification near the ear drum using a silicone probe-tube microphone. There are many types of hearing aids, which vary in size, power and circuitry.

Image: In-ear hearing aid.
Image: NIH illustration of different hearing aid types.
Image: Illustration of some of the first bulky vacuum-tube hearing aids from 1933.
Image: A sign in a train station explains that the public announcement system uses a "Hearing Induction Loop" (audio induction loop). Hearing aid users can use a telecoil (T) switch to hear announcements directly through their hearing aid receiver.

18.
Engine
–
An engine or motor is a machine designed to convert one form of energy into mechanical energy. Heat engines burn a fuel to create heat, which is then used to create a force. Electric motors convert electrical energy into mechanical motion; pneumatic motors use compressed air. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces.

The word engine derives from Old French engin, from the Latin ingenium, the root of the word ingenious. Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines; the word gin, as in cotton gin, is short for engine. Most mechanical devices invented during the Industrial Revolution were described as engines, the steam engine being a notable example. However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps; in this manner, a fire engine in its original form was merely a water pump.

Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts; examples of engines which produce thrust include turbofans and rockets.

The term motor derives from the Latin verb moto, which means "to set in motion" or "maintain motion"; thus a motor is a device that imparts motion. Motor and engine later came to be used largely interchangeably in casual discourse, although technically the two words have different meanings: an engine consumes a fuel, while a motor is driven by an external source of energy such as electricity. However, rocketry uses the term rocket motor, even though rocket motors consume fuel.

A heat engine may also serve as a prime mover, a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by a combustion engine may make use of various motors and pumps. Another way of looking at it is that a motor receives power from an external source, which it converts into mechanical energy. Simple machines, such as the club and oar, are prehistoric.
More complex engines using human power, animal power, water power and wind power date back to antiquity. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times.

19.
Horsepower
–
Horsepower (hp) is a unit of measurement of power. There are many different standards and types of horsepower; two common definitions being used today are the mechanical horsepower, which is approximately 746 watts, and the metric horsepower, which is approximately 735.5 watts. The term was adopted in the late 18th century by Scottish engineer James Watt to compare the output of steam engines with the power of draft horses. It was later expanded to include the output power of other types of piston engines, as well as turbines and electric motors. The definition of the unit varied among geographical regions; most countries now use the SI unit watt for measurement of power, as required in the EU by the implementation of Directive 80/181/EEC on January 1, 2010.

Units called "horsepower" have differing definitions. The mechanical horsepower, also known as imperial horsepower, equals approximately 745.7 watts; it was defined originally as exactly 550 foot-pounds per second (approximately 745.7 N·m/s). The metric horsepower equals approximately 735.5 watts; it was defined originally as 75 kgf·m per second. The Pferdestärke (PS) is a name for a group of similar power measurements used in Germany around the end of the 19th century, all of about one metric horsepower in size. The boiler horsepower equals 9,809.5 watts; it was used for rating steam boilers and is equivalent to 34.5 pounds of water evaporated per hour at 212 degrees Fahrenheit. One horsepower for rating electric motors is equal to 746 watts, while one horsepower for rating Continental European electric motors is equal to 735 watts (Continental European electric motors used to have dual ratings). One British Royal Automobile Club (RAC) horsepower can equal a range of values based on estimates of several engine dimensions; it is one of the tax horsepower systems adopted around Europe.

The development of the steam engine provided a reason to compare the output of horses with that of the engines that could replace them.
Watt had previously agreed to take royalties of one third of the savings in coal compared with the older Newcomen steam engines, but this royalty scheme did not work with customers who did not have existing steam engines and used horses instead. Watt determined that a horse could turn a mill wheel 144 times in an hour, or 2.4 times a minute. The wheel was 12 feet in radius; therefore, the horse travelled 2.4 × 2π × 12 feet in one minute. Watt judged that the horse could pull with a force of 180 pounds-force. So:

P = W/t = F·d/t = (180 lbf × 2.4 × 2π × 12 ft) / 1 min ≈ 32,572 ft·lbf/min.

Watt defined and calculated the horsepower as 32,572 ft·lbf/min. In an alternative account, Watt determined that a pony could lift an average of 220 lbf a height of 100 ft per minute over a four-hour working shift, and then judged a horse to be 50% more powerful than a pony. Engineering in History recounts that John Smeaton initially estimated that a horse could produce 22,916 foot-pounds per minute.
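Watt's arithmetic above can be checked directly. This is a sketch using only the numbers quoted in the text, plus the standard conversion 1 ft·lbf ≈ 1.355818 J to express the result in watts.

```python
import math

# Sketch: reproducing Watt's horsepower figure from the quantities in the text.
force_lbf = 180.0      # Watt's estimate of the horse's pull
turns_per_min = 2.4    # 144 turns of the mill wheel per hour
radius_ft = 12.0       # mill-wheel radius

distance_ft_per_min = turns_per_min * 2 * math.pi * radius_ft
power_ft_lbf_per_min = force_lbf * distance_ft_per_min
print(round(power_ft_lbf_per_min))  # ~32,572 ft·lbf/min, as in the text

# Express Watt's figure in SI watts (1 ft·lbf = 1.355818 J):
watts = power_ft_lbf_per_min * 1.355818 / 60.0
print(round(watts))                 # ~736 W

# The later standardized mechanical horsepower, 550 ft·lbf/s:
print(round(550 * 1.355818, 1))     # 745.7 W
```

Note that Watt's raw figure works out to about 736 W; the standardized mechanical horsepower of 550 ft·lbf/s (745.7 W) came from rounding the per-minute figure up to 33,000 ft·lbf/min.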

20.
Visible light
–
Light is electromagnetic radiation within a certain portion of the electromagnetic spectrum. The word usually refers to visible light, which is visible to the human eye and is responsible for the sense of sight. Visible light is usually defined as having wavelengths in the range of 400–700 nanometres (4.00 × 10⁻⁷ to 7.00 × 10⁻⁷ m). This wavelength range corresponds to frequencies of roughly 430–750 terahertz.

The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars, mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire; with the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence; for example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey.

Visible light, like all types of electromagnetic radiation, is experimentally found to always move at the speed of light in a vacuum. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength; in this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of light, visible light is emitted and absorbed in tiny "packets" called photons and exhibits properties of both waves and particles; this property is referred to as the wave-particle duality. The study of light, known as optics, is an important research area in modern physics.

Generally, electromagnetic radiation (EM radiation, or EMR) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. The behavior of EMR depends on its wavelength: higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries.
There exist animals that are sensitive to various types of infrared, but not by way of quantum absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation; EMR in this range causes molecular vibration and heating effects, which is how these animals detect it. Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nanometers and the internal lens below 400 nanometers. Furthermore, the rods and cones located in the retina of the human eye cannot detect the very short ultraviolet wavelengths and are in fact damaged by ultraviolet. Many animals with eyes that do not require lenses are able to detect ultraviolet by quantum photon-absorption mechanisms. Various sources define visible light as narrowly as 420–680 nm or as broadly as 380–800 nm.
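The correspondence between the 400–700 nm wavelength range and the roughly 430–750 THz frequency range stated above follows from ν = c/λ; a minimal sketch (the function name is illustrative):

```python
# Sketch: converting visible-light wavelengths to frequencies via nu = c / lambda.
C = 299_792_458.0  # speed of light in vacuum, m/s


def wavelength_to_thz(wavelength_nm: float) -> float:
    """Frequency in terahertz for a wavelength given in nanometres."""
    return C / (wavelength_nm * 1e-9) / 1e12


print(wavelength_to_thz(700.0))  # red edge: ~428 THz
print(wavelength_to_thz(400.0))  # violet edge: ~749 THz
```

Because frequency is inversely proportional to wavelength, the longer-wavelength red edge of the spectrum is the lower-frequency edge, matching the ranges quoted in the text.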

Visible light
–
An example of refraction of light. The straw appears bent, because of refraction of light as it enters liquid from air.
Visible light
–
A triangular prism dispersing a beam of white light. The longer wavelengths (red) and the shorter wavelengths (blue) get separated.
Visible light
–
A cloud illuminated by sunlight
Visible light
–
A city illuminated by artificial lighting

21.
Sunlight
–
Sunlight is a portion of the electromagnetic radiation given off by the Sun, in particular infrared, visible, and ultraviolet light. On Earth, sunlight is filtered through Earth's atmosphere and is perceived as daylight when the Sun is above the horizon. When the direct solar radiation is not blocked by clouds, it is experienced as sunshine, a combination of bright light and radiant heat; when it is blocked by clouds or reflects off other objects, it is experienced as diffused light. The World Meteorological Organization uses the term sunshine duration to mean the cumulative time during which an area receives direct irradiance from the Sun of at least 120 watts per square meter. Other sources indicate an average over the Earth of 164 watts per square meter over a 24-hour day. The ultraviolet radiation in sunlight has both positive and negative effects, as it is both a principal source of vitamin D3 and a mutagen. Sunlight takes about 8.3 minutes to reach Earth from the surface of the Sun. A photon starting at the center of the Sun and changing direction every time it encounters a charged particle would take between 10,000 and 170,000 years to get to the surface. Researchers may record sunlight using a sunshine recorder, pyranometer, or pyrheliometer. To calculate the amount of sunlight reaching the ground, both Earth's elliptical orbit and the attenuation by Earth's atmosphere have to be taken into account. In the standard formula the day number dn–3 is used, because in modern times Earth's perihelion, its closest approach to the Sun, occurs around January 3; the value of 0.033412 is determined knowing that the ratio between the perihelion squared and the aphelion squared should be approximately 0.935338. The solar illuminance constant is equal to 128 × 10³ lx; atmospheric extinction brings the number of lux down to around 100,000. The total amount of energy received at ground level from the Sun at the zenith depends on the distance to the Sun: it is about 3.3% higher than average in January and 3.3% lower in July.
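The 8.3-minute figure above follows directly from dividing the mean Earth–Sun distance (one astronomical unit) by the speed of light; a quick check:

```python
# Light travel time from the Sun to Earth: distance / speed.
AU = 1.495978707e11  # astronomical unit (mean Earth-Sun distance), m
C = 299_792_458.0    # speed of light in a vacuum, m/s

travel_minutes = AU / C / 60
print(round(travel_minutes, 1))  # ~8.3 minutes
```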
In terms of energy, sunlight at Earth's surface is around 52 to 55 percent infrared, 42 to 43 percent visible, and 3 to 5 percent ultraviolet. At the top of the atmosphere, sunlight is about 30% more intense, with about 8% ultraviolet. Direct sunlight has a luminous efficacy of about 93 lumens per watt of radiant flux. This is higher than the efficacy of most artificial lighting, which means using sunlight for illumination heats up a room less than using most forms of artificial lighting. Multiplying the figure of 1050 watts per square metre by 93 lumens per watt indicates that bright sunlight provides an illuminance of approximately 98,000 lux on a surface at sea level. The illumination of a surface will be considerably less than this if the Sun is not very high in the sky. Averaged over a day, the highest amount of sunlight on a horizontal surface occurs in January at the South Pole. Dividing the irradiance of 1050 W/m2 by the size of the Sun's disk in steradians gives an average radiance of 15.4 MW per square metre per steradian. Multiplying this by π gives a limit to the irradiance which can be focused on a surface using mirrors: 48.5 MW/m2.
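The arithmetic in the paragraph above can be sketched in a few lines. The 1050 W/m² irradiance and 93 lm/W efficacy come from the text; the solid angle of the solar disk (about 6.807 × 10⁻⁵ sr) is a standard value assumed here:

```python
import math

irradiance = 1050.0  # W/m^2, bright direct sunlight at sea level
efficacy = 93.0      # lumens per watt of radiant flux for direct sunlight

illuminance = irradiance * efficacy
print(round(illuminance))  # 97650 lx, i.e. approximately 98,000 lux

solar_disk_sr = 6.807e-5  # solid angle subtended by the Sun's disk, sr
radiance = irradiance / solar_disk_sr
print(round(radiance / 1e6, 1))            # ~15.4 MW/m^2/sr average radiance
print(round(radiance * math.pi / 1e6, 1))  # ~48.5 MW/m^2 focusing limit
```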

22.
Solar irradiance
–
Solar irradiance is the power per unit area received from the Sun in the form of electromagnetic radiation in the wavelength range of the measuring instrument. Irradiance may be measured in space or at the Earth's surface after atmospheric absorption, and it is measured perpendicular to the incoming sunlight. Total solar irradiance (TSI) is a measure of the power over all wavelengths per unit area incident on the Earth's upper atmosphere. The solar constant is a measure of mean TSI at a distance of one astronomical unit. Irradiance is a function of distance from the Sun and the solar cycle. Irradiance on Earth is also measured perpendicular to the incoming sunlight. Insolation is the power received on Earth per unit area on a horizontal surface; it depends on the height of the Sun above the horizon. The solar irradiance integrated over time is called solar irradiation, solar exposure or insolation; however, insolation is often used interchangeably with irradiance in practice. The SI unit of irradiance is the watt per square meter. An alternate unit of measure is the langley per unit time; the solar energy industry uses watt-hours per square metre per unit time. The relation to the SI unit is thus 1 kW/m2 = 24 kWh/m2/day = 8760 kWh/m2/year. Irradiance can also be expressed in Suns, where one Sun equals 1000 W/m2 at the point of arrival. Part of the radiation reaching an object is absorbed and the remainder reflected. Usually the absorbed radiation is converted to thermal energy, increasing the object's temperature. Manmade or natural systems, however, can convert part of the radiation into another form such as electricity or chemical bonds. The proportion of reflected radiation is the object's reflectivity or albedo. Insolation onto a surface is largest when the surface directly faces the Sun. As the angle between the surface and the Sun moves from the normal, the insolation is reduced in proportion to the angle's cosine; see effect of sun angle on climate.
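The unit relations quoted above are simple multiplications; a short sketch making them explicit:

```python
# 1 kW/m^2 held constant corresponds to 24 kWh/m^2 per day and
# 8760 kWh/m^2 per year; one "Sun" is defined as 1000 W/m^2.
kw_per_m2 = 1.0

kwh_per_m2_day = kw_per_m2 * 24        # hours in a day
kwh_per_m2_year = kw_per_m2 * 24 * 365  # hours in a (non-leap) year

print(kwh_per_m2_day)   # 24.0
print(kwh_per_m2_year)  # 8760.0

suns = 1000.0 / 1000.0  # 1000 W/m^2 expressed in Suns
print(suns)             # 1.0
```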
In the figure, the angle shown is between the ground and the sunbeam rather than between the vertical direction and the sunbeam, hence the sine rather than the cosine is appropriate. A sunbeam one mile wide arrives from directly overhead, and another at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads the light over twice the area; consequently, half as much light falls on each square mile. This projection effect is the reason why Earth's polar regions are much colder than equatorial regions.
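The projection effect described above can be sketched as a one-line function: insolation on horizontal ground scales with the sine of the beam's angle to the horizontal.

```python
import math

def relative_insolation(angle_deg: float) -> float:
    """Insolation on horizontal ground relative to an overhead (90 deg) beam."""
    return math.sin(math.radians(angle_deg))

print(relative_insolation(90))           # 1.0 -> full intensity, Sun overhead
print(round(relative_insolation(30), 2))  # 0.5 -> half as much per unit area
```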

Solar irradiance
–
A pyranometer, a component of a temporary remote meteorological station, measures insolation on Skagit Bay, Washington.
Solar irradiance
–
Annual mean insolation at the top of Earth's atmosphere (TOA) and at the planet's surface

23.
Warship
–
A warship is a naval ship that is built and primarily intended for naval warfare. Usually they belong to the armed forces of a state. As well as being armed, warships are designed to withstand damage and are usually faster and more maneuverable than merchant ships. Unlike a merchant ship, which carries cargo, a warship typically carries weapons and ammunition. Warships usually belong to a navy, though they have also been operated by individuals and cooperatives. In wartime, the distinction between warships and merchant ships is often blurred: merchant ships are often armed and used as auxiliary warships, such as the Q-ships of the First World War and the armed merchant cruisers of the Second World War. Until the 17th century it was common for merchant ships to be pressed into naval service, and until the threat of piracy subsided in the 19th century, it was normal practice to arm larger merchant ships such as galleons. Warships have also often been used as troop carriers or supply ships. The development of catapults in the 4th century BC and the subsequent refinement of this technology enabled the first fleets of artillery-equipped warships by the Hellenistic age. During late antiquity, ramming fell out of use, and galley tactics against other ships used during the Middle Ages until the late 16th century focused on boarding. Naval artillery was redeveloped in the 14th century, but cannon did not become common at sea until the guns were capable of being reloaded quickly enough to be reused in the same battle. The size of a ship required to carry a large number of cannons made oar-based propulsion impossible, and the sailing man-of-war emerged during the 16th century. By the middle of the 17th century, warships were carrying increasing numbers of cannon on their broadsides, and tactics evolved to bring each ship's firepower to bear in a line of battle.
The man-of-war now evolved into the ship of the line. In the 18th century, the frigate and sloop-of-war – too small to stand in the line of battle – evolved to convoy trade, scout for enemy ships and blockade enemy coasts. During the 19th century a revolution took place in the means of propulsion and naval armament. Marine steam engines were introduced, at first as an auxiliary force, and the Crimean War gave a great stimulus to the development of guns. The introduction of explosive shells soon led to the introduction of iron armor, and the first ironclad warships, the French Gloire and British Warrior, made wooden vessels obsolete. Metal soon entirely replaced wood as the main material for warship construction.

24.
Cruiser
–
A cruiser is a type of warship. The term has been in use for several hundred years, and has had different meanings throughout this period. In the middle of the 19th century, cruiser came to be a classification for ships intended for cruising distant waters and commerce raiding. Cruisers came in a variety of sizes, from the medium-sized protected cruiser to large armored cruisers that were nearly as big as a pre-dreadnought battleship. With the advent of the battleship before World War I, the very large battlecruisers of the World War I era that succeeded armored cruisers were now classified, along with dreadnought battleships, as capital ships. In the later 20th century, the obsolescence of the battleship left the cruiser as the largest and most powerful surface combatant after the aircraft carrier. The role of the cruiser varied according to ship and navy, often including air defense. During the Cold War, the Soviet Navy's cruisers had heavy anti-ship missile armament designed to sink NATO carrier task forces via saturation attack; the U.S. Navy's Adams guided-missile destroyers were tasked with the air defense role. Indeed, the newest U.S. Navy destroyers are more heavily armed than some of the cruisers that they succeeded. Currently only three nations operate cruisers: the United States, Russia, and Peru. The term cruiser or cruizer was first commonly used in the 17th century to refer to an independent warship; cruiser meant the purpose or mission of a ship, rather than a category of vessel. However, the term was used to mean a smaller, faster warship suitable for such a role. The Dutch navy was noted for its cruisers in the 17th century, while the Royal Navy – and later French and Spanish navies – subsequently caught up in terms of their numbers. During the 18th century the frigate became the preeminent type of cruiser.
A frigate was a small, fast, long-range, lightly armed ship used for scouting and carrying dispatches; the other principal type of cruiser was the sloop, but many other miscellaneous types of ship were used as well. During the 19th century, navies began to use steam power for their fleets; the 1840s saw the construction of experimental steam-powered frigates and sloops. By the middle of the 1850s, the British and U.S. navies were both building steam frigates with very long hulls and a heavy gun armament, for instance USS Merrimack or Mersey. The 1860s saw the introduction of the ironclad. The first ironclads were frigates, in the sense of having one gun deck; however, they were also clearly the most powerful ships in the navy, and were principally intended to serve in the line of battle. In spite of their speed, they would have been wasted in a cruising role. The French constructed a number of smaller ironclads for overseas cruising duties, starting with the Belliqueuse, and these station ironclads were the beginning of the development of the armored cruisers, a type of ironclad built specifically for the traditional cruiser missions of fast, independent raiding and patrol.

25.
Submarine
–
A submarine is a watercraft capable of independent operation underwater. It differs from a submersible, which has more limited underwater capability. The term most commonly refers to a large, crewed vessel, but it is also used historically or colloquially to refer to remotely operated vehicles and robots, as well as medium-sized or smaller vessels such as the midget submarine. The noun submarine evolved as a shortened form of submarine boat; by naval tradition, submarines are usually referred to as boats rather than as ships. Although experimental submarines had been built before, submarine design took off during the 19th century. Submarines were first widely used during World War I, and now figure in many navies large and small. Civilian uses for submarines include marine science, salvage, exploration and facility inspection; submarines can also be modified to perform more specialized functions such as search-and-rescue missions or undersea cable repair. Submarines are also used in tourism and for undersea archaeology. Most large submarines consist of a cylindrical body with hemispherical ends and a vertical structure, usually located amidships, which houses communications and sensing devices as well as periscopes. In modern submarines, this structure is the sail in American usage; a conning tower was a feature of earlier designs, a separate pressure hull above the main body of the boat that allowed the use of shorter periscopes. There is a propeller at the rear, and various hydrodynamic control fins; smaller, deep-diving and specialty submarines may deviate significantly from this traditional layout. Submarines use diving planes and also change the amount of water and air in their ballast tanks to control buoyancy for submerging and surfacing. Submarines have one of the widest ranges of types and capabilities of any vessel, and can work at greater depths than are survivable or practical for human divers. Modern deep-diving submarines derive from the bathyscaphe, which in turn evolved from the diving bell.
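The ballast principle mentioned above is Archimedes' law in action: a boat submerges when its total weight exceeds the buoyant force of the water it displaces. A minimal sketch, with purely illustrative (not real-boat) numbers:

```python
# Net vertical force on a submerged hull: buoyancy minus weight.
RHO_SEAWATER = 1025.0  # density of seawater, kg/m^3
G = 9.81               # gravitational acceleration, m/s^2

def net_force(hull_volume_m3: float, dry_mass_kg: float,
              ballast_water_kg: float) -> float:
    """Positive -> boat tends to rise; negative -> boat tends to sink."""
    buoyancy = RHO_SEAWATER * hull_volume_m3 * G
    weight = (dry_mass_kg + ballast_water_kg) * G
    return buoyancy - weight

# Empty ballast tanks: positive buoyancy; flooded tanks: negative buoyancy.
print(net_force(1000.0, 900_000.0, 0.0) > 0)        # True
print(net_force(1000.0, 900_000.0, 200_000.0) > 0)  # False
```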
In 1578, the English mathematician William Bourne recorded in his book Inventions or Devises one of the first plans for an underwater navigation vehicle, though it is unclear whether he ever carried out his idea. The first submersible of whose construction there exists reliable information was designed and built in 1620 by Cornelis Drebbel; it was propelled by means of oars. By the mid-18th century, over a dozen patents for submarines/submersible boats had been granted in England. In 1747, Nathaniel Symons patented and built the first known working example of the use of a ballast tank for submersion. His design used leather bags that could fill with water to submerge the craft; a mechanism was used to twist the water out of the bags and cause the boat to resurface. In 1749, the Gentlemen's Magazine reported that a similar design had initially been proposed by Giovanni Borelli in 1680. By this point of development, further improvement in design stagnated for over a century, until new industrial technologies for propulsion arrived. The first military submarine was the Turtle, a hand-powered acorn-shaped device designed by the American David Bushnell to accommodate a single person. It was the first verified submarine capable of independent underwater operation and movement, and the first to use screws for propulsion.

26.
Server farm
–
A server farm or server cluster is a collection of computer servers, usually maintained by an organization to supply server functionality far beyond the capability of a single machine. Server farms often consist of thousands of computers which require a large amount of power to run. Even at the optimum level, a server farm has enormous costs associated with it. Server farms often have backup servers, which can take over the function of primary servers in the event of a primary-server failure. Server farms are typically collocated with the network switches and/or routers which enable communication between the different parts of the cluster and the users of the cluster. Server farmers typically mount the computers, routers and power supplies in racks. Server farms are commonly used for cluster computing. Many modern supercomputers comprise giant server farms of high-speed processors connected by either Gigabit Ethernet or custom interconnects such as InfiniBand or Myrinet. Web hosting is a common use of a server farm; such a system is sometimes collectively referred to as a web farm. Other uses of server farms include scientific simulations and the rendering of 3D computer-generated imagery. Server farms are increasingly being used instead of, or in addition to, mainframe computers by large enterprises, although server farms do not yet reach the same reliability levels as mainframes. The EEMBC EnergyBench, SPECpower, and the Transaction Processing Performance Council TPC-Energy are benchmarks designed to predict performance per watt in a server farm. The power used by each rack of equipment can be measured at the power distribution unit. Some servers include power tracking hardware so the people running the server farm can measure the power used by each server. The power used by the server farm may be reported in terms of power usage effectiveness (PUE) or data center infrastructure efficiency (DCiE).
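The two efficiency metrics named above are simple ratios: PUE is total facility power divided by the power delivered to the IT equipment, and DCiE is its reciprocal expressed as a percentage. A sketch with illustrative numbers:

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_watts / it_equipment_watts

def dcie_percent(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Data center infrastructure efficiency: the reciprocal of PUE, as %."""
    return 100.0 * it_equipment_watts / total_facility_watts

# A facility drawing 150 kW in total to power 100 kW of IT equipment:
print(pue(150_000, 100_000))           # 1.5
print(dcie_percent(150_000, 100_000))  # ~66.7 %
```

An ideal facility would have a PUE of 1.0, meaning every watt drawn reaches the servers; cooling and power-distribution overhead push the value higher.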
According to some estimates, every 100 watts spent on running the servers requires a significant amount of additional power for cooling; for this reason, the siting of a server farm can be as important as processor selection in achieving power efficiency. Iceland, which has a cold climate all year as well as cheap electricity, is a favorable location, and fibre-optic cables are being laid from Iceland to North America and Europe to enable companies there to locate their servers in Iceland. Other countries have favorable conditions as well, such as Canada, Finland and Sweden; in these countries, heat from the servers can be cheaply vented or used to help heat buildings, thus reducing the energy consumption of conventional heaters. See also: blade server, compile farm, data center, green computing, render farm, link farm.

Server farm
–
A row of racks in a server farm
Server farm
–
This server farm supports the various computer networks of the Joint Task Force Guantanamo

27.
Data center
–
A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections and environmental controls; large data centers are industrial-scale operations using as much electricity as a small town. Data centers have their roots in the computer rooms of the early ages of the computing industry. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment and raised floors. A single mainframe required a great deal of power, and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes – so basic design guidelines for controlling access to the computer room were devised. During the boom of the industry, and especially during the 1980s, users started to deploy computers everywhere. However, as information technology operations started to grow in complexity, the advent of Unix from the early 1970s led to the subsequent proliferation of freely available Linux-compatible PC operating systems during the 1990s. These were called servers, as timesharing operating systems like Unix rely heavily on the client–server model to facilitate sharing unique resources between multiple users. The term data center, as applied to specially designed computer rooms, started to gain popular recognition about this time. The boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems, and installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers, and new technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations.
These practices eventually migrated toward the modern data centers, and were adopted largely because of their practical results. Data centers for cloud computing are called cloud data centers, but nowadays the division of these terms has almost disappeared and they are being integrated into the single term data center. Standards documents from accredited professional groups, such as the Telecommunications Industry Association, specify requirements for data-center design, and well-known operational metrics for data-center availability can serve to evaluate the commercial impact of a disruption.

28.
Electric locomotives
–
An electric locomotive is a locomotive powered by electricity from overhead lines, a third rail or on-board energy storage such as a battery or fuel cell. Electricity is used to eliminate smoke and take advantage of the high efficiency of electric motors. One advantage of electrification is the lack of pollution from the locomotives themselves; electrification also results in higher performance, lower maintenance costs and lower energy costs. Power plants, even if they burn fossil fuels, are far cleaner than mobile sources such as locomotive engines, and the power can come from clean or renewable sources, including geothermal power, hydroelectric power, nuclear power, solar power and wind turbines. Electric locomotives are quiet compared to diesel locomotives since there is no engine and exhaust noise. The lack of reciprocating parts means electric locomotives are easier on the track. Electric locomotives are ideal for commuter rail service with frequent stops. They are used on high-speed lines, such as ICE in Germany, Acela in the U.S., Shinkansen in Japan and China Railway High-speed in China. Electric locomotives are also used on freight routes with consistently high traffic volumes, or in areas with advanced rail networks. Electric locomotives benefit from the high efficiency of electric motors, often above 90%. Additional efficiency can be gained from regenerative braking, which allows kinetic energy to be recovered during braking to put power back on the line; newer electric locomotives use AC motor-inverter drive systems that provide for regenerative braking. The chief disadvantage of electrification is the cost of the infrastructure: overhead lines or third rail and substations. Public policy in the U.S. interferes with electrification: higher property taxes are imposed on privately owned rail facilities if they are electrified. In Europe and elsewhere, railway networks are considered part of the national transport infrastructure, just like roads, highways and waterways.
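The value of regenerative braking mentioned above can be made concrete with a rough, purely illustrative estimate: the kinetic energy of a decelerating train, times an assumed recovery efficiency, is what can be returned to the line. The mass, speed and 60% recovery figure below are made-up round numbers, not data from the text:

```python
def recoverable_energy_kwh(mass_kg: float, speed_m_s: float,
                           recovery_eff: float) -> float:
    """Kinetic energy times recovery efficiency, converted from J to kWh."""
    kinetic_joules = 0.5 * mass_kg * speed_m_s ** 2
    return kinetic_joules * recovery_eff / 3.6e6  # 1 kWh = 3.6e6 J

# A 400-tonne train braking from 100 km/h (~27.8 m/s), 60% assumed recovery:
print(round(recoverable_energy_kwh(400_000, 27.8, 0.6), 1))  # a few tens of kWh
```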
Operators of the rolling stock pay fees according to rail use, and this makes possible the large investments required for the technically, and in the long term also economically, advantageous electrification. Because railroad infrastructure is privately owned in the U.S., railroads are unwilling to make the necessary investments for electrification. The first known electric locomotive was built in 1837 by chemist Robert Davidson of Aberdeen, and it was powered by galvanic cells. Davidson later built a locomotive named Galvani, exhibited at the Royal Scottish Society of Arts Exhibition in 1841. The seven-ton vehicle had two direct-drive reluctance motors, with fixed electromagnets acting on iron bars attached to a cylinder on each axle. It hauled a load of six tons at four miles per hour for a distance of one and a half miles. It was tested on the Edinburgh and Glasgow Railway in September of the following year, but the limited power from batteries prevented its general use.

29.
British Rail Class 373
–
The British Rail Class 373 or TGV TMST train is an electric multiple unit that operates Eurostar's inter-city high-speed rail service between Britain, France and Belgium via the Channel Tunnel. It is both the second longest – 387 metres – and second fastest train in regular UK passenger service, operating at speeds of up to 300 kilometres per hour. It is beaten in both respects by the Class 374, which is 400 metres long and has a top speed of 320 kilometres per hour, though this is never achieved on HS1 in Britain. It was built by the French company GEC-Alsthom at its sites in La Rochelle, Belfort and Washwood Heath. Two types were constructed: 31 Three Capitals sets consisting of two power cars and 18 carriages, including two with powered bogies, which are 387 metres long and have 750 seats (206 in first class, 544 in standard class); and seven North of London sets with 14 carriages, including two carriages with powered bogies, which are 312.36 metres in length and have 558 seats (114 in first class, 444 in standard class). The North of London sets were intended to provide Regional Eurostar services from continental Europe to and from north of London, using the West Coast and East Coast Main Lines. The sets were ordered by the railway companies involved: 16 by SNCF, four by NMBS/SNCB, and the remainder by British Rail. Upon the privatisation of British Rail, the BR sets were bought by London and Continental Railways. The first set was built at Belfort in 1992. Identified as PS1, it was formed of two power cars and seven coaches, and was delivered for test running in January 1993. Its first powered runs were between Strasbourg and Mulhouse, and it was transferred to the UK for third-rail DC tests in June 1993; the full-length pre-series train PS2 was completed in May 1993. An extra power car, numbered 3999, was built as a spare. This spare was required for a couple of years, when 3999 was renumbered and replaced another power car whilst it underwent rebuilding at Le Landy.
It was overhauled and renumbered 3204 in 2016. The 27 sets still operating for Eurostar were refurbished in 2004/05 with a new interior designed by Philippe Starck. The grey-yellow look in Standard class and the earlier look in First class were replaced with a more grey-brown scheme in Standard. In 2008, Eurostar announced that it was beginning the process to institute a mid-life update. As part of the update process, the Italian company Pininfarina was contracted to redesign the interiors; the first refurbished Eurostar was not originally due in service until 2012. The refurbishment could also include a maintenance overhaul and a new livery. Maintenance is carried out at depots close to the three capital cities; in France the trains are maintained at Le Landy depot in northern Paris. The bulk of operations are on Eurostar's core routes from London St Pancras to Paris Gare du Nord and Brussels South.

British Rail Class 373
–
373 218 leaving Chambéry in Savoie, France
British Rail Class 373
–
The interior of a Class 373
British Rail Class 373
–
The original Standard Class interior of a Class 373
British Rail Class 373
–
A pair of Class 373s in the standard Eurostar livery at Waterloo International

30.
Diesel-electric transmission
–
Diesel-electric transmission, or diesel-electric powertrain, is used by a number of vehicle and ship types for providing locomotion. A diesel-electric transmission system includes a diesel engine connected to an electrical generator. Before diesel engines came into widespread use, a similar system, using a petrol engine and called petrol-electric or gas-electric, was sometimes used. Diesel-electric transmission is used on railways by diesel locomotives and diesel-electric multiple units. Diesel-electric systems are also used in submarines, surface ships and some land vehicles. In some high-efficiency applications, electrical energy may be stored in rechargeable batteries. The first diesel motorship was also the first diesel-electric ship, the Russian tanker Vandal from Branobel, which was launched in 1903. Steam turbine-electric propulsion has been in use since the 1920s, and the use of diesel-electric powerplants in surface ships has increased lately. The Finnish coastal defence ships Ilmarinen and Väinämöinen, laid down in 1928–1929, were among the first surface ships to use diesel-electric transmission; later, the technology was used in diesel-powered icebreakers. In World War II the United States built diesel-electric surface warships: due to machinery shortages, destroyer escorts of the Evarts and Cannon classes were diesel-electric, while the Wind-class icebreakers were designed for diesel-electric propulsion because of its flexibility and resistance to damage. An example of this is Harmony of the Seas, the largest passenger ship as of 2016. This arrangement provides a relatively simple way to use the high-speed, low-torque output of a turbine to drive a low-speed propeller, without the need for excessive reduction gearing. Early submarines used a direct mechanical connection between the engine and propeller, switching between diesel engines for surface running and electric motors for submerged propulsion. This was effectively a type of hybrid, since the motor could also function as a generator.
On the surface, the motor was used as a generator to recharge the batteries; the engine would be disconnected for submerged operation, with batteries powering the electric motor and supplying all other power as well. The concept was pioneered in 1929 in the S-class submarines S-3 and S-6. The first production submarines with this system were the Porpoise class, and it was used on most subsequent US diesel submarines through the 1960s. This arrangement mechanically isolates the engine compartment from the outer pressure hull. Some nuclear submarines also use a similar propulsion system, with propulsion turbo-generators driven by reactor plant steam. During World War I, there was a strategic need for rail engines without plumes of smoke above them. Diesel technology was not yet sufficiently developed, but a few precursor attempts were made, especially for petrol-electric transmissions, by the French and British. About 300 of these locomotives, only 96 being standard gauge, were in use at various points in the conflict. Even before the war, the GE 57-ton gas-electric boxcab had been produced in the USA. In the 1920s, diesel-electric technology first saw limited use in switchers, locomotives used for moving trains around in railroad yards; an early company offering oil-electric locomotives was the American Locomotive Company.

31.
Locomotive
–
A locomotive or engine is a rail transport vehicle that provides the motive power for a train. A locomotive has no payload capacity of its own; its purpose is to move the train along the tracks. In contrast, some trains have self-propelled payload-carrying vehicles. These are not normally considered locomotives, and may be referred to as multiple units, motor coaches or railcars; the use of these vehicles is increasingly common for passenger trains. Traditionally, locomotives pulled trains from the front; however, push-pull operation has become common, where the train may have a locomotive at the front, at the rear, or at each end. Prior to locomotives, the motive force for railroads had been generated by various lower-technology methods such as human power and horse power. The first successful locomotives were built by Cornish inventor Richard Trevithick; in 1804 his unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks, near Merthyr Tydfil in Wales. Although the locomotive hauled a train of 10 long tons of iron and 70 passengers in five wagons over nine miles, it only ran three trips before it was abandoned. Trevithick built a series of locomotives after the Penydarren experiment, including one which ran at a colliery in Tyneside in northern England. The first commercially successful steam locomotive was Matthew Murray's rack locomotive, Salamanca, built for the narrow gauge Middleton Railway in 1812. This was followed in 1813 by the Puffing Billy, built by Christopher Blackett and William Hedley for the Wylam Colliery Railway; Puffing Billy is now on display in the Science Museum in London, the oldest locomotive in existence. In 1814 George Stephenson, inspired by the locomotives of Trevithick, built the Blücher, one of the first successful flanged-wheel adhesion locomotives. Stephenson played a pivotal role in the development and widespread adoption of steam locomotives.
His designs improved on the work of the earlier pioneers. In 1825 he built the Locomotion for the Stockton and Darlington Railway in north-east England, which became the first public steam railway. In 1829 he built The Rocket, which was entered in and won the Rainhill Trials. This success led to Stephenson establishing his company as the pre-eminent builder of steam locomotives used on railways in the United Kingdom, the United States and much of Europe. The first inter-city passenger railway, the Liverpool and Manchester Railway, opened in 1830. There are a few basic reasons to isolate locomotive train power, as compared to self-propelled vehicles. Maximum utilization of power cars: separate locomotives facilitate movement of costly motive power assets as needed. Flexibility: large locomotives can substitute for small locomotives when more power is required, for example where grades are steeper, and as needed a locomotive can be used for freight duties. Obsolescence cycles: separating motive power from payload-hauling cars enables replacement of one without affecting the other; to illustrate, locomotives might become obsolete when their associated cars did not, and vice versa.

32.
World energy consumption
–
World energy consumption is the total energy used by all of human civilization. It does not include energy from food, and the extent to which direct biomass burning has been accounted for is poorly documented. As the power-source metric of civilization, world energy consumption has deep implications for humanity's social, economic and political sphere. Institutions such as the International Energy Agency (IEA), the U.S. Energy Information Administration, and the European Environment Agency record and publish energy data periodically. The IEA estimates that, in 2013, total energy consumption was 9,301 Mtoe, or 3.89×10²⁰ joules. From 2000 to 2012 coal was the source of energy with the largest growth; the use of oil and natural gas also grew considerably, followed by hydropower and renewable energy. Renewable energy grew at a faster rate than at any other time in history during this period. The demand for nuclear energy decreased, possibly due to the accidents at Chernobyl. In 2011, expenditures on energy totalled over 6 trillion USD, or about 10% of the world gross domestic product. Europe accounts for close to one-quarter of the world's energy expenditures, North America close to 20%. World final energy consumption refers to the fraction of the world's primary energy that is used in its final form by humanity. In 2012, world energy supply amounted to 155,505 terawatt-hours, or 13,371 Mtoe. World final energy consumption includes products such as lubricants, asphalt and petrochemicals, which have chemical energy content but are not used as fuel; this non-energy use amounted to 9,404 TWh in 2012. The world's electricity consumption was 18,608 TWh in 2012. This figure is about 18% smaller than the generated electricity, due to grid losses, storage losses, and self-consumption by power plants. Cogeneration power stations use some of the energy that would otherwise be wasted for heating buildings or in industrial processes.
The United States Energy Information Administration regularly publishes a report on world consumption for most types of energy resources. Its most recent estimate of world energy consumption was 5.67×10²⁰ joules, or 157,481 TWh. According to the IEA, total energy consumption was 143,851 TWh in 2008, 133,602 TWh in 2005, and 117,687 TWh in 2000. Total world electricity consumption was 19,504 TWh in 2013, 16,503 TWh in 2008, 15,105 TWh in 2005, and 12,116 TWh in 2000. In 2013, world consumption by power source was oil 31.1%, coal 28.9%, natural gas 21.4%, biofuels and waste 10.2%, nuclear 4.8%, and hydro 2.4%. Oil, coal, and natural gas were the most popular energy fuels. From 2000 to 2012 renewable energy grew at a rate higher than at any other point in history, with a consumption increase of 176.5 million tonnes of oil equivalent.
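The conversions between the units quoted above (Mtoe, TWh, joules) can be checked with a short sketch; the factor of 41.868 GJ per tonne of oil equivalent is the standard IEA convention:

```python
TOE_IN_JOULES = 41.868e9   # 1 tonne of oil equivalent (IEA convention)
TWH_IN_JOULES = 3.6e15     # 1 terawatt-hour in joules

def mtoe_to_twh(mtoe):
    """Convert million tonnes of oil equivalent to terawatt-hours."""
    return mtoe * 1e6 * TOE_IN_JOULES / TWH_IN_JOULES

# 2012 world energy supply: 13,371 Mtoe, matching the 155,505 TWh quoted above
print(round(mtoe_to_twh(13371)))   # → 155505
```

The same factor reproduces the 2013 IEA figure: 9,301 Mtoe works out to about 3.89×10²⁰ joules.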

World energy consumption
–
The world's increasing demand for energy

33.
Nova (laser)
–
Nova was a high-power laser built at the Lawrence Livermore National Laboratory (LLNL) in 1984, which conducted advanced inertial confinement fusion (ICF) experiments until its dismantling in 1999. Nova was the first ICF experiment built with the intention of reaching ignition. Despite the lack of ignition, Nova generated considerable amounts of data on high-density matter physics, which is useful in both fusion power and nuclear weapons research. Inertial confinement fusion devices use drivers to rapidly heat the outer layers of a target in order to compress it. The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium and tritium. The heat of the laser burns the surface of the pellet into a plasma; by Newton's third law, the remaining portion of the target is driven inwards, eventually collapsing into a small point of very high density. The rapid blowoff also creates a shock wave that travels towards the center of the compressed fuel, where it meets the shock arriving from the other side of the target. If the temperature and density of that small spot can be raised high enough, fusion reactions occur. The fusion reactions release high-energy particles, some of which collide with the high-density fuel around them and heat it further, potentially causing that fuel to undergo fusion as well. This is known as ignition, which can lead to a significant portion of the fuel in the target undergoing fusion. To date most ICF experiments have used lasers to heat the targets. Calculations show that the energy must be delivered quickly in order to compress the core before it disassembles, as well as to create a suitable shock wave. The energy must also be focused extremely evenly across the target's outer surface in order to collapse the fuel into a symmetric core. Although other drivers have been suggested, notably heavy ions driven in particle accelerators, lasers are currently the only devices with the right combination of features.
Although this sounds very low-powered compared to later machines, at the time it was just beyond the state of the art. Prior to the construction of Nova, LLNL had designed and built a series of lasers that explored the problems of basic ICF design. LLNL was primarily interested in the Nd:glass laser; it had decided early on to concentrate on glass lasers, while other facilities studied gas lasers using carbon dioxide or KrF. Building large Nd:glass lasers had not been attempted before, and one problem was the homogeneity of the beams. Even minor variations in the intensity of the beams would result in self-focusing in the air; the resulting beam included small filaments of extremely high light intensity, so high it would damage the glass optics of the device. This problem was solved in the Cyclops laser with the introduction of the spatial filtering technique. Cyclops was followed by the Argus laser of greater power, which explored the problems of controlling more than one beam and illuminating a target more evenly.

Nova (laser)
–
View down Nova's laser bay between two banks of beamlines. The blue boxes contain the amplifiers and their flashtube "pumps"; the tubes between the banks of amplifiers are the spatial filters.
Nova (laser)
–
Maintenance on the Nova target chamber. The various devices all point towards the center of the chamber where the targets are placed during experimental runs. The targets are held in place on the end of the white-colored "needle" at the end of the arm running vertically down into the chamber.
Nova (laser)
–
The Nova laser target chamber during alignment and initial installation (ca. early 1980s). Some of the larger-diameter holes hold various measurement devices, which are designed to a standard size to fit into these ports, while others are used as beam ports.

34.
Chirped pulse amplification
–
Chirped pulse amplification (CPA) is a technique for amplifying an ultrashort laser pulse up to the petawatt level, with the laser pulse being stretched out temporally and spectrally prior to amplification. CPA is the current state-of-the-art technique used by all of the highest-power lasers in the world. Apart from these state-of-the-art research systems, a number of commercial manufacturers sell Ti:sapphire-based CPA systems with peak powers of 10 to 100 gigawatts. Chirped-pulse amplification was introduced as a technique to increase the available power in radar in 1960. CPA for lasers was invented by Gérard Mourou and Donna Strickland at the University of Rochester in the mid-1980s. In addition to the higher peak power, CPA makes it possible to miniaturize laser systems; a compact high-power laser of this kind is known as a tabletop terawatt laser. There are several ways to construct compressors and stretchers; the most practical is with grating-based stretchers and compressors. Stretchers and compressors are characterized by their dispersion. With negative dispersion, light with higher frequencies takes less time to travel through the device than light with lower frequencies; with positive dispersion, it is the other way around. In a CPA, the dispersions of the stretcher and compressor should cancel out. Because of practical considerations, the stretcher is usually designed with positive dispersion and the compressor with negative dispersion. In principle, the dispersion of a device is a function of frequency. Each component in the chain from the seed laser to the output of the compressor contributes to the dispersion, and it turns out to be hard to tune the dispersions of the stretcher and compressor to cancel exactly; for this, additional dispersive elements may be needed. Figure 1 shows the simplest grating configuration, in which long-wavelength components travel a longer distance than the short-wavelength components.
Often, only a single grating is used, with extra mirrors such that the beam hits the grating four times rather than two times, as shown in the picture. This setup is usually used as a compressor, since it does not involve components that could lead to unwanted side effects when dealing with high-intensity pulses. The dispersion can be tuned easily by changing the distance between the two gratings. Figure 2 shows a more complicated grating configuration that involves focusing elements, here depicted as lenses. The lenses are placed at a distance 2f from each other. If L < f, the setup acts as a positive-dispersion stretcher, and if L > f, it is a negative-dispersion stretcher; the case L = f is used in the pulse shaper. Usually, the focusing element is a spherical or cylindrical mirror rather than a lens. As with the configuration in Figure 1, it is possible to use an additional mirror so that a single grating suffices. This setup requires that the beam diameter be very small compared to the length of the telescope; otherwise undesirable aberrations will be introduced.
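The point of the stretch-amplify-compress cycle can be seen with a rough numeric sketch. The pulse energy and durations below are illustrative assumptions, not figures from the text: stretching a pulse by a factor of 10,000 lowers its peak power by the same factor, letting it pass through the amplifiers safely before recompression.

```python
pulse_energy_j = 1.0      # assumed pulse energy, joules
t_compressed = 100e-15    # assumed compressed duration: 100 fs
t_stretched = 1e-9        # assumed stretched duration: 1 ns

# Peak power is roughly energy divided by duration.
peak_compressed = pulse_energy_j / t_compressed   # ~1e13 W = 10 TW
peak_stretched = pulse_energy_j / t_stretched     # ~1e9 W = 1 GW

print(peak_compressed / peak_stretched)   # stretch factor: ≈ 10,000
```

Amplification happens while the pulse is in the long, low-peak-power state; the compressor then restores the short duration, and with it the high peak power.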

35.
Total solar irradiance
–
Solar irradiance is the power per unit area received from the Sun in the form of electromagnetic radiation in the wavelength range of the measuring instrument. Irradiance may be measured in space or at the Earth's surface after atmospheric absorption; it is measured perpendicular to the incoming sunlight. Total solar irradiance (TSI) is a measure of the solar power over all wavelengths per unit area incident on the Earth's upper atmosphere. The solar constant is a measure of mean TSI at a distance of one astronomical unit. Irradiance is a function of distance from the Sun and the solar cycle. Irradiance on Earth is also measured perpendicular to the incoming sunlight. Insolation is the power received on Earth per unit area on a horizontal surface; it depends on the height of the Sun above the horizon. Solar irradiance integrated over time is called solar irradiation, solar exposure or insolation; however, insolation is often used interchangeably with irradiance in practice. The SI unit of irradiance is the watt per square metre. An alternate unit of measure is the langley per unit time. The solar energy industry uses watt-hours per square metre per unit time; the relation to the SI unit is thus 1 kW/m² = 24 kWh/m²/day = 8760 kWh/m²/year. Irradiance can also be expressed in suns, where one sun equals 1000 W/m² at the point of arrival. Part of the radiation reaching an object is absorbed and the remainder reflected. Usually the absorbed radiation is converted to thermal energy, increasing the object's temperature. Manmade or natural systems, however, can convert part of the radiation into another form such as electricity or chemical bonds. The proportion of reflected radiation is the object's reflectivity or albedo. Insolation onto a surface is largest when the surface directly faces the Sun. As the angle between the surface normal and the incoming sunlight increases, the insolation is reduced in proportion to the angle's cosine; see effect of Sun angle on climate.
In the figure, the angle shown is between the ground and the sunbeam rather than between the vertical direction and the sunbeam; hence the sine rather than the cosine is appropriate. A sunbeam one mile wide arrives from directly overhead, and another arrives at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads its light over twice the area; consequently, half as much light falls on each square mile. This projection effect is the reason why Earth's polar regions are much colder than equatorial regions.
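The sine relationship in this example can be sketched directly. The 1000 W/m² figure reuses the "one sun" value defined above; the function name is ours:

```python
import math

ONE_SUN = 1000.0   # W/m² at normal incidence, as defined above

def insolation_horizontal(elevation_deg):
    """Power per unit horizontal area for a given solar elevation angle."""
    return ONE_SUN * math.sin(math.radians(elevation_deg))

print(insolation_horizontal(90))   # sun directly overhead: 1000 W/m²
print(insolation_horizontal(30))   # 30° above the horizon: ≈ 500 W/m², half
```

At 30° elevation the horizontal surface receives half the overhead value, exactly the two-sunbeam comparison made above.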

Total solar irradiance
–
Samuel Heinrich Schwabe (1789–1875), German astronomer, discovered the solar cycle through extended observations of sunspots
Total solar irradiance
–
400-year sunspot history, including the Maunder Minimum
Total solar irradiance
–
Rudolf Wolf (1816–1893), Swiss astronomer, carried out historical reconstruction of solar activity back to the seventeenth century
Total solar irradiance
–
A drawing of a sunspot in the Chronicles of John of Worcester.

36.
Electric power
–
Electric power is the rate, per unit time, at which electrical energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second. Electric power is usually produced by electric generators, but can also be supplied by sources such as electric batteries. It is usually supplied to businesses and homes by the electric power industry through an electric power grid. Electric power is sold by the kilowatt-hour, which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using an electricity meter, which keeps a running total of the electric energy delivered to a customer. Electrical power provides a low-entropy form of energy and can be carried long distances and converted into other forms of energy such as motion. Electric power, like mechanical power, is the rate of doing work, measured in watts; the term wattage is used colloquially to mean electric power in watts. In devices that consume power, the potential energy of the charges due to the voltage between the terminals is converted to kinetic energy in the device. These devices are called passive components or loads; they consume electric power from the circuit, converting it to other forms of energy such as mechanical work, heat or light. Examples are electrical appliances such as light bulbs and electric motors. In alternating current circuits, the direction of the voltage periodically reverses. Devices that produce electric power for the circuit are called active devices or power sources, such as electric generators and batteries. Some devices can be either a source or a load, depending on the voltage and current through them; for example, a rechargeable battery acts as a source when it provides power to a circuit, but as a load when it is connected to a battery charger and is being recharged. Since electric power can flow either into or out of a component, a sign convention is needed: electric power flowing out of a circuit into a component is arbitrarily defined to have a positive sign, while power flowing into a circuit from a component is defined to have a negative sign.
Thus passive components have positive power consumption, while power sources have negative power consumption; this is called the passive sign convention. In alternating current circuits, energy storage elements such as inductance and capacitance may result in periodic reversals of the direction of energy flow. The portion of power flow that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as real power. The portion of power flow due to stored energy, which returns to the source in each cycle, is known as reactive power. On a power-triangle diagram, real power is represented as a horizontal vector and reactive power as a vertical vector.
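The real/reactive split can be illustrated with the usual power-triangle relations. The RMS values and the 30° phase angle below are arbitrary example inputs, not figures from the text:

```python
import math

v_rms, i_rms = 230.0, 5.0   # example RMS voltage and current
phi = math.radians(30)      # example phase angle between voltage and current

s = v_rms * i_rms           # apparent power, volt-amperes
p = s * math.cos(phi)       # real power, watts (horizontal component)
q = s * math.sin(phi)       # reactive power, VAr (vertical component)

print(round(p, 1), round(q, 1))           # → 995.9 575.0
assert math.isclose(math.hypot(p, q), s)  # power triangle: S² = P² + Q²
```

With a purely resistive load (phi = 0) all of the apparent power is real power; as the phase angle grows, more of it becomes reactive.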

37.
Embalse nuclear power plant
–
The Embalse Nuclear Power Station is one of the three operational nuclear power plants in Argentina. It is located on the shore of a reservoir on the Río Tercero, near the city of Embalse. The plant is a CANDU pressurised heavy-water reactor; it employs natural uranium as fuel, and uses heavy water for cooling and neutron moderation. It has a thermal power of 2,109 MWth and generates 648 MWe of electricity, with a net output of about 600 MWe. Additionally, Embalse produces the cobalt-60 radioisotope, which is employed in medicine; Argentina is one of the largest producers and exporters of this isotope in the world, along with Canada and Russia. Construction of Embalse started in 1974 and the plant began operation in 1983. It was built by an Italian-Canadian consortium formed by AECL, acting as the turn-key supplier of the nuclear portion, and Italimpianti, the turn-key supplier of the conventional portion. On 31 December 2015, the plant was taken offline, having completed its first operating cycle of about 30 years. On 1 September 2016, the plant received the last two of four generators, fundamental elements for the life extension of the plant. The plant is being reconditioned to deliver power for another 30 years; it is expected to be offline until 2018 and will have a power uprate to a gross capacity of 683 MW. See also: National Atomic Energy Commission, Atucha I Nuclear Power Plant, Atucha II Nuclear Power Plant, CNEA, Nucleoeléctrica Argentina S.A.
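The quoted figures imply a thermal-to-electric conversion efficiency in the usual range for a water-cooled power reactor; a quick check:

```python
thermal_mw = 2109.0   # thermal power, MWth (from the text)
gross_mwe = 648.0     # gross electrical output, MWe (from the text)
net_mwe = 600.0       # approximate net output, MWe (from the text)

print(round(gross_mwe / thermal_mw * 100, 1))  # → 30.7 (% gross efficiency)
print(round(net_mwe / thermal_mw * 100, 1))    # → 28.4 (% net efficiency)
```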

Embalse nuclear power plant
–
Embalse Nuclear Power Station

38.
Fission reactor
–
A nuclear reactor, formerly known as an atomic pile, is a device used to initiate and control a sustained nuclear chain reaction. Nuclear reactors are used at power plants for electricity generation and in the propulsion of ships. Heat from nuclear fission is passed to a working fluid, which runs through steam turbines. These either drive a ship's propellers or turn electrical generators. Nuclear-generated steam in principle can be used for industrial process heat or for district heating. Some reactors are used to produce isotopes for medical and industrial use, and some are run only for research. As of April 2014, the IAEA reports there are 435 nuclear power reactors in operation. When a large fissile atomic nucleus such as uranium-235 or plutonium-239 absorbs a neutron, it may undergo nuclear fission. The heavy nucleus splits into two or more lighter nuclei, releasing kinetic energy, gamma radiation, and free neutrons. A portion of these neutrons may later be absorbed by other fissile atoms and trigger further fission events, which release more neutrons. This is known as a nuclear chain reaction. To control such a chain reaction, neutron poisons and neutron moderators can change the portion of neutrons that will go on to cause more fission. Nuclear reactors generally have automatic and manual systems to shut the fission reaction down if monitoring detects unsafe conditions. Commonly used moderators include regular water, solid graphite and heavy water. Some experimental types of reactor have used beryllium, and hydrocarbons have been suggested as another possibility. The reactor core generates heat in a number of ways: the kinetic energy of fission products is converted to thermal energy when these nuclei collide with nearby atoms; the reactor absorbs some of the gamma rays produced during fission; and heat is produced by the radioactive decay of fission products and of materials that have been activated by neutron absorption. This decay heat source will remain for some time even after the reactor is shut down.
A kilogram of uranium-235 converted via nuclear processes releases approximately three million times more energy than a kilogram of coal burned conventionally. A nuclear reactor coolant, usually water but sometimes a gas, a liquid metal or a molten salt, is circulated past the reactor core to absorb the heat that it generates.

Fission reactor
–
Core of CROCUS, a small nuclear reactor used for research at the EPFL in Switzerland
Fission reactor
–
Lise Meitner and Otto Hahn in their laboratory.
Fission reactor
–
The control room of NC State's Pulstar Nuclear Reactor.
Fission reactor
–
Treatment of the interior part of a VVER-1000 reactor frame at Atommash.

39.
SI prefix
–
A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. While all metric prefixes in use today are decadic, historically there have been a number of binary metric prefixes as well. Each prefix has a symbol that is prepended to the unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand; the prefix milli-, likewise, may be added to metre to indicate division by one thousand: one millimetre is equal to one thousandth of a metre. Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of these dating back to the system's introduction in the 1790s. Metric prefixes have even been prepended to non-metric units. The SI prefixes are standardized for use in the International System of Units (SI) by the International Bureau of Weights and Measures (BIPM) in resolutions dating from 1960 to 1991. Since 2009, they have formed part of the International System of Quantities. The BIPM specifies twenty prefixes for the International System of Units. Each prefix name has a symbol which is used in combination with the symbols for units of measure. For example, the symbol for kilo- is k, and is used to produce km, kg, and kW, which are the SI symbols for kilometre, kilogram, and kilowatt. Prefixes corresponding to an integer power of one thousand are generally preferred; hence 100 m is preferred over 1 hm or 10 dam. The prefixes hecto, deca, deci, and centi are nonetheless commonly used for everyday purposes, and the centimetre is especially common. However, some building codes require that the millimetre be used in preference to the centimetre, because use of centimetres leads to extensive usage of decimal points. Prefixes may not be used in combination. This also applies to mass, for which the SI base unit already contains a prefix.
For example, milligram is used instead of microkilogram. In the arithmetic of measurements having units, the units are treated as multiplicative factors to values. If they have prefixes, all but one of the prefixes must be expanded to their numeric multiplier: 1 km² means one square kilometre, or the area of a square of 1000 m by 1000 m, and not 1000 square metres. 2 Mm³ means two cubic megametres, or the volume of two cubes of 1,000,000 m by 1,000,000 m by 1,000,000 m, or 2×10¹⁸ m³, and not 2,000,000 cubic metres. For example, 5 cm = 5×10⁻² m = 5×0.01 m = 0.05 m. The prefixes, including those introduced after 1960, are used with any metric unit, and metric prefixes may also be used with non-metric units. The choice of prefixes with a given unit is usually dictated by convenience of use. Unit prefixes for amounts that are much larger or smaller than those actually encountered are seldom used.
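The expansion rule can be checked numerically: each prefix is replaced by its multiplier before the exponent is applied, so the prefix is squared or cubed along with the unit.

```python
km = 1000.0        # 1 km expanded to metres
Mm = 1_000_000.0   # 1 Mm expanded to metres

assert 1 * km**2 == 1_000_000   # 1 km² = 10⁶ m², not 1000 m²
assert 2 * Mm**3 == 2e18        # 2 Mm³ = 2×10¹⁸ m³, not 2,000,000 m³
assert 5e-2 == 0.05             # 5 cm = 5×10⁻² m = 0.05 m
print("prefix arithmetic checks pass")
```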

SI prefix
–
Distance marker on the Rhine: 36 (XXXVI) myriametres from Basel. Note that the stated distance is 360 km; comma is the decimal mark in Germany.

40.
Megajoule
–
The joule, symbol J, is a derived unit of energy in the International System of Units. It is equal to the energy transferred to an object when a force of one newton acts on that object in the direction of its motion through a distance of one metre. It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule. One joule can also be defined as: the work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb-volt (this relationship can be used to define the volt); or the work required to produce one watt of power for one second, or one watt-second (this relationship can be used to define the watt). This SI unit is named after James Prescott Joule. As with every SI unit named for a person, its symbol is capitalized but the unit name is written in lower case; note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units, section 5.2. The CGPM has given the unit of energy the name joule. The use of newton-metres for torque and joules for energy is helpful to avoid misunderstandings and miscommunications. The distinction may be seen also in the fact that energy is a scalar: the dot product of a force vector and a displacement vector. By contrast, torque is a vector: the cross product of a distance vector and a force vector. Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is torque, and θ is the angle swept. Since radians are dimensionless, it follows that torque and energy have the same dimensions. One joule in everyday life represents approximately: the energy required to lift a medium-size tomato 1 m vertically from the surface of the Earth; the energy released when that same tomato falls back down to the ground; the energy required to accelerate a 1 kg mass at 1 m·s⁻² through a distance of 1 m.
The heat required to raise the temperature of 1 g of water by 0.24 °C; the typical energy released as heat by a person at rest every 1/60 s; the kinetic energy of a 50 kg human moving very slowly; the kinetic energy of a 56 g tennis ball moving at 6 m/s; the kinetic energy of an object with mass 1 kg moving at √2 ≈ 1.4 m/s; the amount of electricity required to light a 1 W LED for 1 s. Since the joule is also a watt-second, and the common unit for electricity sales to homes is the kW·h, one kW·h is 3,600,000 watt-seconds, or 3.6 MJ. For additional examples, see: Orders of magnitude (energy). The zeptojoule is equal to one sextillionth (10⁻²¹) of one joule; 160 zeptojoules is equivalent to one electronvolt. The nanojoule is equal to one billionth (10⁻⁹) of one joule; one nanojoule is about 1/160 of the kinetic energy of a flying mosquito.
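Two of the kinetic-energy examples above can be verified with E = ½mv² (the function name is ours):

```python
def kinetic_energy_j(mass_kg, speed_m_s):
    """Kinetic energy in joules: E = ½ m v²."""
    return 0.5 * mass_kg * speed_m_s**2

print(kinetic_energy_j(0.056, 6.0))    # tennis ball, 56 g at 6 m/s: ≈ 1.008 J
print(kinetic_energy_j(1.0, 2**0.5))   # 1 kg at √2 m/s: ≈ 1 J
```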

41.
Alternating current
–
Alternating current (AC) is an electric current which periodically reverses direction, whereas direct current (DC) flows only in one direction. A common source of DC power is a battery cell in a flashlight. The abbreviations AC and DC are often used to mean simply alternating and direct. The usual waveform of alternating current in most electric power circuits is a sine wave. In certain applications, different waveforms are used, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information encoded onto the AC signal, and they typically alternate at higher frequencies than those used in power transmission. Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer, and use of a higher voltage leads to significantly more efficient transmission of power. The power losses in a conductor are a product of the square of the current and the resistance of the conductor. This means that when transmitting a fixed power on a given wire, if the current is halved, the power loss is reduced by a factor of four. Power is often transmitted at hundreds of kilovolts, and transformed to 100–240 volts for domestic use. High voltages have disadvantages, such as the increased insulation required and generally increased difficulty in their safe handling. In a power plant, energy is generated at a voltage convenient for the design of the generator; near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate.
Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world. High-voltage direct-current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power; HVDC systems, however, tend to be more expensive and less efficient over shorter distances than transformers. Three-phase electrical generation is very common. The simplest way to achieve it is to use three separate coils in the generator stator, physically offset by an angle of 120° to each other. Three current waveforms are produced that are equal in magnitude and 120° out of phase with each other. If coils are added opposite to these, they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher pole orders are commonly used; for example, a 12-pole machine would have 36 coils. The advantage is that lower rotational speeds can be used to generate the same frequency: a 2-pole machine running at 3600 rpm and a 12-pole machine running at 600 rpm produce the same frequency, and the lower speed is preferable for larger machines. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point. Even in the worst-case unbalanced load, the neutral current will not exceed the highest of the phase currents.
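The I²R argument for high-voltage transmission can be made concrete. The 1 MW load and 10 Ω line resistance below are assumed example values, not figures from the text:

```python
def line_loss_w(power_w, voltage_v, resistance_ohm):
    """I²R loss for transmitting a given power at a given voltage."""
    current = power_w / voltage_v   # higher voltage → lower current
    return current**2 * resistance_ohm

loss_110kv = line_loss_w(1e6, 110e3, 10.0)
loss_220kv = line_loss_w(1e6, 220e3, 10.0)

print(round(loss_110kv), round(loss_220kv))   # → 826 207
print(loss_110kv / loss_220kv)                # doubling voltage quarters loss
```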

Alternating current
–
High voltage transmission lines deliver power from electric generation plants over long distances using alternating current. These particular lines are located in eastern Utah.
Alternating current
–
Alternating current (green curve). The horizontal axis measures time; the vertical, current or voltage.
Alternating current
–
The prototype of the ZBD transformer, on display at the Széchenyi István Memorial Exhibition, Nagycenk, Hungary

42.
Volt-ampere
–
A volt-ampere (VA) is the unit used for the apparent power in an electrical circuit, equal to the product of root-mean-square (RMS) voltage and RMS current. In direct current circuits, this product is equal to the real power in watts. Volt-amperes are useful only in the context of alternating current circuits. Some devices, including uninterruptible power supplies (UPSs), have ratings both for maximum volt-amperes and maximum watts. The VA rating is limited by the maximum permissible current; when a UPS powers equipment which presents a load with a low power factor, neither rating may safely be exceeded. For example, a UPS system rated to deliver 400,000 volt-amperes at 220 volts can deliver a current of 1,818 amperes. VA ratings are also often used for transformers; maximum output current is then the VA rating divided by the nominal output voltage. Transformers with the same-sized core usually have the same VA rating. The convention of using the volt-ampere to distinguish apparent power from real power is allowed by the SI standard, since SI allows one to specify units to indicate common-sense physical considerations. See also: AC power, power factor, volt-ampere reactive.
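The UPS example above is just the VA rating divided by the voltage (the function name is ours):

```python
def max_current_a(va_rating, voltage_v):
    """Maximum RMS output current implied by a VA rating."""
    return va_rating / voltage_v

# 400,000 VA at 220 V, matching the UPS example in the text
print(round(max_current_a(400_000, 220)))   # → 1818
```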

Volt-ampere
–
Apparent power is the magnitude of the vector sum (S) of real (P) and reactive (j Q) AC power vectors

43.
Resistor
–
A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of power as heat may be used as part of motor controls or in power distribution systems. Fixed resistors have resistances that change only slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements, or as sensing devices for heat, light, humidity and force. Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds; resistors are also implemented within integrated circuits. The electrical function of a resistor is specified by its resistance; the nominal value of the resistance falls within the manufacturing tolerance, indicated on the component. The notation to state a resistor's value in a circuit diagram varies. One common scheme is the letter-and-digit code for resistance values following IEC 60062. It avoids using a decimal separator and replaces it with a letter loosely associated with the SI prefix corresponding to the part's resistance. For example, 8K2 as a part-marking code, in a circuit diagram or in a bill of materials, indicates a resistor value of 8.2 kΩ. Additional zeros imply a tighter tolerance, for example 15M0 for three significant digits. When the value can be expressed without the need for a prefix, an R is used instead of the decimal separator: for example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω. If a 300 ohm resistor is attached across the terminals of a 12 volt battery, then a current of 12 V / 300 Ω = 0.04 amperes flows through that resistor.
Practical resistors also have some inductance and capacitance which affect the relation between voltage and current in alternating current circuits. The ohm is the SI unit of electrical resistance, named after Georg Simon Ohm; an ohm is equivalent to a volt per ampere. Since resistors are specified and manufactured over a very large range of values, the derived units of milliohm, kilohm, and megohm are also in common usage. The total resistance of resistors connected in series is the sum of their resistance values: R_eq = R1 + R2 + ⋯ + Rn. The total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors: 1/R_eq = 1/R1 + 1/R2 + ⋯ + 1/Rn. For example, a 10 ohm resistor connected in parallel with a 5 ohm resistor gives 1/(1/10 + 1/5) ≈ 3.33 ohms. A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other.
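The series and parallel formulas can be checked with a few lines of code. A minimal sketch (the helper names `series` and `parallel` are this example's own):

```python
def series(*resistances):
    """Total resistance of resistors in series: R_eq = R1 + R2 + ... + Rn."""
    return sum(resistances)

def parallel(*resistances):
    """Total resistance of resistors in parallel: 1/R_eq = 1/R1 + ... + 1/Rn."""
    return 1 / sum(1 / r for r in resistances)

print(series(10, 5))    # 15 ohms
print(parallel(10, 5))  # about 3.33 ohms, i.e. (10*5)/(10+5)
```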

Resistor
–
A typical axial-lead resistor
Resistor
–
Axial-lead resistors on tape. The component is cut from the tape during assembly and the part is inserted into the board.
Resistor
–
An aluminium-housed power resistor rated for 50 W when heat-sinked
Resistor
–
Resistors with wire leads for through-hole mounting

44.
Electrical network
–
An electrical network is an interconnection of electrical components, or a model of such an interconnection consisting of electrical elements. An electrical circuit is a network consisting of a closed loop. Linear electrical networks, a special type consisting only of sources, linear lumped elements, and linear distributed elements, have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency-domain methods such as Laplace transforms, to determine DC response and AC response. A resistive circuit is a circuit containing only resistors and ideal current and voltage sources; analysis of resistive circuits is less complicated than analysis of circuits containing capacitors and inductors. If the sources are constant sources, the result is a DC circuit. A network that contains active components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools. An active network is a network that contains an active source, either a voltage source or a current source; a passive network is a network that does not contain an active source. A network is linear if its signals obey the principle of superposition; otherwise it is non-linear. Sources can be classified as independent or dependent. An ideal independent source maintains the same voltage or current regardless of the elements present in the circuit; its value is either constant or sinusoidal, and the strength of the voltage or current is not changed by any variation in the connected network. Dependent sources depend upon a particular element of the circuit for delivering their power, voltage or current, depending upon the type of source. A number of electrical laws apply to all electrical networks.
These include: Kirchhoff's current law, the sum of all currents entering a node is equal to the sum of all currents leaving the node; Kirchhoff's voltage law, the directed sum of the electrical potential differences around a loop must be zero; Ohm's law, the voltage across a resistor is equal to the product of the resistance and the current flowing through it; Norton's theorem, any network of voltage or current sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor; and Thévenin's theorem, any network of voltage or current sources and resistors is electrically equivalent to a voltage source in series with a single resistor. Other more complex laws may be needed if the network contains nonlinear or reactive components; non-linear self-regenerative heterodyning systems can be approximated. Applying these laws results in a set of equations that can be solved either algebraically or numerically. To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents in the circuit. Simple linear circuits can be analyzed by hand using complex number theory; in more complex cases the circuit may be analyzed with specialized programs or estimation techniques such as the piecewise-linear model. More complex circuits can be analyzed numerically with software such as SPICE or GNUCAP; once the steady-state solution is found, the operating points of each element in the circuit are known.
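Kirchhoff's current law and Ohm's law together are enough to solve simple networks. As an illustration (the component values here are invented for the example), nodal analysis of a two-resistor voltage divider:

```python
# A 12 V source feeds a node through R1; R2 connects the node to ground.
# Kirchhoff's current law at the node: (Vs - V)/R1 = V/R2,
# which solves to V = Vs * R2 / (R1 + R2).
Vs, R1, R2 = 12.0, 100.0, 200.0  # example values, not from the text

V = Vs * R2 / (R1 + R2)
i_in = (Vs - V) / R1   # current entering the node through R1 (Ohm's law)
i_out = V / R2         # current leaving the node through R2 (Ohm's law)

print(V)      # 8.0 volts at the node
print(i_in)   # 0.04 A
print(i_out)  # 0.04 A -- KCL satisfied: current in equals current out
```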

Electrical network
–
A simple electric circuit made up of a voltage source and a resistor. Here, V = IR, according to Ohm's law.

45.
Radio stations
–
Radio broadcasting is a unidirectional wireless transmission over radio waves intended to reach a wide audience. Stations can be linked in radio networks to broadcast a common radio format. Audio broadcasting also can be done via cable radio, local wire television networks, and satellite radio. The signal types can be either analog audio or digital audio. The earliest radio stations were simply radiotelegraphy systems and did not carry audio; for audio broadcasts to be possible, electronic detection and amplification devices had to be incorporated. The thermionic valve was invented in 1904 by the English physicist John Ambrose Fleming. He developed a device he called an oscillation valve; the heated filament, or cathode, was capable of thermionic emission of electrons that would flow to the plate when it was at a higher voltage. Electrons, however, could not pass in the reverse direction because the plate was not heated. Later known as the Fleming valve, it could be used as a rectifier of alternating current, and this greatly improved the crystal set, which rectified the radio signal using an early solid-state diode based on a crystal and a so-called cat's whisker. However, what was required was an amplifier. The triode was patented on March 4, 1906, by the Austrian Robert von Lieben; an independent patent followed on October 25, 1906, but the triode wasn't put to practical use until 1912, when its amplifying ability became recognized by researchers. By about 1920, valve technology had matured to the point where radio broadcasting was quickly becoming viable. However, an early audio transmission that could be termed a broadcast may have occurred on Christmas Eve in 1906 by Reginald Fessenden, although this is disputed. Charles Herrold started broadcasting in California in 1909 and was carrying audio by the next year. In The Hague, the Netherlands, PCGG started broadcasting on November 6, 1919, making it, arguably, the first commercial broadcasting station.
In 1916, Frank Conrad, an engineer employed at the Westinghouse Electric Corporation, began broadcasting from his home in Wilkinsburg. Later, the station was moved to the top of the Westinghouse factory building in East Pittsburgh; Westinghouse relaunched the station as KDKA on November 2, 1920, as the first commercially licensed radio station in America. The commercial broadcasting designation came from the type of broadcast license; the first licensed broadcast in the United States came from KDKA itself: the results of the Harding/Cox presidential election. In 1920, wireless broadcasts for entertainment began in the UK from the Marconi Research Centre 2MT at Writtle near Chelmsford, England. A famous broadcast from Marconi's New Street Works factory in Chelmsford was made by the famous soprano Dame Nellie Melba on 15 June 1920; she was the first artist of international renown to participate in direct radio broadcasts. The 2MT station began to broadcast regular entertainment in 1922. The BBC was amalgamated in 1922 and received a Royal Charter in 1926, making it the first national broadcaster in the world, followed by Czech Radio and other European broadcasters in 1923.

46.
Energy
–
In physics, energy is the property that must be transferred to an object in order to perform work on, or to heat, the object, and can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton. Mass and energy are closely related; for example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels and nuclear fuel. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun. The total energy of a system can be subdivided and classified in various ways. It may be convenient to distinguish gravitational energy, thermal energy, electric energy, and several other types. Many of these overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a mix of both potential and kinetic energy; an example is mechanical energy, the sum of the kinetic and potential energy in a system. Whenever physical scientists discover that a phenomenon appears to violate the law of energy conservation, new forms of energy are typically identified that account for the discrepancy. Heat and work are special cases in that they are not properties of systems; in general we cannot measure how much heat or work is present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from. The distinctions between different kinds of energy are not always clear-cut. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness.
The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term energy, in its modern sense, instead of vis viva. Gustave-Gaspard Coriolis described kinetic energy in 1829 in its modern sense. The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics.

47.
Flash (photography)
–
A flash is a device used in photography that produces a flash of artificial light at a color temperature of about 5500 K to help illuminate a scene. A major purpose of a flash is to illuminate a dark scene; other uses are capturing quickly moving objects or changing the quality of light. Flash refers either to the flash of light itself or to the flash unit discharging the light. Most current flash units are electronic, having evolved from single-use flashbulbs, and modern cameras often activate flash units automatically. Flash units are often built directly into a camera; some cameras allow separate flash units to be mounted via an accessory mount bracket. In professional studio equipment, flashes may be large, standalone units, or studio strobes. Studies of magnesium by Bunsen and Roscoe in 1859 showed that burning this metal produced a light with similar qualities to daylight. The potential application to photography inspired Edward Sonstadt to investigate methods of manufacturing magnesium so that it would burn reliably for this use. He applied for patents in 1862 and by 1864 had started the Manchester Magnesium Company with Edward Mellor. Making flat ribbon also had the benefit of being a simpler and cheaper process than making round wire. Mather was also credited with the invention of a holder for the ribbon, which formed a lamp to burn it in. The packaging also implies that the ribbon was not necessarily broken off before being ignited. An alternative to ribbon was flash powder, a mixture of magnesium powder and potassium chlorate introduced by German inventors, among them Adolf Miethe. A measured amount was put into a pan or trough and ignited by hand, producing a brilliant flash of light, along with smoke. This could be a hazardous activity, especially if the flash powder was damp. An electrically triggered flash lamp was invented by Joshua Lionel Cowen in 1899; his patent describes a device for igniting photographers' flash powder by using dry cell batteries to heat a wire fuse.
Variations and alternatives were touted from time to time and a few found a measure of success in the marketplace, especially for amateur use. The use of flash powder in an open lamp was replaced by flashbulbs, in which magnesium filaments were contained in bulbs filled with oxygen gas. Manufactured flashbulbs were first produced commercially in Germany in 1929. Such a bulb could only be used once, and was too hot to handle immediately after use, but the confinement of what would otherwise have amounted to a small explosion was an important advance. A later innovation was the coating of flashbulbs with a plastic film to maintain bulb integrity in the event of the glass shattering during the flash.

Flash (photography)
–
The high-speed wing action of a hummingbird hawk-moth is frozen by flash. The flash has given the foreground more illumination than the background. See Inverse-square law.
Flash (photography)
–
A 1909 flash-lamp and a 1903 view camera
Flash (photography)
–
Ernst Leitz Wetzlar flash from the 1950s
Flash (photography)
–
Flashbulbs have ranged in size from the diminutive AG-1 to the massive No. 75.

48.
Power plant
–
A power station, also referred to as a power plant or powerhouse, and sometimes a generating station or generating plant, is an industrial facility for the generation of electric power. Most power stations contain one or more generators, machines that convert mechanical power into electrical power; the relative motion between a magnetic field and a conductor creates an electrical current. The energy source harnessed to turn the generator varies widely. Most power stations in the world burn fossil fuels such as coal, oil, and natural gas to generate electricity. Others use nuclear power, but there is an increasing use of cleaner renewable sources such as solar, wind, and wave power. There is some debate within utility and engineering circles over whether a solar array, or wind farm, should be referred to as a power station. In 1868 a hydroelectric power station was designed and built by Lord Armstrong at Cragside, England. It used water from lakes on his estate to power Siemens dynamos; the electricity supplied power to lights and heating, produced hot water, and ran an elevator as well as labor-saving devices and farm buildings. In the early 1870s the Belgian inventor Zénobe Gramme invented a generator powerful enough to produce power on a commercial scale for industry. In the autumn of 1882, a central station providing public power was built in Godalming; it was proposed after the town failed to reach an agreement on the rate charged by the gas company, so the town council decided to use electricity. It used hydroelectric power for street and household lighting; the system was not a commercial success and the town reverted to gas. In 1882 the world's first coal-fired public power station, the Edison Electric Light Station, was built in London, where a Babcock & Wilcox boiler powered a 125-horsepower steam engine that drove a 27-ton generator.
This supplied electricity to premises in the area that could be reached through the culverts of the viaduct without digging up the road; the customers included the City Temple and the Old Bailey. Another important customer was the Telegraph Office of the General Post Office; Johnson arranged for the supply cable to be run overhead, via Holborn Tavern and Newgate. In September 1882 in New York, the Pearl Street Station was established by Edison to provide lighting in the lower Manhattan Island area. The station ran until destroyed by fire in 1890; it used reciprocating steam engines to turn direct-current generators. Because of the DC distribution, the service area was small. The War of Currents eventually resolved in favor of AC distribution and utilization; DC systems with a service radius of a mile or so were necessarily smaller, less efficient in fuel consumption, and more labor-intensive to operate than much larger central AC generating stations. AC systems used a range of frequencies depending on the type of load, with lighting loads using higher frequencies.

49.
Watt balance
–
A watt balance is an experimental electromechanical weight-measuring instrument that measures the weight of a test object very precisely by the strength of an electric current and a voltage. In 2016, metrologists agreed to rename watt balances as Kibble balances in honour of their inventor. The instrument is being developed as a metrological tool that may one day provide a definition of the kilogram unit of mass based on electronic units, a so-called electronic or electrical kilogram. The name watt balance comes from the fact that the weight of the test mass is proportional to the product of a current and a voltage, a product measured in units of watts. In this new application, the balance will be used in the opposite sense: the weight of the kilogram is then used to compute the mass of the kilogram by accurately determining the local gravitational acceleration. This will define the mass of a kilogram in terms of a current and a voltage. The principle that is used in the watt balance was proposed by B. P. Kibble of the UK National Physical Laboratory in 1975 for measurement of the gyromagnetic ratio. The main weakness of the ampere balance method is that the result depends on the accuracy with which the dimensions of the coils are measured. The watt balance method has an extra step in which the effect of the geometry of the coils is eliminated: moving the force coil through a magnetic flux at a known speed. This step was first done in 1990. In 2014, NRC researchers published the most accurate measurement of the Planck constant to date, with a relative uncertainty of 1.8×10−8. A conducting wire of length L that carries an electric current I perpendicular to a magnetic field of strength B will experience a Laplace force equal to BLI. In the watt balance, the current is varied so that this force exactly counteracts the weight w of a mass m; this is also the principle behind the ampere balance. w is given by the mass m multiplied by the local gravitational acceleration g.
Kibble's watt balance avoids the problems of measuring B and L with a calibration step: the same wire is moved through the magnetic field at a known speed v, and by Faraday's law of induction, a potential difference U is generated across the ends of the wire. The unknown product BL can then be eliminated from the equations to give UI = mgv. With U, I, g, and v accurately measured, this gives an accurate value for m. Both sides of the equation have the dimensions of power, measured in watts in the International System of Units; the current watt balance experiments are equivalent to measuring the value of the conventional watt in SI units. The importance of such measurements is that they are also a direct measurement of the Planck constant h: h = 4/(KJ² RK), where KJ is the Josephson constant and RK is the von Klitzing constant. The principle of the electronic kilogram would be to define the value of the Planck constant in the same way that the meter is defined by the speed of light.
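The relation UI = mgv turns four electrical and mechanical measurements into a mass. A toy calculation along these lines (the numbers below are purely illustrative, not actual experimental values):

```python
# Watt balance relation: U*I = m*g*v, so the mass follows as m = U*I / (g*v).
U = 1.0        # volts: potential difference induced in the moving phase
I = 9.81e-3    # amperes: current in the weighing phase
g = 9.81       # m/s^2: local gravitational acceleration
v = 1.0e-3     # m/s: speed of the coil through the magnetic flux

m = U * I / (g * v)
print(m)  # approximately 1.0 kg for these illustrative values
```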

Watt balance
–
The NIST watt balance; the vacuum chamber dome, which lowers over the entire apparatus, is visible at top
Watt balance
–
Precision Ampere balance at the US National Bureau of Standards (now NIST) in 1927. The current coils are visible under the balance, attached to the right balance arm. The Watt balance is a development of the Ampere balance.

50.
Wattmeter
–
The wattmeter is an instrument for measuring the electric power in watts of any given circuit. Electromagnetic wattmeters are used for measurement of utility-frequency and audio-frequency power; other types are required for radio frequency measurements. The traditional analog wattmeter is an electrodynamic instrument. The device consists of a pair of fixed coils, known as current coils, and a movable coil known as the potential coil. The current coils are connected in series with the circuit, while the potential coil is connected in parallel. Also, on analog wattmeters, the potential coil carries a needle that moves over a scale to indicate the measurement. A current flowing through the current coil generates an electromagnetic field around the coil; the strength of this field is proportional to the line current. The potential coil has, as a rule, a high-value resistor connected in series with it to reduce the current that flows through it. The result of this arrangement is that on a dc circuit, the deflection is proportional to both the current and the voltage. For AC power, current and voltage may not be in phase; on an ac circuit the deflection is proportional to the average instantaneous product of voltage and current, thus measuring true power, P = VI cos φ. Here, cos φ represents the power factor, which shows that the power transmitted may be less than the apparent power obtained by multiplying the readings of a voltmeter and ammeter in the same circuit. The two circuits of a wattmeter can be damaged by excessive current. This is because the position of the pointer depends on the power factor as well as on the voltage and current; thus, a circuit with a low power factor will give a low reading on the wattmeter even when both of its circuits are heavily loaded. Therefore, a wattmeter is rated not only in watts, but also in volts and amperes. A typical wattmeter in educational labs has two voltage (pressure) coils and a current coil; the two coils can be connected in series or parallel to each other to change the ranges of the wattmeter. Another feature is that the pressure coil can also be tapped to change the meter's range.
If the pressure coil has a range of 300 volts, half of it can be used so that the range becomes 150 volts. Electronic wattmeters are used for direct, small power measurements or for power measurements at frequencies beyond the range of electrodynamometer-type instruments. A modern digital electronic wattmeter/energy meter samples the voltage and current thousands of times a second; for each sample, the voltage is multiplied by the current at the same instant, and the average over at least one cycle is the real power. The real power divided by the apparent volt-amperes is the power factor. A computer circuit uses the sampled values to calculate RMS voltage, RMS current, VA, power, power factor, and kilowatt-hours.
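The sample-multiply-average procedure described above can be sketched directly. The waveform parameters below are invented for illustration (230 V RMS, 1 A RMS, current lagging the voltage by 60 degrees):

```python
import math

# A digital wattmeter samples v(t) and i(t) and averages their product.
N = 10000                      # samples over exactly one mains cycle
phi = math.pi / 3              # 60 degree phase lag of current behind voltage
v = [230 * math.sqrt(2) * math.sin(2 * math.pi * k / N) for k in range(N)]
i = [1 * math.sqrt(2) * math.sin(2 * math.pi * k / N - phi) for k in range(N)]

real_power = sum(vk * ik for vk, ik in zip(v, i)) / N   # P = average of v*i
v_rms = math.sqrt(sum(vk * vk for vk in v) / N)
i_rms = math.sqrt(sum(ik * ik for ik in i) / N)
apparent = v_rms * i_rms                                # apparent power in VA
power_factor = real_power / apparent                    # equals cos(phi)

print(round(real_power, 1))    # about 115.0 W = 230 * 1 * cos(60 deg)
print(round(power_factor, 3))  # about 0.5
```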

Wattmeter
–
Wattmeter
Wattmeter
–
Early wattmeter on display at the Historic Archive and Museum of Mining in Pachuca, Mexico
Wattmeter
–
Prodigit Model 2000MU (UK version), shown in use and displaying a reading of 10 watts being consumed by the appliance
Wattmeter
–
Itron OpenWay wattmeter with two-way communications for remote reading, in use by DTE Energy

51.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was generated in 1967 based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN was conceived in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340013818, with 340 indicating the publisher and 01381 their serial number. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
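The check digits used by the two formats can be computed as follows; this sketch (the function names are this example's own) uses the ISBNs quoted in this section, 0-340-01381-8 and 978-3-16-148410-0:

```python
def isbn10_check_digit(first9):
    """ISBN-10: weight the first nine digits 10..2; the check digit makes the
    total divisible by 11, with 'X' standing for a check value of 10."""
    total = sum(int(d) * w for d, w in zip(first9, range(10, 1, -1)))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)

def isbn13_check_digit(first12):
    """ISBN-13 (EAN-13): weight digits alternately 1 and 3; the check digit
    makes the total divisible by 10."""
    total = sum(int(d) * (1 if k % 2 == 0 else 3) for k, d in enumerate(first12))
    return str((10 - total % 10) % 10)

print(isbn10_check_digit("034001381"))    # '8'  -> ISBN 0-340-01381-8
print(isbn13_check_digit("978316148410")) # '0'  -> ISBN 978-3-16-148410-0
```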

International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code

52.
Pacific Grove, California
–
Pacific Grove is a coastal city in Monterey County, California, in the United States. The United States Census Bureau estimated its 2013 population at 15,504. Pacific Grove is located between Point Pinos and Monterey. Pacific Grove is known for its Victorian homes and Asilomar State Beach; the city is endowed with more historical houses per capita than anywhere else in California. Seventy-five percent of the homes in Pacific Grove are considered historical, and some of them have been turned into bed-and-breakfast inns. The city is known as the location of the Point Pinos Lighthouse. The Pacific Grove Museum of Natural History and Pacific Grove Art Center are located in the historic downtown. Pacific Grove was also the main filming location for Roger Spottiswoode's 1989 film Turner & Hooch as well as A Summer Place, starring Sandra Dee. In prehistoric times the Rumsen were one of the linguistically distinct Ohlone groups of the Monterey Bay Area who inhabited the area now known as Pacific Grove; this tribe subsisted by hunting, fishing and gathering in what has been deduced as a biologically rich Monterey Peninsula. In time, the butterflies, fragrant pines and fresh sea air brought others to the Pacific Grove Retreat to rest. The initial camp meeting of the Pacific Coast branch of the Chautauqua Literary and Scientific Circle was held in Pacific Grove in June 1879, modeled after the Methodist Sunday school teachers' training camp established in 1874 at Chautauqua Lake, New York. In November 1879, after the summer campers returned home, Robert Louis Stevenson wandered into the deserted campgrounds: "I have never been in any place so dreamlike. Indeed, it was not so much like a town as like a scene upon the stage by daylight." The Pacific Grove post office opened in 1886 and closed later that year. Pacific Grove, like Carmel-by-the-Sea and Monterey, became an artists' haven in the 1890s and the subsequent period.
Artists of the en plein air school in both Europe and the United States were seeking a venue which had natural beauty, so Pacific Grove was a magnet for this movement. William Adam was an English painter who first moved to Monterey; at about the same time Eugen Neuhaus, a German painter, arrived in Pacific Grove with his new bride. The Asilomar Conference Grounds are located at the edge of Pacific Grove. Asilomar opened in 1913 as a YWCA summer retreat; it now belongs to the California State Park System. Thirteen buildings on these grounds were designed by the architect Julia Morgan, who also designed Hearst Castle. For a number of years, John Steinbeck lived in a cottage in Pacific Grove owned by his father, Ernest. The cottage still stands on a quiet side street at 147 11th St. without any plaque or special sign, virtually overlooked by most Steinbeck fans. Another Steinbeck-related house is at 222 Central Ave, which was his grandmother's house; a golden statue of Steinbeck stood in the front yard for years before it was removed. In Steinbeck's book Sweet Thursday, a chapter is dedicated to describing a rivalry that arose among the residents over the game of roque.

53.
Portable document format
–
The Portable Document Format (PDF) is a file format used to present documents in a manner independent of application software, hardware, and operating systems. Each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, and graphics. PDF was developed in the early 1990s as a way to share computer documents, including text formatting and inline images. It was among a number of competing formats such as DjVu, Envoy, Common Ground Digital Paper, and Farallon Replica; in those early years, before the rise of the World Wide Web and HTML documents, PDF was popular mainly in desktop publishing workflows. Adobe Systems made the PDF specification available free of charge in 1993, but some technologies remained proprietary; these are not standardized and their specification is published only on Adobe's website. Many of them are not supported by popular third-party implementations of PDF, so when organizations publish PDFs which use proprietary technologies, they present accessibility issues for some users. In 2014, ISO TC171 voted to deprecate XFA for ISO 32000-2; on January 9, 2017, the final draft for ISO 32000-2 was published, thus reaching the approval stage. PDF combines three technologies: a subset of the PostScript page description programming language, for generating the layout and graphics; a font-embedding/replacement system to allow fonts to travel with the documents; and a structured storage system to bundle these elements and any associated content into a single file. PostScript is a page description language run in an interpreter to generate an image, a process requiring many resources. It can handle graphics and standard programming features such as flow control; PDF is largely based on PostScript but simplified to remove flow control features like these. Often, the PostScript-like PDF code is generated from a source PostScript file.
The graphics commands that are output by the PostScript code are collected and tokenized; any files, graphics, or fonts to which the document refers also are collected. Then, everything is compressed into a single file; therefore, the entire PostScript world remains intact. PDF supports graphic transparency; PostScript does not. PostScript is an interpreted programming language with an implicit global state, so instructions accompanying the description of one page can affect the appearance of any following page; therefore, all preceding pages in a PostScript document must be processed to determine the appearance of a given page. A PDF file is a 7-bit ASCII file, except for elements that may have binary content. A PDF file starts with a header containing the magic number and the version of the format. The format is a subset of a COS ("Carousel" Object Structure) format. A COS tree file consists primarily of objects, of which there are eight types: Boolean values (representing true or false), numbers, strings (enclosed within parentheses), names, arrays, dictionaries, streams, and the null object. Objects may be either direct or indirect.
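The header magic mentioned above can be demonstrated on a hand-written fragment. The byte string below is purely illustrative, not a complete, viewable PDF file:

```python
# A minimal PDF-like byte string: the "%PDF-" header magic with a version,
# one COS dictionary object, and the end-of-file marker.
data = (b"%PDF-1.4\n"
        b"1 0 obj\n<< /Type /Catalog /Pages 2 0 R >>\nendobj\n"
        b"%%EOF\n")

# PDF readers identify the file by this header magic:
print(data.startswith(b"%PDF-"))   # True
version = data[5:8].decode("ascii")
print(version)                     # 1.4
```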

54.
Nuclear Regulatory Commission
–
The Nuclear Regulatory Commission (NRC) is an independent agency of the United States government tasked with protecting public health and safety related to nuclear energy. Established by the Energy Reorganization Act of 1974, the NRC began operations on January 19, 1975 as one of two successor agencies to the United States Atomic Energy Commission. Prior to 1975 the Atomic Energy Commission (AEC) was in charge of matters regarding radionuclides; the AEC was dissolved because it was perceived as unduly favoring the industry it was charged with regulating. The NRC was formed as an independent commission to oversee nuclear energy matters, including the oversight of nuclear medicine. The other successor agency, the Energy Research and Development Administration (ERDA), was formed in 1975 and was responsible for the development and oversight of nuclear weapons; in 1977, ERDA became the United States Department of Energy, and in 2000 the National Nuclear Security Administration was created as a subcomponent of DOE, responsible for nuclear weapons. Critics have charged that the NRC has, in critical areas, abdicated its role as a regulator altogether. Among the examples cited, a 1986 Congressional report found that NRC staff had provided valuable assistance to the utility seeking an operating license for the controversial Seabrook plant. The origins and development of NRC regulatory processes and policies are explained in five volumes published by the University of California Press, among them Controlling the Atom: The Beginnings of Nuclear Regulation, 1946–1962 and Containing the Atom: Nuclear Regulation in a Changing Environment, 1963–1971. The NRC has also produced a booklet, A Short History of Nuclear Regulation, 1946–2009. Thomas Wellock, an academic, is the NRC historian; before joining the NRC, Wellock wrote Critical Masses: Opposition to Nuclear Power in California. The NRC is headed by five Commissioners appointed by the President of the United States and confirmed by the United States Senate for five-year terms.
One of them is designated by the President to be the Chairman. The current chairman is Kristine Svinicki; President Donald Trump designated Ms. Svinicki as Chairman of the NRC effective January 2017. The NRC consists of the Commission on the one hand and the offices of the Executive Director for Operations on the other. The Commission is supported by two committees and one board, including the Atomic Safety and Licensing Board Panel, as well as eight commission staff offices; of the operations offices, the NRC's major program components are the first five offices mentioned above. The NRC's proposed FY2015 budget is $1,059.5 million, an increase of $3.6 million (including 65.1 FTE) compared to FY2014. NRC headquarters offices are located in unincorporated North Bethesda, Maryland. Region I, located in King of Prussia, Pennsylvania, oversees the northeastern states; Region II, located in Atlanta, Georgia, oversees most of the southeastern states; Region III, located in Lisle, Illinois, oversees the Midwest.

55.
Livermore, California
–
Livermore is a city in Alameda County, California, in the United States. With an estimated 2014 population of 86,870, Livermore is the most populous city in the Tri-Valley. Livermore is located on the eastern edge of California's San Francisco Bay Area. The incumbent Mayor of Livermore is John Marchand, a registered Democrat. Livermore was founded by William Mendenhall and named after Robert Livermore, his friend and a local rancher who settled in the area in the 1840s. Livermore is the home of the Lawrence Livermore National Laboratory, for which the chemical element livermorium is named; Livermore is also the California site of Sandia National Laboratories, which is headquartered in Albuquerque, New Mexico. Its south side is home to local vineyards. The city has redeveloped its downtown district and is considered part of the Tri-Valley area, comprising the Amador, Livermore and San Ramon valleys. Before American settlement, the area fell under the Franciscan Mission San Jose, established in the 1790s in what is now part of Fremont. Each mission had two to three friars and a contingent of up to five soldiers to keep order in the mission. Local tribes were coerced into the adjacent missions; the Mission Indians were restricted to the mission grounds, where they lived in sexually segregated barracks that they built themselves under the padres' instruction. From about 1800 to about 1837, the Livermore-Amador Valley was used primarily as grazing land for some of Mission San Jose's growing herds of cattle and sheep. The herds grew wild with no fences and were culled about once a year for cow hides; the dead animals were left to rot or to feed the California grizzly bears which then roamed the region. Some Indians joined or re-joined some of the few surviving tribes. The roughly 48,000-acre Rancho Las Positas grant, which includes most of Livermore, was made to ranchers Robert Livermore and Jose Noriega in 1839. Most land grants were given at little or no cost to the recipients.
Robert Livermore was a British citizen who had jumped ship from a British merchant sailing vessel stopping in Monterey, California. He became a naturalized Mexican citizen, having converted to Catholicism in 1823 as was required for citizenship and legal residence. Typical of most early rancho dwellings, the first building on his ranch was an adobe on Las Positas Creek near the end of today's Las Positas Road. The non-Indian population exploded, and cattle were suddenly worth much more than the $1.00–$3.00 their hides could previously bring. A wooden house later built on the ranch is believed to be the first wooden building in the Livermore Tri-Valley. Most horse traffic went by way of Altamont Pass, just east of Livermore, and Robert Livermore was a very accommodating host who welcomed nearly all who stopped by with lodging and meals. Robert Livermore died in 1858 and was buried at Mission San Jose, before the establishment of the town that bears his name. His ranch included much of the present-day city. The city of Livermore, named in honor of Robert Livermore, was established in 1869 by William Mendenhall, who had first met Livermore while marching through the valley with John C. Frémont.

56.
University of North Carolina at Chapel Hill
–
The University of North Carolina at Chapel Hill, also known as UNC, or simply Carolina, is a public research university located in Chapel Hill, North Carolina, United States. It is one of the 17 campuses of the University of North Carolina system. The first public institution of higher education in North Carolina, the school opened its doors to students on February 12, 1795. The university offers degrees in over 70 courses of study through fourteen colleges. In 1952, North Carolina opened its own hospital, UNC Health Care, for research and treatment, and has since specialized in cancer care. The school's students, alumni, and sports teams are known as Tar Heels. The campus covers 729 acres of Chapel Hill's downtown area, encompassing the Morehead Planetarium and the many stores and shops located on Franklin Street. Students can participate in over 550 officially recognized student organizations. The student-run newspaper The Daily Tar Heel has won national awards for collegiate media, while the student radio station WXYC provided the world's first internet radio broadcast. North Carolina is one of the charter members of the Atlantic Coast Conference. Competing athletically as the Tar Heels, North Carolina has achieved great success in sports, most notably in men's basketball and women's soccer. In 1955, UNC Chapel Hill officially desegregated its undergraduate divisions; around the same time, the system's Woman's College was renamed the University of North Carolina at Greensboro. During World War II, UNC Chapel Hill was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission. During the 1960s, the campus was the location of significant political protest.
Prior to the passage of the Civil Rights Act of 1964, protests about local racial segregation, which began quietly in Franklin Street restaurants, led to mass demonstrations. The climate of civil unrest prompted the 1963 Speaker Ban Law, prohibiting speeches by communists on state campuses in North Carolina. The law was criticized by university Chancellor William Brantley Aycock and university President William Friday. A group of UNC Chapel Hill students, led by Student Body President Paul Dickson, filed a lawsuit in U.S. federal court, and on February 20, 1968, the Speaker Ban Law was struck down. From the late 1990s onward, UNC Chapel Hill expanded rapidly, with a 15% increase in student population to more than 28,000 by 2007. Professor Oliver Smithies was awarded the Nobel Prize in Medicine in 2007 for his work in genetics; additionally, Aziz Sancar was awarded the Nobel Prize in Chemistry in 2015 for his work in understanding the molecular repair mechanisms of DNA. The current chancellor is Carol Folt, the first woman to hold the post. UNC Chapel Hill's 729-acre campus is dominated by two central quads, Polk Place and McCorkle Place. Adjacent to Polk Place is a sunken brick courtyard known as the Pit, where students gather. The Morehead–Patterson Bell Tower, located in the heart of campus, tolls the quarter-hour. In 1999, UNC Chapel Hill was one of sixteen recipients of the American Society of Landscape Architects Medallion Awards and was identified as one of 50 college or university works of art by T. A. Gaines in his book The Campus as a Work of Art. The university's campus is divided into three regions, usually referred to as north campus, middle campus, and south campus.

University of North Carolina course catalog from June 1819
Confederate soldier Silent Sam, University of North Carolina at Chapel Hill, by sculptor John Wilson.
The Morehead Planetarium, designed by Eggers & Higgins, first opened in 1949.

57.
Becquerel
–
The becquerel (Bq) is the SI derived unit of radioactivity. One becquerel is defined as the activity of a quantity of radioactive material in which one nucleus decays per second; the becquerel is therefore equivalent to the reciprocal second, s⁻¹. The becquerel is named after Henri Becquerel, who shared a Nobel Prize in Physics with Pierre and Marie Curie. As with every International System of Units unit named for a person, the first letter of its symbol is uppercase: 1 Bq = 1 s⁻¹. A special name was introduced for the reciprocal second to represent radioactivity in order to avoid potentially dangerous mistakes with prefixes; for example, 1 µs⁻¹ could be taken to mean 10⁶ disintegrations per second. Other names considered were hertz, a special name already in use for the reciprocal second, and fourier; the hertz is now used only for periodic phenomena. Whereas 1 Hz is 1 cycle per second, 1 Bq is 1 aperiodic radioactivity event per second. The gray and the becquerel were introduced in 1975. Between 1953 and 1975, absorbed dose was often measured in rads; decay activity was measured in curies before 1946 and often in rutherfords between 1946 and 1975. Like any SI unit, Bq can be prefixed; commonly used multiples are kBq, MBq, GBq, and TBq. For practical applications, 1 Bq is a small unit, so the prefixes are common. For example, the roughly 0.0169 g of potassium-40 present in a human body produces approximately 4,400 disintegrations per second, or 4.4 kBq, of activity. The global inventory of carbon-14 is estimated to be 8.5×10¹⁸ Bq, and the nuclear explosion in Hiroshima is estimated to have produced 8×10²⁴ Bq. The becquerel succeeded the curie (Ci), an older, non-SI unit of radioactivity based on the activity of 1 gram of radium-226; the curie is defined as 3.7×10¹⁰ s⁻¹, or 37 GBq. The following table shows radiation quantities in SI and non-SI units.
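The curie-to-becquerel relationship above is a fixed conversion factor, which can be sketched numerically (the function names are illustrative):

```python
# Numeric sketch of the activity units above.
CURIE_IN_BQ = 3.7e10  # 1 Ci = 3.7×10^10 decays per second = 37 GBq

def curies_to_becquerels(ci):
    # scale curies by the defined conversion factor
    return ci * CURIE_IN_BQ

def becquerels_to_curies(bq):
    return bq / CURIE_IN_BQ

# The ~4.4 kBq of potassium-40 activity in a human body is a tiny
# fraction of one curie, which is why SI prefixes on Bq are common.
assert curies_to_becquerels(1.0) == 3.7e10
assert becquerels_to_curies(4400) < 1e-6
```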

58.
Coulomb
–
The coulomb (C) is the International System of Units unit of electric charge: +1 C is equivalent to the charge of approximately 6.242×10¹⁸ protons, and −1 C is equivalent to the charge of approximately 6.242×10¹⁸ electrons. This SI unit is named after Charles-Augustin de Coulomb; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. (Note that degree Celsius conforms to this rule because the d is lowercase. Based on The International System of Units.) The SI system defines the coulomb in terms of the ampere and second: 1 C = 1 A × 1 s. The second is defined in terms of a frequency emitted by caesium atoms, and the ampere is defined using Ampère's force law; the definition relies in part on the mass of the prototype kilogram. In practice, the watt balance is used to measure amperes with the highest possible accuracy. One coulomb is the magnitude of charge in 6.24150934×10¹⁸ protons or electrons; the inverse of this gives the elementary charge of 1.6021766208×10⁻¹⁹ C. The magnitude of the charge of one mole of elementary charges is known as a faraday unit of charge. In terms of Avogadro's number NA, one coulomb is equal to approximately 1.036×10⁻⁵ × NA elementary charges. One ampere-hour = 3600 C; 1 mA⋅h = 3.6 C. One statcoulomb, the obsolete CGS electrostatic unit of charge, is approximately 3.3356×10⁻¹⁰ C, or about one-third of a nanocoulomb. The elementary charge, the charge of a proton, is approximately 1.6021766208×10⁻¹⁹ C. In SI, the value of the elementary charge in coulombs is an approximate, measured value; in some other systems, however, the elementary charge has an exact value by definition (the conventional charge e90 is defined as an exact value in coulombs). SI itself may someday change its definitions in a similar way; for example, one possible proposed redefinition of the ampere is such that the value of the elementary charge e is exactly 1.602176487×10⁻¹⁹ coulombs. This proposal has not yet been accepted as part of the SI. The charges in static electricity from rubbing materials together are typically a few microcoulombs.
The amount of charge that travels through a lightning bolt is typically around 15 C, and the amount of charge that travels through a typical alkaline AA battery from being fully charged to discharged is about 5 kC = 5000 C ≈ 1400 mA⋅h. The hydraulic analogy uses everyday terms to illustrate movement of charge: the analogy equates charge to a volume of water, and voltage to pressure.
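The conversions above (coulombs to elementary charges, milliampere-hours to coulombs) are simple arithmetic and can be sketched as follows; the function names are illustrative:

```python
# Numeric checks of the conversions described above.
E_CHARGE = 1.6021766208e-19  # elementary charge in coulombs, from the text

def coulombs_to_elementary_charges(q_coulombs):
    # how many proton charges make up q coulombs
    return q_coulombs / E_CHARGE

def mah_to_coulombs(mah):
    # 1 mA·h = 0.001 A × 3600 s = 3.6 C
    return mah * 3.6

# 1 C contains about 6.242×10^18 elementary charges
assert round(coulombs_to_elementary_charges(1.0) / 1e18, 3) == 6.242
# a ~1400 mA·h AA cell delivers about 5 kC of charge
assert abs(mah_to_coulombs(1400) - 5040.0) < 1e-9
```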

59.
Celsius
–
Celsius, also known as centigrade, is a metric scale and unit of measurement for temperature. As an SI derived unit, it is used by most countries in the world. It is named after the Swedish astronomer Anders Celsius, who developed a similar temperature scale. The degree Celsius can refer to a temperature on the Celsius scale as well as a unit to indicate a temperature interval. Before being renamed to honour Anders Celsius in 1948, the unit was called centigrade, from the Latin centum, which means 100, and gradus, which means steps. The scale is based on 0° for the freezing point of water and 100° for the boiling point, and this scale is widely taught in schools today. By international agreement, the unit degree Celsius and the Celsius scale are currently defined by two different temperatures: absolute zero, and the triple point of VSMOW (Vienna Standard Mean Ocean Water). This definition also precisely relates the Celsius scale to the Kelvin scale. Absolute zero, the lowest temperature possible, is defined as being precisely 0 K and −273.15 °C. The temperature of the triple point of water is defined as precisely 273.16 K at 611.657 pascals of pressure. This definition fixes the magnitude of both the degree Celsius and the kelvin as precisely 1 part in 273.16 of the difference between absolute zero and the triple point of water. Thus, it sets the magnitude of one degree Celsius and that of one kelvin as exactly the same; additionally, it establishes the difference between the two scales' null points as being precisely 273.15 degrees. In his paper Observations of two persistent degrees on a thermometer, Celsius recounted his experiments showing that the melting point of ice is essentially unaffected by pressure. He also determined with remarkable precision how the boiling point of water varied as a function of atmospheric pressure. He proposed that the zero point of his temperature scale, being the boiling point, would be calibrated at the mean barometric pressure at mean sea level.
This pressure is known as one standard atmosphere; the BIPM's 10th General Conference on Weights and Measures later defined one standard atmosphere to equal precisely 1,013,250 dynes per square centimetre. On 19 May 1743 he published the design of a mercury thermometer. In 1744, coincident with the death of Anders Celsius, the Swedish botanist Carolus Linnaeus reversed Celsius's scale; in one account, Linnaeus recounted the temperatures inside the orangery at the University of Uppsala Botanical Garden. Since the 19th century, the scientific and thermometry communities worldwide have referred to this scale as the centigrade scale, and temperatures on the scale were often reported simply as degrees or as degrees centigrade. (More properly, what was defined as centigrade then would now be hectograde.) For scientific use, Celsius is the term usually used, with centigrade otherwise continuing to be in common but decreasing use, especially in informal contexts in English-speaking countries.
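The fixed offset of 273.15 degrees between the two scales' null points, described above, gives a one-line conversion in each direction (function names are illustrative):

```python
# Celsius <-> Kelvin conversion using the exact 273.15-degree offset.

def celsius_to_kelvin(t_c):
    return t_c + 273.15

def kelvin_to_celsius(t_k):
    return t_k - 273.15

assert celsius_to_kelvin(-273.15) == 0.0             # absolute zero
assert round(celsius_to_kelvin(0.01), 2) == 273.16   # triple point of water
assert round(kelvin_to_celsius(273.16), 2) == 0.01
```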

60.
Farad
–
The farad (F) is the SI derived unit of electrical capacitance, the ability of a body to store an electrical charge. It is named after the English physicist Michael Faraday. One farad is defined as the capacitance across which, when charged with one coulomb, there is a potential difference of one volt. Equally, one farad can be described as the capacitance which stores a one-coulomb charge across a potential difference of one volt. The relationship between capacitance, charge and potential difference is linear: for example, if the potential difference across a capacitor is halved, the quantity of charge stored is also halved. For most applications, the farad is an impractically large unit of capacitance. Most electrical and electronic applications are covered by the following SI prefixes:
1 mF = 0.001 F = 1000 μF = 1,000,000 nF
1 μF = 0.000001 F = 1000 nF = 1,000,000 pF
1 nF = 0.000000001 F = 1000 pF
In 1881, at the International Congress of Electricians in Paris, the name farad was officially adopted for the unit of electrical capacitance. A capacitor consists of two conducting surfaces, frequently referred to as plates, separated by an insulating layer usually referred to as a dielectric. The original capacitor was the Leyden jar, developed in the 18th century; it is the accumulation of electric charge on the plates that results in capacitance. Values of capacitors are specified in farads, microfarads, nanofarads and picofarads. The millifarad is rarely used in practice, while the nanofarad is uncommon in North America. The size of commercially available capacitors ranges from around 0.1 pF to 5,000 F supercapacitors. Capacitance values of 1 pF or lower can be achieved by twisting two short lengths of insulated wire together. The capacitance of the Earth's ionosphere with respect to the ground is calculated to be about 1 F. The picofarad is sometimes pronounced as puff or pic, as in a ten-puff capacitor.
Similarly, mic is sometimes used informally to signify microfarads. If the Greek letter μ is not available, the notation uF is often used as a substitute for μF in electronics literature. A micro-microfarad, an obsolete unit sometimes found in texts prior to 1960 and on capacitor packages even more recently, is the equivalent of a picofarad; similarly, mmf or MMFD represented picofarads. The reciprocal of capacitance is called electrical elastance, the unit of which is the daraf. The abfarad is an obsolete CGS unit of capacitance equal to 10⁹ farads. The statfarad is a rarely used CGS unit equivalent to the capacitance of a capacitor with a charge of 1 statcoulomb across a potential difference of 1 statvolt; it is approximately 1.1126 picofarads.
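The linear relationship between capacitance, charge, and potential difference stated above is Q = C·V, which can be sketched numerically (function names are illustrative):

```python
import math

# Q = C·V: charge stored scales linearly with potential difference,
# so halving the voltage halves the stored charge.

def charge_stored(capacitance_farads, volts):
    return capacitance_farads * volts

one_uF = 1e-6  # a typical microfarad-scale capacitor
assert math.isclose(charge_stored(one_uF, 5.0), 5e-6)    # 5 µC at 5 V
assert math.isclose(charge_stored(one_uF, 2.5), 2.5e-6)  # halved V, halved Q
```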

A comparatively small 1 farad capacitor, for low voltages and current transfers
Examples of different types of capacitors

61.
Henry (unit)
–
The henry (H) is the SI derived unit of electrical inductance. The unit is named after Joseph Henry, the American scientist who discovered electromagnetic induction independently of Michael Faraday. The magnetic permeability of vacuum is 4π×10⁻⁷ H⋅m⁻¹. The henry is a derived unit based on four of the seven base units of the International System of Units: the kilogram, the meter, the second, and the ampere. The United States National Institute of Standards and Technology recommends that English-speaking users of SI write the plural as henries.

62.
Hertz
–
The hertz (Hz) is the unit of frequency in the International System of Units and is defined as one cycle per second. It is named for Heinrich Rudolf Hertz, the first person to provide conclusive proof of the existence of electromagnetic waves. Hertz are commonly expressed in SI multiples kilohertz, megahertz, gigahertz, and terahertz (kilo meaning thousand, mega meaning million, giga meaning billion, and tera meaning trillion). Some of the unit's most common uses are in the description of sine waves and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the clock speeds at which computers and other electronics are driven. The hertz is equivalent to cycles per second, i.e., 1/second or s⁻¹. In English, hertz is also used as the plural form. As an SI unit, Hz can be prefixed; commonly used multiples are kHz, MHz, GHz and THz. One hertz simply means one cycle per second, 100 Hz means one hundred cycles per second, and so on. The unit may be applied to any periodic event; for example, a clock might be said to tick at 1 Hz. The rate at which aperiodic or stochastic events occur is expressed in reciprocal seconds or inverse seconds in general or, in the specific case of radioactive decay, in becquerels. Whereas 1 Hz is 1 cycle per second, 1 Bq is 1 aperiodic radionuclide event per second. The conversion between a frequency f measured in hertz and an angular velocity ω measured in radians per second is ω = 2πf and f = ω/2π. This SI unit is named after Heinrich Hertz; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. (Note that degree Celsius conforms to this rule because the d is lowercase. Based on The International System of Units.) The hertz is named after the German physicist Heinrich Hertz, who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission in 1930, and the term cycles per second was largely replaced by hertz by the 1970s.
One hobby magazine, Electronics Illustrated, declared its intention to stick with the traditional kc, Mc, etc. units. Sound is a traveling longitudinal wave, which is an oscillation of pressure; humans perceive the frequency of sound waves as pitch. Each musical note corresponds to a particular frequency, which can be measured in hertz. An infant's ear is able to perceive frequencies ranging from 20 Hz to 20,000 Hz; the range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz into the terahertz range and beyond. Electromagnetic radiation is described by its frequency, the number of oscillations of the perpendicular electric and magnetic fields per second, expressed in hertz. Radio frequency radiation is measured in kilohertz, megahertz, or gigahertz.
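The conversion stated above between a frequency f in hertz and an angular velocity ω in radians per second can be sketched directly (function names are illustrative):

```python
import math

# ω = 2πf and f = ω/2π, as stated in the text.

def angular_velocity(f_hz):
    return 2 * math.pi * f_hz

def frequency(omega_rad_per_s):
    return omega_rad_per_s / (2 * math.pi)

assert math.isclose(angular_velocity(1.0), 2 * math.pi)
# round-trip through both conversions recovers the frequency (concert A)
assert math.isclose(frequency(angular_velocity(440.0)), 440.0)
```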

Details of a heartbeat as an example of a non-sinusoidal periodic phenomenon that can be described in terms of hertz; two complete cycles are illustrated.
A sine wave with varying frequency

63.
Katal
–
The katal (kat) is the SI unit of catalytic activity. It is a derived SI unit for quantifying the catalytic activity of enzymes, and its use is recommended by the General Conference on Weights and Measures and other international organizations. It replaces the non-SI enzyme unit; enzyme units are, however, still more commonly used than the katal in practice at present, especially in biochemistry. The katal is not used to express the rate of a reaction; rather, it is used to express catalytic activity, which is a property of the catalyst. The katal is invariant of the measurement procedure, but the numerical quantity value is not, as it depends on the experimental conditions. Therefore, in order to define the quantity of a catalyst, the measurement conditions must be specified: one katal of trypsin, for example, is that amount of trypsin which breaks a mole of peptide bonds per second under specified conditions. 1 kat = 1 mol/s. The name katal has been used for decades; it comes from the Ancient Greek κατάλυσις, meaning dissolution, which is the same origin as the word catalysis itself.
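Since 1 kat = 1 mol/s, converting from the older enzyme unit is simple arithmetic. The enzyme unit's conventional definition of 1 µmol of substrate converted per minute is supplied here as an assumption (it is not spelled out above), and the function name is illustrative:

```python
import math

# 1 enzyme unit (U) = 1 µmol/min (assumed conventional definition),
# so 1 U = 1e-6 mol / 60 s ≈ 16.67 nkat.

def enzyme_units_to_katals(enzyme_units):
    return enzyme_units * 1e-6 / 60

assert math.isclose(enzyme_units_to_katals(60), 1e-6)  # 60 U = 1 µkat
assert math.isclose(enzyme_units_to_katals(1) * 1e9, 16.67, rel_tol=1e-3)
```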

64.
Lumen (unit)
–
The lumen (lm) is the SI derived unit of luminous flux, a measure of the total quantity of visible light emitted by a source. Lumens are related to lux in that one lux is one lumen per square metre. The lumen is defined in relation to the candela as 1 lm = 1 cd⋅sr. A full sphere has a solid angle of 4π steradians, so a light source that uniformly radiates one candela in all directions has a total luminous flux of 1 cd × 4π sr = 4π cd⋅sr ≈ 12.57 lumens. If a light source emits one candela of luminous intensity uniformly across a solid angle of one steradian, the total luminous flux emitted into that angle is one lumen; alternatively, an isotropic one-candela light source emits a total luminous flux of exactly 4π lumens. If the source were partly covered by an ideal absorbing hemisphere, the luminous intensity would still be one candela in those directions that are not obscured. The lumen can be thought of casually as a measure of the total amount of visible light in some defined beam or angle. The number of candelas or lumens from a source also depends on its spectrum. The difference between the units lumen and lux is that the lux takes into account the area over which the luminous flux is spread: a flux of 1000 lumens, concentrated into an area of one square metre, lights up that square metre with an illuminance of 1000 lux, whereas the same 1000 lumens, spread out over ten square metres, produces a dimmer illuminance of only 100 lux. Mathematically, 1 lx = 1 lm/m². A source radiating a power of one watt of light in the color for which the eye is most efficient has a luminous flux of 683 lumens, so a lumen represents at least 1/683 watt of visible light power. Lamps used for lighting are commonly labelled with their light output in lumens: a 23 W spiral compact fluorescent lamp, for example, emits about 1,400–1,600 lm. Many compact fluorescent lamps and other alternative light sources are labelled as being equivalent to an incandescent bulb with a specific wattage.
Below is a table that shows typical luminous flux for common incandescent bulbs. On September 1, 2010, European Union legislation came into force mandating that lighting equipment must be labelled primarily in terms of luminous flux instead of electric power. This change is a result of the EU's Eco-design Directive for Energy-using Products. For example, according to the European Union standard, an energy-efficient bulb that claims to be the equivalent of a 60 W tungsten bulb must have a minimum light output of 700–750 lm. The light output of projectors is typically measured in lumens. A standardized procedure for testing projectors has been established by the American National Standards Institute, which involves averaging together several measurements taken at different positions; for marketing purposes, the luminous flux of projectors that have been tested according to this procedure may be quoted in ANSI lumens. ANSI lumen measurements are in general more accurate than the other measurement techniques used in the projector industry, which allows projectors to be easily compared on the basis of their brightness specifications.
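The two relations above, total flux of an isotropic source (4π lumens per candela) and illuminance as lumens per square metre, can be sketched numerically (function names are illustrative):

```python
import math

# An isotropic source of I candelas emits 4πI lumens in total,
# and 1 lx = 1 lm/m².

def isotropic_flux_lumens(candelas):
    return 4 * math.pi * candelas

def illuminance_lux(lumens, area_m2):
    return lumens / area_m2

assert math.isclose(isotropic_flux_lumens(1.0), 12.566, rel_tol=1e-3)
assert illuminance_lux(1000, 1) == 1000.0   # concentrated: bright
assert illuminance_lux(1000, 10) == 100.0   # spread out: dimmer
```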

65.
Lux
–
The lux (lx) is the SI unit of illuminance and luminous emittance, measuring luminous flux per unit area. It is equal to one lumen per square metre. In photometry, this is used as a measure of the intensity, as perceived by the human eye, of light that hits or passes through a surface. In English, lux is used as both the singular and plural form. Illuminance is a measure of how much luminous flux is spread over a given area; one can think of luminous flux as a measure of the total amount of visible light present. A given amount of light will illuminate a surface more dimly if it is spread over a larger area: a flux of 1000 lumens concentrated onto one square metre lights that square metre with 1000 lux, whereas the same 1000 lumens, spread out over ten square metres, produces a dimmer illuminance of only 100 lux. Achieving an illuminance of 500 lux might be possible in a kitchen with a single fluorescent light fixture with an output of 12,000 lumens; to light a factory floor with dozens of times the area of the kitchen would require dozens of such fixtures. Thus, lighting a larger area to the same level of lux requires a greater number of lumens. As with other SI units, SI prefixes can be used: for instance, a star of apparent magnitude 0 provides 2.08 microlux at the Earth's surface, and a barely perceptible magnitude 6 star provides 8 nanolux. The unobscured sun provides an illumination of up to 100 kilolux on the Earth's surface, the exact value depending on the time of year and atmospheric conditions. This direct normal illuminance is related to the solar illuminance constant Esc. The illumination provided on a surface by a point source equals the number of lux just described times the cosine of the angle between a ray coming from the source and a normal to the surface. The number of lux falling on the surface equals this cosine times a number that characterizes the source from the point of view in question; it differs from the luminance, which does depend on the angular distribution of the emission.
A perfectly white surface with one lux falling on it will emit one lux. Like all photometric units, the lux has a corresponding radiometric unit; the weighting factor relating them is known as the luminosity function. The lux is one lumen per square metre, and the corresponding radiometric unit, which measures irradiance, is the watt per square metre. The peak of the luminosity function is at 555 nm; the eye's image-forming visual system is more sensitive to light of this wavelength than to any other. Other wavelengths of visible light produce fewer lux per watt-per-metre-squared, and the luminosity function falls to zero for wavelengths outside the visible spectrum. For a light source with mixed wavelengths, the number of lumens per watt can be calculated by means of the luminosity function. This means that white light sources produce far fewer lumens per watt than the theoretical maximum of 683.002 lm/W. The ratio between the actual number of lumens per watt and the theoretical maximum is expressed as a percentage known as the luminous efficiency; for example, a typical incandescent light bulb has a luminous efficiency of only about 2%.
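The cosine dependence of illuminance on a tilted surface, described above, can be sketched numerically. The inverse-square fall-off with distance from a point source is assumed here in addition to the cosine factor stated in the text, and the function name is illustrative:

```python
import math

# E = (I / d²) · cos(θ): illuminance in lux from a point source of
# I candelas at distance d metres, with θ the angle of incidence
# measured from the surface normal. Inverse-square law is assumed.

def illuminance_from_point_source(candelas, distance_m, incidence_deg=0.0):
    return (candelas / distance_m ** 2) * math.cos(math.radians(incidence_deg))

assert illuminance_from_point_source(100, 2.0) == 25.0          # head-on
assert math.isclose(illuminance_from_point_source(100, 2.0, 60.0), 12.5)
```

A surface lit at a grazing 60° receives only half the head-on illuminance, which is why the cosine factor matters in lighting design.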

A lux meter for measuring illuminances in workplaces.

66.
Radian
–
The radian is the standard unit of angular measure, used in many areas of mathematics. The length of an arc of a circle is numerically equal to the measurement in radians of the angle that it subtends. The unit was formerly an SI supplementary unit, but this category was abolished in 1995, separately, the SI unit of solid angle measurement is the steradian. The radian is represented by the symbol rad, so for example, a value of 1.2 radians could be written as 1.2 rad,1.2 r,1. 2rad, or 1. 2c. Radian describes the angle subtended by a circular arc as the length of the arc divided by the radius of the arc. One radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. Conversely, the length of the arc is equal to the radius multiplied by the magnitude of the angle in radians. As the ratio of two lengths, the radian is a number that needs no unit symbol, and in mathematical writing the symbol rad is almost always omitted. When quantifying an angle in the absence of any symbol, radians are assumed, and it follows that the magnitude in radians of one complete revolution is the length of the entire circumference divided by the radius, or 2πr / r, or 2π. Thus 2π radians is equal to 360 degrees, meaning that one radian is equal to 180/π degrees, the concept of radian measure, as opposed to the degree of an angle, is normally credited to Roger Cotes in 1714. He described the radian in everything but name, and he recognized its naturalness as a unit of angular measure, the idea of measuring angles by the length of the arc was already in use by other mathematicians. For example, al-Kashi used so-called diameter parts as units where one part was 1/60 radian. The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson at Queens College, Belfast. 
He had used the term as early as 1871, while in 1869 Thomas Muir, then of the University of St Andrews, had vacillated between the terms rad, radial, and radian; in 1874, after a consultation with James Thomson, Muir adopted radian. As stated, one radian is equal to 180/π degrees; thus, to convert from radians to degrees, multiply by 180/π. Since a full circle is both 2π radians and 400 gradians, to convert from radians to gradians multiply by 200/π, and to convert from gradians to radians multiply by π/200. In mathematics beyond practical geometry, angles are universally measured in radians. This is because radians have a mathematical naturalness that leads to a more elegant formulation of a number of important results; most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. Because of these and other properties, the trigonometric functions appear in solutions to problems that are not obviously related to the functions' geometrical meanings.
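The conversion factors above (180/π for degrees, 200/π for gradians) can be checked with a small Python sketch; the function names are illustrative:

```python
import math

def deg_to_rad(deg: float) -> float:
    return deg * math.pi / 180

def rad_to_deg(rad: float) -> float:
    return rad * 180 / math.pi

def rad_to_grad(rad: float) -> float:
    return rad * 200 / math.pi

def grad_to_rad(grad: float) -> float:
    return grad * math.pi / 200

# A full revolution: 360 degrees = 2*pi radians = 400 gradians.
assert math.isclose(deg_to_rad(360), 2 * math.pi)
assert math.isclose(rad_to_grad(2 * math.pi), 400)

# One radian is 180/pi degrees, roughly 57.3 degrees.
print(round(rad_to_deg(1), 4))   # 57.2958
```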

Radian
–
A chart to convert between degrees and radians
Radian
–
An arc of a circle with the same length as the radius of that circle corresponds to an angle of 1 radian. A full circle corresponds to an angle of 2π radians.

67.
Siemens (unit)
–
The siemens is the unit of electric conductance, electric susceptance and electric admittance in the International System of Units. The 14th General Conference on Weights and Measures approved the addition of the siemens as a unit in 1971. The unit is named after Ernst Werner von Siemens; as with every SI unit whose name is derived from the proper name of a person, the first letter of its symbol is upper case (the lower-case s is the symbol for the second). When an SI unit is spelled out in English, it should begin with a lower-case letter. In English, the same form siemens is used both for the singular and the plural. The unit siemens for the conductance G is defined by S = Ω−1 = A/V, where Ω is the ohm, A is the ampere, and V is the volt. For a device with a conductance of one siemens, the current through the device will increase by one ampere for every increase of one volt of electric potential difference across the device. The conductance of a resistor with a resistance of five ohms, for example, is (5 Ω)−1, equal to 0.2 S. The mho /moʊ/ is an alternative name for the same unit, the reciprocal of one ohm. Mho is derived from spelling ohm backwards and is written with an upside-down capital Greek letter omega, ℧; according to Maver, the term mho was suggested by Sir William Thomson. The mho was officially renamed to the siemens, replacing the old meaning of the siemens unit. The term siemens, as it is an SI term, is used universally in science and often in electrical applications, while mho is still used primarily in electronic applications. Likewise, it can be difficult to distinguish the symbol S from the lower-case s where second is meant.
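The reciprocal relationship between conductance and resistance, and the one-ampere-per-volt behaviour of a one-siemens device, can be sketched as follows (helper names are illustrative):

```python
def conductance_siemens(resistance_ohms: float) -> float:
    """Conductance G = 1/R; the siemens is the reciprocal ohm (the old 'mho')."""
    return 1.0 / resistance_ohms

def current_amps(conductance_s: float, voltage_v: float) -> float:
    """Ohm's law in conductance form: I = G * V.
    For G = 1 S the current rises one ampere per volt."""
    return conductance_s * voltage_v

# The five-ohm resistor from the text has a conductance of 0.2 S.
print(conductance_siemens(5))      # 0.2
print(current_amps(1.0, 3.0))      # 3.0
```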

68.
Tesla (unit)
–
The tesla is a unit of measurement of the strength of a magnetic field; it is a unit of the International System of Units. One tesla is equal to one weber per square metre. The unit was announced during the General Conference on Weights and Measures in 1960 and is named in honour of Nikola Tesla, upon the proposal of the Slovenian electrical engineer France Avčin. The strongest fields encountered from permanent magnets are from Halbach spheres; the strongest field trapped in a laboratory superconductor as of June 2014 is 21 T. The relationship between the magnetic and electric fields may be appreciated by looking at the units for each: the unit of electric field in the MKS system of units is newtons per coulomb, N/C, while the magnetic field can be written as N/(C·m/s). The dividing factor between the two types of field is metres per second, which is a velocity. In ferromagnets, the movement creating the magnetic field is the electron spin; in a current-carrying wire the movement is due to electrons moving through the wire. One tesla is equivalent to: 10,000 gauss (G), used in the CGS system (thus 10 kG = 1 T, and 1 G = 10−4 T); 1,000,000,000 gamma (γ), used in geophysics (thus 1 γ = 1 nT); and 42.6 MHz of the 1H nucleus frequency (thus the magnetic field associated with NMR at 1 GHz is 23.5 T). One tesla is also equal to 1 V·s/m2. This can be shown by starting with the speed of light in vacuum, c = (ε0μ0)−1/2, and inserting the SI values and units for c, the vacuum permittivity ε0, and the vacuum permeability μ0; cancellation of numbers and units then produces this relation. For those concerned with low-frequency electromagnetic radiation in the home, the conversions needed most are 1000 nT = 1 µT = 10 mG and 1,000,000 µT = 1 T. For the relation to the units of the magnetizing field, see the article on permeability.
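The tesla/gauss/gamma equivalences listed above reduce to powers of ten, which a few illustrative helper functions make explicit:

```python
def tesla_to_gauss(t: float) -> float:
    return t * 1e4          # 1 T = 10,000 G (CGS system)

def gauss_to_tesla(g: float) -> float:
    return g * 1e-4

def tesla_to_gamma(t: float) -> float:
    return t * 1e9          # 1 gamma = 1 nT (used in geophysics)

def microtesla_to_milligauss(ut: float) -> float:
    return ut * 10.0        # 1 uT = 10 mG, the home-exposure conversion

print(tesla_to_gauss(1.0))                  # 10000.0
print(round(tesla_to_gamma(1e-9), 6))       # 1.0  (one nanotesla is one gamma)
print(microtesla_to_milligauss(1.0))        # 10.0
```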

69.
Weber (unit)
–
In physics, the weber /ˈweɪbər/ is the SI unit of magnetic flux. A flux density of one Wb/m2 is one tesla. The weber is named after the German physicist Wilhelm Eduard Weber. The weber may be defined in terms of Faraday's law, which relates a changing magnetic flux through a loop to the electric field around the loop: a change in flux of one weber per second will induce an electromotive force of one volt. Officially: "Weber — The weber is the magnetic flux that, linking a circuit of one turn, produces in it an electromotive force of one volt as it is reduced to zero at a uniform rate in one second." (Based on The International System of Units.) As with every International System of Units unit named for a person, the first letter of its symbol is upper case; note that degree Celsius conforms to this rule because the d is lowercase. In 1861, the British Association for the Advancement of Science established a committee under William Thomson to study electrical units. It was not until 1927 that TC1 dealt with the study of various outstanding problems concerning electrical and magnetic quantities; as disagreement continued, the IEC decided on an effort to remedy the situation. It instructed a task force to study the question in readiness for the next meeting. In 1935, TC1 recommended names for several electrical units, including the weber for the practical unit of magnetic flux. This system was given the designation of the Giorgi system. Also in 1936, TC1 passed responsibility for electric and magnetic magnitudes and units to the new TC24. This led eventually to the adoption of the Giorgi system, which unified electromagnetic units with the MKS dimensional system of units. In 1938, TC24 recommended as a link the permeability of free space with the value of µ0 = 4π×10−7 H/m. This group also recognized that any one of the practical units already in use could serve as the fourth base unit. After consultation, the ampere was adopted as the fourth unit of the Giorgi system in Paris in 1950.
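The Faraday's-law definition above, and the weber-per-square-metre relation to the tesla, can be sketched numerically (function names are illustrative):

```python
def induced_emf_volts(delta_flux_wb: float, delta_t_s: float, turns: int = 1) -> float:
    """Magnitude of the EMF from Faraday's law: |EMF| = N * dPhi/dt.
    A flux change of one weber per second in a one-turn loop gives one volt."""
    return turns * delta_flux_wb / delta_t_s

def flux_density_tesla(flux_wb: float, area_m2: float) -> float:
    """A flux density of one weber per square metre is one tesla."""
    return flux_wb / area_m2

print(induced_emf_volts(1.0, 1.0))    # 1.0  (the defining case in the text)
print(flux_density_tesla(2.0, 2.0))   # 1.0
```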

70.
Atomic mass unit
–
The unified atomic mass unit or dalton is a standard unit of mass that quantifies mass on an atomic or molecular scale. One unified atomic mass unit is approximately the mass of one nucleon and is equivalent to 1 g/mol. The CIPM has categorised it as a non-SI unit accepted for use with the SI. The amu without the unified prefix is technically an obsolete unit based on oxygen, which was replaced in 1961; however, many still use the term amu but now define it in the same way as u. In this sense, most uses of the terms atomic mass unit and amu today actually refer to the unified atomic mass unit. For standardization a specific atomic nucleus had to be chosen, because the mass of a nucleon depends on the count of the nucleons in the atomic nucleus due to the mass defect. This is also why the mass of a proton or neutron by itself is more than 1 u. The atomic mass unit is not the unit of mass in the atomic units system, which is rather the electron rest mass. The relative atomic mass scale has traditionally been a relative value, and this evaluation was made prior to the discovery of the existence of elemental isotopes, which occurred in 1912. The divergence of these values could result in errors in computations: the chemistry amu, based on the relative atomic mass of natural oxygen, was about 1.000282 as massive as the physics amu, based on pure isotopic 16O. For these and other reasons, the standard for both physics and chemistry was changed to carbon-12 in 1961. The choice of carbon-12 was made to minimise further divergence with prior literature. The new and current unit was referred to as the unified atomic mass unit and given a new symbol, u. The dalton is another name for the unified atomic mass unit: 1 u = mu = (1/12) m(12C). Despite this change, modern sources often use the old term amu but define it as u; therefore, in general, amu likely does not refer to the old oxygen standard unit. The unified atomic mass unit and the dalton are different names for the same unit of measure. 
As with other names such as watt and newton, dalton is not capitalized in English. In 2003 the Consultative Committee for Units, part of the CIPM, recommended a preference for the usage of the dalton over the atomic mass unit as it is shorter. In 2005, the International Union of Pure and Applied Physics endorsed the use of the dalton as an alternative to the atomic mass unit
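Converting between daltons and SI mass can be sketched with the standard CODATA value for 1 u; the helper name is illustrative:

```python
U_IN_KG = 1.66053906660e-27   # CODATA 2018 value of 1 u in kilograms

def mass_kg(mass_u: float) -> float:
    """Convert a mass given in unified atomic mass units (daltons) to kg."""
    return mass_u * U_IN_KG

# Carbon-12 is exactly 12 u by definition of the unit.
print(mass_kg(12.0))
# One dalton is roughly the mass of one nucleon (~1.66e-27 kg).
print(mass_kg(1.0))
```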

71.
Day
–
In common usage, a day is either an interval equal to 24 hours or daytime, the consecutive period of time during which the Sun is above the horizon. The period of time during which the Earth completes one rotation with respect to the Sun is called a solar day. Several definitions of this universal human concept are used according to context, need and convenience. In 1960, the second was redefined in terms of the orbital motion of the Earth. The unit of measurement day, redefined in 1960 as 86400 SI seconds and symbolized d, is not an SI unit, but is accepted for use with SI. The word day may also refer to a day of the week or to a date, as in answer to the question "On which day?". The life patterns of humans and many other species are related to Earth's solar day. In recent decades the average length of a day on Earth has been about 86400.002 seconds. A day, understood as the span of time it takes for the Earth to make one rotation with respect to the celestial background or a distant star, is called a stellar day. This period of rotation is about 4 minutes less than 24 hours. Mainly due to tidal effects, the Earth's rotational period is not constant, resulting in further minor variations for both solar days and stellar days. Other planets and moons have stellar and solar days of different lengths from Earth's. Besides the day of 24 hours, the word day is used for several different spans of time based on the rotation of the Earth around its axis. An important one is the solar day, defined as the time it takes for the Sun to return to its culmination point. Because the Earth orbits the Sun elliptically as the Earth spins on an inclined axis, the length of this day varies through the year, but on average it is equivalent to 24 hours. A day, in the sense of daytime that is distinguished from night-time, is defined as the period during which sunlight directly reaches the ground. The length of daytime averages slightly more than half of the 24-hour day; two effects make daytime on average longer than nights. 
The Sun is not a point, but has an apparent size of about 32 minutes of arc; additionally, the atmosphere refracts sunlight in such a way that some of it reaches the ground even when the Sun is below the horizon by about 34 minutes of arc. So the first light reaches the ground when the centre of the Sun is still below the horizon by about 50 minutes of arc. The difference in time depends on the angle at which the Sun rises and sets, but can amount to around seven minutes. Ancient custom has a new day start at either the rising or setting of the Sun on the local horizon; the exact moment of, and the interval between, two sunrises or sunsets depends on the geographical position, and the time of year. A more constant day can be defined by the Sun passing through the local meridian; the exact moment is dependent on the geographical longitude, and to a lesser extent on the time of the year.
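The roughly four-minute gap between the solar and stellar day noted above follows from the Earth making one extra rotation relative to the stars over a year, which can be checked with a short calculation (a sketch, using the approximate figure of 365.25 solar days per year):

```python
SOLAR_DAY_S = 86400.0   # mean solar day in SI seconds
DAYS_PER_YEAR = 365.25  # approximate solar days per year

# Over one year the Earth rotates once more relative to the stars than
# relative to the Sun, so a stellar day is shorter by that ratio.
stellar_day_s = SOLAR_DAY_S * DAYS_PER_YEAR / (DAYS_PER_YEAR + 1)

print(round(stellar_day_s))                           # 86164 (about 23 h 56 min)
print(round((SOLAR_DAY_S - stellar_day_s) / 60, 1))   # 3.9 (roughly 4 minutes)
```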

72.
Degree (angle)
–
A degree, usually denoted by °, is a measurement of a plane angle, defined so that a full rotation is 360 degrees. It is not an SI unit, as the SI unit of angular measure is the radian. Because a full rotation equals 2π radians, one degree is equivalent to π/180 radians. The original motivation for choosing the degree as a unit of rotations and angles is unknown. One theory states that it is related to the fact that 360 is approximately the number of days in a year: ancient astronomers noticed that the sun, which follows the ecliptic path over the course of the year, seemed to advance in its path by approximately one degree each day. Some ancient calendars, such as the Persian calendar, used 360 days for a year; the use of a calendar with 360 days may be related to the use of sexagesimal numbers. Another theory rests on the earliest trigonometry, used by the Babylonian astronomers and their Greek successors, which was based on chords of a circle: a chord of length equal to the radius made a natural base quantity, and one sixtieth of this, using their standard sexagesimal divisions, was a degree. Aristarchus of Samos and Hipparchus seem to have been among the first Greek scientists to exploit Babylonian astronomical knowledge and techniques systematically. Timocharis, Aristarchus, Aristillus, Archimedes, and Hipparchus were the first Greeks known to divide the circle in 360 degrees of 60 arc minutes; Eratosthenes used a simpler sexagesimal system dividing a circle into 60 parts. A further motivation is that 360 is divisible by every number from 1 to 10 except 7, and this property has many useful applications, such as dividing the world into 24 time zones, each of which is nominally 15° of longitude, to correlate with the established 24-hour day convention. Finally, it may be the case that more than one of these factors has come into play. For many practical purposes, a degree is a small enough angle that whole degrees provide sufficient precision. 
When this is not the case, as in astronomy or for geographic coordinates, degree measurements may be written using decimal degrees, with the degree symbol behind the decimals. Alternatively, the sexagesimal unit subdivisions can be used: one degree is divided into 60 minutes, and one minute into 60 seconds. Use of degrees-minutes-seconds is also called DMS notation. These subdivisions, also called the arcminute and arcsecond, are represented by a single and double prime. For example, 40.1875° = 40° 11′ 15″, or, using quotation mark characters, 40° 11' 15". Additional precision can be provided using decimals for the arcseconds component. The older system of thirds, fourths, etc., which continues the sexagesimal unit subdivision, was used by al-Kashi and other ancient astronomers, but is rarely used today.

73.
Hectare
–
The hectare is an SI-accepted metric system unit of area equal to 100 ares, primarily used in the measurement of land as a metric replacement for the imperial acre. An acre is about 0.405 hectare and one hectare contains about 2.47 acres. In 1795, when the metric system was introduced, the are was defined as 100 square metres, and the hectare was thus 100 ares or 1⁄100 km2. When the metric system was further rationalised in 1960, resulting in the International System of Units, the are was not included as a recognised unit. The hectare, however, remains as a non-SI unit accepted for use with the SI units. The metric system of measurement was first given a legal basis in 1795 by the French Revolutionary government. At the first meeting of the CGPM in 1889, a new standard metre, manufactured by Johnson Matthey & Co of London, was adopted. In 1960, when the metric system was updated as the International System of Units, the are did not receive international recognition; the units that were catalogued replicated the recommendations of the CGPM. Many farmers, especially older ones, still use the acre for everyday calculations, and convert to hectares only for official paperwork. Farm fields can have long histories which are resistant to change, with names such as "the six acre field" stretching back hundreds of years. The names centiare, deciare, decare and hectare are derived by adding the standard metric prefixes to the base unit of area, the are. The centiare is a synonym for one square metre; the deciare is ten square metres. The are is a unit of area equal to 100 square metres; it was defined by older forms of the metric system, but is now outside of the modern International System of Units. It is commonly used to measure real estate, in particular in Indonesia, India, and in French-, Portuguese-, Slovakian-, Serbian-, Czech-, Polish- and Dutch-speaking countries. In Russia and other former Soviet Union states, the are is called sotka. 
It is used to describe the size of suburban dacha or allotment garden plots or small city parks where the hectare would be too large. The decare is derived from deka, the prefix for 10, and are, and is equal to 10 ares or 1000 square metres. It is used in Norway and in the former Ottoman areas of the Middle East. The hectare, although not strictly a unit of SI, is the only named unit of area that is accepted for use within the SI. The United Kingdom, United States, Burma, and to some extent Canada instead use the acre. Others, such as South Africa, published conversion factors which were to be used particularly when preparing consolidation diagrams by compilation. In many countries, metrication redefined or clarified existing measures in terms of metric units.
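The acre-hectare conversion quoted above (about 0.405 ha per acre, about 2.47 acres per hectare) can be derived from the defined areas of both units, as this sketch shows (the acre figure is the international acre, 4046.8564224 m²):

```python
HECTARE_M2 = 10_000.0     # 1 ha = 100 ares = 10,000 square metres
ACRE_M2 = 4046.8564224    # international acre in square metres (exact by definition)

def hectares_to_acres(ha: float) -> float:
    return ha * HECTARE_M2 / ACRE_M2

def acres_to_hectares(ac: float) -> float:
    return ac * ACRE_M2 / HECTARE_M2

print(round(hectares_to_acres(1), 2))   # 2.47
print(round(acres_to_hectares(1), 3))   # 0.405
```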

74.
Hour
–
An hour is a unit of time conventionally reckoned as 1⁄24 of a day and scientifically reckoned as 3,599–3,601 seconds, depending on conditions. The seasonal, temporal, or unequal hour was established in the ancient Near East as 1⁄12 of the night or daytime; such hours varied by season, latitude, and weather. The hour was subsequently divided into 60 minutes, each of 60 seconds. The modern English word hour is a development of the Anglo-Norman houre and Middle English ure, first attested in the 13th century. It displaced the Old English tide and stound. The Anglo-Norman term was a borrowing of Old French ure, a variant of ore, which derived from Latin hōra and Greek hṓrā. Like Old English tīd and stund, hṓrā was originally a word for any span of time, including seasons. Its Proto-Indo-European root has been reconstructed as *yeh₁-, making hour distantly cognate with year. The time of day is typically expressed in English in terms of hours. Whole hours on a 12-hour clock are expressed using the contracted phrase o'clock; hours on a 24-hour clock are expressed as "hundred" or "hundred hours". Fifteen and thirty minutes past the hour are expressed as "a quarter past" or "after" and "half past", respectively; fifteen minutes before the hour may be expressed as "a quarter to", "of", "till", or "before" the hour. Sumerian and Babylonian hours divided the day and night into 24 equal hours. The ancient Egyptians began dividing the night into wnwt at some time before the compilation of the Dynasty V Pyramid Texts in the 24th century BC. By 2150 BC, diagrams of stars inside Egyptian coffin lids, variously known as diagonal calendars or star clocks, attest that there were exactly 12 of these. The coffin diagrams show that the Egyptians took note of the risings of 36 stars or constellations. Each night, the risings of eleven of these decans were noted. The original decans used by the Egyptians would have fallen noticeably out of their proper places over a span of several centuries. 
By the time of Amenhotep III, the priests at Karnak were using water clocks to determine the hours; these were filled to the brim at sunset, and the hour was determined by comparing the water level against one of twelve gauges, one for each month of the year. During the New Kingdom, another system of decans was used. The later division of the day into 12 hours was accomplished by sundials marked with ten equal divisions; the morning and evening periods when the sundials failed to note time were observed as the first and last hours. The Egyptian hours were closely connected both with the priesthood of the gods and with their divine services. By the New Kingdom, each hour was conceived as a specific region of the sky or underworld through which Ra's solar bark travelled. Protective deities were assigned to each and were used as the names of the hours. As the protectors and resurrectors of the sun, the goddesses of the night hours were considered to hold power over all lifespans and thus became part of Egyptian funerary rituals. The Egyptian term for astronomer, used as a synonym for priest, was wnwty. The earliest forms of wnwt include one or three stars, with the later solar hours including the determinative hieroglyph for sun.

Hour
–
Key concepts
Hour
–
Midnight on a 24-hour digital clock

75.
Minute and second of arc
–
A minute of arc, arcminute, arc minute, or minute arc is a unit of angular measurement equal to 1/60 of one degree. Since one degree is 1/360 of a turn, one minute of arc is 1/21600 of a turn. A second of arc, arcsecond, or arc second is 1/60 of an arcminute, 1/3600 of a degree, 1/1296000 of a turn, and π/648000 of a radian. To express even smaller angles, standard SI prefixes can be employed. The number of square arcminutes in a complete sphere is 4π(10800/π)2 = 466560000/π ≈ 148510660 square arcminutes. The standard symbol for marking the arcminute is the prime (′), though a single quote (') is used where only ASCII characters are permitted. One arcminute is thus written 1′; it is also abbreviated as arcmin or amin or, less commonly, the prime with a circumflex over it. The standard symbol for the arcsecond is the double prime (″), though a double quote (") is commonly used where only ASCII characters are permitted. One arcsecond is thus written 1″; it is also abbreviated as arcsec or asec. In celestial navigation, seconds of arc are rarely used in calculations, the preference usually being for degrees, minutes, and decimals of a minute. This notation has been carried over into marine GPS receivers, which normally display latitude and longitude in the latter format by default. An arcsecond is approximately the angle subtended by a U.S. dime coin at a distance of 4 kilometres; a milliarcsecond is about the size of a dime atop the Eiffel Tower as seen from New York City; a microarcsecond is about the size of a period at the end of a sentence in the Apollo mission manuals left on the Moon as seen from Earth. Since antiquity the arcminute and arcsecond have been used in astronomy; the principal exception is right ascension in equatorial coordinates, which is measured in units of hours, minutes, and seconds. These small angles may also be written in milliarcseconds, or thousandths of an arcsecond. The unit of distance, the parsec, named from the parallax of one arcsecond, was developed for such parallax measurements. 
It is the distance at which the radius of the Earth's orbit would subtend an angle of one arcsecond. The ESA astrometric space probe Gaia is hoped to measure star positions to 20 microarcseconds when it begins producing catalog positions sometime after 2016; there are about 1.3 trillion µas in a turn. Currently the best catalog positions of stars actually measured are in terms of milliarcseconds. Apart from the Sun, the star with the largest angular diameter from Earth is R Doradus, a red supergiant with a diameter of 0.05 arcsecond. The dwarf planet Pluto has proven difficult to resolve because its angular diameter is about 0.1 arcsecond. Space telescopes are not affected by the Earth's atmosphere but are diffraction limited; for example, the Hubble Space Telescope can reach an angular size of stars down to about 0.1″.
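The dime-at-4-km comparison above follows from the small-angle formula, as this sketch shows (the 17.9 mm dime diameter is an assumed illustrative value):

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi   # about 206,265 arcseconds per radian

def angular_size_arcsec(diameter_m: float, distance_m: float) -> float:
    """Small-angle approximation: angle in arcseconds subtended by an object."""
    return (diameter_m / distance_m) * ARCSEC_PER_RAD

# A US dime (~17.9 mm across) at 4 km subtends roughly one arcsecond.
print(round(angular_size_arcsec(0.0179, 4000), 2))   # 0.92
```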

Minute and second of arc
–
Comparison of angular diameter of the Sun, Moon, planets and the International Space Station. To get a true representation of the sizes, view the image at a distance of 103 times the width of the "Moon: max." circle. For example, if this circle is 10 cm wide on your monitor, view it from 10.3 m away.

76.
Neper
–
The neper is a logarithmic unit for ratios of measurements of physical field and power quantities, such as gain and loss of electronic signals. The unit's name is derived from the name of John Napier, the inventor of logarithms. As is the case for the decibel and bel, the neper is a unit of the International System of Quantities, but not part of the International System of Units. Like the decibel, the neper is a unit in a logarithmic scale; while the bel uses the decadic (base-10) logarithm to compute ratios, the neper uses the natural logarithm. The value of a ratio in nepers is given by L_Np = ln(x1/x2) = ln x1 − ln x2, where x1 and x2 are the values of interest, and ln is the natural logarithm. When the values are quadratic in the amplitude, they are first linearised by taking the square root before the logarithm is taken. In the ISQ, the neper is defined as 1 Np = 1. The neper is defined in terms of ratios of field quantities; a power ratio 10 log r dB is equivalent to a field-quantity ratio 20 log r dB, since power is proportional to the square of the amplitude. Hence the neper and decibel are related via 1 Np = 20 log10 e dB ≈ 8.685889638 dB and 1 dB = 1/(20 log10 e) Np ≈ 0.1151 Np. The decibel and the neper thus have a fixed ratio to each other. Like the decibel, the neper is a dimensionless unit; the International Telecommunication Union recognizes both units. The neper is a linear unit of relative difference, meaning that in nepers, relative differences add. This property is shared with logarithmic units in other bases, such as the bel; the centineper can thus be used as a linear replacement for percentage differences. The linear approximation for small differences, (1 + δ)(1 + ε) = 1 + δ + ε + δε ≈ 1 + δ + ε, is widely used. However, it is approximate, with error increasing for large percentage changes.
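The neper-decibel relationship above is a fixed constant factor of 20 log10 e, which a short sketch makes concrete (function names are illustrative):

```python
import math

def ratio_to_nepers(x1: float, x2: float) -> float:
    """L = ln(x1/x2) for field (amplitude) quantities."""
    return math.log(x1 / x2)

def nepers_to_db(np_value: float) -> float:
    """1 Np = 20 log10(e) dB, roughly 8.6859 dB."""
    return np_value * 20 * math.log10(math.e)

print(round(nepers_to_db(1.0), 4))             # 8.6859
# An amplitude ratio of e is exactly one neper.
print(round(ratio_to_nepers(math.e, 1.0), 6))  # 1.0
```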

77.
Atomic units
–
Atomic units form a system of natural units which is especially convenient for atomic physics calculations. There are two different kinds of atomic units, Hartree atomic units and Rydberg atomic units, which differ in the choice of the unit of mass and charge. In Hartree units, the speed of light is approximately 137. Atomic units are often abbreviated a.u. or au, not to be confused with the same abbreviation used also for astronomical units, arbitrary units, and absorbance units in different contexts. Atomic units, like SI units, have a unit of mass, a unit of length, and so on; however, the use and notation is somewhat different from SI. Suppose a particle with mass m has 3.4 times the mass of the electron. The value of m can be written in three ways. First, m = 3.4 m_e: this is the clearest notation, where the unit is included explicitly as a symbol. Second, m = 3.4 a.u.: this notation is ambiguous; here, it means that m is 3.4 times the atomic unit of mass, but if a length L were 3.4 times the atomic unit of length it would be written the same way, so the dimension needs to be inferred from context. Third, m = 3.4: this notation is similar to the previous one, and has the same dimensional ambiguity; it comes from setting the atomic units to 1, in this case m_e = 1. The electron rest mass, the elementary charge, the reduced Planck constant and the Coulomb constant are the four fundamental constants that form the basis of the Hartree atomic units; therefore, their numerical values in the atomic units are unity by definition. Dimensionless physical constants retain their values in any system of units. Of particular importance is the fine-structure constant α = e2/(ℏc) ≈ 1/137. This immediately gives the value of the speed of light, expressed in atomic units. Below are given a few derived units. Some of them have names and symbols assigned, as indicated in the table. There are two variants of atomic units, one where they are used in conjunction with SI units for electromagnetism, and one where they are used with Gaussian-cgs units. Although the units written above are the same either way, the units related to magnetism are not. In the SI system, the atomic unit for magnetic field is 1 a.u. = ℏ/(e·a02) = 2.35×105 T = 2.35×109 G, and in the Gaussian-cgs unit system, it is e/a02 = 1.72×103 T = 1.72×107 G.
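The statement that c ≈ 137 in Hartree units, and the SI value of the atomic unit of magnetic field quoted above, can both be checked numerically; the constants below are standard CODATA/defined values, and the variable names are illustrative:

```python
FINE_STRUCTURE = 1 / 137.035999   # dimensionless alpha, same in every unit system

# In Hartree atomic units hbar = e = m_e = 1/(4*pi*eps0) = 1,
# so the speed of light comes out as 1/alpha.
c_in_au = 1 / FINE_STRUCTURE
print(round(c_in_au, 3))          # 137.036

# The SI atomic unit of magnetic field is hbar / (e * a0^2) ~= 2.35e5 T.
HBAR = 1.054571817e-34       # J s (reduced Planck constant)
E_CHARGE = 1.602176634e-19   # C  (elementary charge, exact since 2019)
BOHR = 5.29177210903e-11     # m  (Bohr radius a0)
b_au_tesla = HBAR / (E_CHARGE * BOHR**2)
print(round(b_au_tesla / 1e5, 2))   # 2.35 (i.e. 2.35e5 T, as quoted in the text)
```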

78.
Natural units
–
In physics, natural units are physical units of measurement based only on universal physical constants. For example, the elementary charge e is a natural unit of electric charge, and the speed of light c is a natural unit of speed. Setting such constants to 1 precludes the interpretation of an expression in terms of the physical constants, such as e and c; in this case, the reinsertion of the correct powers of e, c, and so on can be uniquely determined by dimensional analysis. Natural units are "natural" because the origin of their definition comes only from properties of nature. Planck units are often, without qualification, called natural units, although they constitute only one of several systems of natural units, albeit the best known such system. As with other systems of units, the base units of a set of natural units will include definitions and values for length, mass, time, temperature, and charge. It is possible to disregard temperature as a fundamental physical quantity, since it states the energy per degree of freedom of a particle; virtually every system of natural units normalizes Boltzmann's constant kB to 1. There are two common ways to relate charge to mass, length, and time: in Lorentz–Heaviside units, Coulomb's law is F = q1q2/(4πr2), and in Gaussian units, Coulomb's law is F = q1q2/r2. Both possibilities are incorporated into different natural unit systems. Here, α is the fine-structure constant, e2/(4πε0ℏc) ≈ 0.007297, and αG is the gravitational coupling constant, (me/mP)2 ≈ 1.752×10−45. Natural units are most commonly used by setting the units to one; for example, many natural unit systems include the equation c = 1 in the unit-system definition, where c is the speed of light. If a velocity v is half the speed of light, then as v = c/2 and c = 1, the equation v = 1/2 means the velocity v has the value one-half when measured in Planck units, or the velocity v is one-half the Planck unit of velocity. The equation c = 1 can be plugged in anywhere else; for example, Einstein's equation E = mc2 can be rewritten in Planck units as E = m. 
This equation means "the energy of a particle, measured in Planck units of energy, equals the mass of the particle, measured in Planck units of mass". For example, the special relativity equation E2 = p2c2 + m2c4 appears somewhat complicated, but with c = 1 it simplifies to E2 = p2 + m2. Physical interpretation: natural unit systems automatically subsume dimensional analysis. For example, in Planck units, the units are defined by properties of quantum mechanics and gravity; not coincidentally, the Planck unit of length is approximately the distance at which quantum gravity effects become important. Likewise, atomic units are based on the mass and charge of an electron. No prototypes: a prototype is a physical object that defines a unit, such as the International Prototype Kilogram, a physical cylinder of metal whose mass is by definition exactly one kilogram. A prototype definition always has imperfect reproducibility between different places and between different times, and it is an advantage of natural systems that they use no prototypes. Less precise measurements: SI units are designed to be used in precision measurements; for example, the second is defined by an atomic transition frequency in cesium atoms, because this transition frequency can be precisely reproduced with atomic clock technology.
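The collapse of E = mc² to E = m when c = 1 is why particle physicists quote masses as energies; the familiar 0.511 MeV electron mass follows directly, as this sketch shows (constants are standard CODATA/defined values):

```python
C = 299_792_458.0            # m/s, speed of light (exact by definition)
J_PER_EV = 1.602176634e-19   # joules per electronvolt (exact since 2019)
M_E_KG = 9.1093837015e-31    # electron rest mass in kg (CODATA 2018)

# With c = 1, mass and energy share a unit: the electron "mass" in MeV
# is just its rest energy m*c^2 converted from joules.
energy_mev = M_E_KG * C**2 / J_PER_EV / 1e6
print(round(energy_mev, 3))   # 0.511
```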

79.
Outline of the metric system
–
The metric system can be described as all of the following: a system, a set of interacting or interdependent components forming an integrated whole; and a system of measurement, a set of units which can be used to specify anything which can be measured. Historically, systems of measurement were initially defined and regulated to support trade and internal commerce. Units were arbitrarily defined by fiat by the governing entities and were not necessarily well inter-related or self-consistent. When later analyzed scientifically, some quantities were designated as fundamental units. The International System of Units (SI) is the system of units that has been officially endorsed under the Metre Convention since 1960. In 1861 the concept of unit coherence was introduced by Maxwell; the base units were the centimetre, gram and second. History of metrication: metrication is the process by which legacy, nation-specific systems of measurement were replaced by the metric system. The centimetre–gram–second system of units was the variant of the metric system that evolved in stages until it was superseded by SI. The gravitational metric system was a variant of the metric system that normalised the acceleration due to gravity. The metre–tonne–second system of units was a variant of the metric system used in France. Between 1812 and 1839 France used a hybrid system, the mesures usuelles. Prior to 1875 the metric system was controlled by the French Government. In that year, seventeen nations signed the Metre Convention, and its management passed to international bodies. The Metre Convention describes the 1875 treaty and its development to the modern day; three organisations, the CGPM, CIPM and BIPM, were set up under the convention. The General Conference on Weights and Measures (CGPM) is a meeting every four to six years of delegates from all member states. 
The International Committee for Weights and Measures (CIPM) is an advisory body to the CGPM consisting of prominent metrologists. Both the European Union and the International Organization for Standardization have issued directives and recommendations to harmonise the use of units of measure; these documents endorse the use of SI for most purposes (see the European units of measurement directives and ISO/IEC 80000). The New SI definitions are changes in the metric system, or more specifically the International System of Units, that are expected to occur in 2018. Related topics include the Metric Association, the Metric Commission, the Metrication Board, and An Essay towards a Real Character and a Philosophical Language.