X.500

X.500 is a series of computer networking standards covering electronic directory services. The X.500 series was developed by ITU-T, formerly known as CCITT, and first approved in 1988.[1] The directory services were developed in order to support the requirements of X.400 electronic mail exchange and name lookup. ISO was a partner in developing the standards, incorporating them into the Open Systems Interconnection suite of protocols. ISO/IEC 9594 is the corresponding ISO identification.

Because these protocols used the OSI networking stack, a number of alternatives to DAP were developed to allow Internet clients to access the X.500 Directory using the TCP/IP networking stack. The most well-known alternative to DAP is Lightweight Directory Access Protocol (LDAP). While DAP and the other X.500 protocols can now use the TCP/IP networking stack, LDAP remains a popular directory access protocol.

The primary concept of X.500 is that there is a single Directory Information Tree (DIT), a hierarchical organization of entries distributed across one or more servers, called Directory System Agents (DSAs). An entry consists of a set of attributes, each attribute with one or more values. Each entry has a unique Distinguished Name (DN), formed by combining its Relative Distinguished Name (RDN), constructed from one or more attributes of the entry itself, with the RDNs of each of the superior entries up to the root of the DIT. Because LDAP implements a very similar data model to that of X.500, there is further description of the data model in the article on LDAP.
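As a minimal sketch of how this naming scheme composes, the following Python snippet builds a DN from an entry's RDN and the RDNs of its superiors; the entry names used (cn=Jane Doe, o=FoobarWidgets, c=US) are hypothetical, not taken from any real directory:

```python
# Hypothetical entry: its own RDN first, then the RDNs of each
# superior entry up to the root of the DIT.
rdns = [
    ("cn", "Jane Doe"),        # RDN of the entry itself
    ("o", "FoobarWidgets"),    # superior entry: the organization
    ("c", "US"),               # superior entry: the country, directly under the root
]

# The Distinguished Name concatenates these RDNs.
dn = ",".join(f"{attr}={value}" for attr, value in rdns)
print(dn)  # cn=Jane Doe,o=FoobarWidgets,c=US
```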

X.520 and X.521 together provide a definition of a set of attributes and object classes to be used for representing people and organizations as entries in the DIT. Together they form one of the most widely deployed white-pages schemas.

X.509, the portion of the standard providing for an authentication framework, is now also widely used outside of the X.500 directory protocols. It specifies a standard format for public-key certificates.

The relationship of the X.500 Directory and X.509v3 digital certificates

The current practice of loading X.509v3 certificates directly into web browsers, outside the Directory structure, was necessary for e-commerce to develop: it allowed secure web-based (SSL/TLS) communications that did not require the X.500 Directory as the source of digital certificates, as originally conceived in X.500 (1988). To understand the relationship of X.500 and X.509, note that X.509 was designed as the secure access method for updating X.500 before the World Wide Web; when web browsers became popular, a simple method of encrypting connections at the transport layer to web sites was needed. Hence the trusted root certificates of the supported certificate authorities were pre-loaded into certificate storage areas on the personal computer or device.

The WWW e-commerce implementation of X.509v3 bypassed but did not replace the original ISO standard authentication mechanism of binding distinguished names in the X.500 Directory.

These packages of certificates can be added or removed by the end user in their software, but are reviewed by Microsoft and Mozilla in terms of their continued trustworthiness. Should a problem arise, such as what occurred with DigiNotar, browser security experts can issue an update to mark a certificate authority as untrusted, but doing so effectively removes that CA from "internet trust" and is a serious step. X.500 offers a way to view which organization claims a specific root certificate, outside of the provided bundle. This can function as a "four-corner model of trust", adding another check to determine whether a root certificate has been compromised. Rules governing the Federal Bridge policy for revoking compromised certificates are available at www.idmanagement.gov.

In contrast to this browser-bundled approach, in X.500 or LDAP the attribute "caCertificate" can be "bound" to a directory entry and checked in addition to the default pre-loaded bundle of certificates, which end users typically never notice unless an SSL warning message appears.
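The following is a sketch of such a lookup using the third-party Python library ldap3 (pip install ldap3); the hostname and base DN are hypothetical placeholders, and cACertificate is the standard LDAP attribute (per RFC 4523) for a CA's certificate:

```python
from ldap3 import Server, Connection, ALL

# Hypothetical directory server and search base.
server = Server("ldap.foobarwidgets.example", get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind to a public directory

# ';binary' requests the raw DER-encoded certificate value.
conn.search(
    search_base="o=FoobarWidgets,c=US",
    search_filter="(objectClass=certificationAuthority)",
    attributes=["cACertificate;binary"],
)

for entry in conn.entries:
    der_cert = entry["cACertificate;binary"].raw_values[0]
    print(entry.entry_dn, len(der_cert), "bytes of DER")
```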

For example, for a web site using SSL, the DNS site name "www.foobar.com" is verified in the browser by libraries that check whether the site's certificate was signed by one of the trusted root certificates given to the user, thereby creating trust for users that they have reached the correct web site via HTTPS.
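A minimal sketch of this check, using Python's standard ssl module, follows; "www.foobar.com" is the article's example name, not a real site:

```python
import socket
import ssl

hostname = "www.foobar.com"  # the article's example name
context = ssl.create_default_context()  # loads the platform's trusted root bundle

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket raises ssl.SSLCertVerificationError if the certificate
    # chain does not lead to a trusted root, or if the name does not match.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("verified subject:", tls.getpeercert()["subject"])
```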

However, stronger checks are also possible, to indicate that more than the domain name was verified. In X.500, by contrast, the certificate is one attribute among many for an entry, and the entry can contain anything allowed by the specific Directory schema. Thus X.500 does store the digital certificate, but it is one of several attributes that could potentially verify the organization, such as a physical address, a contact telephone number, and an email contact.

Certificate authority (CA) certificates are loaded into the browser automatically (in the case of Microsoft's update mechanism) or with new browser version updates, and the user is given further choices to import or delete them, to develop an individual trust relationship with the loaded certificate authorities, and to determine how the browser will behave if OCSP revocation servers are unreachable.

This is in contrast with the Directory model which associates the attribute caCertificate with a listed certificate authority.

Thus the browser can verify the SSL certificate of a web site by means of the loaded group of accepted certificates, or the root certificates can be looked up in an X.500 or LDAP Directory (or via HTTP/S) and imported into the list of trusted certificate authorities.

The "bound" distinguished name is located in the subject fields of the certificate which matches the Directory entry. X.509v3 can contain other extensions depending on the community of interest other than international domain names. For broad Internet use, RFC-5280 PKIX describes a profile for fields that may be useful for applications such as encrypted email.

An end user who relies on the authenticity of a certificate presented to a browser or email client has no simple way to compare a forged certificate (which perhaps triggers a browser warning) with a valid certificate, without also being given the opportunity to validate the DN, or Distinguished Name, which was designed to be looked up in an X.500 DIT.

The certificate itself is public and considered unforgeable, and can therefore be distributed in any manner, but the associated binding to an identity occurs in the Directory. Binding is what links the certificate to the identity that claims to be using that certificate. For example, the X.500 software that runs the Federal Bridge has cross-certificates that enable trust between certificate authorities.

Simple homographic matching of domain names has resulted in phishing attacks where a domain can appear to be legitimate, but is not.

If an X.509v3 certificate is bound to a valid organization's distinguished name within the Directory, then a simple check of the certificate's authenticity can be made by comparing what is presented to the browser with what is present in the Directory.
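A sketch of that cross-check follows, again using Python with the third-party ldap3 library; the web host, directory server, and base DN are hypothetical, and userCertificate is the standard attribute (per RFC 4523) for an end entity's certificate:

```python
import ssl
from ldap3 import Server, Connection

# Fetch the certificate the server actually presents over TLS.
presented_pem = ssl.get_server_certificate(("www.foobarwidgets.example", 443))
presented_der = ssl.PEM_cert_to_DER_cert(presented_pem)

# Fetch the certificate bound to the organization's Directory entry.
conn = Connection(Server("ldap.foobarwidgets.example"), auto_bind=True)
conn.search(
    "o=FoobarWidgets,c=US",
    "(objectClass=pkiUser)",
    attributes=["userCertificate;binary"],
)
bound_der = conn.entries[0]["userCertificate;binary"].raw_values[0]

# A byte-for-byte match ties the presented certificate to the identity
# published in the Directory; a mismatch warrants suspicion.
print("match" if presented_der == bound_der else "MISMATCH")
```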

Some options do exist to check notaries to see whether a certificate has only recently been seen, and is therefore more likely to have been compromised.[3] If the certificate is likely to be trusted but fails because the domain name is a slight mismatch, it will initially fail in the browser, but can then be subjected to the notary trust check, which can bypass the browser warning.

A valid organizational entry, such as o=FoobarWidgets, will also have an associated alphanumeric OID, and the entry has been "identity proofed" by ANSI, providing another layer of assurance that the certificate is bound to the identity.

Recent events (2011) have indicated a threat from unknown actors in nation states who have forged certificates. This was done in order to mount a man-in-the-middle (MITM) attack against political activists in Syria accessing Facebook over the web. An attack of this kind would normally trigger a browser warning, but not if the MITM certificate was issued by a valid certificate authority already trusted by the browser or other software. Similar attacks were used by Stuxnet, which allowed software to impersonate trusted code. The point of certificate transparency is to allow an end user to determine, using a simple procedure, whether a certificate is in fact valid. Checking against the default bundle of certificates may not be enough to do this, and therefore an additional check is desired. Other suggestions for certificate transparency have also been advanced.[4]
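One such additional check is querying a public certificate-transparency log aggregator for the certificates that have been observed for a domain. The sketch below assumes the JSON interface of the crt.sh service (https://crt.sh/?q=<domain>&output=json); the domain is a hypothetical placeholder:

```python
import json
import urllib.request

# Assumed crt.sh JSON interface; the domain is a placeholder.
url = "https://crt.sh/?q=foobarwidgets.example&output=json"
with urllib.request.urlopen(url) as resp:
    records = json.load(resp)

for rec in records[:5]:
    # Each record lists the issuer and the certificate's validity period;
    # an unexpected issuer or a very recent entry stands out.
    print(rec["issuer_name"], rec["not_before"], rec["not_after"])
```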

A different attack was used against Comodo, a certificate authority, and resulted in forged certificates that were directed at high-profile communications websites. This necessitated an emergency patch to major browsers. These certificates were actually issued from a trusted certificate authority, and therefore a user would have had no warning upon visiting a faked website, in contrast with the Syria incident, where the certificate was crudely forged, including the substitution of "Alto Palo" for "Palo Alto" and incorrect serial numbers.

Some projects designed to exchange PHI (Protected Health Information, which is considered highly sensitive under HIPAA) may obtain X.509v3 certificates via a CERT DNS resource record, or via LDAP to an X.500 (2008) Directory. The question of an authoritative binding is then detailed in RFCs related to the accuracy of DNS information, secured by signing from the root using DNSSEC.
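As a sketch of the CERT resource record path, the snippet below uses the third-party dnspython library (pip install dnspython); the domain is a hypothetical placeholder, and a real deployment would also want the response validated via DNSSEC:

```python
import dns.resolver

# Query the (hypothetical) domain for CERT resource records.
answers = dns.resolver.resolve("foobarwidgets.example", "CERT")
for rdata in answers:
    # Each record carries the certificate type, key tag, algorithm,
    # and the certificate bytes themselves.
    print(rdata.certificate_type, rdata.key_tag, rdata.algorithm,
          len(rdata.certificate), "bytes")
```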

The concept of root name servers has been a source of major contention in the Internet community, but for DNS it is largely resolved. The name space associated with X.500 has traditionally been thought to start with a national naming authority, which mirrors the ISO/ITU approach to global systems with national representation. Thus different countries create their own unique X.500 services. The U.S. X.500 was privatized in 1998, when the U.S. Government no longer offered X.500 or DNS registration outside of known government agencies.

X.500 pilot projects have been developed in the commercial space, and the technology continues to be present in major installations of millions of users within corporate data centers, and within the U.S. Government for credentialing.

The authors of RFC 2693 (concerning SPKI) note that "The original X.500 plan is unlikely ever to come to fruition. Collections of directory entries... are considered valuable or even confidential by those owning the lists and are not likely to be released to the world in the form of an X.500 directory sub-tree." and that "The X.500 idea of a distinguished name (a single, globally unique name that everyone could use when referring to an entity) is also not likely to occur."

"X.500 is too complex to support on desktops and over the Internet, so LDAP was created to provide this service 'for the rest of us'."[5]
