1.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. A more succinct definition is that computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while others focus on the challenges of implementing computation. Human–computer interaction, for example, considers the challenges in making computers and computations useful and usable.

The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, and algorithms for performing computations have likewise existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623, and in 1673 Gottfried Leibniz demonstrated a digital mechanical calculator called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, among other reasons for documenting the binary number system. Charles Babbage began developing his programmable Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making the machine infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; decades later, when the IBM-built Harvard Mark I calculator was finished, some hailed it as "Babbage's dream come true".

During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program was the Cambridge Diploma in Computer Science, and the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and the now well-known IBM brand formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers; still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.

2.
Digital image processing
–
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing: it allows a wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise during processing. Since images are defined over two dimensions (or more), digital image processing may be modeled in the form of multidimensional systems.

The cost of processing was fairly high with the computing equipment of the early era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available; images could then be processed in real time for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computation-intensive operations. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994. In 2002 Raanan Fattal introduced gradient domain image processing, a new way to process images in which the differences between pixels are manipulated rather than the pixel values themselves; a sketch of this idea appears below.
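As a rough illustration of the gradient-domain idea mentioned above, the following minimal one-dimensional sketch (Python with NumPy) manipulates the differences between neighbouring samples rather than the samples themselves. The sample values and the attenuation factor are purely illustrative assumptions; a real 2-D implementation reconstructs the image by solving a Poisson equation rather than by a simple cumulative sum.

```python
import numpy as np

# Minimal 1-D illustration of gradient-domain processing:
# operate on the differences between neighbouring samples instead of the samples.
signal = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # illustrative samples with a sharp edge

gradients = np.diff(signal)    # per-sample differences (the "gradient domain")
compressed = 0.5 * gradients   # manipulate the differences, e.g. attenuate the edge

# Rebuild a signal whose differences match the modified gradients
# (in 2-D this reconstruction step requires solving a Poisson equation).
reconstructed = np.concatenate(([signal[0]], signal[0] + np.cumsum(compressed)))
print(reconstructed)
```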

3.
Pixel
–
In digital imaging, a pixel, or picture element, is the smallest addressable element of a raster image or display device; the address of a pixel corresponds to its physical coordinates. LCD pixels are manufactured in a grid, and are often represented using dots or squares. Each pixel is a sample of an image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.

The word pixel is based on a contraction of pix and el. It was first published in 1965 by Frederic C. Billingsley of JPL, to describe the elements of video images from space probes to the Moon. Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto; McFarland said simply that it was in use at the time. The word is a combination of pix, for picture, and element. The word pix appeared in Variety magazine headlines in 1932 as an abbreviation for the word pictures, in reference to movies, and by 1938 pix was being used in reference to pictures by photojournalists. The concept of a picture element dates to the earliest days of television, and some authors explained pixel as picture cell as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel; for example, IBM used it in their Technical Reference for the original PC.

A pixel is generally thought of as the smallest single component of a digital image; however, the definition is highly context-sensitive. For example, there can be printed pixels in a page, pixels carried by electronic signals or represented by digital values, pixels on a display device, or pixels in a digital camera. This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, and dot. Pixels can be used as a unit of measure, such as 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer; even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition; pixel counts are often quoted in megapixels. The pixels, or color samples, that form an image may or may not be in one-to-one correspondence with screen pixels. In computing, an image composed of pixels is known as a bitmapped image or a raster image.
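To make the pixels-per-inch figures above concrete, here is a small Python sketch of the arithmetic involved; the image dimensions are hypothetical and chosen only for illustration.

```python
# Pixels as a unit of measure: a hypothetical image printed at 600 ppi.
width_px, height_px = 2048, 1536        # assumed image dimensions (illustrative)
pixel_count = width_px * height_px      # about 3.1 million pixels ("3 megapixels")

ppi = 600                               # pixels per inch chosen for printing
print_width_in = width_px / ppi         # 2048 / 600 is roughly 3.4 inches
print_height_in = height_px / ppi       # 1536 / 600 is roughly 2.6 inches

print(f"{pixel_count / 1e6:.1f} MP, prints at "
      f"{print_width_in:.1f} x {print_height_in:.1f} inches at {ppi} ppi")
```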

4.
Resolution (optical systems)
–
In scientific analysis, in general, the term resolution is used to describe the precision with which any instrument measures and records any variable in the specimen or sample under study. An imaging system's resolution can be limited either by aberration or by diffraction causing blurring of the image; these two phenomena have different origins and are unrelated. Aberrations can be explained by geometrical optics and can in principle be solved by increasing the optical quality of the system. Diffraction, on the other hand, comes from the wave nature of light and is determined by the finite aperture of the optical elements. The lens' circular aperture is analogous to a two-dimensional version of the single-slit experiment. The interplay between diffraction and aberration can be characterised by the point spread function (PSF): the narrower the aperture of a lens, the more likely the PSF is dominated by diffraction.

Two point sources are regarded as just resolved when the principal diffraction maximum of one image coincides with the first minimum of the other (the Rayleigh criterion). If the distance is greater, the two points are well resolved; if it is smaller, they are regarded as not resolved. Rayleigh defended this criterion for sources of equal strength. Considering diffraction through a circular aperture, this translates into θ = 1.220 λ / D, where θ is the angular resolution in radians, λ is the wavelength of light, and D is the diameter of the aperture. The factor 1.220 is derived from a calculation of the position of the first dark circular ring surrounding the central Airy disc of the diffraction pattern; this number is more precisely 1.21966989..., the first zero of the order-one Bessel function of the first kind J1 divided by π. The formal Rayleigh criterion is close to the empirical resolution limit found earlier by the English astronomer W. R. Dawes, who tested human observers on close binary stars of equal brightness. The result, θ = 4.56/D with D in inches and θ in arcseconds, is slightly narrower than calculated with the Rayleigh criterion, corresponding to only a slight dip in brightness between the two maxima. Modern image processing techniques, including deconvolution of the point spread function, allow resolution of binaries with even less angular separation.

The angular resolution may be converted into a spatial resolution, Δℓ, by multiplying the angle (in radians) by the distance to the object. For a microscope, that distance is close to the focal length f of the objective. For this case, the Rayleigh criterion reads Δℓ = 1.220 f λ / D. This is the size, in the imaging plane, of the smallest object that the lens can resolve. The size is proportional to wavelength, λ, so that, for example, blue light can be focused to a smaller spot than red light; this result is related to the Fourier properties of a lens. Writing N = f/D for the f-number, Δℓ = 1.220 f λ / D = 1.22 λ N. Since this is the radius of the Airy disk, the resolution is better estimated by the diameter, 2.44 λ N. Point-like sources separated by an angle smaller than the angular resolution cannot be resolved. A single optical telescope may have an angular resolution less than one arcsecond. The angular resolution R of a telescope can usually be approximated by R = λ / D, where λ is the wavelength of the observed radiation and D is the diameter of the telescope's objective; the resulting R is in radians.
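The following short numerical sketch applies the Rayleigh criterion θ = 1.220 λ / D quoted above; the wavelength and aperture diameter are illustrative values, not taken from the text.

```python
import math

# Rayleigh criterion: theta = 1.220 * wavelength / aperture_diameter (radians)
wavelength = 550e-9          # green light, in metres (illustrative)
aperture_diameter = 0.10     # 10 cm aperture, in metres (illustrative)

theta_rad = 1.220 * wavelength / aperture_diameter
theta_arcsec = math.degrees(theta_rad) * 3600   # convert radians to arcseconds

# Roughly 1.38 arcseconds for these values; a sub-arcsecond diffraction limit
# requires a larger aperture.
print(f"diffraction-limited angular resolution: {theta_arcsec:.2f} arcseconds")
```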

5.
Cathode-ray tube
–
The cathode ray tube (CRT) is a vacuum tube that contains one or more electron guns and a phosphorescent screen, and is used to display images. It modulates, accelerates, and deflects one or more electron beams onto the screen to create the images; the images may represent electrical waveforms, pictures, radar targets, or other phenomena. CRTs have also been used as memory devices, in which case the visible light emitted from the fluorescent material is not intended to have significant meaning to a visual observer. In television sets and computer monitors, the front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color, with a video signal as a reference; a simple sketch of this scanning pattern is given below.

A CRT is constructed from a glass envelope which is large, deep, and fairly heavy. The interior of a CRT is evacuated to approximately 0.01 Pa to 133 nPa; evacuation is necessary to facilitate the free flight of electrons from the gun to the tube's face. Because it is evacuated, handling an intact CRT is potentially dangerous due to the risk of breaking the tube and causing a violent implosion that can hurl shards of glass at great velocity. As a matter of safety, the face is made of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions. Flat panel displays can also be made in large sizes; whereas 38 to 40 inches was about the largest size of a CRT television, flat panels are available in 60-inch and larger sizes.

Cathode rays were discovered by Johann Hittorf in 1869 in primitive Crookes tubes. He observed that some unknown rays were emitted from the cathode which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated that cathode rays could be deflected by electric fields. The earliest version of the CRT was known as the Braun tube, invented by the German physicist Ferdinand Braun in 1897; it was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. In 1907, the Russian scientist Boris Rosing used a CRT in the receiving end of an experimental video signal to form a picture. He managed to display simple geometric shapes onto the screen, which marked the first time that CRT technology was used for what is now known as television. The first cathode ray tube to use a hot cathode was developed by John B. Johnson and Harry Weiner Weinhart of Western Electric. The term kinescope was coined by inventor Vladimir K. Zworykin in 1929; RCA was granted a trademark for the term in 1932 and released it to the public domain in 1950. The first commercially made electronic television sets with cathode ray tubes were manufactured by Telefunken in Germany in 1934. In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with television and other large CRTs.
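The sketch below is a minimal, hedged illustration of the raster pattern described above: the beam visits every position line by line, left to right and top to bottom, with its intensity set by the video signal at each instant. The frame dimensions and the stand-in signal function are hypothetical.

```python
# Sketch of raster scanning: visit each position row by row, left to right,
# setting the beam intensity from the video signal at that instant.
WIDTH, HEIGHT = 8, 4                      # hypothetical, tiny "screen"

def video_signal(x, y):
    """Stand-in for the video signal: brightness 0..255 at position (x, y)."""
    return (x * 32 + y * 16) % 256

frame = [[0] * WIDTH for _ in range(HEIGHT)]
for y in range(HEIGHT):                    # one horizontal scan line after another
    for x in range(WIDTH):                 # beam sweeps left to right
        frame[y][x] = video_signal(x, y)   # beam intensity modulated by the signal
    # horizontal retrace would occur here; vertical retrace after the last line

for row in frame:
    print(row)
```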

6.
Image scanner
–
An image scanner optically scans images, printed text, handwriting, or objects and converts them to digital images. Commonly used in offices are variations of the desktop flatbed scanner, where the document is placed on a glass window for scanning. Mechanically driven scanners that move the document are typically used for large-format documents. A rotary scanner, used for high-speed document scanning, is a type of drum scanner that uses a CCD array instead of a photomultiplier. Non-contact planetary scanners essentially photograph delicate books and documents. All these scanners produce two-dimensional images of subjects that are usually flat but sometimes solid; 3D scanners produce information on the three-dimensional structure of solid objects. Digital cameras can be used for the same purposes as dedicated scanners, but when compared to a true scanner, a camera image is subject to a degree of distortion, reflections, shadows, low contrast, and blur due to camera shake. Resolution is sufficient for less demanding applications, and digital cameras offer advantages of speed, portability, and non-contact digitizing of thick documents without damaging the book spine. As of 2010, scanning technologies were combining 3D scanners with digital cameras to create full-color, photo-realistic 3D models of objects. In the biomedical research area, detection devices for DNA microarrays are called scanners as well; these scanners are high-resolution systems, similar to microscopes, in which the detection is done via a CCD or a photomultiplier tube.

Modern scanners are considered the successors of early telephotography and fax input devices. An early such device, the pantelegraph, used electromagnets to drive and synchronize the movement of pendulums at the source and the distant location to scan and reproduce images; it could transmit handwriting, signatures, or drawings within an area of up to 150 x 100 mm. Édouard Belin's Belinograph of 1913, which scanned using a photocell and transmitted over ordinary phone lines, formed the basis for the AT&T Wirephoto service. In Europe, services similar to a wirephoto were called a Belino. The equipment was used by news agencies from the 1920s to the mid-1990s and consisted of a rotating drum with a single photodetector turning at a standard speed of 60 or 120 rpm; it sent a linear analog AM signal through standard telephone lines to receptors. Color photos were sent as three separate RGB-filtered images consecutively, but only for special events due to transmission costs.

Drum scanners capture image information with photomultiplier tubes (PMTs), rather than the charge-coupled device arrays found in flatbed scanners and inexpensive film scanners. Modern color drum scanners use three matched PMTs, which read red, blue, and green light, respectively; light from the original artwork is split into separate red, blue, and green beams, as sketched below. Photomultipliers offer superior dynamic range, and for this reason drum scanners can extract more detail from very dark areas of a transparency than flatbed scanners using CCD sensors. The smaller dynamic range of CCD sensors, versus photomultiplier tubes, can lead to loss of shadow detail. While mechanics vary by manufacturer, most drum scanners pass light from halogen lamps through a focusing system to illuminate both reflective and transmissive originals. The drum scanner gets its name from the clear acrylic cylinder on which the original is mounted; depending on size, it is possible to mount originals up to about 20 x 28 inches, but maximum size varies by manufacturer. One of the features of drum scanners is the ability to control the sample area.
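As a small, hedged sketch of the three-channel capture idea mentioned above (separately filtered red, green, and blue readings combined into one color image), the NumPy snippet below stacks three single-channel arrays into an RGB image; the channel values are illustrative placeholders, not real scan data.

```python
import numpy as np

# Three separately captured, filtered channels (tiny illustrative 2x2 "scans").
red   = np.array([[0.9, 0.2], [0.1, 0.8]])
green = np.array([[0.3, 0.7], [0.5, 0.6]])
blue  = np.array([[0.1, 0.4], [0.9, 0.2]])

# Stacking the three single-channel captures yields one RGB image, mirroring
# how a drum scanner reads red, green, and blue with separate PMTs.
rgb_image = np.stack([red, green, blue], axis=-1)   # shape (2, 2, 3)
print(rgb_image.shape)
```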

7.
Software
–
Computer software, or simply software, is that part of a computer system that consists of data or computer instructions, in contrast to the physical hardware from which the system is built. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other, and neither can be used on its own.

At the lowest level, executable code consists of machine language instructions specific to an individual processor, typically a central processing unit. A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer (an effect that is not directly observable to the user), or it may cause something to appear on a display of the computer system (a state change which should be visible to the user). The processor carries out the instructions in the order they are provided, unless it is instructed to jump to a different instruction. The majority of software is written in high-level programming languages that are easier and more efficient for programmers because they are closer to natural language. High-level languages are translated into machine language using a compiler or an interpreter, or a combination of the two; a small illustration of this translation step is shown below.

An outline for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine; however, neither the Analytical Engine nor any software for it were ever created. The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem. This eventually led to the creation of the academic fields of computer science and software engineering; computer science is more theoretical, whereas software engineering focuses on practical concerns. However, prior to 1946, software as we now understand it (programs stored in the memory of stored-program digital computers) did not yet exist; the first electronic computing devices were instead rewired in order to reprogram them.

On virtually all platforms, software can be grouped into a few broad categories, and there are many different types of software, because the range of tasks that can be performed with a modern computer is so large (see list of software). System software includes operating systems, device drivers, and utilities. Operating systems are collections of software that manage resources and provide common services for other software that runs on top of them; supervisory programs, boot loaders, shells and window systems are parts of operating systems. In practice, an operating system comes bundled with additional software so that a user can potentially do some work with a computer that only has an operating system. Device drivers operate or control a particular type of device that is attached to a computer. Utilities are computer programs designed to assist users in the maintenance and care of their computers.
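As a small illustration of the translation step described above: CPython compiles Python source into bytecode for its own virtual machine rather than into native machine language, so the listing produced below is an analogy for compilation rather than actual processor instructions.

```python
import dis

def add_one(x):
    return x + 1          # a single high-level statement

# CPython translates the function's source into lower-level bytecode
# instructions for its virtual machine; dis prints that translation,
# analogous to (but not the same as) native machine language.
dis.dis(add_one)
```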

8.
Interpolation
–
In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. One often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable; it is often required to interpolate, i.e., estimate, the value of that function for an intermediate value of the independent variable. A different but related problem is the approximation of a complicated function by a simple function: suppose the formula for some given function is known but is too complex to evaluate efficiently; a few known data points from the original function can then be used to create an interpolation based on a simpler function. If x is instead considered as a point of a topological space and the function takes values in a Banach space, the problem is treated as interpolation of operators; the classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem, and there are also many other subsequent results.

For example, suppose we have a table giving some values of an unknown function f at seven points x = 0, 1, ..., 6, including f(2) = 0.9093 and f(3) = 0.1411. Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5. There are many different interpolation methods, some of which are described below; some of the concerns to take into account when choosing an appropriate algorithm are how accurate the method is, how expensive it is, how smooth the interpolant is, and how many data points are needed. The simplest interpolation method is to locate the nearest data value and assign the same value (nearest-neighbor interpolation). One of the simplest methods beyond this is linear interpolation. Consider the above example of estimating f(2.5): since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252. Linear interpolation is quick and easy, but it is not very precise; another disadvantage is that the interpolant is not differentiable at the data points xk. The following error estimate shows that linear interpolation is not very precise. Denote the function we want to interpolate by g, suppose that x lies between xa and xb, and let f be the linear interpolant; if g is twice continuously differentiable, the linear interpolation error is |f(x) − g(x)| ≤ C (xb − xa)², where C = (1/8) max |g″(r)| over r in [xa, xb]. In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation, is proportional to higher powers of the distance between the data points, and these methods also produce smoother interpolants.

Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function; we now replace this interpolant with a polynomial of higher degree. Consider again the problem given above: a sixth-degree polynomial goes through all seven points, and substituting x = 2.5 into it we find that f(2.5) ≈ 0.5965. Generally, if we have n points, there is exactly one polynomial of degree at most n−1 going through all the data points. Both estimates of f(2.5) are reproduced in the sketch below.
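The brief Python sketch below reproduces the two estimates of f(2.5) discussed above. Since the full table is not reproduced in this text, the seven sample values are assumed to be 0, 0.8415, 0.9093, 0.1411, −0.7568, −0.9589, −0.2794 for x = 0, ..., 6, which is consistent with the quoted f(2), f(3), and the result 0.5965.

```python
import numpy as np

# Assumed tabulated samples of the unknown function f at x = 0..6 (see lead-in).
x = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

# Linear interpolation: f(2.5) is taken midway between f(2) and f(3).
linear_estimate = np.interp(2.5, x, y)        # -> 0.5252

# Polynomial interpolation: the unique degree-6 polynomial through all 7 points.
coeffs = np.polyfit(x, y, deg=6)
poly_estimate = np.polyval(coeffs, 2.5)       # -> approximately 0.5965

print(linear_estimate, poly_estimate)
```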

9.
Digital camera
–
A digital camera or digicam is a camera that produces digital images that can be stored in a computer, displayed on a screen, and printed. Most cameras sold today are digital, and digital cameras are incorporated into many devices, ranging from PDAs and mobile phones to vehicles. Digital and movie cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device; the diaphragm and shutter admit the correct amount of light to the imager, just as with film. However, unlike film cameras, digital cameras can display images on a screen immediately after being recorded, and can store and delete images from memory. Many digital cameras can also record moving videos with sound, and some can crop and stitch pictures and perform other elementary image editing.

The history of the digital camera began with Eugene F. Lally of the Jet Propulsion Laboratory. His 1961 idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. Unfortunately, as with Texas Instruments employee Willis Adcock's filmless camera of 1972, the technology had yet to catch up with the concept. Steven Sasson, an engineer at Eastman Kodak, invented and built the first electronic camera using a charge-coupled device image sensor in 1975; earlier electronic cameras used a camera tube, while later ones digitized the signal. Early uses were military and scientific, followed by medical. In the mid to late 1990s digital cameras became common among consumers, and by the mid-2000s digital cameras had largely replaced film cameras, with higher-end cell phones having an integrated digital camera. By the beginning of the 2010s, almost all smartphones had a digital camera.

The two major types of image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier; compared to CCDs, CMOS sensors use less power. Cameras with a small sensor use a back-side-illuminated CMOS sensor, but overall final image quality is more dependent on the image processing capability of the camera than on the sensor type. The resolution of a camera is often limited by the image sensor that turns light into discrete signals: the brighter the image at a given point on the sensor, the larger the value that is read for that pixel. Depending on the structure of the sensor, a color filter array may be used. The number of pixels in the sensor determines the camera's pixel count; in a typical sensor, the pixel count is the product of the number of rows and the number of columns, so, for example, a 1,000 by 1,000 pixel sensor would have 1,000,000 pixels (see the sketch below). The final quality of an image depends on all the optical transformations in the chain of producing the image.
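The short sketch below just restates the pixel-count arithmetic from the text (rows times columns, expressed in megapixels), using the 1,000 by 1,000 example given above.

```python
# Pixel count of a sensor is rows x columns (example figures from the text).
rows, columns = 1_000, 1_000
pixel_count = rows * columns              # 1,000,000 pixels
megapixels = pixel_count / 1_000_000      # 1.0 megapixel

print(f"{pixel_count:,} pixels = {megapixels:.1f} megapixels")
```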

10.
Charge-coupled device
–
A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example by conversion into a digital value. This is achieved by shifting the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins; in recent years the CCD has become a major technology for digital imaging. In a CCD image sensor, pixels are represented by p-doped metal-oxide-semiconductor (MOS) capacitors.

The charge-coupled device was invented in 1969 at AT&T Bell Labs by Willard Boyle and George E. Smith. The lab was working on bubble memory when Boyle and Smith conceived the design of what they termed, in their notebook, charge "bubble" devices. The device could be used as a shift register; the essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device, which was developed at Philips Research Labs during the late 1960s. The first patent on the application of CCDs to imaging was assigned to Michael Tompsett. The initial paper describing the concept listed possible uses as a memory, a delay line, and an imaging device. The first experimental device demonstrating the principle was a row of closely spaced metal squares on a silicon surface electrically accessed by wire bonds. The first working CCD made with integrated circuit technology was a simple 8-bit shift register; this device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight-pixel linear imaging device. Development of the device progressed at a rapid rate, and by 1971 Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 it had a linear 500-element device and a 2-D 100 x 100 pixel device. Steven Sasson, an engineer working for Kodak, invented the first digital still camera using a Fairchild 100 x 100 CCD in 1975. The first KH-11 KENNAN reconnaissance satellite equipped with charge-coupled device array technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony also started a large development effort on CCDs involving a significant investment, and eventually managed to mass-produce CCDs for its camcorders. Before this happened, Iwama died in August 1982; subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution. Tompsett was awarded the 2012 IEEE Edison Medal for pioneering contributions to imaging devices, including CCD imagers and cameras.

In a CCD for capturing images, there is a photoactive region and a transmission region made out of a shift register. An image is projected through a lens onto the capacitor array in the photoactive region, causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor. The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage; by repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages (see the readout sketch below).
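The following minimal sketch mimics the readout process described above: every bin passes its contents one place toward the output, and the last bin's charge is read out at each step. The charge values are illustrative numbers, not measured data.

```python
# Simulate CCD-style serial readout: shift every bin's charge one place toward
# the output and read the last bin's charge at each step (values illustrative).
def read_out(bins):
    voltages = []
    bins = list(bins)
    for _ in range(len(bins)):
        voltages.append(bins[-1])   # last capacitor dumps its charge into the amplifier
        bins = [0] + bins[:-1]      # every remaining packet shifts one bin toward the output
    return voltages

row_of_charge = [5, 12, 7, 30]      # charge collected by four pixels during exposure
print(read_out(row_of_charge))      # -> [30, 7, 12, 5]: the serial sequence of "voltages"
```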

1.
Ada Lovelace is credited with writing the first algorithm intended for processing on a computer.

2.
Charles Babbage is credited with inventing the first mechanical computer.

3.
The German military used the Enigma machine (shown here) during World War II for communications they wanted kept secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.

1.
Airy diffraction patterns generated by light from two points passing through a circular aperture, such as the pupil of the eye. Points far apart (top) or meeting the Rayleigh criterion (middle) can be distinguished. Points closer than the Rayleigh criterion (bottom) are difficult to distinguish.

3.
Earliest known surviving heliographic engraving, 1825, printed from a metal plate made by Nicéphore Niépce. The plate was exposed under an ordinary engraving, copying it by photographic means. This was a step towards the first permanent photograph taken with a camera.

1.
How focal length affects perspective: varying focal lengths at identical field size, achieved by different camera-subject distances. Notice that as the focal length becomes shorter and the angle of view larger, perspective distortion and size differences increase.

2.
A camera's angle of view can be measured horizontally, vertically, or diagonally.

1.
The area within the depth of field appears sharp, while the areas beyond the depth of field appear blurry.

2.
A macro photograph with very shallow depth of field

3.
A macro photograph of a Nokia 101 phone with extremely shallow depth of field of only a few millimeters and a strong bokeh effect.

4.
A 35 mm lens set to f/11. The depth-of-field scale (top) indicates that a subject which is anywhere between 1 and 2 meters in front of the camera will be rendered acceptably sharp. If the aperture were set to f/22 instead, everything from just over 0.7 meters almost to infinity would appear to be in focus.

1.
Coarse bokeh on a photo shot with an 85 mm lens and 70 mm entrance pupil diameter, which corresponds to f/1.2

2.
An extremely shallow depth of field, a common effect in macrophotography, emphasizes bokeh.

3.
200 mm lens at f/2.

4.
An example of the bokeh produced by the Canon 85 mm prime f/1.8 lens. The polygonal shapes are due to the 8-bladed aperture diaphragm being slightly closed. At its full aperture (f/1.8) these shapes would be smooth and not polygonal.

1.
Photomacrograph of a common yellow dung fly (Scathophaga stercoraria) made using a lens at its maximum 1:1 reproduction ratio and an 18×24 mm image sensor; the on-screen display of the photograph results in a greater-than-life-size image.