www.scantips.com

Pixels, Printers, Video: What's With That?

These basics are pretty much about a single issue: How do I use my image? How do I make it the proper size for viewing, for printing, or for the video monitor? All this is really quite easy, but digital may just be a new concept. It is like learning to drive - once you learn an easy thing or two, it's a skill helpful for life. When you know, you will simply just know.
But yes, it does seem that we could subtitle this: Details that no beginner wants to know. However, the point is: you'll never grasp digital images until you get it ... until you know what digital images are, what to do with them, and how to do it.

Seriously, once we accept that pixels actually exist, then all this stuff is rather easy. It's about pixels.

This page tries to be a quick summary of the digital concepts, about how things work. The answer to virtually any question about image size starts with one of these basics. To be able to use digital images well, we need this understanding. This may be written a little like an argument, refuting the incorrect myths we may have heard about how digital works. The concepts below are instead what you need to know to use digital images properly. It is actually rather easy to grasp, if you get started right.

The Most Fundamental Digital Concepts

The size of an image might be, for example, 4000x3000 pixels. That is 4000x3000 = 12 megapixels. Or, 4288x2848 is also 12 megapixels (rounded off). We tend to think of this as the "resolution" of the image. The pixels do indicate the "fineness" of the smallest possible digital detail (a pixel, which is a dot of one color).
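Just to show that arithmetic, here is a minimal sketch in Python (the function name is my own):

```python
def megapixels(width, height):
    """Image size in megapixels, from its dimensions in pixels."""
    return width * height / 1_000_000

print(megapixels(4000, 3000))            # 12.0
print(round(megapixels(4288, 2848), 1))  # 12.2, "12 megapixels" rounded off
```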
This example is borrowed from the image Resize page, to show the idea about pixels.

400x500 pixels, 0.2 megapixels

The concept of pixels: This is an enlarged view of a tiny 58x58 pixel area of this picture (at arrow near center), shown at 800% size to be able to see the pixels.

Brief Digital Concepts

Pixels are how digital reproduces a scene and its colors. The digital camera merely takes many color samples (each is a pixel) of many very tiny areas, in the way shown here. Film uses tiny specks of silver or emulsion dyes instead of pixels (those are not digital numbers), but film does the same sampling idea (colors of many tiny areas). Film areas actually show the color, which we can see. However, digital is totally about pixels, which are numbers representing the color. For example, the reddest orchids above have RGB components of about RGB(220, 6, 136), each on a scale of [0..255], so red is bright, green is weak, and blue is about mid-range. This color describes that shade of bluish red in one tiny area, a pixel. We don't have to know much detail, but more is at Wikipedia about the RGB color system.
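That "bright, weak, mid-range" reading of RGB(220, 6, 136) can be sketched in a few lines of Python (the function name and the verbal cutoff values are my own arbitrary choices, only for illustration):

```python
def channel_level(value):
    """Rough verbal description of one 0..255 RGB channel value.
    The cutoffs here are arbitrary thirds of the scale."""
    return "bright" if value > 170 else "mid-range" if value > 85 else "weak"

pixel = (220, 6, 136)  # RGB of the reddest orchid in the example above
for name, value in zip(("red", "green", "blue"), pixel):
    print(name, value, channel_level(value))
# red 220 bright
# green 6 weak
# blue 136 mid-range
```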

The main concept of digital is that each pixel is just NUMBERS, binary data describing ONE RGB COLOR for one tiny area, a tiny dot of color, much like one small colored tile in a mosaic tile picture. The numeric concept may be new today (called digital), but the tile concept is 5000 years old. Our brain recognizes the reproduced image in those pixels or tiles. But enlarge these enough, and all you will see is the pixels (or the individual tiles). Pixels are all there is in a digital image, and we must think of it that way. Ignoring them means you will not grasp the concept. Digital will make sense when you do think of pixels.

Pixels are real, they exist, in fact, pixels are ALL that exist in digital images. There is nothing else in a digital image. We don't need to see the individual pixels, but the image Size dimension in Pixels is the First Thing To Know about using any digital image, because this size in pixels is what is important for any use. The size of a digital image is dimensioned in pixels.

FWIW, we see some fanciful things in movies, where tremendously enlarging photo prints provides clues to solve crimes. The resolution decreases as the size increases, so it really does not work that well to that degree (enlarging film is much better than enlarging prints). Enlarging digital excessively only shows pixels.

Human eyes have rods and cones which are a similar sampling system of tiny areas. Cones are color sensitive, with red, green or blue cones. Sampling the color of tiny areas is not unlike pixels in that way. The color difference of adjacent areas is how image detail is perceived. We see a black power wire running across a blue sky because the colors are different. Color difference is the detail that we perceive (including slightest tonal shades of same color). In our digital pictures, a pixel is the smallest dot of color that can be reproduced, so we do think of more and smaller pixels as greater resolution of detail.

However, digital reproduction is a "copy" of an image. We should also realize that it is the camera lens that creates the image that we will reproduce digitally, and pixels are the detail of reproducing the lens image. For example, in a DX cropped sensor camera, the original is the image from the lens projected onto the 24x16 mm DX digital camera sensor. The image has this 24x16 mm size there, comparable to the size of an APS-C film image. Then, the camera pixels merely digitally sample that lens image (very much like any scanner samples an image, meaning taking many color samples called pixels) to try to digitally reproduce (convert to numbers) the image that the lens created. A pixel is just numbers, three binary RGB numbers representing the red, green and blue components of the color of the area of that pixel. The pixels do NOT create the image, and cannot improve the lens image detail. The pixel sampling merely strives to reproduce its detail. At best, it can hopefully be a very good reproduction. A 24 megapixel DX image and a 24 megapixel FX image are NOT equal, because the FX image is simply half again larger (36x24 mm), and so does not have to be enlarged as much to show it.

Essentials to Know about Using Digital Images

Number One Basic Fact: The Size of a digital image is dimensioned in pixels. (Not in bytes, not in inches)

Any given digital image can have one of several sizes in bytes due to variable compression. It can have several sizes in inches or cm due to variable scaling when printing. But it has only one size in pixels, which defines how it might be used.

A pixel is just numbers that represent a color, specifically, the three RGB numbers of a color specification - which represent the average color that was sampled from this tiny dot of image area. When the image is viewed or printed, each little dot of image area is shown as that corresponding color. In that way, digital images are a little like mosaic tile pictures (but in an ordered grid pattern). Each little dot is one color, and our human brain puts them together to recognize an image in all those colored dots. If it is an ordinary standard 24-bit RGB image (like JPG), the pixel data is one byte for each of the Red, Green, Blue components of the pixel, which is three bytes per pixel. So if 12 megapixels, then x3 is 36 million bytes of data (assuming the standard 24 bits). That is simply the actual data size of any 12 megapixel RGB image data, although you may see it compressed much smaller while it is in a JPG file (JPG file size is much smaller than the image data size, via JPG compression). But when that file is opened, it is full size again in computer memory, three bytes per pixel (24 bits). For other than 24-bit, and for the special interpretation of "megabytes", see more detail, and also a calculator to convert bytes, KB, MB, GB, and TB.

The size of that image data when opened in memory is in bytes of memory. A 24-bit RGB image (8-bit color channels) is always three bytes of RGB data per pixel. So bytes are the "data size", but "image size" is always in pixels. Whereas, inches only refer to the paper where these pixels will be printed.
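That data-size arithmetic is easy to sketch (pure Python; the function name is mine, and the 1024-based "binary MB" convention is the one the article's 34.3 MB figure uses):

```python
def data_size_bytes(width, height, bytes_per_pixel=3):
    """Uncompressed data size of a 24-bit RGB image, 3 bytes per pixel."""
    return width * height * bytes_per_pixel

size = data_size_bytes(4000, 3000)
print(size)                      # 36000000 bytes
print(round(size / 1024**2, 1))  # 34.3 "binary" megabytes
```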

A JPG file is compressed to be maybe 1/10 of this data size (roughly; it can be very variable) while in the JPG file, but 12 megapixels opens again to 36 million bytes in memory. JPG uses lossy compression, which means we can specify High JPG Quality for a larger, better file, or Low JPG Quality for a smaller, worse file (when and if file size is more important than image quality). See JPG.

Opinion here, a little outside of Basics... But of course, we don't have to always use JPG files. We can get additional JPG losses (quality losses, i.e., artifacts) every time we SAVE as JPG again (Save is when JPG compression is done, creating the JPG). So repetitive editing and saving of JPG is a no-no. If you don't do editing, cropping or resampling (needing saving again), it may not matter. If you do, then we can simply not use JPG. Cameras that offer Raw output offer a much greater range of adjustment out of the camera, a substantial advantage anyway. And then TIF and PNG files are both lossless for in-work Saves; these are of course much larger files (closer to actual data size), but there's never any question about maximum quality (see File Formats) - and work files are often temporary, if even necessary. The single last Save often does need to be JPG for the final purpose, but we can work it so that this one last time is the only one, and it can be High JPG Quality. That makes two times saved as JPG (including the original from the camera), which is better than six times. Then should any subsequent need for additional edits ever arise, this JPG can be discarded (it is expendable), and we start over from our archived master version (which is normally the Raw master for me; it already contains my tonal edits). Just saying, file type is a choice, depending on your need for highest quality.

Image Size and Data Size Calculator

(Interactive calculator on the web page here: enter Image Width and Height in pixels, a Data Type, and an "If Printed at" pixels-per-inch value. It requires JavaScript enabled in the browser.)

This calculator tries to make the point that images involve four different sizes, used for different purposes. The numbers used to describe the actual size of the image are width x height, in pixels.

Data size is the uncompressed data - how large your uncompressed image data actually is - normally 3 bytes per pixel (usual RGB, for example JPG files). Compressed File Size in bytes is the least useful number, only of interest for internet transfer or memory card capacity. But pixels are the important number which determines how an image can be used.

The compressed file size will be smaller (variable cases, but JPG will be much smaller, file perhaps only 10% or 15% of data size).

Exif data will be added, and a few formatting bytes. Camera Raw image files also contain a large JPG image (this JPG is shown on the camera rear LCD, and it provides the histogram too).

Regarding color bit depth, many inexpensive LCD monitors have used only 6 bits (18-bit color). For photo work, look for the better monitors that specify 24-bit color. Good IPS monitors are becoming inexpensive now (I've been really pleased with a low priced Dell IPS monitor).

Repeating, to be sure it is clear that images have four very different "sizes", of interest in different situations.

If someone tells us they are sending us a 12 megabyte file, that tells us maybe the internet load, or how it will fill our disk, but bytes tells us nothing about the image, or about the image size, or about how we might use it. Bytes can involve data compression, another variable. Images are dimensioned in pixels.

For example, if about a 12 megapixel image:

Image Size is dimensioned in pixels, like 4000 x 3000 pixels. 4000x3000 = 12 million pixels, or 12 megapixels. This is the very first thing to know about an image, because this size in pixels (the dimensions in pixels) is the important factor for any use. Pixels is what we have, and pixels is all that we have. You must think about pixels (meaning, we must know and consider the image dimension in pixels).

Data Size is in bytes, three bytes of RGB data per pixel (if normal 24-bit color images, like JPG). So for example, 12 megapixels x 3 = 36 million bytes (about 34.3 MB), when uncompressed. Three bytes per pixel is TRUE of any common color image you might store in a JPG file (and of other 24 bit cases too). This is simply how large your RGB data is - three bytes per pixel - or three million bytes per megapixel (assuming normal 24 bit RGB data). The binary data is always this size when opened into computer memory.

A few specifics about Data Size: (See formats and megabytes, or a megapixel converter). Bytes are 8-bit numbers, with values ranging from 0 to 255, because 2 to the power of 8 is 256, the count of values (0..255) that can be stored in 8 bits. Larger numbers require multiple bytes.

Grayscale 8-bit images are one byte of gray per pixel, values 0 to 255. Speaking of this tiny pixel area of the image, 0 is as dark as it can be (black), and 255 is as bright as it can be (white).

JPG color is always three 8-bit bytes per color pixel, one 8-bit number for each of the red, green, blue channels, which totals 24 bits. (This is often called 8 bit color, but should be called 24 bit "color" in an 8 bit file). In each of the three RGB channels, again 0 is as dark as it can be, and 255 is as bright as it can be (very bright red or green or blue). 8-bit numbers can range from 0 to 255, so there are 256 possible shades (of each of red and green and blue). All of the combinations of these three can produce 256 to the power of 3 = 16,777,216 possible colors.

There are 16 bit formats, which use 16-bit numbers for each of the R, G, B values, or six bytes per color pixel (2 to the power of 16 = 65536 possible values in each channel, and 65536 to the power of 3 = about 281 trillion possible colors). Typically TIF or PNG files can store either 8 or 16 bit color. JPG is 24 bit only. And our monitors and printers only reproduce color as 24 bits.
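Those counts follow directly from powers of two. A quick sketch (function names are mine):

```python
def shades(bits):
    """Possible values per R, G or B channel at a given bit depth."""
    return 2 ** bits

def colors(bits):
    """All RGB combinations: shades cubed."""
    return shades(bits) ** 3

print(shades(8), colors(8))    # 256 16777216 (24-bit color)
print(shades(16), colors(16))  # 65536 281474976710656 (about 281 trillion)
```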

Raw file pixels are not three RGB colors, but are only one of those colors per pixel. Raw is commonly 12 bits per pixel, or 1.5 bytes per pixel (uncompressed). Some Raw can be 14 bits, or 1.75 bytes per pixel. But Raw images cannot be viewed until we convert them to standard RGB images, which are then three bytes per pixel. So FWIW, since Raw cannot be viewed on RGB LCD screens, the Raw image you think you see on the camera rear LCD is instead an embedded JPG RGB copy created just to show there. The camera histogram is also created from this embedded JPG RGB copy. This JPG copy has the camera settings applied to it, but the raw data does not. We see raw images in the computer only after raw software converts them to normal RGB images (and then we apply our own "settings" to it there, after we can see it). The finished JPG image we make from the raw image will be normal 24 bit color (3 bytes per pixel uncompressed, but JPG is compressed drastically). For advantages of raw, see Shooting Raw.
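The raw data sizes mentioned there (one value per pixel, 12 or 14 bits) work out like this (a sketch; the function name is mine, and it ignores any raw compression and the embedded JPG):

```python
def raw_data_bytes(megapixels, bits_per_pixel):
    """Uncompressed raw data size: one sampled value (not three) per pixel."""
    return megapixels * 1_000_000 * bits_per_pixel / 8

print(raw_data_bytes(12, 12))  # 18000000.0 bytes (1.5 bytes per pixel)
print(raw_data_bytes(12, 14))  # 21000000.0 bytes (1.75 bytes per pixel)
```

Compare that to the 36 million bytes the same 12 megapixels occupy once converted to three-byte RGB pixels.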

Data size is image pixels only, and does not include the Exif data.

File Size is relatively uninteresting. Disk storage or internet transmission may have concerns, but file size is just overhead, not descriptive of the image. Data size is simply how large your image data is. Then file size varies considerably with data compression. Uncompressed files are as large as Data size, plus some bytes added for Exif. But file size is usually compressed to some degree, to be a file smaller than Data Size.

The data in JPG files especially is compressed dramatically smaller, in variable degree, typically to perhaps only 1/4 to 1/12 of Data size, but too much JPG compression can reduce image quality. The JPG file size varies widely with the JPG Quality setting. High JPG Quality is a larger file but a better image, and Low JPG Quality is a smaller file but a worse image (but who wants lower quality?) The JPG Quality number is a better quality guide than the file megabyte size. We should always favor a larger JPG file size, because smaller is counter-productive to quality. For the file to be so small, JPG is lossy compression, meaning liberties are taken, so that recovery is not perfect, and image quality can be reduced. We still get the same megapixel count back out, but the pixels you wrote into the JPG file are not necessarily quite the same (color of) pixels you see when opened to retrieve them (see JPG Artifacts). A pixel is only the color definition of a tiny spot of area, so a JPG artifact is a pattern of changed colors. Color difference is the detail we can detect and observe.

The camera menu JPG choices affecting file size are:

Image Size: Large, Medium, Small - This is the image size, the number of pixels. (Raw is always Large). It affects Data Size. More pixels is a larger file. Large is the default, and will leave you ready for any eventual use.

However, an image for a large video monitor or HDTV, or for a 4x6 inch print, needs only about 2 megapixels. If these are our only goals, and if we do want a smaller file, then for best image quality, I suggest that Small Fine is a greatly better choice than Large Basic (but Small won't print 8x10 inches as well, nor will it allow as much cropping).

The terms "Normal" and "Basic" are arguable choices; more compression is the opposite of best image quality, and Fine is the better default (why would we want less quality?) Lossless compression (choices other than JPG) is less effective at reducing file size, because lossless has to promise to preserve and deliver the full quality of the image (no heroic shortcuts, no quality losses). Notice that lossless compression can still be impressively small, but maybe not incredibly small. The Windows file Explorer "Properties" will show file size in MB and in bytes.

The RGB image Data size is always the X by Y pixel dimensions times 3 bytes per pixel, which is simply how large your data is (for JPG and other 24 bit images). But the compressed file size varies somewhat with the individual image content in the scene (much fine detail is larger, large blank areas compress smaller). If you have a couple hundred camera JPGs in one disk folder (if all are the same size settings from the same camera, but are very varied image content), and click to sort them by file size, the largest (most detailed) JPG is probably about 2x larger than the smallest (least detailed), with the average size more in the middle.

Print Size - the size of the paper (in inches) which these pixels will cover when printed - 3000 pixels printed at 300 pixels per inch will cover 10 inches of paper - or the same 3000 pixels will cover 3000/200 = 15 inches printed at 200 pixels per inch. Dpi is not a property of the image. Dpi is an arbitrary separate number with which we declare our printed image size choice when we print (declaring the printed size choice is called scaling, below).
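That print-size division is the whole trick, shown here as a sketch (the function name is mine):

```python
def print_size_inches(pixels, dpi):
    """Inches of paper that a row of pixels covers at a given pixels-per-inch."""
    return pixels / dpi

print(print_size_inches(3000, 300))  # 10.0 inches
print(print_size_inches(3000, 200))  # 15.0 inches, same pixels spaced wider
```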

Make no mistake though, Image size is dimensioned in pixels. It is always all about pixels. Digital cameras create pixels. Inches are only about the specific piece of paper. Bytes are only about memory. Pixels are about the image.

Continuing now with the list of Essentials to Know to USE images. This is the part that confuses people (about dpi), but it is pretty simple, and this should clarify.

Printers and Video screens are extremely different devices with respect to basic concepts.

This is a very big deal. Printers print on paper which is dimensioned in inches, but video screens are instead dimensioned in pixels (there is no concept of inches in video systems). This difference gets our attention. These devices do NOT work alike. They both show the same pixels in their way, but the basic concepts are quite different. Printers space the pixels on paper, at perhaps 300 pixels per inch of paper. Video monitor screens show the image pixels directly one for one on the monitor pixels.

Video screens show pixels directly.

When I say Video, I don't mean movies, instead I mean the monitor viewing screen, computer or TV. The video screen size is dimensioned in pixels, and the image is dimensioned in pixels, and the pixels are simply shown directly - without any concept of dpi. The video screen simply shows pixels one for one - one image pixel on one monitor pixel. So for example (one pixel of image on one pixel of screen), an image 800 pixels wide will fill exactly half the width of a 1600 pixel screen width.
People telling you the image needs to say 72 dpi for the screen or web are simply just wrong. Video shows pixels, with no concept of inches or dpi. On video screens, it does not matter at all what the dpi number is. The screen shows pixels directly.

When we show a big image, larger than our viewing screen (both are dimensioned in pixels), our viewing software normally instead shows us a temporary, quickly resampled copy, small enough to fit on the screen so we can see it, for example perhaps 1/4 actual size (this fraction is normally indicated to us, so we know it is not the full size real data). We still see the pixels of that smaller image presented directly, one for one, on the screen, which is the only way video can show images. When we edit it, we change the corresponding pixels in the original large image data, but we still see a new smaller resampled copy of those changes.

Dpi and inches are unknown concepts (not used) in video systems, or in digital cameras.

Digital images are dimensioned in pixels.

The video screen is dimensioned in pixels. Our computer video board is configured to show XxY pixels.

The video screen simply shows the image pixels directly, one for one
(or maybe shows a smaller resampled temporary copy to fit on the screen, but then still shows one new resampled image pixel on one screen pixel).

The digital camera sensor is dimensioned in pixels.

The camera sensor size creates the image to be its same size, dimensioned in pixels.

None of the above involves inches or dpi in any way. Only paper prints involve inches. Because paper is dimensioned in inches.

But images and video screens are dimensioned in pixels. So on paper, the concept of dpi (pixels per inch) is used to scale the image size (pixels) to the paper size (inches).

The dpi value shown in camera images is just some clutter in the file header, merely a separate arbitrary number which has not affected the pixels in the image file in any way. Dpi is only for printing, or for scanning. The scanner does assign the scaled dpi number you choose when scanning, so that has meaning, it will print that size. But the camera just assigns some meaningless arbitrary dpi number to the image file (print size might indicate a few feet). Of course, it has no clue what size you might choose to print it later, if you even decide to print it. Otherwise, it simply does not matter what this dpi number is, it has no use, not until the time you actually print it on paper, when you will decide an appropriate value (see Scaling below).

There is no concept of inches or dpi used in the video system. It doesn't matter if the monitor is a 12 inch screen or a 72 inch HDTV screen, if it is set to show 1920x1080 pixels, it will show 1920x1080 pixels (about 2 megapixels). Both monitor sizes show the SAME 1920x1080 image pixels, just at different sizes on the two physical screens. You might think you are showing your image to be, say, 8 inches wide on your computer monitor, but it probably will show at a different size on some other monitor of different size or different resolution setting. In our photo editor, we would see whatever size the image actually is (in pixels), but large images are normally resampled to show a copy that fits on the screen. We don't all see the same size in video, it depends on the screen size (both pixels and inches). Especially for web images, the site has no clue what monitor might view it. Yes, all of our 8x10 inch paper is the same size, but there is no concept of inches or dpi in any video system. Video shows pixels, directly. Really pretty simple (but different).

Printers do use the dpi number (printing resolution of pixels) to space the image pixels on paper (paper is dimensioned in inches - paper is where inches exist). Pixels are just a color definition of a dot or spot, dimensionless, with no size specified. But if we specify that the pixels are to be printed at 300 pixels per inch, then technically, each pixel ought to come out 1/300 inch size on paper. The printer ink tries to reproduce the pixel colors at whatever "pixels per inch" spacing specified (dpi was just a stored number in the image file, now only an instruction for printing). Which is a tough job, but inkjets have clever ways to simulate that color (millions of possible colors) with several dots of only four or six colors of ink. It is difficult to cram several ink drops into that small pixel's space, so there will be approximations. It may be much more approximate than you might imagine, but we don't notice.

If the image dimension is 3000 pixels, and if printed at 300 pixels per inch, the image will cover 3000/300 = 10 inches on paper. The image contains pixels, but all of the inches are on the sheet of paper. Within a reasonable small range, we can print different sizes by just spacing the same pixels differently (or for a larger range, we could resample the pixels to be a different image size). The only purpose of the dpi number is to space the pixels, pixels per inch, on paper. We can change this dpi number at will, to print different sizes on paper, without changing any pixel at all (called scaling).

If you print the image at home, from the image editor File - Print menu, the computer will use the dpi value in the file to compute the size of the image on paper. If it is 4000 pixels and says 180 dpi, it will try to print 4000/180 = 22.2 inches size. This is the only use for dpi in camera files (printing). Some print menus offer a way you can scale the size first however, to print a different size. If you scale this image to print 10 inches (to fit the paper), then it will scale to print at 4000/10 = 400 dpi (inkjets really cannot, but they try).

If you upload the image file to be printed somewhere, they don't ask dpi, they only ask what size to print the pixels that you provided. They will scale it for you. If you upload a dimension of say 2000 pixels, and ask them to print it 10 inches, you will necessarily get a 2000 pixels / 10 inches = 200 dpi result. Most online printers have 250 dpi capability, which is a good upload goal. There is no point in uploading way more pixels than they can possibly print.

Scaling is adjusting the value of the dpi number itself in order to fit the image pixels to the paper size, for printing.
Word definition: A scale is a graduated measurement, like a map scale, and scaling is creating a proportionate size or extent, in this case of pixel distribution relative to the paper dimension. Scaling is computing that 3000 pixels printed at 300 pixels per inch will scale to cover 3000/300 = 10 inches of paper. Or scaling to 200 dpi size, 3000 pixels / 200 dpi = 15 inches of paper. The dpi number scales the pixel size so the overall image dimension fits the paper (more specifically, dpi scales the image size into inches, for paper, like in a book.)
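Scaling is just that one division, run in either direction. A sketch (function names are mine):

```python
def scaled_inches(pixels, dpi):
    """Paper inches covered when these pixels are printed at this dpi."""
    return pixels / dpi

def scaled_dpi(pixels, target_inches):
    """The dpi number that scales these pixels to the wanted paper size."""
    return pixels / target_inches

print(scaled_inches(3000, 300))  # 10.0 inches at 300 dpi
print(scaled_inches(3000, 200))  # 15.0 inches at 200 dpi
print(scaled_dpi(2000, 10))      # 200.0 dpi, as in the upload example above
```

No pixels change in either direction; only the one dpi number does.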

So in any existing image, the only purpose of dpi is about scaling the image size on paper, pixels per inch. And of course, that numerical dpi result should also be an acceptable printing resolution for good quality. Just saying, printing at 100 dpi will be pretty poor (but 3000 pixels will print 30 inches then). Also excessively high values like 500 dpi will be pointless, just wishful thinking (but 3000 pixels will print 6 inches then). Printer capabilities are such that we can expect best results around 250 to 300 dpi, so we supply sufficient pixels to print the size we want, for example, 2500 to 3000 pixels for 10 inches. See how easy this is?

The dpi number (pixels per inch) is just a number. It is Not a property of the image. It is just an isolated number, a separate number simply stored in the file, independent of the image, and it is used only by the printer to space the pixels, only when printing the image on paper (which is where inches exist). When we "scale" an image to fit paper, all we do is change this one number (no pixels are affected in any way). Scaling is drastically different from resampling. Resampling changes the pixels, but scaling only changes this separate dpi Number itself. This dpi number will be the "printing resolution" (how the pixels will be spaced on paper), but it does not affect the pixels. It is NOT related to the camera in any way. If instead, you specify this image is to be printed some size in inches, then that original dpi number is not even used - some New dpi number is computed. There are many bad myths about this, but dpi is just a number, an instruction for printing. Dpi is definitely NOT used by video or cameras (see Scanners below for the only exception). The image is dimensioned in pixels, not inches. Dpi scales pixels to inches on paper (at a rate of so many pixels per inch), but video simply shows pixels directly. And again, the camera has absolutely no clue what size you may later choose to print it on paper.

Normally, our usual goal is that we try to print photo images at about 250 to 300 dpi. This is the capability of the printers (designed for the capability of our eye to see it). 250 to 300 dpi is good for our printers at home, and also good for printing services such as Shutterfly.com, Mpix.com, Snapfish.com, Walmart, etc. We adjust for the paper size by Scaling the image (setting the dpi number value to print that size). Or, if the image is much too large, we Resample it to be smaller, so that we can scale to around 300 dpi. We also need to crop it to the same shape as the paper. See Resize Images about Cropping and Scaling and Resampling, to fit and print the image.

If we print the image on our home printer, by selecting menu File - Print, the printer will honor the dpi number specified in the file, and will print the pixels at the size (inches) determined by the pixel dimensions and the specified pixels per inch number.
(Pixel dimension) / (paper dimension inches) = pixels / inches = pixels per inch

If we send the image out somewhere to be printed, and specify "print this 5x7 inches", they will. They will necessarily ignore our dpi number, and will rescale the image to the necessary dpi number to print the requested 5x7 inches (to cover the 5x7 inches with the provided pixels). The printer machine only has capability in the 250 to 300 dpi range. If their scaled dpi number comes out higher than 250 or 300 dpi, it won't hurt, but it cannot improve the quality. You can upload your 12 megapixel images to them, but if printing 6x4 inches, then about 1500x1000 pixels is all that can help (250 dpi). I am being ambiguous about 250 vs 300 dpi, normally it won't matter much which we use (we are at printer limits), but both will print slightly better than 200 dpi.

Sufficient pixels to print at 250 to 300 dpi is optimum to print photo images. More pixels really cannot help the printer, but very much less is detrimental to quality. This is very simple, but it is essential to know and keep track of. This simple little calculation will show the image size needed for optimum photo printing. This method is one thing you really need to know, it should be second nature to you, considered when printing any image.

Desired Image Size Goal

(Interactive calculator on the web page here: enter the print size, in inches or mm, and the dpi resolution. It requires JavaScript enabled in the browser.)

This dpi number does NOT need to be exact at all, but planning to have sufficient pixels to be somewhere in this ballpark (of 250 to 300 pixels per inch) is a very good thing for printing.
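The same goal the calculator computes can be sketched in one line of Python (the function name is mine):

```python
def pixels_needed(inches, dpi=300):
    """Pixels needed to print one dimension at the target print resolution."""
    return round(inches * dpi)

# An 8x10 inch print, at the 250-300 ppi ballpark:
print(pixels_needed(8, 250), pixels_needed(10, 250))  # 2000 2500
print(pixels_needed(8, 300), pixels_needed(10, 300))  # 2400 3000
```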

Resampling changes the pixels. When the image is too large, resampling entirely replaces the image with a different smaller image, with a different count of new, different pixels. Maybe resampling changes an image that is 6000 pixels wide to be only 1000 pixels wide, so it will fit on the video screen. But this is a destructive loss, which may be perfect for the current goal, but destructive meaning that we cannot go back (we discarded pixels, so save this copy with a different file name - always save the original image too). Resampling is a big deal, destructive to the original. But scaling is not - we can instead just change only this dpi number (called scaling) with wild abandon, back and forth, at will. Changing the stored dpi number does not change any pixel in any way. It is just a separate number, a future instruction for printing. It has absolutely no use until the time of printing. Then it will of course control the size in inches when it prints on paper (unless it is scaled again at that later time).
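To make the "destructive" point concrete, here is the crudest possible resampling (nearest neighbor) on one row of pixels; real editors use better filters (bicubic, Lanczos, etc.), but the pixel-discarding idea is the same. The function name is mine:

```python
def resample_row(row, new_width):
    """Nearest-neighbor resample of one pixel row: keeps some existing
    pixels and simply discards the rest (a destructive change)."""
    old_width = len(row)
    return [row[int(i * old_width / new_width)] for i in range(new_width)]

row = list(range(6000))          # stand-in for one 6000-pixel-wide row
small = resample_row(row, 1000)
print(len(small))                # 1000: five of every six pixels discarded
```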

However (a major point), changing this dpi number will cause absolutely no change at all on the video screen (unless resampling is also selected). Video is not concerned with dpi or inches. Video ignores any dpi number, and simply shows the pixels directly, one for one, one image pixel on one video pixel location. No matter what number the dpi says, you will never see any effect of it on the video screen, which simply just shows the pixels directly. See an example of that.

Aspect Ratio: The image itself and the printing paper size are commonly different shapes, causing printing problems unless handled. This is not speaking of size, but of shape; for example, 4x6 paper is more long and skinny, where 8x10 paper is more short and fat. But 8x12 paper is the SAME SHAPE as 4x6 paper - an image of the proper shape can simply be enlarged and still fit exactly. To fit our image on the paper, we crop the image shape to match the shape of our paper choice. An image has a property called Aspect Ratio (shape). It is the simple ratio of the two image dimensions. Maybe the image size is 3000x2000 pixels, so the aspect ratio is 3000:2000. We reduce this to lowest terms (dividing both numbers by their greatest common divisor, here 1000) and call it 3:2. It just means the two dimensions are in ratio 3 to 2, which is a shape, which can be compared to the paper shape, which normally needs to be the same shape. More at Aspect Ratio.
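Reducing the ratio to lowest terms is a greatest-common-divisor operation; a minimal sketch:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce two image dimensions to lowest terms, e.g. 3000x2000 -> 3:2."""
    g = gcd(width, height)
    return width // g, height // g

print(aspect_ratio(3000, 2000))  # (3, 2)
print(aspect_ratio(1024, 768))   # (4, 3)
```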

Printing paper also has a similar shape, and the same Aspect Ratio applies. For example, 6x4 inch paper is also 3:2 aspect ratio. If we print THIS image on THIS paper, it will fit - the shapes are the same 3:2 aspect ratio (3000x2000 pixels is quite excessive though, for 4x6 inches), and really ought to be resampled to about 1800x1200 pixels first (3:2), to about 300 pixels per inch size.

However, if we want to print this image on 8x10 paper, the paper shape (4:5 aspect ratio) is different from the image (3:2), and some of the image will be lost (cropped, outside the paper edge, off the paper - the shapes are simply different). Or we could choose to fit the tightest dimension, leaving blank white borders the other way (we hate that too). We had exactly the same issues with film, which was not necessarily the same shape as our paper, but digital methods are a bit different. Now we need to Crop, Resample, and Scale when printing digital images.
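The crop that fits one shape into another can be computed directly: trim whichever dimension overhangs the paper's shape. A sketch of that idea, assuming a centered crop (names mine):

```python
def crop_to_paper(width, height, paper_w, paper_h):
    """Largest crop of a width x height image matching the paper's shape."""
    if width * paper_h > height * paper_w:    # image is wider than the paper shape
        new_width = height * paper_w // paper_h   # trim width, keep height
        return new_width, height
    else:                                     # image is taller: trim height
        new_height = width * paper_h // paper_w
        return width, new_height

# 3000x2000 (3:2) cropped to the shape of 8x10 paper (4:5):
print(crop_to_paper(3000, 2000, 8, 10))  # (1600, 2000) - 1400 pixels of width lost
```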

Video screens also have aspect ratio. Non-widescreen monitors used to all be 4:3, and HDTV wide screen TV is 16:9. This is equally important if we are trying to fill full screen, but we are more comfortable with blank space bordering our video images, than on paper.

Scanners do use a specified dpi number (scanning resolution) to create pixels from inches on paper, for example creating 300 pixels per inch. If we scan 10 inches of paper at 300 pixels per inch, we create 3000 pixels in that dimension. If we scanned it at 600 dpi, we create a 6000 pixel dimension, which could then be printed double size at 300 dpi (scaling). The basic scanning scaling concept is:

Scan at 300 dpi, print at 300 dpi - the copy prints at same original size.

Scan at 150 dpi, print at 300 dpi - the copy prints at half of original size.

Scan at 600 dpi, print at 300 dpi - the copy prints at double original size.

The printed enlargement factor = (scanning resolution / printing resolution). Scan film at 3000 dpi, print at 300 dpi, and that is 10x enlargement of the included film size.
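The three scanning examples above are all instances of that one formula; a minimal sketch:

```python
def enlargement_factor(scan_dpi, print_dpi=300):
    """Printed enlargement = scanning resolution / printing resolution."""
    return scan_dpi / print_dpi

print(enlargement_factor(300))   # 1.0  - the copy prints at original size
print(enlargement_factor(150))   # 0.5  - half of original size
print(enlargement_factor(600))   # 2.0  - double original size
print(enlargement_factor(3000))  # 10.0 - film scanned at 3000 dpi, printed 10x
```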

Dpi has necessary meaning for scanners, which create pixels from the paper size in inches.

See how that works out? The concept is pixels per inch. Scanning at 300 dpi automatically ensures that you will have sufficient pixels to print original size at 300 dpi. Even if not printing, scan dpi still determines the image size in pixels (created from the inches scanned).

Digital cameras create pixels directly, a fixed size image. If the camera sensor size is 12 megapixels, it creates a 12 megapixel image, dimensioned in pixels. There is absolutely no concept of dpi yet (no paper size is defined in the camera, inches are a very undefined concept at that point). The camera cannot possibly guess what size we might print it someday, but we will figure it out later, when we are ready to print, if we even print it. We do not care about dpi otherwise, it is an unused number until we print. However, we are always concerned with image size (pixels), which determines how we can use that image, on the video screen, or when ready to print.

The camera will stick in some arbitrary dummy dpi number, just so some believable printed size can be shown. If it didn't, then Photoshop would automatically call the blank value 72 dpi, which indicates some unreal print size in feet, so cameras do stick in a dummy dpi number, maybe 200 to 300 dpi. They don't know what size you may print it later. Camera brands vary in the dpi number they make up, but this value is a meaningless arbitrary number, confusing if we try to make any sense of it. There is NO CONCEPT of inches in the camera (just pixels). The image is dimensioned in pixels. We will change that dummy dpi number when we decide how we want to print it.

But digital basics are all the same for all images, so after image creation, then it is a digital image, dimensioned in pixels. Dpi is only used to control the size of the printed image on paper (paper has inches). Video screens are dimensioned in pixels, and video has no use for any dpi number.

Repeating - Inches only exist on the paper we print on, or the paper that we scan. Inches do not exist in the camera, in the image file, in the video system, or in computer memory. In those situations, only pixels exist. Without inches, there can be no concept of dpi. Instead, digital images are dimensioned in pixels. The single most important thing to know about digital images is their dimensions in pixels. This affects how you can use them.

FWIW, I am saying dpi for "pixels per inch". I am aware that nowadays some instead prefer to say ppi for the same thing, but I am also aware it has always been called dpi. Yes, I am aware that printing devices have a second use for dpi, meaning ink drops per inch, including halftone screens. If interested or confused about dpi, see more details here.