I have a new 14.2 megapixel Nikon Coolpix S6000 digital camera. When I take a photo and check its properties, it shows that the size is 1.8 megs. How do I get it to increase in size up to the 14 megs? I have used a Kodak camera that was only 12 megapixels and when I checked the properties, it showed a size of 3.2 megs.

That does not make sense to me. How is it that the camera with the larger megs takes photos with fewer megs? I have tried about every setting on the Nikon but cannot get it to take a photo with more than 2 megs in size. What does the 14.2 megs stand for? Is it not the size of the photo? Please help me; I'm puzzled by this.

--Submitted by: John M.

Here are some member answers to get you started, but please read all the advice and suggestions that our members have contributed to this question.

The compression level of a Blu-ray is different from the compression of a streaming HD video on Netflix. They might be the same resolution, but the compression makes the file sizes very different.

What's happening is that the JPEG engine in the Nikon compresses the data quite a bit, which can be good or bad. Maybe they've increased the megapixels but are able to compress the file further while retaining as much data as before; or its compression is losing data that makes a difference in the final output, and that would be a negative.

The resulting file size is down to the JPEG compression, which actually varies by photo (the more detailed the image, the less compression is possible). JPEG is similar to ZIP compression in concept.
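A rough way to see the "more detail, less compression" effect, using Python's standard `zlib` (the lossless cousin of ZIP, not an actual JPEG encoder); the one-megabyte buffers here are stand-ins for a flat sky versus dense foliage:

```python
import random
import zlib

# A "flat" image: one megabyte of identical bytes (like a clear blue sky).
flat = bytes(1_000_000)

# A "detailed" image: one megabyte of random bytes (like dense foliage).
random.seed(42)
detailed = random.randbytes(1_000_000)

flat_size = len(zlib.compress(flat))
detailed_size = len(zlib.compress(detailed))

print(f"flat:     {flat_size:>9,} bytes after compression")
print(f"detailed: {detailed_size:>9,} bytes after compression")
# The uniform data shrinks dramatically; the random data barely compresses.
```

The same principle is why two photos from the same camera, at the same megapixel setting, end up as different file sizes on the card.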

Nikon cameras do have an image storage format, which they call RAW, which stores the full 14-megapixel capture. I've used this by mistake, and it can cause a substantial delay between pictures just due to card speed; also, Nikon software is the only way to read or use the picture.

You can also opt for more lossy compression, so that you can store more pictures on your memory card.

If you look for RAW storage in your instruction book, it should tell you how to select the image size.

The PNG format has some similarity to the ZIP format, as does the GIF format.

JPEG has absolutely nothing in common with ZIP, apart from being a way to compress data. And JPEG actually removes some information from your images.

Anyway, you should always set a format other than JPEG/JPG for your photos. JPG discards information, and that should be avoided. It's even more important if you do any editing of your images. Only convert to JPG as a last step, before distributing your images by e-mail.

You should also know that "RAW" images are not portable across systems. Their format is specific to each camera technology (notably the type of sensor, its subpixel geometry, its color filters, the non-linear sensitivity curve of each cell, and the number of bits per captured subpixel).

RAW images can be huge, even though they are generally compressed slightly (with a lossless binary algorithm). They take considerable space on your memory card, require more internal memory for processing, and take more time to transfer to the SD card (unless your camera can write to several cards in parallel, or use cards with faster, parallel storage channels): the result is several seconds between two RAW shots at maximum definition.

Admittedly, SD card capacities have increased a lot, but the limiting factor is still their transfer rate (in megabits per second), which is much slower than the volatile RAM used for internal processing (more advanced cameras also need faster volatile RAM for this processing, just to be able to capture images very fast and avoid accumulated noise while the many sensor cells are discharged and sampled sequentially).
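To put illustrative numbers on that bottleneck (the file size and card speed below are assumptions for the sake of the arithmetic, not the specs of any particular camera or card):

```python
# Hypothetical figures: a compressed RAW file of ~20 MB written to a card
# that sustains ~10 MB/s. Both numbers are assumptions, chosen only to
# illustrate why RAW shooting feels slow compared with JPEG.
raw_file_mb = 20
card_write_mb_per_s = 10

seconds_between_shots = raw_file_mb / card_write_mb_per_s
print(f"~{seconds_between_shots:.1f} s to flush one RAW file to the card")

# A ~2 MB JPEG on the same card clears in a fraction of that time.
jpeg_file_mb = 2
print(f"~{jpeg_file_mb / card_write_mb_per_s:.1f} s for the JPEG")
```

The ratio, not the absolute numbers, is the point: a file ten times larger takes ten times longer to write at the same card speed.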

For this reason, RAW images require specific software to generate a demosaiced image in a portable color model and an interchangeable format like TIFF. The huge improvements in computing technology (including lower power consumption for faster processing) now allow the demosaicing process to be implemented very cleanly within cameras, so only demosaiced images need to be stored, with accurate color model conversions.

TIFF images from those cameras are extremely good, and do not require specific conversion software on your PC to process the RAW images (it is very difficult, or costly, to get software that implements the exact demosaicing conversion used in some cameras, because the RAW format depends on patented and highly protected sensor technologies).

So if your camera generates RAW images, make sure that it comes with PC software that will allow you to convert them to accurate demosaiced TIFF images, with control over some conversion filters, notably the linearization transfer function for each color component (which depends on the exposure time) and the pixel-cell-size filters which depend on the diaphragm aperture (and the overall lightness of the scene). You should then compare the quality of the generated TIFF images.

And then you'll be able to choose a good program for photo corrections in this TIFF format: cropping, denoising, correction of red eyes caused by the flash. Finally you'll be able to choose the compressor to JPEG (simple converters are very poor, and this includes the compressors implemented in cameras, because JPEG compression is very CPU-intensive, much more so than JPEG decompression). Be careful with GPU-accelerated converters: they are very fast, but much less precise, and may be much more lossy than what a conventional converter using floating point and autocorrection of accumulated sampling errors would produce. You need a very good imaging program

(don't use the very basic Windows Paint for this, as it generates lots of JPEG subsampling artefacts with too many color aberrations and undesired saturation effects: its poor DCT transform is computed with only 8 bits of precision per component, including the lightness component, plus a very imprecise conversion of the color model from sRGB to YCbCr, and imprecise subsampling when generating the Cb/Cr components, which are compressed separately from the Y component at a lower spatial definition, most often at 4:2:0 or worse at 4:1:1, instead of 4:4:4!)

If you're generating high-quality snapshots, don't be afraid of longer processing times. Windows Paint was only built for fast production of low-quality images made by people using poor cameras under poor light conditions or basic autofocus cameras, and does not give you full control of the various transfer function parameters. If you do use it, never load a JPEG image and rework it multiple times in that format (use the PNG format instead, or even BMP images with 32-bit colors, for all intermediate images).

Also don't confuse RAW images with BMP and TIFF images. BMP images are demosaiced (just like TIFF, which is the preferred storage format used by professionals because it offers color calibration and keeps some parameters from the source RAW image, notably its transfer function parameters, its color model, and a trace of the filters that have been applied), whereas RAW images are the exact binary snapshot taken by the sensor, prior to the correction of the subpixel geometry. RAW images are never suitable for direct reproduction on a display screen or for printing.

Many picture manipulation applications CAN read *.raw or *.nef files. I have a Nikon camera and use Picasa to organise photos and as my default picture viewer; it even imports directly from the camera. My personal favourite (free) software for photo editing is GIMP; it needs a plugin to read raw files, but that is no problem once installed.

A camera's megapixels refers, in a way, to the maximum physical dimensions of the picture taken (whether 24" x 10" or otherwise), but megabytes refers to the digital size of the picture (the space it occupies on the camera's memory card).

As Kalel said, your pictures are being compressed before being stored onto the medium. The compression level is usually user-configurable to use more or less compression, or to take RAW (uncompressed) photos. Check your camera manual on how to adjust the compression level. Your Nikon's default compression setting is higher than your Kodak's, hence the smaller photo file size.

There seems to be an awful lot of confusion over file compression. All camera RAW files, be they from Canon, Nikon, Pentax, Olympus or some other proprietary brand are actually compressed. For example a 16.5 megabyte, 14.2 megapixel Pentax RAW file (.PEF) opens in Photoshop to a document size of more than 41 megabytes. This is because the RAW .PEF file is compressed. The same goes for an Olympus 13.4 megabyte, 8 megapixel file (.ORF) which opens in Photoshop to a document size of almost 23 megabytes.
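Those Photoshop document sizes follow directly from the pixel count: an uncompressed 8-bit RGB image needs 3 bytes per pixel. A quick sanity check on the 14.2 megapixel figure:

```python
megapixels = 14.2
bytes_per_pixel = 3  # one byte each for R, G and B at 8 bits per channel

uncompressed_bytes = megapixels * 1_000_000 * bytes_per_pixel
uncompressed_mb = uncompressed_bytes / (1024 * 1024)
print(f"{uncompressed_mb:.1f} MB uncompressed")
```

That lands right around the "more than 41 megabytes" document size quoted above; the exact value depends on the true pixel dimensions of the sensor.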

There are basically two types of compression - lossless and lossy. Among the former type for images are PSD, TIF and the RAW file formats from various camera manufacturers. These formats are perfect for editing because all the image data is there. ZIP and 7z are among several other lossless compression formats.

Of the lossy formats by far the most popular and useful is JPEG (or JPG). It is customisable in that the compression/quality trade-off can be set by whoever is making the file. However, this file format is not so good for editing, and at the higher compression ratios editing should probably be avoided altogether. The point is that each time the file is opened for editing and saved, data is discarded and can never be retrieved. Liken it to recording music from one old tape cassette to another tape. Each tape contains its own background noise which is added to the next re-recording, so the signal to noise ratio gets worse. The signal becomes less as the noise increases. In image editing the original image becomes more and more degraded as more data has to be interpolated from what's left after the JPEG algorithm has discarded more data. Eventually all the fine detail goes, then other less fine detail, and the image is swamped with random clumps of pixels containing no data, known as "artefacts".
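The cassette analogy can be sketched numerically. This toy simulation (pure Python, not a real JPEG pipeline) adds a little fresh, independent error at each "generation", the way each re-save discards and re-interpolates data, and watches the signal-to-noise ratio fall:

```python
import math
import random

random.seed(0)

# A clean "signal": one cycle of a sine wave, 1000 samples.
signal = [math.sin(2 * math.pi * i / 1000) for i in range(1000)]

def snr_db(clean, noisy):
    """Signal-to-noise ratio in decibels."""
    sig_power = sum(s * s for s in clean) / len(clean)
    noise_power = sum((n - s) ** 2 for s, n in zip(clean, noisy)) / len(clean)
    return 10 * math.log10(sig_power / noise_power)

copy = list(signal)
snrs = []
for generation in range(1, 11):
    # Each "re-recording" adds new error on top of what is already there.
    copy = [x + random.gauss(0, 0.01) for x in copy]
    snrs.append(snr_db(signal, copy))

print(f"after 1 generation:   {snrs[0]:.1f} dB")
print(f"after 10 generations: {snrs[-1]:.1f} dB")
# Noise power accumulates with each pass, so the SNR only goes down.
```

Real JPEG generation loss is not random noise but structured quantization error; the direction of the effect is the same, which is why editing should happen in a lossless format with JPEG saved only once at the end.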

I know I haven't mentioned other formats such as PNG and GIF, and so on, because they don't add to understanding the difference between lossless and lossy compression. But I hope this explanation is helpful.

They are unrelated terms. Pixels are the tiny points of colors that compose an image. Bytes are the 0s and 1s used to store information. The megabyte count of an image is dependent upon the complexity and amount of information being stored which will vary from image to image.

John, I completely understand why you're so puzzled. Everyone who replies to your question over the next few days will be 'tech savvy' - we know all the lingo and we're used to using it - and it can be easy for us to assume that everyone else knows this (relatively meaningless) information.

Firstly, I'll offer some definitions.

Megapixel - you're right, this is about the size of photo that your camera can take. A pixel is the smallest part of an image. 1 'megapixel' = 1 million pixels, or tiny parts of an image. That means your Nikon camera can take a photo of a tree, and that photo will be made up of (around) 14.2 million tiny bits. Now I have a 6 year old Canon camera that's 4 megapixels. I could take a picture of the exact same tree, but my photo would only be made up of (around) 4 million tiny bits. The common way of referring to how many 'bits' make up an image is resolution. Resolution is essentially the detail of the image, so a photo with a higher resolution is more detailed. Your Nikon camera has a resolution of 14.2 megapixels, which means it can take more detailed pictures of a tree than your Kodak, which has a lower resolution of 12 megapixels.
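In concrete terms, a megapixel count is just the photo's width in pixels times its height. The dimensions below are illustrative figures in the ballpark of a ~14 MP sensor, not a quoted spec for any particular camera:

```python
# Illustrative pixel dimensions for a roughly 14-megapixel photo.
width, height = 4320, 3240

total_pixels = width * height
megapixels = total_pixels / 1_000_000
print(f"{width} x {height} = {total_pixels:,} pixels = {megapixels:.1f} MP")
```

Reducing the camera's image-size setting shrinks those two numbers, and the megapixel count falls with their product.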

Megabyte - this is all about the size of the file. Imagine you've got an old film camera, and you take the negatives to the print shop to get them processed. You choose a 4"x6" size for most of them, but then you really like another, so you opt for an 8"x10" print. The 8"x10" is bigger, right? When people talk about megabytes, they are referring to the size of a file - or how much space it takes up.

Megs - this is just a shorter way of saying - rather confusingly - either megabyte or megapixel.

Now, you ask why the Kodak is giving you larger files, even though it has a lower resolution. I know nothing about either of these cameras, but there are a number of factors that could cause this. When a digital camera takes a photo, the camera compresses the image so that it can store it on your memory card. Different cameras compress photos in different ways, and this can really affect the size of the file. Secondly, this might be down to settings on the camera. Have a play around in the menu and see if there's an option for detail - some cameras have 'rough', 'standard' and 'fine' settings. This affects the amount of detail in each photo - the finer the detail, the larger the file.

To recap: 14.2 megs is the size of the photo - its resolution. A megapixel is not the same as a megabyte. Don't worry about the two not matching up. Just because the file size of the Nikon is smaller than the Kodak, it doesn't mean that your photos are any lower quality.

Your message would be correct if, in every place where you used the word "resolution", you had used the correct term "definition". The number of pixels in an image determines its "definition" (that's why we are sold "high definition" TV sets and programmes).

The "definition" (counted in pixels) is independant of the physical image size and of the compression applied to the image data.

However, data compression MAY affect the "resolution" of the image: this is effectively the case with JPEG compression, which suppresses details in the image without affecting the "definition".

The "resolution" also depends on the optical elements of the camera (optical zoom, lens quality) and sometimes on the lighting conditions of the photographed scene, and as well on the sensitivity and fidelity of the camera lens (it is affected by the temperature noise of the captor).

The "resolution" is normally measured as a spacial angle (in steradian) of the smallest details that it can distinguish from the other with exact differences between the two average positions in the captured image ; the fidelity of this detail (respecting their contrast and colors) is also important because any averaging or noise that occurs in the capture data that will either diffuse the capture value into the neighbouring pixels, or will affect the color fidelity will reduce the "resolution".

The "resolution" may as well be indicated as a planar angle (in radians or degrees, or subdivisions of them). That's because there's a direct proportionality between the angular angle of the opening cone centered on the lens center and opened on one side of the lens to the capturing pixel, and on the other side on the captured scene, with a fixed ratio.

However, this ratio is not exactly the mathematical conversion ratio between steradians and radians, because the "cone" is actually not circular (the capturing pixels are arranged in a grid). In addition, there are always angular imperfections and color defects in this conversion, due to the difference in focal length between distinct wavelengths (colors) of the light spectrum (this is unavoidable in all optical systems). But for rough measurement, a good approximation of this ratio can be applied.

The "resolution" may be finally be given by using a distance, measured in (milli-)meters (or inches...), because there's a good linear approximation of the planar angle by the distance on the capturing surface from its center. This works as long as we are not too far from the optical center of the image. The reason is that the planar angle (in radians) is almost equal to the distance from the optical center measured with a distance unit equal to the focale length : sin(x) is approx. (x)... As a consequence, you can convert this distance by measuring it in terms of number of pixels traversed in one direction, but there's no predictable ratio because this also highly depends on the focale length !!

To make things worse, the "resolution" of any camera also varies between the center of the captured image and the periphery: it is ALWAYS better in the center than at the periphery (due to various optical aberrations). That effect is invisible in a digital image if you only consider the pixels.

Actually, NO camera indicates its effective resolution; they just advertise their "definition". The definition is only a commercial argument. What our eyes see is not the definition of the image but its resolution, which also depends on our viewing distance. Our eyes are imperfect cameras too: they have better resolution at the optical center than at the periphery, because the retina has a better "definition" in the central "macula" than at the periphery (it is almost null at a spot not far from the center of vision, where all the nerves converge to link the eye to the brain), if you take the "definition" of our eyes to be the number of sensitive cells per unit of surface, i.e. their density.

Image quality has NOTHING in common with the commercial advertising of image definition (counted in pixels).

What really matters is the optical perfection, color fidelity, light sensitivity of the cells, speed of acquisition of the image (to avoid blur), and temperature stability of the capture (to reduce noise): in other words, the only elements that effectively affect the "resolution". The better the resolution, the better you'll be able to adapt the captured image to your vision (including your viewing distance) when reproducing it and looking at it again (with your imperfect eyes...).

And yes, JPEG compression is lossy, but the losses are far smaller than the effects caused by poor optical construction of the camera or by the temperature instability of the sensor!

In fact, unless you are only viewing a part of the image (or want to crop it later to extract the interesting part), there's absolutely NO NEED for more than 3 megapixels (with a 4:3 ratio) in ANY image with 32-bit colors, because you will NEVER be able to see the difference (unless you zoom in to look at details from an abnormally close distance).

That was more than a mouthful as to image compression, JPEG, RAW, etc. As a 30-year professional photographer, I think it is important to understand the relationship between image sensor quality and lens quality. These are the two most important components separating "shooting pictures" from "capturing photos". You said this, but in a very long and overly semantic way. With point-and-shoots it is always better to "KISS": keep it sweet and simple.

Digital photos are basically 'snapped' the same way as conventional photos, so pretty obviously the main thing is the lens. The 'megapixel' measure (to my mind) is closer to the difference between shooting an image on Kodachrome (a very fine-grained film which requires decent illumination) or GAF 500 (coarse-grained but useful for low light). But the compression (unless your camera allows RAW or TIFF) is also very important, and I simply do not understand why even today the cheaper cameras don't use the same compression algorithms as the more expensive cameras (same marque).

But the lens is the killer - shooting through a bit of moulded plastic onto Kodachrome will never be brilliant - same as shooting through a plastic lens onto gadzillions of megapixels won't be brilliant.

This (fairly obvious) stuff is NEVER considered if you only read camera reviews in the computer related media! (IMO)

I was pleased to see the last paragraph of verdyp's post. Because of lower production costs, manufacturers have all jumped on the bandwagon to see who can offer the most pixels. But, as verdyp says, it's all pretty meaningless unless you want to print a picture the size of a tablecloth, or significantly enlarge a little bit of it. Set your camera to no more than 3 megapixels unless you have a special requirement. Anything more is generally a waste of space and can be problematic with transfer times.

Instead of going into the confusing difference between resolution and definition, the question may be answered in a simpler way. The 14.2 megapixel camera can capture an image with a maximum of 14.2 million dots per square inch, while the 12 megapixel camera can capture an image with a maximum of 12 million dots per square inch. So the 14.2 megapixel camera captures images with greater detail. But this does not prevent the 14.2 megapixel camera from capturing images with less detail; that is to say, it can be set to capture images with, say, 6 megapixels, i.e. 6 million dots per square inch. In this case the size of the captured image in megabytes will normally be less than when the image is captured at 14.2 megapixels, or even 12 megapixels, since the image will have less detail. So the settings of the 14.2 megapixel camera must be checked to see whether the camera is capturing images at its maximum megapixels by default; otherwise the size of the image in megabytes might be less than that of an image taken by a 12 megapixel camera at its maximum megapixels.

Regarding megapixels and megabytes, it must be remembered that the megapixel is not a unit for measuring image file size, while the megabyte is. Megapixels denote the detail with which an image is captured, and thereby how much an image can be enlarged when printed without any visible distortion or loss of clarity, not how large the printed photo will be. With a 14.2 megapixel camera, a captured image can be printed in sizes ranging from a passport photo to a large poster.

I've been a photographer in industry and done prepress work for 30 years. And I don't think this thread is a shining example of the public's general knowledge of digital photography. Some of it is quite revealing, and not always in a good way. Some information is so wrong it's difficult to know where to start. What with confusion over compression, pixels and bytes, resolution and definition, and now a camera's actual pixel rating! Oh, and terrible spelling here and there. But I think I'll start with the so very wrong "megapixels per square inch".

Right. The pixel rating of a digital camera relates to the number of recordable pixels on the camera's array. In other words, the potential number of pixels in the entire image. If the camera rating is 12 megapixels, that's it for the entire image. It is most certainly not per square inch. The number of pixels per inch becomes more or less according to whether you enlarge or reduce the physical size of the image in editing. For example, an ideal pixel rating for a 10"x8" image for reprographic reproduction (300 pixels per inch) is about 8 megapixels, assuming no cropping. Less than 8 megapixels would mean that data would have to be interpolated (created from surrounding data) in order to meet the 300 pixels per inch requirement.
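The arithmetic behind that 8-megapixel figure is straightforward: multiply each print dimension by the pixels-per-inch target, then multiply the two pixel counts together.

```python
print_width_in, print_height_in = 10, 8
ppi = 300  # pixels per inch for reprographic-quality reproduction

pixels_needed = (print_width_in * ppi) * (print_height_in * ppi)
print(f"{print_width_in * ppi} x {print_height_in * ppi} "
      f"= {pixels_needed / 1_000_000:.1f} MP needed")
```

That works out to 7.2 megapixels, which rounds up to the "about 8" quoted above once you allow a little margin for cropping.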

If the camera rating were per square inch then one would expect to see another camera rating relating to maximum physical image size. There isn't one for the obvious reasons explained above.

Thanks pvandck for correcting my misconception about megapixels per square inch. But if the "per square inch" part is omitted, I hope my previous post somewhat correctly explains the confusion between the size in megabytes of a captured image and the megapixel rating of a camera. Further explanation and/or correction is most welcome.

I'm sorry to tell you that a number of megapixels per inch (which is not resolution, and not even exactly definition, but pixel density for a specific medium) also means nothing for photography. The size of the capturing area varies depending on the internal focal length and is not a factor of quality, because what matters is the viewing distance once the photo is reproduced on some medium (the small screen of the camera, a large HDTV screen, or a printed photo). In those conditions, the real limiting factor will not be the camera sensor but the nature and size of the reproduction medium being observed (nobody can observe the camera sensor directly, and its dimensions have nothing in common with the size of the captured scene; it is much smaller).

Note that digital cameras often display a focal length equivalent which is actually 2 or 3 times longer than the effective focal length used (notably on compact cameras), and sometimes even more (micro-cameras, such as webcams). This is generally indicated by a multiplication factor which allows conversion to the classic focal lengths used on film cameras shooting 24x36 mm Kodachrome film. This factor has nothing to do with the optical zoom factor (or the optional digital zoom factor, in which a central part of the captured image is scaled up by interpolation, with tiny details better preserved through the lossy JPEG compression of this interpolated image).

But things are in fact much more complex, because the actual pixel resolution of the capturing cell is just an average computed after ignoring how the RGB pixels are effectively created: if you don't know what I mean, look at the Wikipedia article about "demosaicing". Effectively, an sRGB image stored in JPEG format assumes a map of square pixels that never overlap. But most cameras don't actually have square pixels; instead they have a mesh of subpixels that do not entirely fill the theoretical square, that are not necessarily captured at the same time for each color plane, and that do not necessarily use the theoretical three pigments of the sRGB model. In addition, the subpixel geometry really matters: newer cameras use arrangements where there may be twice as many green subpixels as red or blue, and the arrangement is not necessarily a horizontal band of three tall rectangular subpixels like on most TFT/LCD display panels; others include a white subpixel taking half of the total capturing area, to better capture subtle light differences.

Due to the subpixel geometries, the differences in the number of pigments and in their relative surface proportions, the differences in the intervals of time during which each subpixel captures light, and the differences caused by microlenses or color filters on top of the photoreactive sensor, the camera firmware includes a demosaicing algorithm that rebuilds an image in the sRGB model at a single timestamp for the whole area. This implies an interpolation, which can create magenta color fringes and other aberrations. Some algorithms may be added on top of this to correct the artefacts created by the demosaicing algorithm.
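The core of that interpolation can be sketched in a few lines. This is a deliberately simplified toy (plain bilinear averaging on a made-up 4x4 Bayer-style mosaic, nothing like real camera firmware, which uses far more sophisticated edge-aware variants): at a red or blue photosite there is no green measurement, so green is estimated from the four green neighbours.

```python
# A tiny 4x4 mosaic of raw sensor values (one number per photosite).
# Colour-filter pattern, row by row:  R G R G / G B G B / R G R G / G B G B
mosaic = [
    [120, 200, 118, 202],
    [198,  60, 199,  61],
    [121, 201, 119, 203],
    [197,  62, 196,  63],
]

def green_at(row, col):
    """Bilinear estimate of green at a non-green photosite: average the
    in-bounds up/down/left/right neighbours, which all carry green."""
    neighbours = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0]):
            neighbours.append(mosaic[r][c])
    return sum(neighbours) / len(neighbours)

# The photosite at (1, 1) is blue; its four neighbours are all green.
g = green_at(1, 1)
print(f"interpolated green at (1,1): {g:.1f}")
```

Even in this toy form you can see why demosaicing invents data: two-thirds of each output pixel's colour is an average of its neighbours, which is exactly where fringing and other interpolation artefacts come from.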

You must also know that the demosaicing algorithm implemented in the camera is highly dependent on the computing capability of the camera, notably for images captured at its maximum definition (i.e. using data from all subpixels of the sensor). In addition, because it uses interpolation, the demosaicing process must convert the light scales into a linear scale suitable for interpolation, but the sensitivity of the cells is most often not linear. To keep color precision, a raw subpixel captured with (for example) 10 bits of precision will first be converted to a scale with 14 bits of precision, but the geometric demosaicing interpolation needs 16 bits of precision per subpixel to produce the 16 bits of precision of the computed theoretical sRGB pixels. Then additional filters are applied to correct color artefacts created by the demosaicing process itself (trying to limit the color fringing), which slightly reduces the pixel resolution.

The camera may not give you access to the RAW captured image (without the demosaicing performed in-camera by its limited graphics processor), but only the computed image after demosaicing (in a format such as TIFF or PNG), or the JPEG image. The JPEG is lossy both in spatial pixel resolution and in color/light resolution: the spatial pixel resolution of a JPEG-compressed image is considered to be about half the pixels you get when decompressing it to a flat sRGB image, a consequence of the Shannon principles of signal sampling.

Finally, optical and physical factors really matter when you compare cameras. Notably, the different wavelengths passing through any lens will never focus at exactly the same distance (red wavelengths are longer than green and blue and are refracted differently through the lens; part of the light is also reflected at the surface of the lens, and another part is absorbed within the lens itself and never reaches the capturing area). When the focal length is adjusted for the median (green) wavelength, this creates a slight dispersion of light (and thus color information) into the surrounding capturing cells for the blue and red color components.

Optical properties are extremely complex to analyze, and before you compare the commercially advertised pixel definition of cameras, you have to look at other information which camera makers often forget to give clearly: notably, the exact nature of the sensor, with its subpixel geometry type; whether the sensor includes microlenses (some microlenses are even controlled in their shape by a piezoelectric system that adjusts their focal length and orientation, physically realigning the subpixels without using the lossy demosaicing interpolation); which demosaicing algorithm is used; the sampling resolution (in bits per subpixel); the resolution of the intermediate conversion to the linear scale used by the interpolating process; the final resolution in bits per computed color component and per pixel of the demosaiced image (before JPEG compression); which type of digital filter is applied to correct color fringes; and whether a noise reduction filter is used, how it is computed (bilinear, Gaussian...) and with which radius (because this also reduces the effective definition in pixels). It is better if the camera sensor is not too strongly influenced by temperature, as that creates very visible random noise in dark areas, whose effect is MUCH more important on digital cameras than on film, where the noise is much better spread across nanoscopic areas and smoothed by the randomized pattern of silver particles.

So a digital camera has a lot of factors influencing the effective resolution you get in your snapshots. Some are physical and can be controlled by better lithography (using less thermoresistive components in the cells, reducing the currents flowing through the transistors, and reducing the electron dispersion leaking within the substrate to surrounding subpixels), by a cooling system (between captures, the electronic charges must be moved out of the cells, and this current generates heat that reduces the sensitivity and precision of the captured charges), or by faster sampling electronics. Some are opto-physical factors (variable wavelengths creating color dispersion through the lens due to differential refraction indices; a piezoelectric microlens system greatly improves this, but it requires additional current and generates additional heat, which creates noise); some are purely physical (the geometric precision of the lens and the opening of the diaphragm). Most effects, though, are introduced by the demosaicing process (and thus by the processor's capabilities and the computing precision of the implemented algorithm), and finally by the numeric precision of the JPEG compression (notably the desampling factors in its transformation matrix, the intermediate additive terms within the DCT conversion, and the spatial correction of accumulated sampling errors).

Note that compact cameras are very sensitive to noise, precisely because of temperature, their limited processor capabilities (and the limitations of their batteries), and their very small lenses (to get enough light, the diaphragm opening is relatively larger, and color aberrations due to opto-physical differential refraction are much more important, even if they use the same photosensor with the same subpixel geometry). Nothing can replace a good large optical system using a longer focal length. If you can, choose a camera with a 1x focal factor instead of 3x, which allows for a larger diaphragm opening with lower refractive color aberrations. It also allows for a better cooling system on the sensor, and faster snapshots (more immune to the motion blur that reduces spatial resolution).

Another factor is the type of flash on the camera: if possible use a Xenon flash, because it lights up almost instantly to a pure white throughout the capture window (other types of flash do not light completely to white, and can ramp up slowly in red wavelengths long before green and blue, so the subpixels are not captured under the same light, causing more color artefacts during the demosaicing process). This explains why photos taken on most smartphones are reddish/yellowish, even with the flash (which is very poor and varies slowly from red to orange and then to a pinkish white). These cameras can give correct colors only outdoors, during the day, under sunny conditions but not under direct sun exposure, and with enough indirect light reflections, where you don't need the flash.

Unfortunately, Xenon flashes require substantial battery charge, and large batteries are not suitable for smartphones and compact cameras if you want good autonomy for enough shots. In addition, to build up the necessary precharge, you need to wait several seconds before taking the snapshot (otherwise the battery cannot deliver the current without excessive heating). You can avoid this heating problem by using more battery cells in parallel (but this increases the price of the camera). For taking high-quality video with good colors, nothing will replace an external source of light (i.e. powerful spotlights).
