Where they say 12 or 16-bit "precision", they actually mean resolution, and not accuracy. You might be able to discriminate 1 part in 4096 or 65536, but the absolute value may still be off by 5%, or 204 parts in 4096, or 3276 parts in 65536.

The total word size (bits) determines the number of resolution steps possible; the accuracy depends on the reference voltage used and, of course, the quality of the chip's internal design. Both those models have an internal voltage reference as well as programmable gain settings, with 4 single-ended input channels or 2 differential input channels. The datasheet shows all the possible error tolerances, which are impressive if you have studied ADC chips in the past. It's not a cheap chip, but it is certainly a very powerful, feature-filled one. Study the datasheet at your leisure: http://www.ti.com/lit/ds/symlink/ads1115.pdf

At the refinery where I worked before retirement, we dealt with a lot of calibration issues and training. We tried to teach new instrumentation techs who started work there two different concepts about what a quality measurement was and was not.

If a sensor can be proved to have good 'repeatability' within its rated 'accuracy' over its full measurement range, then you have a good sensor; stop fussing with it. Absolute accuracy is all about standards and what you are using as a reference to compare all other readings against. We stressed repairs and adjustments that ended up with good repeatability, rather than the circular fool's path of 'proving' that a given measurement is 'accurate'. So we used the word repeatability to mean precision and didn't pretend to claim anything about accuracy.

For a few legal compliance measurements, the local government required us to have official calibration standards traceable to an approved third-party lab's standards. For those we had to send a few of our "bench standard" instruments, such as a couple of bench DMMs, a bench deadweight tester (used for pressure measurements), a bench electronic pressure sensor, etc., to an 'approved' calibration laboratory, which would test our standards, publish accuracy specs for them, and put a dated seal on them good for one year. With these 'bench standards' we could then check our other measurement equipment when dealing with compliance measurement issues.

The word accuracy is a very overloaded word that can mean many different things to many different people. In principle it would seem to be a simple word: how close is a specific measurement to its 'true' value? The problem is defining 'true' and trying to implement it in a meaningful, useful, and practical manner. Metrology can be an incredibly complex field. It's also incredibly expensive.

// printing: original method, not bothering with a voltage reference
  lcd.setCursor(0, 3);
  lcd.print(analogRead(0) * voltage_divider / 1023.0 * 5.0);
}

void loop() {
  read_voltage();
  screen_print();
}

Also, another issue is that some of the values seem to fluctuate on my LCD screen. This is especially apparent for method 2. Any suggestions on how to reduce this and make the value more stable? I've added a 470n capacitor between A0 and ground.

Quote from: dhenry on January 02, 2013, 03:55:02 PM
"Put a small capacitor on the analog input pin. Anything from 0.1n to 1000n will work."

When should I use a capacitor like this? On all analog inputs? I have some inputs that go straight from a 0-5V source and thus don't need a voltage divider, but do I still need the capacitor?

What I fail utterly to see here is how a component that can be anywhere in a 10,000 to 1 range could possibly be meaningful compensation for anything.

Bob

--> WA7EMS "The solution of every problem is another problem." -Johann Wolfgang von Goethe
I do answer technical questions PM'd to me with whatever is in my clipboard

The concept of adding a cap is this: if the output impedance of whatever is driving the analog input pin is higher than 10K ohms, the internal sample-and-hold cap may not have time to charge up to the true value of the applied voltage. Possible solutions are:

1. Buffer the applied voltage with a device that meets the output-impedance recommendation of the AVR ADC.

2. Do consecutive analogRead() commands on the same input pin and ignore the first reading obtained.

3. Add a small cap that will accumulate the charge of the applied voltage and can transfer that charge to the internal sample-and-hold cap faster when the pin is read.