I still have some Log-Log graph paper. I'm telling you this so that it's on record that there exists some dead tree still awaiting use, except for some corporate imagery doodlings. So please log my ex-log log-log logo, before you log off for the evening.

yakkoTDI wrote: <snipped quotes>I love the idea of graph paper. Sadly my handwriting looks like crap unless it is tiny and those graph boxes are just too big.

How big is too big? You can (or at least could) get graph paper in a wide variety of sizes, and you can always print your own. Most "standard" graph paper has squares about the size of the ruling on standard lined paper, but there are varieties with grids down to 1/10" and 1 mm, and with a printer you can make whatever size you want. It shouldn't cost much ink, since you want lines that are visible enough but largely stay out of the way of whatever you're graphing/drawing/writing.

I recall finding some with a small grid (probably 1/10") that I used to doodle fractals on back in my school days.
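In case anyone does want to print their own: here's a quick Python sketch (my own toy, nothing standard about the default sizes or the light-blue stroke I picked) that writes an SVG grid you can print at true scale:

```python
def grid_svg(width_mm=190, height_mm=270, spacing_mm=5):
    """Return an SVG string for a printable grid (1 SVG unit = 1 mm)."""
    lines = []
    x = 0.0
    while x <= width_mm + 1e-9:
        lines.append(f'<line x1="{x:g}" y1="0" x2="{x:g}" y2="{height_mm}"/>')
        x += spacing_mm
    y = 0.0
    while y <= height_mm + 1e-9:
        lines.append(f'<line x1="0" y1="{y:g}" x2="{width_mm}" y2="{y:g}"/>')
        y += spacing_mm
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width_mm}mm" height="{height_mm}mm" '
            f'viewBox="0 0 {width_mm} {height_mm}">\n'
            '<g stroke="#b0c4de" stroke-width="0.1">\n'
            + "\n".join(lines) + "\n</g>\n</svg>\n")
```

Write the result to a .svg file, open it in a browser, and print at 100% scale; spacing_mm=1 gives the 1 mm grid mentioned above.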

pierreb wrote:I can't help but notice the decrease in quality started BEFORE the powerpoint/ms-paint era. It seems clear to me the quality decline wasn't MSFT-made. I'm a "graph quality change" denier.

There is a very real chance that the decline started simply with the switch from drawing graphs by hand to using software: lowering the barrier to entry typically means more people doing the thing, with less knowledge about it on average. Supposedly that effect (with regard to GUI design) was part of what damaged Tcl/Tk's reputation.

Wasn't it even common to have a dedicated "grapher" on staff for publication-quality graphs? Kind of like "computer" used to be a job description. I fear the day when people think of a machine when they hear "scientist". Or should I be more worried that by then the same will apply to the term "people"?

pierreb wrote:I can't help but notice the decrease in quality started BEFORE the powerpoint/ms-paint era.

There is a very real chance that the decline started simply with the switch from drawing graphs by hand to using software: lowering the barrier to entry typically means more people doing the thing, with less knowledge about it on average.

The first time I wrote a report on a computer, in college in 1985 or '86, I struggled with my graphs. I found it easy to draw good graphs by hand, with graphing paper, a ruler, and a calculator, but on the computer I was lost. It wasn't until years later, having learned how to do graphs in Mathematica and, later, Excel, that the combination of "computer" and "graph" slowly stopped giving me anxiety and nausea.

In other words, my explanation for the phenomenon (which I'm not convinced exists, but whatever) isn't that graphing became too easy, but too hard. I still can't use gnuplot without a bottle of Valium.

GlassHouses wrote:In other words, my explanation for the phenomenon (which I'm not convinced exists, but whatever) isn't that graphing became too easy, but too hard. I still can't use gnuplot without a bottle of Valium.

Likely a matter of education. I never really learned to do proper graphs by hand in the first place, so I'd be lost trying something like drawing a smooth curve through actual data. I understand there are tools for pulling that off, but I don't think I've ever even seen them.

Sketches, of course, are much easier by hand. But production-quality graphs? No idea how to do them that way. Funny enough, I still find gnuplot the most intuitive solution; I even wrote a simple (unpublished) Python wrapper for it, because I had grown too annoyed by the alternatives.
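For anyone curious what such a wrapper involves: not much. Below is my own toy sketch (not the poster's actual wrapper; the function names are mine) that builds a gnuplot script with inline data and pipes it in over stdin:

```python
import subprocess

def gnuplot_script(data, title="data", term="pngcairo", out="plot.png"):
    """Build a gnuplot script plotting (x, y) pairs as inline data."""
    points = "\n".join(f"{x} {y}" for x, y in data)
    return (f"set terminal {term}\n"
            f'set output "{out}"\n'
            f'plot "-" using 1:2 with linespoints title "{title}"\n'
            f"{points}\n"
            "e\n")  # "e" terminates gnuplot's inline data block

def plot(data, **kw):
    """Pipe the script to gnuplot (needs gnuplot on the PATH)."""
    subprocess.run(["gnuplot"], input=gnuplot_script(data, **kw).encode(),
                   check=True)
```

Then something like plot([(x, x * x) for x in range(10)], title="squares") renders plot.png, assuming gnuplot is installed.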

One thing I noticed is that at just about the same time that TeX came along and made the fonts and layouts of papers look much nicer, gnuplot came along and made the fonts and labels on graphs look much worse.

"[T]he author has followed the usual practice of contemporary books on graph theory, namely to use words that are similar but not identical to the terms used in other books on graph theory."
-- Donald Knuth, The Art of Computer Programming, Vol. I, 3rd ed.

<snipped quotes>Funny enough, I still find gnuplot the most intuitive solution; I even wrote a simple (unpublished) Python wrapper for it, because I had grown too annoyed by the alternatives.

Last time I tried Excel, though, it crashed from too many data points.

The trick to hand-drawn graphs is that you linearize the data. Rather than plotting pH vs. [HA] and trying to figure out where the middle of the S-curve is (to get the pKa), you plot pH vs. log([HA]/[A-]) and use a straightedge to get the line. Then finding the pKa is just a matter of seeing where the line intersects an axis. There are a lot of situations where we don't bother with the linearization step anymore, because modern graphing software (Excel if you're a wizard, but more likely GraphPad or KaleidaGraph) has no problem fitting that sort of curve.
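To make that concrete, here's a small pure-Python sketch (my own toy; synthetic, noise-free numbers for an acid with pKa 4.76, i.e. acetic acid). Once the data is linearized, the ruler's job becomes an ordinary least-squares line, and the pKa falls out as the intercept:

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares straight line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic, noise-free points for an acid with pKa = 4.76 (acetic acid).
pKa = 4.76
ratios = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 10.0]      # [HA]/[A-]
xs = [math.log10(r) for r in ratios]
ys = [pKa - x for x in xs]                           # Henderson-Hasselbalch
slope, intercept = linfit(xs, ys)
# The intercept at log([HA]/[A-]) = 0 is the pKa; the slope should be -1.
```

With real (noisy) data the same fit still works; the scatter around the line is exactly what you'd eyeball on the hand-drawn version.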

bottles wrote:The trick to hand-drawn graphs is that you linearize the data. Rather than plotting pH vs. [HA] and trying to figure out where the middle of the S-curve is (to get the pKa), you plot pH vs. log([HA]/[A-]) and use a straightedge to get the line. Then finding the pKa is just a matter of seeing where the line intersects an axis. There are a lot of situations where we don't bother with the linearization step anymore, because modern graphing software (Excel if you're a wizard, but more likely GraphPad or KaleidaGraph) has no problem fitting that sort of curve.

There are (at least) three different reasons for drawing a graph:

1. To enable some quantity, e.g. a constant of proportionality, to be extracted from noisy data using the human eye-brain combination
2. To see how well the data supports the hypothesised relationship
3. To illustrate visually the relationship between variables

The linearisation technique is good for (1) and (2), since it's easier for a human to eyeball the best straight-line fit, and it's easy to see how well the data points cluster around that line. Straight lines are something we've evolved to see; they can be generated easily with a ruler, and the same ruler will work whatever the parameters (gradient and intercept). Once you get into curves, there are too many parameters and you start to need French curves (remember them!) and the like. As you say, the best fit (1) can be found in an objective way, as can the quality of the fit (2); it's now very easy for computers to do this, and given how open to abuse data can be, you really want to be using an objective method. The graph can still be useful as a visual way to illustrate the quality of fit.

Linearisation is essentially the opposite of what you want for (3), since the whole point there is to see whether the curve flattens off, asymptotes, peaks, where it's steepest, etc. Showing that some function of the variables results in a straight line literally removes all the interesting features from the relationship; it tells you no more than the hypothesised mathematical equation.
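(On the "objective method" point: the usual quality-of-fit number takes a few lines to compute yourself. A toy sketch, with a made-up hypothesised line y = 2x + 1 and made-up measurements:)

```python
def r_squared(ys, fitted):
    """Coefficient of determination: 1 means the model explains the data
    perfectly, 0 means it does no better than the mean of the data."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fitted))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Measured points vs. the hypothesised relationship y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
r2 = r_squared(ys, [2 * x + 1 for x in xs])
```

Here r2 comes out just shy of 1, which is the objective version of "the points cluster tightly around the line".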

orthogon wrote:Linearisation is essentially the opposite of what you want for (3), since the whole point there is to see whether the curve flattens off, asymptotes, peaks, where it's steepest, etc. Showing that some function of the variables results in a straight line literally removes all the interesting features from the relationship; it tells you no more than the hypothesised mathematical equation.

Most of the graphs I encounter are spectra of some kind. For those you're interested in where the peaks are, how wide and high they are, and sometimes what shapes they have; and in whether something that looks like a single peak actually is one, or several close together. This is normally done by fitting a sum of modelled peaks to the data, and then looking at a (possibly smoothed) plot of the raw data to check whether the fit has found a proper global minimum and explains all the features of the data.
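Fun aside: linearisation sneaks back in even for peaks. For a single clean Gaussian, ln(y) is a parabola in x, so a plain quadratic least-squares fit recovers the centre and width without any iterative fitter (this is sometimes called Caruana's method). A pure-Python sketch on noise-free toy data; real overlapping-peak fitting still wants a proper nonlinear fitter:

```python
import math

def quadfit(xs, ys):
    """Least-squares parabola y = a + b*x + c*x**2, via the 3x3 normal
    equations solved by Gauss-Jordan elimination with partial pivoting."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    a, b, c = (M[i][3] / M[i][i] for i in range(3))
    return a, b, c

def gaussian_peak(xs, ys):
    """Fit a single Gaussian peak by linearising: ln(y) is a parabola in x
    (Caruana's method). Returns (centre, sigma, height)."""
    a, b, c = quadfit(xs, [math.log(y) for y in ys])
    centre = -b / (2 * c)
    sigma = math.sqrt(-1 / (2 * c))
    height = math.exp(a - b * b / (4 * c))
    return centre, sigma, height

# Synthetic single peak: centre 2.0, sigma 0.5, height 3.0, no noise.
xs = [1.0 + 0.1 * i for i in range(21)]
ys = [3.0 * math.exp(-((x - 2.0) ** 2) / (2 * 0.5 ** 2)) for x in xs]
mu, sigma, height = gaussian_peak(xs, ys)
```

The log step does amplify noise in the peak's tails, which is one reason real spectral fitting uses weighted or nonlinear methods instead.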

Honestly, I wasn't even thinking of graphs as a tool to extract data from (except for qualitative understanding). Cases where I used them that way came up sometimes in private-interest stuff, mostly when data was being grossly misinterpreted to sell something (e.g. the claim that SSDs will eventually be cheaper than HDDs, which doesn't match the data, or that tablets/PCs were supposed to have vanished by now).