Topic: Creating Animated GIFs

It's been claimed that no one ever looks at 99.999% of all archived satellite scenes, even the cloud-free ones. It's also true that 99% of the web sites archiving single daily satellite records never put together a simple time series movie that could give visitors a quick preview of what was in the collection.

For example, NOAA's ASCAT archive of 9,040 gifs in a single directory doesn't see many visitors who want to 'download them all' with wget or a browser plugin. However, only 1,820 of these show the Arctic; still, at 547 KB each, that is 995 MB. A movie would be much, much smaller, depending on the quality retained, rescaling, restricting to even-numbered days or weekly frames, and the codecs used.

It's very much worth doing but would it be distributable?

The ASCAT time series for 2012-18 provide an excellent direct record of ice pack motion, though weather artifacts in individual images can be quite distracting. The gif below looks at using UH AMSR2 double-masking of land and water, leaving only ASCAT ice (which benefits greatly from normalizing and adaptive contrast enhancements).

The land mask is a one-time stationary product, made here from the 6.25 km AMSR2. That netCDF does not include a Geo2D file, so it cannot be redisplayed in Panoply without its lat-lon lines. Here the Gimp color picker was used, along with a single-pixel 'grow selection', to replace the distracting red with a bland land-mask color. Open water varies day to day but can be color-picked once for the whole time series when the AMSR2 are tiled. This has to be done at the original resolution (i.e. prior to rescaling to fit ASCAT's scale).

It is easy to promote the double mask to a tiled layer floating above the ASCAT tile and apply it after the latter is enhanced in ImageJ (which mostly lacks alpha-channel masking). Gimp cannot de-tile beyond 100 frames, due to a limitation in a plugin, nor save out as a video, so those operations have to be finalized in ImageJ.

Having done each year from 2010 to 2018 separately (which works better seasonally than per calendar year) to keep intermediate files small, it is easy to combine these within ImageJ, either end to end as one continuous multiyear roll, or gridded into, say, a 2x3 rectangle, which affords simultaneous display of the same date in each year.

This would make for a gigantic gif file: 365 frames x 6 years at 700 x 650 pixels is a lot to display, even if reduced from RGB to grayscale and frame-differenced for gif compression. Using movie codecs, the file size becomes manageable, though display in, say, QuickTime would need prior testing. It's not clear whether, or what, the forum would show.

Movies can also be extended by adding clips, but that would only help in making a long-term roll; there is no juxtaposition option. QuickTime Pro 7.6.6 can do a lot more, but while AAPL still distributes the Pro software (http://support.apple.com/kb/dl923), it no longer sells the enabling key, though these are sometimes offered free online or sold on ebay.

QuickTime Pro 7 can edit, layer, change all kinds of metadata, and add and delete tracks; QT 10 can do less. Pro 7 has the controls outside the movie, while 10 has them covering part of the movie. Pro 7 can add effects and open a wider range of codecs, whereas QT 10 has to convert a lot of formats. Pro 7 also has excellent A/V tools to adjust video brightness, color, contrast, tint, playback speed, audio volume, audio balance, bass, treble, pitch shift, and playback.

To crop image batches identically (to the Arctic Ocean plus a bit of the Fram), use 320x350 as the lower crop on ASCAT and 375x375 as the upper crop. After reducing to 8-bit grayscale, 365 images of that size require 48.7 MB, or 170 MB if rescaled to 700x700. Since ASCAT is available in the same format back to 2010, those numbers have to be multiplied by 8 to reach 31 Dec 2017. That is well within the RAM limits of ImageJ (and my Mac), but some menu operations might be 'challenged', allowing no room for error. Hence rescaling is best deferred to the very end of the process.
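For anyone checking these numbers, a quick back-of-envelope sketch (8-bit grayscale means one byte per pixel, uncompressed):

```python
# Storage estimates for uncompressed 8-bit grayscale image stacks
# (1 byte per pixel; sizes reported in MiB).

def stack_mib(frames, width, height):
    """Uncompressed size of a grayscale image stack in MiB."""
    return frames * width * height / 2**20

one_year_native = stack_mib(365, 375, 375)    # upper Arctic crop
one_year_rescaled = stack_mib(365, 700, 700)  # rescaled for display

print(f"365 frames at 375x375: {one_year_native:.1f} MiB")
print(f"365 frames at 700x700: {one_year_rescaled:.1f} MiB")
print(f"8 years at 375x375:   {8 * one_year_native:.1f} MiB")
```

These come out near the 48.7 MB and 170 MB figures above; the small differences are just MB-vs-MiB rounding.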

The 9-year gif for today below is 3.7 MB; movies don't work well when so few frames are available. The still png showing all 9 years has 2010 in the upper left and ends with 2018 in the lower right. The goal here is to animate this still image into a good quality movie, preferably at native ASCAT resolution (375x3 --> 1125x1125, or even better 2250x2250), to allow all years to play simultaneously in contrast-enhanced mode, possibly with masked and false color versions.

Note 2018 is not unusual in having thick CAA floes that have rounded the Beaufort bend and strung out up into the Chukchi (where they will melt out next summer): six of the nine years show this same pattern for this Jan date.

Note the widespread confusion between NSIDC's averaged monthly motion at each point (vector sum of daily motion) and actual monthly floe trajectories which are line integrals of the daily displacement of the floe from its current position. It's not possible to get at the latter using the former.
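A toy numerical illustration of that distinction (the velocity field and numbers here are invented, purely to show the principle): the vector sum of daily motion at a fixed grid point can point somewhere a real floe never goes, because the floe samples the field at its *current* position each day.

```python
# Toy example: fixed-point vector sum vs. floe trajectory (line integral).
# A spatially varying daily velocity field: eastward for x < 5, westward
# for x >= 5 (think of a convergence zone). All numbers invented.

def daily_velocity(x):
    return (1.0, 0.0) if x < 5 else (-1.0, 0.0)

# Floe trajectory: integrate the field along the moving position.
x, y = 0.0, 0.0
for _ in range(30):
    u, v = daily_velocity(x)
    x, y = x + u, y + v

# "Averaged motion" at the fixed start point x=0: the field there is
# constant eastward, so the 30-day vector sum is simply (30, 0).
mean_displacement_at_origin = (30 * 1.0, 30 * 0.0)

print("floe ends near", (x, y))            # trapped at the convergence
print("fixed-point sum says", mean_displacement_at_origin)
```

The floe gets stuck near the convergence line at x = 5 while the fixed-point average claims 30 units of eastward drift; the former cannot be recovered from the latter.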

Just testing a preliminary four-year array of ASCAT: 121 days from the Sept minimum until 13 Jan. The forum software is accepting the .mov uploads but not displaying them. I believe further admin enabling is needed, as was done for mp4 and youtube.

Back to converting mov to mp4 (which displays for some but not all people). It sometimes takes a reload to get the actual image going; on the first go-round it only shows the controller. But it is going nicely now. It does not remember my preferences (loop, play, controller off) and always goes back to the default (controller on).

It is fairly wide though at 750x750 which is the natural scale for ASCAT's Arctic Ocean images. The dates need to be offset a bit to the right so that they are more readable over the snowy white background of Greenland. Four years takes about 3 MB so the full eight year record is manageable saving at 'normal' quality. Probably best as 2x4 array. It looks like 8 years at 365 days is too large but 8 years at 182 or 121 would both come in under 10 MB which the forum has accepted in the past. NSIDC uses 52 days per year or 1 date out of 7 for sea ice age animations.

Yes, very helpful. Between windows and mac, it should work for most people here. The key seems to be at my end, processing the .mov saved out from ImageJ to .mp4 using a modern free online converter like https://movtomp4.online/

Now I'm wishing there were some way to specify a delay before it begins another loop. One could have it not loop at all and press the start controller again each time. I don't think it would work to add repeats of the final frame of the parent 65 MB gif animation, because the codecs would probably see nothing changing and suppress them. I suppose credits or ads or annotations or voice-over or music could be added, but I'm not going there at this point.

More experimentation on the early stages first. The new attachment has been downscaled to 650x650; that seems to max out the room available in the forum width. The second version, in indexed color, does well on thicker ice, but the green needs to be replaced. The process has also wiped out the day numbers.

Introduction. Climate science data is mostly stored and distributed as netCDF (.nc) files. These bundle multiple data files that need to stay together. They are almost always geolocated, each data point being tied to its latitude and longitude (plus ocean depth or atmospheric height if applicable).

The geolocated files within a netCDF bundle are called Geo2D files; they are the only ones that Panoply can use to display the data on a map projection. Typically each project creates a new netCDF bundle each day as new satellite imagery is processed by product pipelines and models, though some like bathymetry are one-off fixed resources.

It is quite difficult to un-bundle netCDFs and re-assemble a time series of a particular Geo2D file as a single new netCDF. The tool described here provides virtual un-bundling, which is all that is needed in most situations.

Panoply is easy-to-use free software used to make maps that display climate data. Some 52 parameters control the map's appearance, notably the map projection, its center and horizon, data range displayed, and color palette.

After some interactive experimentation, choices can be saved out ('Export CL Script') as a human-readable text (file type .pcl) that can regenerate the map using the PanoplyCL command line tool. The map can in effect be edited at the level of the script by changing parameter settings. The tool here automates that process on a large scale to produce a succession of related maps, usually for an animated time series.

A Panoply script contains 59 lines of which 7 are optional explanatory comments, 45 control map and legend appearance, and 5 provide text boxes that appear outside the map itself.

These latter have limited controls on font, size, placement and character length but can nonetheless be filled with data appropriate to each frame, such as date, time, data-range restrictions and comparative or summary statistics.

Text is rendered (dithered, not retained as vector layer) irrevocably onto the map periphery unlike map colors which are exactly those of the scale legend. Text can be harvested later into a small consolidated box for better control over final placement, for example over an unused portion of the map.

Panoply places an additional line of text at the bottom showing data range extremes; this can be turned off but not otherwise altered. In particular, it cannot show palette squeezes (range restrictions). The 'fit to data' button in the scale menu is not represented in the script as a central footnote control but as the true/false choice below. It lies on the same line as the left and right footnotes.

The initial script can be modified by 'mail merging' in a list that increments parameter placeholder settings in some useful way, for example over a date range.

The tool described here concatenates all these variant scripts into a single text file that, when run as a large PanoplyCL script, generates a separate output map (.png image) for each line in the list, each according to its parameter settings. These pngs make up the frames of the subsequent animation.
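The merge step itself is simple enough to sketch in a few lines of Python (the placeholder names, template text and file paths below are hypothetical, standing in for whatever the real scripts use):

```python
import csv, io

# Minimal sketch of the 'mail merge': a PanoplyCL template containing
# [bracketed] placeholders is filled once per CSV row, and the filled
# copies are concatenated into one big script.

template = """var ncdata = panoply.openDataset ( "[path]" );
var ncvar = ncdata.getVariable ( "z" );
var plot = panoply.createPlot ( "lonlat", ncvar );
plot.saveImage ( "PNG", "[frame].png" );
"""

def merge(template, csv_text):
    rows = csv.DictReader(io.StringIO(csv_text))
    chunks = []
    for row in rows:
        script = template
        for field, value in row.items():
            script = script.replace("[" + field + "]", value)
        chunks.append(script)
    return "".join(chunks)

csv_text = "path,frame\n/data/day001.nc,frame001\n/data/day002.nc,frame002\n"
big_script = merge(template, csv_text)
print(big_script)
```

Run as one PanoplyCL script, each filled copy would save out its own png frame.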

In the example, if the list pointed to 365 netCDF files, a year's worth of daily images would be created. These images, uploaded into ImageJ free software, can be saved out as a .gif animation or .mp4 video.

This section goes through real-world examples illustrating how the mail merge tool automates the production of climate science data animations by preparing scripts that PanoplyCL can run to generate the maps that make up the frames.

For brevity, only the relevant lines of a PanoplyCL script are shown. Much longer lists (many hundreds) would normally be used instead of the 4-5 shown in the examples (which could easily be generated manually).

You can best follow along by replicating the examples which requires that the respective netCDF files plus Panoply, PanoplyCL and their manuals have been downloaded, installed, and assimilated.

Example 0. This example just shows how the merge tool can generate a list of download urls for ASCAT daily imagery. Those are available from 2010 on: 2,948 files totaling 2.4 GB up to mid-January 2018. Every 5th day, however, might give a simplified but completely adequate depiction of ice motion. The desired dates are easily built by fill-down in a spreadsheet, since ASCAT uses day-number instead of day-month.

Because ftp isn't properly enabled at this host site, the files are best retrieved by 'download it all' or 'bulk url opener' web browser plugins. About 1 in 50 ASCAT (or AMSR2) days are satellite malfunctions, giving 404 duds instead of real images; these can be weeded out by their smaller file size.
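The day-number fill-down is just as easy in a few lines of code as in a spreadsheet. The url pattern below is a made-up placeholder; substitute the real archive path:

```python
# Sketch of Example 0: build an every-5th-day url list using day-of-year
# numbering. BASE is a hypothetical pattern, not the real archive url.

BASE = "https://example.org/ascat/msfa-NHe-a-{year}{doy:03d}.sir.gif"

def url_list(year, step=5, days=365):
    """One url per frame, every `step` days, day-numbers zero-padded."""
    return [BASE.format(year=year, doy=d) for d in range(1, days + 1, step)]

urls = url_list(2017)
print(len(urls), "urls")
print(urls[0])
```

Paste the output into a bulk url opener, or feed it to wget where that works.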

Example 1a. The netCDF bundle here from ARDEM just provides ocean depth near the Bering Strait. It consists of a 1D latitude file x and a 1D longitude file y delimiting the ocean domain mapped, plus a single Geo2D file z providing geolocated bathymetry (and land DEM). Using these xyz coordinates relative to the earth's surface and current sea level, Panoply can draw a map using colors to show depth.

This netCDF could, but does not, provide a time coordinate for past and/or future sea levels that would allow Panoply to generate an animation of inundation and recession during the late Pleistocene/Holocene. However, we can use the 'mail merge' tool and auxiliary data to create these.

It all begins with map experimentation. Here, that shows that while the continental shelf is quite wide in the East Siberian Sea it falls off rapidly to great depth at about 150 meters. Further, land elevations in the netCDF aren't of interest in this context, though they do indicate river drainages.

This means the default 'fit to data' range settings of -9677 and 5965 (units aren't specified) are far too broad to make good use of the color palette so they must be set. The Arctic Ocean reaches its greatest depth of 5669 m in the Fram at the Malloy Deep, the Litke trench attains 5,449 m, the average depth is ~1,000 m and 60% is less than 200 m.

A simple spreadsheet bumping depth in steps of 5 meters out to the edge of the shelf will show the flooding of Beringia since the Last Glacial Maximum. The key placeholders in the PanoplyCL script are called "scale-min" and "scale-max", but directory file paths and so forth are also important, as are useful names for the output graphic frames. (More could be done in the title and subtitle text boxes.) The placeholders are put in brackets so the mail merge tool knows what to put where as it goes through its little csv database:

var ncdata1 = panoply.openDataset ( "/Users/ARDEMv2.0.nc" );
var ncvar1 = ncdata1.getVariable ( "z" );
var myplot = panoply.createPlot ( "lonlat", ncvar1 );
myplot.set ( "scale-min", -140 );
myplot.set ( "scale-max", 0 );
myplot.saveImage ( "PNG", "Beringia at -140m.png" );
etc.

Example 1b. This is just a minor variation on the previous example that runs through 20,000 years of (sea level, year) pairs manually extracted from a careful paleo-reconstruction graph hosted on wikipedia. Because the rate of sea level rise accelerated during the early Holocene and then flattened out, whereas the time interval between frames stays constant, a jerkier animation results than the one above.

Here the primary innovation is placing the varying data within the scale caption of Panoply. This can save quite a bit of 'post-production' custom work in Gimp because vector text and raster imagery don't size well together. Note the dates can't be presented with commas as in 20,000 because those are reserved for field separators though there's an easy workaround using two fields.
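The two-field comma workaround can be sketched concretely: split the year at the thousands, let the template place a literal comma between the two fields. The depth/year pairs below are invented placeholders, not values from the reconstruction:

```python
# Sketch of the Example 1b comma workaround: commas are CSV field
# separators, so "20,000" is split into two fields ("20" and "000")
# and the template re-joins them around a literal comma, e.g.
#   myplot.set ( "title", "Sea level [kyr],[yr] BP" );
# The (years, depth) pairs here are invented for illustration.

def caption_fields(years_bp):
    thousands, rest = divmod(years_bp, 1000)
    return str(thousands), f"{rest:03d}"

rows = []
for years_bp, depth_m in [(20000, -120), (15000, -100), (10000, -40)]:
    kyr, yr = caption_fields(years_bp)
    rows.append(f"{depth_m},{kyr},{yr}")

print("\n".join(rows))   # depth, thousands, remainder -- three csv fields
```

Each output row carries the scale setting plus the two caption fields, ready to paste into the merge database.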

Imagine now being interested only in the snow surface temperature Geo2D. The archive here dates back to 15 Oct 2010 but takes a melt season break each year between day 228 and day 301. That's 1,408 netCDF bundles at 22.34 MB, for 31.4 GB of download, of which perhaps 8% is project relevant.

UH provides a convenient wget file for command-line mode. A subset of that, say weekly, can be made from that template following instructions in Example 0.
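Thinning the provided wget list to a weekly subset is a one-liner; the filenames below are illustrative, not the real UH paths:

```python
# Sketch: thin a wget url list down to a weekly subset. Assumes one url
# per line, already in date order; filenames are hypothetical.

def weekly_subset(lines, step=7):
    return [line for i, line in enumerate(lines) if i % step == 0]

# Stand-in for the real UH wget file contents (four weeks' worth):
lines = [f"http://example.org/day{d:03d}.nc" for d in range(1, 29)]
subset = weekly_subset(lines)
print(len(subset), "of", len(lines), "urls kept")
```

Write the subset back out and feed it to wget in command-line mode as before.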

The settings below fix the map projection to stereographic centered on North Pole in standard 'Greenland down' position with the Arctic Circle as horizon. This allows your map to be rescaled to match maps you might find online for which no netCDF file is provided.

You can adjust these numbers in the friendly Panoply interface; the four lines below show up in the exported CL script. Two other common settings emphasize the Bering Straits (to show conditions in the Chukchi, Beaufort, and East Siberian Sea) or the Fram Strait/Svalbard/Severnaya Zemlya area.

Suppose now you view the -45.0 as a placeholder. Making a list that increments it by one degree 360 times, the new mail merge tool will output a concatenated script with the effect of displaying ice data on a rotating globe (restricted to the Arctic).

If the path to the file on your hard drive and the date associated with the netCDFs are also varied, the display will increment the data by a day for each increment of rotation. In other words, construct a 3 row, 365 column table in a spreadsheet, save as comma separated variables (.csv format), and paste into the tool.
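That table is easy to generate programmatically as well. Sketched below with one row per frame (the transpose of the 3-row spreadsheet layout described above; the merge tool only cares about the field values). Paths, dates and the longitude start are illustrative:

```python
import csv, io

# Sketch of the rotating-globe merge table: one record per frame
# carrying the centre longitude, netCDF path and date label.
# All values below are illustrative placeholders.

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["lon", "path", "date"])
for day in range(1, 366):
    # One degree of rotation per daily frame, wrapped to [-180, 180).
    lon = ((-45.0 + (day - 1) + 180) % 360) - 180
    writer.writerow([f"{lon:.1f}", f"/data/day{day:03d}.nc", f"2017-{day:03d}"])

table = buf.getvalue()
print(table.splitlines()[0])   # header row
print(table.splitlines()[1])   # first frame: starts at -45.0
```

Save as .csv and paste into the tool exactly as with the spreadsheet version.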

Example 3. It can become tedious testing dozens of palettes trying to find the one that best displays a particular data set. The automation option generates a slide show of several dozen candidate palettes by varying an initial map. According to the PanoplyCL manual, a list of what it currently has in stock is given by the command printColorTableList().

Alternatively, ImageJ will let you try palette collections: open your Panoply map in any palette, then drag any internet palette (.lut) onto the tool bar to exchange color tables, over all the frames in a stack if in montage view. That way, a new palette (to be used just once maybe) need not be added to the Panoply set which is already cluttered.

Example 4 It is not so easy to harvest and re-position text from Panoply's 6 text input boxes once that text has been rendered onto the map background. However it is straightforward to consolidate and reposition text over unused portions of the map.

This is key, in conjunction with plot size 190, in arriving at a final product that fits within the forum's 700x700 pixel size constraints yet retains maximal map resolution. (In other words, the color palette and text lines ordinarily take space away from what's available to the map.)

The trick is to make a one-row montage of the frames, duplicate it, select a line of text across the entire montage, delete its complement to transparency, position it as a new layer over the map montage, then set its Gimp mode to 'darken only'. Since nothing can be darker than black text, the text pastes through; since nothing is lighter than pure white, the white gives way to whatever lies underneath on the map.

Repeat the design composition with the other lines of text (and the scale color legend). Some of these lines may benefit from vertical resizing (ie, uncoupled from horizontal). Then, rather than flatten or merge, make 'new layer from visible'. This top layer can then be re-sliced in Gimp (100 slices max) or ImageJ back into animation frames.

The animation shows the gain: the map circle is originally 686x686 pixels but reducing the initial Panoply map to forum maximum reduces the 'content circle' to 520x520. This means only 57% as many pixels remain available to display data.

For a given PanoplyCL 'plot size', the text layers will be in predictable positions. Consequently the process can be automated.
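The 'darken only' compositing at the heart of this trick is just a per-pixel minimum of the two layers, which is why it is so amenable to automation:

```python
# Sketch of Gimp's 'darken only' mode on 8-bit grayscale rows: per
# pixel, the result is the minimum of the two layers, so black text
# (0) survives and pure white (255) drops out to the map underneath.

def darken_only(text_layer, map_layer):
    return [min(t, m) for t, m in zip(text_layer, map_layer)]

# 255 = white background around the text, 0 = black text glyph pixels;
# the map layer here is arbitrary mid grays.
text_row = [255, 0, 0, 255, 255]
map_row  = [120, 130, 140, 150, 160]
print(darken_only(text_row, map_row))   # text pixels win where black
```

With the text-layer positions predictable for a given plot size, a script could apply this min() over every frame without touching Gimp at all.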

Just some remarks on optimally processing ASCAT polar imagery to depict ice pack movement and making long time series, as restricted by forum file sizes and display width.

First note that the process by which the scatterometer instrument on the satellite makes these daily images from its orbital swaths is quite complicated but doesn't concern us as daily images in the archive are readily interpretable in terms of brightness contrasts between Arctic islands, floes, stable features, open water and so on.

Second, regardless of the physical meaning of the calibrated scattering values, the images download as color pngs but are actually one-channel 8-bit grayscales. These make such lopsided use of the 256 available grays on Arctic ice scenes that they benefit greatly from basic contrast stretching.

For consistency, this is best done simultaneously for all frames on a montage, bringing them into the full range of grays distinguishable by the eye. Here [10,245] is better than [0,255] because the extreme values are wasted anyway, and the slightly narrowed gamut leaves room for extra colors (here 20, which might be reserved for tracking lines) while staying within the 256 colors a gif animation is capable of.
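The linear stretch itself is simple; a sketch on a single row of made-up pixel values:

```python
# Sketch of the linear contrast stretch described above: map the
# montage's observed gray range onto [10, 245], leaving headroom at
# both ends of [0, 255] for reserved colors.

def stretch(pixels, lo=10, hi=245):
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        return [lo] * len(pixels)
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

# A lopsided histogram crammed into [60, 110], as raw ASCAT grays
# tend to be (values invented for illustration):
frame = [60, 70, 85, 100, 110]
print(stretch(frame))
```

In practice the min/max would be taken over the whole montage, not per frame, so that the stretch is identical across the series.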

Modest gains may be had masking out black satellite data holes and bright land masses such as Greenland as they distort the histogram being stretched.

Third, ASCAT circles of the Arctic have a limited intrinsic size of 1154 x 1154 pixels which extend out to a 45º horizon (ie Lake Michigan) and so don't offer that many pixels for the Arctic Ocean per se. An enveloping rectangle large enough to capture the AO and some of the Bering Sea, Fram, Nares, Barents, CAA channels and Siberian islands has dimensions of about 400 x 345 = 138,000 pixels.

Not counting pixels in the land mask, the relevant display reduces to 89,147 pixels or 64.6% of the total to depict the 9 million sq km of Arctic Ocean proper. Thus a given pixel represents about 10 km x 10 km of sea ice.

The basic ASCAT ocean crop will enlarge to 700 x 606 for forum purposes. However, special topical areas like Fram export don't need to show the whole ocean. The question is how much small regions can be enlarged without pixellating. Here ImageJ provides bilinear and bicubic interpolation; the latter considers a 4 x 4 neighborhood around the target pixel rather than 2 x 2, and thus may be slightly preferable.

Fourth, regional contrast adjustment can bring out local features that lack sufficient contrast. It wouldn't be possible to enhance these with a global contrast stretch because what worked in one place in the image might work poorly in another. The ImageJ tool called CLAHE can do this at various levels on variously sized boxes. There's relatively little to be gained here by masking.

Color tables are inconsistently implemented in graphics software, so gifs become unportable between programs. For this reason, after making a time series in ImageJ and false-coloring it, the gif should be montaged out linearly and copied over to Gimp, where it can be re-sliced back into a gif.

Here, gifs are better because they preserve individual days as separate frames; however, file sizes become far too large. Movies such as mp4 greatly reduce file size but degrade the data internally with lossy codecs. Even so, the result still works very convincingly to the eye.

The mp4 below shows original ASCAT images alongside globally and regionally contrast-corrected versions -- there's a vast improvement in feature recognizability. This instrument can see a lot better into the ice than it's given credit for.

It's time for another update on the software project and website I've been working on in collaboration with A-Team. The software now has a name, which is 'floe'. You can see the current state of affairs at http://keytwist.net:8000. (temporary URL as I'm still making major changes.) The whole site is generated automatically with a single command, and will do whatever work is necessary to bring it up to date with the latest source data.

1. as a read-only repository of interesting* graphics, both current and historical, that are either unavailable or hard to access elsewhere. (*interestingness is under construction, while I've been focused on getting the platform working)

2. as an interactive web tool allowing anyone to create their own graphics from public data sources, without having to install anything or become experts on every detail of the process.

#1 is embodied in the site linked above. The floe program downloads 49 GB of 2017 data files from the ESRL FTP site, and crunches them into 292 HTML files and 13 GB of images. It takes about an hour and 40 minutes (on a fairly low-end linux box in the Amazon cloud) to run all of 2017 from scratch. Once caught up, handling each new day's data will take a few minutes.

The tabs on top show different ways that floe can add value. "ESRL Images" simply re-hosts RASM-ESRL products that are normally buried in compressed archive files, makes thumbnails, applies friendly names, organizes by date, and so on.

"New Products" uses the same ESRL animations, but makes new animations by disassembling them into individual frames, and recombining the frames into new animations that show both history and forecast frames. This is only a proof of concept and much more interesting things can be done.

"Panoply Experiments" goes further into the custom realm, and shows that we can run PanoplyCL on the server against .nc files published by ESRL, with a custom template script that we fill in with any parameters we choose, to generate new images. (The next step will be to combine these daily images into entirely new animations.)

That's the "static repository" side. I've also done some work on the "interactive web tool" side, which isn't publicly available yet (it's protected by a login screen so that bots don't find it and see how high they can run up my AWS bill). I built a "mail merge" (template filling) tool that A-Team has been discussing in this thread - the next step will be to actually run the resulting PanoplyCL script on the server on demand. This can be taken much further - both as tools for experts, and as other tools that will allow anyone to build custom graphics by choosing options on a form.

So this has evolved into quite a large project, and I'm definitely going to continue and see where it leads, and what this community can think of to do with it.

Very impressive. This is the way to go: an over-arching architectural enabling vision, rather than endless ad hoc graphics and futile recommendations that thousands of people go off on difficult learning curves they have neither time nor interest for, despite having considerable end-user talent if they could only get there.

Meanwhile I am still delving into the human perceptual side of optimal presentation of scientific information, the idea being that numerical time series are basically really boring but if communicated effectively can suggest significant hypotheses of my favorite kind (known in advance to be true) that may still have to be ground out later by some conventional objective process -- because it's not good enough to say hey look at this, obviously such and such is happening -- but at least it will be time well spent.

We are sort of stuck in a single time zone, a day goes by, there is nothing we can do to speed it up or slow it down. However a lot of processes in nature are taking place at vastly slower paces. By a simple compression of frame rate, we can bring these home, seeing them in a way not possible in ordinary human experience (eg Chasing Ice).

In my view, that could synergize with moving off into more effective color spaces -- the data may just be shades of gray (rods) but we don't have to look at it that way (cones). There's more than meets the eye to the retina -- amacrine and bipolar cells post-process the data before it even hits an integrative ganglion, providing all sorts of no-brainer hard-wired features such as pattern recognition and motion sensing. The coding dna for that has been under development since we diverged from jellyfish in the early Cambrian.

It looks to me like project-specific automated design of color schemes, while not the ultimate optimum, could probably pick the low-hanging fruit. These would be based first on the global histogram (overall usage of the various grays: means, std deviations, variances, distribution etc), secondly on how these grays are statistically distributed in the plane, and thirdly on how they are changing in a time series stack. Of course, jpeg and mpeg have been there, done that in terms of designing compression, the eg standing for experts group.

At a practical level it suffices to fit a color lookup table to the graphic. Two examples are shown below. These show a linear grayscale of 256 values and what each one of them is going to be replaced with by going over to indexed color.
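Applying such a lookup table is a one-line mapping once the 256-entry table exists. The ramp below is an arbitrary stand-in for the hand-tuned tables under discussion:

```python
# Sketch of fitting a color lookup table: each of the 256 grays is
# replaced by an (r, g, b) triple from a 256-entry LUT. This LUT is an
# arbitrary two-segment ramp, purely illustrative.

def make_lut():
    lut = []
    for g in range(256):
        if g < 128:                        # dark grays -> blue ramp
            lut.append((0, 0, 2 * g))
        else:                              # bright grays -> warm ramp
            lut.append((2 * (g - 128), 2 * (g - 128), 255))
    return lut

def apply_lut(gray_pixels, lut):
    return [lut[g] for g in gray_pixels]

lut = make_lut()
print(apply_lut([0, 127, 128, 255], lut))
```

A bimodal FYI/MYI histogram would want the break placed at the trough between the two roughness peaks rather than at 128.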

For unknown reasons, the two below are peculiarly effective in presenting (post-enhancement) Ascat ice motion imagery. Here it has to do with the bimodal peak in the sea ice roughness histogram, FYI vs MYI, and persistence for months or even years as cohesive units aka recognizable features, despite the viscoelastic properties of Arctic sea ice. Or rather, it's all about exploiting the latter.

For now, I am just looking for a digital assistant that will help me screen thousands of these until I see either an end point in improvement or get some idea of how to design a good one from scratch, for any project, given its data distribution, from an algorithm. Ideally the data hosters themselves would provide several as part of their archive.

A-Team, for automating the color scheme, maybe it would be useful to show the image, allow the user to select a sub-region of special interest (and/or difficulty in seeing what's going on in that particular spot) and tell the system to optimize for that region. You would want a weight parameter to tell it how much to favor that area at the expense of the rest of the image. Of course, first we would have to get the hands-off automated version working, which is a good challenge in itself. It seems similar to making a good topo map - the contours become less useful in areas where they're either too close together, or too far apart.

Do any of the free tools you use offer histogram and statistics extraction from image files? That needs to go on my to-do list.

Meanwhile I am still delving into the human perceptual side of optimal presentation of scientific information, the idea being that numerical time series are basically really boring .....

The use of graphics versus tables has a long history. Florence Nightingale went as a nurse to the war between Russia and Great Britain in the Crimea in the mid-1850's. Apart from basically inventing the modern profession of nursing, she was appalled by the mortality caused by basic poor hygiene and sanitary conditions in the hospitals and the army camps.

After the disasters of the Crimean war, Florence Nightingale returned to become a passionate campaigner for improvements in the health of the British army.

She developed the visual presentation of information, including the pie chart, first developed by William Playfair in 1801. Nightingale also used statistical graphics in reports to Parliament, realising this was the most effective way of bringing data to life.

I guess that makes you and dryland part of a long and honourable tradition.

Interesting, so disruptive technology goes back a ways. I was so excited to get a 480-pixel B/W monitor in 1997; color meant mailing cmyk plates off to Korea for printing. Today, senior PIs read printed pdfs en route to meetings, with each frame of animation (if any) showing only as text. So it only penetrates to a limited extent.

Quote

for automating the color scheme, maybe it would be useful to show the image, allow the user to select a sub-region of special interest and weight the system optimization for that region.

That gets mixed in with crop, rescale and masking. For example, Ascat can take at most a 2.5x enlargement. Since the whole AO plus some Bering and some Barents takes 380 pixels of width, that is too wide for the forum, not to mention exploding the file size for, say, a year of mp4. So normally the user would specify corners for a roi crop.

Now the roi itself might only occupy half the rectangular crop, for example the Chukchi with extraneous statistics from surrounding land. Most of the tools allow restriction per instructions from a companion image (usually a binary mask). That mask need not be all-or-none, though; it can be any grayscale weighting (8-bit) in Gimp.
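A grayscale weighting feeds naturally into a weighted histogram, which is what the optimizer would actually consume. A small sketch (pixel and weight values invented):

```python
# Sketch of a grayscale-weighted histogram: instead of a binary mask,
# each pixel's 8-bit weight (0..255, as in a Gimp layer mask) scales
# its contribution, so blurred or posterized masks fade regions in and
# out of the statistics smoothly.

def weighted_histogram(pixels, weights, bins=256):
    hist = [0.0] * bins
    for p, w in zip(pixels, weights):
        hist[p] += w / 255.0     # full weight counts as one pixel
    return hist

pixels  = [10, 10, 200, 200]     # two dark (FYI), two bright (MYI)
weights = [255, 255, 64, 0]      # favor the darker pixels
hist = weighted_histogram(pixels, weights)
print(hist[10], hist[200])
```

A contrast optimizer run on this histogram would then allocate most of the output gamut to the dark end, exactly the FYI-favoring behavior described below.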

So for Ascat, suppose the user wants to optimize for darker areas (smoother FYI) at the expense of brighter ones (roughed-up MYI). The mask there would come from a major gaussian blur of an image copy, maybe posterized to, say, 4-bit (attached).

More simply, just blow up the contrast remapping at the low end. Below, after thresholding (which could be done uniformly across a whole series or frame-specifically), a lot of extra detail is revealed. However, there is only so much water that can be squeezed from a turnip.

One thing I've noticed though in reading academic papers in image enhancement: it is all rigorously optimal [citation] from an information-theoretic standpoint [citation] but the final images are worse than some duffer gets on a flip-phone.

Quote

first we would have to get the hands-off automated version working ... similar to topo map - contours less useful where they're either too close together or too far apart.

Right. Same with city names. There is software now for that, dealing with zoom.

Quote

Do any of the free tools you use offer histogram and statistics extraction from image files?

ImageJ has graphics-specific offerings in that dept, nothing that has thrilled me. A lot of people would go with the R statistics software and process the raw numeric data, say from ncdump out of Panoply.

gerontocrat, thanks for the Florence Nightingale story - I had no idea.

In both substance and style, her chart reminded me of the famous Charles Joseph Minard graphic of just a few years later (1869) showing the decimation of Napoleon's army. The wikipedia article (https://en.wikipedia.org/wiki/Charles_Joseph_Minard) says perhaps a bit hyperbolically, "Modern information scientists say the illustration may be the best statistical graphic ever drawn".

When I was a freshman in engineering school, they waved this under our noses as an example of the power of graphical presentation. I wonder if they still do that.

While waiting for the ESRL archive to build up again, not being convinced products like sea ice thickness are backwards compatible with 2016-17 values, I have been making some cross-silo products.

That is, most of our data source sites specialize in a fairly narrow daily product like bulk ice salinity and stop with that whereas the whole point of netCDF and Panoply is a seamless data integration framework. So this is a good niche for us, where value can be added and floe's automation is needed worse than ever.

The UH SMOS is a good one for cross-silo demo-ing. It has all the ancillary files such as error grid and land mask, some of the products are supplemental to ESRL, and everything is compliant. It is really a nuisance to spew out longer animations manually so floe/panoplyCL will be a big deal just with sites like this.

There is some really excellent coding going on here at https://www.online-convert.com/. They've been very decent about letting me do a lot of online converting of gifs into forum movies. This is mission-critical for us because file size limits what we can do with gif animations from floe. That is, gif is better in the scientific sense with its individual frames but its compression scheme is lousy.

At some point we need to look at remedying site sources that are producing non-compliant or defective netCDFs. Most commonly this involves lat lon associated with, but not integrated into, the gridded data, so a 2D but not a Geo2D file shows up in Panoply. The fix would involve either haranguing the site hosts or drilling into ncgen and re-hosting the archive properly formatted.

I've also come to realize that a lot of plain satellite imagery falls into the defective netCDF category. In effect what they've done is discard the intermediate gridded data and offered a Panoply map in a fixed Arctic projection. The PanoplyCL scaling parameters aren't provided but usually the map is polar stereographic at some multiple of -45º off the Greenwich meridian with a guessable horizon and pole.

For example Ascat is a one-channel 8-bit grayscale in Greenland-down orientation looking down at the north pole with 45ºN as the horizon. It comes without its land or open water masks but those are readily made with UH AMSR2 which though truncated rectangularly attains the same horizon and otherwise matches except for a scale factor.

The Ascat image amounts to a 350 x 300 rectangular pixel array for the Arctic Ocean. It's easy to overlay lat lon polar coordinates and so in effect assign a lat lon to each pixel. So the "inverse netCDF problem" is resolved by stubbing in these Ascat [0,255] numbers for the data values of any master Geo2D file adjusted to this scale. Once the image is in Panoply, it can be reprojected like any other netCDF and integrated with them arithmetically or as partial overlays.
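As a sketch of that pixel-to-lat-lon assignment, assuming a simple spherical north polar stereographic projection true at the pole: the pole pixel, km-per-pixel scale and meridian rotation below are guessed placeholders for illustration, not the fitted Ascat values, which would have to be recovered by overlaying a known graticule:

```python
import math

R = 6371.0  # spherical earth radius, km

def pixel_to_latlon(px, py, pole_px=175.0, pole_py=150.0,
                    km_per_px=25.0, lon_rotation=-45.0):
    """Invert a north polar stereographic projection for one pixel.
    pole_px/pole_py, km_per_px and lon_rotation are GUESSED placeholders;
    the real grid parameters must be fitted against a known graticule."""
    dx = (px - pole_px) * km_per_px
    dy = (py - pole_py) * km_per_px
    r = math.hypot(dx, dy)
    # Spherical stereographic: r = 2R * tan((90 - lat) / 2)
    lat = 90.0 - 2.0 * math.degrees(math.atan2(r, 2.0 * R))
    lon = (math.degrees(math.atan2(dx, -dy)) + lon_rotation) % 360.0
    return lat, lon

lat, lon = pixel_to_latlon(175.0, 150.0)
print(lat)  # pole pixel maps to lat 90.0
```

The projection math is the easy part; the actual work is fitting those three guessed parameters, after which each of the 350 x 300 pixels gets a lat-lon and its [0,255] value can be stubbed into a master Geo2D file.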

While ESRL provides little or no explanation of what they are doing or why, the post-hiatus archive has gone from REB_plots to REB2_plots for their front-facing gifs, still without providing the underlying netCDF files needed to draw them directly in Panoply.

They did not fix any of the old bugs that I could see, such as different image sizes in the 5-day before and after thicknesses, or crazy complexity in snow contours over ice thickness, or lack of ± explanation of keys.

In their new file names, UAF stands for University of Alaska Fairbanks. They have some very strong comprehensive products for the Bering, Beaufort, Chukchi and ESAS but tend to truncate them arbitrarily at the Canadian border. So it's kinda narrow-minded, omitting say the Barents, but fortunately the region they do cover is critically important. I don't know what the 4 means in 4UAF_ATM and 'atmosphere' for ATM doesn't really describe the air-ice-sea contents of the nc files.

NIC is the military's National Ice Center whose mission is to provide "global to tactical scale ice and snow products, ice forecasting, and environmental intelligence services for the US government". It involves the navy, coast guard and NOAA. Their analysts manually curate sea ice edge products but don't use the same categories as the Canadian Ice Service. I'm not of the opinion that expert annotation is competitive with machine-learning classification over the long haul.

It looks to me like ESRL ditched their previous sea ice thickness basis and reset to the January 2018 CryoSat averaged observational thickness. ESRL will then thicken and compact it from there using their physics model. Ice and snow thickness are by far the most difficult products. Everyone gets a reset to zero at the annual fall maximum of open water that takes in all the peripheral seas and a good bit of the Arctic Ocean but errors still build and build in multi-year ice.

Sorry that didn't paste too well. 'Ascat interferometry' looks really useful. Is there any chance of a slightly more step by step description of this process? I'd like to try to compare this year to previous years.

Quote

Is there any chance of a slightly more step by step description of this process? I'd like to try to compare this year to previous years.

That'd be a good project. We're looking to automate this in either the ImageJ scripting or macro language so that daily updates become feasible.

This takes more steps than I had remembered. First, a nuisance download, as their ftp is broken. After that, mostly ImageJ commands. But try it first with a minimal set of 4 Ascats, eg Mon-Wed, Mon-Fri, Mon-Sun = R,G,B, before getting into 109 pixel files.

drag 150 winter days into ImageJ after cleaning file names

convert to grayscale to reduce file size (Image --> Type --> 8-bit)

stack them into a single file (Image --> Stack --> Images to Stack)

crop down to Arctic Ocean region of interest (Image --> Crop)

tile them into a row (Image --> Stack --> Make Montage... with the label option to use file names)

adjust bulk contrast (Image --> Adjust... --> Brightness/Contrast)

adjust local contrast (Process --> Enhance Local Contrast (CLAHE) set to 63,256,2.20)

(The land mask is made separately from AMSR2 by selecting all 100 of the concentration colors, deleting to transparency, inverting to everything else and filling with black. The land mask is duplicated 150 times and tiled into a layer to go over the day-difference layers. I attached an Ascat-ready mask below; just crop it down in parallel to your original cropping of the Ascat stack.)
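The montage and mask steps above are simple pixel arithmetic, sketched here in plain Python on toy 2x2 frames (this is only the logic, not ImageJ itself): a montage row is horizontal concatenation, and applying the land mask is a per-pixel overwrite with a bland fill value.

```python
def montage_row(frames):
    """Tile equally-sized grayscale frames into one row (Make Montage, columns=n)."""
    h = len(frames[0])
    return [sum((f[y] for f in frames), []) for y in range(h)]

def apply_mask(img, mask, fill=64):
    """Where the mask is 0 (land), overwrite the data pixel with a bland gray."""
    return [[fill if m == 0 else v for v, m in zip(irow, mrow)]
            for irow, mrow in zip(img, mask)]

f1 = [[10, 20], [30, 40]]
f2 = [[50, 60], [70, 80]]
row = montage_row([f1, f2])     # [[10, 20, 50, 60], [30, 40, 70, 80]]
mask = [[1, 1, 0, 1], [1, 1, 1, 0]]
masked = apply_mask(row, mask)  # [[10, 20, 64, 60], [30, 40, 70, 64]]
```

Because the mask is stationary, tiling one copy per frame (150 times here) and applying it after enhancement is what keeps the land from being swept up in the contrast adjustments.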

Here is one with two-day offsets. It has more interferometric color because there is more motion over two days than the one-day offsets used on the freeze forum, plus I drew it out with hue tweaking.

Gimp also does a good bump map to gussy up plain vanilla Ascat motion (no differencing).

I made quite a few errors with the attempts above, mostly things like not checking that the whole stack was modified and layers not being aligned properly in gimp. This one is for the current melting season. Might be a bit extreme with the brightness/contrast. I need to find an easy way to add the mask, and had to leave out recent frames with missing data.

Nullschool wind and mslp effort below. I made 20 separate 1 sec gifs of 10 frames each, advancing date 3 hrs in between, then loaded into ImageJ and concatenated. It is necessary to load the gifs in order or the sprites will run backwards! ImageJ has a nice duplication-of-selection feature that allowed quick appending of date and data in 24 pixels at the bottom of main content. Data would be better on top because the stupid controller tends to cover it up.

Then I sliced it into frames and saved as mp4. I tried the 2 sec pause-and-advance method earlier but couldn't keep the number of frames constant. The animation struggles with nullschool's complex colors; the gif format is limited to 256 colors. Also it jerks too much between dates. Still, a bit of animation catches the sprite action. windy would actually be better as it has both gfs and ecmwf.

Help .. is there any way to stop large animations loading when I visit the main pages ? For example the current melting season page has 2 animations over 10000kb that start loading every time the page is visited .. I cannot afford this .. they take time to load and they cost me money whether I want to view them or not . Most animations used to be click to view, now most just run and run .. until my data allowance runs out .. Help ! b.c.

just testing to see how forum handles gif transparency (fills in with zero black if >700, otherwise the ambient of the two bluish gray backgrounds) ... then testing whether these mp4 can be forced to run at 700 rather than 720 (yes, though it struggles to load today). The mp4 shows the advancing front north of Svalbard, 62 days to August 1st.

Given that we have 1442 members, it would be better if the 2-3 people on phone pay-per-MB plans would re-set their profiles so fewer posts are shown per page, say 5 instead of 50 (forum software says 'messages' when it means 'posts'). That way older posts don't have to re-load. This would be better than going to the lowest common denominator of the 1442 internet accesses which is probably someone in a remote location still on a rotary phone dial-up connection.

It is a bad idea to force everyone else's animations not to run by going to 701 pixels etc. Very few people will click on through, as shown by the counter. Often there is no indication in the first static thumbnail frame that the animation would be all that interesting.

The real problem is that very few people understand the concept of cropping their images down to the relevant areas. Like that person who posted a 13 MB file the other day of which only 1.3 MB was needed to show the Arctic. Also, many people are not resizing images down to the forum max of 700. Again, they need to look at the menus -- all graphics software, including on cell phones, offers crop and resize ... or you can do it free and fast online.
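The resize arithmetic is trivial; a sketch of fitting an image under the forum's 700-pixel width cap while keeping the aspect ratio (the 700 comes from this thread, everything else is generic):

```python
def fit_width(w, h, max_w=700):
    """Scale (w, h) down to at most max_w wide, preserving aspect ratio.
    Images already narrow enough are left alone."""
    if w <= max_w:
        return w, h
    return max_w, round(h * max_w / w)

print(fit_width(1400, 900))  # (700, 450)
```

Any of the graphics tools mentioned in this thread does this in one menu item; the point is just that downscaling before posting cuts the file size roughly with the square of the scale factor.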

I have Snagit on my computer, and have made Snagit videos by successively clicking on several windows with aligned images (such as DMI images of Nares Strait on different days). I then grab PNG 'stills' from the video (using the Snagit editor), add text or shapes, save them, then load them into Gifmaker. I've always changed the number of colors from 256 down to 25 or 50 to reduce file size.

Rammb rgb compose from 3 gifs, step by step:

download Fiji (ImageJ) from https://imagej.net/Fiji/Downloads, install and run

drag the file of the first gif into the small interface to open it

make a montage from image/stacks/make montage (enter frame no. and columns=1, scale=1)

rename to m1 from image/rename

repeat with the other two gifs, renaming as m2 and m3

make the three montages into a stack from image/stacks/images to stack

set stack to 8-bit from image/type/8-bit

compose as rgb from image/color/stack to rgb

That leaves us with an rgb montage, which we turn back into a stack using image/stacks/tools/montage to stack (enter the same frame no. and columns=1)

set frame rate from image/stacks/animation options

save from file/save as/gif
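The stack-to-rgb step amounts to zipping three same-sized 8-bit grayscale montages into per-pixel (r, g, b) tuples; a toy sketch of just that arithmetic, not the ImageJ internals:

```python
def stacks_to_rgb(m1, m2, m3):
    """Merge three same-sized 8-bit grayscale images into one RGB image
    (the arithmetic behind ImageJ's image/color/stack to rgb)."""
    return [[(r, g, b) for r, g, b in zip(r1, r2, r3)]
            for r1, r2, r3 in zip(m1, m2, m3)]

m1 = [[0, 255]]   # red channel
m2 = [[128, 0]]   # green channel
m3 = [[255, 0]]   # blue channel
print(stacks_to_rgb(m1, m2, m3))  # [[(0, 128, 255), (255, 0, 0)]]
```

This is why the three inputs must be converted to 8-bit and matched in size first: each gif contributes one channel, pixel for pixel.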

Here are three short gifs around Petermann glacier if you want to give it another go; I figure the landcover there is quite varied, so perhaps something will be interesting. I will work on learning imagej as time allows. I tried to keep the files smaller; I don't know how to package them in a more efficient manner.

I think that looks nice as a snow/land cover. I got tripped up a bit somewhere; I got them into an RGB stack, but it wasn't a gif. Operator error, I'm sure. I'll be practicing with imagej as time allows; I have two small kids running around so it gets tough. I think I'll stick with screenshots over downloaded images, as I am finding the screenshots seem to preserve more detail. I'm going to revisit the SSTs with a different band, focused on the Chukchi.

Thanks again, I kinda dig the high contrast.


"To defy the laws of tradition, is a crusade only of the brave" - Les Claypool