I'd like to thank everyone who sent in data sets as part of the contouring discussion in the 9.0.165.4 thread, as those were a big help in producing the optimizations in 165.5. Having good example data and descriptions of workflow made it possible to turn this around rapidly. If anyone ever found anything quicker than 9 at contouring, please re-run using 165.5 to see how it has improved. :-)

I will try giving it a shot tomorrow or Wednesday. But, out of curiosity, what improvements did you see using the large DEM I sent you? (This question is for either Dimitri or any of the other folks who got hold of the Garrett County DEM.)

The attached 8.0.30 result took about one hour. (A bit more; Manifold 8 didn't log a time natively, so I was relying on my watch.) Notice how smooth it is. There are some very small artefacts that should ideally not be there, but almost no contour collisions.

The 9.0.165.5 result took 69 seconds. There is a significant number of contour collisions, though very few small artefacts.

The 9.0.165.4 result took 715 seconds (11m 55s). As far as I can tell, the result is exactly the same as the result for 9.0.165.5.

These are all 15m contours from SRTM at 1-arc-second resolution for the whole South Island of New Zealand (32401 x 28801 px, FP32, missing pixels filled with -32767).

The screenshots are details from a flat area of Canterbury directly west of Banks Peninsula. The match between zooms and scales is rough.

Timings are for i7-4790 (without Meltdown/Spectre mitigations).

I much prefer the result given by 8.0.30. I don't think I could use the results from 9, however quick. No result is perfect.

I'll leave the more precise discussion to adamw, but I do not believe the issue is so much that SRTM is noisy; rather, SRTM does not match reality, so contours drawn at certain levels of detail will not be free of odd contour features that the data insists should be there.

SRTM is integer data: 7, 8, 9 and so on. That is not the physical reality of the surfaces SRTM represents, which contour smoothly at scales both below and above the pixel size of SRTM. If you use a step function to represent non-stepped data, and you contour at levels of detail where those not-the-same-as-reality steps matter, you get artifacts from any procedure that creates contours according to the generally understood conventions of what contours are supposed to mean.
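To make the step-function point concrete, here is a tiny plain-Python sketch (illustrative numbers, not the SRTM tile): on a smoothly sloping surface essentially no sample sits exactly on a contour level, but once heights are rounded to whole metres, whole runs of pixels tie the level exactly, and those ties are where the odd contour features come from.

```python
# A smooth ramp standing in for real terrain: continuous heights.
x = [i * 0.1 for i in range(101)]        # 101 samples along a profile
smooth = [0.75 * v for v in x]           # heights rise smoothly to 7.5 m

# SRTM-style storage: the same heights quantised to whole metres.
quantised = [round(h) for h in smooth]

level = 5                                # a contour level in metres
ties_quantised = sum(1 for h in quantised if h == level)
ties_smooth = sum(1 for h in smooth if h == level)
# quantised data: a whole run of samples sits exactly on the level;
# the smooth surface: none do
```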

None of this is unusual: the common recipe to which adamw refers I believe comes from ESRI advice. I suppose other approaches might be to use a blur, or remove resultant artifacts with normalization.

There should be no difference between the results produced by 9.0.165.4 and 9.0.165.5. You say you aren't seeing any, and I am not seeing any either from a cursory look, so that's good.

There might be a difference between the results produced by 9 and 8. The screens show some. What remains is to determine whether these differences are good or bad. The important thing is that both 8 and 9 use the same underlying model for the surface, so if their results differ, it's a simple case of one of them being wrong. (Other programs may use a slightly different model and so their contours might be legitimately different from those produced by 8 or 9, but that is not the case here.) And we did fix a number of inaccuracies compared to 8, particularly in edge cases like a contour height coinciding exactly with that of flat areas of specific shapes. It might be that it is 8 that is wrong, although yes, it is possible that it is 9.

Could we have the file and the contour transform parameters?

Whatever the case, a common recipe to get rid of spikes is to alter contour heights slightly so that they don't coincide exactly with heights in the raster. That is, instead of producing contours every 10 meters starting at 500, produce contours every 10 meters starting at 500.001.
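A minimal sketch of that recipe in plain Python (the 0.001 offset is just the example from above; any tiny epsilon smaller than the data precision works):

```python
def contour_levels(start, stop, step, eps=0.001):
    """Contour levels nudged by a tiny epsilon so that no level
    coincides exactly with the integer heights in the raster."""
    count = int((stop - start) / step) + 1
    return [start + eps + i * step for i in range(count)]

# every 10 m from 500 to 530, nudged: 500.001, 510.001, 520.001, 530.001
levels = contour_levels(500, 530, 10)
```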

On a closer look, it seems to me that the screen for 8 and the screens for 9 show contours for different heights.

That might be an effect of the UI for entering contouring parameters, which rounds the min / max height to the step. (In the transform, if you specify that you want heights between 120 and 445 with a step of 100, you get 120, 220, 320, 420, while you perhaps want 200, 300, 400.)

On a closer look, it seems to me that the screen for 8 and the screens for 9 show contours for different heights.

I've checked, and I'm afraid that is not the case.

So back to

The important thing is that both 8 and 9 use the same underlying model for the surface so it's a simple case of one of them being wrong. ... we did fix a number of inaccuracies compared to 8 particularly in edge cases like contour height coinciding exactly with that of flat areas of specific shapes. It might be that it is 8 that is wrong, although yes, it is possible that it is 9.

Could we have the file and the contour transform parameters?

I am packaging up a single 1x1° tile of SRTM data, including the area shown in previous screenshots, and will list exact steps in 8 and 9.

The original data source is here. Login required. I will also provide a direct link.

Zoom box.png shows the full SRTM tile, with a blue box showing the area of detail used for the next three images.

Zoom M8.png shows the zoomed area with contours created in Manifold 8.

Zoom M9.png shows the area with contours created in Manifold 9.

Zoom GM 18.png shows the area with contours made in Global Mapper 18.2.

(In GM the contours were made without optional smoothing, simplification or filtering, and again from 0 to 1930m, step 15m. Execution time was roughly 20s, on the same machine used in Manifold 8 and 9.)

I've kept the formatting and shading as similar as possible between all three examples.

The Manifold 8 and Global Mapper results are very similar, though the Manifold 8 result has some tiny branches not present in Global Mapper (no complaint about that--a small blur gets rid of those).

The Manifold 9 result looks a bit different. It seems to have extra noise or artefacts. See the adjacent and linked contour rings, mainly near the centre of the screenshot.

Another interesting difference (maybe worth looking at) is a small cluster in the upper part of the large band of forest in the upper right of the screenshots. See the yellow box.

There are just a few pixels at 75m elevation here, which Manifold 8 and Global Mapper 18 both pick up (showing 3 small rings, almost the same though not quite), but Manifold 9 does not (no 75m contour here).

There is a general note on differences between contours produced by 8 and 9 in the build notes for 9.0.165.6.

A smaller note on this specific case:

The branch in the yellow box above, which 8 creates and 9 does not, lies entirely on pixels of the same height, 75, which coincides with the contour height. The surrounding pixels are all lower; this is a flat peak. So 8 circles it and 9 doesn't. If the surrounding pixels were all higher, it would be the reverse: 9 would circle the pixels and 8 wouldn't.

As an illustration, I negated the surface and built a contour at height -75, using 8. Here are the results:

The contour at 75 on the original surface is thin black, the contour at -75 on the negated surface is wide cyan.
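The tie-breaking described above can be caricatured in a few lines of plain Python. The >= / > conventions below are my reading of the observed behaviour, not Manifold's actual code:

```python
# A 3x3 patch: a flat peak whose height equals the contour level.
peak = [[70, 70, 70],
        [70, 75, 70],
        [70, 70, 70]]

def has_ring(surface, level, equal_is_inside):
    """Does any pixel count as 'inside' the contour at `level`?
    The two conventions differ only in how exact ties are broken."""
    if equal_is_inside:
        return any(h >= level for row in surface for h in row)
    return any(h > level for row in surface for h in row)

ring_8_style = has_ring(peak, 75, equal_is_inside=True)    # draws a ring
ring_9_style = has_ring(peak, 75, equal_is_inside=False)   # draws nothing
```

Negate both the surface and the level and the two conventions swap roles, which is what the cyan-vs-black comparison shows.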

There are noticeable differences between Manifold 8 and Manifold 9 in all flat areas. Manifold 9 is possibly better in some of them.

Global Mapper is basically like Manifold 8--but notice the half-pixel offset between the GM contours and Manifold contours (both 8 and 9). I think that is because Global Mapper observes the distinction between Pixel-is-Point and Pixel-is-Area raster data, whereas Manifold always assumes Pixel-is-Area. (Other terms for these are grid-centred data and cell-centred data, respectively.) I believe GM is right in this case, since SRTM data is Pixel-is-Point (grid-centred)--that is why its one-degree tiles at one second resolution are 3601 x 3601 pixels, not 3600 x 3600. The TIFF metadata for the tile confirms "Pixel is Point".
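A sketch of the registration difference in plain Python (the -180 western edge is just an example value, not this particular tile):

```python
TILE = 3601               # SRTM 1-arc-second tile: 3601 x 3601 samples
STEP = 1.0 / 3600.0       # one arc-second, in degrees

def x_pixel_is_point(col, west_edge=-180.0):
    # grid-centred: sample 0 lies exactly on the tile edge
    return west_edge + col * STEP

def x_pixel_is_area(col, west_edge=-180.0):
    # cell-centred: sample 0 is a cell whose value sits at the cell centre
    return west_edge + (col + 0.5) * STEP

# interpreting grid-centred data as cell-centred shifts every
# sample east by half a pixel
offset = x_pixel_is_area(0) - x_pixel_is_point(0)
# grid-centred, the 3601 samples span exactly one degree edge to edge
span = x_pixel_is_point(TILE - 1) - x_pixel_is_point(0)
```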

Regarding pixel-is-point and pixel-is-area, Manifold only assumes pixel-is-area in that the parameters of the coordinate system define the shift to the corner of the corner pixel, not to the center of the corner pixel. Transforms make individual decisions, and contours in particular assume pixel-is-point.

We will take a look at the offset; it is either us or Global Mapper misregistering the image by applying the shift parameters to the center instead of the corner, or vice versa.

The Manifold 8 and Global Mapper results are very similar, though the Manifold 8 result has some tiny branches not present in Global Mapper (no complaint about that--a small blur gets rid of those).

The Manifold 9 result looks a bit different. It seems to have extra noise or artefacts. See the adjacent and linked contour rings, mainly near the centre of the screenshot.

But the above is too broad an opinion to be used to draw any conclusions. The only technical conclusion you can draw from the screenshots presented is that all three results are different. Observing they are all "similar" to some degree doesn't really tell you much, nor does picking a few contours out of zillions help much in characterizing exactly how they are different from each other, at least not without a careful look at the underlying data in each case.

For example, sure, there are a) spots where GM and 8 show small contours where 9 does not, but conversely there are locations where b) 9 shows small contours where 8 or GM may not. Without a careful consideration of what the actual data is in each such spot, calling b) a case of "noise" while not calling a) a case of "noise" is expressing a pre-formed opinion, not making a technical observation.

I 100% agree that global factors such as treating data as pixel-is-area or pixel-is-point should be handled explicitly with options. Those are pretty clear base conditions where making comparisons at the highest level helps a lot.

But further down, sorting out what is "noise" and what is a line the data rightfully compels to be there can only be done by examining the data.

It also depends on what you set as the objective of the package in terms of capturing the information in the original data and preserving that information in the transformation from raster to vector. If the vector lines really do capture the data, then you should be able to re-create the raster from those vectors in a reverse transformation.

How that works in practice is that somebody might call a spike, that is, a line which extends from an otherwise closed line figure, "noise." But if that line correctly represents a col effect on either side, you need that stub contour line to correctly re-create the raster surface from the vector lines in a reverse vector-to-raster transformation. If you remove that line because you don't like spikes, you have changed what the vector data says about the form, the undulations, of the surface. Re-creating the raster from vectors in which such spikes have been removed will result in a surface that is different from the original, in that it will be missing the col.

9 deals with many such cases that 8 did not, which I think results in more accurate contours. But accurate contours are not always the objective: people might prefer pretty contours (in the sense of being smoother, more continuous, or seeming more orderly) to accurate ones. If GM is, indeed, more similar to 8 when you look at various cases in detail, that might (don't know... just saying "might") indicate that GM, like 8, is not as accurate as 9.

Thanks a lot for the detailed description with illustrations and data.

We'll look into what is going on.

We have been constructing test cases and looking into the code yesterday and I believe we identified one issue which we should fix. After we do this, we will look into the differences between contours produced by 8 and 9 on this data and on several other data sets, to check the specifics in each case and make adjustments as necessary.

For what it is worth, we have been getting even higher performance increases on that particular data set (from 1500+ sec to 95 sec instead of "just" to 182 sec). That might be related to the fact that you are running the test using Viewer right after a big import that shook up a lot of memory, while we were running the test using a fresh instance of 9 with the MAP file created by a prior session.

Regarding the difference in results, indeed it might be that both versions are correct with respect to their raster models. We might obviously include support for different models, but we think it would be even better to allow, say, interpolating or otherwise pre-processing the raster as an intermediate step before building contours (or anything else). That way, the user would have a very flexible way to make the raster richer / smoother / whatever, with as many parameters as desired. We have some of that already; it can and perhaps should be extended.

Speed is nice but quality is really important. The right tool for the job is what matters, so yes, please make the tool great with those pre-processing steps. Add quick ways to change parameters and view results as well. That will save tons of time and lead to a job done right. Sorry for this cheerleading post, but I'm trying to balance all the talk about raw speed of processes :-)

Speed is nice but quality is really important. The right tool for the job is what matters so yes, please make the tool great with those pre-processing steps.

You have to be careful when you use the word "quality" because that means different things to different people. Is it an improvement in "quality" when you remove details to make something prettier?

For example, do you consider a vector representation that accurately represents the original raster to be a higher "quality" thing than one which does not? A test of that is whether you can recover the original raster from the vector using a vector-to-raster transformation.

By "quality" some people mean a prettier vector, not a more accurate one. If you interpolate a raster before vectorizing it, the result of the raster-to-vector operation is a vector representation of an interpolated surface, not the original surface. That is often significantly prettier than an accurate vector representation because contours can be smoother and more orderly. Interpolation cannot make a surface more accurate by adding details that are not in the original data, but just like applying a blur to a photograph, it can remove details that are there.

For the same reason photographers will use "soft focus" to blur out pimples and other undesired, but genuine, details in portraits, you can apply a variety of techniques to elevation data to get prettier, but less accurate contour lines. I suggest that being able to do such things is great, a wonderful part of the toolkit, but I would respectfully suggest that the basis for having them is artistry, and not call it "quality".

Well said Dimitri. My intent was simply to encourage a little focus on more tools (the right tool creates the desired outcome or quality) rather than on raw speed and a clean interface. I'm sure that will happen after 9's infrastructure gets fully developed, but the current process of dabbling in various areas (like style) leaves us wondering a bit :-)

Might it be a good idea, for demonstration purposes, to use that same BOEM Gulf of Mexico multibeam bathymetry you reference in your YouTube video (illustrating the speed of working with large DEMs) to also illustrate the speed of contouring with Manifold 9.0.165.5?

I contoured the east and west TIFF images on my home laptop and it accomplished the task in ~49 seconds and ~39 seconds respectively on the TIFFs where the elevation values are in feet.

My laptop is a Windows 10 Pro Dell XPS 15, circa 2014 (Core i7-4712HQ, 16 GB DDR3 1600 MHz RAM, 1 TB 5400 RPM hard drive with a 32 GB mSATA cache drive, Nvidia 750M). The source files in the Manifold project were on a Corsair GTX 256 GB thumb drive, whose specs state a 450 MB/s read and 350 MB/s write speed, though I usually see only a little over 100 MB/s when unzipping those BOEM files using WinZip 18.

If you pick as a data source the pre-existing Esri-created SHP file contours and compare them with the contouring that Manifold 9.0.165.5 can do, you will get a feeling for the accuracy, precision, quality, etc., of what Manifold does vs. what ArcGIS does.

Personally, I want to take my 30 m USGS NED DEM and generate contours on it. It's 140.61 GB in size. The bounding box is (Top 83.0016106799 N, Left -180 W, Right -88.9984386585 W, Bottom 50.9966324193 N). The data set is 327603 x 115217 pixels, with a cell size of (0.00027778, 0.00027778) degrees (x, y). You can see the coverage in the attached JPG. This data set would really push the contouring transform nicely. I'm getting a new workstation at work that will hopefully be online by May, and I'll try to push it then. There are around 1,800 one-arc-second tiles in that data set.

We will check the contours produced by our code vs. the pre-existing ones produced by ESRI, like you say. We can perform such comparisons on any data, but in this case, since the contour files have been published, they are kind of "official," which makes comparing against them more valuable.

It might indeed be a good idea to do a demo of contours on the data in general - perhaps after we have easy means to merge West and East together.

When you get around to testing contours on your big data set, please consider reporting the results on the forum, we are interested in how it will go. The next build will contain several relevant optimizations / additions. We do test against synthetic data sets of similar size, but there are frequently important insights which you can only get from someone else's data.

Might it be a good idea, for demonstration purposes, to use that same BOEM Gulf of Mexico multibeam bathymetry you reference in your YouTube video (illustrating the speed of working with large DEMs) to also illustrate the speed of contouring with Manifold 9.0.165.5?

I contoured the east and west TIFF images on my home laptop and it accomplished the task in ~49 seconds and ~39 seconds respectively on the TIFFs where the elevation values are in feet.

As impressive as it is that your laptop could do that so quickly, anything that takes more than a few seconds is not right for a video. Videos are necessarily mass market, which means they must be created for highly impatient attention spans.

So, if Manifold can do something in 39 seconds that requires 39 minutes in a different package, well, that's wonderful when you read it in text as in this post, but 39 seconds is an eternity on video. The average YouTube visitor isn't going to stick around while nothing happens for 39 seconds.

Some videos cheat that by saying "oh, let's pause until this is done..." and then come back, but we do not like doing that in Manifold, where we prefer whenever possible for people to see the true, authentic effect as it happens in real time.

It would be great, by the way, if you could report analogous timings on your laptop with Arc. Please report all the settings and workflow in both cases so apples to apples comparisons can be made.

By the way, I got a kick out of this...

My laptop is [...] 16GB DDR3 [...]

... It's 140.61 GB in size.

... Ambitious! :-) I would recommend on your laptop for the first few trials starting with a part of the data set and then scaling up, so you know when to start launching the job at the end of the day, to leave it cooking overnight.

I have to admit to being curious... what is the use case, the end need, for creating contours on all of Alaska and a big part of Canada all at once?

Also, as Art Lembo has pointed out in other posts... Until you get your new workstation, I'd recommend getting an inexpensive 3 TB external hard disk for extra space on your laptop. Those have become very inexpensive. Run it over USB 3.0 and it will be faster than a thumb drive, with plenty of extra space as well.

For your new workstation, get a Ryzen with lots of cores, maybe even a Threadripper with 32 cores if you can swing it. :-)

By the way...

I contoured the east and west TIFF images on my home laptop and it accomplished the task in ~49 seconds and ~39 seconds respectively on the TIFFs where the elevation values are in feet.

... what settings for contours did you use?

I just tried on a really old and slow Core i7 with 24 GB running Windows 10, with data on an external hard disk, making contours from -3300 to 0 with a step of 300:

Transform (Contour Areas): [BOEMbathyW_m] (28.559 sec)

Transform (Contour Areas): [BOEMbathyE_m] (46.007 sec)

All 8 hypercores were busy, but of course with more memory, an SSD and so on, it would be much quicker.

I contoured the east and west TIFF images on my home laptop and it accomplished the task in ~49 seconds and ~39 seconds respectively on the TIFFs where the elevation values are in feet.

Oops... I forgot to ask... Did you contour areas or contour lines?

Here are some updated numbers. I ran contour areas and contour lines on an old Intel Core i7 machine with 24 GB of RAM and also on an old AMD FX machine with 16 GB of RAM. Neither had an SSD; both ran on plain old slow hard disks. Both were running Windows 10, and both computed contours from -3300 to 0 in steps of 300:

AMD FX:

Transform (Contour Areas): [BOEMbathyW_m] (13.797 sec)

Transform (Contour Lines): [BOEMbathyW_m] (8.422 sec)

Transform (Contour Areas): [BOEMbathyE_m] (23.626 sec)

Transform (Contour Lines): [BOEMbathyE_m] (16.157 sec)

Intel i7:

Transform (Contour Areas): [BOEMbathyW_m] (28.559 sec)

Transform (Contour Lines): [BOEMbathyW_m] (14.532 sec)

Transform (Contour Areas): [BOEMbathyE_m] (44.531 sec)

Transform (Contour Lines): [BOEMbathyE_m] (27.763 sec)

An AMD FX has eight real cores while the Core i7 has four cores that can be treated as eight hypercores. What is interesting about the above timings is that if the task is big compared to the system RAM available then they are affected by Windows caching of disk read/writes. To get the above numbers, I ran each trial twice in immediate succession. The first one, while Windows was sorting out cache allocated to other things, was usually significantly slower.

Anyway, getting the biggest number for doing contour areas on the larger of the two data sets down to 23 seconds on a sub-$100 CPU is pretty good, especially considering that only 16 GB of RAM is an absurdly small amount these days. I was surprised to see so little on that machine. It's just a spare machine that nobody uses that sits around in a corner somewhere on our local net and nobody noticed it has so little RAM.

Perhaps I'll be able to get the Areca 12-drive Thunderbolt 3 RAID, but for now the Drobo 5Ds will have to do. I've got tons of space. Our organization tries to force everything to be network driven, but for GIS that isn't based on WMS, WFS, WCS, etc., it's impossibly slow under normal circumstances, let alone at the 3.5 MB/s network throughput I get from my remote site to the corporate datacenter. Yes, I live in the middle of nowhere. I'd love to have a 10 GbE high-end NAS RAID, or a good server with Fibre Channel connected RAID reached via 10 GbE, but that's an expensive pipe dream.

I contoured the data using lines, with an interval of 100 ft. Yes, I used the TIFFs for depths in feet; I normally use meters but wanted to try the feet. My range was -11000 to 0. I did this for both data sets. The large 140 GB DEM is in an Esri file geodatabase on my Drobo 5D at work. The RAID storing the source TIFF tiles of the DEM is a Drobo 5D connected via USB 3.0, with 10.49 TB of storage and 7.5 TB free. All my Manifold projects are stored on a separate Drobo 5D with 21 TB of storage and 18 TB free. I'm thinking of trying to get the Areca 12-drive Thunderbolt 3 RAID so I have around 2 GB/s throughput on my new workstation.

If you contour the above using 5 as the height, there will be a diagonal spike in the center.

Spikes usually appear when contouring heights that have been forced to be integer; a common recipe is to produce contours at non-integer heights. Spikes can also be removed in postprocessing using one of the Normalize transforms. All contouring algorithms produce spikes; if the output of a particular algorithm does not have them, they have been removed either as the last step of the algorithm or possibly a little earlier.
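A one-dimensional caricature of the problem, in plain Python (real contouring is 2-D marching squares; this only shows the degenerate ties):

```python
def crossings(profile, level):
    """Count clean crossings of `level` between neighbouring samples.
    A sample sitting exactly on the level yields no clean crossing:
    the contour runs along the data itself, which is where the
    spikes come from."""
    d = [h - level for h in profile]
    return sum(1 for a, b in zip(d, d[1:]) if a * b < 0)

profile = [4, 5, 5, 5, 4]      # an integer plateau at height 5

at_integer = crossings(profile, 5.0)    # degenerate: the level ties the data
nudged = crossings(profile, 4.999)      # two clean, unambiguous crossings
```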

I'd be interested in hearing which were "ugly"... but, in the meantime I tried a few myself.

1. LibreCAD imports something, but apparently not completely as compared to others:

2. A free resource from a well-respected CAD company, Bentley, is their free CAD viewer. Anyone willing to register with personal details, and willing to burn about a gigabyte of space, can download it and use it for free. What is useful about this is that Bentley utilizes the paid RealDWG library that Autodesk licenses for about $5000 the first year and $2500 per year thereafter. So... in theory this should produce the same results as everybody else who uses RealDWG:

As you can see from the above screenshot, which is clearly an incorrect import, even using AutoCAD's own code within a package from one of the world's most experienced and most respected CAD companies is not a guarantee the import will be accomplished correctly.

Anyway, my point with the above is not just which packages can "open" a file and "import" it, but which managed to import it correctly. If anything, many people would prefer to have no import than one which imports data with numerous errors that are difficult to detect, and which then propagate through workflow and projects.

I therefore respectfully ask all contributors when remarking if something does or does not import a specific dwg to clearly state whether the import is accomplished accurately, or if there are any errors in the import such as some objects not being imported, geometry being imported incorrectly, etc.

It's not easy to say more than "ugly" if you have no clue about what the ACAD file should look like.

But like Manifold Viewer, Autodesk has a free viewer for DWG files too, and similar to Manifold Viewer it allows you to check the visibility state of layers (very important; this should be reflected in the layer bar of the imported map) and the type of objects.

Ask Google for the latest version (2018) of DWG TrueView. You can at least see what's missing from the import.

Most important: you can convert new ACAD formats and downgrade to ACAD 2000, which is the latest format Mfd 8 + 9 can import, with the restriction of some entity types, AFAIK.

None of the programs can claim to import the file acceptably. Each has its deficiencies (ArcMap 10 misses the hatch in the structure on the right).

So the problem obviously stems from Autodesk's policy of "promoting" each new version with incompatible and inessential additions. I had never before met an ACAD_PROXY_ENTITY.

As there are alternatives, I push every associate to deliver R14 DXF files if they want to use ACAD as an exchange format. And every one of them has had this experience before. Modern ACAD formats are no standard!

They all stick to AutoCAD because they have invested so much in training staff to tame the beast. And rarely are the engineers themselves competent to correct the little typo that slipped into the last version, so they enhance their status through access to a draftsperson. Those are the two reasons this business model is taking so long to die off.

ACAD_PROXY_ENTITY is a workaround Autodesk uses in basic AutoCAD to allow more complex drawing features created by the specialist versions of AutoCAD. A common example would be opening a drawing file prepared in Civil 3D 2018 with plain AutoCAD 2018: if the Civil 3D 2018 object enabler is installed in AutoCAD 2018 (usually a separate install), the Civil 3D model features (TIN, surface, contours, road design features, etc.) can be viewed, but not modified. If the object enabler isn't installed, even basic AutoCAD can't display these kinds of features.

When I need to bring in such features to another software package I have to use Civil 3D to convert the objects, typically contours, to basic line contour features that can be imported without issue.

TrueView is 750 MB. I tried Serif DrawPlus export to AutoCAD in DXF and DWG formats for several AutoCAD versions (2000, 2004, R12); not every format is supported by Manifold 9. The imports of *.dwg 2004 and *.dwg R12 fail.

I think Dimitri just wanted to know which converters did a good job vs which did a bad job + how to tell a good job vs a bad job (it is not always obvious).

We hear you on the deficiencies of CAD imports. We completely agree importing CAD files works worse than importing almost anything else. This is a constant annoyance to our users who have to jump through hoops (converting to older formats / converting complex shapes into simpler ones) to try and make sure their data survives. There are reasons for this mess outside of our control. However, we are looking for ways to improve things, and have some ideas.

We will likely issue 9.0.165.6 tomorrow. It will contain several improvements for contours among other things, based on feedback (thanks!). We'd like to then issue a public build in a few days and proceed to the next series of builds (there is a small stash of new features for it already which we are holding back because we want to issue the public build first).

And by general, I assume he means no details but more of a topical road map, without dates. I used to buy software when I was in the Air Force and had to answer this question before getting funded. There's no way to hold a programmer to a date, but there is always a big-picture road map of modules to the end point. Once a module has been introduced, then we can discuss the details of that module. Whether you are willing to share such a road map is, of course, up to you.

It was very refreshing to watch the enhancement of the contour algorithms in real time. Although I did not check the logs to measure it, I could feel the speed improvement in my smaller .dem files with the 165.5 update.

See the 9 FAQ page, some comments there. For more real time, the discussions in this forum are pretty good. As remarked below, there is a lot of "community driven" to what takes priority on the short list.


On the subject of free AutoCAD viewers, DraftSight by Dassault is the one that I use. It is actually a fully AutoCAD-compatible 2D CAD program. I use it to clean up drawings before pulling them into QGIS. Yes, QGIS. Pulling AutoCAD drawings into Manifold was always rotten, as it made a drawing for each layer. QGIS makes one layer for all the polylines, and the layer name in AutoCAD becomes an attribute in a column. Strangely, that fits the "everything is a table" mantra much more than how M8 handled things.