Our Latest R&D Output

It was only this spring that we released Version 11.1. But after the summer we’re now ready for another impressive release—with all kinds of additions and enhancements, including 100+ entirely new functions:

We have a very deliberate strategy for our releases. Integer releases (like 11) concentrate on major, complete new frameworks that we’ll be building on far into the future. “.1” releases (like 11.2) are intended as snapshots of the latest output from our R&D pipeline, delivering new capabilities large and small as soon as they’re ready.

Version 11.2 has a mixture of things in it—ranging from ones that provide finishing touches to existing major frameworks, to ones that are first hints of major frameworks under construction. One of my personal responsibilities is to make sure that everything we add is coherently designed, and fits into the long-term vision of the system in a unified way.

And by the time we’re getting ready for a release, I’ve been involved enough with most of the new functions we’re adding that they begin to feel like personal friends. So when we’re doing a .1 release and seeing what new functions are going to be ready for it, it’s a bit like making a party invitation list: who’s going to be able to come to the big celebration?

Years back there’d be a nice list, but it would be of modest length. Today, however, I’m just amazed at how fast our R&D pipeline is running, and how much comes out of it every month. Yes, we’ve been consistently building our Wolfram Language technology stack for more than 30 years—and we’ve got a great team. But it’s still a thrill for me to see just how much we’re actually able to deliver to all our users in a .1 release like 11.2.

Advances in Machine Learning

It’s hard to know where to begin. But let’s pick a current hot area: machine learning.

We’ve had functionality that would now be considered machine learning in the Wolfram Language for decades, and back in 2014 we introduced the “machine-learning superfunctions” Classify and Predict—to give broad access to modern machine learning. By early 2015, we had state-of-the-art deep-learning image identification in ImageIdentify, and then, last year, in Version 11, we began rolling out our full symbolic neural net computation system.

Our goal is to push the envelope of what’s possible in machine learning, but also to deliver everything in a nice, integrated way that makes it easy for a wide range of people to use, even if they’re not machine-learning experts. And in Version 11.2 we’ve actually used machine learning to add automation to our machine-learning capabilities.

So, in particular, Classify and Predict are significantly more powerful in Version 11.2. Their basic scheme is that you give them training data, and they’ll learn from it to automatically produce a machine-learning classifier or predictor. But a critical thing in doing this well is to know what features to extract from the data—whether it’s images, sounds, text, or whatever. And in Version 11.2 Classify and Predict have a variety of new kinds of built-in feature extractors that have been pre-trained on a wide range of kinds of data.

But the most obviously new aspect of Classify and Predict is how they select the core machine-learning method to use (as well as hyperparameters for it). (By the way, 11.2 also introduces things like optimized gradient-boosted trees.) And if you run Classify and Predict now in a notebook you’ll actually see them dynamically figuring out and optimizing what they’re doing (needless to say, using machine learning):

By the way, you can always press Stop to stop the training process. And with the new option TimeGoal you can explicitly say how long the training should be planned to be—from seconds to years.
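As a minimal sketch of how this looks (the training data here is made up for illustration), you give Classify labeled examples, and TimeGoal caps how long training should take:

```wolfram
(* a minimal sketch, with made-up training data *)
classifier = Classify[
  {1 -> "small", 2 -> "small", 9 -> "large", 10 -> "large"},
  TimeGoal -> 10  (* aim to spend about 10 seconds on training *)
];
classifier[3]  (* presumably classified as "small" *)
```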

As a field, machine learning is advancing very rapidly right now (in the course of my career, I’ve seen perhaps a dozen fields in this kind of hypergrowth—and it’s always exciting). And one of the things about our general symbolic neural net framework is that we’re able to take new advances and immediately integrate them into our long-term system—and build on them in all sorts of ways.

At the front lines of this is the function NetModel—to which new trained and untrained models are being added all the time. (The models are hosted in the cloud—but downloaded and cached for desktop or embedded use.) And so, for example, a few weeks ago NetModel got a new model for inferring geolocations of photographs—that’s based on basic research from just a few months ago:

NetModel["ResNet-101 Trained on YFCC100M Geotagged Data"]

Now if we give it a picture with sand dunes in it, its top inferences for possible locations seem to center around certain deserts:

NetModel handles networks that can be used for all sorts of purposes—not only as classifiers, but also, for example, as feature extractors.

Building on NetModel and our symbolic neural network framework, we’ve also been able to add new built-in classifiers to use directly from Classify. So now, in addition to things like sentiment, we have NSFW, face age and facial expression (yes, an actual tiger isn’t safe, but in a different sense):

Classify["NSFWImage", CloudGet["https://wolfr.am/tiger"]]

Our built-in ImageIdentify function (whose underlying network you can access with NetModel) has been tuned and retrained for Version 11.2—but fundamentally it’s still a classifier. One of the important things that’s happening with machine learning is the development of new types of functions, supporting new kinds of workflows. We’ve got a lot of development going on in this direction, but for 11.2 one new (and fun) example is ImageRestyle—that takes a picture and applies the style of another picture to it:

ImageRestyle[\[Placeholder],\[Placeholder]]

And in honor of this new functionality, maybe it’s time to get the image on my personal home page replaced with something more “styled”—though it’s a bit hard to know what to choose:

By the way, another new feature of 11.2 is the ability to directly export trained networks and other machine-learning functionality. If you’re only interested in the actual network, you can get it in MXNet format—suitable for immediate execution wherever MXNet is supported. In typical real situations, there’s some pre- and post-processing that’s needed as well—and the complete functionality can be exported in WMLF (Wolfram Machine Learning Format).

Cloud (and iOS) Notebooks

We invented the idea of notebooks back in 1988, for Mathematica 1.0—and over the past 29 years we’ve been steadily refining and extending how they work on desktop systems. About nine years ago we also began the very complex process of bringing our notebook interface to web browsers—to be able to run notebooks directly in the cloud, without any need for local installation.

It’s been a long, hard journey. But between new features of the Wolfram Language and new web technologies (like isomorphic React, Flow, MobX)—and heroic efforts of software engineering—we’re finally reaching the point where our cloud notebooks are ready for robust prime-time use. Like, try this one:

We actually do continuous releases of the Wolfram Cloud—but with Version 11.2 of the Wolfram Language we’re able to add a final layer of polish and tuning to cloud notebooks.

You can create and compute directly on the web, and you can immediately “peel off” a notebook to run on the desktop. Or you can start on the desktop, and immediately push your notebook to the cloud, so it can be shared, embedded—and further edited or computed with—in the cloud.

By the way, when you’re using the Wolfram Cloud, you’re not limited to desktop systems. With the Wolfram Cloud App, you can work with notebooks on mobile devices too. And now that Version 11.2 is released, we’re able to roll out a new version of the Wolfram Cloud App, that makes it surprisingly realistic (thanks to some neat UX ideas) to write Wolfram Language code even on your phone.

Talking of mobile devices, there’s another big thing that’s coming: interactive Wolfram Notebooks running completely locally and natively on iOS devices—both tablets and phones. This has been another heroic software engineering project—which actually started nearly as long ago as the cloud notebook project.

The goal here is to be able to read and interact with—but not author—notebooks directly on an iOS device. And so now with the Wolfram Player App that will be released next week, you can have a notebook on your iOS device, and use Manipulate and other dynamic content, as well as read and navigate notebooks—with the whole interface natively adapted to the touch environment.

For years it’s been frustrating when people send me notebook attachments in email, and I’ve had to do things like upload them to the cloud to be able to read them on my phone. But now with native notebooks on iOS, I can immediately just read notebook attachments directly from email.

Mathematical Limits

Math was the first big application of the Wolfram Language (that’s why it was called Mathematica!)… and for more than 30 years we’ve been committed to aggressively pursuing R&D to expand the domain of math that can be made computational. And in Version 11.2 the biggest math advance we’ve made is in the area of limits.

Mathematica 1.0 back in 1988 already had a basic Limit function. And over the years Limit has gradually been enhanced. But in 11.2—as a result of algorithms we’ve developed over the past several years—it’s reached a completely new level.

The simple-minded way to compute a limit is to work out the first terms in a power series. But that doesn’t work when functions increase too rapidly, or have wild and woolly singularities. In 11.2, though, the new algorithms we’ve developed have no problem handling things like this:

Limit[E^(E^x + x^2) (-Erf[E^-E^x - x] - Erf[x]), x -> \[Infinity]]

Limit[(3 x + Sqrt[9 x^2 + 4 x - Sin[x]]), x -> -\[Infinity]]

It’s very convenient that we have a test set of millions of complicated limit problems that people have asked Wolfram|Alpha about over the past few years—and I’m pleased to say that with our new algorithms we can now immediately handle more than 96% of them.

Limits are in a sense at the very core of calculus and continuous mathematics—and to do them correctly requires a huge tower of knowledge about a whole variety of areas of mathematics. Multivariate limits are particularly tricky—with the main takeaway from many textbooks basically being “it’s hard to get them right”. Well, in 11.2, thanks to our new algorithms (and with a lot of support from our algebra, functional analysis and geometry capabilities), we’re finally able to correctly do a very wide range of multivariate limits—saying whether there’s a definite answer, or whether the limit is provably indeterminate.

Version 11.2 also introduces two other convenient mathematical constructs: MaxLimit and MinLimit (sometimes known as lim sup and lim inf). Ordinary limits have a habit of being indeterminate whenever things get funky, but MaxLimit and MinLimit have definite values, and are what come up most often in applications.

So, for example, there isn’t a definite ordinary limit here:

Limit[Sin[x] + Cos[x/4], x -> \[Infinity]]

But there’s a MaxLimit, which turns out to be a complicated algebraic number:
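To illustrate the distinction with a simpler case (the values follow from the lim sup and lim inf of the sine function):

```wolfram
Limit[Sin[x], x -> Infinity]     (* Indeterminate *)
MaxLimit[Sin[x], x -> Infinity]  (* 1 *)
MinLimit[Sin[x], x -> Infinity]  (* -1 *)
```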

All Sorts of New Data

There’s always new data in the Wolfram Knowledgebase—flowing every second from all sorts of data feeds, and systematically being added by our curators and curation systems. The architecture of our cloud and desktop system allows both new data and new types of data (as well as natural language input for it) to be immediately available in the Wolfram Language as soon as it’s in the Wolfram Knowledgebase.

And between Version 11.1 and Version 11.2, there’ve been millions of updates to the Knowledgebase. There’ve also been some new types of data added. For example—after several years of development—we’ve now got well-curated data on all notable military conflicts, battles, etc. in history:

Another thing that’s new in 11.2 is greatly enhanced predictive caching of data in the Wolfram Language—making it much more efficient to compute with large volumes of curated data from the Wolfram Knowledgebase.

By the way, Version 11.2 is the first new version to be released since the Wolfram Data Repository was launched. And through the Data Repository, 11.2 has access to nearly 600 curated datasets across a very wide range of areas. 11.2 also now supports functions like ResourceSubmit, for programmatically submitting data for publication in the Wolfram Data Repository. (You can also publish data yourself just using CloudDeploy.)

There’s a huge amount of data and types of computations available in Wolfram|Alpha—that with great effort have been brought to the level where they can be relied on, at least for the kind of one-shot usage that’s typical in Wolfram|Alpha. But one of our long-term goals is to take as many areas as possible and raise the level even higher—to the point where they can be built into the core Wolfram Language, and relied on for systematic programmatic usage.

In Version 11.2 an area where this has happened is ocean tides. So now there’s a function TideData that can give tide predictions for any of the tide stations around the world. I actually found myself using this function in a recent livecoding session I did—where it so happened that I needed to know daily water levels in Aberdeen Harbor in 1913. (Watch the Twitch recording to find out why!)

GeoImage

GeoGraphics and related functions have built-in access to detailed maps of the world. They’ve also had access to low-resolution satellite imagery. But in Version 11.2 there’s a new function GeoImage that uses an integrated external service to provide full-resolution satellite imagery:

I’ve ended up using GeoImage in each of the two livecoding sessions I did just recently. Yes, in principle one could go to the web and find a satellite image of someplace, but it’s amazing what a different level of utility one reaches when one can programmatically get the satellite image right inside the Wolfram Language—and then maybe feed it to image processing, or visualization, or machine-learning functions. Like here’s a feature space plot of satellite images of volcanos in California:

We’re always updating and adding all sorts of geo data in the Wolfram Knowledgebase. And for example, as of Version 11.2, we’ve now got high-resolution geo elevation data for the Moon—which came in very handy for our recent precision eclipse computation project.

Visualization

One of the obvious strengths of the Wolfram Language is its wide range of integrated and highly automated visualization capabilities. Version 11.2 adds some convenient new functions and options. An example is StackedListPlot, which, as its name suggests, makes stacked (cumulative) list plots:

StackedListPlot[RandomInteger[10, {3, 30}]]

There’s also StackedDateListPlot, here working with historical time series from the Wolfram Knowledgebase:

One of our goals in the Wolfram Language is to make good stylistic choices as automatic as possible. And in Version 11.2 we’ve, for example, added a whole collection of plot themes for AnatomyPlot3D. You can always explicitly give whatever styling you want. But we provide many default themes. You can pick a classic anatomy book look (by the way, all these 3D objects are fully manipulable and computable):

3D Computational Geometry

The Wolfram Language has very strong computational geometry capabilities—that work on both exact surfaces and approximate meshes. It’s a tremendous algorithmic challenge to smoothly handle constructive geometry in 3D—but after many years of work, Version 11.2 can do it:

And of course, everything fits immediately into the rest of the system:

Volume[%]

More Audio

Version 11 introduced a major new framework for large-scale audio processing in the Wolfram Language. We’re still developing all sorts of capabilities based on this framework, especially using machine learning. And in Version 11.2 there are a number of immediate enhancements. There are very practical things, like built-in support for AudioCapture under Linux. There’s also now the notion of a dynamic AudioStream, whose playback can be programmatically controlled.

Capture the Screen

The Wolfram Language tries to let you get data wherever you can. One capability added for Version 11.2 is being able to capture images of your computer screen. (Rasterize has been able to rasterize complete notebooks for a long time; CurrentNotebookImage now captures an image of what’s visible from a notebook on your screen.) Here’s an image of my main (first) screen, captured as I’m writing this post:

CurrentScreenImage[1]

Of course, I can now do computation on this image, just like I would on any other image. Here’s a map of the inferred “saliency” of different parts of my screen:

ImageSaliencyFilter[CurrentScreenImage[1]]//Colorize

Language Features

Part of developing the Wolfram Language is adding major new frameworks. But another part is polishing the system, and implementing new functions that make doing things in the system ever easier, smoother and clearer.

Here are a few functions we’ve added in 11.2. The first is simple, but useful: TakeList—a function that successively takes blocks of elements from a list:
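For example, taking successive blocks of lengths 2, 3 and 5 from a list of the first ten integers:

```wolfram
TakeList[Range[10], {2, 3, 5}]
(* {{1, 2}, {3, 4, 5}, {6, 7, 8, 9, 10}} *)
```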

Here’s a very different kind of new feature: an addition to Capitalize that applies the heuristics for capitalizing “important words” to make something “title case”. (Yes, for an individual string this doesn’t look so useful; but it’s really useful when you’ve got 100 strings from different sources to make consistent.)

Capitalize["a new kind of science", "TitleCase"]

Talking of presentation, here’s a simple but very useful new output format: DecimalForm. Numbers are normally displayed in scientific notation when they get big, but DecimalForm forces “grade school” number format, without scientific notation:

Table[16.5^n, {n, 10}]

DecimalForm[Table[16.5^n, {n, 10}]]

Another language enhancement added in 11.2—though it’s really more of a seed for the future—is TwoWayRule, input as <->. Ever since Version 1.0 we’ve had Rule (->), and over the years we’ve found Rule increasingly useful as an inert structure that can symbolically represent diverse kinds of transformations and connections. Rule is fundamentally one-way: “left-hand side goes to right-hand side”. But one also sometimes needs a two-way version—and that’s what TwoWayRule provides.

Right now TwoWayRule can be used, for example, to enter undirected edges in a graph, or pairs of levels to exchange in Transpose. But in the future, it’ll be used more and more widely.
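For instance, a two-way rule can specify the pair of levels to exchange in Transpose (here swapping levels 2 and 3 of a 2×3×4 array):

```wolfram
(* exchanging levels 2 and 3 turns dimensions {2, 3, 4} into {2, 4, 3} *)
Dimensions[Transpose[ConstantArray[0, {2, 3, 4}], 2 <-> 3]]
(* {2, 4, 3} *)
```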

Graph[{1 <-> 2, 2 <-> 3, 3 <-> 1}]

11.2 has all sorts of other language enhancements. Here’s an example of a somewhat different kind: the functions StringToByteArray and ByteArrayToString, which handle the somewhat tricky issue of converting between raw byte arrays and strings with various encodings (like UTF-8).
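A quick round trip makes the encoding issue concrete (the two-byte UTF-8 sequence for “ï” is visible in the byte array):

```wolfram
bytes = StringToByteArray["naïve", "UTF-8"];
Normal[bytes]                      (* {110, 97, 195, 175, 118, 101} *)
ByteArrayToString[bytes, "UTF-8"]  (* "naïve" *)
```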

Initialization & System Operations

How do you get the Wolfram Language to automatically initialize itself in some particular way? All the way from Version 1.0, you’ve been able to set up an init.m file to run at initialization time. But finally now in Version 11.2 there’s a much more general and programmatic way of doing this—using InitializationValue and related constructs.

It’s made possible by the PersistentValue framework introduced in 11.1. And what’s particularly nice about it is that it allows a whole range of “persistence locations”—so you can store your initialization information on a per-session, per-computer, per-user, or also (new in 11.2) per-notebook way.

Talking about things that go all the way to Version 1.0, here’s a little story. Back in Version 1.0, Mathematica (as it then was) pretty much always used to display how much memory was still available on your computer (and, yes, you had to be very careful back then because there usually wasn’t much). Well, somewhere along the way, as virtual memory became widespread, people started thinking that “available memory” didn’t mean much, and we stopped displaying it. But now, after it’s been gone for 25+ years, modern operating systems have let us bring it back—and there’s a new function MemoryAvailable in Version 11.2. And, yes, for my computer the result has gained about 5 digits relative to what it had in 1988:

MemoryAvailable[]

Unified Asynchronous Tasks

There’ve been ways to do some kinds of asynchronous or “background” tasks in the Wolfram Language for a while, but in 11.2 there’s a complete systematic framework for it. There’s a thing called TaskObject that symbolically represents an asynchronous task. And there are basically now three ways such a task can be executed. First, there’s CloudSubmit, which submits the task for execution in the cloud. Then there’s LocalSubmit, which submits the task to be executed on your local computer, but in a separate subkernel. And finally, there’s SessionSubmit, which executes the task in idle time in your current Wolfram Language session.

When you submit a task, it’s off getting executed (you can schedule it to happen at particular times using ScheduledTask). The way you “hear back” from the task is through “handler functions”: functions that are set up when you submit the task to “handle” certain events that can occur during the execution of the task (completion, errors, etc.).

There are also functions like TaskSuspend, TaskAbort, TaskWait and so on, that let you interact with tasks “from the outside”. And, yes, when you’re doing big machine-learning trainings, for example, this comes in pretty handy.
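As a small sketch of the framework (the particular computation here is just for illustration), a scheduled task can run in idle time of the current session, and be removed when no longer needed:

```wolfram
(* a sketch: print a timestamp every 5 seconds in idle time of this session *)
task = SessionSubmit[ScheduledTask[Print[DateString[]], Quantity[5, "Seconds"]]];

(* ...later, stop and remove the task *)
TaskRemove[task]
```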

Connectivity

We’re always keen to make the Wolfram Language as connected as it can be. And in Version 11.2 we’ve added a variety of features to achieve that. In Version 11 we introduced the Authentication option, which lets you give credentials in functions like URLExecute. Version 11 already allowed for PermissionsKey (a.k.a. an “app id”). In 11.2 you can now give an explicit username and password, and you can also use SecuredAuthenticationKey to provide OAuth credentials. It’s tricky stuff, but I’m pleased with how cleanly we’re able to represent it using the symbolic character of the Wolfram Language—and it’s really useful when you’re, for example, actually working with a bunch of internal websites or APIs.

Back in Version 10 (2014) we introduced the very powerful idea of using APIFunction to provide a symbolic specification for a web API—that could be deployed to the cloud using CloudDeploy. Then in Version 10.2 we introduced MailReceiverFunction, which responds not to web requests, but instead to receiving mail messages. (By the way, in 11.2 we’ve considerably strengthened SendMail, notably adding various authentication and address validation capabilities.)

In Version 11, we introduced the channel framework, which allows for publish-subscribe interactions between Wolfram Language instances (and external programs)—enabling things like chat, as well as a host of useful internal services. Well, in our continual path of automating and unifying, we’re introducing in 11.2 ChannelReceiverFunction—which can be deployed to the cloud to respond to whatever messages are sent on a particular channel.

In the low-level software engineering of the Wolfram Language we’ve used sockets for a long time. A few years ago we started exposing some socket functionality within the language. And now in 11.2 we have a full socket framework. The socket framework supports both traditional TCP sockets, as well as modern ZeroMQ sockets.

External Programs

Ever since the beginning, the Wolfram Language has been able to communicate with external C programs—using its native WSTP (Wolfram Symbolic Transfer Protocol) for symbolic expression transfer. Years ago J/Link and .NET/Link enabled seamless connection to Java and .NET programs. RLink did the same for R. Then there are things like LibraryLink, which allows direct connection to DLLs—or RunProcess for running programs from the shell.

But 11.2 introduces a new form of external program communication: ExternalEvaluate. ExternalEvaluate is for doing computation in languages which—like the Wolfram Language—support REPL-style input/output. The first two examples available in 11.2 are Python and NodeJS.

Here’s a computation done with NodeJS—though this would definitely be better done directly in the Wolfram Language:

ExternalEvaluate["NodeJS", "Math.sqrt(50)"]

Here’s a Python computation (yes, it’s pretty funky to use & for BitAnd):

ExternalEvaluate["Python", "[ i & 10 for i in range(10)]"]
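For reference, here is the same computation in native Python: the comprehension masks each integer from 0 to 9 with the bit pattern of 10 (binary 1010):

```python
# Bitwise AND of each i in 0..9 with 10 (binary 1010),
# matching the ExternalEvaluate call above
result = [i & 10 for i in range(10)]
print(result)  # [0, 0, 2, 2, 0, 0, 2, 2, 8, 8]
```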

Of course, the place where things start to get useful is when one’s accessing large external code bases or libraries. And what’s nice is that one can use the Wolfram Language to control everything, and to analyze the results. ExternalEvaluate is in a sense a very lightweight construct—and one can routinely use it even deep inside some piece of Wolfram Language code.

There’s an infrastructure around ExternalEvaluate, aimed at connecting to the correct executable, appropriately converting types, and so on. There’s also StartExternalSession, which allows you to start a single external session, and then perform multiple evaluations in it.
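For example, state persists across evaluations in a single external session:

```wolfram
session = StartExternalSession["Python"];
ExternalEvaluate[session, "x = 41"];
ExternalEvaluate[session, "x + 1"]  (* 42: x persists between evaluations *)
DeleteObject[session]               (* close the external session *)
```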

The Whole List

So is there still more to say about 11.2? Yes! There are lots of new functions and features that I haven’t mentioned at all. Here’s a more extensive list:

But if you want to find out about 11.2, the best thing to do is to actually run it. I’ve actually been running pre-release versions of 11.2 on my personal machines for a couple of months. So by now I’m taking the new features and functions quite for granted—even though, earlier on, I kept on saying “this is really useful; how could we have not had this for 30 years?”. Well, realistically, it’s taken building everything we have so far—not only to provide the technical foundations for 11.2, but also to seed the ideas. But now our work on 11.2 is done, and 11.2 is ready to go out into the world—and deliver the latest results from our decades of research and development.

I don’t mean to spoil anything, but are the population graphs quite right? It seems odd to me that Japan is on top while China is on the bottom. I think the labelling may have messed up and reversed the order.

Nice find Eric, we thought the same thing at first! The graph is cumulative as it ascends. The first line represents the population of China, while the second represents the population of India and China combined, with the space between representing just India’s population. This continues up to the least populous country, that being Japan. Hope this helps!

TERRIFIC minor release! I can’t believe what can be done on “all commercial platforms that can run 11.2” now! It ties so many things together in ways nothing else would without losing the top end of things: Math!

Wolfram Player release for iOS next week is a landmark event. Since Theo’s demo on an iPad 2 back in 2012, there have been huge improvements in CPU (A5 vs. A10X or A11), GPU, main memory (512 MB vs. 2, 3, or 4 GB), screen resolution, Metal interface, etc. Those new mobile machines should be at least 10x faster, and probably more like 40x faster, than that retired iPad 2 platform. Even the A9 performance on the 2017 iPad—the “slow” one most likely found in schools—will be a blazing improvement on the 2012 A5 processor.

This is an interesting time for a major app release. I fondly hope that AAPL populates their demo iPhone 8 and X models and iPads in Apple Stores with your software. I look forward to some pixel-kicking Wolfram Language demos running on those machines. Mathematica is so different from anything available on these machines — it will take some hands-on work for the masses to visualize what they have.

Just want to know how to update.
Currently, I am running Mathematica 11.1, which stays as it was when I purchased it.
The calculation of matrix rank (by MatrixRank) is extremely slow.
I hope the new update will solve this problem. But the software neither updates automatically when it starts, nor is there a button to trigger the update.
Would you please tell me where to get this 11.2 update?

Nice, though some of the natural language processing is still a bit frustrating. The network is kind of slow, as I expect everyone is testing it out so soon after the announcement. And since NKS is mentioned here: the app for NKS is listed for extinction by Apple because (I presume) it is a 32-bit app. I like to reread NKS, so will there be a replacement?

It’s always nice to work in Wolfram Mathematica. And Version 11.2 seems so intuitive. Great job, developers. Looking forward to Mathematica 12.0.
P.S.: Are you planning to launch an Android version of Mathematica?

This is an exceptional and amazing release. I am an avid fan of Wolfram Knowledge and its contribution to a model way of learning for the upcoming stars of our new generation. Keep up the great work. Thank you!

So, since you mention integer and .1 releases… any timeline for when Version 12 will be coming out?

I’m considering upgrading, but kind of don’t want to upgrade today and have the next “major” release come out in like 3-6 mos and kick myself for not waiting. Seems like the last “major” release came out in 2016. Which makes me think the next “major” release will come out soon. Do you guys post predicted milestones / release dates ahead of time, so people can have some idea of whether to buy or hold off briefly? Or do you just kind of drop it on people?

P.S. Love the addition of ImageRestyle[]. Been waiting for pretty much exactly this for a while. May have even “suggested” it at some point. Happy to see it implemented… Really makes me want to upgrade. Just want to upgrade to the most current version, that’ll be current for a while. So, I’m hoping v12 if/when available. If any idea when that will be…

Hello MGmirkin,
Most people upgrade coupled with Premier Service. This provides free upgrades for 12 months, mitigating your concern about missing out on the next major release. Please contact Sales at info@wolfram.com and they can assist you with the best pricing. As for when the next version is to be released: we wish we knew! Such is the nature of doing things never been done before.

Trying to work with 3D regions keeps being a frustrating experience. When RegionIntersection was introduced in Version 10, it was pretty frustrating to discover that it was just not implemented for 3D.
Now it is claimed to work in 3D as well, but unfortunately, it seems to work only for carefully selected and tweaked examples.
Have a look at the blog contribution above, where a Menger mesh is intersected with a ball.
It works only if the ball is wrapped with BoundaryDiscretizeRegion; otherwise just a trivial transform to BooleanRegion is returned.
Even when I use the example with BoundaryDiscretizeRegion but increase the degree of the Menger mesh to 3, 11.2 fails and just returns the trivial BooleanRegion.
Keeps on being frustrating.

What was added in Version 11.2 is support for Boolean operations on MeshRegion and BoundaryMeshRegion in 3D. And there will be additional improvements in Version 11.3 as well. The case of mixing an exact region like Ball[] with an approximate region like a BoundaryMeshRegion does not auto-discretize currently. And it shouldn’t, as that would mean you have no chance of getting a correct result when computing on top of these regions. What will happen, however, is auto-visualization, which uses a lighter discretization for visualization purposes that is quite different from the discretization used for computation.