Steve Eddins has developed MATLAB and image processing capabilities for MathWorks since 1993. He also coaches development teams on designing programming interfaces for engineers and scientists. Steve coauthored Digital Image Processing Using MATLAB.

The first MathWorks general product release of the year, R2013a, shipped a couple of months ago. I've already mentioned it once here in my 12-Mar-2013 post about the new MATLAB unit test framework.

With each new release, I peruse the release notes for MATLAB to see what things I find particularly interesting. (This helps me remember which product features have actually been released, as opposed to still being in development. My memory needs all the help it can get.)

The first thing to note is the reappearance of the table of contents for navigating in the Help Browser and in the online Documentation Center. This is a direct result of helpful feedback we received from many of you about the R2012b release.

My favorite "make-it-go-faster-without-sacrificing-accuracy" people (the MATLAB Math Team, that is) have been busy again. People with computers based on Intel or AMD chips using the AVX instruction set should see their calls to fft speed up. Anybody running permute on 3-D or higher-dimensional arrays should also get a nice boost. I've done a lot of development work related to image and scientific format support, so I know that a fast permute can be pretty useful when reading image and scientific data. That's because most of these formats store array elements in the file in a different order than MATLAB uses in memory.
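As a sketch of why a fast permute matters here: many image formats store an RGB image band-interleaved by pixel (3-by-width-by-height), while MATLAB image functions expect height-by-width-by-3. The sizes and data below are just stand-ins for real file data:

```matlab
% Suppose raw data read from a file is ordered 3-by-columns-by-rows
% (band-interleaved by pixel), but MATLAB image functions expect
% rows-by-columns-by-3. A single permute reorders the dimensions.
raw = randi(255, [3 640 480], 'uint8');   % stand-in for data read from a file
img = permute(raw, [3 2 1]);              % now 480-by-640-by-3
size(img)                                 % returns [480 640 3]
```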

In the small-but-nice category, the MATLAB Math Team also simplified a common programming pattern in my own neck of the woods (image processing). Specifically, it's a bit easier to initialize an array of zeros or ones whose type is based on an existing array. Here's an example to illustrate:
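(The example itself appears to have been dropped here; the following is a sketch of the new 'like' syntax, with a placeholder image variable:)

```matlab
% Suppose rgb is an existing uint8 image array. The new 'like' syntax
% creates an array of zeros with the same class as rgb, without having
% to spell out the class name.
rgb = uint8(magic(8));            % stand-in for a real uint8 image
B = zeros(100, 100, 'like', rgb); % B is uint8, matching rgb
class(B)                          % returns 'uint8'
```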

My developer friend Tom Bryan really "likes" this (ahem) because it enables much easier solutions to some common programming tasks for users of Fixed-Point Designer.

I have occasionally done a little web scripting in MATLAB, so it's nice to see urlread and urlwrite get a little love. These functions can now handle basic authentication via the 'Authentication', 'Username', and 'Password' parameters.
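A minimal sketch of how those parameters fit together (the URL and credentials here are placeholders, not a real endpoint):

```matlab
% Fetch a page protected by HTTP basic authentication.
str = urlread('http://example.com/protected/data.txt', ...
    'Authentication', 'Basic', ...
    'Username', 'myuser', ...
    'Password', 'mypassword');
```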

Do you use a Mac? You can now write MPEG-4 H.264 files using VideoWriter (requires Mac OS 10.7 or later).
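A sketch of the usual VideoWriter pattern with the new profile (the file name and frame content are just for illustration):

```matlab
% Write a short MPEG-4 (H.264) file using the new 'MPEG-4' profile.
v = VideoWriter('peaks.mp4', 'MPEG-4');
open(v);
Z = peaks;
for k = 1:20
    surf(sin(2*pi*k/20)*Z, Z);
    writeVideo(v, getframe(gcf));
end
close(v);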

A couple of handy new string functions have appeared, strsplit and strjoin. Based on how often users have submitted their own versions to the MATLAB Central File Exchange, I'm sure these will be popular.

out = strsplit(pwd,'\')

out = 

    'B:'    'published'    '2013'
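And strjoin is the inverse operation; a quick sketch:

```matlab
% Reassemble the pieces with a different delimiter.
parts = {'B:', 'published', '2013'};
s = strjoin(parts, '/')    % returns 'B:/published/2013'
```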

You can now do extrapolation with both scattered and gridded interpolation. For extrapolation with scattered interpolation, use the new scatteredInterpolant. Here's an example I lifted from the doc.

Query the interpolant at a single point outside the convex hull using nearest neighbor extrapolation.

Define a matrix of 200 random points.

P = -2.5 + 5*gallery('uniformdata',[200 2],0);

Sample an exponential function. These are the sample values for the interpolant.

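The rest of the example seems to have been cut off here; a hedged reconstruction of the remaining steps (the sample function and query point are my own fill-ins, not necessarily the doc's):

```matlab
% Sample values at the 200 random points in P.
x = P(:,1);
y = P(:,2);
v = x.*exp(-x.^2 - y.^2);

% Create the interpolant with linear interpolation and nearest-neighbor
% extrapolation, then query a point outside the convex hull of P.
F = scatteredInterpolant(P, v, 'linear', 'nearest');
vq = F(3.0, 1.5)
```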

Comments

Hi Steve,
The addition of the ability to typecast with ‘like’ initially had me scratching my head, wondering what the fuss was about. After all, it’s no easier to write

B = zeros(100,100,'like',rgb);

than it is to write

B = zeros(100,100,class(rgb));

But digging in to the doc a bit, I see that Tom’s excitement probably stems more from the rest of what ‘like’ does than just from its class conversion capacity. In particular, ‘like’ reflects complexity (real vs complex) and sparsity, in addition to class. So if, for instance, I created a sparse, complex 3×3 matrix:

A = sparse(complex(rand(3)));

I can now readily create a matrix of zeros that is also sparse and complex. And, by passing size(A) as an additional input, one that is also 3×3:

B = zeros(size(A),'like',A);

In this case, both A and B are _type double_; sparse matrices are required to be of that class. But one could also use ‘like’ to reflect “complex + single,” for example.

Brett—Good observation. Another reason why Tom really likes this for working with fixed-point data is that the ‘like’ syntax picks up the fixed-point parameters (word length, fraction length, signed/unsigned) of the input prototype value.

Sean,
I’m not sure why assignment to B (implicitly using the default class, double) is so much slower than in the second case above. But I’d point out that if you instead _explicitly_ specified the class, using either:

B = zeros(900,900,class(A));

or

B = zeros(900,900,'double');

you would see very different behavior. (On my computer, these syntaxes are both faster than 'like','A'.)

Another strong motivation for using the ‘like’ syntax is that it allows other numeric-like types to work with your code. Examples of these are gpuArray and distributed in Parallel Computing Toolbox. Imagine that you want to write an algorithm that you know can run on both the CPU and the GPU, and you want to write it just once. However, you need to create some arrays for use during the algorithm (for example, an array of ones). How would you do that today, such that no matter what type your input array was, your array of ones matched it?

Prior to ‘like’ you would have to understand both the underlying type of the array (gpuArray, fi, distributed, etc.) and how to create an array of that type. Now you can simply write

In this way the code you write can be resilient to many different types of numeric input, and if we design some new numeric type in the future (which supports ‘like’) your code will work with that new type as well!