In studying physics, asking very simple questions often puts one at the forefront of current research and, indeed, opens whole new research areas. The field of metamaterials was developed over the last decade by posing fundamentally simple queries: can light refract negatively? What are the basic limits on the wavelength of light inside materials? What would happen if different components of the electric field vector in a light beam experienced a radically different electromagnetic environment?

As one digs further, the questions become increasingly complex and specific; not all of them will have good answers, and not all of them will even be good questions. It is easy to start getting lost and discouraged. When this happens, you can run back up the rabbit hole to the gleaming edifice of one of the basic equations. For me, this refuge is the manifestly covariant form of Maxwell's electrodynamics.

I have worked with Maxwell's equations for many years, in many different forms. Down there, in the rabbit hole, they are in their work attire, sporting unsightly divergences and curls, with constituent parameters dangling awkwardly from the field components. But here, out in the open, they are barely recognizable, dressed up as tensors and 4-vectors, conversing in classical Greek. You'd be a fool to mistake the Levi-Civita symbol for the dielectric constant. In this form, they are the equations of Einstein and Feynman, they are shining peaks in the landscape of modern physics. And yet, they are also my equations. I'm adding my strokes, however tentative and insignificant, alongside those of the greats. This gives me the inspiration to carry on.

Back down the rabbit hole I go: I'm on a mission. Optics has always attracted me by the ease with which fundamental electromagnetic and quantum mechanical abstractions become reified -- often, in pretty colors -- both in a lab setting and in many devices that have revolutionized our world. And yet, many beautiful designs and elegant ideas are doomed to failure, due to inherent limitations of optical materials. This revolts the avowed idealist in me.

The world of applied physics is rife with trade-offs. Indeed, many are codified in the fundamental laws, such as the Heisenberg uncertainty principle. Yet most limitations are mere caprices of Nature, which tailors material parameters to its fickle, oft-inscrutable specifications. Metamaterials offer a tantalizing escape from the status quo. By custom-designing material properties, we can strike down some of the vexing compromises that limit performance and capabilities of optical devices. My mission in the rabbit hole that became my Ph.D. thesis is to eliminate the compromise that is holding back all of nanophotonics.

Many of the trade-offs in nanophotonics involve the fundamental differences between metals and dielectrics. These differences make metals appealing as short-wavelength waveguides, or emission enhancers, yet woefully unsuitable for many other applications. Dielectrics suffer from a similar fate, in reverse. Can we create a material that would behave both like a metal and a dielectric? Yes. Such materials are called hyperbolic, and they can be fabricated using modern metamaterials techniques. Some rare examples of hyperbolic materials can even be found in nature, but this is of dubious benefit to the contemporary metamaterials ideologue.

Indeed, we are past the age when our building materials were logs and mud, and we are past the age when circuit switching elements operated by thermionic emission. In the 21st century, we should get past the age when we rely on a few serendipitously found crystals to determine what we can and cannot do with photons. To be sure, future metamaterials engineers might still need to emerge from the rabbit hole to comprehend the unblemished covariant beauty of Maxwell's equations. But now, their exact form inside materials will result from our manifest destiny rather than from an accident of fate.

It's unfortunate that this blog is turning into rants about scientific computing. Perhaps I am simply ethnically predisposed to doing science and kvetching.

Today's rant is brought about by my collaborators' request to make my paper figures "presentable". The reason is simple -- current figure drafts are straight up Mathematica output, relying mostly on defaults, and they suck. Let's face it -- figures produced by Mathematica can be good, but rarely great, and trying to fine-tune the appearance of frames and axes can be a daunting task.

The problem isn't that Mathematica sucks at plots -- so does most other software. The problem is that to my knowledge there are no reasonable open-source alternatives to something like Origin. So right now, if you are unsatisfied with Mathematica's plotting capabilities, your options come down to this:

- get as far as you easily can with Mathematica, then switch to Illustrator and Photoshop. Pros: guaranteed to work. Cons: time-consuming, labor-intensive, and requires Illustrator, Photoshop, and Windows.

- import data into one of {R, Matlab, Python/Matplotlib} and hope it can do what you are after. Pros: you might get the output you want. Cons: you need to know R, Matlab, or Python.
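For what it's worth, here's a minimal sketch of option two in matplotlib (the data and filename are made up; the labels use mathtext, so no external TeX install is required):

```python
# A minimal publication-style figure in matplotlib; mathtext gives
# TeX-ish labels without requiring a LaTeX installation.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(4, 3))
ax.plot(x, np.sin(x) ** 2, lw=1.5)
ax.set_xlabel(r"$\omega t$")
ax.set_ylabel(r"$\sin^2(\omega t)$")
fig.tight_layout()
fig.savefig("figure_draft.png", dpi=300)
```

Whether the result counts as "presentable" is, of course, in the eye of the collaborator.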

What shocks me is that in this day and age, it seems that every few years, somebody sits down to write a plotting package, and ends up reinventing the wheel. So you have a bunch of wheels, none perfectly circular; there are all sorts of bumps and protrusions, all in different places. Within R, for instance, there are at least 3 different ways to create plots (base graphics, ggplot2, lattice), and more are being created (e.g. jjplot). However, despite all the man-hours spent, we still don't have such basic things as TeX integration. In a software package that claims to be the premier graphics solution for a statistician/applied mathematician! The syntax to add formulas to plots is revolting. Even Excel can do better.

My Mathematica adventures continue. I am slowly hammering out a good way to manage namespaces and scoping. I was hoping to write a set of concise utility functions for that purpose, but ran into severe roadblocks. Certain things related to switching contexts in the front end turn out to be impossible to do programmatically from the kernel -- the workaround is to manipulate the front end from the kernel, essentially spawning and evaluating cells. Trying to do this resulted in a bunch of unexplained and unanticipated behavior, meaning my utility functions remain in the "almost but not quite" state of readiness. I don't intend to work on that any longer, since I already have a set of (more verbose) commands at my disposal that do what I need.

I did, however, finally ask the question on Mathgroup about why there isn't a scoping construct that would effectively shield every symbol inside of it by default. I've been meaning to ask this for years. I didn't get a direct answer, but some insightful comments did arise on that thread -- in particular, David Park discussed f[a_,b_][x_] := ... and With[{a=...,b=...}, f[x_]:=...] idioms for managing constant parameters, and Leonid Shifrin suggested the possibility of using a `Private` subcontext in a Begin["Context`"];...;End[] construct. That's a good idea that might come in handy at some point.
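Leonid's suggestion, roughly, as I plan to use it (context and function names here are made up; this works as shown in a fresh session -- pre-existing Global` symbols would still shadow):

```mathematica
Begin["MyCode`"];
  Begin["`Private`"];        (* new symbols now land in MyCode`Private` *)
    helper[x_] := x^2;
  End[];                     (* back in MyCode` *)
  answer[x_] := `Private`helper[x] + 1;
End[];                       (* back in Global` *)

MyCode`answer[3]             (* evaluates to 10 *)
```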

But a most curious thing happened to my message on the way to the Mathgroup subscribers. You can read the original message below. Twice in the body of the message I draw comparisons with my other workhorses -- Matlab and R. If you now search for my message on the Mathgroup archives or Google Groups, you will see that the mention of Matlab has disappeared, leaving R sad and alone.

Mathgroup is moderated, yet I can't quite fathom the rationale for that peculiar redaction. But it's fun to hypothesize that there's some sort of cold war going on between Mathworks and Wolfram, with hackers and secret agents, poison-tipped umbrellas in Boston, and car bombings outside sultry cafes in Champaign, IL. Which brings up an interesting question -- to whose posse do I belong?.. Am I on the L-shaped membrane eigenmode crew, or are hyperbolic dodecahedrons my homies?.. As of late, I am spending lots of time with both, so I'd consider myself a double agent of sorts. If history is any indication, though, I am much more likely to flame Mathematica -- although I've become less aggressive after I RTFMed the core docs and bits of the Mathematica book. The text of my message to Mathgroup follows.

Dear Mathematica gurus,

One of the things that initially made Mathematica difficult for me to use was scoping -- in particular, the fact that all symbols by default appear in the global namespace. Even though this is the default behavior for interactive evaluation in many packages, e.g. Matlab and R, in Mathematica it leads to a greater potential for errors, because unlike those languages, in Mathematica (1) a symbol can have multiple DownValues, and (2) if one forgets to explicitly localize a symbol inside a scoping construct, it may silently be taken from the global namespace.

After many years I finally figured out a (more or less) clean way to structure my code and workflow, through a combination of defining modules, contexts, packages, and careful use of Clear and Remove.

I still wonder, however, why there isn't a construct similar to Module that would define a unique private context for _all_ symbols within the construct (i.e. without having to declare them in a list). You can kind of simulate this behavior by using Begin["MyCont`"] together with redefining $ContextPath temporarily to only have "MyCont`" and "System`". This is obviously too verbose to be of practical use, but I do wonder why there isn't a built-in construct.

I suppose my question is -- is there a deep wisdom behind its absence, or perhaps I am an anomaly in thinking that such behavior (automatic lexical scoping for symbols in subroutines, present in Matlab, R, and many others) would be incredibly handy?..

I come back from California sporting a tie-dyed t-shirt with a peace sign, with good Chinese food, bubble tea, and Yogurtland soft-serve still having the last of their nutrients extracted by my system, and with many conversations lingering in my mind.

I can still see the faces of my interlocutors and hear fragments of their remarks. Yet when I think harder and try to place those fragments within some sort of a larger context -- one that would paint a more complete picture of the person or convey a self-contained idea -- I get lost. I suppose it's hard to get into people's heads, figure out their beliefs, thoughts, and feelings, and then paint a verbal snapshot. It's doubly hard if you are an 83% I on Myers-Briggs. Perhaps next time I will try to listen as if I were a journalist seeking a coherent sound bite, or an author trying to develop a character. Then again: the human psyche is an incredibly complex function, full of hidden variables, feedback loops, and nonlinear responses. It takes great talent to accurately sample it and then translate it into words. (Admittedly, if you are a TV journalist, you can always grab a couple of random points and draw a straight line. If you are a Fox journalist, one point is sufficient.) But that all is for next time. Right now I'll be content with whatever verbal and visual snapshots are still imprinted in my mind.

"...The Western society has completely degraded. Take a look at France -- they are nothing but a banda pidorov [roughly: a pack of degenerates]. I can't wait to get out of this country." E. proceeded to light a cigarette and talk about the awesome Slavic music + electronica parties he throws back in upstate NY. Hanging out with him led to a sharp increase in beer consumption, an appreciation for the wealth of practical scientific knowledge chemists seem to harbor, and a touch of jealousy at how easily he picks up girls.

"...Look at those two -- they are like a pilot and the co-pilot." J. pointed at the young Russian prof and the old Russian prof sitting together at a desk in the front of the room. The young one chaired the session. The old one was there largely due to an overbearing feeling of self-importance. "These guys are the assholes in the business class," J. continued, looking at the front few rows. "The rest are sleeping in coach. If it were a real airplane someone would be groping a flight attendant in the back row." He looked at his watch and proceeded to another room, delivering one of his trademark cynical oral presentations to an audience of 15 people, of which five were asleep and five were staring at their laptops. The most impressive thing about J. is that he is a damn good scientist and is quite productive, all the while giving the impression that he doesn't give a shit.

...this post will have to be abbreviated due to our impending arrival into St. Louis. Who knows whether I'll find time to paint any more snapshots -- but I figure I'm doing quite well so far in terms of blog updates anyhow.

Reading a paper on "Electromagnetic Interactions of Molecules with Metal Surfaces". It explains Weyl's method for computing fields of a dipole over half-space (apparently much simpler than Sommerfeld's solution).

Oh right, spherical harmonics... dipole moment (in general, lowest non-vanishing multipole moment) is independent of origin if net charge is zero... You technically have to include a delta-function into coordinate-free expression for dipole fields... Same derivation states the mean value theorem for the fields... Hmmm, I once learned all those theorems (Browse the web for a bit)

-- oh right, mean value theorem for the potentials... and Earnshaw's theorem... Ok, back to the paper...

Paper expresses the dipole as a current source. Hmm, what's a current source?.. (Browse through Jackson chapter 5.)

Oh right, Biot-Savart law (Jackson claims it doesn't have a standalone meaning expressed via differential elements but has to be integrated... and to connect it w/ B fields of a single moving charge is a difficult problem involving relativity) --ok, integrate to get Ampere's law (B=...), take curl, 3 tricks; integrate by parts, get the curl B Maxwell's equation (w/ displacement current).
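In symbols, the chain I keep re-deriving (SI units; standard Jackson material, sketched from memory):

```latex
% Biot-Savart for a current distribution:
\mathbf{B}(\mathbf{x}) \;=\; \frac{\mu_0}{4\pi}\int
   \mathbf{J}(\mathbf{x}')\times\frac{\mathbf{x}-\mathbf{x}'}{|\mathbf{x}-\mathbf{x}'|^{3}}\;d^{3}x'
% take the curl, use \nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^{2}\mathbf{A},
% integrate by parts, and (for time-dependent fields) invoke continuity,
% \nabla\cdot\mathbf{J} = -\,\partial\rho/\partial t, to arrive at
\nabla\times\mathbf{B} \;=\; \mu_0\,\mathbf{J}
   \;+\; \mu_0\varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t}
```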

Ok, now how do we connect this to a radiating dipole?... (Browse forward towards Jackson chapter 9; get distracted by chapter 6:) Hmm, why do we use a Lorenz gauge?..

Oh right, it makes our wave equations for all components of the 4-potential look the same... Hmm, what's a gauge? (Browse Wikipedia.)

Oh right. If you take the expression for energy-momentum invariant (E^2-p^2c^2=m^2c^4) and put in operators, you get Klein-Gordon equation -- which can describe spinless scalar particles, but at the time physicists didn't realize such things existed, and the equation didn't give a satisfactory picture of the electrons (why exactly?...) so the search went on.
Dirac wanted something like a Klein-Gordon, but first order in space and time derivatives (why exactly?...) So what does he do?.. Well, he takes the square root of the operator-form E^2-p^2c^2 -- by prepending some anti-commuting 4x4 matrices to get rid of the cross-terms. It's so simple and clever. Just go to Wikipedia and look at the equation. And then it turns out that he can write his anti-commuting matrices using blocks of Pauli matrices -- which until then served as a phenomenological description of Stern-Gerlach. Then it turns out that you can write a very simple expression for the Dirac equation that involves Pauli matrices operating on spinors -- but here's the problem: when brought into the rest frame, one of those spinors is a negative-energy eigenstate. If such states are allowed, then why don't electrons decay into lower and lower energy states until they hit minus infinity?.. So Dirac postulated that the vacuum is actually a sea of electrons filling all the negative energy levels, and any observed negative energy state is just a hole. BUT -- just like in the electron/hole theory of charge carriers in solid state -- a hole would have to be positively charged!.. Dirac thought it might be a proton; Weyl was one of the first to suggest that maybe there exists a positively-charged electron. Weyl...
CRAP. I was supposed to figure out Weyl's dipole solution...
p.s. -- creation/annihilation formalism of QFT obviates the need for postulating Dirac's sea
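p.p.s. -- the chain above, in symbols (standard textbook material, just a reminder to myself):

```latex
% energy-momentum relation -> Klein-Gordon, via E \to i\hbar\,\partial_t,\ \mathbf{p} \to -i\hbar\nabla:
E^{2} - p^{2}c^{2} = m^{2}c^{4}
\quad\Longrightarrow\quad
\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}} - \nabla^{2}
      + \frac{m^{2}c^{2}}{\hbar^{2}}\right)\psi = 0
% Dirac's first-order "square root": find matrices with
% \{\alpha_i,\alpha_j\} = 2\delta_{ij},\ \{\alpha_i,\beta\} = 0,\ \beta^{2} = 1, so that
i\hbar\,\frac{\partial\psi}{\partial t}
   = \left(c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta\,mc^{2}\right)\psi
% squaring the Hamiltonian then reproduces E^{2} = p^{2}c^{2} + m^{2}c^{4} with no cross-terms
```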

After spending a substantial amount of time tweaking my system, it now works as well as I want it to, and I am hereby making a public promise not to touch my config files for a month. I want to be able to post a screenshot a month from now with a "last modified" timestamp corresponding to today. (The only exception is M-x / smex stuff in .emacs; if the author of the package fixes the broken fuzzy matching, I'll add it in. No other changes are allowed, though!)

So what went into the final round of updates?..

First, there were more tweaks to .bashrc and .inputrc. I turned on bash completion; here are some notes about this:

bash_completion is very handy, but has a few annoyances. First, the bash startup time increased to 2+ seconds. This was fixed by calling the completion functions dynamically -- the solution was found at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=467231; I saved the dyncomp.sh script locally.

To prevent the tilde from getting expanded: edit /etc/bash_completion, look for the function named _expand(), and comment out all of its code (but leave one dummy line, e.g. foo=bar, otherwise bash complains). If you used the dynamic completion replacement above, look for _expand as a separate file in bash_completion.d.
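After the surgery, what's left of the function looks something like this:

```shell
# the gutted _expand() from /etc/bash_completion -- the dummy assignment
# keeps bash from complaining about an empty function body
_expand()
{
    foo=bar
}
```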

For .inputrc I added several customizations -- set show-all-if-ambiguous on, history-search-{forward/backward}, etc. These things are handy; you might want to google for them if you don't already use them. I realized a short time later that any keybindings involving meta (e.g. "\M-o") failed to work in bash shell on Dreamhost. The problem is a buggy version of readline that breaks in unicode locale and the solution is to replace "\M-" with "\e" (e.g. "\eo").
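For reference, the relevant lines in my ~/.inputrc look something like this (the arrow-key bindings are one common choice; adjust to taste):

```
set show-all-if-ambiguous on
# note "\e" rather than "\M-" -- works around the readline/unicode bug above
"\e[A": history-search-backward
"\e[B": history-search-forward
```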

Ok, now let's get to the good stuff! Here it is: any time I work in a terminal, I can open Windows Explorer in the current directory, or in any directory I specify. I aliased this to a single key 'e'. It's handy! Conversely, if I am browsing a directory in Explorer, I can launch a terminal window in that directory using a single shortcut key (e.g. Win-T). And finally, if I am in Emacs, I can launch either Windows Explorer or the terminal in the directory of the current buffer (or current dired buffer) with Win-E and Win-T respectively. How does this magic work?

#!/usr/bin/bash
# Open explorer in the current directory or directory of the argument
#
# Unix/Windows path conversion -- could've done it w/ cygpath in hindsight:
# pwdWin=`pwd|perl -p -e 's/\/cygdrive\/(.)/\1:/; s/\//\\/g'`
# note that you have to double-escape special characters here
cygwin_root=/cygdrive/c/cygwin

if [ $# -eq 0 ]; then
    absPath="$(pwd -P)"
else
    absPath="$(realpath "$1")"
fi

# the following will always execute unless the path is printed as c:/...
# in which case we can proceed directly to replacement
if [ "${absPath:0:1}" = "/" ]; then
    # either have /cygdrive/c/..., or:
    # special case: starts with /, e.g. / or /usr
    base_dir=$(echo $absPath | awk -F/ '{print $2}')
    # when absPath=="/", so base_dir=="":
    if [ "$base_dir" = "" ] || [ "$base_dir" != "cygdrive" ]; then
        absPath=$cygwin_root$absPath
    fi
fi

winPath=$(echo $absPath | perl -p -e 's/\/cygdrive\/(.)/\1:/; s/\//\\/g')
explorer /e, "$winPath"

There are extra checks necessitated by idiosyncrasies of Cygwin paths. Ok, so this is standard bash scripting; no biggie. Now, how do we call terminal from Windows?.. This piece of magic requires several components.
The first is writing a program to launch the terminal (mrxvt in my case) in a particular directory. There are two ways to do this: VB script or a batch file. VB script is a more modern solution, and could look like this:
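A minimal VB script along these lines (the paths and mrxvt flags here are guesses, patterned on the Emacs launcher later in this post):

```vbscript
' Launch mrxvt in the directory passed as the first argument
' (paths and flags are assumptions -- adjust for your setup).
Set sh = WScript.CreateObject("WScript.Shell")
startDir = WScript.Arguments(0)
sh.Run "c:\cygwin\bin\run.exe /usr/local/bin/mrxvt -e /bin/bash --login -i -c " & _
       """cd " & startDir & "; exec bash;""", 0, False
```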

The key aspect that makes this work is the execution of the -c "cd [DIRECTORY]; exec bash;" command at the end of the chain run -> mrxvt -> bash. The run command, if you are curious, lets you launch commands that would otherwise spawn a terminal window. The exec bash is needed because otherwise bash exits after executing the -c string command.

It turns out that the batch file runs noticeably faster on my computer.

There is an alternative approach to specify the terminal's start directory, in addition to the bash -c [string] command line option. You can set an environment variable and have .bashrc check for its existence and cd accordingly. This has the advantage of being a tiny bit faster than the preceding approach, but because all environment variables get cached, subsequent tabs of the terminal (mrxvt) will open in the directory in which the first tab started, which might not be the desirable behavior.

To get this working, put set STARTINGDIRECTORY=%1 into the batch file, use the run command %RUN% /usr/local/bin/mrxvt -e /bin/bash --login -i, and put the corresponding cd check into .bashrc:
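The .bashrc end of it can be sketched like so (the variable name matches the batch file; the cygpath conversion is my addition, guarded so the snippet degrades gracefully where cygpath is absent):

```shell
# cd to the directory passed via the STARTINGDIRECTORY environment
# variable, if it is set (sketch of the .bashrc fragment described above)
cd_to_starting_dir() {
  if [ -n "$STARTINGDIRECTORY" ]; then
    if command -v cygpath >/dev/null 2>&1; then
      cd "$(cygpath -u "$STARTINGDIRECTORY")"   # convert c:\... to /cygdrive/c/...
    else
      cd "$STARTINGDIRECTORY"
    fi
  fi
}
cd_to_starting_dir
```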

Now the fun part: how do we invoke this for a particular directory as we browse it in Explorer?.. Autohotkey comes to our rescue! I found a piece of code on Stack Overflow to launch the windows command shell and tweaked it a little:

Notice how this hotkey is global -- if we are in Explorer, it will give us a terminal started in the directory being browsed. Otherwise, it will just launch a terminal. But what about Emacs?.. We'll do something sneaky: we'll tell Autohotkey to send Emacs a different key combo for Win-T and use a different set of bindings in Emacs:

(defun terminal-here ()
  "Launch external terminal in the current buffer's directory or current dired
directory. (Works by grabbing the directory name and passing it as an argument
to a batch file. Note the (toggle-read-only) workaround; the command will not
run in dired mode without it.)"
  (interactive)
  (let ((dir "") (diredp nil))
    (cond ((and (local-variable-p 'dired-directory) dired-directory)
           (setq dir dired-directory)
           (setq diredp t)
           (toggle-read-only))
          ((stringp (buffer-file-name))
           (setq dir (file-name-directory (buffer-file-name)))))
    (shell-command (concat "~/bin/mrxvt_win.bat \"" dir "\" 2>/dev/null &")
                   (universal-argument))
    (if diredp (toggle-read-only))))

(and there's a similar function to launch Windows Explorer). Bind them to C-F3 and C-F4 (in reality Win-E and Win-T translated by Autohotkey) and we are all done, 5 scripting languages later! Who said administering a Windows box isn't fun :)

That was the final piece of the puzzle, the only major feature that was not working. Now my Emacs environment is complete! Hooray!

As usual, all the information needed to get this running was out there, but poorly organized. Basically, it comes down to the following two points:
1. NTEmacs can only use the cygwin ssh-agent if launched from cygwin bash. This is not a problem; under Windows I currently use the following VB script to launch it:

WScript.CreateObject("WScript.Shell").Run "c:\cygwin\bin\bash -l -c /usr/bin/emacs", 0, false
1a. Alternatively, one can use PuTTY with plink protocol and the PuTTY agent; search EmacsWiki for info on how to do this; this PDF helps.
2. Assuming we are going the cygwin route, this is what's needed in .emacs:

(require 'tramp)
(setq tramp-default-method "ssh")
(nconc (cadr (assq 'tramp-login-args (assoc "ssh" tramp-methods))) '(("bash" "-i")))
(setcdr (assq 'tramp-remote-sh (assoc "ssh" tramp-methods)) '("bash -i"))

This is it!

And now, for the computing tweak of the day: I like to press tab only once in order to display the candidate list in bash tab completion. I think it makes a significant usability difference. Here's how to turn it on:

echo "set show-all-if-ambiguous on # for single tab press completion" >> ~/.inputrc

I started playing around with running Mathematica inside Emacs. Once I develop the set of Mathematica <-> Matlab interoperability scripts, this functionality will be invaluable. An unexpected (pleasant) surprise was that Mathematica graphics is available at the console through Java -- you just have to load <<JavaGraphics`. The graphics performance is slow as molasses on my underpowered laptop, but it should be quite usable on a good workstation.

(Look, Ma! No notebook!)

What's that, by the way?.. Is this the Matlab beam propagation code snippet ported to Mathematica? Why, yes it is -- a port done by Wolfram Research (although, to be fair, it ended up being almost a direct translation of the Matlab code). This runs as fast as the Matlab version, but it's also a bit of a contrived example, because beam propagation in homogeneous space is of the form exp(i k z), and this can be efficiently computed with an outer product (Transpose[Outer[Times, z, kz]] in this case). In more complicated cases, Matlab uses meshgrid(), and I am not sure that Mathematica could compete in terms of speed. Even if it could, I don't see a better way of doing things other than implementing Matlab's built-in constructs in Mathematica, which is kind of silly. Thus, Matlab is still going to be the workhorse here.
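The outer-product trick, sketched in numpy for concreteness (array sizes are made up; my actual versions of this live in Mathematica and Matlab):

```python
# Propagation phase exp(i kz z) for every (z, kz) pair via an outer product,
# instead of building full meshgrid-style 2D arrays first.
import numpy as np

kz = np.array([1.0, 2.0, 3.0])        # longitudinal wavenumbers (made up)
z = np.linspace(0.0, 1.0, 5)          # propagation distances (made up)
phase = np.exp(1j * np.outer(z, kz))  # shape: (len(z), len(kz))
```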

Mathematica's Manipulate[] functionality got me hooked on interactivity, so once I had the beam propagation code running smoothly, I decided to implement a primitive GUI for it -- how hard could it be?.. It wasn't hard, really, but it was somewhat time-consuming and required lots of code. I mean, on the order of 100 lines of code to do something that's almost a one-liner in Mathematica.

While it is certainly lacking in elegance, it runs fast. If I had to do it over again, though, I would implement the numerics in Matlab, dump it all into a datafile, read it into Mathematica, and Manipulate[] away.