There’s going to be a lot of that, now that a critical mass of commercial systems is available. New materials are being used in additive manufacturing, and new devices have considerably expanded the capabilities of systems in terms of speed, build volume and finish. It’s an interesting moment in engineering history, and nobody knows where it will lead.

Printing with [anything besides plastics] is fraught with difficulty, so interesting methods have been tried for substances like metals, clay and frosting(!), with varying success. Two methods have lately shown promise in metal and glass (amazingly enough).

First, metals. The most common method of depositing metals has been to embed the metal in something a bit more fluid, like an ink suspension. This tends to produce poor mechanical adhesion: after the fluid dries, the metal may adhere to itself poorly (likely) and fluid contamination may be trapped in the metal layers (very likely). Researchers got around this with an entirely new method, using a sacrificial electrode to generate ions of the metal and spraying those ions electrostatically. You can get insanely small resolution using this technique:

…and you can print with more than one metal by building both into the tip and just switching voltage from one electrode to the other:

Elegant as hell, isn’t it?

Then, glass: a team in France working with chalcogenide glass (which softens at a relatively low temperature compared to other glasses) produced chalcogenide filaments with dimensions similar to the commercial plastic filaments normally used in 3-D printers. The research team then raised the maximum extrusion temperature of a commercial 3-D printer from around 260 °C to 330 °C. The result is pretty interesting:

An interesting proof-of-concept piece, this points to novel uses for chalcogenide glass, commonly used to make optical components that operate at mid-infrared wavelengths. It’s not likely to be used elsewhere, as it’s a “soft” glass, but the feat is going to be useful in optics fabrication. Also, there are some low-temperature metal alloys that could probably benefit from this technique.

Neuroscientists at MIT have published a paper demonstrating that 40 Hz pulses of light can somehow inhibit the progress of neurodegeneration in mouse models. The study was designed to figure out how a flickering light could stifle cognitive decline, using two mouse models engineered to overproduce the toxic proteins that contribute to neurodegeneration. The animals were exposed to light flickering at 40 Hz for one hour every day for three to six weeks. It worked a treat; mice engineered to overproduce tau proteins (which usually cause neurodegeneration) displayed no neuronal degeneration after three weeks of treatment, compared to a control group that displayed nearly 20 percent total neuronal loss. The other mouse model, engineered to produce a neurodegenerative protein called p25, displayed no neurodegeneration whatsoever during the entire six weeks of treatment.

The researchers then zoomed in on the light-treated animals’ neurons and microglia to study whether the treatment induced any unusual changes in gene expression. The light-treated mice showed increased neuronal expression of genes associated with synaptic function and DNA repair. In microglia, the brain’s immune cells, there was a decrease in genes associated with inflammation.

Nobody understands how a 40 Hz flickering light can trigger these specific changes to gene expression deep in the brain, but human trials testing the sound and light treatment in Alzheimer’s patients have already begun.

Note: Adding a 40 Hz auditory tone to the process improved the efficacy of this treatment. Your elderly parents can benefit from this by using Gnaural, an open-source generator of binaural beats for meditation and other psychological effects.
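Out of curiosity, generating such a stimulus yourself takes only a few lines of Python. Here’s a hypothetical sketch (independent of the binaural-beat tool mentioned above): it writes a stereo WAV whose two channels differ by 40 Hz, producing a perceived 40 Hz beat. The 440 Hz carrier and the duration are arbitrary choices of mine:

```python
import math
import struct
import wave

RATE = 44100        # CD-quality sample rate
CARRIER = 440.0     # left-ear tone in Hz (arbitrary choice)
BEAT = 40.0         # difference between ears -> perceived 40 Hz beat

def write_binaural(path, seconds=2.0):
    """Write a stereo 16-bit WAV whose channels differ by BEAT Hz."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)           # 16-bit samples
        w.setframerate(RATE)
        frames = bytearray()
        for n in range(int(seconds * RATE)):
            t = n / RATE
            left = int(16383 * math.sin(2 * math.pi * CARRIER * t))
            right = int(16383 * math.sin(2 * math.pi * (CARRIER + BEAT) * t))
            frames += struct.pack("<hh", left, right)
        w.writeframes(frames)
```

Play it through headphones (one channel per ear) and the 40 Hz beat is perceived rather than actually present in either channel.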

"Where have all the bloody teaspoons gone?" is an age-old question in the workplace. In an article in the BMJ [not concerned with scatology, but British medicine] from 2005, researchers at the Burnet Institute in Australia attempt to measure the phenomenon of teaspoon loss and its effect on office life. They purchased and discreetly numbered 70 stainless steel teaspoons (54 of standard quality and 16 of higher quality). The teaspoons were placed in tearooms around the institute and were counted weekly over five months. After five months, staff were told about the research project and asked to complete a brief anonymous questionnaire about their attitudes towards and knowledge of teaspoons and teaspoon theft.

During the study, 56 (80%) of the 70 teaspoons disappeared. The half life of the teaspoons was 81 days (that is, half had disappeared permanently after that time). The half life of teaspoons in communal tearooms (42 days) was significantly shorter than those in rooms linked to particular research groups (77 days). The rate of loss was not influenced by the teaspoons’ value and the overall incidence of teaspoon loss was 360.62 per 100 teaspoon years. At this rate, an estimated 250 teaspoons would need to be purchased annually to maintain a workable population of 70 teaspoons, say the authors.
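For the curious, the half-life arithmetic is simple if you assume plain exponential decay. A sketch, with "five months" taken as roughly 150 days (my assumption); note the authors used a proper survival analysis, which is why their published 81-day figure does not drop out of this naive model:

```python
import math

def half_life(days_elapsed, remaining_frac):
    """Half-life h under simple exponential decay: remaining_frac = 0.5 ** (days / h)."""
    return days_elapsed * math.log(2) / math.log(1 / remaining_frac)

# 14 of 70 spoons remained after roughly five months (~150 days)
naive_h = half_life(150, 14 / 70)   # about 65 days under this naive model
```

The gap between the naive 65-ish days and the paper's 81 days illustrates why censored, repeated-measure data gets survival analysis rather than a one-line formula.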

The questionnaire showed that most employees (73%) were dissatisfied with teaspoon coverage in the institute, suggesting that teaspoons are an essential part of office life. The rapid rate of teaspoon loss shows that their availability (and therefore office life) is under constant assault.

One possible explanation for the phenomenon is resistentialism (the theory that inanimate objects have a natural aversion to humans), they write. This is supported by the fact that people have little or no control over teaspoon migration.

Given the widely applicable nature of these results, they suggest that the development of effective control measures against the loss of teaspoons should be a research priority.

Hilarious. But wait; there’s more.

Exasperated by the disappearance, the scientists decided they would measure the phenomenon. Do the teaspoons really disappear over time? The answer was a resounding yes: spoons in research institute tearooms seem to have legs. While good fun, the research is a good example of a study design referred to as "longitudinal".

A longitudinal study uses continuous or repeated measures to follow particular individuals – in this case, teaspoons – over prolonged periods of time. The studies are generally observational in nature: the scientists simply watch and collect data over time. Typically, no external influence is applied during the course of the study. Beyond just working out where all the teaspoons have gone, this study type is also useful for evaluating the relationship between risk factors and the development of disease (for example, heart disease), and the outcomes of treatments over different lengths of time. In this study, the main questions posed by our researchers were to determine the overall rate of loss of teaspoons, and to work out how long it took for teaspoons to go missing.

They purchased 70 teaspoons (16 of which were of higher quality), each one discreetly numbered and then distributed throughout the institute. Counts of the teaspoons were carried out weekly for two months, then fortnightly for a further three months. Desktops and other immediately visible surfaces were also scanned for "misplaced" spoons. After five months of covert research, the study was revealed to the institute, and staff were asked to return or anonymously report any marked teaspoons which may have found their way into desk drawers or homes.

Good study design

This type of data collection provides a simple example of what makes a good longitudinal study. If we break it down, a longitudinal study needs to:

take place over a prolonged period (this study was done over 5 months)

be observational in nature (teaspoons were observed and counted, there was no intervention)

be conducted without external influence (teaspoon users/thieves were not aware they were being studied until the conclusion of the study itself).

Results

The results show that 56 (80%) of the 70 teaspoons disappeared during the study, and that the half life of the teaspoons was 81 days (that is, half had disappeared permanently after that time). The study also showed the half life of teaspoons in communal tearooms (42 days) was significantly shorter than for those in research-group-specific tearooms (77 days). The rate of loss was not influenced by the teaspoons’ value. All of these pieces of information directly answer the main questions posed by the researchers.

Conclusions

A longitudinal study is terrific at following individuals (or teaspoons) over a period of time and observing outcomes. But, by definition, the design means there can be no intervention; we are just observing a phenomenon. The researchers could not employ a tool or an intervention to prevent spoons from being "misplaced"; they could only report a spoon missing. As the study is observational only, there is no way of finding out what has happened to a spoon, just that it is lost. The authors were able to conclude that the loss of workplace teaspoons was rapid, and that their availability in the tearoom was constantly under threat.

Homework: Megan S C Lim et al. The case of the disappearing teaspoons: longitudinal cohort study of the displacement of teaspoons in an Australian research institute, BMJ (2005). DOI: 10.1136/bmj.331.7531.1498

I have been forced at gunpoint to use a Mac for the last six weeks at my newest place of employment, and not without a few tears. I had to learn to install IntelliJ, NetBeans and Eclipse (already had that one) for MacOS. The company which enslaves me uses MacOS’ Self Service app, from which I installed Homebrew. Homebrew does every installation you could possibly desire (well, nearly), and in short order I installed git, gradle, Java and IntelliJ, all correct and findable by each other, with the paths (or whatever they are called in MacOS) managed for me. I must say, this makes first-day setup for the engineers much quicker, and much simpler. Good thing too, since the poor sods are going to be working with a bewildering variety of the manifold technologies which enable the hydra-headed beast which is my employer.

It turns out that Homebrew is a MacOS-only product, but there are several package installers which work with Windows, such as Scoop, Chocolatey and Npackd. I quite liked Scoop (hence the Youtubery), but you may wish to try the others. Good luck; for your more complex setups this can be a real timesaver.

A wonderful paper in the archives of the University of Rochester shows how any random scatter plot can be fit to a curve given enough parameters; hence a low parameter count has long been thought a good measure of an expression’s fitness for use…until now. “The mathematician John von Neumann famously admonished that with four free parameters he could make an elephant, and with five he could make it wiggle its trunk…The aim of this short note is to show that, in fact, very simple, elementary models exist that are capable of fitting arbitrarily many points to an arbitrary precision using only a single real-valued parameter θ. This is not always due to severe pathologies—one such model, studied here, is infinitely continuously differentiable as a function of θ. The existence of this model has implications for statistical model comparison, and shows that great care must be taken in machine learning efforts to discover equations from data since some simple models can fit any data set arbitrarily well.”

Mind you, the parameter θ needs to be calculated precisely: “Both use r = 8 and require hundreds to thousands of digits of precision in θ.”

Gee whiz (and hilarity) aside, the paper demonstrates the fallacy of using unreasonable models for this sort of algorithmic derivation of equations from data, creating meaning from what might be noise, or Joan Miró’s signature.

Fascinating bit of video here as Our Hero (not me, in this case) takes Bach (and later Mozart) MIDI files, creates an 88-character ASCII alphabet from them and trains a Recurrent Neural Network to output similar sequences.

The results (and a lot of the process) are shown in the video above. Take your time and watch the whole thing; I wonder how long he would have to train the RNN before it output Baroque Muzak continually?
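The 88-character alphabet presumably maps the 88 piano keys to characters. His exact mapping isn’t shown, so here is a hypothetical version; the choice of '!' as the first symbol is my assumption:

```python
# The 88 piano keys span MIDI notes 21 (A0) through 108 (C8).
BASE_NOTE = 21
ALPHABET_START = 33   # '!' -- first printable ASCII character (my assumption)

def note_to_char(midi_note):
    """Map a piano-range MIDI note number to a single ASCII character."""
    if not BASE_NOTE <= midi_note <= 108:
        raise ValueError("not a piano key: %d" % midi_note)
    return chr(ALPHABET_START + midi_note - BASE_NOTE)

def char_to_note(ch):
    """Inverse mapping, from character back to MIDI note."""
    return ord(ch) - ALPHABET_START + BASE_NOTE
```

With a scheme like this, a MIDI melody becomes a plain text string, which is exactly the kind of sequence a character-level RNN is built to model.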

Scientists have detected thousands of exoplanets in recent years, mostly by watching a star’s brightness vary as planets occlude it. This animation comes from direct imaging methods (not radio telescopes), meaning that the telescopes saw the Jupiter-sized planets directly (hot Jupiters are young planets that still glow in the infrared portion of the spectrum).

Jason Wang at the University of California, Berkeley has combined several observations of HR 8799 into the delightful GIF below. This is years of optical data, folks. “In this video you’re seeing real data,” he told Gizmodo1 of the video above. “I smoothed out the orbits so that it’s as if we’re watching [the planets] constantly in real time.”

1 I stole the picture from Giz. I feel no shame at all. Follow that link, though; there’s much more!

I would like to point out that Star Trek has so influenced culture that the United Federation of Planets is likely to happen any time now…we just need more planets. We already have Majel Barrett’s voice phonemes. Now we need Google to sync up, a bit better response to voice meaning (the voice vector thing should help), a truthiness evaluator and bingo! Star Trek in your phone/home/office/laboratory/dungeon/whatever.

People in the know (i.e., my readers) are aware that I take my phones seriously, and have for three smartphones now. Well, smart-enough phones, I guess. I mean I had an HTC 8125 ancient creaking phone with one of Microsoft’s many, many failed phone operating systems (are they really up to FOUR commercially-failed systems, and about to go for FIVE?), which did some things I needed in a phone: calculator (never used it, but could have), texting (would have used it but did not…not sure that it could, now that I think on it), took [execrable] photographs (look back in this blog far enough and you will find them, along with scathing reviews of the image quality) but at least ran the flash card app I wrote for it, among others (my writing them would not have been necessary if MS had anything like an app store. Just sayin’), and played my beloved audio books during my [endless] commute.1

Still, it was not the optimum device. My next phone (Samsung Galaxy S3 i9250) was a considerable improvement, in that the camera focused closely enough to copy text. It ran Android apps mostly without complaint (even ones I had written myself), texted my children and played Bluetooth music and audio books without complaint, even after having survived several cracked glass incidents (to be fair, I never did repair the glass. It looked like a vandalized cathedral when it finally died). It was a vast improvement, and I cried bitter tears indeed2 when it suddenly stopped letting me make telephone calls.

Now I have the aforementioned Amazon Moto G Play phone, and I must say it is an improvement on my previous experiences (except for the annoying notifications. How the #$%^&* do I turn them off?) in speed, in reception and in sound clarity (although not volume). The camera is much better (see recent postings about the weather, blue jay invasion, etc.) and the Android version is 6.0, which is 1.7 better than previous. And it was cheap: $99 for the phone with advertising, $149 without. I have been unable to figure out how to replace the bootloader to get rid of the advertisements (which would violate my agreement and would be Bad And Wrong), but it works so well I don’t care at all.

EXCITING, HORRIBLE UPDATE: can’t root the phone to use adb wireless. This is totally bogus.

1Not sure that’s the longest run-on sentence I have ever written, but Baron Bulwer-Lytton must feel somewhat threatened in his cozy grave.

2Mostly because I had spent a fortune on it. Don’t fear; this story DOES have a happy ending.

Ordinarily I would give you a breakdown of each of these nifty developments, but more are coming and I may want to return to these later when I am not pressed for time. Follow the links above; there are others as well that you will find better constructed than my chicken scratchings, I’m sure.

A 58-year-old woman (“HB”) with ALS has had a functioning brain-computer interface (BCI) for a while now, and is able to communicate (slowly) with the outside world. She was facing total lock-in Real Soon Now, so any device which offers communication ability is welcome.

What it is:

Electrode strips laid across the top of her brain like band-aids read faint electrical signals. With training, HB was able to “type” fairly quickly (words per minute, but still). More work remains to be done on the interfacing software (I am imagining more inputs and a neural network to interpret her thoughts more and more efficiently), and HB is ecstatic to have a way to live in the world. She would like to use the interface to control a wheelchair, for example, but that is a ways off.

The X axis on the graph is the percentage of GDP spent on R&D, and the size of the balls is the amount of spending. The Y axis is scientists and engineers per million people.


Notice that the 2nd, 3rd, 5th and 6th largest amounts are spent by Asian countries. And notice that Sweden, Denmark, Norway, Singapore and Finland have the largest number of scientists per capita, but look at the volume of South Korea and the number of scientists…those guys are going to eat the world.

Once upon a time, scientists studying the Sun couldn’t have the faintest idea of its internal activity. One bright (see what I did there) scientist realized that monitoring neutrinos, the nearly massless, chargeless particles that zip through the universe barely interacting with anything at all, might give a useful clue to the machinations therein. I mean, they knew neutrinos are part of the solar flux, so it’s just a matter of detecting nearly massless, chargeless, barely interacting particles.

Oh, crap.

Well, luckily neutrinos do very occasionally interact with matter, producing detectable particles…eventually. Not often, to be sure, as billions pass through a square centimeter every second without leaving a trace. Those charged reaction products can be detected with rather elaborate photomultipliers in a huge cavern in Japan (Super-Kamiokande, as it happens): “It consists of a tank filled with 50,000 tons of ultra-pure water, surrounded by about 13,000 photo-multiplier tubes. If a neutrino enters the water and interacts with electrons or nuclei there, it results in a charged particle that moves faster than the speed of light in water. This leads to an optical shock wave, a cone of light called Cherenkov radiation. This light is projected onto the wall of the tank and recorded by the photomultiplier tubes.”1 Despite the heavy hardware only a few thousand are detected every year, which should tell you something about the likelihood of an interaction…not very damn likely.
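That “faster than the speed of light in water” bit fixes the geometry of the light cone: the Cherenkov angle satisfies cos θ = 1/(nβ), where n is the refractive index and β = v/c. A quick sketch of the arithmetic (n = 1.33 for water; a near-light-speed particle radiates at roughly 41 degrees):

```python
import math

N_WATER = 1.33   # refractive index of water

def cherenkov_angle_deg(beta):
    """Cherenkov cone half-angle (degrees) for a particle with speed beta = v/c."""
    if beta * N_WATER <= 1.0:
        raise ValueError("below Cherenkov threshold: beta must exceed 1/n")
    return math.degrees(math.acos(1.0 / (beta * N_WATER)))
```

The 1/n threshold (β > 0.75 in water) is why only sufficiently fast charged particles light up the photomultipliers at all.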

Thing is, the theoretical number and the actual number didn’t match; the experimental result was one-third of the theoretical prediction, meaning either the theoretical understanding or the experiment was crap. It turned out that neutrinos oscillate among three flavors (electron, muon and tau), and the detectors were primarily sensitive to electron neutrinos only.

Here’s where science gets really intricate; pour another shot and I’ll tell you why. In a distantly related field, other scientists observed variations in the rate of beta decay of radioactive elements. Once again, either the data or the theory had to be crap, and the theory says the decay rate should be constant. Looking at the data over time, they found that the beta-decay rate matched the neutrino data, indicating a one-month oscillation attributable to solar radiation. Many now believe that neutrino emissions from the Sun are somehow affecting beta decay.
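Spotting such an oscillation in a decay-rate time series is a textbook periodogram job. A toy sketch on synthetic data (not the actual decay measurements): a naive discrete Fourier transform picks out the dominant period in days.

```python
import math

def dominant_period(samples, dt_days=1.0):
    """Return the period (days) of the strongest nonzero-frequency DFT bin."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]          # drop the DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(c * math.cos(2 * math.pi * k * t / n) for t, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * t / n) for t, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n * dt_days / best_k
```

Feed it a year of daily decay-rate numbers with a hidden 30-day wobble and it hands the 30 days back; real analyses add noise handling and significance tests, but the idea is the same.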

If that’s not strange enough for you then feature this: the same folks who figured this out are going to use beta-decay experiments here on Earth to monitor those nearly massless, chargeless, barely interacting neutrinos, and thereby the Sun.

1. Sometimes I don’t feel like writing all that much. It is 11:30p.m. and I’m tired. Sue me.

Dazzling in complexity, the little chart above details the fate of cosmic rays (high-energy protons hurled from the sun) which impact our atmosphere, leaving a byzantine collection of particles and EM emissions. Some of these suckers are relatively easy to detect, the muon possibly the easiest. Scientists studying the output of our sun could use more information about cosmic ray bombardment, and an array of muon detectors would be really useful for this, as muons (and other particles) are generated within a cone-shaped shower, with all particles staying within about 1 degree of the primary particle’s path.

Enter Spencer Axani, a doctoral student at the Massachusetts Institute of Technology, who has whomped one up for a mere hundred bucks and published a paper with detailed construction plans (no Instructables project yet, however. I checked):

Straightforward as heck: a plastic scintillator brick and a photomultiplier tube are locked up in a light-tight box. Muons hitting the brick generate flashes of light, and the photomultiplier generates enough juice to tell there’s been an event. An Arduino (yes, an Arduino) serves as a peak detector, and a Python script crunches the time-stamped data for delivery to a PC.
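The Python side of that pipeline can be trivial. A hypothetical stand-in for the crunching script (not Axani’s actual code) just bins the Arduino’s time-stamped events into a per-minute rate:

```python
from collections import Counter

def counts_per_minute(timestamps_s):
    """Bin event timestamps (seconds since start) into per-minute event counts."""
    if not timestamps_s:
        return []
    bins = Counter(int(t // 60) for t in timestamps_s)
    last = int(max(timestamps_s) // 60)
    # include empty minutes so the rate series has no gaps
    return [bins.get(m, 0) for m in range(last + 1)]
```

From there you can plot the rate over time, look for showers (several events in one bin), or compare rates between detectors in an array.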

An Electromagnetic Pulse (EMP) generator can overload various kinds of circuitry, causing all sorts of merry havoc among the pinks. You can make a little baby one and overload poorly-protected circuits up close, although a hammer is more certain to succeed.