Storing 32 bits of data in a piece of glass

After finding a piezoelectric delay line in an old TV, [Mike] decided to figure out how it works, and in the process stored his name as sound waves reflecting inside a piece of glass.

[Mike] was intrigued by these old-fashioned delay lines after watching [Dave] from EEVblog’s teardown of a circa-1985 camcorder. [Dave] found a piezoelectric delay line in his camcorder – a device that stores data by sending a sound wave into a glass plate, letting the sound wave bounce through the plate, and picking up the sound on the other end. It’s actually not too dissimilar to the mercury delay lines used in the earliest computers.

After sending a pulse through his piezoelectric delay line, [Mike] picked up an echo almost exactly 64 microseconds later. After hooking up a simple circuit constructed out of a 74-series chip, [Mike] found he could ‘loop’ the delay line and keep a pulse going for up to 3 milliseconds.
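Those two figures give a rough idea of how many times the regeneration circuit re-launched the pulse before it decayed. A quick back-of-the-envelope check (the 64 µs and 3 ms numbers are from the write-up; the arithmetic is just illustration):

```python
# Rough numbers from the write-up: one trip through the glass takes
# ~64 microseconds, and the regenerated pulse survived ~3 milliseconds.
delay_us = 64        # one pass through the delay line
lifetime_us = 3000   # how long the looped pulse stayed readable

round_trips = lifetime_us // delay_us
print(round_trips)   # 46 regenerated passes before the pulse decayed
```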

Three milliseconds isn’t much, but by injecting serial data into the delay line, [Mike] was able to spell out his name in binary, as seen above. It’s just 32 bits stored for a fraction of a second, making it a very volatile, low-capacity memory, but functionally equivalent to the old mercury delay lines of yore.
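Functionally, a recirculating delay line is a big shift register whose output feeds back to its input. A minimal Python sketch of the idea, assuming the 32 bits are plain ASCII for “Mike” (the actual bit pattern [Mike] injected isn’t specified):

```python
from collections import deque

# Encode "Mike" as 32 bits of ASCII -- a stand-in for whatever
# pattern [Mike] actually injected.
bits = [int(b) for ch in "Mike" for b in format(ord(ch), "08b")]
assert len(bits) == 32

# The delay line behaves like a fixed-length shift register:
# a bit written now reappears at the output one full delay later.
line = deque(bits)

def recirculate(line, passes):
    """Pop the bit emerging from the line and immediately re-inject it
    (the job of the regeneration circuit)."""
    out = []
    for _ in range(passes * len(line)):
        bit = line.popleft()   # bit arrives at the output transducer
        line.append(bit)       # regenerated and fed back in
        out.append(bit)
    return out

# After any whole number of passes the stored word is unchanged.
replay = recirculate(line, passes=46)[-32:]
print(replay == bits)  # True
```

The data only ever “exists” in flight around the loop, which is exactly the point made in the comments below.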

It’s certainly not what [Mike] or [Dave]’s delay line was designed to do; these video delay lines were used to hold the previous line of video for a form of error correction. Outside [Mike]’s workbench and a few museums, though, you won’t see a delay line used as a form of computer memory. A very cool build and an awesome history lesson, and we thank [Mike] for that.

No, he is storing his data in the glass as sound for a fraction of a second. This is how they did video processing before digital delay was reasonable. I once got to help strip down an old TV station video player. It took a bunch of 2″ tapes (the tape itself is 2″ wide; the cartridge it was on was huge) and played them for commercials. It was a huge machine, 2 or 3 large cabinets and a couple thousand pounds total. The machine had dozens of add-on cards with a bunch of these delay lines on them.

Nope, he isn’t storing anything. “Storage”, from the dictionary: a device consisting of electronic, electrostatic, electrical, hardware, or other elements into which data may be entered, and from which data may be obtained as desired. The data are in no way retrieved whenever he desires.
Eirinn is quite right and I found the title misleading too.
I was looking for an optical reader of some kind.

Bah, it IS storage. It follows your “dictionary” quote, only it’s in micro increments. I.e., if the data is retrieved as desired within that millisecond of time, then it is storage. If you do the same thing with a CD-ROM but you choose to get your data after 1000 years, you’re probably not going to get your data; does that mean a CD-ROM is not storage? In electronics, small is just as valid as BIG? :)

The reflections don’t have anything to do with the storage. The side reflections are used to “fold” the path to increase the delay, the end reflections are just a source of interference. A version without a folded path would store data just as well, it’d just need to be longer.

You put data in, you retrieve data later. In the meantime, the data only exists as vibrations in flight through the glass. The regeneration circuitry never “holds” even a full bit. The data is truly stored in the delay line.

Before SAW (Surface Acoustic Wave) delay lines were used to decode the colour information in TVs and composite video monitors, coils were used.
I’ve salvaged a few from older colour TV sets; basically it’s a piece of wire long enough to hold a single line of video.

PAL stands for Phase Alternating Line; to decode the colour information they stored each line in one of these, then compared it to the next line.
NTSC works the same way.

That’s the way Tesla would have done it (:
But can you comprehend just how vast even an old eight-bit CPU would be if it were constructed solely from coils of wire, using interference patterns in alternating current to represent ones and zeros? It would be nice and quick though (:

No, it doesn’t. NTSC gets its color reference at the beginning of each horizontal line in the form of a color burst (a precise number of cycles of 3.58MHz sine wave, whose phase provides the reference hue), which excites and resynchronizes the phase of a local color oscillator. The phase difference between the received color subcarrier and the local oscillator determines the hue to be displayed at any point on the current scan line.

NTSC encodes the two color channels in quadrature modulation using a single subcarrier frequency, which simplifies to phase encoding the hue and amplitude encoding the saturation. The color burst serves as the phase reference.
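In other words, the two colour-difference signals ride on one subcarrier in quadrature, and converting to polar form shows why phase maps to hue and amplitude to saturation. A small illustrative sketch (variable names are my own, not from any standard decoder):

```python
import math

FSC = 3_579_545.0  # NTSC colour subcarrier frequency, Hz

def chroma_sample(i, q, t):
    """Quadrature-modulate the two colour-difference signals I and Q
    onto one subcarrier. The equivalent polar form: amplitude encodes
    saturation, phase encodes hue."""
    w = 2 * math.pi * FSC * t
    return i * math.cos(w) + q * math.sin(w)

# Polar view of the same chroma vector:
i, q = 0.3, 0.4
saturation = math.hypot(i, q)          # amplitude -> saturation
hue = math.degrees(math.atan2(q, i))   # phase -> hue
print(round(saturation, 2), round(hue, 1))  # 0.5 53.1
```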

PAL does the same, BUT in addition, the phase is reversed on each line.

Quoting wikipedia:
Early PAL receivers relied on the human eye to do that cancelling; however, this resulted in a comb-like effect known as Hanover bars on larger phase errors. Thus, most receivers now use a chrominance delay line, which stores the received colour information on each line of display; an average of the colour information from the previous line and the current line is then used to drive the picture tube. The effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC.

The way I heard it…
Earlier NTSC systems didn’t have a glass delay line, which is why they needed a hue control. PAL never needed a hue control because of the Phase Alteration by Line that allowed sequential lines to swap phase and be compared. Storage of the previous line was only made possible by the invention of the tunable glass delay line. (Here in the land of PAL we use to jokingly call NTSC Not Twice Same Colour). The story gos that although glass delay lines existed, they were made with a sensor at each end, meaning the delay could not be tuned after the sensors were attached. So NTSC did not compensate for the group delay effect. In the time between the NTSC and the PAL standards, someone came up with the idea of bouncing the signal off the edge of the glass in a pattern resembling a simple fish drawing. The 2 corners at the sensor side are snipped off, and placed at an angle. From the transmitter the signal bounces off the side, the end, the other side and to the receiver. This allows the end to be polished after the sensors were fixed until the required delay is achieved.
If this technology existed pre NTSC, there probably would never have been an NTSC, only a PAL. And in PAL land would have had one less joke to snigger about.

See, this is why I feel somewhat spoilt by doing my degree in this era. Seems like “fix it in software” is a phrase which can be applied far too often these days. We’ve always got a few MIPS to play with.
It’s the ingenuity of the older designs which fascinates me most. “Back then” it probably seemed like the simplest way to do things, but to me it seems incredibly complicated!

YES, “delay lines” can definitely be used for data storage. Most folks these days don’t remember how fabulously expensive semiconductor memory once was. I once repaired a Monroe desktop calculator which stored all of its data in a long piece of steel wire wound in a flat spiral “pancake” about a foot in diameter. The data were continuously recirculated around this long spiral as a string of pulses which were altered as necessary during the “refresh” operation before being fed back into the spiral again. Of course, data access times were very long by computer standards, but calculator users were unaware of it at typical calculator use speeds. In fact, the calculator was programmable and used “Harvard architecture” in that both “code” and data were stored in the recirculating loop.

…used “Harvard architecture” in that both “code” and data were stored in the recirculating loop.

No, Harvard architecture is when you have separate storage for code and for data, so that they don’t contend for access bandwidth (and you can’t have self-modifying code). Many popular MCUs are Harvard architecture machines: AVRs, PICs, 8051s, some of the ARMs…

When code and data use the same storage, that’s von Neumann architecture. That covers most personal computers and most mainframes (except some supercomputers), but there are also MCUs like Freescale 68HCxx’s and Zilog’s eZ80Acclaim. You can make them execute code out of their own internal RAM, something not possible with “Harvards”.

Most practical Harvard designs had a way to transfer information from the data space to the instruction space, as the power of the stored-program concept was evident relatively early in computer evolution. And if we would still call that a Harvard design (and most in the field do), then one has to accept that most current machines are Harvard descendants. E.g. x86 processors can’t have an address in both instruction and data caches at the same time, but allow program-generated or loaded data to be treated as instructions by moving the data to the instruction cache (and vice versa).

If I remember right, Burroughs had a computer back in the ’60s that used magnetostrictive delay lines as its main memory. It worked well, but it was like having a huge shift register as main memory. Neat idea, but core memory was more reliable, so its life in industry was limited. Solid state memory (RAM chips) was better and faster once it came out several years later, but initially it was WAY more expensive than core memory, even with the rewrite-after-read memory controller. (Why? Cores change state to an inverted mode when they are read, so to be sure it was in the right state, each bit had to be re-written after each read; writes were faster, since no read was involved.)
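That destructive-read-then-rewrite cycle can be sketched like this (a toy model for illustration, not any particular controller):

```python
class CoreBit:
    """Toy model of one magnetic core: reading is destructive,
    so the controller must write the value back afterwards."""
    def __init__(self):
        self.state = 0

    def read(self):
        # Sensing works by driving the core toward 0; a flux flip on
        # the sense wire means it held a 1. Either way it now holds 0.
        value = self.state
        self.state = 0
        return value

    def write(self, value):
        self.state = value

def controller_read(core):
    """Read-then-rewrite, as the core memory controller had to do."""
    value = core.read()
    core.write(value)  # restore the bit the read destroyed
    return value

core = CoreBit()
core.write(1)
print(controller_read(core), core.state)  # 1 1  (value preserved)
print(core.read(), core.state)            # 1 0  (raw read destroys it)
```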