During this evening we will be questioning the nature and purpose of generative art. With four lectures and an open talk, we investigate whether generative art is indeed an art form or just a technique. Is it possible to make a distinction between generative processes, applications of generative processes and generative processes as a conceptual ground for software art? Does the term generative art apply to screen savers, alife, fractal art, conceptual software/code art and poetry alike? Is it even important to make this distinction? The term generative art has been around for a while now, and is applied to extremely diverse works and opposite artistic approaches. No more! It’s time to fork! (again!)

This site is an introduction to the fine art of playing images in the way that musicians play with sound. It is an art that has been almost three centuries in the birthing and that has gone by a variety of names—visual music, color music, audio-visual-music, motion graphics, synchromy, and lumia.

Monday, March 3, 2008

Try playing some music on your computer. Chances are, when you stick in that CD or access those MP3s, a swirl of color will appear on-screen, throbbing and pulsing in time to your tunes.

These sound-and-light displays, churned out by "visualization" programs built into most of today's media players, don't serve any practical purpose. They're about simple sensory enjoyment and about giving us a glimpse of a bold future when our separate senses will collapse into a single pleasure — a time when categories such as "music" and "video" and "art" and "graphics" are supposed to dissolve, leaving us bathing in a brave new world of multimedia sensations.

The funny thing is, that future has been here for almost 100 years, hidden away in obscure corners of avant-garde art and music and filmmaking. The big summer show opening today at the Smithsonian's Hirshhorn Museum tracks how radical artists have been crossing over between sight and sound for ages, even though most experts and museums have rarely taken note of this important trend.

The Hirshhorn exhibition, titled "Visual Music: Synaesthesia in Art and Music Since 1900," includes works that few of us have ever seen before, by artists we've barely heard of, using media and techniques whose names don't even ring a bell. It gives us a chance to explore life on the artistic fringes and take in some of the mind-bending sights and sounds that have come out of them.

The show includes some paintings, sure, by artists as well known as Man Ray, Paul Klee and Wassily Kandinsky. But there's also a room devoted to Thomas Wilfred's "lumia," an art form the American inventor first developed in the 1920s. It sets nebulas of color swirling across translucent screens.

Over the first half of the 20th century, artists turned out "color organs" with names such as the Synchrome Kineidoscope, the Clavilux, the Lumigraph or the Optophonic Piano [shown at right]. Some of these Rube Goldberg contraptions, salvaged from dark corners and displayed in working order in this show, demand a couple of operators to make them go. They produce elaborate displays of light and color that either accompany music, or that are meant as silent "visual symphonies."

These instruments mostly gave way to abstract films, at first made using standard animation skills and then, in the 1950s, by way of more advanced technologies that opened new frontiers in animation. (Computers were adopted early on for the special effects of abstract film. George Lucas owes a debt to a number of "visual musicians" who simply wanted to make swarms of colored dots go dancing across space.)

In the psychedelic 1960s, certain experimental artists and collectives (with names like the Single Wing Turquoise Bird and the Joshua Light Show) emerged from the artistic margins to design the elaborate projections that ran at concerts by Frank Zappa, Pink Floyd and the Who. This exhibition includes archival footage from some of these shows, but the art form will come fully to life only this weekend, with the Hirshhorn's one-time "Cosmic Drift" event. On Saturday night [June 25], the museum will be staying open from 9:30 p.m. to 2 a.m. The show itself will act as a kind of art-historical backdrop for a program of live light-and-sound performances that will take place in the Hirshhorn's circular courtyard. It'll be a groovy trip, man.

Hirshhorn curator Kerry Brougher — who conceived the exhibition with his colleague Judith Zilczer and Jeremy Strick, director of the Los Angeles Museum of Contemporary Art, where "Visual Music" premiered in February — argues in his catalogue essay that the rock-concert spectacles of the 1960s give a rare example of vanguard art infiltrating the mainstream. The infiltration was so thorough, in fact, that few of us are likely to realize that those light shows had their roots in esoteric art ideas born 60 years before.

Those took off from a simple notion and had a simple aim.

The notion was to take the novelty of abstract art, so radical before World War I that it could hardly be imagined, and justify it by comparison to music. If a Beethoven string quartet could be understood and admired on its own terms, without imagining that it painted a sonic picture of the world, visual art should have the same freedom to escape from rendering reality. The notes and timbres and structures of music could be compared to the colors and textures and forms of a painting; a talented artist could assemble them into a visual "composition" every bit as affecting, meaningful and praiseworthy as anything that goes on in a fancy concert hall.

There were even shreds of scientific evidence in support of such crossing over between the visual and musical arts. In a rare neurological condition known as "synaesthesia," the sensory systems in certain people's brains are cross-wired. When a given sound enters their ears, they "see" — in their mind's eye, at least — a color.

Another synaesthete might take in a color or shape, and find that the optical signal has been carried to the brain's auditory system, producing a sonic experience at the same time as the visual one. The modern French composer Olivier Messiaen was said to "see" flashes of color that corresponded to chords in the music he played. Such stories provided a kind of real-life analogy to, and justification for, the "visual music" proposed by early abstractionists.

Kandinsky is generally credited as the first artist to produce purely abstract works of art. He, however, took the pairing of pictorial abstraction with musical abstraction, understood by some of his peers as nothing more than a useful analogy, and made it literal. He said that his paintings were meant to translate the specific qualities of music into visual terms. His Impression III (Concert) [right] was made in response to a famous performance of Arnold Schoenberg's radically modern music held in Munich in early 1911. "The independent life of the individual voices in your composition is exactly what I am trying to find in my paintings," the artist wrote to the composer.

In 1916, under the influence of Kandinsky, the American Man Ray, based in Paris, painted his colorful Symphony Orchestra, whose only recognizable feature is the keyboard of a piano.

Two other Americans, Morgan Russell and Stanton Macdonald-Wright, tried to build an entire artistic movement, dubbed "Synchromism," around musical ideas. In their Synchromist manifesto, they insisted that "mankind has until now always tried to satisfy its need for the highest spiritual exaltation only in music. Only [musical] tones have been able to ... transport us to the highest realms ... Yet color is just as capable as music of providing us with the highest ecstasies and delights."

Both tried to take the musical analogy as far as it could go, designing light-projecting machines that would make patterns of color and form play out in space over time, as the notes of music do.

Long after abstract painting had found its footing, and stopped needing the crutch of a musical analogy, notions of visual music continued to attract followers working in media that took place over time. There were those color organs, first, which eventually gave way to experimental film, capable of combining sound and abstract image without clumsy apparatuses. The 1930s bred various pioneers in abstract animation. Figures such as Len Lye and Oskar Fischinger (who at first worked on the popularization of visual music in Disney's Fantasia, then fled the project) made geometric and biomorphic shapes go dancing across the movie screen, sometimes truly rivaling what leading abstract painters were doing on their static canvases.

Which leads — by way of this show's psychedelic spectacles, zooming computer graphics and recent kinetic light sculptures — to your computer's media player. Which, if you think about it, isn't such a grand place for an art form to end up.

Many of the works in "Visual Music" suffer from the same problem as your computer's own "visual music" display: They provide wow-cool flashes of attractive light and shape that don't take long to lose their interest. There's something about trying to find visual equivalents for the sonic energy and verve of music that seems to push artists toward superficiality.

Despite this exhibition's subtitle, none of its artworks actually manages full-blown synaesthesia, truly crossing over between sound and vision. You'll not once feel you're hearing something just by looking at a piece in this show. And short of fulfilling that grand aim, its visuals tend to become illustrations of how we imagine music operates, rather than real rivals to the musical experience, or champions of a fully visual one.

In 1923, American painter Arthur Dove went to a Chinese restaurant, and, according to this exhibition's catalogue, he came away inspired. He went off and made an abstract picture that seems full of earthy, soy-sauce browns; of spikes that make me think of ginger's bite; of garlicky edges and angles. The only problem is, the mouthwatering sensations that I read out of Dove's artwork are not the ones he meant to put into it. Rather than "Wonton Visions," his painting is titled Chinese Music.

Which goes to show that synaesthesia is always in the mind of the beholder, and that relying on such sensory crossovers doesn't get you all that far in art.

Imagine that each key of a piano or an organ keyboard stops at a chosen position, or makes a specific element of a set of transparent filters move, more or less quickly, pierced by a beam of white light, and you will have some idea of the instrument invented by Baranoff-Rossiné.

There are various kinds of luminous filters; plain coloured ones, optical elements such as prisms, lenses or mirrors; filters including graphic elements and, finally, filters with coloured shapes and defined outlines. Add to this the possibility of modifying the position of the projector, the screen frame, the symmetry or asymmetry of the compositions and their movements, as well as their intensity. You will then be able to reconstitute this optical piano that will interpret an infinite number of musical compositions. The key word here is interpret, because, for the time being, the aim is not to determine a unique rendering of an existing musical composition for which the author did not foresee any light being superimposed. In music, as in any other art, one has to take into account elements such as the talent and sensitivity of the musician in order to fully understand the composer’s thoughts. The day when a composer composes music using notes that remain to be determined in terms of music and light, the interpreter will have less freedom, and on that day, the artistic unity we are discussing will probably be closer to perfection.

(This autobiographical text was originally written in the third person.)

For a long time now, artists have sought to combine the perceptions of several of our senses so that we can feel the integration of simultaneous sensations, modified in time in accordance with a concerted rhythm, a particular artistic impression. Let us recall the trials in optical sound simultaneism that have been done to date.

At the end of the XVIIth Century, a well-known philosopher, Eckhardthausen, tried to transcribe popular songs into coloured compositions.

In 1734, a mathematician, Abbé Castel, tried to give an optophonic concert using coloured records appearing above a harpsichord, where each key corresponded to a record. This process can be used to characterise the alphabetic translation of music by colour, which is equally expressed by the fantasy of Arthur Rimbaud: A = black, E = white, I = red, U = green.

After Abbé Castel, we find numerous attempts of this kind: by the artist Tchourlionis in Finland, François Kupka in Czechoslovakia (1912), Léopold Sturzwage (see “Paris Evenings”, July and August 1914), Arthur Ciaceli in Italy, the musicalist movement in Paris, Viking Eggeling in Sweden, Hans Richter in Berlin, Blanc-Gatti in Paris (1922), the great opto-phonic concerts in 1922 at the Grand Opéra of Moscow and the Meyerhold theatre, Wilfred in America in 1925, and Z. Peschanek in Prague. They were carried out using coloured projections, triggered by levers or contact switches. Some effects obtained thus were quite pleasant, but the artistic results were insufficient. They were coloured beams and not colours.

Multiple settings find their simplest expression in opera. This form of art is so old that we no longer pay any attention today to the fact that there is a superimposition. Only certain modern futurists would rise up against such a mixture, claiming that each art should be sufficient unto itself. The most complete trial that has been achieved in this field dates from around 1895. A French poet, P.N. Roinard, had a rather strange play put on, in which the basic feeling of each scene was symbolised by a colour, a flower and a scent. The “Art Theatre” did not have any follow-up. Nearer to us, Loïe Fuller achieved marvellous stage effects using plays of coloured lights, combined with choreographic ensembles. Let us remember, finally, that great artists such as Leonardo da Vinci, Johann Sebastian Bach, and Scriabin were tempted by the combination of music and colour.

Baranoff-Rossiné’s optical piano, projecting into space or onto a screen colours and moving shapes varied to infinity, depends absolutely, as with the sound piano, on the operation of the keys. None of the previous seekers had been able to decompose music into its intimate elements: it is in fact quite arbitrary to want to translate a musical note by any specific colour.

A = black, E = white, DO = violet, RÉ = indigo... a poet’s fantasy. Oboe = green, flute = blue, trumpet = red: these are purely literary reconciliations that, even if they might be exact, are incapable of moving us. Between sound and light there are harmonies that are otherwise precise, drawn from their very structure: in a musical composition we distinguish the three following basic elements: the intensity of the sound, the pitch of the sound, and the rhythm and movement. It will be one of Baranoff-Rossiné’s merits to have been able to extract these elements from music and bring them closer to similar elements existing, or able to exist, in light.

Research has moreover enabled us to find others that are just as simple. One should not mistake this for the Saphites optical piano with its luminous effects, luminous organs, and coloured projections. These are primitive and incomplete instruments, giving a limited number of coloured beams, and not colours. Rossiné’s optical piano produces luminous colours, varied to infinity, simultaneously united with abstract and concrete shapes (decorations and images) in a static and dynamic state, successive and simultaneous. All these results, moreover, can be increased by developing the pianist’s technique. The quality of the luminous colour is far above anything that we have imagined previously. The shapes and colours are richer than in a kaleidoscope and their choice depends on the will of the player; the projection frame can be varied to infinity. Thanks to his optical piano, Baranoff-Rossiné has created a new art form that as a consequence has its own unity, and it does not involve purely and simply superimposing one phenomenon on another.

Sunday, March 2, 2008

Published in Organised Sound 3(3): 187-191, 1998. Cambridge University Press.

Graphical Groove: Memorium for a Visual Music System by Laurie Spiegel, August, 1998

Abstract:

Once upon a time there was a computer music system called GROOVE (Generating Realtime Operations On Voltage-controlled Equipment, Bell Telephone Laboratories, Murray Hill, New Jersey), which outputted in the realm of sound, and was a wonderful and still-unique tool for the composition thereof. Once upon a time a then-young composer who was using GROOVE for music got the harebrained idea that if she made a few minor changes here and there she could use it to compose images as well. This she did in 1974-6, and though the untimely demise of the system, owing to massive hardware changes in this system's home lab, prevented creation of much documentation in the form of aesthetic works of its output, the system did function sufficiently to make some description worthwhile. While it is true that the mid-1960s DDP-224 computer on which GROOVE became a VAMPIRE (Video And Music Program for Interactive Realtime Exploration/Experimentation) was a massive room-sized computer, it has by now long been eclipsed in power by the constantly improving home computer. It is worth describing the concepts involved in part because there are by now many small computers capable of emulating its musical methods. Besides, I had a deep personal relationship with that computer, and wish to commemorate it. Here then follows the tale of Graphical GROOVE, a.k.a. the VAMPIRE.

What was GROOVE, anyway?

Before going on to its visual applications, it may help to visualize the GROOVE system in its original form, that of a hybrid (digital-analogue) computer music system, as developed by Max Mathews, Dick Moore and colleagues. The principle was both simple and general. A number of input devices (knobs, pushbuttons, a small organ keyboard, a 3D joystick, an alphanumeric keyboard, a card reader, and several console and toggle switches) and a number of output devices (14 digital-to-analog converters used for control voltages, 36 computer-controlled relays, a thermal printer, and 2 washing-machine-sized one-megabyte hard disks) were connected to a room-sized 24-bit DDP-224 computer programmable by its users in FORTRAN IV and DAP 24-bit assembly language. Also accessible (as subroutines residing in FORTRAN IV libraries) were what might be called "soft" or "virtual" input devices (random number generators, attack-decay interpolators, and a sophisticated periodic function generator) and output devices (storage buffers, including arrays for logical switches and data of different types).

Each user would write their own "user programs" for their own purposes, to specify interconnections between inputs and outputs including the above mentioned. Since these connections could be complex transfer functions consisting of any kind of process the user could code up, this made the system ideal for the development of what we called "intelligent instruments". (These are essentially musical instruments for which the ratio of the amount of information the music system generates to the amount the musician plays, per unit of time, is greater than one to one.) It also made the system ideal for the exploration of compositional algorithms.
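The greater-than-one-to-one ratio can be illustrated with a toy sketch (hypothetical Python, not GROOVE code, whose function names are my own invention): a single played input expands into a stream of machine-generated output events.

```python
# Hypothetical sketch of an "intelligent instrument": one player input
# (a single note) is expanded into many machine-generated events, so
# the generated-to-played information ratio exceeds one to one.

def arpeggiate(root, pattern=(0, 4, 7, 12), repeats=4):
    """Expand one played note into a stream of arpeggio notes."""
    return [root + interval
            for _ in range(repeats)
            for interval in pattern]

notes = arpeggiate(60)   # one input event yields sixteen output events
```

The "intelligence" here is trivial, of course; in GROOVE the expansion could be any transfer function the user cared to code.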

Aside from the creative freedoms and temptations inherent in the responsibility of each user to program their own "patch," GROOVE had another cataclysmically important characteristic as a music system. It viewed everything as (perceptually) continuous functions of time. It did not think in terms of notes or other such "events". Instead, it required such entities to be programmed as processes or sampled as curves of change over time. The sampling rate was 100 hertz, which meant that all analog oscillator and other parameters were updated fast enough to sound as though continuously changing to us mere organisms. (This was sampling done on the level of control parameters, not audio sampling.)

During each sample, the user program would be looped through once, all inputs referenced would be read, all programmed computations made, and the output channelled to DACs and disk files which could be edited later. An editing program would be just another interactive user program which each of us would write for a specific kind of modification of a particular work, and the data being edited would be the stored time functions on disk instead of the live data coming into the computer from the various input devices. Editing programs could use all of the same devices (knobs, periodic function generators, et cetera) that might be used in recording a first pass, and editing was often a realtime performance process in itself.
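The per-sample cycle just described can be sketched in miniature (a hedged illustration with invented names; GROOVE itself was written in FORTRAN IV and assembly): on each tick the loop reads the inputs, runs the user's patch once, and records the outputs.

```python
# Minimal sketch of GROOVE's per-sample cycle (illustrative only):
# read all inputs, run the user program once, channel results out.

SAMPLE_RATE_HZ = 100   # control-parameter rate, not audio rate

def run(user_program, read_inputs, ticks):
    recorded = []                        # stands in for the disk files
    for _ in range(ticks):
        inputs = read_inputs()           # knobs, keys, joystick...
        outputs = user_program(inputs)   # any transfer function at all
        recorded.append(outputs)         # would also feed the DACs
    return recorded

# Example "patch": one knob value scaled onto one output channel.
log = run(lambda i: {"dac0": i["knob0"] * 2.0},
          lambda: {"knob0": 0.5},
          ticks=3)
```

An editing program, in this scheme, is simply another `run` whose `read_inputs` pulls from the stored `recorded` data rather than from live devices.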

The centerpiece of the design was the bank of 200 functions of time. All data was stored as series of numbers that had no specific association with any parameter of sound or of musical composition except what a user program might give it by connecting these numbers to a relay or DAC (digital to analogue convertor). The system allowed the composition of functions of time in the abstract.

The importance of being able to approach all parameters of sound, of composition, or of performance as perceptually continuous functions of time cannot be overstressed during this current period when music seems everywhere to be digitally described as entities called "notes", and in which there are generally conceived to be differing necessary rates of change for different musical parameters. In our modern post-MIDI world, pitch is seen as changing at a rate of once per note whereas amplitude is updated "continuously" at higher resolution (faster rate). GROOVE embodied a concept space shared with the old truly modular analog synthesizers of the 1960s, on which any pattern of temporal change could be applied to any parameter, and in which sound can really be treated as a multidimensional continuous phenomenon. GROOVE added to this the ability to use time functions computationally without direct connection to any input or output variable.

The software was able to handle several times as many of these simultaneous time functions as we had hardware DACs to use them on (200 functions for 14 DACs). We therefore had many spare functions available to use for variables of any level of abstraction we might want, from recording actual knob or switch settings as we improvised or interpreted stored music, to global and profound compositional parameters such as probabilities, densities or entropy curves.
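A spare function of this kind might look like the following hedged sketch (the names and the probabilistic reading of the curve are my own, not GROOVE's): a stored density curve is interpreted by a user program as the chance of an event on each control tick.

```python
# Hedged sketch of a "spare" time function holding an abstract
# compositional parameter: a stored density curve is read back as
# the probability of an event occurring on each control tick.

import random

def realize(density_curve, seed=1):
    """Turn a density curve into a list of ticks on which events fire."""
    rng = random.Random(seed)
    events = []
    for tick, density in enumerate(density_curve):
        if rng.random() < density:   # curve interpreted as probability
            events.append(tick)
    return events

density = [0.0, 0.0, 1.0, 1.0]       # silence, then guaranteed events
events = realize(density)            # events only on ticks 2 and 3
```

The same stored numbers could just as well have been routed to a DAC as a control voltage; nothing in the data itself fixes its meaning.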

Each of us used these time functions in our own ways. In fact each of us freely used the entire system in entirely our own way because we each had an entire copy of it to ourselves, with full source code. We could each change anything we wanted.

Temptation:

We often enjoyed just playing around with the system. The rate at which the computer ran through the user program loop, reading its inputs and writing to its outputs, was controlled by an external analog oscillator in this early hybrid system. So of course, we tried plugging in a voltage controlled oscillator so you could compose a time function which would create tempo changes by changing the sampling frequency of the computer itself. At one point Emmanuel Ghent had the computer control the speed of a variable speed reel to reel tape recorder so that he could specifically compose pitch changes with the oscillations of a bank of fixed frequency resonant filters he had built. In general, there were a lot of interconnections between the digital and analog domains and we played with them quite a bit.

This was often difficult because the analog audio lab and the digital computer hardware were in separate labs at a cumbersome distance from each other, connected by several hundred yards of trunk cables. We all made many trips back and forth between the analog and digital ends of GROOVE to calibrate DAC output voltages or to change the configuration of the multicolored spaghetti. (A typical patch consisted of hundreds of cables on a removable patch matrix board that each user could slide into a card rack full of audio modules too miscellaneous to describe here, so that each user could pursue a hardware configuration unlike anyone else's.)

During such trips down that long long hall between the analog and digital labs, when not impatiently obsessed with an embarrassing desire for roller skates to shorten the long walk (in that era before roller skates were ok for grownups too), I began stopping to look through a glass window in a door to another computer room along the way. Strange abstract shapes could usually be seen evolving on a video monitor, growing and evolving, week after week, month after month. Eventually I got to know Dr. Kenneth Knowlton, the computer graphics pioneer and master of evolutionary algorithms, and we began to work together on various projects. After learning some graphics coding there, I became intrigued with the idea of trying to make musical structure visible and embarked on the strange mission of bringing GROOVE's compositional capabilities to bear on the frame buffer output, particularly the ideas of time functions, transfer functions, and interconnectible software modules.

This was back in the early 1970s, before digital synthesis of sound could be done in realtime (computed at the speed that we hear it). In that era, apart from a hybrid system such as GROOVE, computer music could only be done noninteractively, by entering defining information into a computer, waiting for sounds to be computed, then retrieving, recording and listening to them later. Ken, working with filmmaker Lillian Schwartz, was working in a similarly nonrealtime way, running image generation software all night, using a program that would compute a single frame of film, then open and close the lens of a computer-controlled film camera to expose the film, and then advance the film.

I reasoned that just as GROOVE's computer control of analog modules had made interaction with relatively complex logic systems a realtime process, permitting realtime interactive computer control of musical materials for the first time, realtime interactive computer graphics should be possible as well by similar means. Instead of recording the image on film frame by frame, I should be able to code myself a visual musical instrument that would let me play and compose image pieces by recording the control data as time functions and playing back the time functions as visual compositions.

The idea of getting GROOVE running on this second computer in a different lab down the hall where it would output to a video monitor instead of to banks of equipment in an analog audio lab did not come to me all at once. Initially, I merely succumbed to the irresistible temptation of glowing color and texture and movement and light, but for the next several years, I spent probably as much time working on this visual music system as on audible music, and realtime interaction with video images felt like playing music to me. The desire to compose music visually was an inevitable craving.

RTV (Real Time Video):

What later became VAMPIRE started relatively simply, as a program called RTV (Realtime Video). That, in turn, started even more simply, as a mere drawing program for creating still images. Using a routine that Ken Knowlton gave me which permitted me to address the "frame buffer" (I believe this was just a dedicated area of memory in computer CORE) and a Rand Tablet, I wrote myself a "drawing" program (similar to what we now call "paint" programs, but in 1974 there was no terminology for this yet), and greatly enjoyed doing a long ongoing series of computer drawings, evolving and changing the way the drawing program elaborated an image from my motions over the Rand Tablet as time went on.

Ken and I also worked out an elaborate initialization routine for an array of 64 definable, storable bitmapped textures which could be used as "brushes" or letters of the alphabet, or whatever, and which made use of a box with 10 columns of 12 pushbuttons, each representing a bit that could be on or off, functioning as a means of entering these patterns. After consulting some of my old hand-weaving books, I made a large deck of Hollerith cards, and shuffled them different ways to be able to easily enter batches of patterns via the computer's card reader. (Ken did some truly amazing things with that 10 by 12 button box that are beyond the scope of this writing but nonetheless worth mentioning, such as projecting completely customizable virtual control surfaces for telephone-related jobs onto a half-silvered mirror above it. But that's another story.)

As a composer of music, I soon found that I enjoyed playing the drawing parameters in real time like a musical instrument. I could move around in an image and change the size, color, texture and other parameters in real time as I drew it, using knobs and switches just like those on the GROOVE music computer down the hall. I would draw with one hand while manipulating the various visual parameters with my other hand using the 3D joystick, switches, push buttons and knobs.

The movements of the object I dragged around the screen felt melodic, and I realized that I wasn't satisfied with just one "melodic" line. In audible music I had loved counterpoint best, so I wrote in another realtime interactive device to play. It was a square box of 16 pushbuttons for standard musical contrapuntal options. By now it was possible to interact with quite a number of visible variables in realtime.

This was before "menu driven" human interface systems came into fashion. Even had it not been, however, I've always preferred random access parallel (equally reachable-for) controls to any kind of hierarchical or modal way of organizing such a group of controls. The interfaces in traditional acoustic musical instruments are generally of random access parallel design. It may be more hardware intensive, but spontaneously grabbable controls are better for the music and art.

The simultaneous parallel inputs I had written into the system at this point, before interfacing it with the GROOVE music system, when it was still just an unrecordable room sized live performance visual instrument, were as follows:

Rand tablet:

x and y location currently being drawn

Foot pedal:

enable or disable drawing (writing to the display)

Knobs:

Logical operation (write, and, or, xor)

Vertical size
Horizontal size
Color number 1 through 8 for foreground
Color number of background

Global color parameters

Color definition mode 1 (parametric control)

Saturation
Value
Hue
Resolution of hue spectrum
3-D joystick for path through color space of color indices 1 through 8

Color definition mode 2 (3-D joystick axes)

x = amplitude of green
y = amplitude of red
z = amplitude of blue

Push buttons - contrapuntal options:

Single line (single sequence of time-sequenced x-y locations, as drawn)

VAMPIRE (the Video And Music Playing Interactive Realtime Experiment):

With that many parameters to control in real time, I had arrived at the same difficult stage in visual improvisation at which I had found myself needing to switch over from improvising to composing in audible music several years earlier. The capabilities available to me had gotten to be more than I could sensitively and intelligently control in realtime in one pass to anywhere near the limits of what I felt was their aesthetic potential.

Concurrently, I had become increasingly interested in the use of algorithms and powerful evolutionary parameters in sonic composing. The idea of organic or other visual growth processes algorithmically described and controlled with realtime interactive input, and of composing temporal structures that could be stored, replayed, edited, added to ("overdubbed" or "multitracked"), refined, and realized in either audio or video output modalities, based on a single set of processes or composed functions, made an interface of the drawing system with GROOVE's compositional and function-oriented software an almost inevitable and irresistible path to take. It would be possible to compose a single set of functions of time that could be manifest in the human sensory world interchangeably as amplitudes, pitches, stereo sound placements, et cetera, or as image size, location, color, or texture (et cetera), or (conceivably, ultimately) in both sensory modalities at once.
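That interchangeability is the heart of the idea, and can be sketched as follows (a hypothetical illustration with invented mapping functions): one abstract time function carries no sensory meaning of its own until a program renders it as sound or as image.

```python
# One abstract function of time, two renderings (illustrative only).

curve = [0.0, 0.25, 0.5, 0.75, 1.0]    # an abstract time function

def as_pitch(f, lo=220.0, hi=880.0):
    """Audio reading: map the curve onto a frequency range in Hz."""
    return [lo + v * (hi - lo) for v in f]

def as_x_position(f, width=512):
    """Visual reading: map the same curve onto screen coordinates."""
    return [round(v * (width - 1)) for v in f]

pitches = as_pitch(curve)         # 220.0 ... 880.0 Hz
positions = as_x_position(curve)  # 0 ... 511 pixels
```

The same stored numbers rise from 220 Hz to 880 Hz in one reading and sweep left to right across the screen in the other.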

There are fewer parameters of sound to deal with than there are for images. In a hybrid system such as GROOVE, which used fixed waveform analog oscillators and computer controlled analog filters and voltage controlled oscillators, each "voice" may have frequency, amplitude, filter cutoff, and possibly filter Q, reverb mixture, or stereo location. A visual "voice" may have x, y, and possibly z axis locations, size in each of these dimensions, color, texture, hue, saturation, value (or other color parameters), plus logical operation on screen contents (write, and, or, exclusive or), and in the case of a recognizable entity, scaling and rotation variables (for solid objects roll, pitch and yaw) in two or three dimensions. (I did not deal with transformations of solid objects in this relatively primitive realtime digital visual instrument and composing system.)

In essence, what this system ultimately provided for the short time that it ran before its untimely demise was an instrument for composing abstract patterns of change over time by recording human input into a computer via an array of devices, the interpretation and use of each of which could be programmed, and the data from which could be stored, replayed, reinterpreted and reused. The set of time functions created could be further altered by any transformation one wished to program and then used to control any parameter of image or of sound (when transferred back to GROOVE's audio-interfaced computer by computer tape or disk). Unfortunately, due to the requirement of separate computers in separate rooms at the Labs, it was not physically possible to use a single set of recorded (and/or computed) time functions to control both image and sound simultaneously, though in principle this would have been possible.

Like any other vampire, this one consistently got most of its nourishment out of me in the middle of the night, especially just before dawn. It did so from 1974 through 1979, at which time its CORE was dismantled, which was the digital equivalent of having a stake driven through its art.