Philippe is confident that my thought process around “materialism and determinism”, as discussed in my proposal, is not well thought out and, well, just plain wrong. It’s a struggle I’ve been having for some time with this and previous work. On one hand, I don’t believe that there is a reasonable link between the function of a computer and the function of a brain. On the other hand, I’m attempting to make a machine do something very mind/brain-like: dreaming. Why the contradiction? During my proposal defence, the only word that occurred to me was “satire”. Now, I don’t really believe this work is satire, but there is some aspect of subversion and pushing limits in order to highlight problems. What do I want to highlight problems with? Centrally it’s transhumanism / the singularity / mind as information. The proposal was my first attempt to explicate why I’m interested in things like autonomy, intentional distance, the relation between scientific knowledge and truth, between humans and machines, and most importantly how we use technology to think and analyze ourselves. The plan is to go through my proposal and highlight the most important and conceptually cohesive aspects of my criticism of transhumanism, which is the major “why” of this work.

Philippe mentioned that I have a tendency to work in isolation (to some degree) and am therefore spending a lot of time trying to solve things that have already been solved. A good example is that I only recently realized I needed a long-term memory in the system, which would have been obvious from the start if I had been working from a known cognitive architecture. The problem is more that the frameworks present in cognitive architectures did not appear conducive to dreaming. Rather, it seemed that dreaming would be a trivial addition to these architectures and not really make a contribution. One of the insights of this research is a consideration of dreaming and day-dreaming as contiguous with, and even functionally required for, cognition. These are not add-ons that can be applied to existing systems, but must be considered in a framework of interpretation that is informed by dreams. Rather than starting with cognitive architectures and trying to fit dreams in, I’ve started with dreams and am attempting to make a simple system where dreaming is central. That is not to say that most of cognition is related to dreaming (though it may be), but there is an issue with concepts like long- and short-term memory (LTM / STM), which are present in systems like LIDA but whose relations to dreaming are unclear. For example, if we consider dreaming in the context of LIDA, is it an aspect of short-term or long-term memory, or both? It is unclear how dreaming would manifest in LIDA. The working assumption has been that dreams happen in LTM, and since the system is not solving a particular task (it seems all cognitive architectures assume that cognition is centrally task-oriented), STM has little meaning. It turns out that this is not quite true: even in the absence of a task, an STM may have value for perception, because you can’t compare every item of new stimulus against all of LTM.
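To make that last point concrete, here is a toy sketch (in Python, not the actual system code; the names `Memory` and `perceive` are hypothetical) of why an STM helps perception even without a task: a new stimulus is matched against a small recency buffer first, and the expensive scan of all of LTM happens only when that buffer misses.

```python
from collections import deque

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class Memory:
    """Toy illustration: an STM recency buffer limits how much of
    LTM each new stimulus must be compared against."""

    def __init__(self, stm_size=5, match_threshold=0.5):
        self.ltm = []                      # all percepts ever stored
        self.stm = deque(maxlen=stm_size)  # small recency buffer
        self.match_threshold = match_threshold

    def perceive(self, percept):
        """Return a matching stored percept, searching STM first."""
        for stored in self.stm:
            if distance(stored, percept) < self.match_threshold:
                return stored  # cheap hit: a recently seen percept
        # fall back to a full LTM scan only when the STM misses
        for stored in self.ltm:
            if distance(stored, percept) < self.match_threshold:
                self.stm.append(stored)
                return stored
        # novel percept: store it in both memories
        self.ltm.append(percept)
        self.stm.append(percept)
        return percept
```

As LTM grows, the STM scan stays constant-size, which is the efficiency argument in the paragraph above.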

Thecla noticed that I talked a lot about the theory of dreams, and a lot about the system design itself, but did not talk at all (in the proposal) about the mapping of those theories onto my artistic choices in regard to aesthetics. This process has been implicit to some degree; I’ve been making choices for conceptual and technical reasons, but that can’t be entirely true, because the images produced are highly aesthetic objects. I’ve never been very concerned with aesthetics. I’ve thought of aesthetics as a manifestation of concept. Choices that concern aesthetics are so fluid and entrenched that I’m hardly aware I make them at all, or at what level (conceptual or aesthetic) I manifest them. The aspects of the system that affect spatial (still-image) aesthetics are (1) the segmentation system, whose edges result from an attempt to balance CPU usage against breaking the image into a reasonable number of parts (hundreds rather than thousands), and (2) the merging code that averages what is supposed to be seen as the same object in subsequent frames. The images printed at NFF were a mix of approaches, but for the majority I decided how many percepts to include, which determines the density and the amount of temporal compression. For the temporal aesthetics (dream propagation), I choose how many propagations are active and how the “prominent” feature is calculated, which have the most effect on the visual output. The way prominent features (the feature that most makes a percept an outlier) are calculated leads to time being the prominent feature for most percepts. Dreams do show, under certain circumstances, frame-by-frame replaying of time, which I presume is an effect of the evenness of the distribution in time and its feature prominence. Steven mentioned that he thinks the major contribution of this work to science is my mapping of theoretical values to visually manifested aesthetic choices.
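One way to sketch the “prominent feature” idea (this is an illustrative reconstruction, not the system’s actual calculation) is to ask, for each feature of a percept, how far it sits from the population mean in units of standard deviation, and take the feature with the largest deviation. If time values are evenly spread while other features cluster, time will tend to win, which is consistent with the behaviour described above.

```python
def prominent_feature(percepts, target):
    """Hypothetical sketch: return the feature along which `target`
    deviates most from the population, measured by absolute z-score.
    `percepts` is a list of dicts mapping feature name -> value."""
    best, best_z = None, -1.0
    for feature in target:
        values = [p[feature] for p in percepts]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        std = var ** 0.5
        if std == 0:
            continue  # no variation: target can't be an outlier here
        z = abs(target[feature] - mean) / std
        if z > best_z:
            best, best_z = feature, z
    return best
```

With percepts whose `time` values are evenly distributed and whose other features are near-constant, `time` becomes the prominent feature for almost every percept, producing the frame-by-frame temporal replaying noted above.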

Steven asked me why the manifestation of percepts is not an overlay on the perceptual images. This was brought up in the past. I was resistant back then because it was not clear to me what the results of this kind of feedback would be, since the output of segmentation would be fed back into the segmentation algorithm. This is less of an issue as I think about it, but there is a theoretical implication. If images are overlaid on stimulus images, then this is an acceptance of Kosslyn’s proposal that mental images are presented on the early visual cortex. This is confirmed in some studies but not in others, so it’s unclear if it fits. The theory seems mechanical: mental images are encoded in LTM and decoded onto the scratch space that is the early visual cortex. The major question is whether this view is compatible with the current design, where there is no “visual buffer”; rather, the state of the perceptual networks is the perception of external or internal images. There was also some discussion of homoeostasis and of thinking of the system as closed (so that it can reach an equilibrium). It’s still unclear to me what it would mean for this system to be motivated by keeping stable, with dreams being some aspect of that. If the presentation of stimulus is independent of activation, then the live-stimulus portions of the image would never habituate, and therefore the trigger for a day-dream would be an unprocessed camera image? At the very least this seems to accept Kosslyn’s proposal that mental images are in the early visual cortex, and I’m not sure this is a useful idea.
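The habituation question can be made a little more concrete with a minimal sketch (again hypothetical, not the system’s code): a unit whose response decays under repeated identical stimulus and recovers under novel stimulus. Under this reading, a day-dream would be triggered once the response to the live stimulus falls below some threshold; if stimulus presentation were independent of activation, that decay would never occur.

```python
class HabituatingUnit:
    """Illustrative sketch: a unit whose response to a repeated
    stimulus decays (habituation) and resets on novel stimulus
    (dishabituation). A day-dream could be triggered once the
    response falls below `threshold`."""

    def __init__(self, decay=0.5, threshold=0.1):
        self.response = 1.0
        self.decay = decay
        self.threshold = threshold
        self.last = None

    def stimulate(self, stimulus):
        if stimulus == self.last:
            self.response *= self.decay  # repeated input: habituate
        else:
            self.response = 1.0          # novel input: dishabituate
        self.last = stimulus
        return self.response

    def habituated(self):
        """True once the unit has stopped responding, i.e. the
        hypothetical condition for triggering a day-dream."""
        return self.response < self.threshold
```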

A final point that was made is that when people see the images, many of them don’t think of the kinds of dreams that humans have. This makes sense, because the dreams of a machine are necessarily a function of its perception of the world. If that perception is deficient (in this case so deficient as to be almost totally devoid of abstraction), then there will be a corresponding deficiency in the dreams of the machine. The aim is not to reproduce human dreams, but to explore the underlying mechanisms that cause dreams in the service of an image-generating process. There have also been anecdotal cases of people dreaming of very simple sensory experiences, such as colour fields, rather than complex social narratives.