countercomplex -- bitwise creations in a pre-apocalyptic world, by <a href="http://www.blogger.com/profile/06927455242083569579">viznut</a><br /><br /><h1>Reaching for the blocks of the living world (2018-03-31)</h1>As a child, I used to believe in endless linear progress. There were ever higher buildings in the world, ever more TV channels, ever-faster computers and spacecraft. Records were broken, numbers got bigger, the complexity of everything increased. I saw this as the absolute good; indeed, it was the only conceivable way a universe could work. Pop culture products such as Star Trek and <a href="https://www.rockpapershotgun.com/tag/sid-meiers-civilization-vi-rise-and-fall/">Sid Meier's Civilization</a> reinforced this dogma.<br /><br />In my teens, I started to notice the dark sides. New computer programs seldom showed progress in code quality anymore; on the contrary, it seemed that the growing hardware specs were making developers lazy, indifferent and incompetent. The way tech media praised the growing clock rates started to sound idiotic, and the ever-growing mass of people buying high-spec PCs without even being interested in their deep internals seemed ever more despicable.<br /><br />As a response, I started to embrace an opposite kind of esthetic and technological ideology: small is beautiful, bits are beautiful, hacks are beautiful. True progress is about deepness and compression instead of maximization and accumulation. Even apparently very simple structures may yield unexpected complexity – of an emergent, "<a href="http://countercomplex.blogspot.fi/">countercomplex</a>" kind instead of the "straightforwardly complex" kind.<br /><br />At first, I took it mostly as a computer-related problem and a computer-related battle. But then I started to realize its relevance to the entire human technological civilization. Our economic-industrial system basically has a <a href="http://viznut.fi/texts-en/resource_leak_bug_of_our_civilization.html">resource leak bug</a> that most of us have learned to regard as a feature rather than a bug. Fixing it requires an overall shift to a mentality that values compression more than expansion and accumulation.<br /><br />This is a kind of change that needs pioneers who experiment with more compressed technologies and societies before the planetary conditions force everybody to. <a href="http://viznut.fi/en/future.html">I want to be among them</a>.<br /><br /><div align="center">II</div><br />Over the past few years, I have been hanging out and living with people who have interests and ambitions towards ecovillages, permaculture, appropriate technology and the like. I have also been deepening my relationship with natural processes by growing some edible plants on a field, and I have become ever more fascinated by various neo-lowtech and "off-the-grid" ways of constructing dwellings, securing food production and holding up human culture.<br /><br />My parents had a small organic farm when I was a kid, so it was not an alien world to me. However, when trying to learn about natural processes and their grassroots-level application in my usual analytical way, I noticed that I would have needed new tools to handle the complexity, uncontrollability and uncertainty.
My existing methods of building mental models are not very good for learning about slow and complex natural processes.<br /><br />Basically, I have two major studying modes. One is the aforementioned analytical mode I adopted when growing up with computer programming: get down to the lowest level of abstraction (such as ones and zeros) and then build up from there, layer by layer. If that mode does not seem effective, I tend to switch to the opposite mode, which resembles the way I explored my childhood forests: forget the strictness and just let your intuition guide your trial-and-error experiments. I was also studying neural networks at the time, which made me even more anxious about the ineffectiveness and limits of blocky intellectual analysis. I did not entirely realize that I would have needed some kind of intermediate mode.<br /><br />The trial-and-error mode is not problematic per se; it just needs a lot of cycles. After getting lost often enough in the same forest, a map gradually forms in the mind without any systematic mapping effort. Years ago, when learning to cook, I tried to find some kind of theoretical ruleset for how the different ingredients and processes work, but couldn't find any. So, I just went on with trial and error and let an intuitive "ruleset" form organically in my head, and I think I'm an okayish cook nowadays. When experimenting with the likes of plant-growing, however, the cycle is far too long for effective learning, so it takes decades to build a decent intuition.<br /><br />Back in the seventies, computer hackers such as Ted Nelson <a href="https://en.wikipedia.org/wiki/Computer_Lib/Dream_Machines">advocated computers</a> as a means of learning about how the world works. Simplified models of various real-world systems could be simulated by computer programs, allowing people to use the trial-and-error learning method to grow an intuitive understanding of them. When trying to absorb the wisdom of Bill Mollison's Permaculture Designer's Manual, I started to hunger after a simulator where I could try to implement all kinds of crazy ideas in order to test them against the theory. Additionally, as a simulator like this would necessarily be based on knowable mathematics, I would also be able to use my analytical mode with it.<br /><br /><div align="center">III</div><br />I have now been working for some time on this kind of "world simulator". Its working title is "Ovys", from the Finnish for "self-sufficient community simulator". It will be more like a game, a learning toy or an imagination assistant than a serious design/modelling tool, but I hope it will eventually end up being useful for some real-world planning as well. I also dream about coupling it with a machine learning system that could discover low-tech ideas from the blind spots of human visionaries.<br /><br />I will write more about Ovys once it is closer to the first prototype stage. Anyway, it currently simulates solar radiation, airflow and heat transfer in various materials in a 3D grid world (a toy sketch of this kind of heat-transfer core closes this post). After the first prototype (and perhaps some crowdfunding), I plan to implement the likes of the water cycle, plant growth, nutrient cycles and human agents, at least in some kind of "minecrafty" way that can be improved in later versions by other people.<br /><br />As a game, one might describe it as a realism-oriented reimagination of Dwarf Fortress. Some day, one might perhaps even describe it as a realism-oriented reimagination of Civilization.
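As a footnote for the technically curious, here is what simulating heat transfer in a grid world boils down to at its simplest. This is my own illustrative sketch rather than actual Ovys code: it assumes a basic explicit finite-difference scheme on a 2D slice, whereas the real simulator is 3D and also has to care about materials, solar radiation and airflow.<br /><pre>
/* Toy heat diffusion on a 2D grid: a hot cell in the middle spreads
   warmth to its neighbors, step by step.  Illustrative sketch only. */
#include &lt;stdio.h&gt;

#define W 16
#define H 16

int main(void) {
    static double t[H][W], next[H][W];
    const double alpha = 0.1;      /* diffusivity; keep below 0.25 for stability */
    int x, y, step;

    for (step = 0; step < 200; step++) {
        t[H/2][W/2] = 100.0;       /* constant heat source in the middle */
        for (y = 1; y < H-1; y++)
            for (x = 1; x < W-1; x++)
                /* each cell drifts toward the mean of its 4 neighbors */
                next[y][x] = t[y][x] + alpha *
                    (t[y-1][x] + t[y+1][x] + t[y][x-1] + t[y][x+1]
                     - 4.0 * t[y][x]);
        for (y = 1; y < H-1; y++)
            for (x = 1; x < W-1; x++)
                t[y][x] = next[y][x];
    }
    t[H/2][W/2] = 100.0;
    for (y = 0; y < H; y++, putchar('\n'))     /* crude character map */
        for (x = 0; x < W; x++)
            putchar(" .:-=+*#%@"[(int)(t[y][x] / 10.001)]);
    return 0;
}
</pre>Everything interesting in a world simulator grows from kernels of roughly this size: add a third dimension, per-cell material constants and energy inputs, and the loop above becomes a heat-transfer engine.<br /><br />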
<h1>Bringing magic back to technology (2015-04-09)</h1>Back in 2011, I was one of the discoverers of "<a href="http://canonical.org/~kragen/bytebeat/">Bytebeat</a>", a type of very short computer program that generates music. These programs received quite a lot of attention because they seem far too short for the complex musical structures they output. I wrote several technical articles about Bytebeat (<a href="http://arxiv.org/abs/1112.1368">arxiv</a>, <a href="http://countercomplex.blogspot.fi/2011/10/algorithmic-symphonies-from-one-line-of.html">countercomplex 1</a>, <a href="http://countercomplex.blogspot.fi/2011/10/some-deep-analysis-of-one-line-music.html">countercomplex 2</a>) as well as a <a href="http://widerscreen.fi/numerot/2014-1-2/kasittamattomat-koodirivit-musiikkina-bytebeat-ja-demoskenen-tekninen-kokeellisuus/">Finnish-language academic article</a> about the social dynamics of the phenomenon. Those who just need a quick glance may want to check out one of the <a href="https://www.youtube.com/watch?v=tCRPUv8V22o">Youtube videos</a>.<br /><br />The popularity of Bytebeat can be partially explained by the concept of "hack value", especially in the context of <a href="http://en.wikipedia.org/wiki/HAKMEM">Hakmem</a>-style hacks -- very short programs that seem to outgrow their size. The <a href="http://www.catb.org/jargon/html/">Jargon File</a> gives the following formal definition for "hack value" in the context of very short visual programs, display hacks:<br /><blockquote>"The hack value of a display hack is proportional to the esthetic value of the images times the cleverness of the algorithm divided by the size of the code."</blockquote>Bytebeat programs apparently have a high hack value in this sense. The demoscene, being distinct from the MIT hacker lineage, does not really use the term "hack value". Still, its own ultra-compact artifacts (executables of 4096 bytes and less) are judged in a very similar manner. I might just replace "cleverness of the algorithm" with something like "freshness of the output compared to earlier work".<br /><br />Another related hacker concept is "magic", which the Jargon File defines as follows:<br /><blockquote>1. adj. As yet unexplained, or too complicated to explain; compare automagically and (Arthur C.) Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic." "TTY echoing is controlled by a large number of magic bits." "This routine magically computes the parity of an 8-bit byte in three instructions."</blockquote><blockquote>2. adj. Characteristic of something that works although no one really understands why (this is especially called black magic).</blockquote><blockquote>3. n. [Stanford] A feature not generally publicized that allows something otherwise impossible, or a feature formerly in that category but now unveiled.</blockquote><blockquote>4. n. The ultimate goal of all engineering &amp; development, elegance in the extreme; from the first corollary to Clarke's Third Law: "Any technology distinguishable from magic is insufficiently advanced".</blockquote>Short programs with a high hack value are magical especially in the first two senses. How and why Bytebeat programs work was often a mystery even to their discoverers. Even when some theory about them was devised, it was often quite difficult to understand or apply. Bitwise arithmetic, especially, tends to have very esoteric uses in Bytebeat.<br /><br />
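For readers who have never seen one, a complete Bytebeat program fits on a couple of lines. The canonical form is an endless C loop that pushes a sample counter through a short arithmetic expression and emits one byte per sample; the formula below is one of those circulated around the original experiments. (The playback command is just an example, assuming a Linux system with alsa-utils installed.)<br /><pre>
/* A complete Bytebeat player.  Listen with something like:
       ./a.out | aplay -r 8000 -f U8                         */
#include &lt;stdio.h&gt;

int main(void) {
    int t;
    for (t = 0;; t++)
        putchar(t * ((t >> 12 | t >> 8) & 63 & t >> 4));
}
</pre>There is no score, no sample data and no synthesis framework anywhere; the entire musical structure emerges from the interference of the shifted copies of the counter.<br /><br />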
The hacker definition of magic indirectly suggests that highly advanced and elegant engineering should be difficult to understand. Indecipherable program code has even been celebrated in contests such as the <a href="http://www.ioccc.org/">IOCCC</a>. This idea is highly countercultural. In the mainstream software industry, clever hacks are despised: all code should be as easy as possible to understand and maintain. The mystical aspects of hacker subcultures are there to compensate for the dumb, odorless and dehumanizing qualities of the industrial chores.<br /><br />Magic appears in the Jargon File in two ways. Terms such as "black magic", "voodoo programming" and "cargo cult programming" represent cases where users don't know what they are doing and may not even strive to. Another aspect is exemplified by terms such as "deep magic" and "heavy wizardry": there, the technology may be difficult to understand or chaotic to control, but at least some talented individuals have managed to master it. These aspects could be called "wild" and "domesticated", respectively, or alternatively "superstition" and "esoterica".<br /><br />Most technology used to be magical in the wild/superstitious way. Cultural evolution does not require individual innovators to understand how their innovations work. Fermentation, for example, had been used for thousands of years before anyone had ever seen a micro-organism. Despite this, cultural evolution can find very good solutions if given enough time: traditional craft designs often have a kind of optimality that is very difficult to attain from scratch even with the help of modern science. (See e.g. Robert Boyd et al.'s articles about the cultural evolution of technology.)<br /><br />Science and technology have countless examples of "wild magic" getting "domesticated". An example from computer music is the Karplus-Strong string model. Earlier models of acoustic simulation had been constructed via rational analysis alone, and they were prohibitively expensive for real-time synthesis. Then Karplus and Strong accidentally discovered a very resource-efficient model through a software bug, and nowadays it is pretty standard textbook material without much magical glamor at all.<br /><br />
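The domesticated result is astonishingly small. Below is my own minimal sketch of the idea (illustrative parameters, not the authors' original code): a delay line is filled with a burst of noise, and the only "physics" is repeatedly averaging adjacent delayed samples, which damps the buzz into a plucked-string tone.<br /><pre>
/* Minimal Karplus-Strong sketch: raw 8-bit output, e.g. for 8 kHz
   playback ( ./a.out | aplay -r 8000 -f U8 ).  Sketch only. */
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

#define N 100                /* delay length: pitch = 8000 / 100 = 80 Hz */

int main(void) {
    float buf[N];
    int i, t;
    for (i = 0; i < N; i++)                    /* the pluck: pure noise */
        buf[i] = (float)rand() / RAND_MAX - 0.5f;
    for (t = 0; t < 16000; t++) {              /* two seconds of sound */
        i = t % N;
        putchar((int)(buf[i] * 100.0f) + 128);
        /* the whole string model: average two adjacent delayed samples */
        buf[i] = 0.5f * (buf[i] + buf[(i + 1) % N]);
    }
    return 0;
}
</pre>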
Magic and rationality support each other. In good technology, they would coexist in symbiosis. Industrialization, however, brought a cult of obsolescence that prevented this kind of relationship. Traditions, time-proven designs, intuitive understanding and irreducible wisdom started to get obsoleted by one-dimensional reductive analysis. Nowadays, "magic" is only tolerated as bursts of inspiration that must be captured within reductivist frameworks before they break something.<br /><br />In the 20th century, utilitarian industrial engineering started to get obsoleted by its bastard offspring, tumorous engineering. This is what I discussed in my earlier essay "<a href="http://countercomplex.blogspot.fi/2014/08/the-resource-leak-bug-of-our.html">The resource leak bug of our civilization</a>". The accumulation of bloat and complexity for their own sake is making technology increasingly difficult to rationally understand and control. In computing, where tumorous engineering dominates, designers are already longing back to the utilitarian industry where simplicity, controllability, resource-efficiency and expertise were still valued.<br /><br />When advocating the reintroduction of magic, one must be careful not to endorse the kind of superstitious thinking that already has a good hold on how people relate to technology. Devices that hide their internal logic and instead base their interfaces on guessing what the user wants are a kind of Aladdin's lamp to most people: you don't really understand how they work, but at least their spirits fulfill your wishes as long as you don't make them angry.<br /><br />The way magic manifests itself in traditional technology is diametrically opposite to this. The basic functional principles of a bow, a canoe or a violin can be learned via simple observation and experimentation. The mystery lies elsewhere: in the evolutionary design details that are difficult to rationally explain, in the otherworldly talent and wisdom of the master crafter, in the superhuman excellence of the skilled user. If the design has been improved over generations, even minor improvements are difficult to make anymore, which gives it an aura of perfection.<br /><br />The magic we need more of in today's technological world is of the latter kind. We should strive to increase deepness rather than outward complexity, human virtuosity rather than consumerism, flexibility rather than effortlessness. The mysteries should invite attempts at understanding and exploitation rather than blind reliance or worship; this is also the key difference between esoterica and superstition.<br /><br />One definition of magic, compatible with that in the Jargon File, is that it breaks people's preconceptions of what is possible. In order to challenge and ridicule today's technological bloat, we should particularly aim at discoveries that are "far too simple and random to work but still do" -- new ways to use and combine the available grassroots-level elements, for instance.<br /><br />A Bytebeat formula is a simple arrangement of digital-arithmetic operations that have been elementary to computers since the very beginning. It is apparently something that should have been discovered decades ago, but it wasn't. Hakmem contains a few "sound hacks" that could have evolved into Bytebeat if a wide enough counter had been introduced into them, but there are no indications that this ever took place. It is mind-boggling that the space of very short programs remains so uncharted that random excursions there can churn out interesting new structures even after seventy years.<br /><br />Now consider that we are surrounded by millions of different natural "building blocks" such as plants, micro-organisms and geological materials. I honestly believe that, despite hundreds of thousands of years of cultural evolution, their combinatory space is nowhere near fully charted. For instance, it could be possible to find a rather simple and rudimentary technique that would make micro-organisms transform sand into a building material superior to everything we know today. A favorite fantasy scenario of mine is a small self-sufficient town that builds advanced spacecraft from scratch with "grassroots-level" techniques that seem magical to our eyes.<br /><br />How to develop this kind of magic?
Rational analysis and deterministic engineering will help us to some extent, but we are dealing with systems so chaotic and multidimensional that decades of random experimentation would be needed for many crucial leaps forward. And we don't really have those decades if we want to beat our technological cancer.<br /><br />Fortunately, the same <a href="http://countercomplex.blogspot.fi/2013/07/slower-moores-law-wouldnt-be-that-bad.html">Moore's law</a> that empowers tumorous engineering also provides a way out. Computers make it possible to manage chaotic systems in ways other than neurotic modularization. Today's vast computational capacities can be used to simulate the technological trial-and-error of cultural evolution at various levels of accuracy. Of course, simulations often fail, but at least they can give us a compass for real-world experimentation. Another important compass is "hack value" or "scientific intuition" -- modern manifestations of the good old human sense of wonder that has been providing fitness estimations for cultural evolution since time immemorial.<h1>My first twenty years on the demoscene (2015-04-02)</h1>Since I have been somewhat inactive in computer art for a while, I felt it might be a good idea to sum up the first twenty years of my demoscene career. Besides, my <a href="http://www.pelulamu.net/viznut/demos/">previous summary</a> is already a decade old.<br /><br />Back in 1994, I got involved in some heated BBS discussions. I thought the computer culture of the time had been infected by a horrible disease. IBM PC compatible software was getting slow and bloated, and no one seemed to even question the need for regular hardware upgrades. I totally despised the way PC hardware was being marketed to middle-class idiots and even declared the 486 PC the computer of choice for dumb and spoiled kids. I was using an 8088 PC at the time and promised myself not to buy any computing hardware that wasn't considered obsolete by consumption-oriented people. This decision has held quite well to this day. Nowadays, it is rather easy to get even "non-obsolete" hardware for free, so there has been very little need to actually buy anything but minor spare parts.<br /><br /><div style="text-align: center;"><a href="http://www.pouet.net/prod.php?which=21175"><img src="http://4.bp.blogspot.com/-zAAYtA-U_x0/VR0z4bLW4jI/AAAAAAAAAUw/kwI3A52H7xI/s1600/pelulamu.png" /></a></div><br />In the autumn of 1994, I released a couple of silly textmode games to spread my counterpropaganda. "<a href="http://www.pouet.net/prod.php?which=21175">Gamer Lamer</a>" was about a kid who gathered "lamer points" by buying computers and games with his father's money. "<a href="http://www.pouet.net/prod.php?which=21174">Micro$oft Simulator</a>", on the other hand, was a very simple economic simulator centered on releasing new Windows versions and suing people. I released these games under the group title <a href="http://www.pouet.net/groups.php?which=960">PWP</a> ("Pers-Wastaiset Produktiot", "anti-arse productions"), which was a kind of insider joke to begin with.
The Finnish computer magazines of the time had been using the word "perusmikro" ("baseline microcomputer") for new and shiny 486 PCs, and this had inspired me to call them "persmikro" ("arse microcomputer").<br /><br />At that time, Finnish BBSes were full of people who visited demoparties even without being involved with the demoscene. I wanted to meet the users of my favorite boards more often, so I started visiting the events as well. In order not to be just another hang-around loser, starting from 1996 I always entered a production in the PC 64k intro competition.<br /><br />(The demo screenshots are Youtube links, by the way.)<br /><br /><div style="text-align: center;"><a href="https://www.youtube.com/watch?v=VY8U-0t12zQ"><img src="http://2.bp.blogspot.com/-wX524bvmeJY/VR0vUS6cjKI/AAAAAAAAATw/Qcfomzvx6GQ/s1600/isi.png" /></a></div><br />Of course, I wanted to rebel against the demoscene status quo. I saw PC demos as "persmikro" software (after all, they were too bloated to download at 2400 bps and didn't work on my 8088), and I was also annoyed by their conceptual emptiness. I decided that all PWP demos should run on an 8088 in textmode or CGA, be under 32 kilobytes in size and have some meaningful content. The aforementioned "Gamer Lamer" or "Pelulamu" character became the main hero of these productions. PWP demos have always been mostly my own solo productions, but sometimes other people contributed material as well – mostly graphics but sometimes music too.<br /><br />The first three demos I released (the "<a href="http://www.pouet.net/prod.php?which=3681">Demulamu</a>" trilogy) were disqualified from their respective competitions. Once I had developed some skill and style, I actually became quite successful. In 1997, I came second in the 64k competition of the Abduction demoparty with "<a href="http://www.pouet.net/prod.php?which=3689">Isi</a>", and in 1998, I won the competition with "<a href="http://www.pouet.net/prod.php?which=3694">Final Isi</a>".<br /><br />My demos were often seen as "cheap", pleasing crowds with "jokes" instead of talent. I wanted to prove to the naysayers that I had technical skills as well. In 1997, I had managed to get myself an "obsolete" 386 motherboard and a VGA card, and I started to work on a "technically decent" four-kilobyte demo for that year's Assembly party. The principle of meaningful content held: I wanted to tell a story instead of just showing rotating 3D objects. "<a href="http://www.pouet.net/prod.php?which=3702">Helium</a>" eventually came first in the competition. Notably, it had optional Adlib FM music (eating up about 300 bytes of code and data) at a time when music was generally disallowed in the 4k size class.<br /><br /><div style="text-align: center;"><a href="https://www.youtube.com/watch?v=eBKGKr2nfYU"><img src="http://4.bp.blogspot.com/-q_UxsyuxxIw/VR0vUFjMqHI/AAAAAAAAATs/Oux31X78WiQ/s1600/helium.jpeg" /></a></div><br />My <a href="http://www.pouet.net/prod.php?which=3705">subsequent</a> PC 4k demos were not as successful, so I abandoned the category. Nevertheless, squeezing off individual bytes in size-optimized productions made me realize that profound discoveries and challenges might be waiting within tight constraints.
Since the Unix/Linux world <a href="http://www.pouet.net/prod.php?which=3703">I was starting to get into</a> wasn't a very rewarding demo platform, I decided to go 8-bit.<br /><br />In 1998, there was a new event called <a href="http://www.altparty.org/the-first-alternative-party.html">Alternative Party</a> that wanted to promote alternative demoscene platforms and competitions. The leading demoscene platforms of the time (386+ PC and AGA Amiga) were not allowed, but anything else was. I sympathized with the idea from the beginning and decided to try my hand at some VIC-20 demo code. "<a href="http://www.pouet.net/prod.php?which=3704">Bouncing Ball 2</a>" won the competition and started a kind of curse: every time I participated in the demo competition at Alternative Party, I ended up first (<a href="http://www.pouet.net/prod.php?which=3704">1998</a>, <a href="http://www.pouet.net/prod.php?which=55435">2002</a>, <a href="http://www.pouet.net/prod.php?which=8549">2003</a> and <a href="http://www.pouet.net/prod.php?which=56097">2010</a>).<br /><br />Alternative Party was influential in removing platform restrictions from other Finnish demoparties as well, which allowed me to use the unexpanded VIC-20 as my primary target platform just about anywhere. I felt quite good about this. There hadn't been many VIC-20 demos before, so there was still a lot of untapped potential in the hardware. I liked the raw and dirty esthetics of the platform and the hard-core memory constraints of the unexpanded machine, as well as the fact that the platform itself could be regarded as a political statement. I often won competitions with the VIC-20 against much more powerful machines, which kind of confirmed that I was on the right track.<br /><br />Around 2001-2003, several people were actively releasing VIC-20 demos, so there was some technical competition within the platform as well. New technical tricks were found all the time, and emulators often lagged behind the development. In 2003, I won the Alternative Party with a demo, "<a href="http://www.pouet.net/prod.php?which=8549">Robotic Warrior</a>", that used a singing software speech synthesizer. The synth later became a kind of trademark for my demo productions. Later that year, I made my greatest hardware-level discovery ever – that the square-wave audio channels of the VIC-I chip actually use shift registers instead of mere flip-flops (a toy model of the principle follows below). Both the speech synth and the "<a href="http://www.pelulamu.net/pwp/vic20/waveforms.txt">Viznut waveforms</a>" can be heard in "<a href="http://www.pouet.net/prod.php?which=10626">Robotic Liberation</a>" (2003), which I still regard as a kind of magnum opus of my VIC-20 work.<br /><br />
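To give a rough idea of why the shift-register discovery matters, here is a toy model of the principle. This is my simplified illustration, not a cycle-exact description of the VIC-I (the assumed feedback logic is the textbook inverting kind; see waveforms.txt for the real details). A mere flip-flop can only toggle; a shift register with inverting feedback also produces a plain square wave from a "clean" state, but any other bit pattern caught in the loop keeps circulating and yields entirely different timbres.<br /><pre>
/* Toy model of a shift-register tone channel (simplified, assumed
   feedback logic; illustrative only). */
#include &lt;stdio.h&gt;

static unsigned char reg = 0x2B;   /* an arbitrary pattern caught in the loop */

static int step(void) {
    int out = reg >> 7;                        /* output = bit shifted out  */
    reg = (unsigned char)(reg << 1) | !out;    /* inverting feedback        */
    return out;
}

int main(void) {
    int i;
    /* starting from 0x00 this settles into a plain square wave
       (8 ones, 8 zeros); patterns like 0x2B circulate forever instead */
    for (i = 0; i < 64; i++)
        putchar(step() ? '#' : '.');
    putchar('\n');
    return 0;
}
</pre>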
"<a href="http://www.pouet.net/prod.php?which=25764">Progress Without Progress</a>" (2006) is a simple Commodore 64 production that criticizes economic growth and consumption-oriented society (with a SID-enhanced version of my speech synthesizer). I also released a total of three 4k demos for the C-64 for the German parties Breakpoint and Revision. I never cared very much about technical excellence or "clean esthetics" when working on the C-64, as other sceners were concentrating on these aspects. For example, "<a href="http://www.pouet.net/prod.php?which=54667">Dramatic Pixels</a>" (2010) is above all an experiment in minimalistic storytelling.<br /><br />A version of my speech synth can also be heard on Wamma's Atari 2600 demo "<a href="http://www.pouet.net/prod.php?which=30236">(core)</a>", and some of my VCS code can be seen in Trilobit's "<a href="http://www.pouet.net/prod.php?which=51961">Doctor</a>" as well. I found the Atari 2600 platform very inspiring, having many similar characteristics and constraints I appreciate in the VIC 20 but sometimes in a more extreme form.<br /><br /><div style="text-align: center;"><a href="https://www.youtube.com/watch?v=qgtDyc4cuqI"> <img src="http://3.bp.blogspot.com/-1KclpNO17wI/VR0vWcRYcxI/AAAAAAAAAUE/7F84NSzqG-o/s1600/progress.jpeg" /></a></div><br />When I was bored with new technical effects for the VIC-20, I created tools that would allow me to emphasize art over technology. "<a href="http://www.pouet.net/prod.php?which=51115">The Next Level</a>" (2007) was the first example of this, combining "Brickshop32" animation with my trusted speech synth. I also wrote <a href="http://countercomplex.blogspot.fi/2008/08/greetings-to-everyone-once-again-as-you.html">a blog post</a> about its development. The dystopian demo "<a href="http://www.pouet.net/prod.php?which=53883">Future 1999</a>" (2009) combines streamed character-cell graphics with sampled speech. "<a href="http://www.pouet.net/prod.php?which=56097">Large Unified Theory</a>" (2010), a story about enlightenment and revolution, was the last production where I used BS32.<br /><br /><div style="text-align: center;"><a href="https://www.youtube.com/watch?v=el9S1GNp4vQ"><img src="http://2.bp.blogspot.com/-YhBETsr0C7Q/VR0vVdyi05I/AAAAAAAAAUQ/wsT83xZxqMU/s1600/lut.jpeg" /></a></div><br />Perhaps the hurried 128-kilobyte MS-DOS demo "<a href="http://www.pouet.net/prod.php?which=57443">Human Resistance</a>" (2011) should be mentioned here as well. In the vein of my earlier dystopian demos, it tells about a resistance group that has achieved victory against a supposedly superior artificial intelligence by using the most human aspects of human mind. I find these themes very relevant to what kind of thoughts I am processing right now.<br /><br /><div style="text-align: center;"><a href="https://www.youtube.com/watch?v=F1537t45xm8"><img src="http://2.bp.blogspot.com/-2Clsm7TkJwc/VR0vWhVBWCI/AAAAAAAAAUY/A-MLuRDRxDM/s1600/resist.png" /></a></div><br />In around 2009-2011, I spent a lot of time contemplating on** the nature of the demoscene and computing platforms, as seen in many of my blog posts from that period. See e.g. 
"<a href="http://www.pelulamu.net/countercomplex/putting-the-demoscene-in-a-context/">Putting the demoscene in a context</a>", "<a href="http://countercomplex.blogspot.fi/2010/03/defining-computationally-minimal-art-or.html">Defining Computationally Minimal Art</a>" and "<a href="http://pelulamu.net/countercomplex/the_future_of_demo_art/">The Future of Demo Art</a>" (which are also available on <a href="https://independent.academia.edu/VilleMatiasHeikkil%C3%A4">academia.edu</a>). I got quoted in the first ever doctoral dissertation about demos (<a href="http://www.danielbotz.de/">Daniel Botz: Kunst, Code und Maschine</a>), which also gave me some new food for thought. This started to form basis on my philosophical ideas about technology which I am refining right now.<br /><br />Extreme minimalism in code and data size had fascinated me since my first 4k demos. I felt there was a lot of untapped potential in extremely simple and chaotic systems (as hinted by Stephen Wolfram's work). The C-64 4k demo "<a href="http://www.pouet.net/prod.php?which=59125">False Dimension</a>" (2012) is a collection of Rorschach-like "landscape photographs" generated from 16-bit pseudorandom seeds. I also wanted to push the limits of <a href="http://countercomplex.blogspot.fi/2011/06/16-byte-frontier-extreme-results-from.html">sub-256-byte size classes</a>, but since real-world platforms tend to be quite problematic with tiny program sizes, I wanted a clean virtual machine for this purpose. "<a href="http://pelulamu.net/ibniz/">IBNIZ</a>" (2011) was born out of this desire.<br /><br />When designing IBNIZ, I wanted to have a grasp on how much math would be actually needed for all-inclusive music synthesis. Experimentation with this gave birth to "<a href="http://canonical.org/~kragen/bytebeat/">Bytebeat</a>", an extremely minimalistic approach to code-based music. It became quite a big thing, with more than 100000 watchers for the related Youtube videos. I even <a href="http://arxiv.org/abs/1112.1368">wrote an academic article</a> about the thing.<br /><br /><div style="text-align: center;"><a href="https://www.youtube.com/watch?v=tCRPUv8V22o"><img src="http://4.bp.blogspot.com/-1V4r3Cpr_EU/VR0vUHgwv9I/AAAAAAAAAUg/mx_lDH97NKo/s1600/bytebeat.jpeg" /></a></div><br />After Bytebeat, I had begun to consciously distance myself from the demoscene in order to have more room for different kinds of social and creative endeavours. The focus on non-interactive works seemed limited to me especially when I was pondering about the <a href="http://countercomplex.blogspot.fi/2011/07/dont-submit-yourself-to-game-machine.html">"Tetris effects" of social media mechanisms</a> or technology in general. However, my only step toward interactive works has been a single participation in <a href="http://countercomplex.blogspot.fi/2014/09/choosing-low-tech-visual-styles-for.html">Ludum Dare</a>. I had founded an oldschool computer magazine called "<a href="http://skrolli.fi/skrolli-english">Skrolli</a>" in autumn 2012 and a lot of my resources went there.<br /><br />Now that I have improved my self-management skills, I feel I might be ready for some vaguely demoscene-related software projects once again. One of the projects I have been thinking about is "CUGS" (Computer Underground Simulator) which would attempt to create a game-like social environment that would encourage creative and skill-oriented computer subcultures to thrive (basically replicating some of the conditions that allowed the demoscene to form and prosper). 
However, my head is full of other kinds of ideas as well, so what will happen in the next few months remains to be seen.<h1>Counteracting alienation with technological arts and crafts (2015-03-14)</h1><p>The alienating effects of modern technology have been discussed a lot during the past few centuries. Prominent thinkers such as Marx and Heidegger have pointed out how people get reduced to one-dimensional resources or pieces of machinery. Later on, grasping the real world has become increasingly difficult due to the ever-complexifying network of interface layers. I touched on this topic a little in <a href="http://countercomplex.blogspot.fi/2014/08/the-resource-leak-bug-of-our.html">an earlier text of mine</a>.</p><p>How to solve the problem? Discussion tends to polarize into quarrels between techno-utopians ("technological progress will automatically solve all the problems") and neo-luddites ("the problems are inherent in technology, so we should avoid it altogether"). I looked for a more constructive view and found it in <a href="http://en.wikipedia.org/wiki/Technology_and_the_Character_of_Contemporary_Life:_A_Philosophical_Inquiry">Albert Borgmann</a>.</p><p>According to Borgmann, the problem is not in technology or consumption per se, but in the fact that we have given them primary importance in our lives. To solve the problem, Borgmann proposes that we give that importance to something more worthwhile instead – something he calls "focal things and practices". His examples include music, gardening, running, and the culture of the table. Technological society would be there to protect these focalities instead of trying to make them obsolete.</p><p>In general, focal things and practices are things that are somehow able to reflect the whole of human existence – things in which self-expression, excellence and deep meanings can be cultivated. Traditional arts and crafts often seem to fulfill the requirements, but Borgmann becomes skeptical whenever high technology gets involved. Computers and modern cars easily alienate the hands-on craftsperson with their black-boxed microelectronics.</p><p>Perhaps the most annoying part of Eric S. Raymond's "<a href="http://www.catb.org/esr/faqs/hacker-howto.html">How To Become A Hacker</a>" is the one titled "Points For Style". Raymond states there that an aspiring hacker should adopt certain non-computer activities such as language play, sci-fi fandom, martial arts and musical practice. This sounds to me like the enforcement of a rather narrow subcultural stereotype, but reading Borgmann made me realize the important point behind it: computer activities alone aren't enough even for computer hackers – they need to be complemented by something more focal.</p><h2>Worlds drifting apart</h2><p>So far so good: we should maintain a world of focal things supported by a world of high-tech things. The former is quite earthly, so everything that involves computing and the like belongs to the latter. But what if these two worlds drift too far apart?</p><p>Borgmann believes that focal things can clarify technology. The contrast between the focal and the technological helps people put high-tech in its proper role and demand more tangibility from it. If the technology is material enough, its material aspects can be deepened by the materiality of the focal things.
When dealing with information technology, however, Borgmann's idea starts losing relevance. Virtual worlds no longer speak a material language, so focal traditions no longer help grasp their black boxes. Technology becomes a detached, incomprehensible bubble of its own – a kind of "necessary evil" for those who put the focal world first.</p><p>In order to keep the two worlds anchored together, I suppose we need to build some islands between them. We need things and practices that are tangible and human enough to be earthed by "real" focal practices, but high-tech enough to speak the high-tech language.</p><p>Hacker culture provides one possible key. The principles of playful exploration and technological self-expression can be expanded to many other technologies besides computing. Even if "true focality" can't be reached, the hacker attitude at least counteracts passive alienation. Art and craft that build on the assumed essence of a technology can be powerful in revealing the human-approachable dimensions of that technology.</p><h2>How many hackers do we need?</h2><p>I don't think it is necessary for every user of a complex technology to actively anchor it to reality. However, I do think everyone's social circle should include people who do. Assuming a minimal Dunbar's number of 100 – that is, a social circle of about a hundred people, each of which must contain at least one anchoring person – we can deduce that at least one percent of the users of any given technology in any social group should be part of a "hacker culture" that anchors it.</p><p>Anchoring a technology requires a relationship deeper than what mere rational expertise provides. I would suggest that at least 10% of the users of a technology (preferably a majority, however) should have a solid rational understanding of it, and that at least 10% of these should be "hackers" – which again works out to the one percent above. A buffer of "casual experts" between superficial and deep users would also have some sociodynamical importance.</p><p>We also need to anchor those technologies that we don't use directly but which are used for producing the goods we consume. Since everyone eats food and wears clothes, every social circle needs to have some "gardening hackers" and "textile hackers", or something with a similar anchoring capacity. In a scenario where agriculture and the textile industry are highly automated, some "automation hackers" may be needed as well.</p><p>Computing needs to be anchored from two sides – the physical and the logical. The physical aspect could be well supported by basic electronics craft or something like ham radio, while the logical side could be nurtured by programming-centered arts, maybe even by recreational mathematics.</p><h2>The big picture</h2><p>Sophisticated automation leaves people with increasing amounts of free time. Meanwhile, knowledge of and control over technology are held by ever fewer. It is therefore quite reasonable to use the extra free time for activities that help keep technology in people's hands. A network of technological crafters may also provide alternative infrastructure that decreases dependence on the dominant machinery.</p><p>In an ideal world, people would be constantly aware of the skills and interests present in their various social circles. They would be ready to adopt new interests depending on which technologies need stronger anchoring. Society in general would support the growth and diversification of those groups that are too small or demographically too uniform.</p><p>At their best, technological arts would have a profound positive effect on how the majority experiences technology – even when practiced by only a few.
They would inspire awe, appreciation and fascination in the masses, while at the same time inviting them to try to understand the technology.</p><p>This was my humble suggestion for a possible way to counteract technological alienation. I hope I managed to be inspiring.</p><h1>Choosing low-tech visual styles for games (2014-09-25)</h1>A month ago, I participated in Ludum Dare, a 48-hour game development contest. This was the first time I finished a game-like project since about 2005.<br /><br />The theme of the contest was "connected worlds". I made a game called <a href="http://www.ludumdare.com/compo/ludum-dare-30/?action=preview&amp;uid=41940">Quantum Dash</a> that experiments with parallel universes as a central game mechanic. The player operates in three universes at the same time, and when "interdimensional cords" are connected, the differences between these universes explosively cancel each other out. The "Dash" part of the name refers to the Boulder Dash style grid physics I used (sketched below). I found the creation process very refreshing, I am quite happy with the result considering the circumstances, and I will very likely continue making games (or at least rapid prototypes thereof).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-OjNr3_e4fyY/VCRqN2ijTvI/AAAAAAAAAS8/JpwTiYb2qgA/s1600/quantumdash-0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-OjNr3_e4fyY/VCRqN2ijTvI/AAAAAAAAAS8/JpwTiYb2qgA/s1600/quantumdash-0.png" height="240" width="320" /></a></div><br /><br />
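As an aside, "Boulder Dash style grid physics" boils down to something like the following toy tick function. This is my generic illustration of the technique, not Quantum Dash's actual code: the world is a character grid, and each tick scans it from the bottom up, letting every rock fall one cell whenever the cell below is empty.<br /><pre>
/* Toy Boulder Dash style grid tick: 'O' = rock, '#' = wall, '.' = air.
   One tick lets each rock fall by at most one cell.  Sketch only. */
#include &lt;stdio.h&gt;

#define W 8
#define H 5

static char grid[H][W+1] = {
    "..O..O..",
    "........",
    "..#.....",
    "........",
    "########",
};

static void tick(void) {
    int x, y;
    for (y = H - 2; y >= 0; y--)              /* bottom-up scan */
        for (x = 0; x < W; x++)
            if (grid[y][x] == 'O' && grid[y+1][x] == '.') {
                grid[y+1][x] = 'O';           /* rock falls one cell */
                grid[y][x]   = '.';
            }
}

int main(void) {
    int t, y;
    for (t = 0; t < H; t++) tick();
    for (y = 0; y < H; y++) puts(grid[y]);    /* rocks settle on walls */
    return 0;
}
</pre>The charm of this kind of physics is that the whole rule system is discrete and inspectable: every behavior of the world can be read directly from a handful of cell-by-cell rules.<br /><br />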
My relationship with computer games became somewhat dissonant during the nineties. At that time, the commercial industry became radically more centralized and profit-oriented. Eccentric European coder-auteur-heroes disappeared from computer magazines, giving way to American industry giants and their campaigns. There was also the rise of the "gamer" subculture, which I considered rather repulsive from early on due to its glorification of hardware upgrades and its disinterest in real computer skills.<br /><br />Profit maximization in the so-called serious game industry is largely driven by a specific, Hollywood-style "bigger is better" approach to audiovisual esthetics: a striving for photorealism. This approach is, of course, very appealing to shareholders. It is easy to imagine the grail -- everyone knows what the real world looks like -- but no one will ever reach it despite getting closer all the time. Increases in processing power and development budgets quite predictably map to increases in photorealism. There is also inherent obsolescence: yesterday's near-photorealism looks bad compared to today's near-photorealism, so it is easy to make consumers desire revamped versions of earlier titles instead of anything new.<br /><br />In the early noughties, the cult of photorealism was still so dominant that even non-commercial and small-scale game productions followed it. Thus, independent games often looked like inadequate, "poor man's" versions of AAA games. But the cult was starting to lose its grip: independent games were already looking for new paths. In his <a href="http://www.jesperjuul.net/text/independentstyle/">spring 2014 paper</a>, game researcher Jesper Juul names 2005 as an important year in this respect: since 2005, the Grand Prize winners of the Independent Games Festival have invariably followed styles that diverge from the industrial mainstream.<br /><br />Juul defines "Independent Style" as follows: <i>"Independent Style is a representation of a representation. It uses contemporary technology to emulate low-tech and usually “cheap” graphical materials and visual styles, signaling that a game with this style is more immediate, authentic and honest than are big-budget titles with high-end 3-dimensional graphics."</i><br /><br />The most prominent genre within I.S. is what Juul calls "pixel style", reminiscent of older video game technology and also overlapping with the concept of "<a href="http://countercomplex.blogspot.fi/2010/03/defining-computationally-minimal-art-or.html">Computationally Minimal Art</a>" I formulated a few years ago. My game, Quantum Dash, also fits into this substyle. I found the stylistic approach appealing because it is quick and easy to implement from scratch in a limited time. Part of this easiness stems from the fact that CMA is native to the basic <a href="http://countercomplex.blogspot.fi/2012/03/fabric-theory-talking-about-cultural.html">fabric</a> of digital electronic computers. Another attractive aspect is the long tradition of low-tech video games, which makes it easy to reflect on prior work and use the established esthetic language.<br /><br />Another widely used approach simulates art made with physical materials such as cut-out paper (And Yet It Moves) or wax pastels on paper (Crayon Physics). Both this approach and the aforementioned pixel style apparently refer to older technologies, which makes it tempting to generalize the idea of past references to other genres of I.S. as well. However, I think Juul somewhat stumbles in this attempt when it comes to styles that don't have a real historical predecessor: "The pixel style 3d games Minecraft and Fez also cannot refer to an earlier time when 3d games were commonly made out of large volumetric pixels (voxels), so like Crayon Physics Deluxe, the historical reference is somewhat counterfactual, but still suggests a simpler, if nonexistent, earlier technology."<br /><br />I think it would be more fruitful to concentrate on complexity rather than history when analyzing Independent Style. The esthetic possibility space of modern computing is mind-bogglingly large, and it is easy to get lost in all the available potential complexity. However, by introducing constraints and stylistic choices that dramatically reduce the complexity, it becomes easier even for a solo artist to explore and grasp the space. The constraints and choices don't need to refer to any kind of history -- real or counterfactual -- to be effective.<br /><br />The voxel style in Minecraft can still be considered somewhat historical -- a 3D expansion of grid-based 2D games such as Boulder Dash. However, I suspect that the esthetic experimentation in independent games will eventually lead to a much wider variety of styles and constraints -- including a bunch that cannot be explained with historical references.<br /><br />The demoscene has been experimenting with different visual styles for a long time. Even at times when technical innovation was the primary concern, the goal was to find new things that simply look good -- and realism was just one possible way of looking good.
In 1996, when <a href="https://www.youtube.com/watch?v=kIN0vDdzl-s">realtime raytracing</a> was the hot new photorealistic thing among democoders, a production called <a href="https://www.youtube.com/watch?v=5dB5wgFMZoQ">Paper by Psychic Link</a> dropped jaws with its paper-inspired visuals -- a decade before paper simulation became trendy in the independent games scene. Now that new PC hardware no longer challenges the demo artist the way it used to, there is much more emphasis on stylistic experimentation in non-constrained PC demos.<br /><br />Because of this longer history of active experimentation, I think it would be useful for many more independent game developers to look for stylistic inspiration in demoscene works. Of course, not all the tricks and effects adapt well to games, but the technological and social conditions of their production are quite similar to those of low-budget games. After all, demos are real-time-rendering computer programs produced by small groups without budgets, usually over relatively short time periods, so there's very little room for "big-budget practices" there.<br /><br />Here's a short list of demos with unique esthetic elements that might be able to inspire game esthetics as well. Two of them are for 8-bit computers and the rest for (semi-)modern PCs.<br /><ul><li><a href="https://www.youtube.com/watch?v=0R48MOXS7Wg">Metamorphosis by ASD</a></li><li><a href="https://www.youtube.com/watch?v=ZvWSNL2-cEs">IX by Moppi Productions</a></li><li><a href="https://www.youtube.com/watch?v=t4vmxv5MAlw">Your Song is Quiet part 2 by Inward and TPOLM</a></li><li><a href="https://www.youtube.com/watch?v=lb6erJZiUgc">Royal Temple Ball by Synesthetics</a></li><li><a href="https://www.youtube.com/watch?v=1qWBjSfmadU">Antifact by Limp Ninja</a></li><li><a href="https://www.youtube.com/watch?v=yLEm4uuiljs">Weed by Triebkraft and 4th Dimension</a></li><li><a href="https://www.youtube.com/watch?v=Tdw6CeuBt20">hwr2 by Kosmoplovci</a></li></ul>I'm expanding into game design and development primarily because I want to experiment with the power of interactivity, especially in relation to some of my <a href="http://countercomplex.blogspot.fi/2014/09/how-i-view-our-species-and-our-world.html">greater-than-life goals</a>. So, audiovisuals will be a secondary concern.<br /><br />Still, due to my background, I want to put effort into choosing a set of simple and lightweight esthetic approaches. They will definitely be computationally minimal, but I want to choose some fresh techniques in order to contrast favorably with the square-pixel style that is already quite mainstream in independent games. But that'll be a topic for another post.<h1>How I view our species and our world (2014-09-07)</h1><p>My blog post "<a href="http://countercomplex.blogspot.fi/2014/08/the-resource-leak-bug-of-our.html">The resource leak bug of our civilization</a>" has gathered quite a bit of interest recently, especially after being noticed by <a href="http://ranprieur.com/">Ran Prieur</a> in his blog. I therefore decided to translate <a href="http://viznut.blogspot.fi/2014/04/pari-sanaa-ihmisista-ja-maailmasta.html">another essay</a> to give it a wider context.
Titled "A few words about humans and the world", it is intended to be a kind of wholesome summary of my worldview, and it is especially intended for people who have had difficulties in understanding the basis of some of my opinions.</p><p><b>---</b></p> <p>This writeup is supposed to be concise rather than convincing. It therefore skips a lot of argumentation, linking and breakdowns that might be considered necessary by some. I'll get back to them in more specific texts.</p> <p><b>1. Constructions</b></p> <p>Humans are builders. We build not only houses, devices and production machinery, but also cultures, conceptual systems and worldviews. Various constructions can be useful as tools, however we also have an unfortunate tendency to chain ourselves to them.</p> <p>Right now, humankind has chained itself to the worship of abundance: it is imperative to produce and consume more and more of everything. Quantitative growth is imagined to be the same thing as progress. Especially during the last hundred years, the theology of abundance has invaded so deep and profound levels, that most people don't even realize its effect. It's not just about consumerism on a superficial level, but about the whole economic system and worldview.</p> <p>Extreme examples of growth ideology can be easily found in the digital world, where it manifests as a raised-to-the-power-two version. What happens if worshippers of abundance get their hands on a virtual world where the amount of available resources increases exponentially? Right, they will start bloating up the use of resources, sometimes even for its own sake. It is not at all uncommon to require a thousand times more memory and computational power than necessary for a given task. Mindless complexity and purposeless activities are equated with technological advancement. The tools and methods the virtual world is being built with have been designed from the point of view of idealized expansion, so it is difficult to even imagine alternatives.</p><p>I have some background in a branch of hacker culture, demoscene, where the highest ideal is to use minimal resources in an optimal way. The nature of the most valued progress there is condensing rather than expanding: doing new things under ever stricter limitations. This has helped me perceive the distortions of the digital world and their counterparts in the material world.</p><p>In everyday life, the worship of growth shows up, above all, as complexification of everything. It is becoming increasingly difficult to understand various socio-economic networks or even the functionality of ordinary technological devices. This alienates people from the basics of their lives. Many try to fight this alienation by creating pockets of understandability. Escapism, conservatism and extremism rise. On the other hand, there is also an increase in do-it-yourself culture and longing to a more self-sufficient way of life. People should be encouraged into these latter-mentioned, positive means to counter alienation instead of channels that increase conflicts.</p><p>An ever greater portion of techno-economical structures consists of useless clutter, so-called economic tumors. They form when various decision-makers attempt to keep their acquired cake-pieces as big as possible. Unnecessary complexity slows down and unilateralizes progress instead of being a requirement for it. 
Expansion needs to be balanced with contraction -- you can't breathe in without breathing out.</p> <p>The current phase of expansion is finally about to end, since the fossil fuels that made it possible are getting scarcer, and we know of no equally powerful replacement. As the phase took so long, the transition into contraction will be difficult for many. An increasingly large portion of the economy will escape into the digital world, where it is possible to maintain the unrealistic swelling longer than in the material world.</p> <p>The dependencies of production can be depicted as a pyramid where the things on the higher levels are built from the things below. In today's world, people always try to build at the top, so the result looks more like a shaky tower than a pyramid. Most new things could easily be built at lower levels. The lowest levels of the pyramid could also be strengthened by giving more room to various self-sufficient communities, local production and low-tech inventions. Technological and cultural evolution is not a one-dimensional road where "forward" and "backward" are the only alternatives. Rather, it is a network of possibilities burgeoning in every direction, and even its strange side-loops are worth knowing.</p> <p><b>2. Diversity</b></p> <p>It is often assumed that growth increases the number of available options. In principle, this is true -- there are more and more different products on store shelves -- but their differences are more and more superficial. The same goes for ways of life: it is increasingly difficult to choose a way of life that isn't attached to the same chains of production or models of thinking as every other way of life. The alternatives boil down to the same basic consumer-whoredom.</p> <p>Proprietors overstandardize the world with their choices, but this probably isn't very conscious activity. When there are enough decision-makers playing the same game by the same rules, the world will eventually shape itself around those rules (including all the ingrained bugs and glitches). Conspiracy theories or incarnations of evil are therefore not required to explain what's going on.</p> <p>The human-built machinery is getting ever more complex, so it is also getting ever more difficult to talk about it in concrete terms. Many therefore seek help from conceptual tools such as economic theories, legal terminology or ideologies, and subsequently forget that these are just tools. Nowadays, money- and production-centered ways of conceptualizing the world have become so dominant that people often don't realize that there are alternatives.</p> <p>Diversity helps nature adapt to changes and recover from disasters. For the same reason, human culture should be as diverse as possible, especially now that the future is very uncertain and we have already started to crash into the wall. It is necessary to make it considerably less difficult to choose radically different ways of life. Much more room should be given to experimental societies. Small and unique languages and cultures should be treasured.</p> <p>There is no one-size-fits-all model that would be best for everyone. However, I believe that most people would be happiest in a society that actively maintains human rights and makes certain that no one is left behind. The dictatorship of the majority, however, is not that crucial a feature of a political system in a world where everyone can freely choose a suitable system.
Regardless, dissidents should be given enough room in every society: not everyone necessarily has the chance to choose a society, and excessive unanimity tends to be quite harmful anyway.</p> <p><b>3. Consciousness</b></p> <p>Thousands of years ago, the passion for construction became so overwhelming that the quest for mental refinement didn't keep pace. I regard this as the main reason why human beings are so prone to become slaves of their constructs. Rational analysis is the only mental skill that has been nurtured somewhat sufficiently, and even rational analysis often becomes just a tool for various emotional outbursts and desires. Even very intelligent people may be completely lost with their emotions and motivations, making them inclined to adopt ridiculously one-dimensional thought constructs.</p> <p>Putting one's own herd before everyone else is an example of an attitude that may work among small hunter-gatherer groups, but which should no longer have a place in modern civilization. A population that has the intellectual facilities to build global networks of cause and effect should also have the ability to make decisions on the corresponding level of understanding instead of being driven by pre-intellectual instincts.</p> <p>Assuming that humankind still wants to maintain complex societal and technological structures, it should fill its consciousness gap. Any school system should teach the understanding and control of one's own mind at least as seriously as reading and writing. New practical mental methods, suitable for an ever greater variety of people, should be developed at least as passionately as new material technology.</p><p>For many people, a worldview is still primarily a way of expressing one's herd instincts. They argue and even fight about whose worldview is superior. I hope that the future will bring a more individual attitude towards them: there is no single "truth" but different ways of conceptualizing reality. A way that is suitable for one mind may even be destructive to another mind. Science produces facts and theories that can be used as building blocks for different worldviews, but it is not possible to put these worldviews into an objective order of preference.</p> <p><b>4. Life</b></p> <p>The purposes of life for individual human beings stem from their individual worldviews, so it is futile to suggest rules of thumb that suit all of them. It is much easier to talk about the purpose of biological life, however.</p><p>The basic nature of life, based on how life is generally defined, is active self-preservation: life continuously maintains its form, spreads and adapts to different circumstances. The biological role of a living being is therefore to be part of an ecosystem, strengthening the ecosystem's potential for continued existence.</p> <p>The longer there is life on Earth, the more likely it is to expand into outer space at some point in time. This expansion may already take place during the human era, but I don't think we should specifically strive for it before we have learned how to behave non-destructively. However, I'm all for the production of raw materials and energy in space, if it helps us abstain from raping our home planet.</p><p>At their best, intelligent lifeforms could function as some sort of gardeners. Gardeners that strengthen and protect the life in their respective homeworlds and help spread it to other spheres. However, I don't dare to suggest that the current human species has the prerequisites for this kind of role. 
At this moment, we are so lost that we couldn't become even a galactic plague.</p><p>Some people regard the human species as a mistake of evolution and want us to abandon everything that differentiates us from other animals. I see no problem per se in the natural behavior of homo sapiens, however: there's just an unfortunate imbalance of traits. We therefore shouldn't abandon reason, abstractions or constructivity but rebalance them with more conscious self-improvement and mental refinement.</p> <p><b>5. The end of the world</b></p> <p>It is not possible to save the world if it means saving the current societies and consumer-centric lifestyles. At most, we can soften the crash a little bit. It is therefore more relevant to concentrate on activities that make the postapocalyptic world more life-friendly.</p> <p>As there is still an increasing amount of communications technology and automation in the world, and the privileged even have ever more free time, these facilities should be used right now for sowing the seeds of a better world. If we start building alternative constructs only when the circumstances force us to, the transition will be extremely painful.</p> <p>People increasingly dwell in easiness bubbles facilitated by technology. It is therefore a good idea to bring suitable signals and facilities into these bubbles. Video game technology, for example, can be used to help reclaim one's mind, life and material environment. Entertainment in general can be used to increase interest in such reclamation.</p> <p>Many people imagine progress as a kind of unidirectional growth curve and therefore regard the postapocalyptic era as a "return to the past". However, the future world is more likely to become radically different from any previous historical era -- regardless of some possible "old-fashioned" aspects. It may therefore be more relevant to use fantasy rather than history to envision the future.</p> viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com3tag:blogger.com,1999:blog-1787947700033244607.post-35342547726942045462014-08-05T21:26:00.000+01:002014-09-12T13:23:16.430+01:00The resource leak bug of our civilization<br />A couple of months ago, Trixter of Hornet released a demo called "8088 Domination", which shows off real-time video and audio playback on the original 1981 IBM PC. This demo, among many others, contrasts favorably with today's wasteful use of computing resources.<br /><br />When people try to explain the wastefulness of today's computing, they commonly offer something I call the "tradeoff hypothesis". According to this hypothesis, the wastefulness of software is compensated for by flexibility, reliability, maintainability, and perhaps most importantly, cheap programming work. Even Trixter himself favors this explanation.<br /><br />I used to believe in the tradeoff hypothesis as well. I saw demo art on extreme platforms as a careful craft that attains incredible feats while sacrificing generality and development speed. However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal. An ever-increasing portion of the waste comes from abstraction clutter that serves no purpose in final runtime code. Most of this clutter could be eliminated with more thoughtful tools and methods without any sacrifices. 
What we have been witnessing in the computing world is nothing utilitarian but a reflection of a more general, inherent wastefulness that stems from the internal issues of contemporary human civilization.<br /><br /><h3>The bug</h3><br />Our mainstream economic system is oriented towards maximal production and growth. This effectively means that participants are forced to maximize their portions of the cake in order to stay in the game. It is therefore necessary to insert useless and even harmful "tumor material" into one's own economic portion in order to avoid losing one's position. This produces an ever-growing global parasite fungus that manifests as things like black boxes, planned obsolescence and the artificial creation of needs.<br /><br />Using a software development metaphor, it can be said that our economic system has a fatal bug: a bug that continuously spawns new processes that allocate more and more resources without releasing them afterwards, eventually stopping the whole system from functioning. Of course, "bug" is a somewhat normative term, and many bugs can actually be reappropriated as useful features. However, resource leak bugs are very seldom useful for anything other than attacking the system from the outside.<br /><br />Bugs are often regarded as necessary features by end-users who are not familiar with alternatives that lack the bug. This also applies to our society. Even if we realize the existence of the bug, we may regard it as a necessary evil because we don't know about anything else. Serious politicians rarely talk about trying to fix the bug. On the contrary, it is actually getting more common to embrace it instead. A group that calls itself "Libertarians" even builds its ethics on it. Another group called "Extropians" takes the maximization idea to the extreme by advocating an explosive expansion of humankind into outer space. In the so-called Kardashev scale, the developmental stage of a civilization is straightforwardly equated with how much stellar energy it can harness for production-for-its-own-sake.<br /><br /><h3>How the bug manifests in computing</h3><br />What happens if you give this buggy civilization a virtual world where the abundance of resources grows exponentially, as in Moore's law? Exactly: it adopts the extropian attitude, aggressively harnessing as many resources as it can. Since the computing world is virtually limitless, it can serve as an interesting laboratory example where the growth-for-its-own-sake ideology takes a rather pure and extreme form. Nearly every methodology, language and tool used in the virtual world focuses on cumulative growth while neglecting many other aspects.<br /><div><br /></div><div><div>To concretize, consider web applications. There is a plethora of different browser versions and hardware configurations. It is difficult for developers to take all the diversity into account, so the problem has been solved by encapsulation: monolithic libraries (such as jQuery) that provide cross-browser-compatible utility blocks for client-side scripting. Also, many websites share similar basic functionality, so it would be a waste of labor time to implement everything specifically for each application. This problem has also been solved with encapsulation: huge frameworks and engines that can be customized for specific needs. 
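<br /><br />Before moving on, it is worth spelling out the central metaphor in actual code. The fragment below is a minimal illustration of my own, not a quote from any real codebase: a resource leak is simply a program that keeps claiming resources without ever releasing them, until the system as a whole chokes.<br /><br /><pre>
/* A minimal resource leak: memory is claimed on every round of the
 * loop and never released. */
#include &lt;stdlib.h&gt;

int main(void)
{
    for (;;) {
        void *chunk = malloc(1024 * 1024); /* claim another megabyte */
        if (chunk == NULL)
            break; /* nothing left to claim -- the system chokes */
        /* chunk is never free()d: the "growth" only accumulates */
    }
    return 0;
}
</pre>In software, nobody would defend this loop as a feature; the point of the metaphor is that our economic system runs essentially the same loop and calls it growth.<br /><br />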
These frameworks and engines have usually been built upon previous masses of code (such as PHP) that have been designed for exactly the same purpose. Frameworks encapsulate legacy frameworks, and eventually, most of the computing resources are wasted by the intermediate bloat. The accumulation of unnecessary code dependencies also makes software more bug-prone, and debugging becomes increasingly difficult because of the ever-growing pile of potentially buggy intermediate layers.</div><div><br /></div><div>Software developers tend to use encapsulation as the default strategy for just about everything. It may feel like a simple, pragmatic and universal choice, but this feeling is mainly due to the tools and the philosophies they stem from. The tools make it simple to encapsulate and accumulate, and the industrial processes of software engineering emphasize these ideas. Alternatives remain underdeveloped. Mainstream tools make it far more cumbersome to do things like metacoding, static analysis and automatic code transformations, which would be far more relevant than static frameworks for problems such as cross-browser compatibility.</div><div><br /></div><div>Tell a bunch of average software developers to design a sailship. They will do a web search for available modules. They will pick a wind power module and an electric engine module, which will be attached to some kind of a floating module. When someone mentions aero- or hydrodynamics, the group will respond by saying that elementary physics is a far too specialized area, and that it is cheaper and more straightforward to just combine pre-existing modules and pray that the combination works sufficiently well.</div><div><br /></div><h3>Result: alienation</h3><div><br /></div><div>Building complex systems from more-or-less black boxes is also the way our industrial society is constructed. Computing just takes it to a greater extreme. Modularity in computing therefore relates very well to the technology criticism of philosophers such as Albert Borgmann.</div><div><br /></div><div>In his 1984 book, Borgmann uses the term "service interface", which even sounds like software development terminology. Service interfaces often involve money. People who have a paid job, for example, can be regarded as modules that try to fulfill a set of requirements in order to remain acceptable pieces of the system. When using the money, they can be regarded as modules that consume services produced by other modules. What happens beyond the interface is considered irrelevant, and this irrelevance is a major source of alienation. Compare someone who grows and chops their own wood for heating to someone who works in the forest industry and buys firewood with the paycheck. In the former case, it is easier to get genuinely interested in all the aspects of forests and wood because they directly affect one's life. In the latter case, fulfilling the unit requirements is enough.</div><div><br /></div><div>The way of perceiving the world as modules or devices operated via service interfaces is called the "device paradigm" in Borgmann's work. This is contrasted against "focal things and practices", which tend to have a wider, non-encapsulated significance to one's life. Heating one's house with self-chopped wood is focal. Arts and crafts also provide a lot of examples of focality. 
Borgmann urges a restoration of focal things and practices in order to counteract the alienating effects of the device paradigm.</div></div><div><br /></div><div><div>It is increasingly difficult for computer users to avoid technological alienation. Systems become increasingly complex, and taking a genuine interest in their inner workings can be discouraging. If you learn something from a system, the information probably won't stay current for very long. If you modify it, subsequent software updates will break it. It is extremely difficult to develop a focal relationship with a modern technological system. Even hard-core technology enthusiasts tend to ignore most aspects of the systems they are interested in. As ever-complexifying computer systems grow ever more deeply ingrained into our society, they become increasingly difficult to grasp even for those who are dedicated to understanding them. Eventually even they will give up.</div><div><br /></div><div>Chopping one's own wood may be a useful way to counteract the alienation of the classic industrial society, as oldschool factories and heating stoves still have some basics in common. In order to counteract the alienation caused by computer technology, however, we need to find new kinds of focal things and practices that are more computerish. If they cannot be found, they need to be created. Crafting with low-complexity computer and electronic systems, including the creation of art based on them, is my strongest candidate for such a focal practice among those practices that already exist in subcultural form.</div><div><br /></div><h3>The demoscene insight</h3><div><br /></div><div>I have been programming since my childhood, for nearly thirty years. I have been involved with the demoscene for nearly twenty years. During this time, I have grown a lot of angst towards various trends of computing.</div><div><br /></div><div>The extreme categories of the demoscene -- namely, eight-bit democoding and extremely short programs -- have been helpful for me in managing this angst. These branches of the demoscene are a useful, countercultural mirror that contrasts with the trends of industrial software development and helps grasp its inherent problems.</div><div><br /></div><div>Other subcultures have been far less useful for me in this endeavour. The mainstream of open source / free software, for example, is a copycat culture, despite its strong ideological dimension. It does not actively question the philosophies and methodologies of the growth-obsessed industry but actually embraces them when creating duplicate implementations of growth-obsessed software ideas.</div><div><br /></div><div>Perhaps the strongest countercultural trend within the demoscene is the move of focus towards ever tighter size limitations, or as they say, "4k is the new 64k". This trend is diagonally opposite to what the growth-oriented society is doing, and it forces one to rethink even the deepest "best practices" of industrial software development. Encapsulation, for example, is still quite prominent in the 4k category (4klang is a monolith), but in the 1k and smaller categories, finer methods are needed. When going downwards in size, paths considered dirty by the mainstream need to be embraced. The efficient exploration and taming of chaotic systems needs tools that are deeply different from what has been used before. 
Stephen Wolfram's ideas presented in "A New Kind of Science" can perhaps provide useful insight for this endeavour.</div><div><br /></div><div>Another important countercultural aspect of the demoscene is its relationship with computing platforms. The mainstream regards platforms as neutral devices that can be used to reach a predefined result, while the demoscene regards them as a kind of raw material that has a specific essence of its own. Size categories may also split platforms into subplatforms, each of which has its own essence. The mainstream wants to hide platform-specific characteristics by encapsulating them into uniform straightjackets, while the demoscene is more keen to find suitable esthetic approaches for each category. In Borgmannian terms, demoscene practices are more focal.</div><div><br /></div><div>Demoscene-inspired practices may not be the wisest choice for pragmatic software development. However, they can be recommended for the development of a deeper relationship with technology and for diminishing the alienating effects of our growth-obsessed civilization.</div></div><div><br /></div><div><h3>What to do?</h3><div><br /></div><div>I am convinced that our civilization is already falling and that this fall cannot be prevented. What we can do, however, is create seeds for something better. Now is the best time for doing this, as we still have plenty of spare time and resources, especially in rich countries. We especially need to propagate the seeds towards laypeople who are already suffering from increasing alienation because of the ever more computerized technological culture. The masses must realize that alternatives are possible.</div><div><br /></div><div>A lot of our current civilization is constructed around the resource leak bug. We must therefore deconstruct the civilization down to its elementary philosophies and develop new alternatives. Countercultural insights may be useful here. And since hacker subcultures have been forced to deal with the resource leak bug in its most extreme manifestation for some time already, their input can be particularly valuable.</div></div><div><br /></div>viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com27tag:blogger.com,1999:blog-1787947700033244607.post-31157129897359668472013-07-14T17:58:00.000+01:002013-07-14T18:53:44.817+01:00Slower Moore's law wouldn't be that bad.Many aspects of the world of computing are dominated by Moore's law -- the phenomenon that the density of integrated circuits tends to double every two years. In mainstream thought, this is often equated with progress -- a deterministic forward-march towards the universal better along a metaphorical one-dimensional path. In this essay, I'm creating a fictional alternative timeline to bring up some more dimensions. A more moderate pace of Moore's law wouldn't necessarily be that bad after all.<br /><br /><h2>Question: What if Moore's law had been progressing at half speed since 1980?</h2>I won't try to explain the point of divergence. I just accept that, since 1980, certain technological milestones would have been fewer and farther between. As a result, certain quantities would have doubled only once every four years instead of every two years. 
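<br /><br />The premise can be restated as simple arithmetic: with doublings happening at half the pace, a Slow-Moore year corresponds to the real-world technology level of 1980 plus half the years elapsed since 1980. A tiny sketch (my own restatement of the premise, nothing more):<br /><br /><pre>
/* Slow-Moore year -&gt; real-world technology level:
 * level(year) = 1980 + (year - 1980) / 2 */
#include &lt;stdio.h&gt;

int main(void)
{
    for (int year = 1980; year &lt;= 2013; year += 11)
        printf("Slow-Moore %d ~ real-world level %.1f\n",
               year, 1980.0 + (year - 1980) / 2.0);
    return 0;
}
</pre>Hence the figures that follow.<br /><br />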
The RAM capacities, transistor counts, hard disk sizes and clock frequencies would have just reached the 1990s level in the year 2000, and in the year 2013, we would be at the 1996 level in regards to these variables.<br /><br />I'm excluding some hardware-related variables from my speculation. Growth in telecommunications bandwidths, including the spread of broadband, is more related to infrastructural development than to Moore's law. I also consider the technological development in things like batteries, radio transceivers and LCD screens to be unrelated to Moore's law, so their progress would have been more or less unaffected, apart from things like framebuffer and DSP logic.<br /><br /><h2>1. Most milestones of computing culture would not have been postponed.</h2>When I mentioned "the 1996 level", many readers probably envisioned a world where we would be "stuck in the year 1996" in all computing-related aspects. Noisy desktop Pentiums running Windows 95s and Netscape Navigators, with users staring in awe at rainbow-colored, static, GIF-animation-plagued websites over landline dialup connections. This says a lot about mainstream views of computer culture: everything is so one-dimensionally techno-determinist that even progress in purely software- and culture-related aspects is difficult to envision without its supposed hardware prerequisites.<br /><br />My view is that progress in computing and some other high technology has always been primarily cultural. Things don't become market hits straight after they're invented, and they don't get invented straight after they're technologically possible. For example, there were touchscreen-based mobile computers as early as 1993 (Apple Newton), but it took until 2010 before the cultural aspects were right for their widespread adoption (iPad). In the Slow-Moore world, therefore, a lot of people would have tablets just like in our world, despite the fact that they probably wouldn't have very many colors.<br /><br />The mainstream adoption of the Internet would have taken place in the mid-1990s just like in the real world. 1987-equivalent hardware would have been completely sufficient for the boom to take place. Public online services such as Videotex and BBSes had been available since the late 1970s, and Minitel had already gathered millions of users in France in the 1980s, so even a dumb text terminal would have sufficed on the client side. The power of the Internet compared to its competitors was its global, free and decentralized nature, so it would have taken off among common people even without graphical web browsers.<br /><br />Assuming that the Internet had become popular with character-based interfaces rather than multimedia-enhanced hypertext documents, its technical timeline would have become somewhat different. Terminal emulators would have eventually accumulated features in the same way as Netscape-like browsers did in the real world. RIPscrip is a real-world example of what could have become dominant: graphics, GUI components and even sound and video on top of a dumb terminal connection. "Dynamic content" wouldn't require horrible kludges such as "AJAX" or "dynamic HTML", as the dumb terminal approach would have been interactive and dynamic enough to begin with. 
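<br /><br />To give a concrete taste of what "interactive and dynamic enough" means on a dumb terminal, here is a sketch in C using plain VT100/ANSI control codes -- the lowest common denominator of such protocols (RIPscrip and its kin layered graphics and GUI widgets on top of the same kind of in-band control stream). The program repaints one region of the screen in place, with no "page reload" in sight:<br /><br /><pre>
/* Dynamic content over a dumb terminal connection: repaint one
 * screen region in place with VT100/ANSI escape codes. */
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

int main(void)
{
    printf("\x1b[2J");                    /* clear the screen          */
    printf("\x1b[1;1HLive feed:");        /* static part of the "page" */
    for (int i = 0; i != 10; i++) {
        printf("\x1b[3;5Hupdate #%d", i); /* rewrite row 3, column 5   */
        fflush(stdout);                   /* push it out immediately   */
        sleep(1);                         /* POSIX sleep, for the demo */
    }
    printf("\x1b[5;1H");                  /* park the cursor below     */
    return 0;
}
</pre>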
The gap between graphical and text-based applications would be narrower, as would the gap between "pre-web" and "modern" online culture.<br /><br />The development of social media was purely culture-driven: Facebook would have been technically possible already in the 1980s -- feeds based on friend lists don't require more per-user computation than, say, IRC channels. What was needed was cultural development: several "generations" of online services were required before all the relevant ideas came up. In general, most online services I can think of could have taken place in some form or another, at about the same time as they appeared in the real world.<br /><br />The obvious exceptions would be those services that require a prohibitive amount of server-side storage. An equivalent of Google Street View would perhaps just show rough shapes of the buildings instead of actual photographs. YouTube would focus on low-bitrate animations (something like Flash) rather than on full videos, as the default storage space available per user would be quite limited. Client-side video/audio playback wouldn't necessarily be an issue, since MPEG decompression hardware was already available in some consumer devices in the early 1990s (Amiga CD32) and would therefore have been feasible in the Slow-Moore year 2004. Users would just be more sensitive about disk space and would therefore avoid video formats for content that doesn't require actual video.<br /><br />All the familiar video games would be there, as the resource-hogging aspects of games can generally be scaled down without losing the game itself. It could even be argued that there would be far more "AAA" titles available, assuming that the average budget per game would be lower due to lower fidelity requirements.<br /><br />Domestic broadband connections would be there, but they would more often be implemented via per-apartment ethernet sockets than via per-apartment broadband modems. The amount of DSP logic required by some protocols (*DSL) would make per-apartment boxes rather expensive compared to the installation of some additional physical wires. In rural areas, traditional telephone modems would still be rather common.<br /><br />Mobile phones would be very popular. Their computational specs would be rather low, but most of them would still be able to access Internet services and run downloadable third-party applications. Neither of these requires a lot of power -- in fact, every microprocessor is designed to run custom code to begin with. Very few phones would have built-in cameras, however -- the development of cheap and tiny digital camera cells has a lot to do with Moore's law. Also, the global digital divide would be greater -- there wouldn't be extremely cheap handsets available in poor countries.<br /><br />It must be emphasized here that even though IC feature sizes would be at the "1996 level", we wouldn't be building devices from the familiar 1996 components. The designs would be far more advanced and logic-efficient. Hardware milestones would have been more like "reinventing the wheel" than accumulating as much intellectual property as possible on a single chip. RISC and Transputer architectures would have displaced x86-like CISCs a long time ago and perhaps even given way to ingenious inventions we can't even imagine.<br /><br />Affordable 3D printers would be just around the corner, just like in the real world. Their developmental bottlenecks have more to do with the material printing process itself than with anything Moorean. 
Similarly, the setbacks in the progress of virtual reality helmets have more to do with optics and head-tracking sensors than with semiconductors.<br /><br /><h2>2. People would be more conscious about the use of computing resources.</h2>As mentioned before, digital storage would be far less abundant than in the real world. Online services would still have tight per-user disk quotas, and many users would be willing to actually pay for more space. Even laypeople would have a rather good grasp of kilobytes and megabytes and would often put effort into choosing efficient storage formats. All computer users would need to regularly choose what is worth keeping and what isn't. Online privacy would generally be better, as it would be prohibitively expensive for service providers to neurotically keep the complete track record of every user.<br /><br />As global Internet backbones would have considerably lower capacities than local and mid-range networks, users would actually care about where each server is geographically located. Decentralized systems such as IRC and Usenet would therefore never have given way to centralized services. Search engines would be technically more similar to YaCy than Google, social media more similar to Diaspora than Facebook. Even the equivalent of Wikipedia would be a network of thousands of servers -- a centralized site would have ended up being killed by deletionists. Big businesses would be embracing this "peer-to-peer" world instead of expanding their own server farms.<br /><br />In general, Internet culture would be more decentralized, ephemeral and realtime than in the real world. Live broadcasts would be more common than vlogs or podcasts. Much less data would be permanently stored, so people would have relatively small digital footprints. Big companies would have far less power over users.<br /><br />Attitudes towards software development would be quite different, especially in regards to efficiency and optimization. In the real world, wasteful use of computational resources is systematically overlooked because "no one will notice the problem in the future anyway". As a result, we have incredibly powerful computers whose software still suffers from mainframe-era problems such as ridiculously high UI latencies. In a Slow-Moore world, such problems would have been solved a long time ago: after all, all you need is good user-level control over how the operating system prioritizes different pieces of code and data, and some will to use it.<br /><br />Another problem in real-world software development is the accumulation of abstraction layers. Abstraction is often useful during development, as it speeds up the process and simplifies maintenance, but most of the resulting dependencies are a complete waste of resources in the final product. A lot of this waste could be eliminated automatically by the use of advanced static analysis and other methods. From the vast contrast between carefully size-optimized hobbyist hacks and bloated mainstream software, we might guess that some mind-boggling optimization ratios could be reached. However, the use and development of such tools has been seriously lagging behind because of the attitude problems caused by Moore's law.<br /><br />In a Slow-Moore world, the use of computing resources would be extremely efficient compared to current standards. This wouldn't mean that hand-coded assembly would be particularly common, however. 
Instead, we would have something like "hack libraries": huge collections of efficient solutions to various problems, from low-level to high-level, from specific to generic. All tamed, tested and proven in their respective parameter ranges. Software development tools would have intelligent pattern-matchers that would find efficient hacks from these libraries, bolt them together in optimal arrangements and even optimize the bolts away. Hobbyists and professionals alike would be competing in finding ever smarter hacks and algorithms to include in the "wisdombase", thus making all software incrementally more resource-efficient.<br /><br /><h2>3. There would still be a gap between digital and "real" content.</h2>Regardless of how efficiently hardware resources are used, unbreakable limits always exist. In a Slow-Moore world, for instance, film photography would still be superior in quality to digital photography. Also, since the digital culture would be far more resource-conscious, large resolutions wouldn't even be desirable in purely digital contexts.<br /><br />Spreading "memes" as bitmap images is a central piece of today's Internet culture. Even snippets of on-line discussions get spread as bitmapped screenshots. Wasteful, yes, but compatible and therefore tolerable. The Slow-Moore Internet would probably be much more compatible with low-bit formats such as plaintext or vector and character graphics.<br /><br />Since the beginning of digital culture, there has been a desire to import content from "meatspace" into the digital world. At first, people did it in laborious ways: books were typed into text files, paintings and photographs were repainted with graphics editors, songs were covered with tracker programs. Later, automatic methods appeared: pictures could be scanned, songs could be recorded and compressed into MP3-like formats. However, it took some time before straight automatic imports could compete against skillful manual effort. In low resolutions, skillful pixel-pushing still makes a difference. Synthesized songs take a fraction of the space of an equivalent MP3 recording. Eventually, the difference diminished, and no one cared about it any longer.<br /><br />In a Slow-Moore world, the timeline of digital media would have been vastly different. A-priori-digital content would still have vast advantages over imported media. Artists looking for worldwide appreciation via the Internet would often choose to make the effort to learn born-digital methods instead of just digitizing their analog works. As a result, many traditional disciplines of computer art would have grown enormous. Demoscene and low-bit techniques such as procedural content generation and tracker-like synthesized music would be the mainstream norm in the Internet culture instead of anything "underground".<br /><br />Small steps towards photorealism and higher fidelity would still be able to impress large audiences, as they would still notice the difference. However, in a resource-conscious online culture, there would also probably be a strong countercultural movement against "high-bit" -- a movement seeking to embrace the established "Internet esthetics" instead of letting it be taken over and marginalized by imports.<br /><br />Record and film companies would definitely be suing people for importing, covering and spreading their copyrighted material. However, they would still be able to sell it in physical formats because of their superior quality. 
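<br /><br />As a concrete taste of how far the born-digital size advantage can go, here is a complete, endless piece of music in one line of C -- a "bytebeat" formula of the kind whose birth this blog has covered before. The playback command in the comment is just one possibility, assuming ALSA's aplay is available:<br /><br /><pre>
/* A whole "song" in one expression. Compile and pipe the output to
 * an 8-bit 8 kHz raw audio player, e.g.: ./a.out | aplay -r 8000 -f U8 */
#include &lt;stdio.h&gt;

int main(void)
{
    for (unsigned t = 0;; t++)
        putchar(t * (42 &amp; t &gt;&gt; 10));
    return 0;
}
</pre>The generator is a couple of dozen characters of source code; a recording of comparable music would be measured in megabytes.<br /><br />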
There would also be a class of snobs who hate all "computer art" and all the related esthetics while preferring "real, physical formats".<br /><br /><h2>4. Conclusion</h2>A Slow-Moore world would be somewhat "backwards" in some respects but far more sensible or even more advanced in others. As a demoscener with an ever-growing conflict against today's industry-standard attitudes, I would probably prefer to live with a more moderate level of Moorean inflation. However, a Netflix fan who likes high-quality digital photography and doesn't mind being under surveillance would probably choose otherwise.<br /><br />The point of my thought experiment was to justify my view that the idea of a linear tech tree strongly tied to Moore's law is a banal oversimplification. There are many other dimensions that need to be noticed as well.<br /><br />The alternative timeline may also be used as inspiration for real-world projects. I would definitely like to see whether an aggressively optimizing code generation tool based on "hack libraries" would be feasible. I would also like to see the advent of a mainstream operating system that doesn't suck.<br /><br />Nevertheless: Down with Moore's law fetishism! It's time for a more mature technological vision!viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com7tag:blogger.com,1999:blog-1787947700033244607.post-49557642130640360792013-01-05T15:29:00.002+00:002013-01-05T15:39:30.254+00:00I founded a new "oldschool" computer magazine.Maybe it's a sensible time to tell a bit about what I've been up to for the past few months.<br /><br />In September 2012, I founded <a href="http://www.skrolli.fi/">Skrolli</a>, a new Finnish computer magazine. This turn in my life surprised even me.<br /><br />It started from <a href="http://pelulamu.net/pixx/retrobitti-etusivu.jpeg">an image that went viral</a>. Produced by my friend CCR with a lot of ideas from me, it was a faux magazine cover speculating about what the longest-living Finnish home computing magazine, MikroBitti, would be like today if it had never renewed itself after the eighties. The magazine happens to be somewhat iconic to those Finns who got immersed in computing before the turn of the millennium, so it reached a relevant audience quite efficiently.<br /><br />The faux cover was meant to be a joke, but the abundance of comments like "I would definitely subscribe to this kind of magazine" made me seriously consider the possibility of actually creating something like it. I put up a simple web page stating the idea of a new "countercultural" computer magazine that is somewhat similar to what MikroBitti used to be like. In just a few days, over a hundred people showed up on the dedicated IRC channel, and here we are.<br /><br />Bringing the concept of an oldschool microcomputer magazine to the present era needs some thoughtful reflection. The world has changed a lot; computer hobbyists no longer exist as a unified group, for example. Everyone uses a computer for leisure, and it is sometimes difficult to draw a line between those who are interested in the applications and those who are genuinely interested in the technology. Different activities also have their own subcultures with their own communication channels, and it is often hard to relate to someone whose subculture has a very different basis.<br /><br />Skrolli defines computer culture as something where the computational aspects are irreducible. 
It is possible to create visual art or music completely without digital technology, for example, but once the computer becomes the very material (as in the case of pixel art or chip music), the creative activity becomes relevant to our magazine. Everything where programming or other direct access to the computational mechanisms is involved is also relevant, of course.<br /><br />I also chose to target the magazine at my own language group. In a nation of six million, the various subcultures are closer to one another, so it is easier to build a common project that spans the whole scale. The continuing existence of large computer hobbyist events in this country might also simplify the task. If the magazine had been started in English or even German, there would have been a much greater risk of appealing only to a few specialized niches.<br /><br />In order to keep myself motivated, I have been considering the possibility that Skrolli will actually start a new movement. Something that brings the computational aspects of computer enthusiasm back into daylight and helps the younger generation find a true, uncompromising relationship with digital technology. Once the movement starts growing on its own, without being tied to a single project, language barriers will no longer exist for it.<br /><br />I will be busy with this stuff for at least a couple of months until we get the first few issues printed (yes, it will be primarily a paper magazine as a statement against short-lived journalism). After that, it is somewhat likely that I will finish the projects I temporarily abandoned: there will probably be a JIT-enabled version of IBNIZ, and the IBNIZ democoding contest I promised will be arranged. Stay tuned!viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com6tag:blogger.com,1999:blog-1787947700033244607.post-81965729541629243032012-04-19T13:10:00.008+01:002012-04-19T16:13:08.901+01:00The relationship between "New Aesthetic" and Computationally Minimal ArtA couple of weeks ago, something called "New Aesthetic" was brought to my attention. It is difficult to find any sort of coherent definition for the idea, but it seems like an umbrella label for a wide variety of visual things that somehow look computational, often in not-so-computational contexts. The main spreader of the meme is apparently <a href="http://new-aesthetic.tumblr.com/">a Tumblr blog</a> that collects pictures of things such as pixellated glitches in textiles, real-life voxel sculptures, mugs decorated with website graphics, digitally glitched photographs, satellite images as well as all kinds of other things that evoke suitably futuristic associations.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-T5Bkv-5n8lc/T5ACa2IUCrI/AAAAAAAAAK8/n7_YumwKjuw/s1600/newaesthetic.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 320px;" src="http://2.bp.blogspot.com/-T5Bkv-5n8lc/T5ACa2IUCrI/AAAAAAAAAK8/n7_YumwKjuw/s320/newaesthetic.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5733084985872878258" /></a><br /><br />Despite the profound vagueness of the umbrella term, it is not difficult to notice the general trend it refers to. Just a decade ago, a computationally inspired real-life object would have been a unique novelty item, but nowadays there are such things all around us. 
I mentioned an aspect of this trend back in 2010 in <a href="http://pelulamu.net/countercomplex/computationally-minimal-art/">my article on Computationally Minimal Art</a>, where I noticed that "retrocomputing esthetics" is not just thriving in its respective subcultures (such as the demoscene or the chip music scene) but popping up every now and then in mainstream contexts as well -- often completely without the historical or nostalgic vibe usually associated with retrocomputing.<br /><br />As the concept of "New Aesthetic" overlaps a lot with my ponderings, I now feel like building some semantics in order to relate the ideas to one another:<br /><br />"New Aesthetic", as I see it, is a rather vague umbrella term that contains a wide variety of things but has a major subset that could be called "Computationally Inspired".<br /><br />"Computationally Inspired" is anything that brings the concepts and building blocks of the "digital world" into non-native contexts. T-shirts, mugs and other real-life objects decorated with big-pixel art or website imagery are obvious examples. In a wide sense, even anything that makes the basic digital building blocks more visible within a digital context might be "Computationally Inspired" as well: big-pixel low-fi computer graphics on a new high-end computer, for example.<br /><br />"Computationally Minimal" is anything that uses a very low amount of computational resources, often making digital building blocks such as pixels very discernible. Two years ago, I defined "Computationally Minimal Art" as follows: "[A] form of discrete art governed by a low computational complexity in the domains of time, description length and temporary storage. The most essential features of Computationally Minimal Art are those that persist the longest when the various levels of complexity approach zero." <br /><br />We can see that Computationally Inspired and Computationally Minimal have a lot of overlap, but neither is a subset of the other. Cross-stitch patterns are CM almost by definition, as they have a limited number of discrete "pixels" with a limited number of different colors, but they are not CI unless they depict something that comes from the "computer world", such as video game characters. On the other hand, a sculpture based on a large amount of digitally corrupted data is definitely CI but falls outside the definition of CM due to the size of the source data.<br /><br />What CM and CI and especially their intersection have in common, however, is the tendency to show off discrete digital data and/or computational processes, which gives them a lot of esthetic similarity. In CI, this is usually a goal in itself, while in CM, it is most often a side-effect of the related goal of low computational complexity. In either case, however, the visual result often looks like big-pixel graphics. This has caused confusion among many New Aesthetic bloggers who use adjectives such as "retro", "8-bit" or "nostalgic" when referring to this phenomenon, when what they are witnessing is just the way the essence of digital technology tends to manifest visually.<br /><br />There has been <a href="http://www.imperica.com/daily/473-the-new-aesthetic-in-writing">a lot of on-line discussion</a> revolving around New Aesthetic during the past month, and a lot of it seems like pseudo-intellectual, reality-detached mumbo-jumbo to me. 
In order to gain some insight and substance, I would like to recommend that all these bloggers take a serious look at the demoscene and other established forms of computer-centric expression. You may also find out that a lot of this stuff is actually not that new to begin with; it has just been gaining a lot of new momentum recently.viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com3tag:blogger.com,1999:blog-1787947700033244607.post-44779570757835692722012-03-17T12:38:00.004+00:002012-03-17T12:50:50.434+00:00"Fabric theory": talking about cultural and computational diversity with the same wordsIn recent months, I have been pondering a lot about certain similarities between human languages, cultures, programming languages and computing platforms: they are all abstract constructs capable of giving a unique form or flavor to anything that is made with them or stems from them. Different human languages encourage different types of ideas, ways of expression, metaphors and poetry while discouraging others. Different programming languages encourage different programming paradigms, design philosophies and algorithms while discouraging others. The different characteristics of different computing platforms, musical instruments, human cultures, ideologies, religions or subcultural groups all similarly lead to specific "built-in" preferences in expression.<br /><br />I'm sure this sounds quite meta, vague or superficial when explained this way, but I'm convinced that the similarities are far more profound than most people assume. In order to bring these concepts together, I've chosen to use the English word "fabric" to refer to the set of form-giving characteristics of languages, computers or just about anything. I've picked this word partly because of its dual meaning, i.e. you can consider a fabric a separate, underlying, form-giving framework just as well as an actual material from which the different artifacts are made. You may suggest a better word if you find one.<br /><br /><h2>Fabrics</h2>The fabric of a human language stems (primarily) from its grammar and vocabulary. The principle of linguistic relativity, also known as the Sapir-Whorf hypothesis, suggests that language defines a lot about what our ways of thinking end up being like, and there is even a bunch of experimental support for this idea. The stronger, classical version of the hypothesis, stating that languages build hard barriers that actually restrict what kinds of ideas are possible, is very probably false, however. I believe that all human languages are "human-complete", i.e. they are all able to express the same complete range of human thoughts, although the expression may become very cumbersome in some cases. In most Indo-European languages, for example, it is very difficult to talk about people without mentioning their real or assumed genders all the time, and it may be very challenging to communicate mathematical ideas in an Aboriginal language that has a very rudimentary number system.<br /><br />Many programmers seem to believe that the Sapir-Whorf hypothesis also works with programming languages. Edsger Dijkstra, for example, was definitely quite Whorfian when stating that teaching BASIC programming to students made them "mentally mutilated beyond hope of regeneration". 
The fabric of a programming language stems from its abstract structure, not unlike that of a natural language, although a major difference is that the fabrics of programming languages tend to be much "purer" and more clear-cut, as they are typically geared towards specific application areas, computation paradigms and software development philosophies.<br /><br />Beyond programming languages there are computer platforms. In the context of audiovisual computer art, the fabric of a hardware platform stems both from its "general-purpose" computational capabilities and from the characteristics of its special-purpose circuitry, especially the video and sound hardware. The effects of the fabric tend to be clearest on the most restricted platforms, such as 8-bit home computers and video game consoles. The different fabrics ("limitations") of different platforms are something that demoscene artists have traditionally been concerned about. Nowadays, there is even an academic discipline with an expanding series of books, "Platform Studies", that asks how video games and other forms of computer art have been shaped by the fabrics of the platforms they've been made for.<br /><br />The fabric of a human culture stems from a wide memetic mess including things like taboos, traditions, codes of conduct, and, of course, language. In modern societies, a lot stems from bureaucratic, economic and regulatory mechanisms. Behavior-shaping mechanisms are also very prominent in things like video games, user interfaces and interactive websites, where they form a major part of the fabric. The fabric of a musical instrument stems partly from its user interface and partly from its different acoustic ranges and other "limitations". It is indeed possible to extend the "fabric theory" to quite a wide variety of concepts, even though it may get a little bit far-fetched at times.<br /><br /><h2>Noticing one's own box</h2>In many cases, a fabric can become transparent or even invisible. Those who only speak one language can find it difficult to think beyond its fabric. Likewise, those who only know about one culture, one worldview, one programming language, one technique for a specific task or one just-about-anything need some considerable effort to even notice the fabric, let alone expand their horizons beyond it. History shows that this kind of mental poverty leads even some very capable minds into quite disastrous thoughts, ranging from general narrow-mindedness and a false sense of objectivity to straightforward religious dogmatism and racism.<br /><br />In the world of computing, difficult-to-notice fabrics come out as standards, de-facto standards and "best practices". Jaron Lanier warns about "lock-ins", restrictive standards that are difficult to outthink. MIDI, for example, enforces a specific, finite formalization of musical notes, effectively narrowing the expressive range of a lot of music. A major concern raised by "You are not a gadget" is that the technological lock-ins of on-line communication (e.g. those prominent in Facebook) may end up trivializing humanity in a way similar to how MIDI trivializes music.<br /><br />Of course, there's nothing wrong with standards per se. Standards, including constructs such as lingua francas and social norms, can be very helpful or even vital to humanity. However, when a standard becomes an unquestionable dogma, there's a good chance for something evil to happen. 
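<br /><br />To make the MIDI example tangible, the whole lock-in can be compressed into one formula: MIDI knows pitch primarily as 128 integer note numbers, so any tuning that falls between the slots gets rounded into the grid. A small sketch (the 435 Hz example frequency is an arbitrary pick of mine):<br /><br /><pre>
/* The MIDI lock-in in one formula: the pitch continuum is forced
 * onto 128 integer note slots (A4 = 440 Hz = note 69). */
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    double freq = 435.0; /* e.g. a non-standard concert pitch */
    double pitch = 69.0 + 12.0 * log2(freq / 440.0); /* real-valued */
    int note = (int)lround(pitch);          /* forced into the grid */
    printf("%.1f Hz -&gt; pitch %.3f -&gt; MIDI note %d\n", freq, pitch, note);
    return 0;
}
</pre>Whatever the basic note-number formalization cannot represent tends to disappear from the music -- exactly the kind of trivialization Lanier worries about.<br /><br />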
To avoid this kind of dogmatism, we always need individuals who challenge and deconstruct the standards, keeping people aware of the alternatives. Before we can think outside the box, we must first realize that we are in a box in the first place.<br /><br /><h2>Constraints</h2>In order to make a fabric more visible and tangible, it is often useful to introduce artificial constraints to "tighten it up". In a human language, for example, one can adopt a form of constrained writing, such as a type of poetry, to bring up some otherwise-invisible aspects of the linguistic fabric. In normal, everyday prose, words are little more than arbitrary sequences of symbols, but when working under tight constraints, their elementary structures and mutual relationships become important. This is very similar to what happens when programming in a constrained environment: previously irrelevant aspects, such as machine code instruction lengths, suddenly become relevant.<br /><br />Constrained programming has long traditions in a multitude of hacker subcultures, including the demoscene, where it has obtained a very prominent role. Perhaps the most popular type of constraint in all hacker subcultures in general is the program length constraint, which sets an upper limit on the size of either the source code or the executable. It seems to be a general rule that working with ever smaller program sizes brings the programmer ever closer to the underlying fabric: in larger programs, it is possible to abstract away a lot of it, but under tight constraints, the programmer-artist must learn to avoid abstraction and embrace the fabric the way it is. In the smallest size classes, even such details as the ordering of sound and video registers in the I/O space become form-giving, as seen in the sub-32-byte C-64 demos by 4mat of Ate Bit, for example.<br /><br /><h2>Mind-benders</h2>Sometimes a language or a platform feels tight enough even without any additional constraints. A lot of this feeling is subjective, caused by the inability to express oneself in the previously learned way. When learning a new human language that is completely different from one's mother tongue, one may feel restricted when there's no counterpart for a specific word or grammatical construct. When encountering such a "boundary", the learner needs to rethink the idea in a way that goes around it. This often requires some mind-bending. The same phenomenon can be encountered when learning different programming languages, e.g. learning a declarative language after only knowing imperative ones.<br /><br />Among both human and programming languages, there are experimental languages that have been deliberately constructed as "mind-benders", having the kind of features and limitations that force the user to rethink a lot of things when trying to express an idea. Among constructed human languages, a good example is Sonja Elen Kisa's minimalistic "Toki Pona", which builds everything from just over 120 basic words. Among programming languages, the mind-bending experiments are called "esoteric programming languages", with the likes of Brainfuck and Befunge often mentioned as examples.<br /><br />In computer platforms, there's also a lot of variance in "objective tightness". Large amounts of general-purpose computing resources make it possible to accurately emulate smaller computers; that is, a looser fabric may sometimes completely engulf a tighter one. 
Because of this, the experience of learning a "bigger" platform after a "smaller" one is not usually very mind-bending compared to the opposite direction.<br /><br /><h2>Nothing is neutral</h2>Now, would it be possible to create a language or a computer that would be totally neutral, objective and universal? I don't think so. Trying to create something that lacks fabric is like trying to sculpt thin air, and fabrics are always built from arbitrarities. Whenever something feels neutral, the feeling is usually deceptive.<br /><br />Popular fabrics are often perceived as neutral, although they are just as arbitrary and biased as the other ones. A tribe that doesn't have very much contact with other tribes typically regards its own language and culture as "the right one" and everyone else as strange and deviant. When several tribes come together, they may choose one language as their supposedly neutral lingua franca, and a sufficiently advanced group of tribes may even construct a simplified, bland mix-up of all of its member languages, an "Esperanto". But even in this case, the language is by no means universal; the fabric that is common between the source languages is still very much present. Even if the language is based on logical principles, i.e. a "Lojban", the chosen set of principles is arbitrary, not to mention all the choices made when implementing those principles.<br /><br />Powerful computers can usually emulate many less powerful ones, but this does not make them any less arbitrary. On the contrary, modern IBM PC compatibles are full of arbitrary design choices stacked on one another, forming a complex spaghetti of historical trials and errors that would make no sense at all if designed from scratch. The modern IBM PC platform therefore has a very prominent fabric, and the main reason why it feels so neutral is its popularity. Another reason is that the other platforms share a lot of the same design choices, making today's computer platforms much less diverse than they were a couple of decades ago. For example, how many modern platforms can you name that use something other than RGB as their primary colorspace, or something other than a power of two as their word length?<br /><br />Diversity is diminishing in many other areas as well. In countries with an astounding diversity, like Papua New Guinea, many groups are abandoning their unique native languages and cultures in favor of bigger and more prestigious ones. I see some of that even in my own country, where many young and intelligent people take pride in "thinking in English", erroneously assuming that second-language English is somehow more expressive for them than their mother tongue. In a dystopian vision, the diversity of millennia-old languages and cultures is getting replaced by a global English-language monoculture where all the diversity is subcultural at best.<br /><br /><h2>Conclusion</h2>It indeed seems to be possible to talk about human languages, cultures, programming languages, computing platforms and many other things with similar concepts. These concepts also seem so useful at times that I'm probably going to use them in subsequent articles as well. I also hope that this article, despite its length, gives some food for thought to someone. 
<br /><br />Now, go out into the world and embrace the mind-bending diversity!viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com9tag:blogger.com,1999:blog-1787947700033244607.post-91953264979745170122011-12-30T00:20:00.010+00:002011-12-31T19:47:40.614+00:00IBNIZ - a hardcore audiovisual virtual machine and an esoteric programming language<p>Some days ago, I finished the first public version of my audiovisual virtual machine, <a href="http://pelulamu.net/ibniz/">IBNIZ</a>. I also showed it off on YouTube with the following video:</p><iframe width="420" height="315" src="http://www.youtube.com/embed/aKMrBaXJvMs" frameborder="0" allowfullscreen></iframe><p> As demonstrated by the video, IBNIZ (Ideally Bare Numeric Impression giZmo) is a virtual machine and a programming language that generates video and audio from very short strings of code. Technically, it is a two-stack machine somewhat similar to Forth, but with the major exception that the stack is cyclical and also used as an output buffer. Also, as every IBNIZ program is implicitly inside a loop that pushes a set of loop variables on the stack on every cycle, even an empty program outputs something (i.e. a changing gradient as video and a constant sawtooth wave as audio).</p><br /><h2>How does it work?</h2><p>To illustrate how IBNIZ works, here's how the program <b>^xp</b> is executed, step by step:</p><img src="http://1.bp.blogspot.com/-cLecRdBr_IU/Tv0E-OCFgSI/AAAAAAAAAJk/NwnD-EgkKyM/s1600/ibnizexample.gif" /><p>So, in short: on every loop cycle, the VM pushes the values T, Y and X. The operation <b>^</b> XORs the values Y and X, and <b>xp</b> pops off the remaining value (T). Thus, the stack gets filled with color values where the Y coordinate is XORed by the X coordinate, resulting in the infamous "XOR texture".</p><p>The representation in the figure was somewhat simplified, however. In reality, IBNIZ uses 32-bit fixed-point arithmetic where the values for Y and X fall between -1 and +1. IBNIZ also runs the program in two separate contexts with separate stacks and internal registers: the video context and the audio context. To illustrate this, here's how an empty program is executed in the video context:</p><img src="http://1.bp.blogspot.com/-JmRVA9ktdOQ/Tv0FQRUk6dI/AAAAAAAAAJw/D2sq5S_tj6I/s1600/emptyprog-video.png" /><p>The colorspace is YUV, with the integer part of the pixel value interpreted as U and V (roughly corresponding to hue) and the fractional part interpreted as Y (brightness). The empty program runs in the so-called T-mode where all the loop variables -- T, Y and X -- are entered in the same word (16 bits of T in the integer part and 8+8 bits of Y and X in the fractional). In the audio context, the same program executes as follows:</p><img src="http://2.bp.blogspot.com/-mqL14IaUsGc/Tv0FWWW14xI/AAAAAAAAAJ8/Mz9e1Re8jKQ/s1600/emptyprog-audio.png" /><p>Just like in the T-mode of the video context, the VM pushes one word per loop cycle. However, in this case, there is no Y or X; the whole word represents T. Also, when interpreting the stack contents as audio, the integer part is ignored altogether and the fractional part is taken as an unsigned 16-bit PCM value.</p><p>In the audio context, T increments in steps of 0000.0040, while the step is only 0000.0001 in the video context. This is because we need to calculate 256x256 pixel values per frame (nearly 4 million pixels if there are 60 frames per second) but can suffice with considerably fewer PCM samples. In the current implementation, we calculate 61440 audio samples per second (60*65536/64), which is then downscaled to 44100 Hz.</p>
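<p>To tie the <b>^xp</b> example above to something runnable, here's a stand-alone C sketch of my own (an illustration, not IBNIZ source code) that renders the same XOR texture, leaving out IBNIZ's fixed-point value format and YUV colorspace:</p><pre>
#include &lt;stdio.h&gt;

/* The "XOR texture" of the program ^xp: each pixel's brightness is
   its Y coordinate XORed with its X coordinate. Writes a 256x256
   binary PGM image to stdout; view it with any image viewer that
   understands PGM. */
int main(void)
{
    printf("P5\n256 256\n255\n");
    for (int y = 0; y &lt; 256; y++)
        for (int x = 0; x &lt; 256; x++)
            putchar(y ^ x);
    return 0;
}
</pre>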
<p>The scheduling and main-looping logic is the only somewhat complex thing in IBNIZ. All the rest is very elementary, something that can be found as instructions in the x86 architecture or as words in the core Forth vocabulary. Basic arithmetic and stack-shuffling. Memory load and store. An if/then/else structure, two kinds of loop structures and subroutine definition/calling. Also an instruction for retrieving user input from the keyboard or pointing device. Everything needs to be built from these basic building blocks. And yes, it is Turing complete, and no, you are not restricted to the rendering order provided by the implicit main loop.</p><p>The full instruction set is described in the documentation. Feel free to check it out and experiment with IBNIZ on your own!</p><br /><h2>So, what's the point?</h2><p>The IBNIZ project started in 2007 with the codename "EDAM" (Extreme-Density Art Machine). My goal was to participate in the esoteric programming language competition at the same year's Alternative Party, but I didn't finish the VM in time. The project therefore fell into the background. Every now and then, I returned to the project for a short while, maybe revising the instruction set a little bit or experimenting with different colorspaces and loop variable formats. There was no great driving force to inspire me to finish the VM until mid-2011, after <a href="http://countercomplex.blogspot.com/2011/06/16-byte-frontier-extreme-results-from.html">some quite successful experiments with very short audiovisual programs</a>. Once some of my <a href="http://countercomplex.blogspot.com/2011/10/algorithmic-symphonies-from-one-line-of.html">musical experiments</a> spawned a trend that eventually even got a name of its own, "<a href="http://canonical.org/~kragen/bytebeat/">bytebeat</a>", I really had to push myself to finally finish IBNIZ.</p><p>The main goal of IBNIZ, from the very beginning, was to provide a new platform for the demoscene. Something without the usual drawbacks of the real-world platforms when writing extremely small demos. No headers, no program size overhead in video/audio access, extremely high code density, enough processing power and preferably a machine language that is fun to program with. Something that would have the potential to displace MS-DOS as the primary platform for sub-256-byte demoscene productions.</p><p>There are also other considerations. One of them is educational: modern computing platforms tend to be mind-bogglingly complex and highly abstracted, lacking the immediacy and tangibility of the old-school home computers. I am somewhat concerned that young people whose mindset would have made them great programmers in the eighties now find that mindset totally incompatible with today's mainstream technology and therefore get completely driven away from programming. IBNIZ will hopefully be able to serve as an "oldschool-style platform" in a way that is rewarding enough for today's beginning programming hobbyists. Also, as the demoscene needs all the new blood it can get, I envision that IBNIZ could serve as <a href="http://pelulamu.net/countercomplex/the_future_of_demo_art/">a gateway to the demoscene</a>.</p><p>I also see that IBNIZ has potential for glitch art and livecoding. By taking a nondeterministic approach to experimentation with IBNIZ, the user may encounter a lot of interesting visual and aural glitch patterns. 
As for livecoding, I suspect that the compactness of the code as well as the immediate visibility of the changes could make an IBNIZ programming performance quite enjoyable to watch. The live gigs of the chip music scene, for example, might also find use for IBNIZ.</p><br /><h2>About some design choices and future plans</h2><p>IBNIZ was originally designed with an esoteric programming language competition in mind, and indeed, the language has already been likened to the classic esoteric language Brainfuck by several critical commentators. I'm not that sure about the similarity with Brainfuck, but it does have strong conceptual similarities with FALSE, the esoteric programming language that inspired Brainfuck. Both IBNIZ and FALSE are based on Forth and use one-character-long instructions, and the perceived awkwardness of both comes from unusual, punctuation-based syntax rather than deliberate attempts at making the language difficult.</p><p>When contrasting esotericity with usefulness, it should be noted that many useful, mature and well-liked languages, such as C and Perl, also tend to look like total "line noise" to the uninitiated. Forth, on the other hand, tends to look like a mess of random unrelated strings to people unfamiliar with the RPN syntax. I therefore don't see how the esotericity of IBNIZ would hinder its usefulness any more than the usefulness of C, Perl or Forth is hindered by their syntaxes. A more relevant concern would be, for example, the lack of label and variable names in IBNIZ.</p><p>There are some design choices that often get questioned, so I'll perhaps explain the rationale for them:</p><ul><li>The colors: the color format has been chosen so that more sensible and neutral colors are more likely than "coder colors". YUV has been chosen over HSV because there is relatively universal hardware support for YUV buffers (and I also think it is easier to get richer gradients with YUV than with HSV).</li><li>Trigonometric functions: I pondered for a long while whether to include SIN and ATAN2, and I finally decided to do so. A lot of demoscene tricks, including all kinds of rotating and bouncing things as well as more advanced stuff such as raycasting, depend on the availability of trigonometry. Both of these operations can be found in the FPU instruction set of the x86 and are relatively fundamental mathematical stuff, so we're not going into library bloat here.</li><li>Floating point vs fixed point: I considered floating point for a long while, as it would have simplified some advanced tricks. However, IBNIZ code is likely to use a lot of bitwise operations, modular bitwise arithmetic and indefinitely running counters, which may end up being problematic with floating point. Fixed point makes the arithmetic more concrete and also improves the implementability of IBNIZ on low-end platforms that lack an FPU. (A small sketch of this kind of arithmetic follows after this list.)</li><li>Different coordinate formats: TYX-video uses signed coordinates because most effects look better when the origin is at the center of the screen. The 'U' opcode (userinput), on the other hand, gives the mouse coordinates in unsigned format to ease pixel-plotting (you can directly use the mouse coordinates as part of the framebuffer memory address). T-video uses unsigned coordinates for making the values linear and also for easier coupling with the unsigned coordinates provided by 'U'.</li></ul>
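<p>As promised above, here's a minimal C sketch of what 16.16 fixed-point arithmetic boils down to. This is a general illustration of the technique under the 32-bit, 16+16-bit split described earlier, not actual IBNIZ source code:</p><pre>
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

/* 16.16 fixed point: a 32-bit word with 16 integer and 16 fractional
   bits. Addition and subtraction are plain integer operations;
   multiplication needs a 64-bit intermediate and a shift back down. */
typedef int32_t fix;
#define FIX_ONE 0x10000

static fix fixmul(fix a, fix b)
{
    return (fix)(((int64_t)a * b) &gt;&gt; 16);
}

int main(void)
{
    fix half = FIX_ONE / 2;
    /* 0.5 * 0.5 = 0.25, i.e. 0x00004000 */
    printf("%08x\n", (unsigned)fixmul(half, half));
    return 0;
}
</pre>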
<p>Right now, all the existing implementations of IBNIZ are rather slow. The C implementation is completely interpretive, without any optimization phase prior to execution. However, a faster implementation with some clever static analysis is quite high on the to-do list, and I expect a considerable performance boost once native-code JIT compilers come into use. After all, if we are ever planning to displace MS-DOS as a sizecoding platform, we will need to get IBNIZ to run at least faster than DOSBOX.</p><p>The use of externally-provided coordinate and time values will make it possible to scale a considerable portion of IBNIZ programs to a vast range of different resolutions, from character-cell framebuffers on 8-bit platforms to today's higher-than-high-definition standards. I suspect that a lot of IBNIZ programs can be automatically compiled into shader code or fast C-64 machine language (yes, I've made some preliminary calculations for "Ibniz 64" as well). The currently implemented resolution, 256x256, however, will remain as the default resolution that will ensure compatibility. This resolution, by the way, has been chosen because it is in the same class as 320x200, the most popular resolution of tiny MS-DOS demos.</p><p>At some point, it will also become necessary to introduce a compact binary representation of IBNIZ code -- with variable bit lengths primarily based on the frequency of each instruction. The byte-per-character representation already has a higher code density than 16-bit x86 machine language, and I expect that a bit-length-optimized representation will really break some boundaries in the low size classes.</p><p>An important milestone will be a fast and complete version that runs in a web browser. I expect this to make IBNIZ much more available and accessible than it is now, and I'm also planning to host an IBNIZ programming contest once a sufficient web implementation is on-line. There is already a <a href="http://ibniz.asiekierka.pl/ibniz.html">Javascript implementation</a>, but it is rather slow and doesn't support sound, so we will still have to wait for a while. But stay tuned!</p>viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com79tag:blogger.com,1999:blog-1787947700033244607.post-10718991086915115402011-11-15T21:57:00.007+00:002011-11-15T22:39:21.408+00:00Materiality and the demoscene: when does a platform feel real?<p>I've just finished reading Daniel Botz's 428-page PhD dissertation "<a href="http://www.danielbotz.de/">Kunst, Code und Maschine: Die Ästhetik der Computer-Demoszene</a>".</p><p>The book is easily the best literary coverage of the demoscene I've seen so far. It is basically a history of demos as an artform, with a particular emphasis on the esthetic aspects of demos, going very deeply into different styles and techniques and their development, often in relation to the features of the three "main" demoscene platforms (C-64, Amiga and PC).</p><p>What impressed me the most in the book and gave me the most food for thought, however, was the theoretical insight. Botz uses the late Friedrich Kittler's conception of media materiality as a theoretical device to explain how the demoscene relates to the hardware platforms it uses, often contrasting the relationship to that of mainstream media art. 
In short: the demoscene cares about the materiality of the platforms, while the mainstream art world ignores it.</p><p>To elaborate: mainstream computer artists regard computers as tools, universal "anything machines" that can translate pure, immaterial, technology-independent ideas into something that can be seen, heard or otherwise experienced. Thus, ideas come before technology. Demosceners, however, have an opposite point of view; for them, technology comes before ideas. A computer platform is seen as a material that can be brought into different states, in a way comparable to how a sculptor brings blocks of stone into different forms. The possibilities of a material can be explored with direct, uncompromising interaction such as low-level programming. The platform is not neutral; its characteristics are essential to what demos written for it end up being like. While a piece of traditional computer art can often be safely removed from its specific technological context, a demo is no longer a demo if the platform is neglected.</p><p>The focus on materiality also results in a somewhat unusual relationship with technology. For most people, computer platforms are just evolutionary stages on a timeline of innovation and obsolescence. A device serves for a couple of years before getting abandoned in favor of a new model that is essentially the same with higher specs. The characteristics of a digital device boil down to numerical statistics in the spirit of "bigger is better". The demoscene, however, sees its platforms as something more multi-faceted. An old computer or gaming console may be interesting as an artistic material just because of its unique combination of features and limitations. It is fine to have historical, personal or even political reasons for choosing a specific platform, but they're not necessary; the features of the system alone are enough to spark someone's creative enthusiasm. As so many people misunderstand the relationship between the demoscene and old hardware as a form of "retrocomputing", it is very delightful to see such an accurate insight into it.</p><h2>But is it really that simple?</h2><p>I'm not entirely familiar with the semantic extent of "materiality" in media studies, but it is apparent that it primarily refers to physicality and concreteness. On many occasions, Botz contrasts materiality against virtuality, which, I think, is an idea that stems from Gilles Deleuze. This dichotomy is simple and appealing, but I disagree with Botz on how central it is to what the demoscene is doing. After all, there are, for example, quite a few 8-bit-oriented demoscene artists who fully approve of virtualization. Artists who don't care whether their works are shown with emulators or real hardware at parties, as long as the logical functionality is correct. Some even produce art for the C-64 without having ever owned a material C-64. Therefore, virtualization is definitely not something that is universally frowned upon on the demoscene. It is apparently also possible to develop a low-level, concrete material relationship with an emulated machine, a kind of "material" that is totally virtual to begin with!</p><p>Computer programming is always somewhat virtual, even in its most down-to-the-metal incarnations. Bits aren't physical objects; concentrations of electrons only get the role of bits from how they interact with the transistors that form the logical circuits. 
A low-level programmer who strives for total, optimal control of a processor doesn't need to be familiar with these material interactions; just knowing the virtual level of bits, registers, opcodes and pipelines is enough. The number of abstraction layers between the actual bit-twiddling and the layer visible to the programmer doesn't change what programming a processor feels like. A software emulator or an FPGA reimplementation of the C-64 can deliver the same "material feeling" to the programmer as the original, NMOS-based C-64. Also, if the virtualization is perfect enough to model the visible and audible artifacts that stem from the non-binary aspects of the original microchips, even a highly experienced enthusiast can be fooled.</p><p>I therefore think it is more appropriate to consider the "feel of materiality" that demosceners experience as stemming from the abstract characteristics of the platform rather than from its physicality. Programming an Atari VCS emulator running on an X86 PC on top of an operating system may very well feel more concrete than programming the same PC directly with X86 assembly language. When working with a VCS, even a virtualized one, a programmer needs to be aware of the bit-level machine state at all times. There's no display memory in the VCS; the only way to draw something on the screen is by telling the processor to put specific values in specific video chip registers at specific clock cycles. The PC, however, does have a display memory that holds the pixel values of the on-screen picture, as well as a video chip that automatically refreshes its contents to the screen. A PC programmer can therefore use very generic algorithms to render graphics in the display memory without caring about the underlying hardware, while on the VCS everything needs to be thought out from the specific point of view of the video chip and the CPU.</p><p>It seems that the "feel of materiality" has a great deal to do with complexity -- of both the platform and the manipulated data. A high-resolution picture, taking up megabytes of display memory, looks nearly identical on a computer screen regardless of whether it is internally represented in the RGB or YUV colorspace. However, when we get a pixel artist to create versions of the same picture for various formats that use less than ten kilobytes of display memory, such as PC textmode or C-64 multicolor, the specific features and constraints of each format shine through very clearly. High levels of complexity allow for generic, platform-independent and general-purpose techniques, whereas low levels of complexity require the artist to form a "material relationship" with the format.</p><p>Low complexity and the "feel of materiality" are also closely related to the "feel of total control", which I regard as an important state that demosceners tend to reach for. The lower the complexity of a platform, the easier it is to reach a total understanding of its functionality. Quite often, coders working on complex platforms choose to deliberately lower the perceived complexity by concentrating on a reduced, "essential" subset of the programming interface and ignoring the rest. Someone who codes for a modern PC, for example, may want to ignore the polygonal framework of the 3D API altogether and exclusively concentrate on shader code. Those who write softsynths, even for tiny size classes, tend to ignore high-level synthesis frameworks that may be available on the OS and just use a low-level PCM-soundbuffer API. 
Subsets that provide nice collections of powerful "Lego blocks" are the way to go. Even though bloated system libraries may very well contain useful routines that can be discovered and abused in things like 4-kilobyte demos, most democoders frown upon this idea and may even consider it cheating.</p><p>Emulators, virtual platforms and reduced programming interfaces are ways of creating pockets of lowered complexity within highly complex systems -- pockets that feel very "material" and controllable to a crafty programmer. Even virtual platforms that are highly abstract, idealistic and mathematical may feel "material". The "oneliner music platform", merely defined as a C-like expression syntax that calculates PCM sample values, is a recent example of this. All of its elements are defined on a relatively high level; there is no specification of any kind of low-level machine, virtual or otherwise. Nevertheless, a kind of "material characteristic" or "immanent esthetics" still emerges from this "platform", both in how the short formulas tend to sound and in what kinds of hacks and optimizations are better than others.</p><p>The "oneliner music platform" is perhaps an extreme example, but in general, purely virtual platforms have existed for a while already. Things like Java demos, as well as multi-platform portable demos, have been around since the late 1990s, although they've usually remained quite marginal. For some reason, however, Botz seems to ignore this aspect of the demoscene nearly completely, merely stating that multi-platform demos have started to appear "in recent years" and that the phenomenon may grow bigger in the future. Perhaps this is a deliberate bias chosen to avoid topics that don't fit well within Botz's framework. Or maybe it's just an accident. I don't know.</p><h2>Conclusion</h2><p>To summarize: when Botz talks about the materiality of demoscene platforms, he often refers to phenomena that, in my opinion, could be more fruitfully analyzed with different conceptual devices, especially complexity. Wherever the dichotomy of materiality and immateriality comes up, I see at least three separate conceptual dimensions working under the hood:</p><p>1. <b>Art vs craft</b> (or "idea-first" vs "material-first"). This is the area where Botz's theory works very well: the demoscene is, indeed, more crafty or "material-first" than most other communities of computer art. However, the material (i.e. the demo platform) doesn't need to be material (i.e. physical); the crafty approach works equally well with emulated and purely virtual platforms. The "artsy" approach, leading to conceptual and "avant-garde" demos, has gradually become more and more accepted; however, there's still a lot of crafty attitude in "art demos" as well. I consider chip musicians, circuit-benders and homebrew 8-bit developers about as crafty on average as demosceners, by the way.</p><p>2. <b>Physicality vs virtuality</b>. There's a strong presence of classic hardware enthusiasm on the demoscene, as well as people who build their own hardware, and they definitely are in the right place. However, I don't think the physical hardware aspect is as important in the demoscene as it is, for example, in the chip music, retrogaming and circuit-bending communities. On the demoscene, it is more important to demonstrate the ability to do impressive things in limited environments than to be an owner of specific physical gear or to know how to solder. A C-64 demo can be good even if it is produced with an emulator and a cross-compiler. 
Also, as demo platforms can be very abstract and purely virtual and still be appealing to the subculture, I don't think there's any profound dogma that would drive demosceners towards physicality.</p><p>3. <b>Complexity</b>. The possibility of forming a "material relationship" with an emulated platform shows that the perception of "materiality", "physicality" and "controllability" is more related to the characteristics of the logical platform than to how many abstraction layers there are under the implementation. A low computational complexity, either in the form of platform complexity or program size, seems to correlate with a "feeling of concreteness" as well as the prominence of "emergent platform-specific esthetics". What I see as the core methodology of the demoscene seems to work better at low than at high levels of complexity, and this is why "pockets of lowered complexity" are often preferred by sceners.</p><p>Don't get me wrong: despite all the disagreements and my somewhat Platonist attitude to abstract ideas in general, I still think virtuality and immateriality have been getting too much emphasis in today's world, and we need some kind of a countercultural force that defends the material. Botz also covers possible countercultural aspects of the demoscene, deriving them from the older hacker culture, and I found all of them very relevant. My basic disagreement comes from the fact that Botz's theory doesn't entirely match how I perceive the demoscene to operate, and the subculture as a whole cannot therefore be put under a generalizing label such as "defenders and lovers of the materiality of the computer".</p><p>Anyway, I really enjoyed reading Botz's book and especially appreciated the theoretical insight. I recommend the book to everyone who is interested in the demoscene, its history and esthetic variety, AND who reads German well. I studied the language for about five years at school, but I still found the text quite difficult to decipher in places. I therefore sincerely hope that my problems with the language haven't led me to any critical misunderstandings.</p>viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com2tag:blogger.com,1999:blog-1787947700033244607.post-14109139912328639412011-10-28T16:18:00.015+01:002011-10-28T18:55:50.571+01:00Some deep analysis of one-line music programs.<div>It is now a month since I posted the YouTube video "<a href="http://www.youtube.com/watch?v=GtQdIYUtAHg">Experimental music from very short C programs</a>" and three weeks since I <a href="http://countercomplex.blogspot.com/2011/10/algorithmic-symphonies-from-one-line-of.html">blogged about it</a>. Now that the initial craze seems to be over, it's a good time to look back at what has been done and consider what could be done in the future.</div><div><br /></div><div>The developments since my last post can be summarized by my third video. It still represents the current state of the art quite well and includes a good variety of different types of formulas.</div><div><br /></div><iframe width="420" height="315" src="http://www.youtube.com/embed/tCRPUv8V22o" frameborder="0" allowfullscreen=""></iframe><div><br /></div><div>The videos only show off a portion of all the formulas that could be included. To compensate, I've created <a href="http://pelulamu.net/countercomplex/music_formula_collection.txt">a text file</a> where I've collected all the "worthy" formulas I've encountered so far. 
Most of them can be tested in the on-line <a href="http://wurstcaptures.untergrund.net/music/">JavaScript</a> and <a href="http://entropedia.co.uk/generative_music/">ActionScript</a> test tools. Some of them don't even work directly in C code, as they depend on JS/AS-specific features.</div><div><br /></div><div>As I'm sure that many people still find these formulas rather magical and mysterious, I've decided to give you a detailed technical analysis and explanation of the essential techniques. As I'm completely self-educated in music theory, please pardon my notation and terminology, which may be unorthodox at times. You should also have a grasp of C-like expression syntax and binary arithmetic to understand most of the things I'm going to talk about.</div><div><br /></div><div>I've sorted my formula collection by length. By comparing the shortest and longest formulas, it is apparent that the longest formulas show a much more constructivist approach, including musical data stored in constants as well as entire piece-by-piece-constructed softsynths. The shortest formulas, on the other hand, are very often discovered via non-deterministic testing, from educated guesses to pure trial-and-error. One of my aims with this essay is to bring some understanding and determinism to the short side as well.</div><div><br /></div><div><span class="Apple-style-span" style="font-size:x-large;"> Pitches and scales</span></div><div><br /></div><div>A class of formulas that is quite prominent among the shortest ones is what I call the 't* class'. The formulas of this type multiply the time counter t by some expression, resulting in a sawtooth wave that changes its pitch according to that expression.</div><div><br /></div><div><div><div>A simple example of a t*-class formula would be t*(t&gt;&gt;10), which outputs a rising and falling sound (accompanied by some aliasing artifacts that create their own sounds). Now, if we introduce an AND operator to this formula, we can restrict the set of pitches and thus create melodies. An example that has been independently discovered by several people is the so-called "Forty-Two Melody": t*(42&amp;t&gt;&gt;10) or t*2*(21&amp;t&gt;&gt;11).</div></div><div><br /></div><div>The numbers that indicate pitches are not semitones or anything like that, but multiples of a base frequency (the sampling rate divided by 256, i.e. 31.25 Hz at the default 8 kHz rate). Here is a table that maps the integer pitches 1..31 to cents and Western note names. The pitches on a gray background don't have good counterparts in the traditional Western system, so I've used quarter-tone flat and sharp symbols to give them approximate names.</div><div><br /></div><div><img src="http://3.bp.blogspot.com/-cWU10K29PpY/TqrLNjowRwI/AAAAAAAAAIM/puW4dVzLSvg/s640/integer_pitches.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5668566514764105474" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer;" /></div><div><br /></div><div>By using this table, we can decode the Forty-Two Melody into a human-readable form. 
The melody is 32 steps long and consists of eight unique pitch multipliers (including zero, which gives out silence).</div><div><br /></div><div><img src="http://1.bp.blogspot.com/-xfMFnc5dwVo/TqrLVV-_X6I/AAAAAAAAAIY/ynWueK4Eu6U/s640/42_melody.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5668566648538226594" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer;" /></div><div><br /></div><div>The "Forty-Two Melody" contains some intervals that make it sound a little bit silly, detuned or "Arabic" to Western ears. If we want to avoid this effect, we need to design our formulas so that they only yield pitches that are at familiar intervals from one another. A simple solution is to include a modulo operator to wrap larger numbers into the range where simple integer ratios are more probable. Modifying the Forty-Two Melody into t*((42&amp;t&gt;&gt;10)%14), for example, completely transforms the latter half of the melody into something that sounds a little bit nicer to Western ears. Bitwise AND is also useful for limiting the pitch set to a specific scale; for example, t*(5+((t&gt;&gt;11)&amp;5)) produces pitch multipliers of 5, 6, 9 and 10, which correspond to E3, G3, D4 and E4.</div><div><br /></div><div>Ryg's 44.1 kHz formula presented in the third video contains two different melody generators:</div><div><br /></div><div><div> ((t*("36364689"[t&gt;&gt;13&amp;7]&amp;15))/12&amp;128)</div><div> +(((((t&gt;&gt;12)^(t&gt;&gt;12)-2)%11*t)/4|t&gt;&gt;13)&amp;127)</div></div><div><br /></div><div>The first generator, in the first half of the formula, is based on a string constant that contains a straightforward list of pitches. This list is used for the bass pattern. The other generator, whose core is the subexpression ((t&gt;&gt;12)^(t&gt;&gt;12)-2)%11, is more interesting, as it generates a rather deep self-similar melody structure with just three operators (subtraction, exclusive or, modulo). Rather impressive despite its profound repetitiveness. Here's an analysis of the series it generates:</div><div><br /></div><div><img src="http://4.bp.blogspot.com/-AJNWlXHeKfI/TqrLZSKW3lI/AAAAAAAAAIk/eH2UQ-v-VYM/s640/ryg_melody.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5668566716231638610" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; " /></div><div>It is often a good idea to post-process the waveform output of a plain t* formula. The sawtooth wave tends to produce a lot of aliasing artifacts, particularly at low sampling rates. Attaching a '&amp;128' or '&amp;64' at the end of a t* formula switches the output to a square wave, which usually sounds a little bit cleaner. An example of this would be Niklas Roy's t*(t&gt;&gt;9|t&gt;&gt;13)&amp;16, which sounds a lot noisier without the AND (although most of the noise in this case comes from the unbounded multiplication arithmetic, not from aliasing).</div></div>
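<div><br /></div><div>If you want to audition these t*-class formulas outside the on-line tools, a tiny C harness is enough. Here's one for the Forty-Two Melody (a sketch of my own, not from the collection; it assumes a Unix-style environment where the raw byte stream can be piped to a player such as ALSA's aplay):</div><div><br /></div><pre>
#include &lt;stdio.h&gt;

/* The Forty-Two Melody, t*(42&amp;t&gt;&gt;10), as a raw unsigned 8-bit
   sawtooth stream on stdout (putchar truncates the value modulo 256).
   At the default 8000 Hz rate, the pitch multipliers are multiples
   of 8000/256 = 31.25 Hz. Listen with:
   ./a.out | aplay -r 8000 -f U8 -c 1 */
int main(void)
{
    for (unsigned t = 0;; t++)
        putchar(t * (42 &amp; (t &gt;&gt; 10)));
}
</pre>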
<div><br /></div><div><div><span class="Apple-style-span" style="font-size:x-large;"> Bitwise waveforms and harmonies</span></div><div><br /></div><div>Another class of formulas that is very prominent among the short ones is the bitwise formula. At its purest, such a formula only uses bitwise operations (shifts, negation, AND, OR, XOR) combined with constants and t. A simple example is t&amp;t&gt;&gt;8 -- the "Sierpinski Harmony". Sierpinski triangles appear very often in plotted visualizations of bitwise waveforms, and t&amp;t&gt;&gt;8 represents the simplest type of formula that renders into a nice Sierpinski triangle.</div><div><br /></div><div>Bitwise formulas often sound surprisingly multitonal for their length. This is based on the fact that an 8-bit sawtooth wave can be thought of as consisting of eight square waves, each an octave apart from its neighbor. Usually, these components fuse together in the human brain, forming the harmonics of a single timbre, but if we turn them on and off a couple of times per second or slower, the brain might perceive them as separate tones. For example, t&amp;48 sounds quite monotonal, but in t&amp;48&amp;t&gt;&gt;8, the exact same waveform sounds bitonal because it abruptly extends the harmonic content of the previous waveform.</div><div><br /></div><div>The loudest of the eight square-wave components of an 8-bit wave is, naturally, the one represented by the most significant bit (&amp;128). In the sawtooth wave, it is also the longest in wavelength. The second highest bit (&amp;64) represents a square wave that has half the wavelength and amplitude, the third highest halves the parameters once more, and so on. By using this principle, we can analyze the musical structure of the Sierpinski Harmony:</div><div><br /></div><div><img src="http://3.bp.blogspot.com/-94qgZdfQf5I/TqrOLiispAI/AAAAAAAAAI8/whfqxTJb6PM/s640/sierpinski_harmony.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5668569778645410818" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer;" /></div><div><br /></div><div>The introduction of ever lower square-wave components can be easily heard. One can also hear quite well that every newly introduced component is considerably lower in pitch than the previous one. However, if we include a prime multiplier in the Sierpinski Harmony, we will encounter an anomaly. In (t*3)&amp;t&gt;&gt;8, the loudest tone actually goes higher at a specific point (and the interval isn't an octave either).</div><div><br /></div><div>This phenomenon can be explained by aliasing artifacts and how they are processed by the brain. The main wavelength in t*3 is not constant but alternates between two values, 42 and 43, averaging to 42.67 (256/3). The human mind interprets this kind of sound as a waveform of the average length (42.67 samples) accompanied by an extra sound that represents the "error" (or the difference from the ideal wave). In the t*3 example, this extra sound has a period of 256 samples and sounds like a buzzer when listened to separately.</div><div><br /></div><div>The smaller the wavelengths we are dealing with, the more prominent these aliasing artifacts become, eventually dominating over their parent waveforms. By listening to (t*3)&amp;128, (t*3)&amp;64 and (t*3)&amp;32, we notice an interval of an octave between them. However, when we step over from (t*3)&amp;32 to (t*3)&amp;16, the interval is definitely not an octave. This is the threshold where the artifact wave becomes dominant. This is why t&amp;t&gt;&gt;8, (t*3)&amp;t&gt;&gt;8 and (t*5)&amp;t&gt;&gt;8 sound so different. It is also the reason why high-pitched melodies may sound very detuned.</div>
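<div><br /></div><div>A quick way to verify the square-wave decomposition by ear is to play the bit planes of the sawtooth one at a time (again a sketch of my own, not from the collection; the same aplay setup as before is assumed):</div><div><br /></div><pre>
#include &lt;stdio.h&gt;

/* Plays the eight square-wave components of the 8-bit sawtooth in
   isolation, two seconds each, from t&amp;128 downwards. Each component
   should sound an octave higher and half as loud as the previous
   one. Listen with: ./a.out | aplay -r 8000 -f U8 -c 1 */
int main(void)
{
    for (int bit = 7; bit &gt;= 0; bit--)
        for (unsigned t = 0; t &lt; 16000; t++)  /* 2 s at 8000 Hz */
            putchar(t &amp; (1u &lt;&lt; bit));
    return 0;
}
</pre>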
<div><br /></div><div>Variants of the Sierpinski harmony can be combined to produce melodies. Examples of this approach include:</div><div><br /></div><div> t*5&amp;(t&gt;&gt;7)|t*3&amp;(t*4&gt;&gt;10) (from miiro)</div><div><br /></div><div> (t*5&amp;t&gt;&gt;7)|(t*3&amp;t&gt;&gt;10) (from viznut)</div><div><br /></div><div> t*9&amp;t&gt;&gt;4|t*5&amp;t&gt;&gt;7|t*3&amp;t/1024 (from stephth)</div></div><div><br /></div><div><div>Different counters are the driving force of bitwise formulas. At their simplest, counters are just bitshifted versions of the main counter (t). These are implicitly synchronized with each other and work on different temporal levels of the musical piece. However, it has also been fruitful to experiment with counters that don't have a simple common denominator, and even with ones whose speeds are nearly identical. For example, t&amp;t%255 brings a 256-cycle counter and a 255-cycle counter together with an AND operation, resulting in an ambient drone reminiscent of something achievable with pulse-width modulation. This approach seems to be more useful for loosely structured soundscapes than for clear-cut rhythms or melodies.</div><div><br /></div><div>Some oneliner songs attach a bitwise operation to a melody generator for transposing the output by whole octaves. A simple example is Rrrola's t*(0xCA98&gt;&gt;(t&gt;&gt;9&amp;14)&amp;15)|t&gt;&gt;8, which would just loop a simple series of notes without the trailing '|t&gt;&gt;8'. This part gradually fixes the upper bits of the output to 1s, effectively raising the pitch of the melody and fading its volume out. The formulas from Ryg and Kb in my third video also use this technique. The most advanced use of it I've seen so far, however, is in Mu6k's song (the last one in the 3rd video), which synthesizes its lead melody (along with some accompanying beeps) by taking the bassline and selectively turning its bits on and off. This takes place within the subexpression (t&gt;&gt;8^t&gt;&gt;10|t&gt;&gt;14|x)&amp;63 where the waveform of the bass is input as x.</div><div><br /></div><div><span class="Apple-style-span" style="font-size:x-large;"> Modular wrap-arounds and other synthesis techniques</span></div><div><br /></div><div>All the examples presented so far only use counters and bitwise operations to synthesize the actual waveforms. It's therefore necessary to talk a little bit about other operations and their potential as well.</div><div><br /></div><div>By accompanying a bitwise formula with a simple addition or subtraction, it is possible to create modular wrap-around artifacts that produce totally different sounds. Tiny, nearly inaudible sounds may become very dominant. Harmonious sounds often become noisy and percussive. By extending the short Sierpinski harmony t&amp;t&gt;&gt;4 into (t&amp;t&gt;&gt;4)-5, something that sounds like an "8-bit" drum appears on top of it. The same principle can also be applied to more complex Sierpinski harmony derivatives as well as other bitwise formulas:</div><div><br /></div><div> (t*9&amp;t&gt;&gt;4|t*5&amp;t&gt;&gt;7|t*3&amp;t/1024)-1</div><div><br /></div><div>I'm not going into a deep analysis of how modular wrap-arounds affect the harmonic structure of a sound, as I guess someone has already done the math before. However, modular addition can be used for something that sounds like oscillator hard-sync in analog synthesizers, although its technical basis is different.</div>
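<div><br /></div><div>To hear the wrap-around drum concretely, here is the (t&amp;t&gt;&gt;4)-5 example as a complete program (a hedged harness of my own, same playback assumptions as in the earlier sketches):</div><div><br /></div><pre>
#include &lt;stdio.h&gt;

/* (t&amp;t&gt;&gt;4)-5: the subtraction makes small values wrap around to the
   top of the unsigned 8-bit range, which is heard as a percussive
   thump on top of the Sierpinski harmony.
   Listen with: ./a.out | aplay -r 8000 -f U8 -c 1 */
int main(void)
{
    for (unsigned t = 0;; t++)
        putchar((t &amp; (t &gt;&gt; 4)) - 5);
}
</pre>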
<div><br /></div><div>Perhaps the most obvious use for summing in a softsynth, however, is the one where modular wrap-around is not very useful: mixing several sound sources together. A straightforward recipe for this is (A&amp;127)+(B&amp;127), which may be a little long-winded when aiming for minimalism. Often, just a simple XOR operation is enough to replace it, although it usually produces artifacts that may sound good or bad depending on the case. XOR can also be used for effects that sound like hard-sync.</div><div><br /></div><div>Of course, modular wrap-around effects are also achievable with multiplication and division, and, on the other hand, even without addition or subtraction. I'll illustrate this with just a couple of interesting-sounding examples:</div><div><br /></div><div> t&gt;&gt;4|t&amp;((t&gt;&gt;5)/(t&gt;&gt;7-(t&gt;&gt;15)&amp;-t&gt;&gt;7-(t&gt;&gt;15))) (from droid, js/as only)</div><div><br /></div><div> (int)(t/1e7*t*t+t)%127|t&gt;&gt;4|t&gt;&gt;5|t%127+(t&gt;&gt;16)|t (from bst)</div><div><br /></div><div> t&gt;&gt;6&amp;1?t&gt;&gt;5:-t&gt;&gt;4 (from droid)</div><div><br /></div></div><div><div>There's a lot in these and other synthesis algorithms that could be discussed, but as they already belong to a zone where traditional sound synthesis lore applies, I choose to go on.</div><div><br /></div><div><span class="Apple-style-span" style="font-size:x-large;"> Deterministic composition</span></div><div><br /></div><div>When looking at the longest formulas in the collection, it is apparent that there's a lot of intelligent design behind most of them: long constants and tables, sometimes several of them, containing scales, melodies, basslines and drum patterns. The longest formula in the collection is "Long Line Theory", a cover of the soundtrack of the 64K demo "Chaos Theory" by Conspiracy. The original version by mu6k was over 600 characters long, from which the people on Pouet.net optimized it down to 300 characters, with some arguable quality tradeoffs.</div><div><br /></div><div>It is, of course, possible to synthesize just about anything with a formula, especially if there's no upper limit for the length. Synthesis and sequencing logic can be built section by section, using rather generic algorithms and proven engineering techniques. There's no magic in it. But on the other hand, there's no magic in pure non-determinism either: it is very difficult to find anything outstanding with totally random experimentation after the initial discovery phase is over.</div><div><br /></div><div>Many of the more sophisticated formulas seem to have a good balance between random experimentation and deterministic composition. It is often apparent in their structure that some elements are results of random discoveries while others have been built with an engineer's mindset. 
Let's look at Mu6k's song (presented at the end of the 3rd video, 32 kHz):</div><div><br /></div><div> (((int)(3e3/(y=t&amp;16383))&amp;1)*35) +</div><div> (x=t*("6689"[t&gt;&gt;16&amp;3]&amp;15)/24&amp;127)*y/4e4 +</div><div> ((t&gt;&gt;8^t&gt;&gt;10|t&gt;&gt;14|x)&amp;63)</div><div><br /></div><div>I've split the formula onto three lines according to the three instruments therein: drum, bass and lead.</div><div><br /></div><div>My assumption is that the song was built around the lead formula, which was discovered first, probably in the form of t&gt;&gt;6^t&gt;&gt;8|t&gt;&gt;12|t&amp;63 or something similar (the original version of this formula ran at 8 kHz). As usual with pure bitwise formulas, all the intervals are octaves, but in this case, the musical structure is very nice.</div><div><br /></div><div>As it is possible to transpose a bit-masking melody simply by transposing the carrier wave, it's a good idea to generate a bassline and reuse it as the carrier. Unlike the lead generator, the bassline generator is very straightforward in appearance, consisting of four pitch values stored in a string constant. A sawtooth wave is generated, stored to a variable (so that it can be reused by the lead melody generator) and amplitude-modulated.</div><div><br /></div><div>Finally, there's a simple drum beat that is generated by a combination of division and bit extraction. The extracted bit is scaled to an amplitude of 35. Simple drums are often synthesized with fast downward pitch-slides, and the division approach does this very well.</div><div><br /></div><div>In the case of Ryg's formula, which I discussed some sections earlier, I might also guess that the melody generator, the most chaotic element of the system, was the central piece, which was later coupled with a bassline generator whose pitches were deliberately chosen to harmonize with the generated melody.</div><div><br /></div><div><span class="Apple-style-span" style="font-size:x-large;">The future</span></div><div><br /></div><div>I have been contacted by quite a few people who have brought up different ideas for future development. We should, for example, have a social website where anyone could enter new formulas, listen to them in a playlist-like manner and rate them. Another branch of ideas concerns the production of new rateable formulas by random generation or by breeding old ones together with genetic algorithms.</div></div><div><br /></div><div><div>All of these ideas are definitely interesting, but I don't think the time is right for them yet. I have been developing my audiovisual virtual machine, which is the main reason why I did these experiments in the first place. I regard the current concept of "oneliner music" as a mere placeholder for the system that is yet to be released. There are too many problems with the C-like infix syntax and other aspects of the concept, so I think it's wiser to first develop a better toy and then think about a community mechanism. However, these are just my own priorities. If someone feels like building the kind of on-line community I described, I'll support the idea.</div><div><br /></div><div>I've mentioned this toy before. It was previously called EDAM, but now I've chosen to name it IBNIZ (Ideally Bare Numeric Impression giZmo). One of the I letters could also stand for "immediate" or "interactive", as I'm going to emphasize immediate, hands-on modifiability of the code. 
IBNIZ will hopefully be relevant as a demoscene platform for extreme size classes, as a test bed for esoteric algorithmic trickery, as an appealing introduction to hard-core minimalist programming, and also as a fun toy to just jam around with. Here's a little screenshot of the current state:</div><div><br /></div><div><img src="http://3.bp.blogspot.com/-Gc-b-CmyQg0/TqrLdSfB24I/AAAAAAAAAIw/Uj_rZPx1L2o/s320/ibniz.jpeg" border="0" alt="" id="BLOGGER_PHOTO_ID_5668566785037818754" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 320px; height: 320px; " /></div><div><br /></div><div>In my previous post, I mentioned the possibility of opening a door for 256-byte demos that are interesting both graphically and musically. The oneliner music project and IBNIZ will provide valuable research for the high-level, algorithmic aspects of this project, but I've also made some hands-on tests on the platform-level feasibility of the idea. It is now apparent that a stand-alone MS-DOS program that generates PCM sound and synchronized real-time graphics can easily fit in less than 96 bytes, so there's a lot of room left for both music and graphics in the 256-byte size class. I'll probably release a 128- or 256-byte demo as a proof-of-concept, utilizing something derived from a nice oneliner music formula as the soundtrack.</div><div><br /></div><div>I would like to thank everyone who has been interested in the oneliner music project, as all the hype made me very determined to continue my <a href="http://pelulamu.net/countercomplex/computationally-minimal-art/">quest for unleashing the potential of the bit and the byte</a>. My next post regarding this quest will probably appear once there's a version of IBNIZ worth releasing to the public.</div></div><div><br /></div>viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com38tag:blogger.com,1999:blog-1787947700033244607.post-44790582289895410362011-10-02T14:15:00.003+01:002011-10-29T18:51:18.738+01:00Algorithmic symphonies from one line of code -- how and why?<div style="text-align: justify;">Lately, there has been a lot of experimentation with very short programs that synthesize something that sounds like music. I now want to share some information and thoughts about these experiments.</div><div><br /></div><div>First, some background. 
On 2011-09-26, I released the following video on Youtube, presenting seven programs and their musical output:</div><div><br /></div><div><iframe width="420" height="315" src="http://www.youtube.com/embed/GtQdIYUtAHg" frameborder="0" allowfullscreen=""></iframe></div><div><br /></div><div>This video gathered a lot of interest, inspiring many programmers to experiment on their own and share their findings. This was further boosted by <a href="http://wurstcaptures.untergrund.net/music/">Bemmu's on-line Javascript utility</a>, which made it easy for anyone (even non-programmers, I guess) to jump on the bandwagon. In just a couple of days, people had found so many new formulas that I just had to release another video to show them off.</div><div><br /></div><div><iframe width="420" height="315" src="http://www.youtube.com/embed/qlrs2Vorw2Y" frameborder="0" allowfullscreen=""></iframe></div><div><br /></div><div>Edit 2011-10-10: note that there's now a third video as well! <a href="http://www.youtube.com/watch?v=tCRPUv8V22o">http://www.youtube.com/watch?v=tCRPUv8V22o</a></div><div><br /></div><div>It all started a couple of months ago, when I encountered a 23-byte C-64 demo, <a href="http://www.youtube.com/watch?v=7lcQ-HDepqk">Wallflower by 4mat of Ate Bit</a>, that was like nothing I had ever seen in that size class on any platform. Glitchy, yes, but it had a musical structure that vastly outgrew its size. I started to experiment on my own and came up with a 16-byte VIC-20 program whose musical output totally blew my mind. 
My earlier blog post, "<a href="http://countercomplex.blogspot.com/2011/06/16-byte-frontier-extreme-results-from.html">The 16-byte frontier</a>", reports these findings and speculates about why they work.</div><div><br /></div><div>Some time later, I resumed the experimentation with a slightly more scientific mindset. In order to better understand what was going on, I needed a simpler and "purer" environment -- something that lacked the arbitrary quirks and hidden complexities of 8-bit soundchips and processors. I chose to experiment with short C programs that dump raw PCM audio data. I had written tiny "/dev/dsp softsynths" before, and I had even had one in my email/usenet signature in the late 1990s. However, the programs I would now be experimenting with would be shorter and less planned than my previous ones.</div><div><br /></div><div>I chose to replicate the essentials of my earlier 8-bit experiments: a wave generator whose pitch is controlled by a function consisting of shifts and logical operators. The simplest waveform for /dev/dsp programs is the sawtooth. A simple <i>for(;;)putchar(t++)</i> generates a sawtooth wave with a cycle length of 256 bytes, resulting in a frequency of 31.25 Hz when using the default sample rate of 8000 Hz. The pitch can be changed with multiplication. 
<div><br /></div><div>From there, <i>t++*2</i> is an octave higher, <i>t++*3</i> goes up by 7 semitones from there, and <i>t++*(t&gt;&gt;8)</i> produces a rising sound. After a couple of trials, I came up with something that I wanted to share on an IRC channel:</div><div><br /></div><div><i>main(t){for(t=0;;t++)putchar(t*(((t&gt;&gt;12)|(t&gt;&gt;8))&amp;(63&amp;(t&gt;&gt;4))));}</i></div><div><br /></div><div>In just over an hour, Visy and Tejeez had contributed six more programs on the channel, mostly varying the constants and changing some parts of the function. On the following day, Visy shared our discoveries on Google+, and I reshared them. A surprising flood of interested comments followed. Some people wanted to hear an MP3 rendering, so I produced one. All these reactions eventually led me to release the MP3 rendering on Youtube with some accompanying text screens. (In case you are wondering, I generated the screens with an old piece of code that simulates a non-existent text mode device, so it's just as "fakebit" as the sounds are.)</div><div><br /></div><div>When the first video was released, I was still unsure whether it would be possible for one line of C code to reach the sophistication of the earlier 8-bit experiments. Simultaneous voices, percussion -- where were they?
It would also have been great to find nice basslines and progressions, as those would be useful for tiny demoscene productions.</div><div><br /></div><div>At some point, people noticed that by getting rid of the <i>t*</i> part altogether and just applying logical operators on shifted time values, one could get percussion patterns as well as some harmonies. Even a formula as simple as <i>t&amp;t&gt;&gt;8</i>, an aural corollary of "munching squares", has interesting harmonic properties. Some small features can be made loud by adding a constant to the output. A single logical operator is enough to combine two good-sounding formulas (often with interesting artifacts that add to the richness of the sound). All this provided material for the "second iteration" video.</div>
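<div><br /></div><div>To make the combination idea concrete, here is a small sketch of my own (not one of the formulas featured in the videos): the munching-squares corollary mentioned above, a second rising voice, one OR to merge them, and a constant to lift the quieter features:</div><pre>#include &lt;stdio.h&gt;

int main(void)
{
    for (unsigned t = 0;; t++) {
        unsigned char a = t &amp; (t &gt;&gt; 8);  /* "munching squares" corollary */
        unsigned char b = t * (t &gt;&gt; 10); /* a second, rising voice */
        putchar((a | b) + 32);           /* one OR to combine, plus a constant */
    }
    return 0;
}</pre>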
<div><br /></div><div>If the experimentation continues at this pace, it won't take many weeks until we have found the grail: a very short program, maybe even shorter than a Spotify link, that synthesizes all the elements commonly associated with a pop song: rhythm, melody, bassline, harmonic progression, macrostructure. Perhaps even something that sounds a little bit like vocals? We'll see.</div><div><br /></div><div><span class="Apple-style-span" style="font-size:large;">Hasn't this been done before?</span></div><div><br /></div><div>We've had the technology for all this for decades. People have been building musical circuits that operate on digital logic, creating short pieces of software that output music, experimenting with chaotic audiovisual programs and trying out various algorithms for musical composition. The mathematical theory of music has a history of over two millennia. Based on all this, I find it quite mind-boggling that I have never before encountered anything similar to our discoveries, despite my long-standing interest in computing and algorithmic sound synthesis. I've made some Google Scholar searches for related papers but haven't found anything. Still, I'm quite sure that many individuals have come up with these formulas before, but, for some reason, their discoveries remained in obscurity.</div><div><br /></div><div>Maybe it's just about technological mismatch: to builders of digital musical circuits, things like LFSRs may have been more appealing than very wide sequential counters. In the early days of the microcomputer, there was already enough RAM available to hold some musical structure, so there was never a real urge to simulate it with simple logic. Or maybe it's about the problems of an avant-garde mindset: if you're someone who likes to experiment with random circuit configurations or strange bit-shifting formulas, you're likely someone who has learned to appreciate glitch esthetics and never really wants to go far beyond that.</div><div><br /></div><div>The demoscene is in a special position here, as technological mismatch is irrelevant there. In the era of gigabytes and terabytes, demoscene coders are exploring the potential of ever shorter program sizes. And despite this, the sense of esthetics is more traditional than with circuit-benders and avant-garde artists: the hack value of a tiny softsynth depends on how much its output resembles "real, big music" such as Italo disco.</div><div><br /></div><div>The softsynths used in the 4-kilobyte size class are still quite engineered. They often use tight code to simulate the construction of an analog synthesizer controlled by a stored sequence of musical events. However, as 256 bytes is becoming the new 4K, there has been ever more need to play decent music in the 256-byte size class. It is still possible to follow the constructivist approach in this size class -- for example, I've coded some simple 128-byte players for the VIC-20 when I had very little memory left. However, since the recent findings suggest that an approach with a lot of random experimentation may give better results than deterministic hacking, people have been competing in finding more and more impressive musical formulas. Perhaps all this was something that just had to come out of the demoscene and nowhere else.</div><div><br /></div><div>Something I particularly like in this "movement" is its immediate, hands-on collaborative nature, with people sharing the source code of their findings and basing their own experimentation on other people's efforts. Anyone can participate in it and discover new, mind-boggling stuff, even with very little programming expertise.
I don't know how long this exploration phase is going to last, but things like this might be useful for a "<a href="http://countercomplex.blogspot.com/2011/06/we-need-pan-hacker-movement.html">Pan-Hacker movement</a>" that advocates hands-on, hard-core hacking to the greater masses. I definitely want to see more projects like this.</div><div><br /></div><div><span class="Apple-style-span" style="font-size:large;">How profound is this?</span></div><div><br /></div><div>Apart from some deterministic efforts that quickly bloat the code up to hundreds of source-code characters, the exploration process so far has been mostly trial-and-error. Some trial-and-error experimenters, however, seem to have been gradually developing an intuitive sense of what kinds of formulas can serve as ingredients for something greater. Perhaps, at some point in the future, someone will release some enlightening mathematical and music-theoretical analysis that explains why and how our algorithms work.</div><div><br /></div><div>It already seems apparent, however, that stuff like this works in contexts far beyond PCM audio. The earlier 8-bit experiments, such as the C-64 Wallflower, quite blindly write values to sound and video chip registers and still manage to produce interesting output. Media artist Kyle McDonald has rendered the first bunch of sounds into <a href="http://www.flickr.com/photos/kylemcdonald/sets/72157627762378810/">monochrome bitmaps</a> that show an interesting, "glitchy" structure. Usually, music looks quite bad when rendered as a bitmap -- and this applies even to small chiptunes that sound a lot like our experiments -- so it was interesting to notice the visual potential as well.</div><div><br /></div><div><img src="http://3.bp.blogspot.com/-Dqm7wUdc_b4/TohqK5jRKJI/AAAAAAAAAH4/I6Uk8HH75q0/s320/tejeeztune.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5658889667271010450" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 240px; height: 240px; " /></div><div><br /></div><div>I envision that, in the context of generative audiovisual works, simple bitwise formulas could not only generate the musical output but also drive various visual parameters as a function of time. This would make it possible, for example, for a 256-byte demoscene production to have an interesting and varying audiovisual structure with a strong, inherent synchronization between the effects and the music. As the formulas we've been experimenting with can produce both microstructure and macrostructure, we might assume that they can drive low-level and high-level parameters equally well, from wave amplitudes and pixel colors to layer selection, camera paths, and 3D scene construction.</div>
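<div><br /></div><div>As a toy illustration of the visual side (my own construction, in the spirit of McDonald's renderings; the mapping of samples to pixels is an arbitrary choice), here's the formula from the beginning of this post drawn as a monochrome PBM bitmap, one pixel per sample, using the high bit of each output byte:</div><pre>#include &lt;stdio.h&gt;

int main(void)
{
    int w = 512, h = 512;
    printf("P1\n%d %d\n", w, h);
    for (int y = 0; y &lt; h; y++)
        for (int x = 0; x &lt; w; x++) {
            unsigned t = y * w + x;  /* time runs left to right, row by row */
            unsigned char s = t * (((t &gt;&gt; 12) | (t &gt;&gt; 8)) &amp; (63 &amp; (t &gt;&gt; 4)));
            putchar(s &amp; 0x80 ? '1' : '0');
            if ((x &amp; 63) == 63)
                putchar('\n');  /* keep lines short for strict PBM readers */
        }
    return 0;
}</pre>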
<div><br /></div><div>But so far, the higher-level part is mere speculation, until someone extends the experimentation to these parameters.</div><div><br /></div><div>I can't really tell if there's anything very profound in this stuff -- after all, we already have fractals and chaos theory. But at least it's great for the kind of art I'm involved with, and that's what matters to me. I'll probably be exploring and embracing the audiovisual potential for some time, and you can expect me to blog about it as well.</div><div><br /></div><div>Edit 2011-10-29: There's now <a href="http://countercomplex.blogspot.com/2011/10/some-deep-analysis-of-one-line-music.html">a more detailed analysis</a> available of some formulas and techniques.</div>viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com477tag:blogger.com,1999:blog-1787947700033244607.post-30111998786982398102011-09-07T13:11:00.000+01:002011-09-07T13:22:06.852+01:00A new propaganda tool: Post-Apocalyptic Hacker World<div>I visited the Assembly demo party this year, after a two-year break. It seemed more relevant than it had been in a while, because I had an agenda.</div><div><br /></div><div>For a year or so, I have been actively thinking about the harmful aspects of people's relationships with technology. It is already quite apparent to me that we are increasingly under the control of our own tools, letting them make us stupid and dependent. Unless, of course, we promote a different world, a different way of thinking, that allows us to remain in control.</div><div><br /></div><div>So far, I've written a couple of blog posts about this. I've been nourishing myself with the thoughts of prominent people such as Jaron Lanier and Douglas Rushkoff, who share the concern. I've been trying to find ways of promoting the aspects of hacker culture I represent. Now I felt that the time was right for a new branch -- an artistic one, based on a fictional world.</div><div><br /></div><div>My demo "Human Resistance", which came 2nd in the oldskool demo competition, was my first excursion into this new branch. Of course, it has some echoes of my earlier productions such as "Robotic Liberation", but the setting is new. Instead of showing ruthless machines genociding helpless mankind, we are dealing with a culture of ingenious hackers who manage to outthink a superhuman intellect that dominates the planet.</div><div><br /><iframe width="560" height="345" src="http://www.youtube.com/embed/F1537t45xm8" frameborder="0" allowfullscreen=""></iframe><br /></div><div><br /></div><div>"Human Resistance" was a relatively quick hack. I was too hurried to fix the problems in the speech compressor or to explore the real potential of Tau Ceti-style pseudo-3D rendering. The text, however, came from my heart, and the overall atmosphere was quite close to what I intended. It introduces a new fictional world of mine, a world I've temporarily dubbed "Post-Apocalyptic Hacker World" (PAHW). I've been planning to use this world not only in demo productions but also in at least one video game. I haven't released anything interactive for like fifteen years, so perhaps it's about time for a game release.</div><div><br /></div><div>Let me elaborate a little on the setting of this world.</div><div><br /></div><div>Fast-forward to a post-singularitarian era. Machines control all the resources of the planet.
Most human beings, seduced by the endless pleasures of procedurally-generated virtual worlds, have voluntarily uploaded their minds into so-called "brain clusters", where they have lost their humanity and individuality, becoming mere components of a global superhuman intellect. Only those with a lot of willpower and a strong philosophical stance against dehumanization have remained in their human bodies.</div><div><br /></div><div>Once the machines initiated an operation called "World Optimization", they started to regard natural formations (including all biological life) as harmful and unpredictable externalities. As a result, planet Earth has been transformed into something far more rigid, orderly and geometric. Forests, mountains, oceans and clouds no longer exist. Strange, lathe-like artifacts protrude from vast, featureless plains. Those who had studied ancient pop culture immediately noticed a resemblance to some of the 3D computer graphics of the 1980s. The real world has now started to look like the computed reality of Tron or the futuristic terrains of video games such as Driller, Tau Ceti and Quake Minus One.</div><div><br /></div><div>Only a tiny fraction of biological human beings survived World Optimization. These people, who collectively call themselves "hackers", managed to find and exploit the blind spots of algorithmic logic, making it possible for them to establish secret, self-reliant underground fortresses where human life can still struggle on. It has become a necessity for all human beings to dedicate as much of their mental capacities as possible to outthinking the brain clusters in order to eventually conquer them.</div><div><br /></div><div>Many of the tropes in Post-Apocalyptic Hacker World are quite familiar. A human resistance movement fighting against a machine-controlled world -- haven't we seen this many times already? Yes, we have, but I also think my approach is novel enough to form a basis for some cutting-edge social, technological and political commentary. By emphasizing things like the role of total cognitive freedom and a radical understanding of the inner workings of things in the futuristic hacker culture, it may be possible to get people to realize their importance in the real world as well. It is also quite possible to include elements from real-life hacker cultures and mindsets in the world, making them all the more interesting.</div><div><br /></div><div>The "PAHW game" (still without a better title) is already in an advanced stage of pre-planning. It is going to become a hybrid CRPG/strategy game with randomly generated worlds, very loose scripting and some rather unique game-mechanical elements. This is just a side project, so it may take a while before I have anything substantial to show, but I'll surely let you know once I do. Stay tuned!</div><div><br /></div>viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com14tag:blogger.com,1999:blog-1787947700033244607.post-44425178458102132872011-07-24T09:45:00.000+01:002011-07-24T09:59:15.098+01:00Don't submit yourself to a game machine!<div>(This is a translation of <a href="http://viznut.blogspot.com/2011/07/ala-alistu-pelikoneille.html">a post in my Finnish blog</a>)</div><div><br /></div><div>Some generations ago, when people said they were playing a game, they usually meant a social leisure activity that followed a commonly decided set of rules. The devices used for gaming were very simple, and the games themselves were purely in the minds of the players.
It was possible to play thousands of different games with a single standard deck of cards, and it was possible for anyone to invent new games and variants.</div><div><br /></div><div>Technological progress brought us "intelligent" gaming devices that reduced the possibility of negotiation. It is not possible to suggest an interesting rule variant to a pinball machine or a one-armed bandit; the machine only implements the rules it is built for. Changing the game requires technical skill and a lot of time, something most people don't have. As a matter of fact, most people aren't even interested in the exact rules of the game; they just care about the fun.</div><div><br /></div><div>Nowadays, people have submitted ever bigger portions of their lives to "gaming machines" that make things at least superficially easier and simpler, but whose internal rules they don't necessarily understand at all. A substantial portion of today's social interaction in developed countries, for example, takes place in on-line social networking services. Under their hoods, these services calculate things like message visibility -- that is, which messages and whose messages are supposed to be more important for a given user. For most people, however, it seems to be completely OK that a computer owned by a big, distant corporation makes such decisions for them using a secret set of rules. They just care about the fun.</div><div><br /></div><div>It has always been easy to use the latest media to manipulate people, as it takes time for an audience to develop criticism. When writing was a new thing, most people would regard any text as a "word of God" that was true just because it was written. In comparison, today's people have a thick wall of criticism against any kind of non-interactive propaganda, be it textual, aural or visual, but whenever a game-like interaction is introduced, we often become completely vulnerable. In short, we know how to be critical about on-line news items but not about the "like" and "share" buttons under them.</div><div><br /></div><div>Video games, in many ways, surpass traditional passive media in their potential for mental manipulation. A well-known example is the so-called Tetris effect caused by prolonged playing of a pattern-matching game. The game of Tetris "programs" its player to constantly analyze the on-screen wall of blocks and mentally fit different types of tetrominos into it. When a player stops playing after several hours, the "program" may remain active, causing the player to continue mentally fitting tetrominos onto outdoor landscapes or whatever they see in their environment. Other kinds of games may have other kinds of effects. I have personally also experienced an "adventure game effect" that caused me to involuntarily think about real-world things and locations from the point of view of "progressing in the script". Therefore, I don't think it is a very far-fetched idea that spending a lot of time on an interactive website gives our brains permission to adapt to the "game mechanics" and unnoticeably alter the way we look at the world.</div><div><br /></div><div>So, is this a real threat? Are they already trying to manipulate our minds by game-mechanical means, and how?
There has been perhaps even too much criticism of Facebook compared to other social networking sites, but I'm now using it as an example, as it is currently the most familiar one to a wide audience.</div><div><br /></div><div>As many people probably understand already, Facebook's customer base doesn't consist of the users (who pay nothing for the service) but of marketeers who want their products to be sold. The users can be thought of as mere raw material that can be refined to better fit the requirements of the market. This is most visible in the user profile mechanic that encourages users to define themselves primarily with multiple choices and product fandom. The only space in the profile that allows for a longer free text is laid out below all the "more important things". Marketeers don't want personal profile pages but reliable statistics, high-quality consumption habit databases and easily controllable consumers.</div><div><br /></div><div>The most prominent game-mechanical element in Facebook is "Like", which affects nearly everything on the site. It is a simple and easily processable signal whose use is particularly encouraged. In its internal game, Facebook scores users according to how active "likers" they are, and gives more visibility to the messages of those users who score higher. Moderate users of Facebook, who use their whole brain to consider what to "Like" and what to share, gain fewer points and less visibility. This is how Facebook rewards the "virtuous" users and punishes the "sinful" ones.</div><div><br /></div><div>What about those users who actually want to understand the inner workings of the service, in order to use it better for their own purposes? Facebook makes this very difficult, and I believe it is on purpose. The actual rules of the game haven't been documented anywhere, so users need to follow intuitive guesses or experiment with the thing. If a user actually manages to reverse-engineer part of the black box, he or she can never trust that it continues to work in the same way. The changes in the rules of the internal game can be totally unpredictable. This discourages users from even trying to understand the game they are playing and encourages them to trust the control of their private lives to the computers of a big, distant company.</div><div><br /></div><div>Of course, Facebook is not representative of all forms of on-line sociality. The so-called imageboards, for example, are diametrically opposite to Facebook in many areas: totally uncommercial and simple-to-understand sites where real names or even pseudonyms are rarely used. As these sites function totally differently from Facebook, it can be guessed that they also affect their users' brains in a different way.</div><div><br /></div><div>Technically, imageboards resemble discussion boards, but with the game-mechanical difference that they encourage a faster, more spontaneous communication which usually feels more like a loud attention-whoring contest than actual discussion. A lot of the imageboard culture can be explained as mere consequences of the mechanics. The fact that images are often more prominent than text in threads makes it possible for users to superficially skim around the pictures and only focus on the parts that seize their attention. This contributes to the fast tempo that invites the users to react very quickly and spontaneously, usually without any means of identification, as if part of a rebellious mob.
Beliefs in radical anonymity and hivemind power have ultimately become core values of the imageboard culture.</div><div><br /></div><div>The possibility of anonymous commentary gives us a much greater sense of freedom than we get by using our real name or even a long-term pseudonym. Anonymous provocateurs don't need to be afraid of losing face. They feel free to troll around from the bottom of their hearts, looking for the "lulz" they get by riling someone up. The behavior is probably familiar to anyone who has been reading anonymous comments on news websites or toilet walls. Imageboards just take this kind of behavior to its logical extreme, basing all of their social interaction on spontaneous mob behavior.</div><div><br /></div><div>Critics of on-line culture, such as Lanier and Rushkoff, have often expressed their concern about how on-line socialization trivializes our view of other people. Instead of interacting with living people with rich personalities, we seem to be increasingly dealing with lists, statistics and faceless mobs that we interact with using "Like", "Block" and "Add Friend" buttons. I'm also concerned about this. Even when someone understands, on the rational level, that this is just an abstraction required for the medium to work, we may accidentally and unnoticeably become programmed by the "Tetris effects" of these media. Awareness and criticism may very well reduce the risk, but I don't believe they can make anyone totally immune.</div><div><br /></div><div>So, what can we do? Should we abandon social networking sites altogether to save the humanity of the human race? I don't think denialism helps anything. Instead, we should learn how to use the potential of interactive social technology in constructive rather than destructive ways. We should develop new game mechanics that, instead of promoting collective stupidity and dehumanization, augment the positive sides of humanity and encourage us to improve ourselves. But is this anything the great masses could become interested in? Do they even care anymore whether they remain independent individuals? Perhaps not, but we can still hope for the best.</div>viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com14tag:blogger.com,1999:blog-1787947700033244607.post-57573020007922079102011-06-21T19:14:00.000+01:002011-07-01T08:18:14.913+01:00The 16-byte frontier: extreme results from extremely small programs.While mainstream software has been getting bigger and more bloated year after year, the algorithmic artists of the demoscene have been following the opposite route: building ever smaller programs to generate ever more impressive audiovisual show-offs.<br /><br />The traditional competition categories for size-limited demos are 4K and 64K, limiting the size of the stand-alone executable to 4096 and 65536 bytes, respectively. However, as development techniques have gone forward, the 4K size class has adopted many features of the 64K class, or as someone summarized it a couple of years ago, "4K is the new 64K". There are development tools and frameworks specifically designed for 4K demos. Low-level byte-squeezing and specialized algorithmic beauty have given way to high-level frameworks and general-purpose routines. This has moved a lot of "sizecoding" activity into more extreme categories: 256B has become the new 4K.
For a fine example of a modern 256-byter, see <a href="http://www.pouet.net/prod.php?which=53816">Puls by Rrrrola</a>.<br /><br /><iframe src="http://www.youtube.com/embed/R35UuntQQF8" allowfullscreen="" width="425" frameborder="0" height="349"></iframe><br /><br />The next hexadecimal order of magnitude down from 256 bytes is 16 bytes. Yes, there are some 16-byte demos, but this size class has not yet established its status on the scene. At the time of writing this, the smallest size category in the pouet.net database is 32B. What's the deal? Is the 16-byte limit too tight for anything interesting? What prevents 16B from becoming the new 256B?<br /><br />Perhaps the most important platform for "bytetros" is MS-DOS, using the no-nonsense .COM format that has no headers or mandatory initialization at all. Also, in .COM files we only need a couple of bytes to obtain access to most of the vital things such as the graphics framebuffer. In the 16-byte size class, however, these "couples of bytes" quickly fill up the available space, leaving very little room for the actual substance. For example, here's a disassembly of a "TV noise" effect (by myself) in fifteen bytes, with comments added:<br /><pre>addr bytes    asm
0100 B0 13    MOV AL,13H    ; select the 320x200x256 video mode...
0102 CD 10    INT 10H       ; ...via the BIOS video interrupt
0104 68 00 A0 PUSH A000H    ; point ES at the VGA framebuffer,
0107 07       POP ES        ; which starts at segment A000
0108 11 C7    ADC DI,AX     ; drift the write position pseudorandomly
010A 14 63    ADC AL,63H    ; mutate the color value
010C AA       STOSB         ; "putpixel": write AL to ES:[DI], advance DI
010D EB F9    JMP 0108H     ; loop forever
</pre><br />The first four lines, summing up to a total of eight bytes, initialize the popular 13h graphics mode (320x200 pixels with 256 colors) and set the segment register ES to point to the beginning of this framebuffer. While these bytes would be marginal in a 256-byte demo, they eat up half of the available space in the 16-byte size class. Assuming that the infinite loop (requiring a JMP) and the "putpixel" (STOSB) are also part of the framework, we are only left with five (5) bytes to play around with! It is possible to find some interesting results besides TV noise, but it doesn't take the coder many hours to get the feeling that there's nothing more left to explore.<br /><br />What about other platforms, then? Practically all modern mainstream platforms and a considerable portion of older ones are out of the question because of the need for long headers and startup stubs. Some platforms, however, are very suitable for the 16-byte size class and even have considerable advantages over MS-DOS. The hardware registers of the Commodore 64, for example, are more readily accessible and can be manipulated in quite unorthodox ways without risking compatibility. This saves a lot of precious bytes compared to MS-DOS and thus opens a much wider space of possibilities for the artist to explore.<br /><br />So, what is there to be found in the 16-byte possibility space? Is it all about raster effects, simple per-pixel formulas and glitches? Inferior and uglier versions of the things that have already been made in 32 or 64 bytes? Is it possible to make a "killer demo" in sixteen bytes? A recent 23-byte Commodore 64 demo, <a href="http://www.pouet.net/prod.php?which=56935">Wallflower by 4mat of Ate Bit</a>, suggests that this might be possible:<br /><br /><iframe src="http://www.youtube.com/embed/7lcQ-HDepqk" allowfullscreen="" width="425" frameborder="0" height="349"></iframe><br /><br />The most groundbreaking aspect of this demo is that it is not just a simple effect but appears to have a structure reminiscent of bigger demos. It even has an end. The structure is both musical and visual.
The visuals are quite glitchy, but the music has a noticeable rhythm and macrostructure. Technically, this has been achieved by using the two lowest-order bytes of the system timer to calculate values that indicate how to manipulate the sound and video chip registers. The code of the demo follows (comments mine):<br /><pre>* = $7c        ; assembled into the zero page
ora $a2        ; mix in the fast system timer byte
and #$3f
tay            ; low bits become a register offset
sbc $a1        ; mix in the slow timer byte
eor $a2
ora $a2        ; force some bits to 1
and #$7f
sta $d400,y    ; write into the SID (sound) register area
sta $cfd7,y    ; the upper part of the y range lands in the VIC-II (video) registers
bvc $7c        ; loop
</pre><br />When I looked into the code, I noticed that it is not very optimized. The line "eor $a2", for example, seems completely redundant. This inspired me to attempt a similar trick within the sixteen-byte limitation. I experimented with both the C-64 and the VIC-20, and here's something I came up with for the VIC-20:<br /><pre>* = $7c        ; again in the zero page, next to the timer bytes
lda $a1        ; slow timer byte
eor $9004,x    ; mix with a read from the VIC I/O area (moves as X grows)
ora $a2        ; fast timer byte forces some bits to 1
ror
inx
sta $8ffe,x    ; write into the VIC register area at $9000 onwards
bvc $7c        ; V is never touched, so this always branches
</pre><br />Sixteen bytes, including the two-byte PRG header. The visual side is not that interesting, but the musical output blew my mind when I first started the program in the emulator. Unfortunately, the demo doesn't work that well on real VIC-20s (due to an unemulated aspect of the I/O space). I used a real VIC-20 to come up with good-sounding alternatives, but this one is still the best I've been able to find. <a href="http://www.pelulamu.net/pwp/vic20/soundflower.mp3">Here's an MP3 recording of the emulator output</a> (with some equalization to silence the noisy low frequencies).<br /><br />And no, I wasn't the only one who was inspired by Wallflower. Quite soon after it came out, some sceners came up with "ports" to the <a href="http://www.pouet.net/prod.php?which=57042">ZX Spectrum</a> (in 12 or 15 bytes + TAP header) and the <a href="http://www.pouet.net/prod.php?which=56951">Atari XL</a> (17 bytes of code + 6-byte header). However, I don't think they're as good in the esthetic sense as the original C-64 Wallflower.<br /><br />So, how and why does it work? I haven't studied the ZX and XL versions, but here's what I've figured out about 4mat's original C-64 version and my VIC-20 experiment:<br /><br />The layout of the zero page, which contains all kinds of system variables, is quite similar on the VIC-20 and the C-64. On both platforms, the byte at address $A2 contains a counter that is incremented 60 times per second by the system timer interrupt. When this byte wraps over (every 256 steps), the byte at address $A1 is incremented. This happens every 256/60 = 4.27 seconds, which is also the length of the basic macrostructural unit in both demos.<br /><br />In music, especially in the rhythms and timings of Western pop music, binary structures are quite prominent. Oldschool homecomputer music takes advantage of this in order to maximize simplicity and efficiency: in a typical tracker song, for example, four rows comprise a beat, four beats (16 rows) comprise a bar, and four bars (64 rows) comprise a pattern, which is the basic building block for the high-level song structure. The macro-units in our demos correspond quite well to tracker patterns in terms of duration and number of beats.<br /><br />The contents of the patterns, in both demos, are calculated using a formula that can be split into two parts: a "chaotic" part (which contains additions, XORs, feedbacks and bit rotations), and an "orderly" part (which, in both demos, contains an OR operation). The OR operation produces most of the basic rhythm, timbres and rising melody-like elements by forcing certain bits to 1 at the ends of patterns and smaller subunits. The chaotic part, on the other hand, introduces an unpredictable element that makes the output interesting.
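<br /><br />In a higher-level notation, this split might look something like the following -- a loose C-style sketch of my own that writes raw 8-bit samples in the manner of a tiny softsynth, not a port of either demo:<br /><pre>#include &lt;stdio.h&gt;

int main(void)
{
    unsigned char acc = 0;
    for (unsigned t = 0;; t++) {
        acc += t ^ (t &gt;&gt; 8);             /* "chaotic" part: feedback and XOR */
        putchar(acc | (63 &amp; (t &gt;&gt; 4)));  /* "orderly" part: an OR that forces
                                            bits to 1 towards the end of each
                                            1024-sample subunit */
    }
    return 0;
}</pre><br />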
It is almost a given that the outcomes of this approach are esthetically closer to glitch art than to the traditional "smooth" demoscene esthetics. As in glitching and circuit-bending, hardware details have a very prominent effect on "Wallflower variants": a small change in register layout can cause a considerable difference in what the output of a given algorithm looks and sounds like. Demoscene esthetics is far from completely absent in "Wallflower variants", however. When the artist chooses the best candidate among countless experiments, the judgement process strongly favors those programs that resemble actual demos and appear to squeeze a ridiculous amount of content into a low number of bytes.<br /><br />When dealing with very short programs that escape straightforward rational understanding by appearing to outgrow their length, we are dealing with chaotic systems. Programs like this aren't anything new. The HAKMEM repository from the seventies provides <a href="http://www.inwap.com/pdp10/hbaker/hakmem/hacks.html">several examples of short audiovisual hacks for the PDP-10 mainframe</a>, and many of these are adaptations of earlier PDP-1 hacks, such as Munching Squares, dating back to the early sixties. Fractals, likewise producing a lot of detail from simple formulas, also fall under the label of chaotic systems.<br /><br />When churning art out of mathematical chaos, be that fractal formulas or short machine-code programs, it is often easiest for the artist to just randomly try out all kinds of alternatives without attempting to understand the underlying logic. However, this ease does not mean that there is no room for talent, technical progress or a rational approach in the 16-byte size class. Random toying is just a characteristic of the first stages of discovery, and once a substantial set of easily discoverable programs has been found, I'm sure that it will become much more difficult to find new and groundbreaking ones.<br /><br />Some years ago, I made a preliminary design for a virtual machine called "Extreme-Density Art Machine" (or EDAM for short). The primary purpose of this new platform was to facilitate the creation of extremely small demoscene productions by removing all the related problems and obstacles present in real-world platforms. There is no code/format overhead -- even an empty file is a valid EDAM program that produces a visual result. There will be no ambiguities in the platform definition, no aspects of program execution that depend on the physical platform. The instruction lengths will be optimized specifically for visual effects and sound synthesis. I have been seriously thinking about reviving this project, especially now that there have been interesting excursions into the 16-byte possibility space. But I'll tell you more once I have something substantial to show.viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com12tag:blogger.com,1999:blog-1787947700033244607.post-89183517520943702122011-06-17T22:25:00.000+01:002011-06-17T23:13:01.600+01:00We need a Pan-Hacker movement.Some decades ago, computers weren't nearly as common as they are today. They were big and expensive, and access to them was very privileged.
Still, there was a handful of people who had the chance to toy around with a computer in their leisure time and get a glimpse of what total, personal access to a computer might be like. It was among these people, mostly students at MIT and similar institutions, that the computer hacker subculture was born.<br /><br />The pioneering hackers felt that computers had changed their lives for the better and therefore wanted to share this new means of improvement with everyone else. They thought everyone should have access to a computer, and not just any kind of access but an unlimited, non-institutionalized one. Something like a cheap personal computer, for example. Eventually, in the seventies, some adventurous hackers bootstrapped the personal computer industry, which led to the so-called "microcomputer revolution" in the early eighties.<br /><br />The era was filled with hopes and promises. All kinds of new possibilities were now at everyone's fingertips. It was assumed that programming would become a new form of literacy, something every citizen should be familiar with -- after all, using a computer to its fullest potential has always required programming skill. "Citizens' computer courses" were broadcast on TV and radio, and parents bought cheap computers for their kids to ensure a bright future for the next generation. Some prophets even went as far as to suggest that personal computers could augment people's intellectual capacities or even expand their consciousnesses in the way psychedelic drugs were thought to do.<br /><br />In the nineties, however, reality struck back. Selling everyone a computer was apparently not enough to automatically turn people into superhuman creatures. As a matter of fact, digital technology actually seemed to dumb a lot of people down, making them helpless and dependent rather than liberating them. Hardware and software have become ever more complex, and it is already quite difficult to build reliable mental models about them or even be aware of all the automation that takes place. Instead of actually understanding and controlling their tools, people just make educated guesses about them and pray that everything works out right. We are increasingly dependent on digital technology but have less and less control over it.<br /><br />So, what went wrong? Hackers opened the door to universal hackerdom, but the masses didn't enter. Are most people just too stupid for real technological awareness, or are the available paths to it too difficult or time-consuming? Is the industry deliberately trying to dumb people down with excessive complexity, or is it just impossible to make advanced technology any simpler to genuinely understand? In any case, the hacker movement has somewhat forgotten the idea of making digital technology more accessible to the masses. It's a pity, since the world needs this idea now more than ever. We need to give ordinary people back the possibility to understand and master the technology they use. We need to let them ignore the wishes of the technological elite and regain control of their own lives. We need a Pan-Hacker movement.<br /><br />What does "Pan-Hacker" mean?
I'll give three interpretations that I find equally relevant, each emphasizing a different aspect of the concept: "everyone can be a hacker", "everything can be hacked" and "all hackers together".<br /><br />The first interpretation, "everyone can be a hacker", expands on the core idea of oldschool hackerdom, the idea of making technology as accessible as possible to as many as possible. The main issue is no longer the availability of technology, however, but the way the various pieces of technology are designed and what kinds of user cultures are formed around them. Ideally, technology should be designed so that it invites the user to seize control, play around for fun and gradually develop an ever deeper understanding in a natural way. User cultures that encourage users to invent new tricks should be embraced and supported, and there should be different "paths of hackerdom" for all kinds of people with all kinds of interests and cognitive frameworks.<br /><br />The second interpretation, "everything can be hacked", embraces the trend of extending the concept of hacking beyond the technological zone. The generalized idea of hacking is relevant to all kinds of human activities, and all aspects of life are relevant to the principles of in-depth understanding and hands-on access. As the apparent complexity of the world is constantly increasing, it is particularly important to maintain and develop people's ability to understand the world and all the kinds of things that affect their lives.<br /><br />The third interpretation, "all hackers together", wants to eliminate the various schisms between the existing hacker subcultures and bring them into fruitful co-operation. There is, for example, a popular text, Eric S. Raymond's "How To Become A Hacker", that represents a somewhat narrow-minded "orthodox hackerdom" that sees the free/open-source software culture as the only hacker culture worth contributing to. It frowns upon all non-academic hacker subcultures, especially the ones that use handles (such as the demoscene, which is my own primary reference point to hackerdom). We need to get rid of this kind of segregation and realize that there are many equally valid paths suitable for many kinds of minds and ambitions.<br /><br />Now that I've mentioned the demoscene, I would like to add that all kinds of artworks and acts that bring people closer to the deep basics of technology are also important. I've been very glad about the increasing popularity of chip music and circuit-bending, for example. The Pan-Hacker movement should actively look for new ways of "showing off the bits" to different kinds of audiences in many kinds of diverse contexts.<br /><br />I hope my writeup has given someone some food for thought. I would like to elaborate on my philosophy even further and perhaps do some cartography of the existing "Pan-Hacker" activity, but perhaps I'll return to that at some later time. Before that, I'd like to hear your thoughts and visions about the idea. What kind of groups should I look into? What kind of projects could the Pan-Hacker movement participate in?
Is there still something we need to define or refine?viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com177tag:blogger.com,1999:blog-1787947700033244607.post-42870179804258162542011-06-06T17:50:00.000+01:002011-06-06T18:09:20.340+01:00Ancient binary symbolism and why it is relevant todayIt is a well-known fact that the human use of binary strings (or even binary numbers; see Pingala) predates electronics and automatic calculators by thousands of years.<br /><br />Divination was probably the earliest human application for binary arrays. There are several systems in Eurasia and Africa that assign fixed semantics to bitstrings of various lengths. The Chinese I Ching gives meanings to the 3- and 6-bit arrays, while the systems used in the Middle East, Europe and Africa tend to prefer groups of 4 and 8 bits.<br /><br />These systems of binary mysticism have been haunting me for many years. As someone who has been playing around with bits since childhood, I have found the idea of ancient archetypal meanings for binary numbers very attractive. However, when studying the actual systems in order to find out the archetypes, I have always encountered a lot of noise that has blocked my progress. It has been a little bit frustrating: behind the noise, there are clear hints of an underlying logic and an original protosemantics, but whenever I have tried to filter out the noise, the solution has escaped my grasp.<br /><br />Recently, however, I finally came up with a solution that satisfies my sense of esthetics. I even pixelled a set of "binary tarot cards" to show off the discovery:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-o8nCacuQddM/Te0F2sD3OYI/AAAAAAAAABQ/KRH5NLvznzs/s1600/binarytarot.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 236px; height: 410px;" src="http://1.bp.blogspot.com/-o8nCacuQddM/Te0F2sD3OYI/AAAAAAAAABQ/KRH5NLvznzs/s320/binarytarot.gif" alt="" id="BLOGGER_PHOTO_ID_5615150747499313538" border="0" /></a><br />For a more complete summary, you may want to check out <a href="http://www.pelulamu.net/countercomplex/crosstrad-4bit.html">this table</a>, which contains a more elaborate set of meanings for each array and also includes all the traditional semantics I have based them on.<br /><br />Of course, I'm not claiming that this is some kind of "proto-language" from which all the different forms of binary mysticism supposedly developed. It is just an attempt to find an internally consistent set of meanings that match the various traditional semantics as closely as possible.<br /><br /><span style="font-size:180%;">Explanation<br /></span><br />In my analysis, I have translated the traditional binary patterns into modern Leibnizian binary numbers using the following scheme:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-BZ8oz2jA2iM/Te0GQYZ1FMI/AAAAAAAAABY/cwCh3UZG0cY/s1600/nybbles.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 386px; height: 80px;" src="http://2.bp.blogspot.com/-BZ8oz2jA2iM/Te0GQYZ1FMI/AAAAAAAAABY/cwCh3UZG0cY/s320/nybbles.gif" alt="" id="BLOGGER_PHOTO_ID_5615151188899337410" border="0" /></a>This is the scheme that works best for I Ching analysis.
The bits on the bottom are considered heavier and more significant, and they change less frequently, so the normal big-endian reading starts from the bottom. The "yang" line, consisting of a single element, maps quite naturally to the binary "1", especially given that both "yang" and "1" are commonly associated with activity.<br /><br />I have drawn each "card picture" based on the African shape of the binary array (represented as rows of one or two stones). I have left the individual "stones" clearly visible so that the bitstrings can be read out from the pictures alone. Some of the visual associations are my own, but I have also tried to use traditional associations (such as 1111=road/path, 0110=crossroads, 1001=enclosure) whenever they feel relevant and universal enough.<br /><br />In addition to visual associations, the traditional systems have also formed semantics by opposition: if the array 1111 means "journey", "change" and "death", its inversion 0000 may obtain the opposite meanings: "staying at home", "stability" and "life". The visual associations of 0000 itself no longer matter as much.<br /><br />The two operations used for creating symmetry groups are inversion and mirroring. These can be found in all families of binary divination: symmetric arrays are always paired with their inversions (e.g. 0000 with 1111), and asymmetric arrays with their reversals (e.g. 0111 with 1110).<br /><br />Because of the profound role of symmetry groups, I haven't represented the arrays in numerical order but in a 4x4 arrangement that emphasizes the mutual relationships via inversion and mirroring. Each of the rows in the "binary tarot" picture represents a group with similar properties:<br /><ul><li> The top row contains the four symmetrical arrays (which remain the same when mirrored).</li><li>The second row contains the arrays for which mirroring and inversion are equivalent.</li><li>The two bottom rows represent the two groups whose members can be derived from each other solely by mirroring and inversion.<br /></li></ul>The semantics within each group are interrelated. For example, the third row ("up", "in", "out", "down") can be labelled "the directions". In order to emphasize this, I have chosen a pair of dichotomies for each row. For example, the row of the directions uses the dichotomies "far-near" and "horizontal-vertical", and the array called "up" combines the poles "far"+"vertical". All the dichotomies can be found in my summary table.<br /><br />The arrays in the top two groups have an even parity while those in the bottom two groups have an odd parity. This difference is important at least in Al-Raml and related systems, where the array getting the role of a "judge" in a divination table must have an even parity; otherwise there is an error in the calculation.<br /><br />The members of each row can be derived from one another by eXclusive-ORing them with a symmetrical array (0000, 1111, 0110 or 1001). For this reason, I have also organized the arrangement as an XOR table.<br /><br />The color schemes used in the card pictures are based on the colors in various 16-color computer palettes and don't carry further symbolism (even though 0010 happens to have the meaning of "red" in Al-Raml and Geomancy as well). Other than that, I have abstained from any modern technological connections.
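<br /><br />For the programmatically inclined, the machinery is easy to play with. Here's a minimal sketch of my own (the bit numbering follows the reading scheme above, with the bottom line as the most significant bit) that enumerates all sixteen arrays together with their inversions, mirrorings and parities:<br /><pre>#include &lt;stdio.h&gt;

/* print a 4-bit array, bottom (most significant) line first */
static void show(unsigned a)
{
    for (int i = 3; i &gt;= 0; i--)
        putchar('0' + (a &gt;&gt; i &amp; 1));
}

static unsigned invert(unsigned a) { return ~a &amp; 0xF; }  /* flip every line */

static unsigned mirror(unsigned a)                       /* reverse line order */
{
    return (a &amp; 1) &lt;&lt; 3 | (a &amp; 2) &lt;&lt; 1 | (a &amp; 4) &gt;&gt; 1 | (a &amp; 8) &gt;&gt; 3;
}

int main(void)
{
    for (unsigned a = 0; a &lt; 16; a++) {
        unsigned parity = (a ^ a &gt;&gt; 1 ^ a &gt;&gt; 2 ^ a &gt;&gt; 3) &amp; 1;
        show(a);         printf("  inverted: ");
        show(invert(a)); printf("  mirrored: ");
        show(mirror(a)); printf("  parity: %s\n", parity ? "odd" : "even");
    }
    return 0;
}</pre>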
<br /><br /><span style="font-size:180%;">But why?<br /></span><br />Our subjective worlds are full of symbolism that brings various mental categories together. We associate numbers, letters, colors and even playing cards with various real-world things. We may have superstitions about them or give them unique personalities. Synesthetes even do this involuntarily, so I guess it is quite a basic trait of the human mind.<br /><br />Binary numbers, however, have remained quite dry in this area. We don't really associate them with anything else, so they remain alien to us. Even experts who are constantly dealing with binary technology prefer to hide them or abstract them away. This alienation, combined with the increasing role of digitality in our lives, is the reason why I think there should be more exposure for the various branches of binary symbolism.<br /><br />In many cultures, binary symbolism has attained a role so central that people base their conceptions of the world on it. A lot of traditional Chinese cosmology is basically commentary on the I Ching. The Yoruba of West Africa use the eight-bit arrays of the Ifa system as "hash codes" to index their whole oral tradition. Some other West African peoples -- the Fon and the Ewe -- extend this principle far enough to give every person an eight-bit "kpoli" or "life sign" at birth.<br /><br />I guess the best way to bring some binary symbolism to our modern technological culture might be to use it in art -- especially the kind of art, such as pixel art, chip music and demoscene productions, that embraces the bits, bringing them forward instead of hiding them. This is still just a meta-level idea, however, and I can't yet tell how to implement it in practice. But once I've progressed with it, I'll let you know for sure!viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com3tag:blogger.com,1999:blog-1787947700033244607.post-85012173365166436322011-06-02T13:39:00.000+01:002011-06-06T20:00:29.286+01:00What should big pixels look like?There has been some fuss recently about <a href="http://www.popsci.com/technology/article/2011-05/new-algorithm-smooths-8-bit-pixel-art-cute-bubbly-vector-drawings">a new algorithm that vectorizes pixel art</a>. And yes, judging from the example pictures, this algorithm by Johannes Kopf and Dani Lischinski indeed seems to produce results superior to the likes of hq*x and scale*x or mainstream vectorization algorithms. Let me duplicate the titular example for reference:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-qkvVUV4fS_4/TeeHcrdcJbI/AAAAAAAAABE/ZMFyPLsHg0s/s1600/depix.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 292px; height: 248px;" src="http://4.bp.blogspot.com/-qkvVUV4fS_4/TeeHcrdcJbI/AAAAAAAAABE/ZMFyPLsHg0s/s320/depix.gif" alt="" id="BLOGGER_PHOTO_ID_5613604387312903602" border="0" /></a>Impressive, yes, but as with all such algorithms, the first question that came to my mind was: "But does it manage dithering and antialiasing?". The paper explicitly answers this question: no.<br /><br />All the depixelization algorithms so far have been successful only with a specific type of pixel art: pixel art of a cartoonish style that has clear lines and not too many details. This kind of pixel art may have been mainstream in Japan, but in the Western sphere, especially in Europe, there has been a strong tradition of optimalism: the tendency to maximize the amount of detail and shading within the limited grid of pixels. An average pixel artwork on the Commodore 64 or the ZX Spectrum has an extensive amount of careful manual dithering.
If we wish to find a decent general-purpose pixel art depixelization algorithm, it will definitely need to take care of that.<br /><br />I once experimented with writing an undithering filter that attempts to smooth out dithering while keeping non-dithering-related pixels intact. The filter works as follows:<br /><ul><li>Flag a pixel as a dithering candidate if it differs enough from its cardinal neighborhood (no more than one of the four neighbors is more similar to the reference pixel than the neighbor average is).</li><li>Extend the area of dither candidates: flag a pixel if at least five of its eight neighbors are flagged. Repeat until no new pixels are flagged.</li><li>For each flagged pixel, replace its color with the weighted average of all the flagged pixels within the surrounding 3x3 rectangle.</li></ul>
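As a minimal sketch of what this looks like in practice, here is a grayscale C version of the filter. The original operates on palette colors, and the description above leaves the exact thresholds and weights open, so the similarity test and the equal weighting below are my own interpretation:<br /><pre>
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;

static int absdiff(int a, int b) { return a > b ? a - b : b - a; }

/* Undither a w*h grayscale image of 0..255 values, in place.
   Border pixels are left untouched for simplicity. */
void undither(unsigned char *img, int w, int h)
{
    unsigned char *flag = calloc((size_t)w * h, 1);
    unsigned char *out  = malloc((size_t)w * h);
    int x, y, changed;

    /* Step 1: flag dithering candidates. A pixel qualifies if at most one
       of its four cardinal neighbors is more similar to it than the
       neighbor average is. */
    for (y = 1; y < h - 1; y++)
        for (x = 1; x < w - 1; x++) {
            int p = img[y * w + x];
            int n[4] = { img[(y - 1) * w + x], img[(y + 1) * w + x],
                         img[y * w + x - 1],  img[y * w + x + 1] };
            int avg = (n[0] + n[1] + n[2] + n[3]) / 4;
            int closer = 0, i;
            for (i = 0; i < 4; i++)
                if (absdiff(n[i], p) < absdiff(avg, p)) closer++;
            if (closer <= 1) flag[y * w + x] = 1;
        }

    /* Step 2: extend the candidate area. Flag a pixel if at least five of
       its eight neighbors are flagged; repeat until nothing changes. */
    do {
        changed = 0;
        for (y = 1; y < h - 1; y++)
            for (x = 1; x < w - 1; x++) {
                int count = 0, dx, dy;
                if (flag[y * w + x]) continue;
                for (dy = -1; dy <= 1; dy++)
                    for (dx = -1; dx <= 1; dx++)
                        if ((dx || dy) && flag[(y + dy) * w + x + dx]) count++;
                if (count >= 5) { flag[y * w + x] = 1; changed = 1; }
            }
    } while (changed);

    /* Step 3: replace each flagged pixel with the average of the flagged
       pixels in its 3x3 neighborhood (equal weights assumed here). */
    memcpy(out, img, (size_t)w * h);
    for (y = 1; y < h - 1; y++)
        for (x = 1; x < w - 1; x++)
            if (flag[y * w + x]) {
                int sum = 0, count = 0, dx, dy;
                for (dy = -1; dy <= 1; dy++)
                    for (dx = -1; dx <= 1; dx++)
                        if (flag[(y + dy) * w + x + dx]) {
                            sum += img[(y + dy) * w + x + dx];
                            count++;
                        }
                out[y * w + x] = sum / count;
            }
    memcpy(img, out, (size_t)w * h);
    free(flag);
    free(out);
}
</pre>Would it be possible to improve the performance of a depixelization algorithm by first piping the picture through my undithering filter? Let's try it out. Here is an example of how the filter manages with a fullscreen C-64 multicolor-mode artwork (from the demoscene artist Frost of Panda Design) and how the results are scaled by the hq4x algorithm:<br /><br /><a href="http://2.bp.blogspot.com/-rO_NTfM-MuQ/TeeGbBmEm-I/AAAAAAAAAA0/0WqaMhbkSxU/s1600/undither.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 260px; height: 320px;" src="http://2.bp.blogspot.com/-rO_NTfM-MuQ/TeeGbBmEm-I/AAAAAAAAAA0/0WqaMhbkSxU/s320/undither.png" alt="" border="0" /></a>The undithering works well enough within the smooth areas, and hq4x is even able to recognize the undithered areas as gradients and smooth them a little bit further. However, when looking at the border between the nose and the background, we'll notice careful manual antialiasing that even adds some lonely dithering pixels to smooth out the staircasing. My algorithm doesn't recognize these lonely pixels as dithering, and neither does it recognize the loneliest pixels in the outskirts of dithered gradients as dithering. It is a difficult task to algorithmically detect whether a pixel is intended as a dithering pixel or as a contour/detail pixel. Detecting antialiasing would be a totally different task, requiring a totally new set of processing stages.<br /><br />There still seems to be a lot of work to do. But suppose that, some day, we will discover the ultimate depixelization algorithm: an image recognition and rerendering pipeline that successfully recognizes and interprets contours, gradients, dithering, antialiasing and everything else in all conceivable cases, and rerenders it all in high resolution and color without any distracting artifacts. Would that be the holy grail? I wouldn't say so.<br /><br />The fact is that we already have the ultimate depixelization algorithm -- the one running in the visual subsystem of the human brain. It is able to fill in amazing amounts of detail when coping with low amounts of reliable data. It handles noisiness and blurriness better than any digital system. It can extrapolate very well from low-complexity shapes such as silhouette drawings or groups of blurry dots on a CRT screen.<br /><br />A fundamental problem with the "unlimited resolution" approach of pixel art upscaling is that it attempts to fill in details that aren't there -- a task in which the human brain is vastly superior. 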
Replacing blurry patterns with crisp ones can even effectively turn off the viewer's visual imagination: a grid of blurry dots on the horizon can be just about anything, but if they get algorithmically substituted by some sort of crisp blobs, the illusion disappears. I think it is outright stupid to waste computing resources and watts on something that kills the imagination.<br /><br />The reason why pixel art upscaling algorithms exist in the first place is that sharp rectangular pixels (the result of nearest-neighbor upscaling) look bad. And I have to agree with this. Too easily recognizable pixel boundaries distract the viewer from the art. The scaling algorithms designed for video scaling partially solve this problem with their interpolation, but the results are still quite bad for the opposite reason -- because there is no respect for the nature of the individual pixel.<br /><br />When designing a general-purpose pixel art upscaling algorithm, I think the best route would go somewhere between the "unlimited resolution" approach and the "blurry interpolation" approach. Probably something like CRT emulation with some tasteful improvements. Something that keeps the pixels blurry enough for the visual imagination to work while still keeping them recognizable and crisp enough so that the beauty of the patterns can be appreciated.<br /><br />Nevertheless, I was very fascinated by the Kopf-Lischinski algorithm -- not because of how it might improve existing art, but because of its potential to provide nice, organic and blobby pixels to paint new art with. A super-low-res pixel art painting program that implements this kind of algorithm would make a wonderful toy and perhaps even vitalize the pixel art scene in a new and refreshing way. Such a vitalization would also be a triumph for the idea of <a href="http://countercomplex.blogspot.com/2010/03/defining-computationally-minimal-art-or.html">Computationally Minimal Art</a> which I have been advocating.viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com2tag:blogger.com,1999:blog-1787947700033244607.post-30064869486202315912011-05-17T17:45:00.000+01:002011-05-17T17:55:05.496+01:00Is it possible to unite transhumanism and degrowth?I have always had mixed feelings about transhumanism. On one hand, the movement provides fresh ideas and great speculative material, but on the other hand, it seems to suffer from a kind of adolescent "nothing is enough" attitude on every possible level.<br /><br />Transhumanism, in short, advocates the use of technology for turning humans into something better: creatures with ridiculously long lifespans, ridiculous levels of intelligence and ridiculous amounts of pleasure in life. In order to endlessly improve all the statistics, the transcendent mankind needs more and more energy and raw material. One planet is definitely not enough -- we need to populate an ever bigger portion of the universe with ever bigger brainlike structures. To me, these ideas sound like an extreme glorification of our current number one memetic plague, the ideology of endless economic growth. What a pity, since some of the stuff does make some sense.<br /><br />Fortunately, there seems to be more room for diversity in transhumanist thought than that. As there are currents such as "Christian Transhumanism" or "Social-Democratic Transhumanism", would it be possible to devise something like "Degrowthian Transhumanism" as well? 
Something that denounces the growth ideology while still advocating scientific and technological progress in order to transform mankind into something better? This would be a form of transhumanist philosophy that even I might be able to appreciate and sympathize with. But could such a bastard child of two seemingly conflicting ideologies be anything other than oxymoronic and inconsistent? Let's find out.<br /><br />The degrowth movement, as the name says, advocates the contraction of economies by downscaling production, as it views the excessive production and consumption in today's societies as detrimental to both the environment and the quality of human life. Consumers need to exchange their materialist lifestyles for voluntary simplicity, and producers need to abandon things like planned obsolescence that artificially keep the production volumes up. Downscaling will also give people more free time, which can be used for noble quests such as charity and self-cultivation. These goals may sound agreeable on their own, even to a transhumanist, but what about technological progress? Downscaling the industries would also slow it down or even reverse it, wouldn't it?<br /><br />Actually, many technologies make it possible to do "more with less" and therefore to scale dependency networks down. Personal computers, for example, have successfully replaced an arsenal of special-purpose gadgets ranging from typewriters and television sets to expensive studio equipment. 3D printing technology will reduce the need for specialized mass production, and once we get nanobots to assist in it, we may not require mines or factories anymore. A degrowthian transhumanist may want to emphasize this kind of potential in emerging technologies and advocate their use for downshifting instead of upshifting. A radical one may even take the provocative stance that only those technologies that reduce overhead are genuinely progressive.<br /><br />A degrowthian transhumanist may want to advocate immaterial developments whenever possible: memetics, science, software. Information is much more lightweight to process than matter or energy and therefore an obvious point of focus for anyone who wants to support both degrowth and technological progress. Most people in our time do not understand how crucial software is in making hardware perform well, so a degrowthian transhumanist may want to shout it out every now and then. It is entirely possible that we already have the hardware for launching a technological singularity; we just don't have the software yet. We may all have savant potential in our brains; we just haven't found the magic formula to unleash it with. In general, we don't need new gadgetry as much as we need in-depth understanding of what we already have.<br /><br />In a downshifted society, people will have a lot of free time. A degrowthian transhumanist may therefore be more willing to adopt time-consuming methods of self-improvement than the mainline transhumanist who fantasizes about quick and easy mass-produced magic such as instant IQ pills. Wisdom is a classical example of a psychological feature that takes time to build up, so a degrowthian transhumanist may want to put a special emphasis on it. Using intelligence without wisdom may have catastrophic results, so we need superhuman wisdom to complement superhuman intelligence, and maybe even artificial wisdom to complement artificial intelligence. 
The quest for immortality is no longer just about an individual desire to live as long as possible, but about having individuals and societies that become wise enough to use their superhuman capacities in a non-catastrophic way.<br /><br />So, how would a race of degrowthian superhumans spend their lives? By reducing and reducing until there's nothing left but ultimate reclusion? I don't think so. The degrowth movement is mostly a reaction to the present state of the world and not a dogma that should be adhered to in its logical extreme. Once we have scaled our existence down to some kind of sustainable moderateness, we won't need to degrow any further. Degrowthian transhumans might therefore very well colonize a planet or moon every now and then; they just wouldn't regard expansion as an end in itself. In general, these creatures would be rather serious about the principles of moderation and the middle way in anything they do. They would probably also be more independent and self-sufficient than their extropian counterparts who, for their part, would have more brain cells, gadgets and space to toy around with.<br /><br />This was a thought experiment I carried out in order to clarify my own relationship with the transhumanist ideology. I tried to find a fundamental disagreement between the core transhumanist ideas and my personal philosophy but didn't find one. Still, I don't think I'll be calling myself a transhumanist (even a degrowthian one) any time soon; I just wanted to be sure how much I can sympathize with this bunch of freaks. I also considered my point of view fresh enough to write up and share with others, so there you are, hope you liked it.viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com1tag:blogger.com,1999:blog-1787947700033244607.post-21534485820178954772011-02-01T02:02:00.000+00:002011-02-01T03:08:46.647+00:00On electronic wastefulnessMany things are horribly wrong in this world.<br /><br />People are becoming more and more aware of this. Environmental and economic problems have strengthened the criticism of consumer culture, monetary power and political systems, and all kinds of countercultural movements are thriving. At the same time, however, ever more people are increasingly dependent on digital technology, which gets produced, bought, used and abandoned in greater masses than ever, causing an ever bigger impact on the world in the form of waste and pollution.<br /><br />Because of this, I have decided to finally summarize my thoughts on how digital technology reflects the malfunctions of our civilization. I became a hobbyist programmer as a schoolkid in the mid-eighties, and fifteen years later I became a professional software developer. Despite all this baggage, I'm going to attempt to keep my words simple enough for common people to understand. Those who want to be convinced by citations and technical argumentation will get them at some later time.<br /><br /><span style="font-size:180%;">Counter-explosion<br /></span><br />For over fifty years, the progress of digital technology has been following the so-called Moore's law, which predicts that the number of transistors that fit on a microchip doubles every two years or so. 
This means that it is possible to produce digital devices that are of the same physical size but have ever more memory, ever more processing speed and ever greater overall capabilities.<br /><br />Moore's law itself is not evil, as it also means that it is possible to perform the same functions with ever less use of energy and raw material. However, people are people and behave like people: whenever it becomes possible to do something more easily and less consumingly, they start doing more of this something. This phenomenon is called the "rebound effect", after a medical term of the same name. It can be seen in many kinds of things: less fuel-consuming cars make people drive more, and fewer calories in food make weight-losers eat more. The worst case is when the actual savings become negative: a thing that is supposed to reduce consumption actually increases it instead.<br /><br />In information technology, the most prominent form of the rebound effect is the bloating of software, which takes place at the same explosive rate as the improvement of hardware. This phenomenon is called Wirth's law. If we took a time machine ride back to 1990 and told the contemporaries that desktop computers would become a thousand times faster in twenty years, they would surely assume that almost anything would happen instantaneously with them. If we then corrected them by saying that software programs still take time to start up in the 2010s and that it is sometimes painful to tolerate their slowness and unresponsiveness, they wouldn't believe it. How is it even possible to write programs so poorly that they don't run smoothly on a futuristic computer that is a thousand times more powerful? This fact would become even harder to believe if we told them that it also applies to things like word processors, which are used for more or less exactly the same things as before.<br /><br />One reason for the unnecessary largeness, slowness and complexity of software is the dominant economic ideal of indefinite growth, which makes us believe that bigger things are always better and that it is better to sell customers more than they need. Another reason is that rapid cycles of hardware upgrade make software developers indifferent: even if an application program is mindlessly slow and resource-consuming even on the latest hardware, no one will notice it a couple of years later when the hardware is a couple of times faster. Nearly any excuse is valid for bloat. If it is possible to shorten software development cycles even slightly by stacking all kinds of abstraction frameworks and poorly implemented scripting languages on top of one another, it will be done.<br /><br />The bloat phenomenon annoys people more and more in their normal daily life, as all kinds of electric appliances, starting from the simplest flashlight, contain increasingly complex digital technology, which drowns the user in uncontrollable masses of functionality and strange software bugs. The digitalization of television, for example, brought a whole bunch of computer-style immaturity to the TV-watching experience. I've even seen an electric kitchen stove that didn't heat up until the user had first set the integrated digital clock. 
Diverse functionality itself is not evil, but if the mere existence of extra features disrupts the use of the basic ones, something is totally wrong.<br /><br />Even though many things in our world tend to swell and complexify, it is difficult to find a physical-world counterpart to software bloat, as the amount of matter and living space on our planet does not increase exponentially. It is not possible to double the size of one's apartment every two years in order to fit in more useless stuff. It is not possible to increase the complexity of official paperwork indefinitely, as it would require more and more food and accommodation space for the expanding army of bureaucrats. In the physical world, it is sometimes necessary to evaluate what is essential and how to compress the whole in order to fit more in. No such necessity exists in the digital world, however; there, it is possible to constantly inhale and never exhale.<br /><br /><span style="font-size:180%;">Disposability<br /></span><br />The prevailing belief system of today's world equates well-being with material abundance. The more production and consumption there is, the more well-being there is, and that's it. Even though the politicians in rich countries don't want to profess this belief so openly anymore, they still use concepts such as "gross national product", "economic growth" and "standard of living" which are based on the idealization of boundless abundance.<br /><br />As it is the holy responsibility of all areas of production to grow indefinitely, it is important to increase consumption regardless of whether it is sensible or not. If it is not possible to increase consumption in natural ways, planned obsolescence comes to the rescue. Some decades ago, people bought washing machines and television sets to last the following twenty years, but today's consumers have the "privilege" of buying at least four of both during the same timespan, as the lifespans of these products have been deliberately shortened.<br /><br />The scheduled breaking of electric appliances is now easier than ever, as most of them have an integrated microprocessor running a program of some kind. It is technically possible, for example, to hide a timer in this program, causing the device to either "break" or start misbehaving shortly after the warranty is over. This kind of sabotage may be beneficial for the sales of smaller and cheaper devices, but it is not necessary in the more complex ones; in their case, the bloated poor-quality software serves the same purpose.<br /><br />Computers get upgraded especially when the software somehow becomes intolerably slow or even impossible to run. This change can take place even if the computer is used for exactly the same things as before. Bloat makes new versions of familiar software more resource-consuming, and when familiar websites are redesigned, they tend to bloat up as well. In addition, some operating systems tend to slow down "automatically", but this is fortunately something that can be fixed by the user.<br /><br />The experience of slowness, in its most annoying form, is caused by overlong response times. The response time is the time between the user's action and the indication that the action has been registered. Whenever the user moves the mouse, the cursor on the screen must immediately match the movement. Whenever the user presses a letter key on the keyboard, the same letter must appear on the screen immediately. 
Whenever the user clicks a button on the screen, the graphic of the button must change immediately. According to usability research, the response time must be less than 1/10 of a second or the system feels laggy. When it takes more than a second, the user's blood pressure is already increasing. After ten seconds, the user is convinced that "the whole piece of junk has locked up".<br /><br />Slow response times are usually regarded as an indicator that the device is slow and that it is necessary to buy a new one. This is a misconception, however. Slow response times indicate nothing but an indifferent attitude to software design. Every computing device that has become available during the last thirty years is completely capable of delivering the response within 1/10 of a second in every possible situation. Despite this fact, the software of the 2010s is still usually designed in such a way that the response is provided once the program has first finished all the more urgent tasks. What is supposed to be more important than serving the user? In the mainframe era, there were quite a few such things, but in today's personal computing, this should never be the case. Fixing the response time problems would be a way to permanently make technology more comfortable to use as well as to help the users tolerate the actual slowness. The industry, however, is strangely indifferent to these problems. Response times are, from its point of view, something that "gets fixed" automatically at hardware upgrades, at least for a short while and in some areas.<br /><br />Response time problems are just a single example of how the industry considers it more important to invent new features than to fix problems that irritate the basic user. A product that has too few problems may make consumers too satisfied. So satisfied that they don't feel like buying the next slightly "better" model which replaces old problems with new ones. Companies that want to ensure their growth prefer to do everything multiple times in slightly substandard ways instead of seeking any kind of perfection. Satisfaction is the worst enemy of unnecessary growth.<br /><br /><span style="font-size:180%;">Is new hardware any better?<br /></span><br />I'm sure that most readers have at least heard about the problems caused by the rat race of upgrade and overproduction. The landfills in rich countries are full of perfectly functioning items that interest no one. Having anything repaired is stupid, as it is nearly always easier and cheaper to just buy new stuff. Selling used items is difficult, as most people won't accept them even for free. Production eats up more and more natural resources despite all the efforts of "greening up" the production lines and recycling more and more raw material.<br /><br />The role of software in the overproduction cycle of digital technology, however, is not so widely understood. Software is the soul of every microprocessor-based device, and it defines most of what it is like to use the device and how much of its potential can be used. Bad software can make even good hardware useless, whereas ingenious software can make even a humble device do things that the original designer could never have imagined. It is possible to both lengthen and shorten product lifetimes via software.<br /><br />New hardware is often advertised with new features that are not actually features of the hardware but of the software it runs. 
Most of the features of the so-called "smartphones", for example, are completely software-based. It would be perfectly possible to rewrite the software of an old and humble cellphone in order to give it a bunch of features that would effectively turn it into a "smartphone". Of course, it is not possible to do complete impossibilities with software; there is no software trick that makes a camera-less phone take photos. Nevertheless, the general rule is that hardware is much more capable than its default software. The more the hardware advances, the more contrast there is between the capabilities of the software and the potential of the hardware.<br /><br />If we consider the various tasks for which personal computers are used nowadays, we will notice that only a small minority of them actually requires a lot from the hardware. Of course, bad software may make some tasks feel more demanding than they actually are, but that's another issue. For instance, most of the new online services, from Facebook to Youtube and Spotify, could very well be implemented so that they run on the PCs of the late 1990s. Actually, it would be possible to make them run more smoothly than the existing versions run on today's PCs. Likewise, with better operating systems and other software, we could make the same old hardware feel faster and more comfortable to use than today's hardware. From this we can conclude that the computing power of the 2000s is neither useful, necessary nor pleasing for most users. Unless we count the pseudo-benefit that it makes bad and slow software easier to tolerate, of course.<br /><br />Let us now imagine that the last ten years in personal computing had gone a little bit differently -- that most of the computers sold to the great masses had been "People's Computers" with a fixed hardware setup. This would have meant that the hardware performance remained constant for the last ten years. The 2011 of this alternate universe would probably be somewhat similar to our 2011, and some things could even be better. All the familiar software programs and on-line services would be there, they would just have been implemented more wisely. The use of the computers would have become faster and more comfortable over the years, but this would have been due to the improvement of software, not hardware. Ordinary people would never need to think about "hardware requirements", as the fixedness of the hardware would ensure that all software, services and peripherals work. New computers would probably be lighter and more energy-efficient, as the lack of competition in performance would have moved the competition to these areas. These are not just fringe utopian ideas; anyone can draw similar conclusions by studying the history of home computing, where several computer and console models have remained constant for ten years or more.<br /><br />Of course it is easy to come up with ideas of tasks that demand more processing power than what was available to common people ten years ago or even today. A typical late-1990s desktop PC, for example, plays ordinary DVD-quality movies perfectly but may have major problems with the HD resolutions that are fashionable in the early 2010s. Similarly, by increasing the numbers, it is possible to come up with imaginary resolutions that are out of the reach of even the most expensive special-purpose equipment available today. 
For many people, this is exactly what technological progress means -- an increase in numerical measures, the possibility to do the same old things on ever greater scales. When a consumer replaces an old TV with a new one, he or she gets a period of novelty vibes from the more magnificent picture quality. After a couple of years, the consumer can buy another TV and get the novelty vibes once again. If we had access to unlimited natural resources, it would be possible to go on with this vanity cycle indefinitely, but still without improving anyone's quality of life to any considerable extent.<br /><br />Most of the technological progress facilitated by the personal computing resources of the 2000s has been quantitative -- doing the same old stuff that became possible in the 1990s but with bigger numbers. Editing movies and pictures that have ever more pixels, running around in 3D video game worlds that have ever more triangles. It is difficult to even imagine a computational task relevant to an ordinary person that would require the number-crunching power of a 2000s home computer due to its nature alone, without any quantitative exaggeration. This could very well be regarded as an indicator that we already have enough processing power for a while. The software and user culture are lagging so far behind the hardware improvements that it would be better to concentrate on them instead and leave the hardware in the background.<br /><br /><span style="font-size:180%;">Helplessness<br /></span><br />In addition to the senseless abundance of material items, today's people are also disturbed by a senseless abundance of information. Information includes not only the ever-expanding flood of video, audio and text coming from the various media, but also the structural information incorporated in material and immaterial things. The expansion of this structural information manifests as the increasing complexity of everything: consumer items, societal systems, cultural phenomena. Those who want to understand the tools they use and the things that affect their lives must absorb ever greater amounts of structural information about them. Many people have already given up on understanding and just try to get along.<br /><br />Many frown upon people who can't boil an egg or drive a nail into a wall without a special-purpose egg-boiler or nailgun, or who are not even interested in how the groceries come to the store or the electricity to the wall socket. However, the expanding flood of information and the complexification of everything may eventually result in a world where neo-helplessness and poor common knowledge are the normal condition. In computing, complexification has already gone so far that even many experts don't dare to understand how the technology works but prefer to guess and randomize.<br /><br />Someone who wants to master a tool must build a mental model of its operation. If the tool is a very simple one, such as a hammer, the mental model builds up nearly automatically after a very short study. If someone who uses a hammer accidentally hits their finger with it, they will probably blame themselves instead of the hammer, as the functionality of a hammer can be understood perfectly even by someone who is not so capable of using it. However, when a computer program behaves against the user's will, the user will probably blame the technology instead of themselves. 
In situations like this, the user's mental model of how the program works does not match its actual functionality.<br /><br />The more bloated a software program is, the more effort the user needs to invest in order to build an adequate mental model. Some programs are even marketing-minded enough to impose their new and glorious features on the user. This doesn't help at all in forming the mental model. Besides, most users don't have the slightest interest in extensive exploration but rather use a simple map and learn to tolerate the uncertainty caused by its rudimentary nature. When we also consider that programs may change their functionality quite a lot between versions, even enthusiasts will turn cynical and frustrated when their precious mental maps become obsolete.<br /><br />Many software programs try to fix the complexity problem by increasing the complexity instead of decreasing it. This mostly manifests as "intelligence". An "intelligent" program monitors the user, guesses their intent and possibly suggests various courses of action based on those guesses. For example, a word processor may offer help in writing a letter, or a file manager may suggest things to do with a newly inserted memory stick. The users are offered all kinds of controlled ready-made functionality and "wizards" even for tasks they would surely prefer to do by themselves, at least if they had a chance to learn the normal basic functionality. If the user is forced to use specialized features before learning the basic ones, he or she will be totally helpless in situations where a special-purpose feature for the particular function does not exist. Just like someone who can use egg-boilers and nailguns but not kettles or hammers.<br /><br />Technology exists to make things easier to do and to facilitate otherwise impossible tasks. However, if a technological appliance becomes so complex that its use is more like random guessing than goal-oriented controlling, we can say that the appliance no longer serves its purpose and that the user has been taken over by technology. For this reason, it is increasingly important to keep things simple and controllable. Simplicity, of course, does not mean mere superficial pseudo-simplicity that hides the internal complexity, but the avoidance of complexity on all levels. The user cannot be in full control without having some kind of an idea about what the tool is doing at any given time.<br /><br />In software, it may be useful to reorder the complexity so that there is a simple core program from which any additional complexity is functionally separated until the user deliberately activates it. This would make programs feel reliable and controllable even with simple mental maps. An image processing program, for example, could resemble a simple paint program at its core level, and its functionality could be learned perfectly after a very short testing period. All kinds of auxiliary functions, automations and other specialities could be easily found if needed, and the user could extend the core with them depending on their particular needs. Still, their existence would never disturb those users who don't need them. Regardless of the level of the user, the mental map would always match how the program actually works, and the program would therefore never surprise the user by acting against his or her expectations.<br /><br />Software is rarely built like this, however. 
There is not much interest in the market for movements that make technology genuinely more approachable and comprehensible. Consumer masses who feel helpless with regard to technology are, after all, easier to control than masses of people who know what they are doing (or at least think so). It is much more beneficial for the industry to feed the helplessness by drowning people in trivialities, distancing them from the basics and perhaps even subjecting them to the power of an all-guessing artificially-intelligent assistant algorithm.<br /><br /><span style="font-size:180%;">Changing the world<br /></span><br />I have now discussed all kinds of issues, for which I have mostly blamed bad software, and for whose badness I have mostly blamed the economic system that idealizes growth and material abundance. But is it possible to do something about these issues? If most of the problems are indeed software-related, then couldn't they be resolved by producing better software, perhaps even outside of the commercial framework if necessary?<br /><br />When calling for a counter-force to commercial software development, the free and open-source software (FOSS) movement is most commonly mentioned. FOSS has mostly been produced as volunteer work without monetary income, but as the results of the work can be freely duplicated and used as the basis of new work, it has managed to make a much greater impact than volunteer work usually does. The greatest impact has been among technology professionals and hobbyists, but even laypeople may recognize names such as Linux, Firefox and OpenOffice (the latter two of which originated as proprietary software, however).<br /><br />FOSS is not bound to the requirements of the market. Even in cases where it is developed by corporations, people operating outside the commercial framework can contribute to it and base new projects on it. FOSS therefore has, in theory, the full potential of being independent of all the misanthropic design choices caused by the market. However, FOSS suffers from most of these problems just as much as proprietary software does, and it even has a whole bunch of its own extra problems. Reasons for this can be found in the history of the movement. Since the beginning, the FOSS movement has mostly concentrated on cloning existing software without spending too much energy on questioning the dominant design principles. The philosophers of the movement tend to be more concerned about legal and political issues than technical ones: "How can we maximize our legal rights?" instead of "How should we design our software so that it benefits the whole of humanity instead of just the expert class?"<br /><br />I am convinced that FOSS would be able to give the world much more than it already has if it could form a stronger contrast between itself and the growth-centric industry. In order to strengthen the contrast, we need a powerful manifesto. This manifesto would need to profoundly denounce all the disturbances to technological progress caused by the growth ideology, and it would need to state the principles on which software design should be based in order to benefit human beings and nature in the best possible way. Of course, this manifesto wouldn't exist only for reinventing the wheel, but also for re-evaluating existing technology and redirecting its progress towards the better.<br /><br />But what can ordinary people do? 
Even a superficial awareness of the causes of problems is better than nothing. One can easily learn to recognize many types of problems, such as those related to response times. One can also learn to blame the right thing instead of superficially complaining that "the computer is slow" or "the computer is misbehaving". Changes in language are also a nice way of spreading awareness. If people in general learned to blame software instead of hardware, then they would probably also learn to demand software-based solutions for their problems instead of needlessly purchasing new hardware.<br /><br />When hardware purchases are justifiable, those concerned about the environment will prefer second-hand hardware to new, as long as there is enough power for the given purposes. It is a common misconception to assume that new hardware always consumes less power than old -- actually, the trend has more often been exactly the opposite. During the ten-year period from the mid-1990s to the mid-2000s, for example, the power consumption of a typical desktop PC (excluding the monitor) increased tenfold, as the industry was more zealous to increase processing power than to improve energy efficiency. Power consumption curves for video game consoles have been even steeper. Of course, there are many examples of positive development as well. For example, CRT screens are worth replacing with similarly-sized LCD screens, and laptops also typically consume less than comparable desktop PCs.<br /><br />There is a strong market push towards discontinuing all kinds of service and repair activity. Especially in the case of cellphones and other small gadgets, "service" more and more often means that the gadget is sent out to the manufacturer, which dismantles it for raw material and sends a new gadget to the customer. For this reason, it may be reasonable to consider the feasibility of do-it-yourself repair when choosing a piece of hardware. As all forms of DIY culture seem to be waning due to a lack of interest, it is worthwhile to support them in all possible ways in order to ensure that there will still be someone in the future who can repair something.<br /><br />Of course, we all hope that the world will change in such a way that the human- and nature-friendly ways of doing things will always be the most beneficial ones, even in "the reality of numbers and charts". Such a change will probably take longer than a few decades, however, regardless of the volume of the political quarrel. It may therefore not be wise to wait indefinitely for the change of the system, as it is already possible to participate in practical countercultural activity today. Even in things related to digital technology.viznuthttp://www.blogger.com/profile/06927455242083569579noreply@blogger.com5tag:blogger.com,1999:blog-1787947700033244607.post-76237609359743173932010-09-03T16:45:00.000+01:002011-02-01T14:23:31.495+00:00The Future of Demo Art: The Demoscene in the 2010s<p>Written by Ville-Matias Heikkilä a.k.a. viznut/pwp, released on the web on 2010-09-03. Also available <a href="http://pelulamu.net/countercomplex/the_future_of_demo_art/viznut-tfoda.pdf">in PDF format</a>.</p><h2>Introduction</h2>The end of a decade is often regarded as the end of an era. Around the new year 2009-2010, I was thinking a lot about the future of demo art, which I have been involved with since the mid-nineties. 
The mental processes that led to this essay were also inspired by various recent events, such as the last <a href="http://breakpoint.untergrund.net/">Breakpoint party</a> ever, as well as Markku Reunanen's licentiate <a href="http://www.kameli.net/demoresearch2/reunanen-licthesis.pdf">thesis on the demoscene</a>.<br /><br />First of all, I want to make it clear that I'm not going to discuss "the death of the scene". It's not even a valid scenario for me. The demo culture is already 25 years old, and during these years it has shown its ability to adapt to the changes in its technological and cultural surroundings, so it's not very wise to question this ability. Instead, I want to speculate on what kind of changes might take place during the next ten years. What is the potential of the artform in the 2010s, and what kind of challenges and opportunities is it going to face?<br /><h2>After the nineties</h2>Back in the early nineties, demo art still represented the technological cutting edge in what home computers were able to show. You couldn't download and play back real-life music or movies, and even if you could, the quality was poor and the file sizes prohibitive. It was possible to scan photographs and paintings, but the quality could still be tremendously improved with some skilled hand-pixelling. Demos frequently showed things that other computer programs, such as video games, did not, and this made them hot currency among masses of computer hobbyists far beyond the actual demoscene. As a result, the subculture experienced a constant influx of young and enthusiastic newcomers who wanted to become kings of computer art.<br /><br />After the nineties, the traditional weapons of the demoscene became more or less ineffective. Seeing a demo on a computer screen is no longer a unique experience, as demos have the whole corpus of audiovisual culture to compete with. Programming is no longer a fashionable way of expressing creativity, as there is ready-made software easily available for almost any purpose. The massive, diverse hordes of the Internet make you feel small in comparison; the meaning of life is no longer to become a legend, but to sit in your own subcultural corner with an introvert attitude of "you make it, you watch it". Young and enthusiastic people interested in arts or programming have hundreds of new paths to choose from, and only a few pick the good, old and thorny path of demomaking.<br /><br />There are many people who miss the "lost days of glory" of their teens. To them, demos have lost their "glamor" and are now becoming more and more irrelevant. I see things a little differently, however.<br /><br />Consider an alternative history where the glamor was never lost, and the influx of enthusiastic teenagers always remained constant. Year after year, you would have witnessed masses of newbies making the same mistakes all over again. You would also have noticed that you are "becoming too old for this shit" and looked for a totally different channel for your creativity. The average career of a demo artist would thus have remained quite short, so there would never have been veteran artists with strong and refined visions, and thus no chance for the artform to grow up. Therefore, I don't see it as a bad thing at all that demos are no longer as fashionable as they used to be.<br /><br />There have been many changes in the demo culture during the last ten years. 
Most of them can be thought of as adaptations to the changing social and technological surroundings, but you can also think of them as belonging to a growth process. As your testosterone levels have lowered, you are no longer as arrogant about your underground trueness as you used to be. As you have gathered more experience and wisdom about life and the world, you can appreciate the diversity around you much better than you used to. More outreach and less fight, you know.<br /><br />When thinking about the growth process, one should also consider how the relationship between the demoscene and the technology industry has changed. In the eighties, it was all about piracy. In the nineties, people forgot about the piracy and started to dream about careers in the software industry. Today, most sceners already have a job, so they have started to regard their free-time activity as a relief from their career rather than as something that would support it.<br /><br />Especially those who happen to be coders "on both sides" tend to have an urge to separate the two worlds in some way or another by emphasizing the aspects that differentiate democoding from professional programming. You can't be very creative, independent, experimental or low-level in most programming jobs, so you'll want to be that in your artistic endeavours. You may want to choose totally different platforms, methods and technical approaches so that your leisure activity actually feels like leisure activity.<br /><br />Thus, although many demosceners work in the software industry, the two worlds seem to be drifting apart. And it is not just because of the separation of work and free time, but also because of the changes in the industry and the world in general.<br /><br />Although the complexity of everything in human culture has been steadily increasing for a couple of centuries already, there has been a very dramatic acceleration during the past few decades, especially in technology. This means, among other things, that there are more and more prepackaged black boxes and less and less room for do-it-yourself activities.<br /><br />Demo art was born in a cultural environment that advocated hobbyist programming and thorough bitwise understanding of one's gear. The technical ambitions of democoders were in complete harmony with the mainstream philosophy of that era's home computing. During the following decades, however, the mainstream philosophy degraded from do-it-yourself into passive consumerism, while the demoscene continued to cultivate its original values and attitudes. So, like it or not, demos are now in a "countercultural" zone.<br /><br />While demos have less and less appeal to the mainstream industry where the "hardcore" niches are gradually disappearing, they are becoming increasingly interesting to all kinds of starving artists, grassroots hippies, radical do-it-yourself guys and other "countercultural" people. And if you want your creative work to make any larger-scale sense in the future world, I guess it might be worthwhile to start hanging around with these guys as well.<br /><h2>Core Demoscene Activity</h2>The changes during the last ten years have made demoscene activity somewhat vague. In the nineties, you basically made assembly code, pixel graphics and tracker music, and that was it. The scene was the secret cult that maintained the highest technical standards in all of these "underground" forms of creativity. 
Nowadays, everyone you know uses computers for creativity, some of them even being better at it than you, and most computer-aided creativity falls under some valid competition category at demoparties. Almost any deviantART user could submit their work to an average graphics compo, and sometimes even win it. As almost anything can be a "demoscene production", being a "demoscener" is no longer about what your creative methods are like, but about whom you hang around with.<br /><br />When talking about demo art, it is far too easy to concentrate on the social background ("the scene") instead of the actual substance of the artform and the kind of activity that makes it unique. For the purposes of this essay, I have therefore attempted to extract and define something that I call "Core Demoscene Activity". It is something I regard as the unique essence of demo art, the pulsating heart that gives it its life. All the other creative activities of demo art stem from the core activity, either directly or indirectly.<br /><br />When defining "core demoscene activity", we first need to define what it isn't. The first things to rule out are the social aspects, such as participating in demoscene events. These are important in upholding the social network, but they are not vital for the existence of demos. Making demos is supposed to be the reason for attending parties, not the other way around.<br /><br />The core activity is not just "doing creative things with a computer" either. Everyone does that, even your mother. And not even "making non-interactive realtime animations", as there are other branches of culture that do the same thing -- the VJ and machinima communities, for example. Demos do have their own esthetic sensibilities, yes, but we are now looking for something more profound than that.<br /><p>What is most essential, in my opinion, is the program code. And not just any tame industry-standard code that fulfills some given specifications, but wild and experimental code that opens up new and unpredicted possibilities. Possibilities that are simply out of the reach of existing software tools. Although there are other areas of computer culture that practise uncompromising hard-core programming, I think the demoscene approach is unique enough to work as the basis of a comprehensive definition.</p>The core activity of the demoscene is very technical. Exploration and novel exploitation of various possible hardware and software platforms. Experimentation with new algorithms, mathematical formulas and novel technical concepts. Stretching the expressive power of the byte. You can remove musicians, graphicians and conceptual experimenters, but you cannot remove hardcore experimental programming without destroying the essence of demo art.<br /><br />The values and preferences of demoscene-style programming are very similar to those of traditional hackers (of the MIT tradition). A major difference, however, seems to be that a traditional hacker determines the hack value of a program primarily by looking at the code, while a demo artist primarily looks at the audiovisual output. An ingenious routine alone is not enough; it must also be presented well, so that non-programmers are also able to appreciate the hack value. A lot of effort is put into presentational tweaking in order to maximize the audiovisual impact. 
This relationship between code and presentation is another unique thing in demo art.<br /><br />Here is a short and somewhat idealized definition of "Core Demoscene Activity":<br /><ul><li>Core Demoscene Activity is the activity that leads to the discovery of new techniques to be used in demo art.</li><li>Everything in Core Demoscene Activity needs to directly or indirectly support the discovery of new kinds of audiovisual output. Either something not seen on your platform before, or something not seen anywhere before.</li><li>The exploration should ideally concentrate on things that are beyond the reach of existing software tools, libraries or de-facto standard methods. This usually requires a do-it-yourself approach that starts from the lowest available level of abstraction.</li><li>General-purpose solutions and reusable code are never required on this level, so they should not interfere with the research. Rewrite from scratch if necessary.</li></ul><p>Of course, the core activity alone is not enough, as the new discoveries need to be incorporated into actual productions, which also often include a lot of content created with non-programmatic methods. So, here is a four-level scheme that classifies the various creative activities of demo art based on their methodological distance from the "core". Graphically, this could be presented as nested circles. Note that the scheme is not supposed to be interpreted as a hierarchy of "eliteness" or "trueness"; it is just one possible way of talking about things.</p><ul><li>First Circle / Core Demoscene Activity: Hardcore experimental programming. Discovery of new techniques, algorithms, formulas, theories, etc., which are put to use on the Second Circle.</li><li>Second Circle Activity: Application-level programming. Demo composition, presentational tweaking of effect code, content creation via programming, development of specialized content creation tools (trackers, demomakers, softsynths), etc.</li><li>Third Circle Activity: Content creation with experimental, specialized and "highly non-standard" tools. Musical composition with trackers, custom softsynths or chip music software; pixel and character graphics; custom content creation software (such as demomakers), etc.</li><li>Fourth Circle Activity: Content creation with "industry-standard tools", including high-profile software and "real-life" instruments. Most of the bitmap graphics, 3D modelling and music in modern "full-size" demos have been created with fourth-circle techniques. Design/storyboard work also falls in the fourth circle. Blends rather seamlessly with mainstream computer-aided creativity.</li></ul><p>It should be noted that the experimental or even "avant-garde" attitude present in the Core Activity can also be found on the other levels. This also makes the Fourth Circle important: while it is possible to do conceptual experimentation on any level, general-purpose industry-standard tools are often the best choice when trying out a random non-technical idea.</p>The four-circle scheme seems to be applicable to some other forms of digital art as well. In the autumn of 2009, the discovery of the Mandelbulb, an outstanding 3D variant of the classic Mandelbrot set, inspired me to look into the fractal art community. The mathematical experimentation that led to the discovery of the Mandelbulb formula was definitely a kind of "core activity". 
Some time later, an "easy-to-use" rendering tool called "Mandelbulber" was released to the community in what I would classify as "second-circle" activity. The availability of such a tool made it possible for the non-programmers of the community to use the newly discovered mathematical structure in their art, in activities that fall on the third and fourth circles.<br /><br /><h2>Is it only about demos?</h2>The artistic production central to demo culture is, obviously, the demo. According to the current mainstream definition, a demo is a stand-alone computer program that shows an audiovisual presentation, a couple of minutes long, using real-time rendering. It remains exactly the same from run to run, and you can't interact with it. But is this all? Is there something that demo artists can give to the world besides demos?<br /><br />I'm asking this for a reason. The whole idea of a demo, defined in this way, sounds somewhat redundant to laymen. What is the point in emphasizing real-time rendering in something that might just as well be a prerendered video? Isn't it kind of wasteful to use a clever technical discovery only to show a fixed set of special cases? In order to let the jewels of Core Demoscene Activity shine in their full splendor, there should be a larger range of equally glorified ways of demonstrating them. Such as interactive art. Or dynamic non-interactive art. Maybe games. Virtual toys. Creative toys or games. Creative tools. Or something in the vast gray areas between the previously mentioned categories.<br /><br />The idea of a "non-interactive realtime show" is, of course, tightly knit with the standard format of demoparty competitions. Demos are optimized for a single screening for a large audience, and it is therefore preferable to fix as many things as possible beforehand. Realtime rendering wasn't enforced as a rule until the video playback capabilities of home computers had become decent enough to be regarded as a threat to the dominance of hardcore program code.<br /><br />But it's not all about party screenings. There are many other types of venues in the world, and there are, for example, people who still actually bother to download demoscene productions for watching at home. These people may even desire more from their downloaded programs than just a couple of minutes of entertainment. There may be spectators who, for example, would like to create their own art with the methods used in the demo. Of the categories mentioned before, I would therefore like to elevate creative toys and tools to a special position.<br /><br />It has been proven that creative tools originating in the demoscene may give rise to completely new creative subcultures. Take trackers, for example. The PC tracker scene of the nineties was much wider than the demoscene that gave it the tools to work with. In the vast mosaic of today's Internet world, there is room for all kinds of niches. Release a sufficiently interesting creative tool, and with some luck, you'll inspire a bunch of freaks to find their own preferred means of creativity. The freaks may even form a tight-knit community around your tool and raise you to a kind of legend status you can't achieve with demo compo victories alone.<br /><br />Back in the testosterone-filled days, you frowned upon those who used certain creative tools without understanding their deep technicalities. But nowadays, you may already realize the importance of "laymen" exploring the expressive possibilities of your ingenious routine or engine. 
If you are turned off by the fact that "everyone" is able to (ab)use your technical idea, you should move on and invent an even better one. The Core Activity is about the continuous pushing of boundaries, not about jealously clinging to your invention for as long as you can.<br /><br />Now, is there a risk that the demoscene will "bland out" if "non-demo productions" receive as much praise and glory as the "actual" demos? I don't think so. To me, what defines the demoscene is the Core Activity, not the "realtime non-interactive production". As long as you nurture the hardcore spirit, it manifests itself in all kinds of things you produce, regardless of how static, realtime, bouncy or cubistic they are.<br /><br /><h2>Parties and social networks</h2>An important institution in keeping demo culture alive is the demoparty. It both strengthens the social bonds and motivates the people involved to create and release new material. Of course, extensive remote communication has always been there, but flesh-and-blood meetings are what strengthen the relationships into ones that span years and decades.<br /><p>As there are so many people who have deeply dedicated themselves to demo art for so many years, I am convinced that there will be demoscene parties in 2020 as well. Only a global disaster of apocalyptic scale can stop them from taking place.</p>While pure insider parties may be enough for keeping the demoscene alive, they are not enough for keeping it strong and vital. There is a need for fruitful contacts between demo artists and other relevant people, such as other kinds of artists and potential newcomers. High-profile mainstream computer parties, such as Assembly, have been successful in establishing these contacts in the past, but much of the potential for success has faded out during the last decade, as the average demo artist has less and less in common with the average Assembly visitor.<br /><br />I think it is increasingly vital for demo artists to actively establish connections with other islets of creative culture they can relate to. The other high-profile Finnish demoparty, Alternative Party, has been very adventurous in this area. Street and museum exhibitions that bring demo art to "random" people may be fruitful as well, even in surprising ways. When looking for contacts, restricting oneself to "geeky subcultures" is not very relevant anymore, as everyone uses computers and digital storage formats nowadays, and being creative with them -- even in ways relevant to demo art -- does not require unusual levels of technological obsession.<br /><br />Crosscultural contacts, in general, have the potential of giving demosceners more room to breathe. While a typical demoparty environment strongly encourages a specific type of artwork (i.e. demos), other cultural contexts may inspire demo artists to create totally different kinds of artifacts. I'm also sure that many experimental artists would be happy to try out some unique creative tools that the demo community may be able to give them, so the collaboration may work well in both directions.<br /><br /><h2>Real and virtual platforms</h2>The relationship between demo artists and computing platforms has changed dramatically during the past ten years. Back in the nineties, you had a limited number of supported platforms with separate scenes and competitions. Nowadays, you can choose nearly any hardware or software platform you like, and different platforms often share the same competitions.
Due to the existence of decent emulators and easy video captures, the scene is no longer divided by gear ownership. Anyone can watch demos from any platform, or even try to develop for almost any platform without owning the real hardware. Also, as the average age of demosceners has risen, platform fanboyism is now far less common.<br /><p>This freedom is not as complete as it could be, however. There are people who build their own demo hardware, and they are praised for it, but what about creating your own entirely software-based "virtual platforms"? Most demo artists don't even think about this idea. Of course, there are many coders who have created ad-hoc integrated virtual machines in order to, for example, improve the code density in 4K demos, but "actual" platforms are still something that need to be defined by the industry. In the past, it could even take quite a tedious process before a new hardware platform became accepted by the community.</p>So, why would we need virtual platforms in the first place? Let's talk about the expressive power of chip music, for example. There are various historical soundchips that have different sets of features and limitations, and after using several of them, a musician may not be completely satisfied by any single chip. Instead, he or she may imagine a "perfect soundchip" that has the exact combination of features and limitations that inspires him or her in the best possible way. It may be a slight improvement of a favorite chip or a completely new design. Still, someone who composes for a virtual chip rather than an authentic historical chip may not be regarded as very "true"; a certain history-fetishism still discourages this kind of activity. In my earlier <a href="http://www.blogger.com/computationally-minimal-art/">essay about Computationally Minimal Art</a>, however, I expressed my belief that the historical timeline will lose its meaning in the near future. This will make "non-historical experimentation" more acceptable.<br /><br />It is already relatively acceptable to run demos with emulators instead of real hardware, even in competitions, so I think it's only a matter of time before completely virtual platforms (or "fake emulators") become common. For many, this will be a blessing. Artists will be happier and more productive working with instruments that lack the design imperfections they used to hate, and the audience will be happier as it gets new kinds of esthetic forms to appreciate.<br /><br />Virtual platforms may also introduce new problems, however. One of them is that none of the achieved technical feats can be appreciated if the platform is not well understood by the audience: if you participate in a 256-byte competition with a demo written for your own separate virtual machine, it is always reasonable for the spectator to assume that you have cheated by transferring logic from the demo code into the virtual machine implementation. You could, for example, put an entire music synthesizer in your virtual machine and just use a couple of bytes in the demo code to drive it. If you want your technical feats appreciated, the platform needs to pass some kind of a community acceptance process beforehand.
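<br /><br /><p>To make the concern concrete, here is a deliberately unfair toy virtual machine, sketched in C with opcodes invented purely for illustration (it does not correspond to any real demo platform). The two-byte "demo" gets a full soundtrack for free, because the synthesizer lives inside the VM rather than in the demo code:</p><pre>
#include &lt;stdio.h&gt;

/* Opcode 1 emits one character; opcode 2 invokes an entire
   "synthesizer" baked into the VM itself. */
static void builtin_synthesizer(void)
{
    /* Imagine hundreds of lines of music synthesis code here. */
    puts("[an entire musical score plays]");
}

static void run(const unsigned char *code, int len)
{
    for (int pc = 0; pc &lt; len; ) {
        switch (code[pc++]) {
        case 1: putchar(code[pc++]); break;   /* print a character    */
        case 2: builtin_synthesizer(); break; /* all the hidden logic */
        default: return;                      /* halt on unknown op   */
        }
    }
}

int main(void)
{
    /* The whole "size-limited demo" is two bytes long. */
    static const unsigned char demo[] = { 2, 0 };
    run(demo, sizeof demo);
    return 0;
}
</pre><p>Nothing in the two bytes hints at the mass of logic hiding behind opcode 2 -- which is exactly why the platform itself needs scrutiny.</p>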
On the other hand, virtual platforms may eventually become mandatory for certain technical feats. It is already difficult in modern operating systems, for example, to create very small executables that access the graphics and sound hardware. As the platforms "improve", it may eventually become impossible to do certain things from within, say, a four-kilobyte executable. In cases like this, the community may need to solve the problem with a commonly accepted "virtual platform", i.e. a loader that allows running executables given in a format that has less overhead. Such a loader may also be used for fixing various compatibility problems that are certain to arise when new versions of operating systems come out.<br /><br />Within a few years, we may have a plethora of virtual machines attempting to represent "the ultimate demo platform". There will be a need for classifying these machines and deciding on their validity in various technical competitions. Despite all the possible problems and controversies they are going to introduce, I'm going to embrace their arrival.<br /><br />But what about actual hardware platforms, then? I guess there won't be as much of a difference by 2020 anymore. FPGA implementations of classic hardware have already been available for several years, and I assume it won't take long before it is common to synthesize both emulators and physical hardware from the same source code. Once we reach the point where it is easy for anyone to use a printer-type device to produce a piece of hardware from a downloadable file, I don't think it'll really matter much to anyone whether something is running virtually or physically.<br /><br />Regarding the next decade of the mainstream hardware industry, I think the infamous Moore's law makes it all quite predictable and obvious: things that were not previously possible in real time will be easy to do in real time. There will be smaller video projectors and all that. Mobile platforms will be as powerful as today's high-end PCs, so you won't be able to get "oldschool kicks" from them anymore. If you want such kicks from an emerging technology, you won't have many niches left; conductive ink may be one of the last possibilities. Before 2020, your local grocery store will probably be selling milk in packages that have ink-based circuits displaying animations, and before that happens, I'm sure that the demoscene will be having lots of fun with the technology.<br /><h2>Paths of initiation</h2>It is already a commonly accepted view that the demoscene needs newcomers to remain vital, and that they need to be actively recruited since the influx is no longer as overwhelming as it used to be. This view represents a dramatic change from the underground-elitist attitudes of the nineties, when potential newcomers were often forced through a tight social filter that was supposed to separate the gifted individuals from the "lamers". Requiring guidance was a definite sign of weakness; if you couldn't figure out the path of initiation on your own, no one was going to help you. You simply got stuck in the filter and never got in.<br /><br />In my experience, it is not very difficult to get people interested in demo art as long as you manage to pull the right strings. It is also relatively easy to get them to participate in demoscene events. But getting them involved in the various creative activities is a much more complex task, especially when talking about the inner-circle activities that require programming. It is not about a lack of will or determination but more about uncertainty over how to get started.<br /><br />A lot of consideration should be put into the paths of initiation during the coming decade.
Instead of generalizing from their own past experiences, recruiters should listen to the stories of recent newcomers. What kind of paths have they taken? What kind of niches have they found relevant? What have been the most difficult challenges in getting involved? Success stories and failure stories should both be listened to.<br /><br />I'm now going to present some of my own ideas and observations about how democoder initiation works in today's world and how it does not. These are all based on my personal experiences with recent newcomers, not on any objective research, so feel free to disagree.<br /><br />First, I want to outline my own theory about programming pedagogy. This is something I regard as a meaningful "hands-on" path for hobbyist programmers in general, not only for aspiring democoders. Lazy academic students (whose minds get "mutilated beyond recovery" by a careless choice of first language) may prefer a more theoretical route, but this three-phase model is something I have seen work even for the young and the practical-minded, from one decade to another.<br /><ul><br /><li>First phase: Toy Language. It should have an easy learning curve and reward your efforts as soon as possible. It should encourage you to experiment and gradually give you the first hints of a programming mindset. Languages such as BASIC and HTML+PHP have been popular in this phase among actual hobbyists.</li><br /><li>Second phase: Assembly Language. While your toy language had a lot of different building blocks, you now have to get along with a limited selection. This immerses you in a "virtual world" where every individual choice you make has a tangible meaning. You may even start counting bytes or clock cycles, especially if you chose a somewhat restricted platform.</li><br /><li>Third phase: High-Level Language. After working on the lowest level of abstraction, you now have the capacity for understanding the higher ones. The structures you see in C or Java code are abstractions of the kind of structures you built from your "Lego blocks" during the previous phase. You now understand why abstractions are important, and you may also eventually begin to understand the purposes of different higher-level programming techniques and conventions.</li></ul><p>Based on this theory, I think it is a horrible mistake to recommend the modern PC platform (with Win32, DirectX/OpenGL, C++ and so on) to an aspiring democoder who doesn't have in-depth prior knowledge about programming. Even though it might be possible to get "outstanding" visual results with relative ease, the programmer may become frustrated by his or her vague understanding of how and why the programs work.</p>The new democoders I know, even the youngest ones, have almost invariably tried out assembly programming in a constrained environment at some point on their path, even if they have eventually chosen another niche. 8-bit platforms such as the C-64 or the NES, usually via emulator, have been popular choices for "first hardcore coding". Sizecoding on MS-DOS has also been quite common.<br /><br />Not everyone has the mindset for learning an actual "oldschool platform" on their own, however. I therefore think it might be useful to develop an "educational demoscene platform" that is easy to learn, simple in structure, fun to experiment with and "hardcore" enough to promote a proper attitude.
It might even be worthwhile to incorporate the platform into some kind of game that motivates the player to go through varying "challenges". Putting the game online and binding it to a social networking site may also motivate some people quite a lot and give the project some additional visibility.<br /><br /><h2>Conclusion</h2>We have now covered many different aspects of the future of demo art in the 2010s, and it is time to summarize. If we crystallize the prognosis into a single word, "diversity" might be a good choice.<br /><p>It indeed seems that the diversity in what demo artists produce will continue to increase in all areas. There will be more platforms available, many of them designed by the artists themselves. There will be more alternatives to the traditional realtime non-interactive demo, especially via the various "new" venues provided by "crosscultural contacts". And I'm sure that the range of conceptual and esthetic experimentation will broaden as well.</p>Back in the nineties, most demo artists were "playing the same game", with the same rules and relatively similar goals. After that, the challenges became much more individual, with different artists finding their very own niches to operate in. There are still "major categories" today, but as the new decade continues, they will have less and less meaning compared to the more individual quests. This may also reduce the competitive aspect of demo culture: as everyone is playing their own separate game, it is no longer possible to compare the players. Perhaps, at some point, someone will even question the validity of the traditional compo format.<br /><br />Another keyword for the next decade could be "openness". It will show both in increased outreach and in "crossculturality". There will be an increasing number of demo artists who operate in other contexts besides the good old demoscene, and perhaps there will also be more and more "outsiders" who want to try out the "demoscene way" for a change, without any intention of becoming more integral members of the subculture.<br /><br />In the nineties, many in the scene were dreaming about careers in the video game industry. After that, there have been similar dreams about the art world: gaining acceptance, perhaps even becoming professional artists. The dreams about the video game industry came true for many, so I'm convinced that the dreams about the art world will come true as well.<br /><br /><h2>Behind "Dramatic Pixels"</h2><p>I released a minimalistic demo called "Dramatic Pixels" at <a href="http://breakpoint.untergrund.net/">Breakpoint 2010</a>. It is an experiment in narrative using very minimal visual output: three colored character blocks ("big pixels") moving on an entirely black background, synchronized to musical accompaniment.
(<a href="http://noname.c64.org/csdb/release/?id=90380">CSDB</a>, <a href="http://www.pouet.net/prod.php?which=54667">Pouet.net</a>)</p><br /><object width="480" height="385"><param name="movie" value="http://www.youtube.com/v/9eQjU94s5LU&amp;hl=en_US&amp;fs=1&amp;"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/9eQjU94s5LU&amp;hl=en_US&amp;fs=1&amp;" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="385"></embed></object><br /><br />I was expecting the demo to cause very mixed reactions in the audience, but to my surprise, it actually won the competition it was in (4-kilobyte Commodore 64 demo) and the reception has been almost entirely positive. This -- along with the fact that a somewhat similar<a href="http://www.pouet.net/prod.php?which=54378"> production</a> was released by Skrju and Triebkraft for the ZX Spectrum just two months earlier -- inspired me to write this short essay about the philosophy behind this production. And besides, visy/trilobit has also<a href="http://neuronom.be/?x=entry:entry100415-130646"> blogged</a> about "Dramatic Pixels" recently, so I think I am obliged to do the same.<br /><br /><h2>Background</h2>For quite some time already, I have been on a philosophical excursion to the nature of "hard-core" digital creativity, especially the deep essences of the demoscene and the "8-bit" culture. The so far biggest visible result of this excursion has been my<a href="http://www.pelulamu.net/countercomplex/computationally-minimal-art/"> recent essay about Computationally Minimal Art</a>, which, among all, separates the ideas of "optimalism" and "reductivism". I have noticed that the audiovisual digital culture (including the demoscene) has traditionally been very optimalist in nature, aiming at fitting as much complexity as possible within given boundaries. The opposite approach, reductivism, which embraces minimal complexity itself as an esthetic goal, is very seldom used by the demoscene, however.<br /><br />In December 2009, I was pondering about how to express "complex real-world phenomena" such as human emotions via "extreme reductivism". I was planning to design a low-pixel "video game character" that shows a wide range of emotions with facial and bodily expressions, and I particularly wanted to find out the minimum number of facial pixels required to express all the nine emotional<a href="http://en.wikipedia.org/wiki/Rasa_%28aesthetics%29"> responses (rasas)</a> of the Indian theatre. When minimizing the number of pixels, however, I realized that facial expressions might not in fact be necessary at all; movement patterns and rhythms alone seemed to be enough for differentiating fear from bravery, or certainty from uncertainty. If the character only needs to move around for full expressive power, its pixel pattern can very well be reduced to a single pixel.<br /><br />I quickly did a couple of experiments with this idea of "pixel drama". As the results were convincing enough, I started to plan a minimalistic movie using only single-pixel characters. As the movie was quite probably to be implemented as a demoscene production, I thought it would be important to have a somewhat "operatic" approach, synchronizing the visual action with a strong musical accompaniment.<br /><br />After some initial sketches, I didn't really think about the idea for a couple of months. 
But less than a week before the Breakpoint party, I decided to implement it on the C-64. The choice of platform could have been just about anything, however, from VCS to Win32. C-64 just seemed like the best and easiest choice considering the competition categories available at Breakpoint. The size of the demo ended up being about 1.5 kilobytes, and I later also released a 1K version where the introductory text was removed.<br /><br /><h2>The demo itself</h2>Technically, everything in "Dramatic Pixels" is centered around the music player routine, which is also responsible for the choreography: the bytes that encode the notes of the lead channel also contain bits that control the movement of the pixels. To be exact, every time a new note is played by the lead instrument, exactly one of the three pixels takes a single step towards one of the four cardinal directions. This is an intentional technical decision that ties the pixel movement seamlessly to the music. Internally, the whole show is a series of looping sequences that are both musical and visual at the same time.<br /><br />All the actual musical notes, by the way, are encoded by only two bits each. These two bits form an index into a four-note set, which is defined by two variables (indicating base pitch and harmonic structure). These variables are manipulated on the fly by a higher-level control routine that is also responsible for the other macro-level changes in the demo. I prefer to encode melodies in this way rather than as absolute pitches, as a more "indirect" approach makes them more compact and closer to the essence of the musical structure. And, in the case of this demo, I wanted some minimalism (or maybe serialism) in the musical score as well, and the possibility to repeat the same patterns in different modes helps with this goal.
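<br /><br /><p>To illustrate the principle, here is a loose C reinterpretation of the scheme (the bit layout, tables and sequence data below are invented for illustration only; the actual 6502 routine certainly differs in its details):</p><pre>
#include &lt;stdio.h&gt;

/* Hypothetical byte layout for one lead-channel step:
   bits 0-1: index into the current four-note set
   bits 2-3: which of the three pixels moves (value 3 unused here)
   bits 4-5: direction of the single step (N/E/S/W) */

typedef struct { int x, y; } pixel;

static void play_step(unsigned char b, pixel px[3],
                      int base_pitch, const int *note_set)
{
    static const int dx[4] = { 0, 1, 0, -1 };
    static const int dy[4] = { -1, 0, 1, 0 };

    int note = base_pitch + note_set[b &amp; 3]; /* two-bit melody     */
    int who  = (b &gt;&gt; 2) &amp; 3;                 /* which pixel steps  */
    int dir  = (b &gt;&gt; 4) &amp; 3;                 /* cardinal direction */

    if (who == 3) who = 0;  /* guard: value 3 unused in this layout */
    px[who].x += dx[dir];
    px[who].y += dy[dir];
    printf("note %d, pixel %d steps to (%d,%d)\n",
           note, who, px[who].x, px[who].y);
}

int main(void)
{
    pixel px[3] = { {10, 10}, {20, 10}, {15, 20} };
    const int note_set[4] = { 0, 3, 7, 12 };          /* one "mode"   */
    unsigned char seq[] = { 0x00, 0x11, 0x26, 0x35 }; /* made-up data */

    for (unsigned i = 0; i &lt; sizeof seq; i++)
        play_step(seq[i], px, 48, note_set);
    return 0;
}
</pre><p>Changing the base pitch and the four-note set on the fly, as the higher-level control routine does, re-colors the same two-bit patterns into different musical modes without touching the sequence data itself.</p>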
<p>The 6502 assembly source code of the 1K version is available for those who are interested. It should be relatively easy to port to any 6502-based platform (with the music player probably requiring the most work), so I've been planning on releasing separate versions for VIC-20 and Atari 2600 as well.</p>So, what about the story, then? Most of the interpretations I've heard have been somewhat similar and close to my own intentions, so I think my decisions about the audiovisual language have been relatively successful: Red and Blue meet, fall in love, become estranged, cheat on each other with Green, and in the end everyone gets killed. However, there are some portions that are apparently more difficult to interpret.<br /><p>When I created the characters, I had no intention of assigning genders to the pixels. Still, some people have interpreted Red as male and Blue as female. This probably stems from the differences in the base pitches (when Blue moves, the pitch is an octave higher than when Red moves), but the personalities of the pixels may also matter. Red is more stereotypically masculine, taking more of the initiative, while Blue mostly responds to these initiatives. I don't know whether the interpretations would have been different if I had chosen Blue to be the initiator.</p>The second part, where Red and Blue spend time on the opposite sides of the screen, is perhaps the most difficult to follow. I intended this part to represent everyday life, where both pixels have their own daytime activities and only see each other at home very briefly in the evenings (and don't pay much attention to one another even then). Also, the workplaces are so far away that the pixels can't see each other cheating until Red decides to get closer to Blue's workplace. And no, Green does not represent two different pixel personalities depending on the partner -- it's the same despicable creature in all cases. The part is intentionally slightly too long and repetitive in order to emphasize the frustration that repetitive everyday routines may lead to.<br /><br /><h2>Comparison to the Spectrum demo</h2>I would now like to compare "Dramatic Pixels" to the 256-byte Spectrum demo I mentioned earlier, "<a href="http://www.pouet.net/prod.php?which=54378">A true story from the life of a lonely cell</a>" by Sq/Skrju and Psndcj/Triebkraft. Although I try to follow the Spectrum demoscene due to some very visionary groups therein, this demo was so recent that I hadn't even managed to hear about it before I finished "Dramatic Pixels".<br /><br />In both demos, there are three characters represented by solid-colored blocks. The blocks express emotion mostly by the way they move. In "A true story", all movement happens in one dimension, so it is basically all about back-and-forth movement in varying rhythms. "Dramatic Pixels" can very easily be seen as a refinement of this concept, adding a musical accompaniment and another dimension (although it might very well have worked in 1D as well). The stories in both demos are based on the love triangle model, although my story is a little more complex.<br /><br />"Great minds think alike", yes, but the coincidence still baffles me. Is it really just a coincidence or a result of some external factors? Deep thoughts about the state of the demoscene, perhaps combined with some general angst about the potential of the art form in the 2010s, were part of the mental process that led me to create "Dramatic Pixels". I haven't discussed this with Sq, but perhaps there was something similar going on in his mind as well.<br /><br /><p>To add additional spice to the mystery: the recent video-game-inspired short film "<a href="http://www.dailymotion.com/video/xcv6dv_pixels-by-patrick-jean_creation">Pixels</a>" was put on the web on the same day (2010-04-07) that I put the video of "Dramatic Pixels" on Youtube.</p><h2>The bigger purpose</h2>For some time already, I have been writing pretty words about "thinking outside the box" in the demoscene context. But pretty words are hollow unless you back them up with some practical evidence, such as an actual demo.<br /><p>I considered it important to finish "Dramatic Pixels" for Breakpoint, as I had just recently released my essay about Computationally Minimal Art. I wanted to release a production that would support some of its ideas, especially the equality of reductivism as a "boundary-pushing" approach.</p>When working on "Dramatic Pixels", I made two observations about my mental reactions. First, extreme visual minimalism can give me the same kind of "boundary-pushing shivers" as some groundbreaking optimalist demos can, so I got the subjective evidence I desired about the power of the reductivist approach. And second, despite the existence of the narrative, I never felt any of the "narrative embarrassment" that is almost a given with story-based demos (even the good ones). I don't yet know what the missing embarrassing element is: narrative text, dialogue, human-like characters?
I still need to think this over, I guess.<br /><br />Anyway, I hope this experiment broke some new ground and will inspire further experimentation in computational minimalism. I think traditional minimalists have already done quite a lot of "basic research" during the last hundred years or so, so I would like the inspired productions to choose a fresh route by emphasizing those areas that are unique to the computational approach.