aleph is something of a curiosity: it’s a dedicated box uniquely designed for sonic exploration that isn’t a conventional computer. It comes from the creator of the monome, but while dynamic mapping is part of the notion, it is the first monome creation capable of making sound on its own. The monome is a controller that uses a grid for whatever you want; aleph is a self-contained instrument that makes any sound you want.

But this isn’t only a story about some specialist, boutique device. It’s a chance to peer into the minds of two imaginative inventor/musicians, and see what they think the future might hold for their own musical creativity.

And so it’s a pleasure to talk to Brian Crabtree and Ezra Buchla. As brain-picking targets go, you could do worse. Synth pioneer Don Buchla is Ezra’s dad, of course – but it’s just as significant that Ezra has been a big figure in the experimental scene, including as a composer and doing turns in experimental bands The Mae Shi and Gowns. (See monome’s own, engaging interview with Ezra from May.) And then there’s Brian, whose vision has proven as prescient as anyone’s in recent years. What appeared as a boutique oddity in the monome has predicted a striking number of trends in hardware. It pushed openness, sustainability, community support, and more minimal, material-minded, un-branded physical design. And it has been followed by years in which the light-up grid it championed has become the single most dominant paradigm for controlling music on the computer. Not bad for an independent designer.

Of course, the monome had its fair share of criticism when it debuted. Heck, I even complained about its lack of velocity sensitivity at the time. But that’s another reason to look deeper here. Sure, aleph is pricey. Sure, it’s not as easy to run custom software as it would be on an embedded Linux device. Sure, there aren’t going to be a whole lot made. But as a digital musical instrument, there’s a chance for even a few alephs to make a big impact – and for the ideas behind it to spread beyond this project alone.

So, let’s hear some of Brian’s and Ezra’s ideas, on the eve of the aleph launch.

We also get treated to a new video for more evidence of what this could become.

By the way, if you like interviews – there’s a terrific series of artist interviews on the monome site, focused as much on music as on grids. Well worth saving to read; there are now around a couple dozen articles: http://monome.org/category/interview/

See also a great interview with monome (Kelli and Brian): THOSE WHO MAKE

CDM: First, most generally, why a dedicated box? Operating systems can cause problems, yes – but it is possible these days to set up a Linux system, for instance, with low latencies and high reliability, especially if you aren’t using a GUI. So why go this route?

Brian: For myself there are several primary motivations. First and most important is having a machine that powers on almost immediately and is ready to do something interesting, more akin to an instrument than a fragile environment which fundamentally requires a lot of setup. It feels strange to be able to power on the aleph, plug in a grid, and immediately be running some sort of algorithmic sequencer driving a synth in complicated ways. This is, of course, all possible on a computer, but I find a lot of frustration in perfectly calibrating a long chain of software and hardware. While I prefer my tools to be unpredictable in what they might sound like, I prefer predictability when it comes to reliability. We don’t have to worry about operating systems– there is none. There doesn’t need to be an upgrade cycle which creates obsolescence. I like the idea of this platform still working in ten years.

Ezra has done almost all of the code, so for me also this presented an exceptional opportunity as a learning platform. We decided to extend this learning to anyone interested by making the frameworks open-source, facilitating a development process, and, very soon, creating tutorials. The aleph is looking to be a wonderful little ecosystem for DSP and control experiments.

Ezra: Performance was certainly a goal, and is facilitated by designing at the lowest level– programming directly to the chip. On the DSP side, we can do single-sample latency. We’ve been down the path of embedded Linux for audio. It seems not quite ready to support this kind of project on a scale that’s bigger than pure DIY and smaller than full-scale industrial manufacture. In a word, Linux adds too much overhead in systems complexity. For us to make effective use of it, we’d have to become kernel developers, which is not happening. Programming the chip directly is more efficient and, in a way, less work.

Ezra Buchla, at work. Photo courtesy the artist.

What was the musical application envisioned here? It seems there were some notions that were driven by your musical needs, and Ezra’s. Can you talk about the personal motivation, and how you think it might extend to the user base?

Brian: This is the most difficult question, of course – because I don’t want to narrowly define capabilities. It can be a drum machine or synth or echo-y texture processor, yet what I’m most excited about is having a system which gracefully facilitates experimentation. A box that I can modify severely or instead just turn on and play.

Ezra: My thoughts exactly. The possibilities of musical computing have filled many books and will never be exhausted.

But for a couple of specific examples: I’m looking forward to using complex and time-varying networks of filtering and buffer manipulation carried over from the laptop. (The things that computers are good at: granular delay, live-sample-chopping, additive synthesis, resonant networks.) It feels great to take a single rugged box to a theater or gallery and just plug the instrument into this kind of processing.

At the same time, I’m stoked on a new percussion synthesizer that I’d never even thought of making before, and the new musical territories it suggests. We’re lining up an inspiring crew of developer-artists to participate in this experiment, and I’m looking forward to the diversions they discover.

Is there so far any particular connection to the monome community, in terms of interest from there?

Brian: There is substantial interest in the monome community, in that I suspect many people share my goals: being able to use controllers in interesting, more immediate ways – the same way that people enjoy using modular synths or just playing a real piano.

It seems many in the monome community are looking forward to getting deeper into programming. The device tends to propel users into learning more about technology than they expected. If that is your thing, it’s incredibly empowering.

Can you talk at all about BEES, your new modular environment, or show us what it looks like?

Brian: It’s basically a control environment which allows control sources (knobs, footswitches, monome grids, MIDI, etc.) to be mapped to parameters (control parameters like preset number or knob range, but also DSP parameters like feedback or filter cutoff.) BEES can switch DSP modules (each module tends to have a ton of functionality bundled together) while running, effectively “hosting” each one. The DSP module reports what parameters are available to be mapped.

Additionally, BEES creates “operators” which transform or generate control streams. So a multiplier could be added between a knob and a DSP parameter (say filter cutoff), changing the sensitivity of the knob. A footswitch could trigger a random operator which drives feedback (stomp stomp stomp stomp). A sequencer operator using a monome grid could drive the CV outs (to a modular) and then a heap of other operators could drive the sequencer in different ways (tempo, step, etc.) for something messy and interesting, all within the patching environment, without programming.
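Brian’s description of operators – small units that transform a control stream on its way to a parameter – can be sketched in C. This is purely illustrative: the type names, the callback style, and `op_mul_t` are hypothetical, not the actual BEES API.

```c
#include <stdint.h>

/* Hypothetical sketch of a BEES-style operator chain. All names here
   (io_t, op_mul_t, param_cutoff) are illustrative, not the real API. */

typedef int32_t io_t;  /* control values travel as 32-bit integers */

/* a "multiplier" operator: scales its input, then forwards it downstream */
typedef struct {
    io_t gain;          /* scaling factor, e.g. to change knob sensitivity */
    void (*out)(io_t);  /* downstream connection (here, a DSP parameter) */
} op_mul_t;

static void op_mul_in(op_mul_t *op, io_t v) {
    op->out(op->gain * v);  /* transform and pass the control stream on */
}

/* downstream target: a stand-in DSP parameter (filter cutoff) */
static io_t cutoff;
static void param_cutoff(io_t v) { cutoff = v; }
```

A knob event would simply call `op_mul_in`, and the patched multiplier decides what the filter actually receives – the same routing idea, minus any programming, that BEES exposes through its menus.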

All of this stuff happens with a menu system. We’ve made a great effort to make it graceful, but we also acknowledge that designing a complex system requires more visualization, so everything in BEES will be controllable via OSC over the USB connection to a computer. A friend is working on a browser-based editor that we’re excited about.

We’ll have video soon of all of this.

Is there applicability beyond just aleph? I imagine this would be a question for people investing development time; is there a future for BEES or the stuff they write beyond this very limited-run hardware?

Brian: First off, I have no intention of this being a limited run– we simply produce according to perceived demand. If people are interested, we’ll make more.

While I understand there may be a desire to port BEES to an iPhone, it doesn’t seem a perfect fit. We designed the aleph to be good at sound and control specifically, rather than having a phone also be a stompbox using a dongle.

Ezra: We could, for example, build a big Max object that runs BEES, and you could run BEES patches in Max. Why not? Perhaps more reasonably, an aleph application host for Linux, that communicates via OSC, actually has existed at various points, and if it seems useful, it will get a resurrection.

As for the DSP: most of the audio code so far is object-oriented and easy to read, but not optimal. Inner loops will get harder to read as they get faster, and some heavy low-level stuff like FFT will be pretty tough going. But a good algorithm on the musical level is always reusable.

Just to make certain I have this right: the hardware is proprietary, but the software (including BEES) and toolchain will all be open source?

Brian: The hardware source will be proprietary in that we’re not going to post it publicly and rights will be reserved. But if someone is interested, we’d be happy to share what we’ve done. What we’ve been doing is nothing like DIY electronics, nor terribly relevant to kit building. We are reasonable and curious about what happens to our work, and simply prefer that people contact us directly.

Why Blackfin DSP specifically? [aleph uses this DSP hardware/software platform.] What can musicians – or developers – expect out of this platform in terms of performance? How easy will it be to develop for?

Brian: In terms of development ease I’d say it’ll require pretty much the same commitment that most similar programming projects would require. We’re making a disk image with the toolchain set up which can be run in a virtual machine. It’s not going to be an Arduino experience, which is the product of years of great work removing complexity.

But really we have no illusions that the developer audience will be more than a small fraction of total users. What was important to us is to make these sources available, and to design a system that can be radically altered without programming. I’d rather be patch-editing than programming most of the time. We aimed for some equivalent of patching when designing the fundamental configurability of the aleph.

Ezra: The Blackfin is a fixed-point DSP with a peculiar dual architecture of 32-bit data buses and parallel 16-bit ALUs [arithmetic units]. So indeed, it is a different experience from what many programmers are used to. From my perspective, there are two big practical reasons for looking at this family of parts: speed-to-cost ratio, and accessibility — by which I mean, the freedom from proprietary toolchains and difficult packages like BGA [Ball Grid Array, difficult meaning in terms of assembly].

So when I hear (or ask) the question “why Blackfin” it usually refers to the lack of an FPU [Floating Point - math - Unit], and may be followed by, “why not SHARC?” [Analog Devices DSP platform] – the answer is lack of an open-source toolchain. Sometimes the followup is “why not ARM/NEON?” [accelerated instruction set for multimedia and DSP] which is sort of harder to answer. I guess because those tend to be SoC [System on a Chip] configurations, they feel overly complex, and have overall tended to be less appealing for this or that reason.

I like the Blackfin parts because they are fast and simple. The gcc [open compiler] tools work well, with an active community around them. The BF533 [DSP processor] is fast enough that quite naively-written C code can usually get the job done, and on the other hand the ASM [Assembler] instruction set is easy and, I’d have to say, fun. For example, it is well suited to mixed datatypes, and it will be great for porting 8-bit code into a massively faster and more parallel environment.

I don’t think using a fixed-point DSP is a hardship; it is a natural fit for audio. Anyways, the Blackfin float implementation is non-IEEE but fast enough to use when necessary, e.g. filling a lookup table. I think embedded DSP nerds will have a blast with this platform, and it only takes a few!
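Ezra’s point about fixed-point being a natural fit for audio can be made concrete. The sketch below shows the standard 1.31 fractional format used on 32-bit fixed-point DSPs like the Blackfin – a sample is an `int32_t` interpreted as a value in [-1.0, 1.0). This is illustrative of the technique, not taken from the aleph sources; `mult_fr32` and `onepole` are names I’ve invented here.

```c
#include <stdint.h>

/* Illustrative fixed-point audio math in the 1.31 format typical of
   32-bit fractional DSPs (not actual aleph code).
   A sample's value is raw / 2^31, so the range is [-1.0, 1.0). */

typedef int32_t fract32;

#define FR32_HALF 0x40000000  /* 0.5 in 1.31 format */

/* fractional multiply: widen to 64 bits, then shift the radix point back */
static inline fract32 mult_fr32(fract32 a, fract32 b) {
    return (fract32)(((int64_t)a * (int64_t)b) >> 31);
}

/* example use: one step of a one-pole lowpass, y += c * (x - y),
   entirely in fixed-point -- no FPU required */
static inline fract32 onepole(fract32 y, fract32 x, fract32 c) {
    return y + mult_fr32(c, x - y);
}
```

A dual-MAC part does this kind of multiply in hardware every cycle, which is why the lack of an FPU is less of a hardship than it sounds.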

All that said, it’s always nice to have more speed. I’m sure there will be criticism of the aleph’s processing power as it doesn’t compare to what a modern computer can do. But it compares very well with what a computer could do a decade ago (at considerable expense), and I’m old enough/young enough to be pretty stoked about that level of digital processing in a small, instant-on metal box with good sound.

For all its impact, monome is these two people: Brian and Kelli. Photo courtesy monome.

What’s the latest on monome? What can we expect to happen next? And now that the world is starting to be full of grids, where do you see monome’s role – as monochrome and on/off buttons amidst RGB and pressure-sensing grids? (I suppose that in itself is sort of interesting.)

Brian: Monome is still just Kelli and myself, though we just hired on Trent Gill (Galapagoose, co-creator of [grid sampling instrument] MLRV). It’s been great collaborating with Ezra. I’m looking forward to refining and further exploring grids both on the aleph and with our existing application and user base. I still feel our minimalist grids have a level of flexibility not seen in others out there.

Outside of electronics, Kelli launched a lovely ceramics design studio (kellicain.com) and we’re considering a label for our apple cider co-op. Ezra and Trent continue to produce shockingly good music and we pressure them regularly to make more.

Someone in comments repeated this idea that you’re uninterested in getting these in the hands of lots of people. But whether it was intentional or not, it seems some of the ideas of the monome project are in the hands of lots of people. We’ve talked about this before, but I’m curious whether your take on that has evolved at all, particularly as grids begin to shift to new instrumental applications.

Brian: Honestly I’m not seeing any shift in grid usage. Pitch maps, clip launching, and drum triggers have been dominant for years now– it is now solidly part of the electronic music vernacular. In this way the grid is not an innovative proposition. But I do hope to see more interest in grid uses outside these fundamental three approaches– there is so much left to be explored!

It’s a completely silly accusation that we wouldn’t want to get these into peoples’ hands. We understand that the aleph has a pretty small audience, but we’re grateful that the support from this audience allowed us to bring a device like this into the world. I’m not going to bother with the usual list of production expenses or yet another attempt at consumer re-education– rather I’d like to say that we aimed very high when designing the aleph. We’re incredibly excited about the unprecedented capabilities of this little box. We’re terribly interested in getting these into the hands of people who share our enthusiasm.

I’m still a bit confused. Isn’t it still possible that music created with a similar process to aleph could sound the same? Such as a laptop or other older hardware hooked to the same equipment. Does the music actually change because of what aleph is bringing, or is it just another approach to achieving the same results? I can understand there was a lot of hard work done on aleph, but as mentioned in the article will it only appeal to a select group? I’m still thinking a laptop loaded with Max 6, PD, AudioMulch, etc. has lots of creative potential in direct competition with aleph.

RajaTheResidentAlien

‘competition’ is a very old and outdated way of thinking about it (i would recommend thinking more along the lines of ‘symbiosis’ between all forms of tech). the aleph does, in fact, offer things that Max, PD, AudioMulch cannot do. Have you tried running DSP in those environs with a vector-size/latency of only 1 sample? They don’t work as well as the aleph will because of the overhead taken up by graphics.

“Does the music actually change because of what aleph is bringing…”

Music NEVER changes because of what some technology brings; it only changes based on the individuality of users and how those users bring that individuality to each form of tech (“guns don’t kill people, i do” – that kind of thing).
The aleph offers a new way of thinking about 'instruments'(even plurally networked together). Therefore, this new way of thinking will definitely cause people to create a different kind of music, but only because it attracts a certain kind of individual thinking.

The aleph isn't for everyone. If you buy it with mistrust, thinking about how it can 'compete' with Max/PD/etc., you will probably be one of those who uses it in a very average way that could have worked just as well in Max/PD/etc.

(Personally, if i could afford one, i'd buy it just to create a guitar-effects pedal which worked more like a modular/reprogrammable/repatchable analog synthesizer. Max/PD/etc. could do this, but there would be more latency (before gen~, Max wouldn't even be able to do this… unless you programmed externals in C… but then, there's another thing: programming externals in C for Max/PD has a more complex compile process than building the same DSP routine in C for the aleph… i haven't even tried writing DSP modules in C for the aleph, but i can already tell it's much simpler than when i had to incorporate Cycling74's extensive API for building MSP/signal objects… not to mention, the slight but still annoying differences in having to compile those externals on Mac vs. PC)).

Sean Costello

The Blackfin will run more efficiently with vector sizes >1. IIRC, the “knee” for the payoff of performance versus latency was somewhere between 16 and 32 samples. In other words, block sizes less than 16 samples saw much higher CPU usage, there was a slight performance improvement between 16 and 32 samples, and there was not much performance gain with vector sizes larger than 32 samples. Since this is open source, I presume that developers can set up their own FIFO buffers to choose whatever internal block size works best for their processing.
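Sean’s suggestion – buffering samples internally to choose your own block size – could look something like this. The names (`audio_frame`, `process_block`) and the per-sample callback structure are hypothetical, not the actual aleph framework; the point is just the latency-for-throughput trade.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of internal block buffering on a per-sample audio callback
   (hypothetical names; not the aleph framework). Samples arrive one at
   a time; we process BLOCK of them at once, accepting BLOCK samples of
   latency in exchange for better per-sample throughput. */

#define BLOCK 16

typedef int32_t sample_t;

static sample_t in_buf[BLOCK], out_buf[BLOCK];
static size_t fill;

static void process_block(const sample_t *in, sample_t *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] / 2;  /* placeholder DSP: attenuate by 6 dB */
}

/* called once per audio frame, e.g. by the codec interrupt */
static sample_t audio_frame(sample_t in) {
    sample_t out = out_buf[fill];  /* emit result computed last block */
    in_buf[fill++] = in;
    if (fill == BLOCK) {           /* block full: run the DSP over it */
        process_block(in_buf, out_buf, BLOCK);
        fill = 0;
    }
    return out;
}
```

Note the cost: the first BLOCK output samples are silence, and every output lags its input by one full block – exactly the latency Ezra is trying to avoid where single-sample feedback matters.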

of course you can add buffering! for many algorithms it is the obvious move, like streaming (latency doesn’t matter, asynchronous stuff is happening) and FFT / convolution / FIR (latency is imposed by the algo itself.)

what i’m saying above is that the aleph hardware imposes only the most minimal latency, and that this is a very interesting opportunity for sound/circuit design. for example analog saturation within IIR loop, sync/feedback with external oscillators. stuff that has been difficult or impossible to achieve with available platforms.

as far as optimization, the metrics you quote are not for “the blackfin,” they are for a particular class of software; i assume VisualAudio, because that framework uses the common strategy of amortizing function-call overhead by performing block processing in the inner loop of each module.

obviously, this strategy sucks when the block size is small enough! but VA has no choice; it cannot perform inter-module optimizations because it’s a dynamic graph.

when going for single-sample processing, in which i have a specific interest, one has to leverage vectorized operations in a different way. since the graphs are usually parallel/polyphonic, one can for example interleave state variables for multiple voices and vectorize/pipeline their computation.

in other words, there are many opportunities to avoid excessive stack/frame manipulation and pipeline stalls even in a single frame.
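The interleaving Ezra describes is essentially a structure-of-arrays layout: keep each state variable in an array indexed by voice, so the per-sample inner loop is a straight run over parallel data with no per-voice call overhead. A minimal sketch, with invented names (`osc_bank_t`, `osc_bank_tick`) and a trivial phase-accumulator “voice”:

```c
#include <stdint.h>

/* Illustrative structure-of-arrays voice bank (not aleph code):
   state variables for all voices sit side by side, so one loop per
   sample touches contiguous data that a dual-MAC DSP can pipeline. */

#define NVOICES 4

typedef struct {
    uint32_t phase[NVOICES];  /* per-voice phase accumulators, interleaved */
    uint32_t inc[NVOICES];    /* per-voice phase increments (pitch) */
} osc_bank_t;

/* advance every voice one sample in a single vectorizable loop --
   no per-voice function calls, stack frames, or pipeline stalls */
static void osc_bank_tick(osc_bank_t *b) {
    for (int v = 0; v < NVOICES; v++)
        b->phase[v] += b->inc[v];
}
```

Contrast with an array of per-voice structs, each ticked through its own function call: same arithmetic, but the call/return and scattered state defeat exactly the pipelining this layout enables.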

[interesting aside: i'm pretty sure VA module code made it into both the visualDSP and gcc libraries, in e.g. bfin-elf/include/filter.h ... ??]

in any case it’s a good point that the behavior is not fixed, and that 1samp is not the optimal latency for many or most applications.

BTW, the way we’ve been approaching stuff is to implement algorithms in the shortest way, maximizing code reuse, and then to aggressively refactor. i will be doing a lot of refactoring in the coming months! fortunately we have some other people coming on board too, who are much smarter than me.

ez

experimentaldog

I meant competition in terms of sales. Aleph has to compete with other similar methods of production in order to sell. I’m not sure I get aleph being that different though, only that it’s a smart and compact way to make electronic music. It is a computer, it can sync to outboard gear and be used with peripherals. It’s up to the composer. Like any platform, the music could be potentially good or bad regardless of vector-size and latency. I think it’s a neat platform but I’m just trying to justify the price tag and how open-source contributors who can’t afford it may not be able to contribute to the aleph community. I know it’s not the same, but these were similar questions that ran through my head when I bought a Lemur. Expensive equipment, small user base etc. Although I think an aleph has more to offer over time so to speak. Curiosity be damned, we have entered an interesting time of post-pc music machines such as the aleph. I hope to try one in the field some day so I can actually have a legitimate opinion of it.

gli

wise words

weird times

squaretooth

Of course. I think it’s more about them knowing that they have built a certain type of hardcore following with the monome and realizing that it would be worth selling something like this, especially if it’s priced high and made in limited quantities, enough to give it a sense of exclusivity. Attach phrases like “open source” to market it even though the hardware itself is not open source at all, and they’ve got something that they know will appeal to their customer base. Anything that this box does can already be done with existing tools, including the iPad which powers on immediately. It’s more about a smart business move based on predicting their customer base’s purchase habits rather than being able to do anything special musically that can’t already be done with other existing tools.

SMH

Show me an iPad with CV-outs, you moron.

Also, did you actually read the interview? It was explicitly stated that this isn’t a limited production. Your post sounds more like you just walked out of a marketing meeting as opposed to actually knowing anything about monome and their user-base. Keep up the great work.

http://pkirn.com/ Peter Kirn

Whoa – hang on. First, easy on the name calling; it’s not necessary.

Second, his major claims are essentially correct.

1. Brian talks about short power-on times. But it is possible to get that on computing platforms like the iPad. So, on that one point, the aleph isn’t entirely alone. (He didn’t address the latency question or connectivity, true.)

2. The aleph is for now definitely limited run. Brian says greater demand could prompt more, but it’s fair to say this isn’t a mass-market device.

3. It’s true that the hardware isn’t open source. Now, you can’t open-source Bluefin, but there’s more to the design than the chip.

You can then argue the conclusions. But just to be clear, you’re calling someone an idiot for several points that are in fact correct. So maybe focus on the actual argument (which I think is worth having) and less on being arbitrarily rude.

otool

brian also said if people were truly interested in the design to get in contact with him.

http://pkirn.com/ Peter Kirn

Right, but to be clear, it’s not “open source” unless you’re sharing the source. And in fact, the product description terms it “open source.” It’s not so great, actually, and I wish Brian would be more clear about this. That’s why I clarified it in the interview.

Sharing specs or designs with those who ask is great. And it’s great that, for instance, KORG shared schematics for some of its hardware. This used to be common practice in the industry.

But while those things are good, they cannot accurately be described as “open source.”

zebra

i understand the passion for open hardware, truly. but it is a concept of more recent vintage and is tricky to know how to handle especially in a case like the aleph where development and component cost is very high.

i think it is actually quite fair to describe this as an open source device. even from a legal standpoint it applies to every aspect of the device’s functionality that might conceivably be e.g. covered by patent law. and in a more informal usage it certainly applies to the intentions behind the work. judge by the fact that we are not applying for patent rights, only reserving copyright on actual design documents.

it is also interesting, to me, that this criticism comes up next to a consideration of relative functional merits of the iPad. which of course you can’t even legally modify to launch your application on startup.

in any case, i can imagine the specific decision re:copyright changing if there is a good case for e.g. schematic release. as it stands it’s hard to see the wide benefits of releasing copyright just yet. but again this is almost a formality considering that the overall design is pretty transparent and we are of course willing to facilitate extension and experimentation with the device.

zebra

in other words, it’s hard to imagine anyone else being effing crazy enough to actually build one of these boards, unless they have money to burn. in every case i can think of it would be more effective engineering to consult with us on an alternative design.

zebra

do you keep meaning to say Blackfin instead of Bluefin? it’s confusing.

zebra

i have to say that i think the OP is being pretty rude already. if what he says is true then brian and i are callous liars and everything we say in the interview is BS. that is offensive and stating it seems irrelevant. i was ignoring him but now you are legitimizing this weird little character attack, so hm.

as for the open-source labelling, it’s a fair criticism. we are always talking about the firmware being open-source and eminently hackable. the hardware would be open source too if we thought there was any chance of users wanting to modify it in that way. but as it is i think sharing design documents would only encourage large scale plastic clones.

not that it much matters, the circuits are simple enough and the layouts are clear enough that you can just look at the board and figure it out. and specific pin configuration can be determined directly from the firmware.

RajaTheResidentAlien

Peter, you should probably take your own advice and notice that the OP actually finished his entire post with a very false and slanderous statement. I think maybe you have your own bias, and if someone else posts along the same bias, you simply tend to gloss over any other kind of negativity present there. But it is a shame that it is through this bias that you moderate CDM comments.
This is moderation by Peter Kirn’s sense of ‘opportunism’, not by actual reason or rationality.

http://pkirn.com/ Peter Kirn

I didn’t say I agreed with everything he said. I said specifically, you can reach what conclusions you like. He said some things that I believe to be factually accurate.

As for the idea that aleph doesn’t offer any musical utility, I’ve just run *two* feature articles on what that musical utility might be. So I don’t agree with this conclusion necessarily, no.

But it’s hardly “slanderous” to suggest the monome project is selling things that predict what their customers want. I would rather hope that was the case.

I get involved in these cases because I think it’s worth intervening when people start calling each other names. It’s what allows us to keep what I think is a productive comment section when other sites have either been overrun by trolls or abandoned comments entirely.

So if you value your own opinion, find a way to express it in a way people can actually understand and appreciate.

And, fine, call me opportunistic, biased, and trying to promote some underhanded agenda you can’t even articulate, but I don’t think it’s me who looks bad in this case.

RajaTheResidentAlien

SMH, you are right, and i think maybe Peter Kirn seems to have a very underhanded/hidden agenda here (this seems to be a very manipulative pattern and takes away from any notion of CDM’s respectability in my mind).

The OP is indeed being a moron when he says this last, and therefore, MOST significantly to be focused on: “business move based on predicting their customer base’s purchase habits”

I guess he just says this without realizing the aleph seems to be selling less than any monome ever has, and that neither Brian nor Ezra ever tried to consult the monome community or guess what the community really wanted; they mainly created this device out of their own original sense of performance needs. It is indeed ‘moronic’ to just jump to quick conclusions without any facts.

http://pkirn.com/ Peter Kirn

Fine. If I have an underhanded / hidden / manipulative agenda, what is it that agenda is supposed to accomplish?

Or is it so hidden you don’t know?

I don’t know what you’re talking about, but maybe I’ve fooled even myself? Seriously, would love to know what you’re even saying here…

gli

not sure if raj is trolling you about the motive behind your reply

this is a great blog and the only “agenda” i’ve seen year after year is: try to provide decent coverage of the expansive (and expanding) electronic music scene and the tools used to produce various styles

i’ve always thought it was cool that you dive into the comments section as well

however, i WILL agree with what Raja said about the OP

name calling may be unnecessary but squaretooth started things on the wrong foot with several of his comments

He raised a valid question regarding the use of “open source” and he also added, “anything that [aleph] does can already be done with existing tools…” True or not, i see nothing wrong with voicing that opinion.

But his other statements were veiled attacks on monome as a company and indirect jabs at the entire userbase:

“It’s more about a smart business move based on predicting their customer base’s purchase habits rather than being able to do anything special musically that can’t already be done with other existing tools.”

“it’s…about them knowing that they have built a certain type of hardcore following with the monome and realizing that it would be worth selling something like this, especially if it’s priced high and made in limited quantities, enough to give it a sense of exclusivity. ”

So, squaretooth is calling brian, kelli, and ezra, ‘liars’ (and the people who’ve been duped into buying their devices, ‘dummies’)

it should be obvious why SMH & Raja took offense

RajaTheResidentAlien

Ya, Thanks Gli, i was pretty shocked Peter didn’t see this. This betrayed a gross bias on his part.
(also, i never ‘troll’, i simply post narcissism because others who bother to comment on such narcissism, then in turn, end up exposing themselves as being narcissistic enough to think it important to react to another person’s narcissism. it’s my little ‘1st World Social Experiment’ …but thanks for noticing ;D)

gli

“troll” was probably a bit too harsh

i wasnt sure if you were yanking his chain just for fun…

RajaTheResident…Troll!:D

@Gli (& also @PeterKirn)
“i wasnt sure if you were yanking his chain just for fun…”
Hahahaha… well… I still challenge the 1st-world definition of “open-source”… saying that there is a ‘correct’ use of ‘open-source’ term is a bit like Americans claiming that their use of the term ‘democracy’ is the correct one. (Sorry PeterKirn, i don’t buy into your claim of ‘correctness’ so easily.)

BUT! yes, i was pulling his chain… just a little… ;D

I do like what Peter added though:
“I asked Brian to clarify the text on the aleph site so it’s clear the software *is* open source, and the hardware isn’t.”

That is a fair enough place of compromise to leave this argument. So thanks to both of you for indulging my need for a bit more depth of thought on this matter.
(In case it’s necessary: my apologies for being so… aggressive)

RajaTheResidentAlien

Apologies for not being clearer, i am referring to spreading your own vision of open-source. This definition was coined by a minority of folks compared to most of the rest of the world who can’t afford computers. And i think this ideal of open-source pervades most of your articles: what i notice not just from the articles but by talking to people you’ve interviewed, is that you wait until you’ve got the interview before you slam people in the comments. I just think that’s a bit… sleazy.
As for ‘open-source’ i think Brian and Ezra’s sense of open-source is much better than yours: as it encourages people to get in touch with them first before it is ‘open’. This is still very open to me, but unlike the usual definition of ‘open-source’ this is more based around people because it encourages communication between the people who would use that source and the people who developed it.
I think your sense of ‘open-source’ is a very elitist 1st world one.

gli

very interesting + makes me think of the arc clone/diy thread

if someone is disappointed with the fact that recent monome products are not “open source” by the popular definition: tehn is actually OFFERING SPARE PARTS to a user who is planning to build one himself…

if that isnt “open” i dunno what is

http://pkirn.com/ Peter Kirn

There’s not a “popular” definition of open source. There is correct use of the term, and incorrect use of the term. Even further, there are specific definitions to which one can refer, as we do on MeeBlip, for greater clarity and less confusion.

I’m not criticizing the project. On the contrary, I firmly believe the decision to open source a project is one that must be made by the creator, and that it shouldn’t be taken lightly. You simply want to understand what a project is.

I asked Brian to clarify the text on the aleph site so it’s clear the software *is* open source, and the hardware isn’t. (Not only the Blackfin, but the rest of the designs.) This came up because the aleph description originally could be read as being an open source project (a big header described it as such without clarification), and that’s simply not correct.

Brian made that change, and I’m satisfied. Brian has usually been very clear about how he uses this terminology.

And I’d say there’s a scale of “openness” on a project, and then there’s some classical definition of open source. Of course it’s worth considering both.

Shannon

Max, PD, AudioMulch, CSound, Reaktor, SuperCollider, etc, all have overlaps in the type of sounds they make but that doesn’t negate the value of each nor mean they are interchangeable. They have different strengths and bring different ways of working.

I can see the Aleph being appealing to gigging musicians. A small dedicated box could replace a laptop, external soundcard, a MIDI controller and a whole lot of cables. It might be a cheaper and more stable option to boot!

partysub

ezra buchla rules!

Dave O Mahony

I can’t help but think of this as a different way of doing what I’d want to do. As Raja said, if I could afford one and could code well enough, I’d like representations of the functionality of say Izotope Stutter Edit, Reaktor Travellizer/Newscool or GRM Tools Freeze/PitchAccum in a guitar stomp box. Added to that, I’d like to connect my Win 7 laptop, Mac Mini, iPad and Eurorack Modular together for live performance.

Sure a lot of this can be done now but I think this is a different paradigm. It seems to be another way of working and if I were to think of it as a replacement for X, Y or Z I think I would be disappointed.

seems like most of the debate topics are addressed in the interview. i’m reading a lot of comparisons to other programmable environments and i think the component that’s perhaps the most overlooked and unquestionably the most important is user community and contributions. what’s great about maxmsp, for example, is that i can often google something i’m looking to do, find a forum topic that addresses it and incorporate it into my patch within seconds. that kind of community support only comes with time. i really hope aleph takes off, because to be able to follow the same process on dedicated hardware that devotes all of its resources to sound is incredibly exciting / inspiring.

Greg Lőrincz

With this price it won’t take off.

http://facebook.com/sequadion Sequadion

An active and helpful user community is absolutely essential. They might be able to capitalize on the existing monome community, but writing low-level DSP code is very different from putting together a Max/MSP patch for the monome.
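To make Sequadion’s point concrete: a monome patch wires together high-level objects, while Blackfin-style DSP means hand-writing per-sample loops and managing your own state. A minimal sketch of what that lower level looks like, using a one-pole lowpass filter processed in blocks (the names and struct layout here are illustrative, not taken from the aleph codebase):

```c
#include <stddef.h>

/* One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
   'a' in (0,1] sets the cutoff; larger a = brighter output. */
typedef struct {
    float a;   /* smoothing coefficient */
    float y;   /* previous output, i.e. the filter's state */
} lp1;

/* Process one block of n samples; state must persist across blocks,
   which is exactly the bookkeeping a patching environment hides. */
static void lp1_process(lp1 *f, const float *in, float *out, size_t n) {
    float y = f->y;
    for (size_t i = 0; i < n; i++) {
        y += f->a * (in[i] - y);
        out[i] = y;
    }
    f->y = y;  /* save state for the next block */
}
```

Even this toy example surfaces concerns a patcher never sees: carrying state between blocks, numeric precision, and (on a chip like the Blackfin) likely rewriting the float math in fixed point.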

http://pkirn.com/ Peter Kirn

Yeah, precisely, things are addressed better above than in comments – oh, well, I’m still keeping comments around as *sometimes* the conversations are productive. And we had a lot of questions from the last aleph story that I ran this time.

http://www.waveplantstudios.com waveplant

tehn said today on the monome forums that the focus on programming has been overemphasized, so the bees environment might not be as low-level as anticipated. sure, $1400 is no small investment but neither are the monomes & arcs that consistently sell out.

jengel

How many years was it from the monome to ableton push? I won’t be surprised when a few years from now some major companies start coming out with flashy embedded systems inspired by things like aleph and raspi.

Me

“And it has been followed by years in which the light-up grid it
championed has become the single most dominant paradigm for controlling
music on the computer.”
a) Source?
b) Hyperbole?

gli

how about…c) anecdotal?

truthfully, i dont think kirn meant that monomes *themselves* are the most dominant

but when you add up apc’s, launchpads, maschines, push’s etc

i think he has a point

RajaTheResident’Me’

Ha! I somewhat agree with ‘Me’ here! (partly because that sounds hilarious for me to say ;D)
I might also call it a ‘temporal hyperbole’ (or ‘anecdotal’ as gli wisely put it). Pushing buttons has been popular since the MPC, long before monomes. Sequencers have run loops, left to right, long before MLR. Pushing buttons with lights under them is just what’s made the most sense for right now, most accessibly, to the most people, given our general ignorance of everything else that’s possible and the way music is marketed (this is also a ‘sampling’ age, where pushing buttons coupled to triggers of some sort seems more transparent for performance of the most popular musics of our time).
But the fact that Peter perceives it as ‘the single most dominant’ is also informative. It wasn’t until Peter mentioned it that i realized how grids and matrices have become a bit ubiquitous to this age of digital art. From an engineering/multimedia standpoint, matrices in powers of 2 are widely applicable, but from a musician’s standpoint, a hexagonal but open-ended layout like the Terpstra with changeable LEDs might’ve made more sense…

But enough of my inconclusive rant, i think maybe… we can all just hope that things keep changing

Greg Lőrincz

As much as I like the idea of the aleph, it’s outrageously expensive.

gli

curious

what do you think is a more reasonable asking price for what it does?

Greg Lőrincz

Not sure, maybe $500. That said, I’m aware that the development costs a lot of money but the current price is totally out of my range. I’m working on synth modules (commercial project) that we want to sell for two-digit prices. From this point of view, anything over £100 is expensive:)

gli

going on a tangent
that actually sounds cool

hardware or software? eurorack? does your company have a website?

i may be interested

Greg Lőrincz

I wish I could tell you more…

gli

back to aleph:

how did you arrive at 500?
adding up the cost of devices it would take to replicate/replace the functions it is capable of might be the best method

Greg Lőrincz

The figure is based on what I could afford to pay.

gli

fair enough

i’ve had ups and downs financially and it’s mere coincidence that i had enough $$$ when this was announced

it would be nice if it were cheaper, but i personally cant fault monome for the pricetag considering what it will allow me to do

RajaTheResidentSomethingOrOthr

Ha, so sorry for trolling so late after the article is done. But i finally reread this part of the comments, and i empathize with Greg here a little!
Personally, i was hoping it might be $500-$800 when i first heard about it.
I’m hoping they put out something that is DSP-only someday (i also don’t want to get into using CV outs as it requires me to acquire more things for the studio… this would cause me to spend more and more, even beyond the initial purchase of the aleph… i’m not much of an analog synth owner myself, prefer to do most stuff in software particularly because i enjoy, in my own humbly simple way, dsp programming myself, i.e. writing granular-synthesis msp externals in C or just creating fx synths in supercollider…).

Greg Lőrincz

This thing is totally out of reach for most of us.

http://sequadion.com/ Sequadion

I’ve been thinking about experimenting with an embedded Linux powered musical device for a while now, and the aleph has finally inspired me to give it a try.
Of course I do understand the relative merits of writing bare metal DSP code in terms of reliability and performance. However, I can’t help but wonder about the capabilities of a little ARM-based computer running a headless Linux distro, e.g. with libpd. This approach would have the advantage of an existing OS, a relatively popular modular programming environment and cheap hardware.

Greg Lőrincz

I want to know more about it!

http://brunoafonso.com Bruno Afonso

I’m excited to hear that they are bringing other people onboard to provide useful finished algorithms/modules/etc. A fair amount of people will have spare cash to invest in aleph and fool themselves that anyone can quickly code an idea into a DSP algorithm/chip without a lot of previous knowledge. The easier they can make this idea-to-code process, the better for everyone (devs/sellers and end users). In a perfect world, there would be an alephGen (thinking of max gen~ here), where one could build and debug a Max/PD patch and convert/port it as much as possible into aleph native code. You could then fine-tune it in aleph itself.

I love the aleph concept. It’s something that I’m sure a lot of people have thought about but was never easy to implement, nor as affordable as one would like, I guess. The idea alone of controlling CV at high rates with a lot of processing power is tantalizing… getting that done via a computer is currently not possible, and similar setups are not compact or straightforward to set up. That is, for me, the killer feature.

Regarding audio in/out I think the new element here is to make it accessible to everyone. I’d be willing to bet that virtually anyone who can seriously code a Blackfin or other DSP-like solutions (i.e., ARM) already has their own DIY audio setup. Not with preamps though, so that’s a nice touch. So to capitalize on spreading the DSP gospel, the key is to get more people to realize why these are nice. And we go back to making them super easy to program.