Posts in
Haskell Art

This is substantially interesting to me personally, pursuant to the paste below,
as a spectator of PLD.
Pretty keen to read the rest.
Alfred Matthews.
Better Collaboration
One of the most difficult things about collaborative development is
handling merge conflicts.
The vast majority of merge conflicts result from non-functional changes:
renames, reformatting, textual movement of code lines between files, etc.
In Lamdu, names, the "position" of code, and other non-functional aspects
are kept separate from the code itself, which avoids such conflicts.
Rename conflicts
To get a conflict due to "rename" operations, two developers must rename
the same variable to two different names. Even then, the code is still
executable, but its on-screen rendering will display a localized conflict.
Formatting conflicts
Formatting is automatic, so there is no way to generate a conflict.
Code movement conflicts
The "position" of code is meta-data attached to the code, helping to find
that code and position its rendering.
Since code is not structured into text files, code "position" conflicts are
similarly less likely, less harmful, and localized.
Change tracking
Instead of heuristically guessing what changed in the code, as traditional
version control systems do, Lamdu uses an integrated revision control
system that records the programmer's edits as revisions.
This acts as a record of the developer's intent, allowing the RCS to
distinguish, for example, between deleting a function and writing a
similar one, versus modifying that same function. Recording intent helps
prevent and resolve conflicts.
Regression Debugging
Integrated revision control and live test cases will allow "Regression
Debugging".
When a change causes a regression, the root of the problem can be found
quickly, by finding the deepest function application whose result value
diverged from the correct version of the code.
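The search described above can be illustrated with a small sketch (this is not Lamdu's actual implementation; the `Call` type and all names here are invented for illustration): compare the recorded call trees of the old and new versions, and report the deepest application whose result diverged while all of its children still agreed.

```haskell
-- Hypothetical recorded call tree: a function application, its result
-- value (simplified to Int), and its child applications.
data Call = Call
  { callName :: String
  , result   :: Int
  , children :: [Call]
  }

-- Find the deepest diverging application: if this node's results agree,
-- there is no regression here; otherwise, blame a diverging child if one
-- exists, and this node itself if all children still agree.
deepestDivergence :: Call -> Call -> Maybe String
deepestDivergence old new
  | result old == result new = Nothing
  | otherwise =
      case [d | (o, n) <- zip (children old) (children new)
              , Just d <- [deepestDivergence o n]] of
        (d : _) -> Just d              -- a child diverged: descend there
        []      -> Just (callName new) -- children agree: root cause is here
```

With live test cases recorded by the RCS, such a walk could point directly at the function whose behavior changed, rather than leaving the developer to bisect by hand.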
Automatic Formatting and Sugaring
Lamdu attempts to take away as much inconsequential freedom from the
developer as possible, freeing the mind and the keystrokes for the parts of
the code that matter. Thus, Lamdu does not provide means to edit formatting
on a case-by-case basis; instead, generalized changes to the layout rules
can be made.
Additionally, to avoid further stylistic dilemmas, Lamdu uses automatic
sugaring of code, as the dual of typical "de-sugaring" done by textual
languages.
The code is edited and displayed in its sugared form. The edits to this
form are translated to lower-level, simpler edits of the stored language,
which is de-sugared. Lamdu uses "co-macros" that capture patterns in the
lower-level code and automatically sugar it to canonical form. This frees
the programmer from worrying about whether to use sugar for each particular
case.
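The co-macro idea might be sketched as follows (illustrative only; Lamdu's stored language and co-macro mechanism are more elaborate than this toy `Expr`): a rewrite that pattern-matches the low-level stored code and displays a canonical sugared form, the dual of textual de-sugaring. Here the "sugar" is eta-reduced display: \x -> f x renders as f.

```haskell
-- A toy lambda-calculus core, standing in for the stored language.
data Expr
  = Var String
  | App Expr Expr
  | Lam String Expr
  deriving (Eq, Show)

-- Does a variable occur free in an expression?
freeIn :: String -> Expr -> Bool
freeIn x (Var y)   = x == y
freeIn x (App f a) = freeIn x f || freeIn x a
freeIn x (Lam y b) = x /= y && freeIn x b

-- One co-macro, applied bottom-up: wherever the low-level code matches
-- \x -> f x (and x is not free in f), display the canonical form f.
resugar :: Expr -> Expr
resugar (App f a) = App (resugar f) (resugar a)
resugar (Lam x b) =
  case resugar b of
    App f (Var y) | y == x, not (x `freeIn` f) -> f
    b'                                         -> Lam x b'
resugar e = e
```

Because the rewrite produces one canonical form, the programmer never chooses whether to apply the sugar; the editor always displays it where the pattern holds.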
On Thu, Apr 26, 2018, 5:16 AM Alan & Kim Zimmerman <email obscured>
wrote:

I thought about making one of those some time ago; if the types are not
complex (no HKTs, no constraints), I think it would be easy to manage.
I'm the maintainer of haskell.do, an interactive Haskell editor; maybe we
can think about something :)
Cheers!
On Thu., Apr 26, 2018 at 9:48, Alex McLean <email obscured> wrote:

Hi all,
I'm wondering if anyone has made a 'block editor' for Haskell, i.e. a
syntax-aware text editor where you make a program by snapping together
type-compatible words, usually with a mouse. Something similar to
Scratch for example: https://scratch.mit.edu/
Any leads appreciated!
Best wishes

Important Dates
===============
Paper submission deadline June 28
Performance submission deadline July 8
Author Notification July 21
Camera Ready August 5
Workshop September 29
About FARM
==========
The ACM SIGPLAN International Workshop on Functional Art, Music,
Modelling and Design (FARM) gathers together people who are harnessing
functional techniques in the pursuit of creativity and expression. It
is co-located with ICFP 2018, the 23rd ACM SIGPLAN International
Conference on Functional Programming, and with Strange Loop, in
St. Louis, Missouri, USA.
Functional Programming has emerged as a mainstream software
development paradigm, and its artistic and creative use is booming. A
growing number of software toolkits, frameworks and environments for
art, music and design now employ functional programming languages and
techniques. FARM is a forum for exploration and critical evaluation of
these developments, for example to consider potential benefits of
greater consistency, terseness, and closer mapping to a problem domain.
FARM encourages submissions from across art, craft and design,
including textiles, visual art, music, 3D sculpture, animation, GUIs,
video games, 3D printing and architectural models, choreography,
poetry, and even VLSI layouts, GPU configurations, or mechanical
engineering designs. Theoretical foundations, language design,
implementation issues, and applications in industry or the arts are
all within the scope of the workshop. The language used need not be
purely functional (“mostly functional” is fine), and may be manifested
as a domain specific language or tool. Moreover, submissions focusing
on questions or issues about the use of functional programming are
within the scope.
FARM 2018 website : http://functional-art.org/2018/
Call for Performances
=====================
Submission deadline: July 8, 2018.
Submission URL: https://easychair.org/conferences/?conf=farm2018 .
FARM also hosts a traditional evening of performances. For this year’s
event, FARM 2018 is seeking proposals for live performances which
employ functional programming techniques, in whole or in part. We
would like to support a diverse range of performing arts, including
music, dance, video animation, and performance art.
We encourage both risk-taking proposals which push forward the state
of the art and refined presentations of highly-developed practice. In
either case, please support your submission with a clear description
of your performance including how your performance employs functional
programming and a discussion of influences and prior art as
appropriate.
Call for Papers and Demos
=========================
Submission deadline: June 28, 2018 (note that this is earlier than for
performances).
Submission URL: https://easychair.org/conferences/?conf=farm2018 .
We welcome submissions from academic, professional, and independent
programmers and artists.
Submissions are invited in three categories:
1) Original papers
We solicit original papers in the following categories:
- Original research
- Overview / state of the art
- Technology tutorial
All submissions must propose an original contribution to the FARM
theme. FARM is an interdisciplinary conference, so a wide range of
approaches are encouraged.
An original paper should have 5 to 12 pages, be in portable document
format (PDF), using the ACM SIGPLAN style guidelines and the ACM
SIGPLAN template. [ http://www.sigplan.org/Resources/Author/ -- use
the 'sigplan' sub-format. ]
Accepted papers will be published in the ACM Digital Library as part
of the FARM 2018 proceedings. See http://authors.acm.org/main.cfm for
information on the options available to authors. Authors are
encouraged to submit auxiliary material for publication along with
their paper (source code, data, videos, images, etc.); authors retain
all rights to the auxiliary material.
2) Demo proposals
Demo proposals should describe a demonstration to be given at the FARM
workshop and its context, connecting it with the themes of FARM. A
demo could be in the form of a short (10-20 minute) tutorial,
presentation of work-in-progress, an exhibition of some work, or even
a performance. Demo proposals should be in plain text, HTML or
Markdown format, and not exceed 2000 words. A demo proposal should be
clearly marked as such, by prepending Demo Proposal: to the title.
Demo proposals will be published on the FARM website. A summary of the
demo performances will also be published as part of the conference
proceedings, to be prepared by the program chair.
3) Calls for collaboration
Calls for collaboration should describe a need for technology or
expertise related to the FARM theme. Examples may include but are not
restricted to:
- art projects in need of realization
- existing software or hardware that may benefit from functional programming
- unfinished projects in need of inspiration
Calls for collaboration should be in plain text, HTML or Markdown
format, and not exceed 5000 words. A call for collaboration should be
clearly marked as such, by prepending Call for Collaboration: to the
title.
Calls for collaboration will be published on the FARM website.
Authors take note
=================
The official publication date is the date the proceedings are made
available in the ACM Digital Library. This date may be up to two weeks
prior to the first day of your conference. The official publication
date affects the deadline for any patent filings related to published
work.
All presentations at FARM 2018 will be recorded. Permission to publish
the resulting video (in all probability on YouTube, along with the
videos of ICFP itself and the other ICFP-colocated events) will be
requested on-site.
Questions
=========
If you have any questions about what types of contributions might be
suitable, or anything else regarding submission or the workshop itself,
please contact the organisers at:
<email obscured>
Organizing Committee
====================
Brent Yorgey (general chair)
Donya Quick (program chair)
Tom Murphy (performance chair)
Program Committee
=================
Heinrich Apfelmus (self-employed)
Brian Heim (Yale, USA)
Can Ince (University of Huddersfield, UK)
Chris Martens (NC State University, USA)
Eduardo Miranda (University of Plymouth, UK)
Iris Ren (Utrecht University, Netherlands)
Henning Thielemann (self-employed)
Didier Verna (EPITA, France)
Dan Winograd-Cort (Target, USA)
Halley Young (University of Pennsylvania, USA)

Happy to report, the New Year's Eve playlist at my party last night was played
with Vivid, and it went very well!
We heard a couple of xruns on one song because I hadn't been freeing buffers
after playback, but a minute of livecoding fixed it and the rest of the night
went flawlessly.
Keeping an apartment of friends entertained is great motivation to write
dependable code. Happy new year, Haskell artists!

On Fri, 8 Dec 2017, Francesco Ariis wrote:
> On Fri, Dec 08, 2017 at 10:29:11PM +0100, Henning Thielemann wrote:
>>
>> Hi all,
>>
>> this song has been milked to death but it is so regular that I could not
>> resist programming it using the live sequencer:
>> https://

>
> Enjoyable and didactic, well done!
The Prelude performed using the Prelude, so to speak. :-)


The only recordings I could find are https://livestream.com/oxuni/ICFP-2017
I can't be certain they don't have FARM because there are many hours
and they're unlabeled, but they seemed to be in more or less
chronological order, and the last one is one day before FARM,
according to the published schedule. I could be wrong, but I looked
for a while and couldn't find anything.
I found one of the talks at a different venue though:
https://

On Tue, Nov 21, 2017 at 12:28 AM, Ivan Perez <email obscured> wrote:
> On 07/11/17 18:49, Evan Laforge wrote:
>> On Tue, Nov 7, 2017 at 4:39 AM, Ivan Perez <email obscured> wrote:
>>> They were.
>>>
>>> I suspect the work needed for this may be more than may seem at first: I
>>> should have received a message to verify something about my Haskell
>>> Symposium and FARM talks, and haven't received either of them so far.
>>>
>>> I can contact the FARM organiser on your behalf if you want :)
>> Yes please, if you think it will help. I guess it's already in the
>> pipeline, but maybe a reminder would encourage whoever is doing the
>> work.
>>
> From Jean Breeson: "FARM videos were taken by the ICFP organizers and
> will be handled / published by them."
>
> Are they not available in the ICFP "channel"?
>
> Ivan
>
>
>
> --
>
> Read the whole topic here: Haskell Art:
> http://lurk.org/r/topic/4VdjnVcxwxZTesFg83QDaD
>
> To leave Haskell Art, email <email obscured> with the following
email subject: unsubscribe

On 07/11/17 18:49, Evan Laforge wrote:
> On Tue, Nov 7, 2017 at 4:39 AM, Ivan Perez <email obscured> wrote:
>> They were.
>>
>> I suspect the work needed for this may be more than may seem at first: I
>> should have received a message to verify something about my Haskell
>> Symposium and FARM talks, and haven't received either of them so far.
>>
>> I can contact the FARM organiser on your behalf if you want :)
> Yes please, if you think it will help. I guess it's already in the
> pipeline, but maybe a reminder would encourage whoever is doing the
> work.
>
From Jean Breeson: "FARM videos were taken by the ICFP organizers and
will be handled / published by them."
Are they not available in the ICFP "channel"?

Several people have asked me lately about defining sounds in Vivid and then
playing them in Tidal. I've written up a quick-and-dirty howto on
vivid-synth.com. Happy hacking! Feel free to ask questions if anything's
confusing or you're having issues.

On Mon, Nov 13, 2017 at 12:25 PM, Alex McLean <email obscured> wrote:
> I don't know if this is relevant, but in tidalcycles (tidalcycles.org)
> I dealt with the problem of how to sample a function of (rational)
> time by instead working with functions of timespans, basically (Time,
> Time) -> [(Time, Time, a)], i.e. a function of start and end times,
> which returns a list of values and the timespans which they are active
> within (which will of course intersect with the input timespan).
That sounds kind of like the hybrid function + explicit breakpoints
approach I was theorizing about, except the other way around. I was
thinking it would be ([(Time, Time)], Time -> a), or perhaps [(Time,
Time, Time -> a)], where the Time ranges are marking different kinds
of functions. But, if I understand what you're saying, you have a
function that returns time segments, e.g. f (start, end) = [(0, 1, a),
(1, 2, b)]... or something? I assume the value 'a' is constant within
its time range, which makes it like the piecewise-constant version?
In that case, why have a separate end time, instead of just the start
times?
What types are the values? If they're numeric, how do you handle
interpolation between them?

I don't know if this is relevant, but in tidalcycles (tidalcycles.org)
I dealt with the problem of how to sample a function of (rational)
time by instead working with functions of timespans, basically (Time,
Time) -> [(Time, Time, a)], i.e. a function of start and end times,
which returns a list of values and the timespans which they are active
within (which will of course intersect with the input timespan).
This can be used to represent functions of either continuous or
discrete time. In the former case, you can just take the midpoint of
the given input timespan to compute the output from. So the duration
of the input timespan is a bit like the resolution of the sampling
rate. With both continuous and discrete functions of time represented
in the same type, it becomes very easy to compose them together.
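A minimal sketch of the design described above might look like the following (the identifiers here are mine, chosen to mirror the prose, not Tidal's actual API): a pattern is a function from a query timespan to the events active within it, and continuous signals are sampled at the midpoint of the query span.

```haskell
-- Rational time, as in the description above.
type Time = Rational
type Span = (Time, Time)

-- A pattern maps a query timespan to the events (with their own
-- timespans) active within it.
newtype Pattern a = Pattern { query :: Span -> [(Span, a)] }

-- A discrete pattern: one event per whole cycle, so querying a span
-- returns every cycle-long event intersecting it.
atom :: a -> Pattern a
atom x = Pattern $ \(s, e) ->
  [ ((fromIntegral c, fromIntegral c + 1), x)
  | c <- [floor s .. ceiling e - 1] ]

-- A continuous pattern: sample the signal at the midpoint of the query
-- span, so the span's width acts like the sampling resolution.
signal :: (Time -> a) -> Pattern a
signal f = Pattern $ \(s, e) -> [((s, e), f ((s + e) / 2))]
```

Since both discrete and continuous patterns inhabit the same type, combinators need not distinguish them, which is the composability win described above.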

On 12 November 2017 at 05:04, Evan Laforge <email obscured> wrote:
> Has anyone done work with, or have recommendations for how to represent a
> possibly discontinuous function, specifically a time to float signal?
>
> This isn't specifically related to Haskell or to art, but I'm thinking of
> Haskell implementations, and anyone dealing with music or animation surely
> has to deal with values that change in time.
>
> The context is that I construct various signals in ad-hoc ways, but usually
> via concatenating segments (of various curves, but flat and linear are
> common), and then they turn into instructions for some backend. In the past,
> the main backend was MIDI, so I represented the signals as Vector (Time, Y)
> where both Time and Y are Double. The interpretation was that each sample
> sets a constant value, so to convert to MIDI I just emit the samples
> directly.
>
> However, this only works because MIDI is low bandwidth and we're forced to
> accept that the receiving synthesizer is going to be getting these rough
> signals and smoothing them out internally. Once I start working with my own
> synthesizers I need audio rate controls and this becomes really wasteful,
> especially since I don't know up front what the eventual backend would be.
> I'd be forced to use an audio level sample rate globally and then thin it out
> for MIDI. Since I always wind up serializing the signal in one way or
> another at the end, having an efficient representation is important. This is
> also why the traditional fixed sampling rate is out, even though the sparse
> approach adds plenty of complexity (for instance, resampling both inputs to
> add them together).
>
> The next thought is to retain the sparse [(Time, Y)] representation but
> interpret it as linear segments. This means a discontinuous segment actually
> requires two samples, e.g. [(0, 0), (1, 0), (1, 1), (2, 1)]. Leaving that
> with a sample-oriented API becomes seriously error prone because you have to
> remember to handle before and after coincident samples, and split segments
> when merging or slicing signals, etc. But perhaps with an explicitly
> segment-oriented API I could hide all of that. Perhaps have a special
> encoding for flat segments if they're common enough... though the obvious
> encodings don't actually save any space and add complexity, so maybe not
> bother with that part. I've never heard of anything like that though, are
> there any examples out there?
>
> Of course, the most idiomatic representation is surely a function Time -> Y.
> Not only can I concatenate curves with perfect accuracy and arbitrary
> resolution and leave the sampling to the backend, it also elegantly allows
> efficient transformations. For instance, shifting the Time is just composing
> addition on the front, while in a sample-oriented representation you have to
> either transform all the samples, or add a field for an offset and remember
> to have every access function take it into account. That in turn adds plenty
> of complexity and only works for the specific transformations hardcoded in.
> In practice, f(x+k) and k * f(x) serve most purposes.
>
> I haven't tried this yet, but some issues make me hesitate. One is that I
> lose structure. To find the inflection points I'd have to sample and see how
> the values change. For instance, I'll surely find myself trying to infer
> linear segments back out again, because various backends (including GUI) do
> well with linear segments. And then I worry about memory leaks. For a data
> structure, I can flatten the whole thing and be sure no thunks are inside,
> but for a function built from composing other functions, I have to make sure
> every single component function isn't holding on to anything it doesn't need.
> It seems very dangerous. So maybe in the pure form the function is out.
> Maybe there's some kind of hybrid approach, with a pair of a function and
> a vector of annotations of where the break points are, with say
> Annotation = Flat | Linear | Other. I'd have to transform them together, so
> I still wind up with a Vector (Time, Annotation) with some of the same
> problems as the (Time, Y) samples, but maybe it's doable. But even if it is,
> I might not need the additional accuracy over approximation with linear
> segments, and I don't see any way around the memory leak problem. Still, it
> would be interesting to hear if there are implementations of this kind of
> approach.
>
> I think in the end I've more or less convinced myself to continue with the
> linear segments approach, but put on an explicitly segment-oriented API that
> makes it hard to mess them up, whatever that winds up looking like. But has
> anyone else faced this kind of problem, or seen elegant solutions to it?
>
> Thanks!
>
> --
>
> Read the whole topic here: Haskell Art:
> http://lurk.org/r/topic/2wxYdLvac7CZRnjVRka8SJ
>
> To leave Haskell Art, email <email obscured> with the following
email subject: unsubscribe
--
blog: http://slab.org/
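The segment-oriented API that Evan converges on could be sketched like this (a minimal illustration with invented names, not any existing library's interface): the constructor takes whole segments, so a discontinuity needs no coincident-sample bookkeeping by callers, because each segment carries both of its endpoints.

```haskell
-- One linear segment, carrying both endpoints as (time, value) pairs.
data Segment = Segment
  { segStart :: (Double, Double)
  , segEnd   :: (Double, Double)
  } deriving (Eq, Show)

-- A signal is a sequence of segments; gaps and jumps between
-- consecutive segments are allowed.
newtype Signal = Signal [Segment] deriving (Eq, Show)

fromSegments :: [Segment] -> Signal
fromSegments = Signal

-- Sample by linear interpolation within the containing segment.
-- Each segment is taken as half-open [t0, t1), so at a jump the
-- later segment wins, with no coincident samples to mismanage.
at :: Signal -> Double -> Double
at (Signal segs) t =
  case [seg | seg@(Segment (t0, _) (t1, _)) <- segs, t0 <= t, t < t1] of
    (Segment (t0, y0) (t1, y1) : _) ->
      y0 + (y1 - y0) * (t - t0) / (t1 - t0)
    [] -> 0  -- outside every segment; a real API would choose a policy
```

For example, the discontinuous signal from the thread, [(0, 0), (1, 0), (1, 1), (2, 1)], becomes just two segments, and the jump at t = 1 is handled inside `at` rather than by every caller.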