I'm a big fan of Perl, am excited about Perl6, and find Extreme Programming to be very interesting. We've started down the XP path at work and already I'm enjoying greater productivity. In one of our last stand-up meetings (where we sit down, oddly enough), one of the programmers asked a question about Perl6 in relation to XP and I mentioned that Perl6 is being designed to support XP. He asked what I meant and I realized that I was just parroting (ha!) what I had just read on http://use.perl.org.

Beyond the ability to evolve Perl itself is the ability to evolve solutions written in Perl. Among other things, this involves the scalability of solutions in Perl. The user must be able to start quick-and-dirty and grow the program into a robust large system. Users must be able to switch programming styles, paradigms, and methodologies as they refactor their programs in the light of new requirements. Perl must not only be an example of, but must also support, Extreme Programming.

I can understand the desire to be able to start small and expand upon it. I can also understand the idea behind being able to easily rethink and refactor a problem. However, the answer still seems to be a bit vague. In fact, it almost seems like a marketing response: "yes, yes, we're working on PSI::ESP. It'll be out in the next release."

Does anyone familiar with Perl6 care to offer concrete examples of what was meant by the XP quote?

Here are some of the tenets of XP (stolen from
extremeprogramming.org)
that we intend Perl 6 to support.
And some practical examples of Perl 6 features that will support each
of them. This list is by no means exhaustive (either of tenets or
of supportive features) -- it's more to demonstrate that we aren't
(just) marketing geniuses ;-)

The project is divided into iterations/No functionality is added early.

To do that, you need to be able to build applications incrementally.
That implies leaving stubs representing components that are essential but not yet built.
That's particularly easy in Perl 6:
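For instance, the Perl 6 design lets a bare `{...}` serve as a "yada yada" stub body: the subroutine compiles, so the rest of the system can call it and be tested around it, but it dies with a "not implemented" error if actually executed. A sketch of that proposed syntax (the function names are made up for illustration):

```perl
# Proposed Perl 6 syntax: {...} marks a stub body.
sub fetch_customer_records {...}

# Callers can be written, compiled, and reviewed today;
# only an actual call to the stub fails, at run-time,
# with a clear "not yet implemented" error.
sub generate_report {
    my @records = fetch_customer_records();
    print "Got @records.elems() records\n";
}
```

That makes "leave a stub for the component you haven't built yet" a one-line operation rather than a hand-rolled `die` convention.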

To do those things, you need to leave behind code that others can pick up
easily. Many of the changes we're making to Perl 6 support that. For a simple
(but non-trivial) example, rationalizing variable sigils cleans up code
dramatically, making it easier to grok quickly. For a more complex example,
Larry showed the design team the first draft
of A5 yesterday. It's sensational! He's heavily refactored the regex syntax,
and the result is that simple regexes become vastly more readable, and complex
regexes vastly easier to get right.

Refactor whenever and wherever possible.

One of the unexpected benefits of the structural unification we've been
making is that refactoring does become significantly easier. For example,
every loop block is now actually a subroutine/closure specification. And
if you use the -> $var {...} iterator specification, the loop block is
actually a subroutine with parameters. So hoisting that code into a
subroutine becomes utterly trivial.
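A sketch of what that hoisting looks like, using the proposed `->` iterator syntax described above (the loop body and names are illustrative):

```perl
# Proposed Perl 6 syntax: the loop block is already a
# closure with a named parameter...
for @queue -> $task {
    log_start($task);
    process($task);
}

# ...so "extract subroutine" is just giving that closure a name:
my &handle_task = -> $task {
    log_start($task);
    process($task);
};

for @queue -> $task { handle_task($task) }
```

Because the block was a parameterized closure all along, no variables need to be renamed or re-scoped during the refactoring.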

Code must be written to agreed standards.

Perl 6 will make it easier to create, impose, and verify compliance
with coding standards. Two examples:

Sandboxing will be much easier
to use, so it will be possible to turn off language features whose use
contravenes the agreed standard. For example, it will be simple to turn
off the use of backticks, and have them "caught" at compile-time.
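Perl 5 already gestures at this with the Safe and Opcode modules, which let you deny individual opcodes inside a compartment; a small sketch of that existing mechanism (the idea is that Perl 6 sandboxing will make this style of policy enforcement far more convenient):

```perl
use Safe;

my $compartment = Safe->new;
$compartment->deny('backtick');    # opcode name from the Opcode module

# Compilation of the backtick inside the compartment fails,
# so the shell command never runs; $@ reports the trapped opcode.
$compartment->reval(q{ my $out = `ls` });
print "Caught: $@" if $@;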

It will also be comparatively easy to intercept code in the middle of
compilation (e.g. at the optree level) and write Perl modules that
verify particular code structures...as opposed to individual constructs.
For example, you might decree that variable names must conform to the
project's data dictionary, and have a "policy test" module check every VAR node in
the optree as the program is compiled. Or you might
require that all pattern matches use an explicit $var =~ (rather than implicitly matching against the current topic). You'll be able to write a parse-tree analyser to verify that too.

Leave optimization till last.

...or leave it to the machine. An important design goal for both Perl 6 and
the underlying Parrot engine is to automate optimization as far as possible.
The introduction of variable typing (in addition to Perl 5's value typing)
helps there, as does the unification of blocks and closures. At the other
end of the spectrum, Parrot's register-based architecture allows us to
draw on the extensive literature on hardware and assembler optimization.
BTW, where optimization at the higher, human level is desirable, it's almost always
optimization of algorithm, not code. Many of the new, more advanced features
that Perl 6 will add (e.g. higher order functions, lazy data structures,
generators, coroutines, superpositions, smart matching, etc.) are specifically
designed to support those more sophisticated algorithms.
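As a small illustration of the variable-typing point, here is a sketch of the proposed Perl 6 declaration syntax (illustrative only):

```perl
# Proposed Perl 6: the variable itself carries a type...
my int @samples;          # compact native-integer storage
my Str $name = "Ovid";    # checkable at compile time

# ...so the compiler and Parrot can emit specialized op-codes
# instead of falling back to generic scalar operations.
```

The programmer states intent once; the optimization falls out automatically.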

No overtime.

Hardly new. Perl is already a "no overtime" language, since it allows you
to get more done, more quickly, without the mechanics of the language
getting in the way. We have no intention of changing that aspect of the
language. In fact, every change we consider for Perl 6 is explicitly weighed against
our stated goal that "Easy things should stay easy, hard things should
become easier, and impossible things should become merely hard."

Thanks for the topic you've picked, Ovid. I too am very excited, and I look forward to many innovative 'tools' in Perl6 that will greatly enhance and support software development paradigms such as XP.

You are asking...

Does anyone familiar with Perl6 care to offer concrete examples of what was meant by the XP quote?

I happen to know of one such feature. It is called 'higher-order functions'. You can read a full thread on the related RFC. Damian Conway also touches on this subject in one of his diary entries. Moreover, Damian has written an excellent Perl5 module to simulate this feature of Perl6. Not surprisingly, the module is called Perl6::Currying.

Having read through these resources, I'm sure you'll agree that this feature may well be among those welcomed by practitioners of the XP approach. One can develop a higher-order function and still use it even with an incomplete parameter list. This is great for a lot of reasons. First, it gives you a way to write a function and test it in stages. Second, it allows greater flexibility in how such a function is used. For example, you can derive lower-order variants of a higher-order function to accomplish specific (albeit 'narrower' in purpose) tasks!
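You don't even have to wait for Perl 6 (or reach for Damian's module) to get the "derive a narrower variant" trick: plain Perl 5 closures already support it by hand. A small sketch, with made-up function names:

```perl
# A general-purpose function...
sub log_message {
    my ($level, $msg) = @_;
    return "[$level] $msg";
}

# ...and a hand-curried, narrower variant built from it:
sub make_logger {
    my $level = shift;
    return sub { log_message($level, @_) };
}

my $warn_logger = make_logger('WARN');
print $warn_logger->('disk nearly full'), "\n";   # [WARN] disk nearly full
```

The fixed first argument is captured in the closure, so each derived logger can be written and tested before the rest of the parameter list exists.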

Anyhow, please do take a read, as I can't possibly describe this feature as well as it has already been described in the aforementioned resources! ;-)

It will be much easier to write a
refactoring engine in Perl 6 -- and here's why...

You'll be able to grab the parse-tree from the compiler before it's converted to op-codes. Or after it's been converted, but before it's been reduced, if you prefer. Once you have a structured representation of the code, refactoring is "merely" the application of standard mathematical tree/graph factoring and optimization techniques.

I am afraid you have it wrong. "Currying" and "higher-order functions" are two unrelated features that have only one thing in common: they often appear in functional languages. A function is "higher-order" if it accepts a function as its parameter or if it returns a (newly created) function.

Higher-order functions have long been supported by Perl 5, even though we usually do not use that name. (I guess so that we do not scare people away. "Higher-order functions" sounds too "functional".)

On the other hand, a function is curried if, when called with only its first few parameters, it returns a "partially applied" function. Of course it is then also higher-order, but that's not the point.
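The distinction is easy to see in plain Perl 5; a small sketch with illustrative names:

```perl
# Higher-order but NOT curried: it takes a function as a parameter.
sub apply_twice {
    my ($f, $x) = @_;
    return $f->($f->($x));
}
print apply_twice(sub { $_[0] + 3 }, 10), "\n";   # 16

# Curried: called with only its first argument, it returns a
# partially applied function rather than a final result.
sub curried_add {
    my $x = shift;
    return sub { $x + shift };
}
print curried_add(2)->(5), "\n";                  # 7
```

`apply_twice` never returns a function, and `curried_add` never accepts one; each has exactly one of the two properties.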

Crucial parts of complex systems are well-defined feedback loops, with short-term
loops embedded in long-term ones. This
is what Larry calls the
whirlpool. In a sense, Perl 5, by allowing you to flesh out one-liners, was already geared toward XP.
Here is the
XP version of the whirlpool model.