Invited Speakers

Dr. Lawrence Hunter Building a Mind for Life

The continued high growth rate of research publications in
bioinformatics has led to a torrent of new data relevant to
understanding life and promoting human health. To keep up,
the biological community has built up some very large ontologies
(many tens of thousands of terms) identifying the basic phenomena
of interest, as well as model organism databases linking these
terms to particular genes of interest, based on published evidence.

Carefully managed databases provide structured and continuously
updated information from a dizzying variety of biomedical data
sources. The global move toward electronic patient records and
patient-based social networking sites has the potential to
provide far deeper insight into disease progression and the
effectiveness of therapy than ever before possible. However,
the computational challenges in effectively exploiting all of
this information are immense.

In this talk, I will describe some surprisingly effective, recent
computer systems that integrate natural language processing,
semantic data integration, automated inference, and visual analytics
to support knowledge-based data analysis at genomic scale. I will
also speculate about future developments, argue that biomedicine
provides the most likely context for the first solution of an
AI-complete problem, and try to explain why that is.

Bio:

Dr. Lawrence Hunter is the Director of the Computational Bioscience
Program and of the Center for Computational Pharmacology at the
University of Colorado School of Medicine, and a Professor in the
departments of Pharmacology and Computer Science (Boulder). He
received his Ph.D. in computer science from Yale University in 1989,
and then spent more than 10 years at the National Institutes of
Health, ending as the Chief of the Molecular Statistics and
Bioinformatics Section at the National Cancer Institute. He
inaugurated two of the most important academic bioinformatics
conferences, ISMB and PSB, and was the founding President of the
International Society for Computational Biology. Dr. Hunter's research
interests span a wide range of areas, from cognitive science to
rational drug design. His primary focus recently has been the
integration of natural language processing, knowledge representation
and machine learning techniques and their application to interpreting
data generated by high throughput molecular biology.

Jans Aasman AllegroGraph and the Linked Open Data Cloud

Franz created a first prototype of AllegroGraph in 2005 for a
DOD conference on the Semantic Web. It was so well received that we
have been working on it ever since. Currently we have more than 15
active, full-time developers working on the product, and we are at
version 4.0. We have a respectable user base that is helping us to
fund further development.

In my talk I will cover several topics.

How Lisp helped us to get ahead of the competition.

The internal architecture of the database. AllegroGraph looks very
much like a parallel Lisp with shared memory, and Lispers might find
it interesting to get an idea of the underlying mechanisms.

Development. This is the biggest project we have ever done at Franz
and so we had to come up with a development methodology that deals
with the fact that the developers work from Hawaii to Berlin.

Interfacing with other languages. Currently we support multiple Java
clients, a Python client, and clients for Ruby, Scala, Clojure, C#,
and even Perl.

A customer story and how this customer helps us to get to a trillion triples.

Bio:

Dr. Jans Aasman, CEO of Franz Inc., started his career as an
experimental and cognitive psychologist, earning his PhD in cognitive
science with a detailed model of car driver behavior using Lisp and
Soar. He has spent most of his professional life in telecommunications
research, specializing in intelligent user interfaces and applied
artificial intelligence projects. From 1995 to 2004, he was also a
part-time professor in the Industrial Design department of the
Technical University of Delft. Jans is currently the CEO of Franz
Inc., the leading commercial supplier of Common Lisp and scalable RDF
database products that provide the storage layer for powerful
reasoning and ontology modeling capabilities for Semantic Web
applications. Dr. Aasman is a frequent speaker at conferences such as
the Semantic Technologies Conference, International Semantic Web
Conference, JavaOne, Linked Data Planet, INSA, GeoWeb, ICSC, RuleML,
and DEBS.

Marc Feeley Gambit Scheme: Inside Out

Gambit is a complete development system for the Scheme programming
language, which includes an optimizing compiler, an interpreter, and a
debugger. The system has evolved over more than 20 years from a
research project into a robust implementation suitable for commercial
development. It has been used for implementing other languages,
distributed systems, and computer games. The talk presents an
overview of the system from a user's perspective, as well as its
evolution and implementation, particularly the portable Scheme-to-C
compiler.

Bio:

Marc Feeley has always had an interest in the implementation of
programming languages. His first compiler compiled
BASIC to native code on a 48K Z-80 based TRS-80. A BYTE special issue
on Lisp got him hooked on the language in 1979. He has been studying
and implementing Lisp, Scheme, Erlang, JavaScript and related languages
since 1985. His main interests are in dynamic languages, parallel
processing, embedded systems and optimizing compilers.
Over the past few years he has served as editor-in-chief of the R6RS
and on the Scheme Language Steering Committee.
He is a professor at the département d'informatique et de recherche
opérationnelle de l'Université de Montréal.

Peter Seibel Common Lisp Standardization: The Good, the Bad, and the Ugly

One of Common Lisp's great strengths is the high-quality ANSI
standard. However in these days of popular and successful languages
defined by de facto specifications, dominant implementations, and
benevolent dictators, not everyone sees the standard as an unmitigated
benefit. Especially frustrating to many new Lispers is the seeming
impossibility of reopening the standard to just fix a few things or to
standardize a few areas not currently covered. This talk will discuss
the history of how the Common Lisp standard came to exist, why it will
probably never be changed, and what that means for the future of
Common Lisp specifically and Lisp generally.

Bio:

Peter Seibel is either a writer turned programmer or a programmer
turned writer. After picking up an undergraduate degree in English and
working briefly as a journalist, he was seduced by the web. In the
early 90s he hacked Perl for Mother Jones Magazine and Organic
Online. He participated in the Java revolution as an early employee at
WebLogic and later taught Java programming at UC Berkeley Extension.
He is also one of the few second generation Lisp programmers on the
planet and was a childhood shareholder in Symbolics, Inc. In 2003 he
quit his job as the architect of a Java-based transactional messaging
system, planning to hack Lisp for a year. Instead he ended up spending
two years writing the Jolt Productivity award-winning Practical
Common Lisp. His most recent book is Coders at Work, a
collection of Q&A interviews with fifteen notable programmers and
computer scientists. When not writing books and programming computers
he enjoys practicing Tai Chi. He lives in Berkeley, California, with
his wife Lily, daughters Amelia and Tabitha, and dog Mahlanie.

Don Syme

The release of F# 2.0 makes functional programming a viable choice for
mainstream development in the context of .NET. We'll look at the
evolution that set the scene for F#: the web and multi-core changes
that have taken place in the industry to make functional programming
more relevant, the long and rich history of functional programming
itself, through to the technical stuff: the introduction of generics
in .NET, LINQ in C#, and the evolution of F# itself.
We'll look at F# today including its parallel and asynchronous programming support, and sneak a preview of F# 3.0 as we integrate a world of data into the functional programming experience.

Bio:

Don Syme is a Principal Researcher at Microsoft Research,
Cambridge. He is the designer of the F# language, recently released as
F# 2.0. He is also the co-designer and co-implementer of generics for
the .NET CLR, now used as a fundamental part of the programming models
of C# and VB in Mono and Microsoft implementations.

Lowell Hawkinson Lisp for Breakthrough Products

The theme of my talk is why Lisp is a great choice for highly
ambitious software products, and has consistently been my choice over
a nearly 50-year career. I will focus on two such choices. The first
was made for G2, the real-time expert system product of my 1986
startup, Gensym Corp. -- a choice that was key to G2's becoming
arguably the most successful commercial AI product of that era. G2 is
today a 23-year-old product that continues to run mission-critical
industrial applications worldwide. The second choice was made for the
two products of my 2010 startup, Expressive Database: EXP DB, a novel
database that can hold billions of highly semantically and
contextually organized expressions, and EXP READ, a
natural-language-to-EXP translator, built upon EXP DB, which aims to
achieve near-human-level NLU.

Among other unequaled capabilities, Lisp's support for data
abstraction through macro definition is critically important to the
success of all these ambitious products, and Lisp's model of software
that can dynamically change as it continues to run in real time was
central to many G2 customers' application successes. For G2, I will
illustrate these points with many anecdotal and sometimes outrageous
application examples -- at NASA, at Aughinish Alumina, at Intelsat, at
Lafarge, at Siemens, and at Nabisco. For EXP DB and EXP READ, I will
explain how several of Lisp's good ideas can be adapted to the AI Holy
Grail problem of NLU ("Natural Language Understanding"), how Lisp is
needed to support the simple-on-the-surface-but-complicated-underneath
data abstractions of expressive databases, and why I believe NLU will
first be seriously "solved" using expressive databases (or something
akin thereto).

I plan to start my talk with a few nostalgic reminiscences from the
early days of G2, and end it by reiterating how critically important
Lisp can be to the success of highly ambitious software products.

Bio:

Lowell Hawkinson has had a long, Lisp-based career, beginning at Yale
as an undergraduate in 1962. There, he built a Lisp interpreter and
compiler called YULISP, which had a rather sophisticated relocating
garbage collector -- perhaps the first instance of this kind of
garbage collector. From 1963-65, he worked under the mentorship of
Harold V. McIntosh while at various university postings (McIntosh
sponsored the first International Lisp conference, in Mexico City in
1963.) From 1965-67 he led the effort to implement Lisp 2, a
John-McCarthy-sponsored and ARPA-funded intended successor to Lisp
1.5. From 1973-83, he worked in Lisp on natural-language-based
knowledge representation at MIT's Laboratory for Computer Science. In
1983 he went to Lisp Machines, Inc., first as a general manager, then
as development manager for PICON, a real-time expert system tool built
in Lisp. From 1986-2007, with a couple of breaks, he was chairman &
CEO of Gensym Corp., the software company behind the Lisp-based
real-time expert system product G2; Gensym grew from an 8-person
startup to a 300+ person public company. As of 2010 he has a new
startup, Expressive Database, which is using Lisp to build a novel
"expressive database" product, and then an NLU product on top of that.