Monday, April 27, 2009

Do any IT firms have explicit pro bono programs or open-source contribution programs that mirror the pro bono activities of law firms? The legal industry has a long-standing culture of engaging in pro bono work with the imprimatur of the firm.

A few IT companies obviously house and/or champion open-source projects. However, I'm not sure how these are really structured. Does 'the system' adequately encourage and/or support these activities (beyond simple competitive self-interest, or enlightened corporate social responsibility)? Are there tax breaks for companies engaging in open-source or pro bono activities, such as ethical or charitable software development?

I just don't know. Perhaps the answer is very well-known, but I didn't turn up much in 20 minutes of looking. I just felt like blogging my curiosity on the topic...

Thursday, April 9, 2009

As some know, I work on a natural language generation system for weather forecast reporting. I also have an interest in general AI. This post represents little more than idle thinking from my lunch break...

The ultimate question of AI is, of course, can we build a machine that thinks (for various definitions of thinks)? One response to that question by anyone interested in programming is to imagine how such a thing might be constructed.

I have presented on the topic of the structure of an AI agent before. I have been trying to flesh out some aspects of what many might call the most fundamental component: the reasoning and learning engine. As I write that, it occurs to me that these may well be two separate systems, but I will continue as though they were one. By learning here, I don't just mean laying down and accessing memories, but rather the act of mapping a new situation onto an existing conceptual framework, either by creating new conceptual substructures or by recognising the applicability of an existing conceptual substructure.

Along the way, a number of people have offered their insights into what may be the best way to perform reasoning and learning, but I'm not convinced anyone has solved the issue of conceptual structure yet.

I have come to think that any AI system will need, in addition to generalised learning tools, a number of pre-written or pre-constructed concepts and processes for understanding information. An example of a conceptual construct may be a Bayesian Network. The agent might go -- oh hey look, here's a situation with a bunch of discrete inputs which I can recognise, and a few output states! Great, I can map this new situation onto a Bayesian Net and learn some appropriate responses.

Going further, the system may be able to recognise a new situation as being related to a known situation. Oh great -- this situation is really a variation on the Poker Card Game! I'll copy that network and get a head start on my learning. I'll just set these probabilities to .5 since they're unknowns and away I go.
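To make the idea concrete, here is a minimal sketch of that "copy and reset" step. The structure, the situation names, and the 0.5 prior are all taken straight from the thought experiment above; everything else (the dict-based conditional probability table, the function names) is invented for illustration, not a real Bayesian-net library:

```python
import copy

# A toy "conceptual structure": a Bayesian-net-like node with a
# conditional probability table (CPT) mapping tuples of discrete
# inputs to P(output = True). All names are invented examples.
poker_concept = {
    "inputs": ("strong_hand", "opponent_raised"),
    "output": "should_bet",
    "cpt": {
        (True, True): 0.6,
        (True, False): 0.9,
        (False, True): 0.1,
        (False, False): 0.4,
    },
}

def adapt_concept(known, new_inputs, new_output):
    """Copy a known structure to get a head start on a related
    situation, setting every probability to 0.5 (unknown)."""
    adapted = copy.deepcopy(known)
    adapted["inputs"] = new_inputs
    adapted["output"] = new_output
    # A smarter agent would carry probabilities across for inputs it
    # can map onto the old situation; this naive version keeps only
    # the table's shape and marks every entry as unknown.
    adapted["cpt"] = {combo: 0.5 for combo in known["cpt"]}
    return adapted

# A new situation recognised as "really a variation on poker":
bluff_concept = adapt_concept(
    poker_concept, ("strong_position", "rival_bid"), "should_bid"
)
```

The interesting part, as the next paragraph notes, is everything this sketch takes as given: where the input schema and output states come from in the first place.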

However, what's not clear to me is how an agent could possibly go about choosing the appropriate input schema, coming up with its own output states, or inferring network structure. This problem extends to many kinds of reasoning engine -- rule-based systems, ANNs, and others.

Clearly, there is no One True Network to rule them all. I don't know whether anyone can conceive of any structure which is inherently able to perform all thinking. The human brain doesn't appear to be built that way either. To my understanding, it is born with some inherent structures plus the ability to learn. It demonstrates remarkable plasticity and regenerative capacity, but it's still the case that there are certain physical areas which strongly tend to be responsible for particular kinds of thinking.

It is also probably true that no-one is ever going to have the kind of direct insight needed to 'just come up with' a fully generalisable reasoning engine capable of learning how to deal with any situation.

However, it does seem to me to be possible to proceed along the following path:

* Identify some situations
* Write conceptual structures which can reason about those situations
* Write additional software which attempts to map new situations onto existing situations
* Write software which is capable of evolving new conceptual structures to some extent
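The third step above -- mapping new situations onto existing ones -- could be sketched very naively as a similarity search over feature sets. The situations and features here are invented examples; a real agent would need a far richer representation than flat sets and Jaccard similarity:

```python
# A library of known situations, each described by a set of
# recognisable features (all names are invented for illustration).
known_situations = {
    "poker": {"discrete_inputs", "hidden_state", "betting", "turn_taking"},
    "weather_report": {"numeric_inputs", "time_series", "text_output"},
}

def jaccard(a, b):
    """Similarity of two feature sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(new_features, library):
    """Return the name of the known situation most similar to the
    new one, as a crude stand-in for 'recognising applicability'."""
    return max(library, key=lambda name: jaccard(new_features, library[name]))

new_situation = {"discrete_inputs", "betting", "bluffing", "turn_taking"}
print(best_match(new_situation, known_situations))  # -> poker
```

Once a match is found, the fourth step (evolving new structures) would take over where no existing structure scores well enough.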

It seems as though learning new things needs a structure to cling to -- like evolution. It's very difficult to cross certain functional divides solely through a process of evolution. But unlike organisms, which merely live, reproduce and die, conceptual structures can do more than that: we can use our insight as humans to build more advanced structures more quickly than they might arise through chance. If we see the mind itself as consisting of a low-level, always-on processing algorithm in which a multitude of conceptual structures exist, I think that could help. We should be able to give any AI a head start by building some specific conceptual structures while still allowing others to evolve and grow.

Thursday, April 2, 2009

Quote: '''Open science refers to information-sharing among researchers and encompasses a number of initiatives to remove access barriers to data and published papers, and to use digital technology to more efficiently disseminate research results. Advocates for this approach argue that openly sharing information among researchers is fundamental to good science, speeds the progress of research, and increases recognition of researchers. Panelists: Jean-Claude Bradley, Associate Professor of Chemistry and Coordinator of E-Learning for the School of Arts and Sciences at Drexel University; Barry Canton, founder of Gingko BioWorks and the OpenWetWare wiki, an online community of life science researchers committed to open science that has over 5,300 users; Bora Zivkovic, Online Discussion Expert for the Public Library of Science'''