In each part, I plan to discuss the problem, the strategy to solve it, the choice of machine learning technique, and the main configuration issues participants need to understand to successfully deploy machine learning applications. I will also show snippets of the code used.

I expect the audience to be mainly Perl-savvy people. However, the talk is open to everyone attending the conference, so some people in the audience might not be familiar with Perl.

The talk is scheduled to last 45 minutes. I plan to cover each part in about 10 minutes, leaving between 5 and 10 minutes for questions and answers. I do not plan to explain the snippets in detail because I do not have enough time; however, I will make the code available to all those interested. My questions for you, Fellow Monks, are:

If you were attending this session, would you expect me to describe the code in detail?

Do you think it is a good strategy to concentrate on the machine learning part rather than on the Perl part?

What suggestion do you have in terms of points that I should (should not) cover?

Provide a reason for using Perl versus something else, and for the modules you chose (I know several don't have alternatives). Also make sure that the FCM algorithm gets across despite any possible language barriers that may exist in your audience. I suggest showing a flowchart of the algorithm before the Perl implementation and then highlighting some of the stages within the Perl. Also check your function header for the fcm function; I don't think it is accurate.

Regarding the SVM part, try to explain SVM better than the Wikipedia article does. I just couldn't grok it, so I don't have much else to say. Perhaps explain why you're using IPC::Open3 to talk to a library rather than XS or Inline?

The third part seems rather easy to understand if one has a basic knowledge of ANNs and how they're represented mathematically. The one major inconsistency I find is that you talk a lot about doing things with data, but what data will you use? Will it be the stock market data mined at the beginning of Part I, for consistency, or will you use simpler data later on to let the points shine through?

I will address your comments one by one. Please let me know if I miss something ;-)

Provide a reason for using Perl versus something else, and for the modules you chose (I know several don't have alternatives).

About Perl: I want to show that Perl is a valid alternative for machine learning. I do not claim that Perl is the best option for every single application in which you might want to use machine learning. However, I claim that Perl can shine in different respects, which relates to your second comment. The modules were selected to show different ways in which you can use Perl for machine learning (they represent only one of the many ways to do things in Perl):

For data gathering, visualization, and analysis (Part I): it really is easy to mine the web for data using Perl. Once you have the data, you can easily transform it into a format that facilitates further analysis. Perl also lets you quickly plot the data to facilitate collaboration with the problem-domain expert. The choice of Fuzzy C-Means (FCM) for data analysis has to do with my expertise in using it to make sense of data ;-) Writing an FCM implementation in Perl was one of the first things I did when learning Perl, so I am really proud of it :-)
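To give a taste of how little code FCM needs, here is a minimal sketch of the membership-update step. This is my own toy version for illustration (the fuzzifier m and the sample numbers are made up), not the implementation from the talk:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Membership update for Fuzzy C-Means: u_i = 1 / sum_k (d_i/d_k)^(2/(m-1)),
# where d_i is the distance from the point to center i and m > 1 is the
# fuzzifier. Assumes the point does not coincide with any center.
sub fcm_memberships {
    my ($point, $centers, $m) = @_;
    my @d = map { _dist($point, $_) } @$centers;
    my @u;
    for my $i (0 .. $#d) {
        my $sum = 0;
        $sum += ($d[$i] / $_) ** (2 / ($m - 1)) for @d;
        push @u, 1 / $sum;
    }
    return \@u;
}

sub _dist {    # Euclidean distance between two array refs
    my ($p, $q) = @_;
    my $s = 0;
    $s += ($p->[$_] - $q->[$_]) ** 2 for 0 .. $#$p;
    return sqrt $s;
}

# A point equidistant from two centers belongs 50/50 to each cluster:
my $u = fcm_memberships([1, 1], [[0, 0], [2, 2]], 2);
printf "%.3f %.3f\n", @$u;    # 0.500 0.500
```

The full algorithm just alternates this update with recomputing the centers as membership-weighted means until the memberships stop changing.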

For Decision Support Systems (Part II): here, instead of using one of the CPAN modules for Support Vector Machines (SVM), I decided to call an SVM binary using IPC::Open3. The main reason for doing so is that I want to show that you can easily call applications written in other languages from Perl. This is just another way of using Perl for machine learning: you do the data gathering and preparation in Perl and then call an application written in another language. The data for this part consists of image data and clinical records of patients with scoliosis who participated in a study we did at my University. Note: the data is not publicly available because we do not have ethics approval to release it; our ethics approval covers data analysis in our lab only.
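Since a couple of people asked about IPC::Open3, here is a minimal sketch of the pattern. In the talk the child process would be the SVM binary; here a throwaway perl one-liner stands in for it so the snippet runs anywhere:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

# Spawn a child process with separate stdin/stdout/stderr handles.
# The perl one-liner below is a stand-in for the real SVM binary.
my $err = gensym;
my $pid = open3(my $to_child, my $from_child, $err,
                $^X, '-ne', 'print uc');   # child: upper-cases its stdin
print {$to_child} "class 1\n";             # feed the child its input
close $to_child;                           # signal end of input
my $result = <$from_child>;                # read the child's answer
waitpid $pid, 0;                           # reap the child
print $result;                             # CLASS 1
```

The same plumbing works for any external tool: write the prepared data to the child's stdin (or to files it expects), then parse whatever it prints back.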

For Pattern Recognition (Part III): the choice of writing my own radial basis function neural network code has to do with the fact that I like to learn by doing. Again, I translated some old code of mine to Perl. The data for this part comes from Environment Canada. The problem we wanted to solve was to classify storm cells into one of four possible classes: Hail, Rain, Tornado, or Wind.
Note: this data is not publicly available. It belongs to Environment Canada.
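For the curious, the forward pass of a radial basis function network is only a few lines of Perl. The centers, width, and weights below are made-up toy values of mine, not the storm-cell model:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# RBF network forward pass: each hidden unit fires
# exp(-||x - c||^2 / (2*sigma^2)) around its center c, and the
# output is a weighted sum of those activations.
sub rbf_forward {
    my ($x, $centers, $sigma, $weights) = @_;
    my $y = 0;
    for my $i (0 .. $#$centers) {
        my $d2 = 0;
        $d2 += ($x->[$_] - $centers->[$i][$_]) ** 2 for 0 .. $#$x;
        $y += $weights->[$i] * exp(-$d2 / (2 * $sigma ** 2));
    }
    return $y;
}

# A point sitting on the first center: that unit fires fully,
# the distant one barely contributes.
my $y = rbf_forward([0, 0], [[0, 0], [3, 3]], 1.0, [1.0, 1.0]);
printf "%.4f\n", $y;    # 1.0001
```

Training then amounts to picking the centers (e.g. by clustering) and fitting the output weights, with one output unit per storm class.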

Also make sure that the FCM algorithm gets across despite any possible language barriers that may exist in your audience. I suggest showing a flowchart of the algorithm before the Perl implementation and then highlighting some of the stages within the Perl. Also check your function header for the fcm function; I don't think it is accurate.

Explaining the FCM should not be that hard, considering that I have several years of experience presenting my research with it to general and scientific audiences. Regarding the function header, you are right; I will fix it as soon as I can.

Regarding the SVM part, try to explain SVM better than the Wikipedia article does. I just couldn't grok it, so I don't have much else to say. Perhaps explain why you're using IPC::Open3 to talk to a library rather than XS or Inline?

I will do my best! I like to explain the SVM by comparing it with a neural network classifier on a two-class classification problem. In particular, I like to stress that while the outputs of the neural network classifier can be obtained using any plane that separates the two classes, the outputs of the SVM are obtained using the plane that maximizes the separation between the classes.
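The maximum-margin idea can be made concrete in a few lines: the distance of a point x to the plane w·x + b = 0 is |w·x + b| / ||w||, and the SVM picks the plane whose closest training point is farthest away. The two candidate planes below are toy examples of mine, not from the talk:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Margin of a separating plane w.x + b = 0 over a set of 2-D points:
# the smallest distance |w.x + b| / ||w|| among the points.
sub margin {
    my ($w, $b, $points) = @_;
    my $norm = sqrt($w->[0] ** 2 + $w->[1] ** 2);
    my $min;
    for my $p (@$points) {
        my $d = abs($w->[0] * $p->[0] + $w->[1] * $p->[1] + $b) / $norm;
        $min = $d if !defined $min || $d < $min;
    }
    return $min;
}

my @pts = ([0, 0], [2, 2]);                  # one point from each class
printf "%.3f\n", margin([1, 1], -2, \@pts);  # plane x+y=2: margin 1.414
printf "%.3f\n", margin([1, 0], -1, \@pts);  # plane x=1:   margin 1.000
```

Both planes separate the two points, but the SVM would prefer the first one because its margin is larger; that is the whole contrast with a plain neural network classifier, which would accept either.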

Regarding the use of IPC::Open3, I already explained that when answering your first set of comments.

The third part seems rather easy to understand if one has a basic knowledge of ANNs and how they're represented mathematically, the one major inconsistency I find is you talk a lot about doing things with Data, but what data will you use? Will it be the stock market data mined in the beginning of PartI for consistency or will you use simpler data later on to allow the points to shine through?

As I mentioned above, the data for Parts II and III are different from that in Part I. For Part II, I will use clinical data; for Part III, weather data. In my experience, the data in Part II is the most complex, followed by that in Part III; the data in Part I is the simplest of the three.

It is a very good thought, indeed. However, you would need to think carefully and extensively about what kind of features the articles you are interested in have in common. You could use some sort of data clustering (FCM, maybe?) to help you with this task. You would then need to find a way to extract those features consistently. Finally, you could use a classifier to filter the raw data and present you only with the stuff you are interested in. When you design the classifier, try to incorporate a confidence index that tells you how reliable the results are. That way, you could play with the outputs until you are happy with the results. Does it make sense?
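As a sketch of what I mean by a confidence index (this is just one simple normalisation I made up for illustration, not a standard recipe): report the winning class together with its share of the total score, so you can threshold on it later.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Turn raw (non-negative) per-class classifier scores into a winning
# class plus a confidence in [0, 1]: the winner's share of the total.
sub classify_with_confidence {
    my %score = @_;    # class name => raw classifier score
    my $total = 0;
    $total += $_ for values %score;
    my ($best) = sort { $score{$b} <=> $score{$a} } keys %score;
    return ($best, $score{$best} / $total);
}

my ($class, $conf) = classify_with_confidence(interesting => 8, boring => 2);
printf "%s (%.0f%%)\n", $class, 100 * $conf;    # interesting (80%)
```

You would then only surface articles whose confidence clears whatever threshold keeps you happy with the filtering.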

It does make sense... at least as far as my (quite limited) knowledge of ML goes :-)

One of my always-backburnered thoughts was to build a neural-net-backed "observer" that would watch my browsing habits for a few months, noting things like how long I spend on a particular page, whether I follow links from it, etc., and from that be able to make predictions on stuff I might be interested in.

Sounds quite interesting. Any chance of videotaping it? You could put it on your site, YouTube (though low-res), Zudeo, Democracy Player, etc. I enjoyed watching video on my iPod of sessions I missed at YAPC::Asia, although the code projected on the screen was too small to see. Certainly a bunch of videos on interesting subjects in Perl could be a great way to introduce people to it.

By the way, I did a survey of natural language parsing programs in Perl a while back, just as an initial dip into it, but never actually had an opportunity to use those tools. I don't remember if it is Perl (a lot are Java, but I think this one is not), but have you used the UK program GATE, and are you going to talk about that sort of thing (head parsing, automatic categorization/chunking, extraction of key noun phrases, etc.)? I am not familiar with the apps you are talking about, but am quite interested in how to easily incorporate machine learning into my Perl systems.

Sounds quite interesting. Any chance of videotaping it? You could put it on your site, YouTube (though low-res), Zudeo, Democracy Player, etc. I enjoyed watching video on my iPod of sessions I missed at YAPC::Asia, although the code projected on the screen was too small to see. Certainly a bunch of videos on interesting subjects in Perl could be a great way to introduce people to it.

I would have to ask the organizers. Thanks for bringing that to my attention.

By the way, I did a survey of natural language parsing programs in Perl a while back, just as an initial dip into it, but never actually had an opportunity to use those tools. I don't remember if it is Perl (a lot are Java, but I think this one is not), but have you used the UK program GATE, and are you going to talk about that sort of thing (head parsing, automatic categorization/chunking, extraction of key noun phrases, etc.)? I am not familiar with the apps you are talking about, but am quite interested in how to easily incorporate machine learning into my Perl systems.

I am not familiar with GATE. However, it looks to me like it is written in Java; at least that is what their SourceForge page says ;-) As for natural language processing, I am not going to talk about it in this presentation; I decided to focus only on problems I have already worked on. In any case, natural language processing is certainly something I am interested in, so I will be more than happy to read about any leads you have in that area.

This sounds like an interesting talk and I hope to see it. It will be a great talk if I leave excited about what I can do with the tools, how I can get started, how to scale up from my starting point, and what to do if I get stuck.

Thanks for this opportunity to make requests about your talk. Here are some ideas, both general and specific. Please feel free to use or ignore any of them.

When I see a talk like this, I want to learn things that will help me get started more quickly if I decide to do something similar. I like to learn the approach for a minimal working example (like the synopsis). Then I am ready to hear a story about what you did to scale it up to solve a real-world problem. Typically I wouldn't really care about your exact example problem; I just want to explore the boundaries. How well does it scale?

I want inside information that isn't in the documentation. For example, when you have a problem, how do you get good help? Is the code problematic on some platforms? What tools, libraries and skills are needed to make the system work? Do APIs break at each release, or are they stable?

For example, I think that PGPlot is great and I use it, but it can be hard to build and get working. If you really need the features, use it; but if you don't, there are much easier alternatives. It has the classic build problem of a large number of options and many dependencies, and I haven't found a simple way to use it on simple problems. It would be of great value to me if you could explain an easy way to build and use PGPlot on Windows, Linux, and OS X ;-). This is the kind of inside information that makes attending the conference a good investment.

Perl is great for gathering huge amounts of data. The challenge quickly becomes dealing with problems in the dataset, for example those caused by network and server outages or other annoyances.

It would be good to hear about SVM and what sort of problems it is good for. For example, how much data do you need? How much more data is needed for each new feature? What if your data isn't perfectly clean? What types of data are usually used? When would I want to use Perl with LIBSVM instead of R or Matlab?

Wish I could attend.
Since you want to cover a lot of ground, I would stress the fact that the entire project is doable in Perl, i.e. that the different kinds of libraries you need are all available and easy to use and install. So I would go for the combination of architecture and high-level solution in Perl. If the people are Perl-savvy enough, they should be able to read the code details; otherwise, they won't follow them anyhow.

Tabari