ECTOR learns from what people say. It is based on an artificial intelligence architecture inspired by Copycat, an AI system by Mitchell and Hofstadter. The Concept Network it uses is a mix between neural and semantic networks: it uses co-occurrences to compute the influence of one semantic node on another, and the links are statistically weighted. So ECTOR does not know anything at its "birth". Notice: this is the beginning of the port from cECTOR.
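A co-occurrence-weighted network like the one described above can be sketched in a few lines. This is only an illustration of the idea (class and method names are mine, not ECTOR's actual API): links between nodes simply count how often two tokens are seen together, and that count stands in for one node's influence on another.

```python
from collections import defaultdict

class ConceptNetwork:
    """Toy sketch of a co-occurrence network (not ECTOR's real API)."""

    def __init__(self):
        # (node_a, node_b) -> co-occurrence count, used as link weight
        self.weights = defaultdict(int)

    def observe(self, tokens):
        # Every ordered pair of tokens seen together reinforces its link.
        for i, a in enumerate(tokens):
            for b in tokens[i + 1:]:
                self.weights[(a, b)] += 1

    def influence(self, a, b):
        # The statistical weight of a's link toward b.
        return self.weights[(a, b)]
```

With repeated observations, frequently co-occurring words end up with heavier links, which is the statistical weighting the description refers to.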

For the current development version, see the svn checkout.

A Simple Chatbot in 71 lines of Python. A naive chatbot program.

No parsing, no cleverness, just a training file and output. It first trains itself on a text, then uses the data from that training to generate responses to the interlocutor's input. The training process creates a dictionary where each key is a word and the value is a list of all the words that follow that word anywhere in the training text. If a word appears more than once in such a list, it is proportionally more likely to be chosen by the bot: no probabilistic machinery is needed, a plain list does the weighting.
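The training step described above can be sketched as follows (the function name is mine, not necessarily what the 71-line script uses). Note that successors are appended with duplicates kept, so a uniform random choice later is automatically frequency-weighted:

```python
from collections import defaultdict

def train(text):
    """Map each word to the list of words that follow it in the text.

    Duplicates are kept on purpose: a successor that occurs often
    appears often in the list, so picking uniformly at random from
    the list favours it without any explicit probabilities.
    """
    successors = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return dict(successors)
```

For example, training on `"a b a c a b"` gives `{"a": ["b", "c", "b"], "b": ["a"], "c": ["a"]}`, so a response starting from "a" is twice as likely to continue with "b" as with "c".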

The bot chooses a random word from your input and generates a response by repeatedly choosing another random word that has been seen to follow its current word. It isn't very realistic, but I hereby challenge anyone to do better in 71 lines of code! Its responses are impressionistic, to say the least! I used War and Peace as my "corpus", which took a couple of hours for the training run; use a shorter file if you are impatient…
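The response generation described above can be sketched like this (assuming a successor table of the kind produced in training; names and the fallback behaviour are my assumptions, not the original script's):

```python
import random

def respond(table, user_input, length=20):
    """Seed from a random known word in the input, then walk successors."""
    known = [w for w in user_input.split() if w in table]
    # Fall back to any known word if nothing in the input was seen in training.
    word = random.choice(known) if known else random.choice(list(table))
    reply = [word]
    for _ in range(length - 1):
        choices = table.get(word)
        if not choices:
            break  # dead end: the word never had a successor in training
        word = random.choice(choices)  # duplicates in the list weight this pick
        reply.append(word)
    return " ".join(reply)
```

Because the walk only ever looks one word back, the output is locally plausible but globally rambling, which matches the "impressionistic" responses described above.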
Speech2txt. How to get free speech recognition. In this article, I show you how to use a speech-recognition tool for free by using Google's API, and I give you the script that will let you do it from your own computer.

Just yesterday, Google pushed version 11 of their Chrome browser into beta, and along with it one really interesting new feature: support for the HTML5 speech input API. This means that you'll be able to talk to your computer, and Chrome will be able to interpret it. This feature has been available for a while on Android devices, so many of you will already be used to it and will welcome it in the browser. If you're running Chrome version 11, you can test out the new speech capabilities on the simple test page on the html5rocks.com site.

Genius! I found the files I was looking for in the chromium source repo. It looks like the audio is collected from the mic and then passed via an HTTPS POST to a Google web service, which responds with a JSON object containing the results. If that's the case, there should be no reason why I can't just POST something to it myself. To run it, just do:
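The "POST it myself" idea could be sketched as below. Note the endpoint URL, the `lang` parameter, and the FLAC content type are assumptions based on what the chromium source appeared to show at the time; Google has never documented this service for public use, and it may reject or change anything here.

```python
import json
import urllib.request

# Assumed endpoint and parameters, inferred from the chromium source;
# not an officially documented API.
SPEECH_URL = "https://www.google.com/speech-api/v1/recognize?lang=en-US"

def build_request(audio_bytes, rate=16000):
    """Wrap raw FLAC audio in an HTTPS POST, as Chrome appears to do."""
    return urllib.request.Request(
        SPEECH_URL,
        data=audio_bytes,
        headers={"Content-Type": "audio/x-flac; rate=%d" % rate},
    )

def transcribe(audio_bytes):
    """POST the audio and decode the JSON result (performs a network call)."""
    with urllib.request.urlopen(build_request(audio_bytes)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Feeding `transcribe()` the bytes of a short FLAC recording should, if the guess about the service is right, return a JSON object with candidate transcriptions.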