Parry

Parry is a natural language program that simulates the thinking of a paranoid individual. This thinking entails the consistent misinterpretation of others' motives – others must be up to no good, they must have concealed motives that are dangerous, and their inquiries into certain areas must be deflected – which Parry achieves via a complex system of assumptions, attributions, and "emotional responses" triggered by shifting weights assigned to verbal inputs.

Parry was the first program to pass the Turing Test. In the early seventies, human interrogators, interacting with the program via remote keyboard, were unable to distinguish Parry from an actual paranoid individual with more than random accuracy.

Additional comments on developer Kenneth Mark Colby:

Fifty years ago there was only one psychiatrist thinking about the ways in which computers could contribute to the understanding of mental illness: Kenneth Mark Colby. Thus began a project that lasted until his death in 2001.

Kenneth Colby graduated from Yale University in 1941 and from Yale Medical School in 1943. He practiced psychoanalysis for the first several decades of his career, and was a clinical associate at the San Francisco Institute of Psychoanalysis. But Colby became disenchanted with psychoanalysis because, in his view, it failed to satisfy the most fundamental requirement of a science: the generation of reliable data.

In 1961 he spent a year as a Fellow at the Center for Advanced Study in the Behavioral Sciences, where he developed several of the ideas that were to inform the rest of his career. Among these was the conviction that computer models of the mind promised a more scientific approach to the study of cognitive processes and their aberrations. Following this conviction, he joined the Department of Computer Science at Stanford University in the early sixties, and soon became a pioneer in the emerging field of artificial intelligence. In 1967 the National Institute of Mental Health recognized his research potential when he was awarded a Career Research Scientist Award.

At the Stanford Artificial Intelligence Laboratory, Colby created "Parry," the natural language program described above, which simulated paranoid thinking via a complex system of assumptions, attributions, and "emotional responses" triggered by shifting weights assigned to verbal inputs. This program was the first to pass the "Turing Test" (named for the British mathematician Alan Turing, who defined as "intelligent" any computer that could successfully impersonate a human in a typed "conversation"). Parry did so in the early seventies, when human interrogators, interacting with the program via remote keyboard, were unable to distinguish Parry from an actual paranoid individual with more than random accuracy.
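The weighted-input mechanism described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not Colby's actual implementation: the variable names, trigger words, and weights are assumptions. Published descriptions of Parry mention internal fear, anger, and mistrust levels that rise in response to sensitive topics and insults and that steer which class of response the program gives.

```python
# Illustrative sketch of a Parry-style affect model. The trigger words and
# weights below are invented for demonstration; only the general idea --
# emotional levels shifted by weighted verbal inputs -- reflects Parry.

FLARE_TOPICS = {"mafia": 0.4, "police": 0.3, "gambling": 0.2}  # hypothetical weights
INSULTS = {"crazy", "liar", "nuts"}                            # hypothetical triggers

class ParanoidModel:
    def __init__(self):
        # Internal "emotional" levels, each in [0, 1].
        self.fear = 0.1
        self.anger = 0.1
        self.mistrust = 0.2

    def react(self, utterance: str) -> str:
        words = utterance.lower().split()
        # Sensitive ("flare") topics raise fear; insults raise anger;
        # both raise mistrust. Weights accumulate across the conversation.
        for w in words:
            if w in FLARE_TOPICS:
                self.fear = min(1.0, self.fear + FLARE_TOPICS[w])
                self.mistrust = min(1.0, self.mistrust + 0.1)
            if w in INSULTS:
                self.anger = min(1.0, self.anger + 0.3)
                self.mistrust = min(1.0, self.mistrust + 0.2)
        # The dominant emotion selects the response class.
        if self.fear > 0.5:
            return "I don't want to talk about that."   # deflection
        if self.anger > 0.5:
            return "You have no right to say that."     # hostility
        return "Why do you ask?"                        # guarded default
```

Because the levels persist between turns, repeated probing of a sensitive topic eventually tips the model into deflection even when a single mention would not – which is the behaviour the interrogators in the Turing Test experiments were trying to tell apart from a real patient's.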

Professor Colby came to UCLA as a professor of psychiatry in 1974, at the invitation of Jolly West, M.D., then chair of the Department of Psychiatry and director of the Neuropsychiatric Institute. He was jointly appointed professor in the Department of Computer Science a few years later, and continued to work on the theory and application of artificial intelligence in neuropsychiatry. During his tenure at UCLA, Colby published important theoretical works on the unreliability of diagnosis in psychiatry, and on cognitive science and psychoanalysis. He was one of the first to appreciate the possibilities of computer-assisted psychotherapy, and with his son Peter created a program called "Overcoming Depression," which included a natural language component that administered a version of "cognitive-behavioral" therapy for depression. He and Peter formed Malibu Artificial Intelligence Works in 1989, and they continued to develop and market "Overcoming Depression" until Professor Colby's death at age 81, on April 20, 2001.


Privacy statement

Summary:

We will NEVER spam you, nor publish or sell your details to any third party. We hate spam just as much as you do.

What data does chatbots.org store?

We store all the details you enter on chatbots.org in our database, and we maintain statistics of your visits with the sole purpose of giving you the best personalized service possible.

How does chatbots.org store my data?

We use ExpressionEngine, one of the largest weblog publication systems in the world; US President Barack Obama has used it for WhiteHouse.gov. Our system runs on a MySQL database.

How do I access my data?

If you are a chatbots.org member, you can access your personal data through your account panel after you log in. Additionally, your statistics (number of visits, number of reactions, duration of your visits, etc.) will be accessible to you in the future. If you are a guest, please contact Erwin van Lun, founder and managing director of chatbots.org, with your question.

What data is shown?

Chatbots.org allows members to build their profile on a dedicated profile page and show it to the outside world, helping them build their reputation as chatbot experts. Members can also turn this option off if they prefer. However, when members have written a post or a reaction, the name they’ve entered in their profile will always be shown, including a link to their profile. If people click this link, a blocked page may be shown (depending on the member’s preference). We will create an ‘alias’ option in the future for those members who do not want to use their real names, but we strongly believe that professionals should reveal their identity.

If you aren’t a member, your e-mail address will be necessary when you leave a comment on the site: for follow-up comments, for any questions we might have about your comment (which isn’t very likely), or for direct reactions.

When will you use my contact details?

Whether you are a member or a guest, we probably know your e-mail address and in some situations also your telephone number or residential address details. We will NEVER spam you, nor publish or sell your details to any third party. We hate spam just as much as you do.

We will use your e-mail address to notify you about new comments on a post you commented on earlier (you can turn this option off for each article), for account settings confirmations (if you’ve changed your password, for example), or for occasional notifications about major changes to the site (typically 1–5 per year). Obviously, we’ll use your e-mail address to send you the e-mail newsletters you’ve subscribed to. We may also approach you if you’ve left some brilliant comments on the site: we might want to work with you! We will not send you product or service offerings.

We will use your telephone details only after we’ve tried to contact you via e-mail and that e-mail bounced, resulted in other error messages, or simply went unanswered for some reason. If we have the impression your e-mail address no longer works, we might contact you by phone.

We will use your residential address details when we need to ship something to you that we can’t send by e-mail. Additionally, we’ll mention your address on invoices.

Who can modify my data?

We have a very small team (typically at most five people) with access to your personal data. Please contact Erwin van Lun, founder and managing director of chatbots.org, for the most recent list of individuals who can access your data. If you have subscribed to one of our newsletters, details such as your name and e-mail address will be made available to our e-mail service provider for single use.

Is my data secure?

We’ll do all that we reasonably can to protect your data – “reasonably” because we are not a large international financial institution or a military organization. You can expect us to follow all ExpressionEngine security guidelines, make backups, and never provide passwords to other individuals.

I received comment spam!

Unfortunately, third parties try to pollute the web by placing comment spam (comments posted by robots, with links to dubious websites) on websites, and thus also on chatbots.org. We’ll do everything we can to prevent comment spam while also avoiding barriers for people who want to react (making reactions too secure or too complex would kill the dynamics of the site). It’s all about balance. Comment spammers do not have access to your e-mail address, and you can always unsubscribe from notifications on specific postings.

Any other questions?

Please contact Erwin van Lun, founder and managing director of Chatbots.org, if you have any additional questions.


The Chat Bot Future

A chat bot is a humanlike conversational character. Its conversational skills and other humanlike behaviour are simulated through artificial intelligence. It often acts as a virtual assistant, and it can have its own visualisation through an avatar, or it can be faceless. We expect that over the years every conversational chat bot will grow into a real virtual human.