Sometimes it can amaze you with how much it knows, and sometimes it can baffle you with the stupidity of its algorithm for selecting which questions to ask.
One of the most interesting parts of it is the information it gives you at the end, about what the AI has concluded based on your answers.

One particular time, the object I selected was fascism. My responses to its questions began like this:

Hmm. A mountain would probably be a mineral, not an abstract concept, and not a state of mind, and I might even love it, but hey, the computer can feel free to disregard everything I say.

By the 20th question, it had decided I was thinking of death, but the program goes on for another 10 questions if it doesn't get the answer in 20. Among those follow-up questions was "Can it be used more than once?" My answer of "Yes" might have ruled out death if it had asked that earlier.

And on the 30th question, after its questions had started to indicate that it might be on the right track: "Is it Mount Rushmore?"

I had even said it wasn't a mountain. Mount Rushmore must have a whole lot to do with fascism. Maybe if you look at it from a distance, Lincoln's head looks like Mussolini.

So I told the site I was thinking of fascism, and proceeded to the conclusions page. This page begins with "spontaneous knowledge," where it draws conclusions about questions it never actually asked about your object. This part is always amusing. Among its conclusions were:

And then there's the Contradictions section, where it lets you know if it disagrees with some of your answers based on what other people have said. It states what it believes instead, such as "Emergency exits are not pleasurable" or "You could use a toothbrush with your friends." In this case, despite all the mistaken associations with mountains, it disagreed with me on only one thing: