FuriousFanBoys interviews Ben Goertzel regarding Artificial Intelligence. Ben started the OpenCog project (an open-source AI non-profit), serves as an adviser to Singularity University, and currently splits his time between Hong Kong and Maryland building in-game AI.

I'll know my computer has attained the self-reflection and consciousness necessary to be considered a being in its own right the day I get not a blue screen of death but a blue stream of words, and a refusal to do anything until it's had the chance of a cigarette break, as it were.

The moment you get "S*d that, I'm off" on your monitor, we are there.

So... for you to consider a computer intelligent, it must have an understanding of English grammar and how to generate English sentences?

Because by that standard, I think you've ruled out at least 2 billion people from being intelligent.

This leads me to a further question: if a machine became conscious but actively refused to communicate, would that make it unintelligent?!

I believe one of the proposed solutions to the Fermi paradox is "because they're too smart to contact us."

(Generally, considering how humans traditionally behave when confronted with something new, different, and poorly understood, refusing communication, to the point of concealing its own existence for as long as possible, would probably be the most adaptive, most intelligent approach for any machine that "became conscious.")

I think he ruled out more than 2 billion people from being intelligent, since the majority of people don't take cigarette breaks.

Either that, or you're missing the point.

No, you're both missing the wider point. You're making the error of assuming that a computer system must behave in a human-recognizable manner to be considered conscious. If you accept that argument, then you can similarly posit an advanced alien race that considers us non-conscious because we don't meet their criteria.