Many materialists believe that we should, in principle, be able to build a conscious computing machine. Others disagree. I favour a sceptical position, but of another variety. The problem isn't that it would be impossible to create a conscious computer. The problem is that we cannot know whether it is possible. There are principled reasons for thinking that we would never be able to confirm that allegedly conscious computers were conscious. The proper stance on computational consciousness is agnosticism.

Despite this agnosticism, I think we are very close to understanding the material basis of consciousness. Close, but we will never get all the way there. Our understanding of the material basis of consciousness is ineluctably incomplete. That makes me a mysterian. But I am not a defeatist mysterian. I do not think the irresolvable mysteries of consciousness prevent us from formulating concrete, empirically grounded theories of consciousness. I will even outline such a theory below.

I also think we can identify some properties that are necessary for consciousness, and some that are sufficient. The problem is that we cannot find properties that are both. This may sound like a contradiction, but it is not. We can know necessary properties and sufficient properties without knowing necessary and sufficient properties. One can know that drinking two litres of alcohol is sufficient for intoxication, and that at least one teaspoon of alcohol is necessary, without knowing the quantity that is both necessary and sufficient.

In the case of consciousness, the properties we know to be sufficient are properties that computers lack. The properties we know to be necessary can be possessed by computers, but we don't know whether those properties suffice. We can equip computers with every knowable necessary property and still scratch our heads when asked whether we have managed to create artificial experience.