At the end of the day, an “intelligent machine” is an oxymoron: intelligence is precisely what cannot be encoded in rules or algorithms. Computers are good at what they do not because they can “think” (they can't) but because they have the huge advantage over us of being rigidly, reliably unthinking. Those of us who lived through the darkest days of the 2008 financial crisis at first hand will recall the catastrophic consequences of blindly delegating decision making to arcane algorithms that were never intended to be deployed without a good deal of nuanced interpretation, intuition and judgement, taking cognizance of idiosyncratic local conditions. No machine, however “intelligent,” can ever do this on our behalf, or anything like it. In his paper [see below], “Icarus: Disembodied Knowledge, Bureaucratic Thinking, and the Hopeful Return to Reality,” the cybernetician Dr James Wilk, Liveryman and seasoned adviser on digital transformation, explores the requirements for deploying AI intelligently, the historical roots of our present digital malaise, and the fallacious tacit philosophical assumptions behind it.