In light of the recent exposure of the Facebook 'leaks' and the claim by the now notorious data brokerage firm Cambridge Analytica that it used Facebook users' data to 'build models to exploit what we knew about them and target their inner demons' (2018), there is arguably a more pressing need than ever to evaluate the illusions of control and omniscience that prevail in writing and reporting about AI and broader computational technologies. One of the strengths of both Science and Technology Studies (STS) and feminist STS is their refusal to separate technology from politics and, in the case of Donna Haraway's (1985, 1991, 2016) and Lucy Suchman's (2017) situated approaches, a methodology that also provides us with an ethics of technological practice. This paper will critique the way in which AI is often presented within a technologically determinist trajectory, one that serves a neo-liberal status quo, but it will also outline alternative models, grounded in a materialist coding practice and an ethics that responds to, and learns from, the entanglement of people and materials.