Pages

Monday, 29 September 2014

Moving on with my book, this is the sketch for Chapter 4. Chapters 4 and 5 deal with two aspects of functionalism: here I deal with functionalism in educational thinking - both in the managerialist sense and also in what I call the 'deep functionalism' of cybernetics. In Chapter 5 I will deal with functionalism in the thinking about information. Still lots to do, and hopefully no clangers! (although the good thing about blogging this stuff is that I'm more likely to iron out problems...)

Introduction:
Functionalism and the Question of Efficient Education

Educational institutions are complex. They comprise departments
for academic disciplines, research, marketing, libraries, estates management,
IT deployment, finance and so on. To see each of these as performing a function
with a particular purpose lies at the root of functionalism. Functionalism sees
that the architecture of the enterprise of education may always be improved, in
the same way that the architecture of a business might be improved:
efficiencies can be introduced, often with the help of new technology. Whether using Taylorist scientific management,
analysing the workflows of academics, or even through introducing wholesale Fordist reorganisation of educational processes, optimised and efficient delivery of education has been a driver both for well-intentioned educational reorganisation (to reduce costs to learners) and for (arguably) ill-intentioned managerial intervention with the goal of maximising profits. Here we must ask: can education be 'efficient'? What does efficiency
in education mean?

A common topic of debate concerning efficiency is the expense of education and how it should be paid for. In this debate we typically see arguments about efficiency combined with arguments about fairness. The reason for this is that it is impossible to discuss efficiency in education in isolation from the macro- and micro-economic mechanisms which bear upon the constitution of society in the wake of education, and on the rights and obligations of individuals subject to education. For example, we can ask: if Higher Education is funded by everyone
(through taxes), and not everyone attends university, is it efficient, irrespective of it being fair? To what extent does the answer to that question depend on how the money is used within the institution? To what extent does it depend on what education gives society? The institution of education might appear to be grossly inefficient and an expensive drain on the taxpayer, but to reform it would change its nature, and with it the nature of society: how might the efficiency of higher education be measured against the benefits to the whole of society in such a model? If, alternatively, education is funded by only those who attend, the burden of
funding on the individual is much greater than it would be if it was funded by
everyone. Is this more efficient? What are the knock-on effects on society? If (to take a popular solution to this problem) education is funded on the basis of loans which are taken out and repaid by
those who attend, but who only repay if they later earn enough to afford the
repayments, is this more efficient? And what happens to the money within the institution? If Vice-Chancellors award themselves huge pay rises (as they have done in recent years) is this a mark of efficiency (as VCs usually claim!)? If VCs have led cost-cutting initiatives which have squashed academic salaries, introduced institutional fear and compromised academic freedom, can this be defended as being 'efficient'? The problem is that whilst efficiencies might be sought by identifying the functions of components of education, the function of education itself is contested.

Functionalist thinking can
work at many levels. I will make a simple
distinction between 'shallow functionalism' and ‘deep functionalism’. Shallow
functionalism is the thinking about the functions of the components of the
institution of education, the way that those components inter-operate, and so
on. This is the Taylorist agenda in education. Deep functionalism looks at
similar problems with a greater degree of granularity: what are the components
of learning processes, what are the components of teaching, of the curriculum,
of knowledge itself. Thinking relating to deep functionalism belongs to the
discipline of cybernetics. The relationship between deep functionalism and
shallow functionalism presents new questions about how we think about 'solving
the problem' of education, and how we think about 'identifying the problem' of
education.

Shallow functionalism

The shallow functional problem begins with the way that
institutions and society are organised. We might draw this as a simple hierarchical
diagram with government and education ministers at the top and teachers and
students at the bottom. In between are various organisational units: schools,
universities, colleges, each of which has a head who is responsible for
coordinating the provision of courses, resources, teachers and timetables.
Teachers themselves have the responsibility for coordinating the activities of
their learners, making sure national standards (like exams) are met
satisfactorily, and the expectations of learners (and, in schools, their
parents) are satisfied.

The hierarchy here is clear. Each individual has a function.
Within the contemporary University, there are particular functions which are
identified for 'quality', 'validation', 'assessment', 'certification', and
so on. Each of these units performs the function ascribed to it within the
structure. Such functions are determined at different levels of the structure.
For example, the reason why we have 'validation' as an activity in the
University is so as to ensure that the things students are taught fit into a
general and comparable schema of 'subjects for which degrees might be awarded'
within the national sector of universities. Validation is a function emerging
as a knock-on effect of other functions within the university system. The
individual responsible for 'validation' in the university is the person who has
to fill in forms, attend meetings, review documentation and feed the results of
this to the institutional hierarchy.

Similarly, the assessment boards within the university have
a function to verify the marks awarded by teachers to their students. Typically
marks for modules are presented to the group of staff concerned, and
discussions ensue as to what decisions should be taken in each case. This panel
is seen as fundamentally important in guaranteeing the award of degrees to
individuals. What if it didn't happen? What if anyone could award anyone else a
degree? Then the system as an education system becomes incoherent: the end
result is a perceived injustice on the part of some learners who find their
efforts unfavourably viewed purely because of bad luck or lack of transparent
organisation.

The functional differentiation within the education system
has evolved over time. From origins in
the individualistic practices of apprenticeship learning, the monastery and so
on, preferred hierarchical models arose which privileged the fairness,
equity and stability of an education system. The advocates of change in the
education system are those who see that this evolutionary process is
incomplete; that further evolution is required in order to ensure that the
functions performed by the different components of the education system, and the
functions performed by individuals within those units, are all optimal, and that
therefore the function of the education system as a whole is tightly and
efficiently specified.

However, there remain a number of problems with this
picture. Whilst the hierarchy of functions specifies the arrangements of
documents passing between departments which assure quality, fairness, and so
on, it contains no 'real' people. Institutions do not contain functionaries; they
contain real persons, with real histories, real hang-ups, real talents, and so
on. Everybody concerned is involved in a process of problem-solving: the
question within institutions is whose problems count for the most; who gets to
pursue a solution to their problem, and who else loses as a result? Institutions
constitute themselves with real politics.

The actual 'evolutionary' processes which determine the
functions of the institution rely on various arguments which present the latest
problem solution in an “ethical” light - it is the 'right' thing to do. Such
statements are usually made by those in power. Functional determination is not
a process of natural selection because any selection is determined by power not
by any inherent natural property within each individual: the dominant species
in any institution dwarfs the power of everyone else. This process becomes more marked when
powerful individuals arm themselves with new technologies. It has been argued
that technologies are often used as tools for the class struggle, but this is
more often the way in which senior managers might enhance their power over
everyone else. The ethical arguments for such change amount to declarations of
status of new instruments of engagement.

It is for this reason that functionalist thinking has become
associated with managerialist doctrine: the seeking of efficiencies in the
system frequently costs the jobs of many of the workers. It is in fighting this
naive view that deep functionalism is
sometimes invoked as a way of challenging the managerial hierarchy - either by
re-determining the functions of education so that institutional structures are
no longer needed (for example the recent experiments with MOOCs), or by
highlighting the deep processes of individual learning and drawing attention to
the epistemological gulf between functionalist philosophies of management and
the processes of learning and growth of culture which they threaten.

A hierarchy of function is a kind of model. The boxes in a
model indicate departments and people. The lines delineate 'information flows'.
Those are the communication channels between people. What is meant by
'information' in this context I will address in the next chapter. But typically in institutions, the
information flows contain reports and formal communications which are the
result of some process which has been executed by the unit concerned; their
necessity has usually been determined by higher-order power functions. Failure
to produce reports will be deemed to be “not working”, and the workers within
the unit will probably be sacked.

If EdTech has been a tragic enterprise over the last 10
years then it has been because it hoped to establish itself with the ambition
and ideals of deep functionalism, only (in the end) to have strengthened the
hand of shallow functionalists. In understanding the reasons for this, we now
have to turn to the background behind the deep functionalism that started it
all. Here we have to consider a different kind of model.

Deep Functionalism
and Educational Technology

Deep functionalist models consider the arrangement of
components of individual behaviour, communication, self-organisation and
consciousness. The principal feature of deep functionalist thinking is not just
a greater granularity in the determination of components, but increasing
complexity in the interrelationship of those components: in particular, the
relationship of circular inter-connectedness. Circular inter-connectedness, or
feedback, was one of the principal features of early psychological and
biological models, some of which date to before the 19th century. Piaget's
model of perturbation and adaptation in organisms provides the classic early
example of this kind of deep functionalism. But the thinking about deep
functionalism goes back further to the origins of the discipline which
concerned the interaction of components at a deep level, and in particular, the
circular relationships between the action of different components which produce
behaviours which can appear chaotic or life-like.

Piaget's mechanism is one of feedback between components,
and it was this model which was one of the founding principles behind the
pedagogical models which informed thinking about learning and teaching. Among
the most influential work within educational technology is that of Gordon Pask,
whose conversation theory attempted to identify the functional components of
communication within the teaching and learning context. This model was
simplified in the late 1990s by Diana Laurillard as her 'conversation model',
which subsequently was used as one of the bases of constructivist pedagogy.

However, just as the shallow functionalist models failed to
work, or at least relied on power relations in order to work, so the deep
functionalist models and the technologies that they have inspired have also
often failed to work. In a passage in Diana Laurillard’s book on “Learning as a
Design Science”, she states the problem:

“The promise of learning technologies is that they appear to provide
what the theorists are calling for. Because they are interactive,
communicative, user-controlled technologies, they fit well with the requirement
for social-constructivist, active learning. They have had little critique from
educational design theorists. On the other hand, the empirical work on what is
actually happening in education now that technology is widespread has shown
that the reality falls far short of the promise."

She then goes on to cite various studies which indicate
causes for this 'falling short'. These include Larry Cuban's study which
pointed to:

- Teachers have too little time to find and evaluate software
- They do not have appropriate training and development opportunities
- It is too soon – we need decades to learn how to use new technology
- Educational institutions are organized around traditional practices

She goes on to echo these findings by stating:

"While we cannot expect that a revolution in the quality and
effectiveness of education will necessarily result from the wider use of
technology, we should expect the education system to be able to discover how to
exploit its potential more effectively. It has to be teachers and lecturers who
lead the way on this. No-one else can do it. But they need much more support
than they are getting."

However, here we see a common feature of functionalism, both
shallow and deep: functionalist theories struggle to inspect themselves. The
odd thing in Laurillard’s analysis is that at no point is it suggested that the
theories might be wrong. The finger points at the agency of teachers and
learners and the structural circumstances within which they operate. In other
words, deep functionalism is used to attack shallow functionalism.

Most interventions in education are situated against a
background of theory, and it is often with this theoretical background that
researchers situate themselves. Given the difficulties of empirical
verification in any social science, the relationship between these descriptions
is metaphorical at best, and such models are often a poor match for real
experience. The difficulty in adapting these abstractions presents an
interesting question about the relationship between theories, researchers,
practitioners and the academic community. The personal identity of
researchers becomes associated with the validation of a particular analytical
perspective or a theoretical proposition. Either it is a theoretical
proposition which is to be defended or a particular method of research, which
itself will be situated against a theoretical proposition (which often lies
latent). To critique theory is not just an intellectual demand to articulate
new theory (which is difficult enough), but it is also to question the
theoretical assumptions that often form the basis for professional and personal
identities of the researcher. On top of this, the critique of school or college
structures (which are often blamed for implementation failures) provides a more
ready-to-hand target than theoretical deficiency does.

This is a question about the nature of functionalist thought
as a precursor to any theoretical abstraction and technological intervention. What
is the relationship between analytical thought, with its categories, and the real world
of experiences and events? For Hume, whose thinking was fundamental in the
establishment of scientific method, there was no possible direct access to real
causation: causation was a mental construct created by scientists in the light
of regular successions of observed events. The models of education present an
interesting case of Humean causal theory because there are no regular
successions of observed events: events are (at most) partially regular; only in
the physical sciences are event regularities possible. Given that merely
partial regularities are observable, what are the conditions for the
construction of educational theories? The answer to this is the use of
modelling and isomorphism between models and reality: educational science has
proceeded as a process of generating, modelling and inspecting metaphors of
real processes.

Functionalism and the
Model

When Laurillard discusses the extant ‘theoretical models’
(Dewey, Vygotsky, Piaget) she presents a variety of theories of learning. She
attempts to subsume these models within her own ‘conversational model’ which
she derived from the work of Gordon Pask. She defends the fact that these
models of learning haven’t changed by arguing that “learning doesn’t change”.
How should we read this? Does it mean that Dewey, Vygotsky and Piaget were
right? Or does it mean that “there is no need to change the theoretical
foundations of our educational design, irrespective of whether the
interventions work or not”? Basically, there is an assumption that the model
which has served as a foundation for design of educational interventions isn’t
broken because it served its purpose in being a model for the design of
educational interventions, irrespective of its ability to provide a way of
predicting the likely results of educational interventions.

Such deficiencies in modelling are not uncommon in the
social sciences. In economics, for example, econometric models which fail to
explain and (certainly) to predict the events of economic life continue to
appear in economic journal papers. The deficiencies of the model appear to
serve a process of endless critique of policy as attempts are made to make
policy interventions fit the prescribed models. This continues to the point
where it is difficult to publish a paper in an economics journal which does not
contain an econometric formula. Yet the principal figures of economics
(Keynes, Hayek, etc.) used very little mathematics, and Hayek in particular was
scathing about the emerging fashion for econometrics.

A similar situation of adherence to formalisations as the
basis for theory has emerged in education. In education, this takes the form of
slavish adherence to established theoretical models to underpin practice: a
tendency which might be called ‘modellism’. Models are associated with
ideological positions in education. The dominance of constructivist thinking,
which (as we said in chapter 1) is grounded in solid and reasonable pedagogical
experience, is nevertheless a foundation for models of reality which (partly
because of their affinity to pedagogical practice) are hard to critique, lest
those who hold to them feel that their most cherished values about education
are under attack. In trying to address this situation, we need to understand
the nature of these models.

Laurillard hopes her ‘conversation model’ provides a
generalisation of the available theories of e-learning. She states that she
would have liked to have made a simpler model, but she feels that simpler models
(like Kolb’s learning cycle, or double-loop learning) leave out too much. That
she is able to produce a model which is commensurable with these existing
models owes partly to the fact that each of the models she identifies has a
shared provenance in the discipline of cybernetics.

The Machinic Allegory
of the Cybernetic Model

Cybernetics is a discipline of model building, and
particularly understanding the properties of systems with circularity in their
connections. Cybernetics is difficult to describe. Its efforts at defining
itself (multiple definitions abound) testify to the fact that it doesn’t have
the same kind of constitution as other established sciences. Cybernetics grew
from a period of interdisciplinary creativity and science that emerged from
World War II, when it was recognised that there was the possibility of making
connections between ‘feedback and control’ within the newly developed
mechanical devices of the war (in cybernetics' case, the anti-aircraft
prediction systems which Norbert Wiener had been developing at MIT) and the
biological mechanisms of living things. It appears as a kind of playful
philosophising, where physical or logical mechanical creations with unusual
properties are explored, and the questions raised are used to speculate on the
nature of the world. Pickering calls this ‘ontological theatre’: a kind of allegorical
process of exploring fundamental mechanisms and relating them to reality. Cybernetics
provides an alternative to philosophy as a means of description of the world. With
its emphasis on feedback and indeterminacy, cybernetics brought with it its own
mathematics, which provided the ground for deeper investigations and
ultimately produced many spin-offs which now have their own separate
disciplines (and rarely acknowledge their shared heritage), including Computer
Science, Artificial Intelligence, Family Therapy, Management Science and biology.
Self-organisation became the principal metaphor behind these disciplines, and
generic models could be provided which could cover a range of different
phenomena.

Machines with circular connections exhibit behaviour which
becomes unpredictable in ways that can make a machine appear to have ‘life’.
One of the first machines with this property was developed by the psychiatrist Ross
Ashby. His ‘homeostat’ was a machine which contained four oscillating
mechanisms whose output values were wired into the inputs of the other oscillators.
When activated, the different gauges oscillate in apparently random patterns, a
change in each individual unit prompting reactions in each of the others. By
making the machine and observing the behaviour, it becomes possible to make
distinctions about the behaviour of the machine. At the
same time, it also becomes possible to consider the nature of this 'model' and
its relationship to the natural world. The distinctions surrounding the
homeostat provided the opportunity to introduce new concepts: attenuation,
amplification. These distinctions feature in a new kind of 'allegory' about
social life and psychological phenomena like learning. The homeostat creates
events in which understanding is a performative process of engagement:
cybernetic machines and models 'tell stories' about the phenomena of
consciousness and understanding. Cybernetics is a new kind of metaphysical
allegory to account for the way things come to be, and for the way things might
become.
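The homeostat's behaviour can be sketched in code. What follows is a minimal illustration of the principle, not a model of Ashby's actual electromechanical device, and all the numbers (unit count, limits, weight ranges) are my own assumptions: four units each compute a weighted sum of every unit's output, and whenever a unit's 'essential variable' strays outside its limits, that unit randomly re-wires itself (Ashby's 'ultrastability') until the whole assembly settles.

```python
import random

def step(states, weights):
    # each unit's next value is a weighted sum of every unit's current output
    return [sum(w * s for w, s in zip(row, states)) for row in weights]

def run_homeostat(n_units=4, limit=1.0, steps=2000, seed=1):
    rng = random.Random(seed)
    states = [rng.uniform(-0.5, 0.5) for _ in range(n_units)]
    weights = [[rng.uniform(-0.9, 0.9) for _ in range(n_units)]
               for _ in range(n_units)]
    rewirings = 0
    for _ in range(steps):
        states = step(states, weights)
        for i, s in enumerate(states):
            if abs(s) > limit:
                # essential variable out of bounds: the unit randomly
                # re-wires itself (ultrastability) and starts again
                weights[i] = [rng.uniform(-0.9, 0.9) for _ in range(n_units)]
                states[i] = rng.uniform(-0.5, 0.5)
                rewirings += 1
    return states, rewirings

final_states, rewirings = run_homeostat()
```

The point of the sketch is the circularity: no unit can be understood in isolation, because each unit's behaviour is a response to the behaviour of all the others.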

The latter emphasis on becoming pinpoints the evolutionary hopes
contained within cybernetic understanding. There are many similarities between
evolutionary explanation and cybernetic understanding: indeed, for those
scientists who have sought to develop Darwinian evolutionary theory,
cybernetics has been a powerful tool which they have used to dig deeper into
the components of the emergence of life. As with evolutionary explanation, the
central feature in these kinds of explanations is time. It was initially in the
mathematics of time-series that Wiener first articulated the cybernetic
dynamics as an example of irreversibility: each state depended on some prior
state, and differences of initial conditions could produce dramatically
different patterns of behaviour (a physical manifestation of the point made by
Poincaré many years earlier). Given the dependence of states on previous
states, and the irreversibility of the processes of emergence, there needed to
be a way of thinking about the connection between the variation in states and
the conditions under which states varied.
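The point about irreversibility and sensitivity to initial conditions can be illustrated with a toy iteration. The logistic map used here is my own choice of illustration, not Wiener's time-series mathematics: each state depends only on the prior state, yet two trajectories beginning a billionth apart quickly become unrecognisably different.

```python
def logistic(x, r=4.0):
    # a simple iterated map: each state depends only on the prior state
    return r * x * (1 - x)

def trajectory(x0, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-9)   # a difference in the ninth decimal place
divergence = max(abs(x - y) for x, y in zip(a, b))
```

Running the trajectory forwards is trivial; running it backwards is not, and the tiny initial difference is amplified at every step until the two histories bear no resemblance to one another.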

Just as evolutionary explanation regards selection and
probability as its principal mechanical drivers over time, cybernetics takes as
its principal driver the inter-relationships between components, each of which
can occupy a finite number of states at any particular moment. Ashby noticed
that the number of possible states in a component at any point in time was related
to the number of states in other components. He called his measure of the
number of possible states of a machine 'variety', and stated his law that the
variety of a system can only be absorbed by the variety of another system. In other words,
equilibrium between components depended on the balancing of the number of
possible states in each component. An imbalance caused fluctuations in
behaviour and uncertainty. The technique of counting variety, and Ashby’s
associated law has many everyday applications: in the classroom, the teacher
has the variety of a single human being; whilst the class has the variety of 30
human beings. Somehow, the teacher has to manage this variety, which they do by
attenuating the variety in the class (with rules and regulations), and
amplifying their own variety (with a central position where they can be seen,
and a chalk-board).

Ashby’s Law represents deep functionalism's alternative to
evolutionary language: instead of genes competing for supremacy, complex
organisms interact in ways which preserve their own internal variety management
across their different components. Given the new kind of language of deep
functionalism and cybernetics, the predominant concerns of shallow
functionalism can be re-inspected. What does the organisation hierarchy look
like if instead of identifying the components of the organisation as those
which perform the functions of marketing, sales, accounts and so on, we examine
the way that variety is managed from the macro to the micro level? What does
the enterprise of the educational institution look like as a multi-level
variety-management operation? What impact do technologies have in helping to
manage variety through either attenuation or amplification? Most importantly,
by this route, might it be possible to characterise the more basic qualities of
mind and establish better ways of organising newly re-described components of
education to provide a more enlightening way of organising education?

Of the pioneers of cybernetics who asked the questions about
the organisation, Stafford Beer applied Ashby’s principles to re-describe the
organisation chart of the institution. Beer’s approach was to allegorize fundamental
components he considered to be vital to the successful management of variety in
any organisation, and indeed in any organism. Beer’s Viable System Model led
him to Chile, where he was invited to rewire the Chilean economy under Salvador
Allende. Using Telex machines and a rudimentary 1972 computer, a control centre
was established which had feeds of production information from the entire
Chilean economy. Decisions could be taken in the light of information received.
In the history of cybernetics, there is perhaps no more spectacular example of
the aspiration of deep functionalism.

Beer’s work characterises the power and ambition of deep functionalist
thinking for re-describing social institutions and transforming them. At the
level of individual consciousness and experience, the same principles were used
to address psycho-social pathologies emerging from human communication and
experience. The American anthropologist Margaret Mead was one of the first scientists
present at Macy conferences, but she was later joined by her husband Gregory
Bateson who saw in the descriptions of ‘feedback’ dynamics within trans-cultural,
and trans-species systems a way of describing human pathology which if
apprehended could avert the ecological catastrophe that many cyberneticians
(including Heinz von Foerster) were already predicting. Bateson’s thinking
about system dynamics goes back to Ashby in recognising the fundamental
distinction as that of the ‘difference’. Differences are perturbations to a
system’s equilibrium, and differences cause change in a system: Bateson argues
that what humans consider to be ‘information’ is “a difference that makes a
difference that…” The dynamics of difference-making result in differences
occurring at different ‘levels’ in an organism. Influenced by Russell and
Whitehead’s class-set theory, Bateson defines different classes of difference.
Consciousness involves the interaction of different difference-processing
mechanisms. In examining the learning processes of children as they grow into
adults, he observed that the basic mechanism of stimulus and response at one level
gave way to deeper levels of coordination, as higher-level differences concern
not basic stimuli but the results of accommodation to primary responses to
stimuli. Tertiary levels of response occur in response to those differences
produced by secondary levels: the response to the adaptation processes to the
primary response. There are two important phenomena which arise from this
mechanism: on the one hand, there is, for Bateson, emergent consciousness which
arises through the interaction of different levels of description in a system.
Secondly, there is emergent pathology: different levels of description may
contradict each other – particularly in inter-human communication. It is in
these different levels of description that Bateson becomes particularly
interested.
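Bateson's hierarchy of differences can be caricatured in code. The sketch below is an illustration only, and the class and its names are mine, not Bateson's: at the zero level a stimulus-response mapping is simply applied; a first-order difference (an error) revises an entry in the mapping; and the accumulated pattern of revisions constitutes a higher-order difference, one which is about the learner's own way of learning rather than about any particular stimulus.

```python
class Learner:
    def __init__(self):
        self.mapping = {}     # level 0: a stimulus -> response table
        self.revisions = []   # record of level-I changes

    def respond(self, stimulus):
        # Learning 0: apply the mapping as it stands
        return self.mapping.get(stimulus, "explore")

    def correct(self, stimulus, desired):
        # Learning I: a difference between response and outcome
        # (a difference that makes a difference) revises the mapping
        if self.respond(stimulus) != desired:
            self.mapping[stimulus] = desired
            self.revisions.append(stimulus)

    def learning_II(self):
        # Learning II: a difference about differences - the pattern of
        # revisions, which characterises *how* this learner learns
        return len(self.revisions)

learner = Learner()
learner.correct("bell", "attend")
learner.correct("bell", "attend")   # no further difference: no revision
```

The caricature captures only the layering: each level operates on the output of the level below, which is exactly the structure in which Bateson locates both emergent consciousness and emergent pathology.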

One particular situation is what he called the ‘double-bind’:
a situation where one level of description conflicts with another in a
contradictory way, and where a third level of control prohibits either party
from being able to “see” the situation they are caught in. In the double-bind
Bateson saw the political relationship between masters and slaves, the dynamics
of war, addiction or depression. In the dynamics of emergent levels of
organisation, Bateson saw the processes of learning from basic experience of
stimulus and response, to higher order functions of sciences and the arts.
[more to do]

Pask’s Conversation
Theory and Teaching Machines

Whilst Bateson’s project was one of radical redescription,
with interventions occurring across a range of phenomena (from dolphins to
psychotherapy), Gordon Pask used a similar approach to experiment directly with
new technologies for learning. As with all cyberneticians, Pask’s interests
were very varied: his contributions range from educational technology, art, architecture
and biological computing to epistemological concerns around the nature of concepts
and understanding. His work encompassed the connection
between self-organising systems, constructivist epistemology and a theory of
society. However, Pask is perhaps most important as one of the founders of
experimentation with teaching machines. It was with these machines that Pask
explored his interest in communication and the ways individuals learn and adapt
from one another and the ways that they acquire concepts about the world, which
they then communicate.

Displaying a circularity typical of cyberneticians, Pask’s
work in education revolved around the concept of the “concept”: what is a
concept, and how are concepts related to communication and learning? Pask’s theory
of concepts has many similarities to von Foerster’s theory of the object. For Pask,
the concept is a stabilised pattern of interactions between individuals and the
world. Pask’s approach is a kind of meta-epistemology, which regards concepts
as both ideal and subjective – since they are realised in individual minds –
whilst exhibiting the properties of objectivity
in the eddies of stability in the interactions between people in the
world. In order to realise his theory of concepts, Pask requires two things: he
requires a mechanism that drives the process of conceptualising. And then he
requires a “field” which bounds the operation of this mechanism and allows for
the formation of concepts and the interactions between people. Pask calls these
two aspects simply an M-machine and a P-machine. An M-machine is the hardware –
biological substrate, and in the case of human beings, the brain. The P-machine
is the software – some kind of storage for emerging concepts. Importantly, the
P-machine may exist within a single M-machine, or across many M-machines. In
the latter case, one may talk of social groups, shared discourses and (most
importantly) shared concepts between members of social groups. Pask sees the
process of maintaining stable conceptual forms as part of a broader process of
maintaining individual identity. Where Luhmann sees individual identities as
constituted out of the self-organising dynamics of communications, Pask posits
an ultimately individualistic and psychological mechanism for maintaining
conceptual structures in an interactionist context.

This kind of deep functionalism presents a difficult
question: how are concepts communicated? It is this part of his theory to which
Laurillard is drawn, and whilst his explanation stands on its own at one
level, the extent to which its assumptions rest on his much more convoluted
assertions about the nature of concepts is a sign of problems
further along the road. Fundamentally, conceptual structures are realised
through processes of communication. In teaching and learning processes,
teachers attempt to coordinate their understanding of conceptual structures
with learners’ understandings by making interventions in contexts, and
encouraging learners to articulate their understandings through a process Pask
calls “teach-back”. Through repeated exchanges, activities (which are different
kinds of context) and exercises, teachers and learners gradually harmonise
their conceptual formulations.
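As a purely illustrative sketch of this harmonisation (the numbers and update rule here are my own hypothetical choices, not Pask's formalism), the teach-back loop can be caricatured as two understandings nudged towards one another through repeated exchange:

```python
# A toy sketch of "teach-back" (hypothetical numbers and update rule -
# not Pask's own formalism). Each participant's grasp of a concept is
# caricatured as a single number; repeated exchange nudges the two
# understandings towards agreement.

def teach_back(teacher, learner, rate=0.3, tolerance=0.01, max_rounds=100):
    """Iterate teaching and teach-back until the two understandings agree."""
    for round_no in range(1, max_rounds + 1):
        # Teaching intervention: the learner shifts towards the teacher.
        learner += rate * (teacher - learner)
        # Teach-back: the learner's articulation lets the teacher adjust
        # (more slowly) how the concept is framed.
        teacher += 0.1 * rate * (learner - teacher)
        if abs(teacher - learner) < tolerance:
            return round_no, teacher, learner
    return max_rounds, teacher, learner

rounds, t, l = teach_back(teacher=1.0, learner=0.0)
print(f"harmonised after {rounds} rounds: teacher={t:.3f}, learner={l:.3f}")
```

The point of the sketch is only the circularity: both parties adapt, and agreement is an emergent stability of the conversation rather than a one-way transmission.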

So how does the shallow functionalism of the university’s units of
organisation interact with the deep functionalism of concepts, communication
and psychology? Pask built his conversation theory into
a theory of society. Within his work are situated the adaptation-assimilation
mechanisms of Piaget, the social constructivist theory of Vygotsky, the
situated cognition of Lave and Wenger, and many other theories. The breadth of
this work provides the ground for Laurillard’s ambition to embrace the other
models that she admires, and indeed, within Pask’s model, there is some
commensurability between the cybernetic provenance of Senge, Kolb, Piaget, and
others.

From Deep Functionalism to Epistemology and Critique

Whilst cybernetics’ relation to philosophy is somewhat
strained (Luhmann humorously characterises the relationship as philosophers
turning up at the cyberneticians’ party like the uninvited “angry fairy”), the
deep functionalism of cybernetics must eventually end up in philosophical
territory: for all its speculation about the nature of the world, cybernetics
struggles to be anything other than metaphysical. Its identification of
processes of variety management and difference, and its extrapolation of these
to the processes of consciousness, attest to a particular view of knowledge and
being in the world.
Having said this, cybernetics asks the question as to the nature of reality in
a new way: with the demonstrable dynamics of machines and the logic of its
ideas and its mathematics, cybernetics presents a model of the metaphysical
realm which appears more coherent than those presented by philosophers from
past ages. Such a view was clearly held by Bateson who called cybernetics “the
biggest bite from the tree of knowledge man has taken for 2000 years”, and his
epistemology focused on the importance of language and communication as a
dynamic process of difference-making which resulted in the co-creation of
reality. By this logic, empiricist views of ‘the real’, the ‘observable’ and so
on were challenged: reality was a construct. It’s an irony that the philosopher
who would most closely agree with this position is David Hume, who was
similarly sceptical about reality, but whose work established the foundations
for modern empirical method.

The central difficulty that Bateson addressed was the
problem of observation. A philosophical shift was signalled by Margaret Mead,
who wrote in 1968 of the need for a “cybernetics of cybernetics”. Mead’s plea
for a cybernetics which turns its focus on cybernetics itself found an
immediate focus in the work of two Chilean biologists, Humberto Maturana and
Francisco Varela.

Maturana and Varela’s work on cellular organisation appeared
to demonstrate (down the microscope) that cells were self-organising,
self-reproducing entities. The question then became: if cells are like this,
what about language? What about human relations? What about objectivity?
Maturana carried this work forwards in arguing that media of transmission were
in fact fictions – the results of self-organising processes. There was no
information, there was no language: there were processes of languaging between
biological entities. That there was an empirical biological basis behind
Maturana’s epistemology introduced the seeds of a problem with reality: his
work could be characterised as a ‘biological reductionism’. However, it wasn’t
just biology that was pointing in this direction. Shortly after Maturana and
Varela’s intervention, Heinz von Foerster argued for a mathematical orientation
towards 2nd-order cybernetics. In considering the nature of objects and the
objection to radical idealism first raised by Dr. Johnson, von Foerster
worked on a way in which the identification of objects could be explained
through the patterning of sensory information, where stable patterns of
interaction could be determined irrespective of the point of view. He called
these points of stability of interaction ‘eigenvalues’, and his work was
subsequently expanded by other cybernetic mathematicians, notably the
topologist Louis Kauffman.
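An eigenvalue in this sense can be illustrated with a small computational example (my own illustration, not von Foerster's): repeatedly applying an operation to its own result settles, from many different starting points, on the same stable value – a fixed point of the recursion, stable "irrespective of the point of view".

```python
import math

# An "eigenvalue" in von Foerster's sense: a value left unchanged by the
# recursive application of an operation. Iterating cosine from any real
# starting point settles on the same stable value (about 0.739),
# independently of where the recursion began.

def eigenvalue(operation, start, iterations=100):
    value = start
    for _ in range(iterations):
        value = operation(value)
    return value

for start in (0.0, 1.0, 5.0):
    print(f"start={start}: stabilises at {eigenvalue(math.cos, start):.6f}")
```

The stability of the result across starting points is the analogue of an "object": what is recognised is not the raw input but the recurrent pattern of interaction.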

Maturana and Varela’s ideas had a further impact, and one
which Maturana (in particular) objected to. This was the idea that communications
themselves could be self-organising. Niklas Luhmann’s view of the world was based
on 2nd-order cybernetics, yet his approach was essentially an inversion of
Maturana’s biological emphasis. Luhmann suggested that it was communications,
not biological entities, which should be seen to be self-organising. Biological
entities became then the means by which communications were reproduced and
transformed in society. This move enabled Luhmann to redescribe the phenomena
of the social sciences from the perspective of self-reproducing systems. More
importantly, it meant Luhmann could engage in a full-blooded redescription of
the major themes in sociology, which resulted in a critical engagement that
brought Luhmann’s cybernetics far greater inter-disciplinary attention than
anyone else, building as it did on the highly influential social functionalism
of Talcott Parsons.

Luhmann’s exclusion of agency is at once challenging and
powerful. Luhmann asks whether agency is merely the interaction of communication
dynamics. Is meaning simply the calculation of anticipations of the
double-contingency of communication? What then matters? What of love? What of
passion? Luhmann presents powerful answers to these questions in Love as
Passion, where love is seen as an inter-penetration of psychic systems.
Luhmann generally makes 2nd-order cyberneticians uncomfortable, yet
his message is not radically different from that of Pask’s social theory.
Luhmann, more clearly than any other figure in cybernetics, relates the
sociological discourse to the cybernetic one, and more importantly he reveals
the direct inheritance of German idealism, in particular of Kant. It is for
this reason that his argument with Jürgen Habermas, who represents a
fundamentally different tradition of thinking, is so interesting. It is in
revealing the Kantian inheritance and its problems that the novelty and magic
of cybernetic ‘deep functionalist’ thought can be situated in a deeper
tradition of thought.

Deep Functionalism and the Kantian Inheritance

Kant’s importance in the history of philosophy rests on his
rejection of a model of the world in which meaning lay inherent in nature
rather than in man. In this enterprise he echoed Hume’s view that causes were
not inherent in the natural world, but instead the result of discourse between
scientists who contrived reproducible experiments. He made a distinction
between analytic knowledge (that which can be known from a proposition itself,
as in mathematics) and synthetic knowledge (knowledge constructed in the light
of experience). He introduced a new method of philosophical reasoning to ask
“given that we can have knowledge of the world in various ways, what must the
world be like?” As von Glasersfeld points out, Kant is not the first to
highlight the importance of human construction for coming to know the world
(Bentham may have been the first, and Vico expressed a similar view), but he is
the first to devise a completely new philosophical schema within which this
might occur. His fundamental question, building on the distinction between the
analytic and the synthetic, is how synthetic a priori propositions are
possible: how is it that something constructed from experience can be known
without experience? This was an important question because it concerned
the knowledge of God: if synthetic a priori knowledge was impossible, how could
God exist? In suggesting an answer, he postulates that human subjectivity must
organise perceptions into “categories” of thought. Through the categories Kant
was able to justify and analyse the way in which synthetic a priori knowledge
and other forms of knowledge were possible. He concluded that knowledge of the
world must emerge from human beings, but that human understanding must be
constituted in a particular way so as to reach the conclusions about the world
with which we are all familiar. This was a metaphysical proposition: the
‘Transcendental subject’ of categorical understanding, which could only be
inferred by the way the world was.

In proposing the transcendental subject, Kant made a key
assumption: that the regularities of the world which Hume had referred to, and
which were fundamental to synthetic understanding, were a necessary attribute
of the world. This so-called ‘natural necessity’ had itself been challenged by Hume, who
could see no reason why the world should exhibit regular successions of events
if causal mechanisms were human constructs. The transcendental subject was a
dual metaphysical assertion: an assertion about the world and an assertion
about consciousness. It is this dual assertion upon which Husserl built when
devising his phenomenology. 2nd order cybernetics disagrees with
Kant on the question of natural necessity. By substituting mechanisms of
understanding for Kantian categories, reality is seen through the eyes of the
cyberneticians as constituted through interacting communicative processes.

A contrasting approach to Kant is to uphold his view on
natural necessity, but to reject his view of the transcendental subject. Bhaskar
upholds a different kind of transcendentalism based on the question “Given that
science is possible, what must the world be like?” This is to re-ask the
Kantian question following Hume with the benefit of hindsight of 200 years of
scientific progress. In upholding natural necessity, Bhaskar rejects not
only Hume’s scepticism about it, but also Hume’s assertion that causes
are constructs. In arguing instead that causes are real, inherent in the nature
of the world, and that science’s job is to discover them (not create them),
Bhaskar paints a picture of reality which is very different both from the
cybernetic view and from Hume’s and Kant’s subjectivist views. The concept of
mechanism plays a fundamental role in this, with Bhaskar making a key
distinction between transitive and intransitive mechanisms: those mechanisms
which exist through human agency, and those which exist outside human agency.
In articulating an argument that welds Aristotelian and Marxist thought with
Kantian transcendentalism, Bhaskar argues for a dialectical materialist logic
that is fundamentally oriented towards emancipation. From this perspective, the
cybernetic view is attacked for not inspecting its ontology: it suffers a
linguistic reductionism which excludes causal factors that must, in Bhaskar’s
view, be considered if one is to account for reality. The most important of
these is absence. Bhaskar’s philosophy suffers similar problems of privileging
mechanism to those the cybernetic viewpoint is subject to; however, his
highlighting of the reduction to language, and of the importance of absence as
a cause, helps him and others (including Smith) to focus on the concreteness of
human experience and the transcendental implications of this for the nature of
the world, rather than on a dual transcendentalism of the world (natural
necessity) and the subject.

The contrast between these positions presents three critical
areas for inspection which form the nexus of the problem space concerning the
different varieties of functionalism. On the one hand, there is a critique of
actualism from Bhaskar which focuses on the causal power of absence; on the
other, there is a critique of Kant’s correlationism from Meillassoux and
Badiou which focuses on whether or not there is natural necessity. This broad
area of disagreement then boils down to two concrete issues: the nature of real
agency and ethical behaviour, and the nature of real people.

1. The Correlationist/Actualist Problem

What opens up is a vista on possible ontologies. The
differences between positions can be characterised by the extent to which
ontologies assert natural necessity (in the language of Meillassoux, whether
they are ‘correlationist’ or not), and the extent to which ontologies tend
towards temporal, mechanical descriptions which are available to synthetic
knowledge, with ‘fit’ as the driving force, as opposed to logical, analytic
descriptions with truth as the central feature. To identify a range of
ontological positions is not to relativise ontology; it is instead to find the
resources to situate possible ontologies within a meta-ontological framework.

To begin with, a simple table of the positions under
consideration can be constructed:

Nature \ Subjectivity | Transcendental subjectivity | Concrete subjectivity
Natural necessity | Kantian transcendentalism | Pragmatism; Critical Realism
Natural contingency | 2nd-order cybernetic epistemology | Badiou; Meillassoux

This pinpoints the difference between Kantian ontology and
cybernetic ontology, as far as the assumptions made about the nature of the
world are concerned. The cybernetic epistemology holds no particular stance on
the nature of reality; there is no natural necessity. However, it still upholds
Kant’s transcendental subjectivity, albeit with a mechanistic flavour rather
than one of categories. The problem for the cyberneticians is that their
ontology presupposes time as the fundamental requirement for mechanisms of
variety management. Kant’s philosophy, on the other hand, does not describe
time-based mechanisms. Having said this, the end result is the same: what Pask
and Luhmann describe is a transcendental subject.

The problem of time marks out those approaches which
attempt to avoid transcendentalising the subject. In Bhaskar, the
idea of mechanism plays as important a role in his philosophy as it does in
cybernetic thought. The apprehension of mechanism as a concept appears to imply
both natural necessity and some degree of subjective transcendentalism: it is natural
necessity which determines the regular successions of events which humans can
interpret as “mechanistic” through the categories. If there is no natural
necessity, as the cyberneticians claim, an alternative to mechanistic
description needs to be suggested. Badiou and Meillassoux both argue (in
different ways) that reality is essentially contingent: people have bodies and
they use languages and the world presents ‘events’ to them. The process of
making sense of events, for Badiou, is a process of apprehending truths. In
Badiou, mechanism is rejected in favour of analytical (i.e. not synthetic)
knowledge.

In this actualist/correlationist conflict between possible
ontologies, the cybernetic transcendental person fits uncomfortably. Cybernetics
rejects natural necessity on the one hand, only to infer it in its embrace of
mechanism. This makes its transcendental assertions problematic. The cybernetic
subject does not appear to be a real person. Instead, it is a machine which
processes communications: in Laurillard’s model, teachers and learners process
each others’ communications in the light of engaging with the world. Learners’
descriptions of their learning are used by the teacher to gauge what they might
do next. It is hard to see why anything would matter to anyone in this situation. It is hard to see where either
a teacher or a learner might become passionate about what they teach. With
everything reduced to language and coordinating mechanisms, why bother with
the process at all? Is it possible to have a model of learning without a model
of the human care which must be its fundamental prerequisite? To understand this,
we have to address the question of agency and why things matter to people in
the first place.

2. The problem of action and ethics

Teaching, parenting, caring, empathising and listening are
activities carried out because they matter to us. How does cybernetics explain
why a person would risk their life to save others? How does it explain why a
person would wish to make art or compose music? How does it explain our
conscience? Or why people fight each other? In each case, the rational
ascription of function to the components of a mechanism (even if it is a
conversational mechanism) leads to what Bhaskar calls a ‘flattening-out’ of
being: reality becomes a rationally-prescribed mechanical process. What is
missing is the acknowledgement of the ‘real’: what it takes to make a ‘real
person’.

The causal connection between the speech acts of teaching and
teach-back and the dynamics of engaging with the context of learning sits
within a deeper context which is unspecified. In Bhaskar’s terminology, the
correlationism of mechanism to transcendental subjectivity might also be called
‘actualism’, in that such models describe what can actually be said to exist
(supported by evidence) and suggest mechanisms deemed to be actually operating
in order to produce the observable effects. Here we see the problem
in its essence: learners and teachers have personal histories which will affect
the ways in which they interact; much is communicated in ways beyond direct
linguistic or bodily utterances; the joy and laughter of learning are absent.

Because functionalism, whether shallow or deep, tends to box in
real people, problems are encountered when it is used to coordinate action.
Arguments are put forward as to why such and such should be done, or warnings
given of the dangers of doing something else. From the
historicist rhetoric that much neo-Darwinian thinking inspires, to the deep
cybernetic arguments for global ecology, ought is the word that tends to
dominate. The issue here concerns another aspect of Hume’s thinking: his
argument that an “ought” cannot be derived from an “is”. For Bhaskar, this
argument is wrong because Hume’s ontology was wrong. Bhaskar argues that oughts
are derivable from is’s: indeed, the emancipatory axiology of his critical
realism is determined precisely by navigating the space from is to ought.

What is agency as an ethically-driven process of engaging
with the world? Behind any model is a kind of instrumentalisation of the world
and instrumentalisation of the engagements between people. The phenomenology of
action does not render itself amenable to analytical probing. If agency were
to be characterised as ultimately ethical, with judgements about the rightness
of action preceding any action, then a distinction would have to be made
between the agency of human beings and the agency of any other kind of entity
(such as a robot). It would also entail a characterisation of ethics which
excluded the possibility of an artificial ethic.

The question of agency and models is therefore a question
about ethical judgement and action. With regard to ethical judgement, there are
a number of ethical positions which philosophers identify as conceptually
distinct. The position which might most plausibly be modelled is the one
labelled ‘consequentialist’: this position considers that agents act through
some kind of benefit calculation, either for themselves or for the community as
a whole. It is conceivable that a model might be able to characterise a range
of such calculations: von Foerster’s ethical principle, “always act to increase
the number of possibilities”, is an example of a “machine ethic” which would
work under these circumstances. However, other ethical positions are less easy
to model.
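A toy sketch of what such a "machine ethic" might look like (the grid, horizon and scoring below are my own hypothetical choices, not von Foerster's): an agent on a bounded grid scores each available move by the number of distinct cells still reachable from it, and picks the move that keeps the most possibilities open.

```python
SIZE = 5  # a small bounded grid world
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbours(cell):
    # Legal single-step moves from a cell, staying on the grid.
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def reachable(cell, steps):
    # All distinct cells reachable from `cell` within `steps` moves.
    frontier, seen = {cell}, {cell}
    for _ in range(steps):
        frontier = {n for c in frontier for n in neighbours(c)} - seen
        seen |= frontier
    return seen

def choose_move(cell, horizon=2):
    # "Always act to increase the number of possibilities": pick the
    # neighbouring cell from which the most cells remain reachable.
    return max(neighbours(cell), key=lambda n: len(reachable(n, horizon)))

# From a corner, the imperative pushes the agent towards the open interior.
print(choose_move((0, 0)))
```

The sketch shows why the consequentialist position is the modellable one: "increasing possibilities" reduces to a counting exercise, in a way that duty, virtue or care do not.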

3. The problem of the person and the new problematisation of the subject

Models are abstract, yet we know life through our
constitution as persons. Individual understandings emerge against the backdrop
of their own personal experiences, attachments, personal motivations and so on.
Kant’s transcendental subject, Pask’s P-machine and Luhmann’s transpersonal
communication constructs are united in their absence of real people with these
rich dimensions of experience. Modelled agents serve as ciphers of real people
– as vague predictors of real behaviour. Yet real people are usually remarkably
richer in their response to the world than any model suggests. The failure of
econometric calculations about utility functions, agent-based modelling,
rational economic behaviour and so on is that they ultimately fail to predict
the nature of human agency. All these models are constrained by the variables
they consider, and exclude many other variables which in the end are only
revealed through practice and experience. The fundamental question that arises
from this inability is whether a naturalistic inquiry into the behaviour of
individuals is at all possible.

The failings of deep and shallow functionalism in
characterising real people ultimately rest on what Smith calls ‘variables
sociology’. However carefully the independent variables are chosen for a
scientific explanation, the dependent variables appear to change according to
values other than those defined as ‘independent’. This is the weakness of trying to
model individuals as rational agents selecting an action from a set of options.
Mechanisms describe what can be actually perceived (by some means or other –
even if it is through an agent-based model), but reality extends beyond what
can actually be identified to include what isn’t there.

The idea of absence being causal is not restricted to
sociology. In cosmology, we popularly understand ‘dark matter’ in the universe
as representative of the physical causal forces which must exist for the
universe to be consistent. The challenge is to try to find a method whereby
absent causes may be unearthed. Bhaskar attributes the identification of
concrete absence to the dialectical and methodological process of science
itself. This means that a naturalistic inquiry into education is a
methodological pursuit with the purpose of determining absences. It is through
this pursuit of absence, Bhaskar argues, that science itself was possible in
the first place. In this way, observations are made and mechanisms suggested to
explain them, with successful mechanisms being used as a foundation for further
investigation and false mechanisms rejected. However, whilst Bhaskar’s position
seems deeply sensible, and certainly has formed the foundation for deeper and
more sensible thinking about the nature of personhood, what matters to people,
and so on, there remain (as we have discussed) problems with the idea of
mechanism and the assumption of natural necessity.

Meillassoux’s alternative ontology contrasts with Bhaskar’s
because it does not accept natural necessity as its premise. Meillassoux also
begins with Hume, and asks about the conditions under which scientists reach
conclusions about the nature of the world. Meillassoux’s conclusion is that the
Humean practice of constructing causes in the light of event regularities was
in fact a process of determining expectations, or the probabilities of events.
However, to assign a probability to anything presupposes that the total number
of possible events is calculable, and Meillassoux points out that this cannot
be the case. This is, in fact, a restatement of Bhaskar’s claim that empirical
event regularities are produced only under the conditions of closed-system
experiments, whilst constructed causal mechanisms continue to hold outside
those conditions – so causes cannot simply be read off from regularities.
Instead of making a transcendental claim about natural necessity and the
ontology of causes (that causes are real), Meillassoux appeals for an ontology
of
truth, arguing that a natural ordering of the world, revealable through
mathematics, is at work in the scientist’s specification of the probabilities
of events, and that the empirical process amounts to the bringing together of
an idealised natural order with an observed natural order. What matters in
scientific inquiry is an encounter with ‘events’ of life where the ordering is
made explicit. Badiou pinpoints four kinds of event: science itself (and
mathematics), art and aesthetic experience, love, and politics and the
experience of justice.

Both intellectual moves avoid transcendentalising the person,
as is attempted in cybernetics and other kinds of functionalism. Instead they
transcendentalise a natural order which can be articulated either through
mathematics (Badiou) or through causal mechanisms (Bhaskar). Science proceeds
on the basis of positing the natural ordering of things, and engaging in
dialogue with the order of things as revealed through events. Bhaskar’s and
Badiou’s positions are related in a number of ways. First, they both articulate
the importance of politics as a driver for social emancipation, although Badiou
and Meillassoux also see drivers in art, love and science. Secondly, the
assertion of truth by Meillassoux and Badiou is really the assertion of a
dialectical mechanism, similar in force to Bhaskar’s dialectic as an
emancipatory force. Finally, Bhaskar’s causally powerful absences become
Badiou’s events: events which are essentially not really there, but which,
through the transformations they effect, reveal aspects of natural order
previously undetermined.

Persons have bodies and languages, but they also perceive
truth which is revealed to them through events. The weakness of the Paskian
model is that it appears that only language is explicitly recognised (although
one might suppose that bodies are perhaps implicit in the interactions between
the teacher and the learner). Pask commits neither to truth on the one hand
(which would satisfy Badiou and Meillassoux), nor to natural necessity and a
materialist dialectic of emancipation (which would satisfy Bhaskar). What emerges
is a flat ungrounded model with little explanatory or predictive power, but
with some strong and essentially unprovable metaphysical assertions. Is a more
naturalistic position possible?

Addressing the Naturalistic Gap in Education

The process of determining the function of components
remains fundamental to scientific inquiry. The speculation about causal
mechanisms – the guessing and imagining of “what might be going on” – sits at
the heart of all science. The methodological question in education, and in the
social sciences in general, is the extent to which the practices of those who
study what is going on in education relate to the practices of those who study
the behaviour of sub-atomic particles or the origin of the universe. Different
ontological stances present different accounts of these processes. Bhaskar, for
example, introduces his distinction between the transitive and the
intransitive domains to account for the possibility of creating closed-system
experiments in the physical sciences, together with the social processes of
conjectures and refutations, paradigm shifts and so on which characterise the
social dimension of science. Bhaskar’s ‘possibility of naturalism’ rests with
an ontological grounding of inquiry in the social sciences which situates
social mechanisms of reproduction and transformation of social structures with
material causal mechanisms. By this logic, all science – not just social
science - is politics; it is the political which sits at the heart of his
naturalistic ontology. However, Bhaskar’s ontology, as we have seen, is one
which rejects Hume’s scepticism about reality, and upholds natural necessity in
the form of the intransitive domain and the reality of causes.

The alternative possibility of naturalism rests with Badiou
and Meillassoux, who point to an ontology of mathematical truth. The purpose of
naturalistic inquiry – whether it be in science, mathematics, art or in the
various dimensions of human relations – is to uncover truths relating to the
ordering of the world. By this logic, Hume’s scepticism about causes is upheld;
his regularity theory of causation is reframed as a statistical theory of human
expectation, the nature of the difference between physical experiment and social
experiment being one of different orders of expectation whose logic is
potentially knowable.

Both these positions articulate a positive vision for the
social sciences. Both demand progressive closure of the gap between theory and
practice, openness to refutation of theory, and a fundamental grounding in
political reality and the concreteness of persons. Both have methodological
processes by which they might achieve their aims. In Bhaskar’s case, perhaps
the most widely deployed methodological approaches are Pawson and Tilley’s
Realistic Evaluation and Mingers’ multimethodology. Both these methods are
effectively meta-methods which seek critique of different methods for examining
what happens, so that not only the results of experiment are examined, but so
too are the implicit ontologies lying behind the methods themselves. Both
techniques privilege causal mechanisms and tend to avoid the less mechanistic
and more subtle aspects of Bhaskar’s later philosophy. In practice, mechanistic
descriptions can be hard to reconcile, articulating different processes from
different levels of abstraction.

The alternative is a logico-empirical movement which was
first suggested by Elster, who combined measurement with the representation of
different social theoretical statements using rational choice theory and game theory
(before later claiming that rational choice theory was deficient). Badiou and Meillassoux’s ontology presents an
alternative which combines mathematical analysis and measurement. One of the
key problems they address is the way in which different social theories may be
represented and compared. Taking the view that different social theories are
different descriptions of social ordering, Badiou’s utilisation of the
mathematics of set and category theory presents the possibility for the
representation and evaluation of different social theories.

Whatever the prospects for these different positions, the need
for a metatheory about theorising and empirical practice within the social
sciences, and particularly education, seems beyond question. There are likely
to be many possible metatheories, and there are likely to be a number of
different possible ontologies upon which they sit. The space for theoretical
development in education is a space of possible world-views – not just views
about the nature of education, or the purpose of education – but fundamentally
different views on what it is to be human, and on the nature of the world.
Against this background, the modelling of education through cybernetics or
other kinds of functionalism becomes merely a way of creating new kinds of events
in the light of which we hope to learn new things. The point of the technologies
in education whose failure Laurillard bemoans was never, and could never
have been, implementation. It was illumination.

Conclusion

Functionalism’s dominance in modern culture rests on its
unique position as the ‘solution-finding’ paradigm. What I have hoped to
articulate here is that functionalism’s solutions are invariably deficient, but
that functionalism’s strength lies on the one hand in its ability to coordinate
action, and on the other in its ability to ask new questions. When we devise new interventions
in education, be they pedagogical or technical, we create a new “theatre of
events”. What matters is the effect of those events on the practice of people
witnessing them. New interventions might be devised in the light of
functionalist theories (either deep or shallow), but providing participants are
open to ask deeper questions about the world in the light of the events that
occur, rejecting or refining theories as they go, then critical scientific
advances within education ought to be possible. However, this proviso is a very
big one. The reality of human behaviour appears to lead individuals to become
attached to explanatory theories as a means of articulating explanations and
designing interventions which, when they don’t work, cause individuals not to
abandon their theories, but instead to assert them more strongly, blaming
factors in implementation rather than poor theorising for the situation.

Here we should inquire about the relationship between
particular properties of particular theories which lead to uncritical or
dogmatic acceptance, and particular properties and tendencies of particular
individuals. Do totalising theories attract people who are less likely to let
go of them? I have attempted to show how the totalising ambitions of cybernetic
description are ungrounded and that whilst cybernetic ideas can raise very
powerful questions about ontology, they sit uneasily between philosophical
positions which are fundamentally incompatible. It may be that the first step
to dealing with over-attachment to totalisations is to unpick the totalisation
and highlight the tensions contained within it. However, on its own, this is
unlikely to be enough: it is merely a discursive intervention, and suffers the
same idealisation of the person as the theories it critiques.

Real people in education have real feelings, real histories,
real ambitions, real fears and it is likely to be only through particular
circumstances that any progress might be made. The challenge is to ensure the
conditions under which authentic interactions between real people can occur in
such a way so as to ensure the continual asking of questions about the nature
of what we attempt to do in education. Laurillard hints at this in her book:
“only real teachers can solve the problems of their students”.

Wednesday, 24 September 2014

I'm doing some teaching of computing at the moment (a long time since I've done this, so I have to brush up my Java!). The circumstances I am doing this in are strange. It's all about fear - it creates absurd situations. I enjoy teaching, but this stuff makes me depressed: I think of Shostakovich, who always kept a packed suitcase under his bed lest the KGB come knocking armed with giant screwdrivers (I'm embellishing Shostakovich's account by adding the screwdrivers - I think it improves it!)

Anyway, I want to talk about computing as a subject. One of my fantastic PhD students, Kurt Lee (see http://kurt.futuretechies.com/) is researching the new changes to the computing curriculum and their impact on teachers and children in the context of Free Schools. Computing is a weird subject: it keeps changing (a bit like institutional politics!). Indeed, the changes to technology are political in a way: every technical development is effectively a proposal for social change. However, technological developments are never voted for; they just happen (or not). A proposal for technical development may take off (i.e. gain a critical mass of followers) or not. However, even when a technical development takes off, we know that it won't last for ever. We know that any technology's status is time-limited. This produces an educational problem when we try to define a computing curriculum: whatever we teach will be out of date very soon; so what do we teach?

Of course, there are 'generic' skills related to programming, and ways in which students can practise in any language, any environment, and gradually make themselves flexible enough to move with the times. This is one of the reasons why Java has maintained its position in the computing curriculum for at least 10 years or so. However, when I learnt to program on my Masters course, everybody was learning Modula-2, a descendant of Pascal. Indeed, even the operating system of the original Apple Mac was largely written in Pascal. Before that, it was Fortran and COBOL. Things change.
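One way to see what a 'generic skill' might look like in practice: the control flow of a short algorithm transfers almost verbatim between Java, Pascal, Modula-2 and their successors. Here is a minimal sketch in Java (the choice of Euclid's algorithm is my illustration, not anything prescribed by the curriculum):

```java
public class Gcd {
    // Euclid's algorithm: the while loop, the temporary variable and the
    // modulo step would look almost identical in Pascal or Modula-2 -
    // it is this pattern, not the Java syntax, that transfers.
    static int gcd(int a, int b) {
        while (b != 0) {
            int t = b;
            b = a % b;
            a = t;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(48, 18)); // prints 6
    }
}
```

The point is that a student who genuinely understands the pattern can rewrite it in whatever language replaces Java; a student who has only memorised the syntax cannot.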

However, moving with the times only happens if individuals are really interested in what's going on, and usually if they are in an environment which forces them to change. Not everybody is that interested in computing. Indeed, of the population in a typical school, only a tiny percentage of students will be passionate about the technology world. Yet now they all have to learn to program. What use are those 'generic skills' of programming if there isn't a real passion and curiosity for what is happening - either in many learners, or indeed, in many teachers? Doesn't it become just another box to tick, just another item on the curriculum which has to be delivered and studied under pain of failure for students, or of losing one's job for teachers? Fear again.

The motivation for the new computer science curriculum is that whilst technology and innovation are fundamentally important to the economy, the teaching of technology has become boring in schools and doesn't address programming skills: it's all spreadsheets, databases, word processors, etc. Yet everyone knows that technology can be really exciting (if a bit scary and alienating): it can do amazing things like the Oculus Rift, 3D printing, open source hardware, bio-hacking, big data, etc. All of this stuff is important not only because it will excite many kids - particularly if espoused by teachers who are similarly excited - but also because at the root of it all is the fact that all of these things are propositions for social change. Technology is political (technology is in the mix in the political situation that gave rise to my teaching stint at the University!). What sort of a technological world do we want? I agree with Andrew Feenberg that we need to 'politicise technology'. Slavishly trying to bang algorithms into the minds of the young is anti-political, alienating and socially counter-productive.

After my teaching stint, I invited my students (who had already sat through 4 hours of Java) to join my research department for a presentation on learning analytics given by a representative of the educational systems company, Talis. The Talis representative pitched the learning analytics work as a way of making the students' lives 'better' (the social proposition): "it will inform you of the resources you need to spend more time on if you are going to succeed". However, it was clearly the institution which would be the ultimate beneficiary. One of the students replied with a sharp criticism: "Why should I install your Spyware?" The representative didn't really have an answer to that. More of this is needed.

But what's going on here? A powerful organisation is declaring what Searle calls a 'status function': "this is a technology for helping you succeed in your learning". It's a peculiar status function, because it's not at all clear what it means: Succeed? Learning? Help? Talis can only make this fly if others agree to it. The status function addressed to learners didn't meet with much approval from the learners in the session. However, this status function is not the only one made. There's another status function which says "This technology will help you monitor and enhance the educational provision of your staff to ensure the successful retention of your learners". Another powerful group hears this: the university management. The management are who Talis wants to sell to, not the learners. The management is also in a position to declare their own status function to their staff: "This is the technology you now have to use in order to keep your job". That's the really powerful one. Where are the learners' interests? Well, a new status function is declared to them: "this is the technology by which we will measure your progress and gauge the level of support required". Learners want to pass. What choice do they have? The position "Why should I install your Spyware?" is drowned out by the trump card of the institution.

What emerges here are the conflicts and trade-offs of the different commitments individuals have, and the way that powerful organisations manipulate the status functions that they make to different audiences. So what of the computing curriculum? Another status function: "This is the new curriculum for computing which is compulsory". Those who engage with passion will be those who were always passionate about technology. Everyone else will find the path of least resistance to uphold the new status function whilst not threatening any other commitments they have. But there's something else about this status function: "This is the new curriculum about lots of stuff which is likely to be out-of-date soon". How do people respond to that? Again, one is tempted to take the path of least resistance. However, the whole business of these things being around for a short time and then being replaced by something else really IS the subject of technology. That is the thing that the enthusiasts really know about - everything they commit to they do so in the knowledge that it will change. Technologists commit to continual change.

How many teachers commit to stability? How many institutions commit to stability? How many government education departments commit to stability (even when they say they want to encourage innovation)? My guess is that for these people technology, enthusiasm and change is great - so long as it doesn't threaten the status quo! What a contradiction!

Monday, 22 September 2014

This is the sketch of chapter 2 of my (very slowly emerging) book on Education and Information. The real issue it addresses concerns the role of 'critical' thinking about education and technology, and how this inevitably leads to ontology. It's not an easy read: re-examining it now makes me think that I have more changes to make to Chapter 3 on phenomenology (which I posted earlier).

Introduction: Why Ontology?

Critique of educational thinking entails consideration of
the concrete manifestations of theories and discourses within a context of
policy, justice and emancipation. For any educational proposition, we ask, In
whose interests? For what purpose? Who are the losers? What are the
implications for society? What is the implicit understanding of the world? And
so forth. Traditionally this thinking has had its home among the followers of
Marx and has given rise to specific philosophical schools (for example,
Critical Theory of the Frankfurt school of philosophers) and particular
branches of intellectual inquiry in established domains (for example, the
relatively recent rise of Critical Management Studies). The relation between these developments and
Marx’s originating thought has presented a challenge to engage with ‘reality’
in some form, and the turn of recent developments has been towards a concern for ontology.
This has emerged from a critical deepening of Marx’s dialectical materialism
which has, in its turn, embraced an inquiry into the grounds for knowledge of
reality, the philosophy of science, the question of social naturalism and the
nature of causation.

This intellectual move has largely gone unnoticed within the
world of education, where ontology is not a common word. There are numerous
reasons for this, although principal among them is the fact that educational
thinking has been dominated by constructivist world-views which have privileged
the construction of reality in individual minds over any account of
materiality. From Piaget’s ‘genetic epistemology’ and Von Glasersfeld’s ‘radical constructivism’
to Thomas and Harri-Augstein’s “learning conversations” and Pask’s conversation theory, the
principal focus of educational theory has been on knowledge. This has continued
with recent developments in educational technology with the developments of the
MOOC and the VLE grounding themselves in pre-existing constructivist
educational theories.

There are good practical reasons why this should be so.
Constructivist pedagogies privilege conversation, experiential learning and
shared activities. They provide a platform for critiquing didactic practice
which promotes the ‘sage on the stage’ teacher and ‘information transfer’
models of learning. By its very nature as an intellectual inquiry,
constructivism engages minds and bodies in conversation, activity and
exploration and the effects, when implemented by a skilled teacher, can be
transformative. However, such experiences are not universal, and can convince
those who practise them that not only is their practice excellent, but the
theory correct. This can be a barrier to constructivists critiquing their own
theory and the world within which it appears (sometimes) to work. If there has been
an ‘anti-ontological’ move within constructivism, it has been driven by two
forces: on the one hand, a sense of fear that engagement with ontology might
undermine constructivism’s experientially-grounded opposition to
instructionalism and didacticism. On the other hand, constructivism’s emphasis on
conversation has tended to “flatten” the discourse such that distinctions
between ethics, politics, knowledge and action which are fundamental to
ontological thinking become intractable within constructivism’s discourse which
tends to retreat towards varieties of relativism.

Of greater significance is constructivism’s approach to
causality. Whilst its theory establishes the causes for learning as lying
inherent in the dynamics of conversation, the central bone of contention within
the discourse is the precise nature of the causal mechanisms on the one hand, and the relationship between causation and natural necessity on the other. Whilst the
question of possible mechanisms of causation is much discussed (these form the foundations of cybernetic thinking), the nature and
ontology of causation itself is rarely inspected. Taken as implicit is a model
of causation which was established by Hume and which itself is essentially
constructivist (a point often lost in constructivist opposition to Hume’s
‘positivism’). Thus, when varieties of constructivism emerge in educational
technology (for example, the recent vogue for connectivism which underpinned
the MOOC) their defence rests on the supposition of actually existing causal
mechanisms which connect individual subjectivities, whilst failing to critique
the supposition of their own existence.

The subjectivist-objectivist debate in education has a
precursor in the economic discourse. Carl Menger’s critique of economic method
and its relation to objectivity established a line of thinking about economics
which placed individual agency and experience at the centre and laid the
foundations for the subjectivism of the Austrian school of economics. Central
to Menger’s argument was the idea that on an everyday basis, there were partial
regularities of events which could be studied, but that global generalisations
of these events tended to be abstract and unrealistic. Educational
interventions appear similar in this regard: interventions in the classroom
show partial regularities, or as Lawson has more recently termed them,
‘demi-regs’, whilst at the same time assumed global tendencies of policymakers
create demi-regular constraints within the classroom that teachers typically struggle to deal with. The
fundamental question (and the issue between those who will speak of ontology
versus epistemology) is, By what mechanism do such demi-regs arise? It is in
the critical investigation and inspection of possible mechanisms for
educational demi-regs that a social ontology of educational interventions
presents itself. An educational ontology builds up from the particular to the
general, and deals with localised problems rather than ideal solutions. It
aspires to what Popper calls “piecemeal social engineering”.

The social ontologist will ask “what must the world be like
given that demi-reg x occurs?” They will critically consider many possible
responses, and examine them from the perspective not just of conversation, but
social organisation, technical and material structures, political implications, economic considerations and human emancipation. Social constructivism may be one of the possible
answers to the question, although its reduction of political, material and
emancipatory concerns to language coordinations means that it doesn’t
constitute a critical ontological inquiry: instead it imposes its own totalising
ontology.

Whilst there is much to be gained from engaging with
social ontology, there remain problems concerned with the abstractions and
terminology which inevitably result from an ontology (such problems of
abstraction also were the concern of Menger as he critiqued economics). The
question of causes is not easily settled through an abstract critique: if the
causal question is to be thoroughly addressed, then the means by which such a
critique is transmitted from one head to another remains a fundamental
question. For this reason, the combination of ontology and education entails an
examination of learning and educational organisation which necessitates an
engagement with phenomenology as well as critique.

The following chapter is structured in three parts: firstly
I consider the nature of causality in education, taking into account the
history of thinking about causation. Part 2 considers the nature of knowledge
in society and how it relates to causes, but also considers the problems
inherent in an abstract representation of knowledge. Part 3 considers the
processes of teaching and learning which are necessary for any kind of
knowledge to be transmitted. In drawing these themes together, I argue that
abstraction is the underlying issue and that the relationship between theories
of education and politics must play out in the domain of play with technology.

The nature of Causality in Education

Demi-regs in education are those informally coded expectations
that teachers often exchange in the staffroom. The world of knock-on effects of
changes to university funding may be modelled by sociologists and economists
and entirely different conclusions reached depending on the model that one has
of human beings, yet most people consider that there are causal mechanisms
relating to teaching, parents, family or friends. Engineers will see the
‘wiring’ of the system and the levers that effect change. Psychologists on the
other hand might pay deeper attention to the processes of learning itself and
the institutional context within which this arises. Deeper sociological
arguments then raise themselves as to whether we are ‘methodological individualists’
like Weber believing that society is reducible to individuals, or whether we
are Durkheimian institutionalists who see individuals conforming to established
societal norms. Demi-regs are an articulation of expectations which in turn arise from values which have their roots in personal history, discourses of engagement and positions of power. If we identify a demi-reg, what should we do about it?

Demi-regs can be cited as 'evidence' for a particular policy. For example, ‘pupil
behaviour’ is highlighted and measured, and policies such as ‘zero tolerance’ are
introduced on the back of demi-regs that suggest a correlation between the intervention and the result. Typically, these will be formulated around some kind
of methodological intervention that seeks a causal
attribution identifying the demi-reg (or, as in many cases in
education, a ‘demi-pathology’), and decisions (taken in good faith) are made to
alleviate them. New policies introduce new kinds of instruments: typically
targets are set, and the social system within which the demi-reg was identified
is changed to include other work. So-called 'evidence-based policy' is the result.

Typically, with evidence-based policy, evidence is sought after the intervention is decided: the evidence is produced to support interventions, making it more like 'policy-based evidence'. What is the ontology of the situation within which the demi-regs emerge? All observers exist within a social context and participate in the situation, and each - from students to teachers to ministers - constitutes part of the mechanisms of reproduction and transformation of social rules, the reproduction
of rights and responsibilities of different stakeholders and different role
players. Ministers are little different from the rest of us in having their
ideas about education deeply informed by their own experience of it. These
ideas present different conceptions of the causal mechanisms involved in the
education system. Government regulation operates on the basis that the system
is wired in the way the minister thinks, with opponents seeing it wired differently - raising alarm at what they see as
dangerous ‘tinkering’ – as if they were watching an amateur attempt at bomb
disposal! Which levers to pull? What
goals to aim for? Somewhere among the morass of causal assertions, are ideas
about the causes of learning itself. Objections to government regulatory
measures often end up by saying something like “this will damage learning… by
implication, it will damage civil society…” and so on.

Demi-regs are by their nature part of a mechanism which
entwines agents and structures. It is with this situation that we have to ask
whether a naturalistic understanding of the causes of demi-regularities is
possible. To what extent is such a question about the value pluralism in the
educational system? How is it we come to have different ideas about what
education ought to be? How is it ministers of education see it as unnecessary to
inspect their own experiences of education when they come to form policies?
These are also questions about causal mechanisms, but they are questions about
causal mechanisms at a deeper level than the mechanisms which connect the
actions of teachers to the abilities of learners. Yet things can be made better
through careful observation and critique of demi-regs: witness the ways in
which we deal with disability and differences between people in ways that
society can better organise itself to meet individual needs. Whilst the
obsession with labelling children with disorders like autism, attention-deficit
disorder, dyslexia and so on can go too far, it has created conditions for new
ways of looking after each other within education systems which attempt to deal equitably with
disability. The explanatory labelling of these conditions acts as a code for
people to understand the context within which certain behaviour is displayed.
Nothing in the labelling changes individual behaviour itself, but an
understanding of the ontogeny of the behaviour can make a difference.

The Demi-reg and the nature of causation

Demi-regs reveal what we might consider to be causal
patterns, but what is a 'cause’? The idea of “cause” as Aristotle used
it is very different from the idea of “cause” as it is used by ministers
(and practically everyone else) to describe the education system. Sometimes, the
word ‘cause’ is used in place of the word ‘blame’: we might attribute blame for something to somebody, when the
causes are far more complex. For Aristotle, a cause was inherently tied-up with
the substance of a thing. Causes were part of the real stuff of the world. But
when we talk about knock-on effects, we adhere to a different tradition of
thinking about “cause”. This is the tradition of thinking that was ushered in
with the Enlightenment and the work of David Hume. Understanding these
different perspectives on cause is crucial to understanding what is happening
now in education.

The world as it appeared to David Hume was a world where the
‘old order’ of scholastic academic inquiry seemed to be challenged by something
that was much more dynamic and exciting. The reverberations of the work of
Isaac Newton, who had died in 1727, exposed Hume to the latest
developments in “optics, medicine, meteorology, biology, astronomy and
electricity” (see Sapadin, 1997). Scientists like Robert Boyle were exploring
the world in a way that appeared far removed from the dogmatic presentation of
a God-given world by the scholastics. The space was prepared for the
declaration of a schism in the pursuit of knowledge, and Hume set himself the
task of trying to articulate it.

He saw the battle lines drawn around the concept of
‘causality’. The scholastic view was that causes were being discovered by
scientists. The causes of things were inherent in the nature of their
substance. But if this was the case, what was going on in the process of
methodical experimentation which Hume saw all around him? What was happening in
the scientific literature which Hume, more than most at the time, was exposed to?
To him, there clearly seemed to be a connection between the practices of
scientists and their discourse about what they had discovered. And this set the
scene for a new kind of theory about causation.

Newton, for Hume, was the shining example of where “new
philosophy calls all in doubt” (see Mossner, life of David Hume, p75)

“In Newton this island may boast of having produced the greatest and
rarest genius that ever rose for the ornament and instruction of the species.
Cautious in admitting no principles but such as were founded on experiment; but
resolute to adopt every such principle, however new or unusual. From modesty,
ignorant of his superiority above the rest of mankind, and thence, less careful
to accommodate his reasonings to common apprehensions; more anxious to merit
than acquire fame; he was, from these causes, long unknown to the world; but
his reputation at last broke out with a lustre which scarcely any writer,
during his own lifetime, had ever before attained. Whilst Newton seemed to draw
off the veil from some of the mysteries of nature, he showed at the same time
the imperfections of the mechanical philosophy, and thereby restored her
ultimate secrets to that obscurity, in which they ever did and ever will
remain. “

Hume argued that:

“the knowledge of this [causal] relation is not, in any instance,
attained by reasonings a priori; but arises entirely from experience, when
we find that any particular objects are constantly conjoined with each other.”
(Hume, 1748)

Hume concluded that the only way knowledge about causes
could possibly be gained was through experimental observation through the
senses, and that the only way this knowledge could be established as
science was through the reproduction of those sense-conditions under which
observation could take place. Basically, his critique was that the Christianised form of
Aristotelian causation had produced a lack of inquiry. Tying religion to
science in this way was producing more problems because as technologies allowed
for the development of more and more sophisticated instruments of observation,
so questions were asked about the nature of the world which presented answers
which were challenging to Christian doctrine at the time. Galileo was only the
most famous victim of this process. Hume’s intervention was revolutionary. It laid the
philosophical foundations for the practices already established by 18th-century scientists. But his philosophy
had far-reaching consequences. The establishment of the conditions for
scientific experiment became the most important factor in the pursuit of
knowledge, not the ascription of types and genera which could be related to the
substance of God.

The relationship
between Hume’s theory and conventional thinking about cause in education

Positivism today tends to be seen as a dirty word. Hume’s
establishment of event regularities underpinned nascent thinking about social
science in the 19th century. The pursuit of naturalism in the mind
of Comte was driven by the realisation of the possibility that event
regularities might exist within society. At a time when emerging knowledge about human
biology, the identification of flora and fauna, and the growth of taxonomic
representations were flourishing, statistical examination presented the possibility that an
equivalent of event regularities could be established for society. The beginnings of social
science also grew from a world in change. Auguste Comte witnessed at first hand
the aftermath of the French Revolution. Comte was born 22 years after Hume died; Hume’s
impact on French intellectual life had been enormous, amplified through his
effect on Kant, and it was from this intellectual environment that Comte began
to formulate his science of society. It was therefore only natural that he
should look at the example of science as established by Hume as his model for
thinking what a science of society might involve. Comte drew inspiration from
Rousseau (who Hume knew), Saint-Simon and others in his attempt to establish
naturalistic inquiry of society.

Comte published “A course in positive philosophy” between
1830 and 1840. In it, he argued that the co-dependence of theory and
observation were the founding principles of all scientific inquiry, and that
this co-dependence could apply to the study of society as well. This general
principle was founded on Hume’s philosophy as Comte argued that ‘repeatability’
was key in establishing scientific principles. Comte believed that the order of
society was knowable and classifiable. With the emerging mathematical tools of
statistics, he could follow the principles of other Victorian scientists in
identifying genera and classifications. This theoretical assertion about the study of society also
laid the foundations for inquiry into economics. Adam Smith, who Hume also
knew, had already begun to draw up his own picture of socio-economic causation:
his major work is entitled “An inquiry into the Nature and Causes of the Wealth
of Nations”.

Despite the extent of the influence of this
philosophy, the scientific advances that it supposedly underpinned in the
20th century have called some of the foundations of the
philosophy into question. The simple fact is that Hume predicated his reasoning
about the construction of causes on the idea of the ‘closed-system experiment’.
Yet, the laws of science that emerged through those experiments have been far
more successful in domains that lie well outside the closed system of the
experiment. How could it be that a formula or a law could be shown to be
reliable within the confines of an experiment, to be codified by scientists,
and to be shown to still be reliable within the world at large well away from
the original experiment? Mustn’t that have meant that the discovered law was in
some way not in the heads of scientists, but instead really active in the world
after all? Doesn’t that mean that what happened was not, as Hume argued, a process
of social construction in the light of experiment, but a process of discovery?

The causality of a demi-reg in economics
and education

If the regularities of scientific experiments were
actually discoveries and not constructs, what might possibly be discovered with
the identification of a demi-reg? Is there some kind of objective truth about
such discoveries? Is a better world possible?

Reproducible experiment was exciting in the 18th century, and it became clear that scientific
discourse was important as a means of agreeing laws. However, with the
emergence of social science, there were deeper problems about the nature of the
substance of man which were largely ignored by Comte. These were problems
related to the nature of observation and abstraction. Early economists were
aware of this. Carl Menger noted the difference
between the “abstractly conceived economic world” and the “real phenomena of
human economy” (Menger 1963, p. 73). He argued that from the “full empirical
reality” of the world, “numerous elements [are] not emergent from an abstract
economic world”. At the same time he acknowledged that the “most realistic
orientation of theoretical research imaginable must accordingly operate with
abstractions” (ibid, p. 80).

As the Christian project became tied up with Aristotelian
philosophy, so a philosophy of causation became not a support to scientific
inquiry but a matter of dogma. Aquinas developed Aristotelian philosophy in
a way that lent its support to the emerging power of the church. The prime
causal agent became God. But this switch to divine power was made possible
because a claim about the substance of God could be made: that, essentially, God
had divine substance and therefore all things were made through God, and therefore
all acts were attributable to God at some level.

The tensions in Menger’s thinking derive from the subjectivist force within his philosophy, which was to form the basis of the
Austrian school. Whilst Menger’s focus was on accounting for real phenomena
rather than abstract models, he conceived of a methodological process of
creating theory through observation of regularities, which he termed “empirical
laws”. Inevitably these too were abstract, and so Menger’s concern for a
realistic orientation in economics exposes fault lines which underpin
subsequent work within the Austrian school of economics. Menger read Comte, noting that “It was M. Comte’s opinion
that Political economy, as cultivated by the school of Adam Smith’s successors
in this country [Great Britain] and in France, failed to fulfil the conditions
required of a sound theory by Positive Philosophy, and was not properly a
science. He pronounces it to be defective in its conceptions, ‘profoundly irrational’
in its method, and ‘radically sterile’ as regards results”.

Such strong words highlight the division of thinking about
what was scientifically possible in society. Menger wished for a science of
economics that was separable from the science of sociology. Comte disagreed
that this was at all possible.

The fundamental question when faced with a demi-reg is
“given that this is the case, what must the world be like?” This is a different
way of thinking from the causal attributionalism that is more typical of
educational thinking. Ironically, it emerges from Kant’s transcendental logic
of inquiry into the nature of knowledge, and whilst this philosophy rejects
Kantian idealism, it accepts that the way Kant went about his inquiry is valid. It was
Kant on whom Hume had his biggest effect, famously awakening him from his
“dogmatic slumbers”. Kant’s philosophy was underpinned by a particular method
that he adopted in thinking about the world. His ‘transcendental arguments’
required him to look at the world and to ask himself, “if this is what is going
on, what must the world be like?” Fundamentally, Kant had taken Hume’s ideas to
heart, and the transcendental method was in fact a form of Humean reasoning: a
process of looking at the phenomena of the world and considering what caused
them to come to be.

Reasoning about “what the world must be
like” led Kant to conclude that there must exist basic ‘categories’ of understanding for making sense of and coming to
know the world. In the 1980s, a number of philosophers also pursued a kind of
Kantian transcendental reasoning about the world. This reasoning would first of
all show that Hume’s description of the way that causes are constructed
couldn’t be right: scientific knowledge has too many successes outside
the domain of the closed-system experiment for the construction of causes to be the
only thing going on in determining causal laws. There had to be something
‘real’ that was discovered. This argument has been principally developed by Rom
Harré and Roy Bhaskar.

The nature of knowledge and the nature of the world

If a demi-reg is identified and a transcendental logic is
articulated, the next challenge is to understand the nature of what might bring
an event to pass. There are aspects of material change which appear
deterministic; there are aspects of social change which appear to depend on
people. Bhaskar’s critique of Hume leads
him to identify two types of mechanism in the world. On the one hand, he must
account for the mechanisms uncovered by physicists which operate in the same
way across closed and open systems (for example, gravity). These are termed
‘intransitive mechanisms’ which, Bhaskar argues, must exist independently of
human agency. On the other hand, social tendencies or partial regularities do
not behave like this, usually being context-dependent. These mechanisms depend
on human agency for their existence, and Bhaskar calls them ‘transitive
mechanisms’.

In developing the distinction between transitive and
intransitive mechanisms, Bhaskar draws a distinction between the ‘real’, the
‘actual’ and the ‘empirical’ as domains of reality within which mechanisms
operate. The empirical constitutes that aspect of reality that can be
identified through experience (for example, through experiment), whilst the
actual includes states of affairs which have the possibility of being
experienced without necessarily being the subject of experiment (for example,
potential but as-yet-unrealised technologies). The real includes both of these
aspects, but adds the domain of ‘absence’: things which are not actual – which
do not have the possibility of being experienced directly – but which can be
experienced indirectly through their effects.

In asserting that mechanisms are discovered (not constructed) by scientists and then codified as
laws of science, Bhaskar invokes the operation of both transitive and
intransitive mechanisms. He argues that Hume’s scepticism about causes led to
erroneous thinking about scientific methodology, which was carried over into the
social sciences, producing the kind of practices which establish artificial
regularities, idealised formal abstractions, poor explanatory power and
predictive failure. The principal argument is that because of mistaken
ontological thinking at a methodological level (i.e. Hume’s), implicit,
uninspected and erroneous ontological assumptions have been embedded within the
social sciences. The ‘critical’ aspect of Critical Realism therefore seeks to
make explicit the implicit ontological assumptions of methods, theories and
models as a way of moving towards a deep, stratified conception of generative
mechanisms in the world. However, there must be some kind of process not just for teasing out the different
layers of reality, but also for teaching awareness of those different
levels. For Bhaskar, this process is one of critique: a
fundamentally negatively-driven process in which the underlying world-views
(ontologies) are unpicked so that they can be properly examined for their
explanatory power in the light of the demi-regs presented.

Decision, Education and
Critique: From Is to Ought

Whatever is seen of a demi-reg, education involves decisions.
Every causal process that is identified depends in some way on processes that emerge between individual heads and the
social and material world: in the heads of teachers, curriculum designers,
students, administrators, authors and academics. Decisions emerge from a
context which constrains them. For every decision, there are questions which
relate to the analytical context (its internal logic), the extent to which any
decision is dependent on past experience or what is felt to be ‘common sense’,
and the extent to which any decision might be taken against a background of doubt
or uncertainty as to whether a particular decision is the right one, or in
whose interests it is made.

‘Being’ is the context of decision-making, and ontology, as the
study of being, is about determining those constraints. If it cannot provide a
fundamental approach, it is because it must remain oriented towards the nature of doubt
about any position (a common criticism of Bhaskar’s Critical Realism and its
acolytes is that it is too often insufficiently critical). A critical
ontological approach differs from a constructivist approach because it argues
that it is sensible to see those constraints as ‘real’. But what does ‘real’
mean?

To understand the real, we have to understand the deep
structures which relate individual psychologies to social structures,
political policies and agendas, and individual freedom. The constraints on
agency are deeply embedded in our experience of life with one another: among
the constraints that bear upon our actions are issues relating to our
capabilities and ethics. Within Hume’s causal model, such concerns for the
decision-maker were not within the scope of a naturalistic approach: the
socially-constructed causes relating to the regularities of events that were
witnessed had no relation to the ethical background of observers. Hume may
well have suspected that to admit such a connection would undermine
his theory. He identified that within ethical reasoning, different kinds of argument ensued in describing what there is and what there ought to be, and his position was that this difference in the kinds of arguments deployed was a problem. In coming to examine the causal processes of education and society in general, negotiating the territory of is's and ought's is fundamental. Can the
“oughts” of education be determined by identifying the ‘is’s? To what extent is it possible to be objective? To what extent is
a naturalism of education possible?

Abstraction, Learning and Action

Social ontologies are inevitably abstract. Actions in
education have real effects in terms of the resulting freedoms of individuals. But
how can something abstract account for lived experience? Economics has been an
important area for the application of Critical Realism. Lawson has argued that
abstraction in critical realism means:

“that which is abstracted from is
the concrete. The point of
abstraction is to individuate one or more aspects, components, or attributes
and their relationships in order to understand them better” (ibid)

However, even with (or because of) an emphasis on causal
mechanisms, Critical Realism is caught positing abstracted social relations as
a means of representing reality. In particular, the descriptions of mechanisms
cannot in themselves account for the necessity for learning about the
mechanisms, shared understanding and collective action – all of which are
necessary if a mechanistic description is to have a transformative effect in
the world.

In Bhaskar’s later work, he situated the identification of
ontological mechanisms as the first stage in a dialectical process of
becoming more critically aware of the world (Bhaskar, 1993). Bhaskar labels the
‘moments’ of this dialectical process with the acronym ‘MELD’:
ontological mechanisms operate as the key focus of the first level (M),
absence at the second (E), totality and love at the third (L), and
transformative praxis at the fourth (D). Given Bhaskar’s emphasis on a
dialectic, and what appears to be a move away from the description of causal
mechanisms (which now appear as merely the beginning of the “Dialectical Critical
Realism” process), there appears to be a gap between the description of causal
mechanisms and the identification of absences on the one hand, and on the other, the
transformative social praxis that Lawson and other Critical Realist economists
hope to achieve through critique and the abstracting of mechanisms.

Decisions, not
abstractions, have real effects: the decision of a teacher to teach in a
particular way, the decision of a headmaster to focus on a particular area of
education, the decision of ministers to reform education. These decisions may have abstract
origins, but they have real effects on the emotions of learners and teachers, on
social processes, economic systems, and so on. There is a real moment of
‘truth’ – a leap of faith, as Kierkegaard might have it.
The problem with the cybernetic viewpoint is that thinking about thinking
(which is where second-order cybernetics can take us) serves no purpose.
Kierkegaard says: “Thinking can turn toward itself in order to think about
itself and skepticism can emerge. But this thinking about itself never
accomplishes anything.” Thinking should instead serve by thinking something (Wikipedia). Kierkegaard wants to stop “thinking’s
self-reflection”, and it is that movement which constitutes a leap. This
involves taking a ‘leap to faith’, which forms the basis of his existential
philosophy.

A decision to change economic policy, make a purchase, or
even to make an utterance requires some kind of rationale upon which
abstractions will have some bearing. Abstract ideas ground decisions as to what
arguments to utter, what distinctions to make, what groups to join, or what
policies to support. Decisions result from some reflexive mechanism made in
anticipation of what is thought might occur in the light of the decision “If I
make this statement, these people are likely to agree/disagree with me”, and so
on. In making the decision, the likely decisions of others need to be
speculated upon. Shared understanding of problems, situations, history and
background can assist in the selecting of effective decisions. In academic
discourse, it is the discourse itself which provides the ground for the making
of decisions.

If abstraction grounds decision, what grounds the
abstraction? The Critical Realist answer to this is ‘reality’; the cybernetic
answer is ‘mechanism’. But “reality” and “mechanism” in these contexts both
remain essentially abstract – the nature of reality codified in the neologisms
of Bhaskar (or Lawson) is no less abstract than the complex mechanics of the
cyberneticians. The abstract ‘objects of knowledge’ are part of Bhaskar’s
‘transitive domain’. So it appears that things are stuck in a loop: on the one
hand, realism pins its colours to the mast of reality, arguing against
idealism, whilst at the same time, the only “reality” it can effectively point
to is an abstract description of lived experience.

This problem of abstraction can be usefully inspected
through the lens of Bhaskar’s dialectical levels described above. At the transpersonal level (3L) of ‘love’,
lived experience with other people appears to take precedence over abstraction
alone. Contexts, activities, social groupings, conviviality all become
essential parts not only of realist inquiry but of cognitive processes. In a
totalising context, abstractions become lived through experience, where it is
not the logical consistency of the abstraction which matters, but the emergent
social and psychological effects of exploring an abstraction together.

A musical analogy may help to illustrate this point. A
written musical score encodes the abstractions of a composer: these represent a
series of decisions by the composer. A performer, skilled in the realisation of
abstractions into lived experience, enters the world of the abstraction with a
view to making explicit their experience through performance: the performer
makes decisions based on their own capabilities, the score and their judgement
of the effect. Audiences then make decisions (whether to listen or not)
depending on their judgement about their own experience (but also the judgement
and reaction of others). So the abstraction, once locked into the score for a
few to appreciate, becomes manifest as a shared experience for many. The
composer and the performer may well have had a view as to the likely shared
determination of the audience in the context of this revealed shared
experience, and aimed for their abstraction to be sufficiently accurate such
that the shared experience bears some resemblance to their expectations. They
can be shown to be right or wrong depending on the reaction of the audience to
their efforts.

Conclusion: Some unanswered questions

The purpose of this chapter has been to show that social
ontology is a good way of answering the critical questions we might pose about
any policy position that might be taken in education. To answer questions like
“in whose interests? Who loses? What are the consequences?” we need to
understand the nature of the world that is supposed to lie behind a particular policy
idea. We need, in particular, to understand the kind of demi-regs which give
rise to a particular policy. And we need to critically inspect the possible
mechanisms whereby those demi-regs might emerge.

One of the weaknesses of the ontology project is the
tendency to produce explanations for the world with all their conceptual
paraphernalia, rather than present invitations to think about it. Yet, methods
of social ontology can be powerful in providing a way of collecting the
evidence of regularities or demi-regularities which demand some kind of
analysis of their causation. The first question is, How far down do you go in
producing causal explanations? The second question is, Given that any
causal explanation takes the form of an abstraction of one sort or another, how
does it get from one brain to another? How does any abstraction of a social
mechanism account for its own ontogeny and transmission?

The problem lies in the fact that whilst the ethical
question is directly addressed by a social ontology approach, the process of
establishing the connection between an Is and an Ought relies on effective
processes of teaching and learning which lie outside the system. So, for
example, we can examine the demi-regs of education, we can attribute those
demi-regs to causal mechanisms, and yet we cannot account for the most
effective ways in which those demi-regs may themselves be transmitted and processes
of social change set underway. Is’s and Ought’s are divided by educational
processes.

The causal connections between what a person learns and what causes them to
learn underpin our ideas about what education ought to do. We seek to discover
the causes of learning in order to make decisions about policy. But education
is very strange, and doesn’t appear to exhibit the causal connections between
things that are evident from physical experiments. The philosophical and
economic hinterland to this question has led us on an intellectual journey
which has taken in philosophy, economics and cybernetics. There are problems at
each step of the way in terms of how we think about society: from the naivety
of physical realism which Hume appears to espouse, to the solipsism of second-order
cybernetics, to the difficulty of reconciling general social forces with a
radical personalism in education. Is there a
way through this?

Decisions are important because they have real effects on
ordinary people. Policies have real effects on children, decisions about
curriculum have effects on all learners and so on. But what is in a decision?
How do we consider decisions in the context of causes? Decisions appear causal,
but what causes a decision? The answer to this question entails an analysis of
experience: ontology on its own is not enough.