Donald Clark Plan B

What is Plan B? Not Plan A!

Saturday, December 23, 2017

Is debate around 'bias in AI' driven by human bias? Discuss

When AI is mentioned it’s only a matter of time before the
word ‘bias’ is heard. They seem to go together like ping and pong, especially in debates around AI in education. Yet the
discussions are often merely examples of bias themselves – confirmation,
negativity and availability biases. There’s little analysis behind the claims: ‘AI programmers are largely white males so all
algorithms are biased – patriarchal and racist’ or the commonly uttered
phrase ‘All algorithms are biased’. In
practice, you see the same few examples being brought up time and time again:
black face/gorilla and reoffender software. Most examples have their origin in
Cathy O’Neil’s Weapons of Math
Destruction. More on this later.

To be fair, AI is for most people an invisible force, the part of
the iceberg that lies below the surface. AI is many things, can be technically
opaque, and true causality can be difficult to trace. So, to unpack this issue it
may be wise to look at the premises of the argument, as this is where many of
the misconceptions arise.

Coders and AI

First up, the charge that the root cause is male, white
coders. AI programmers these days are more likely to be Chinese or Indian than
white. AI is a global phenomenon, not confined to the western world. The
Chinese government has invested a great deal in these skills through Artificial
Intelligence 2.0. The 13th Five-Year Plan (2016-2020), the Made in China 2025
program, the Robotics Industry Development Plan and the Three-Year Guidance for
Internet Plus Artificial Intelligence Plan (2016-2018) are all contributing to
boosting AI skills, research and development. India has an education system
that sees ‘engineering’ and ‘programming’ as admirable careers and a huge
outsourcing software industry with a $150 billion IT export business. Even in
Silicon Valley the presence of Asian and Indian programmers is so prevalent
that they feature in every sitcom on the subject. Even if the numbers were wrong,
the idea that coders infect AI with racist code, like the spread of Ebola, is
absurd. One wouldn’t deny the probable presence of some bias, but the idea
that it is omnipresent is ridiculous.

Gender and AI

True, there is a gender differential, and this will continue,
as there are gender differences when it comes to the focused, attention-to-detail
coding in the higher echelons of AI programming. We know that there is a
genetic cause of autism, a constellation (not spectrum) of cognitive traits, and
that this is heavily weighted towards males. For this reason alone there is
likely to be a gender difference in high-performance coding teams for the
foreseeable future. In addition, the idea that these coders are unconsciously,
or worse, consciously creating racist and sexist algorithms is an exaggeration.
One has to work quite hard to do this, and to suggest that ALL algorithms are
written in this way is another exaggeration. Some may be, but most are not.

Anthropomorphic bias and AI

The term Artificial Intelligence can in itself be a problem,
as the word ‘intelligence’ is a genuinely misleading, anthropomorphic term. AI
is not cognitive in any meaningful sense, not conscious and not intelligent,
other than in the sense that it can perform some very specific tasks well. It
may win at Jeopardy, chess and GO but it doesn’t even know that it is playing
these games, never mind that it has won. Anthropomorphic bias appears
to arise from our natural ability to read the minds of others, leading us to
attribute qualities to computers and software that are not actually there.
Behind this basic confusion is the idea that AI is one thing – it is not. It
encapsulates 2500 years of mathematics, since Euclid put the first algorithm
down on papyrus, and there are many schools of AI that take radically different
approaches. The field is an array of different techniques, often
mathematically quite separate from each other.

ALL humans are biased

First, it is true that ALL humans are biased, as shown by
Nobel Prize winning psychologist Daniel Kahneman and his colleague Amos
Tversky, who exposed a whole pantheon of biases that we are largely born with
and that are difficult to shift, even through education and training. Teaching is
soaked in bias. There is socio-economic bias in policy, as it is often made by
those who favour a certain type of education. Education can be bought privately,
introducing inequalities. Gender, race and socio-economic bias are often found
in the act of teaching itself. We know that gender bias is present in subtly
directing girls away from STEM subjects, and we know that children from lower
socio-economic groups are treated differently. Even so-called objective
assessment is biased, often influenced by all sorts of cognitive factors –
content bias, context bias, marking bias and so on.

Bias in thinking about AI

There are several human biases behind our thinking about AI.

We have already mentioned Anthropomorphic bias; reading ‘bias’ into software is often the result of this over-anthropomorphising.

Availability bias arises when we frame thoughts on what is available, rather than pure reason. So
crude images of robots enter the mind as characterising AI, as opposed to
software or mathematics, which is not, for most, easy to call to mind or
visualise. This skews our view of what AI is and its dangers, often producing
dystopian ‘Hollywood’ perspectives, rather than objective judgement.

Then there’s Negativity bias, where the negative has more impact than the positive, so the Rise of
the Robots and other dystopian visions come to mind more readily than positive
examples such as fraud detection or cancer diagnosis.

Most of all we have Confirmation bias, which leaps into
action whenever we hear of something that seems like a threat and we want to
confirm our view of it as ethically wrong.

Indeed, the accusation that all algorithms are biased is
often (not always) ignorance about what algorithms are, combined with four
human biases – anthropomorphism, availability, negativity and confirmation. It
is often a sign of bias in the objector, who wants to confirm their own
deficit-based weltanschauung and apply a universal, dystopian interpretation to
AI, with a healthy dose of neophobia (fear of the new).

Not ALL AI is biased

In your first lesson on algorithms you are likely to be
taught some sorting mechanisms (there are many). It is difficult to see how
sorting a set of random numbers into ascending order can be either sexist or
racist. The point is that most algorithms are benign, doing a mechanical job,
free from bias. They improve strength, precision and performance over time
(robots in factories), compress and decompress communications, encrypt data,
devise computational strategies in games (chess, GO, Poker and so on), support
diagnosis, investigation and treatment in healthcare, and reduce fraud in
finance. Most algorithms, embedded in most contexts, are benign and
free from bias.
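To make the point concrete, here is a minimal sketch of one such benign algorithm – a bubble sort in Python. It simply orders numbers; there is nowhere for social bias to hide.

```python
# A bubble sort: a purely mechanical procedure with no social content.
def bubble_sort(numbers):
    """Sort a list of numbers into ascending order."""
    items = list(numbers)  # copy, so the input is untouched
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([7, 3, 9, 1, 4]))  # [1, 3, 4, 7, 9]
```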

Note that I said ‘most’ not ‘all’. It is not true to say
that all algorithms and/or data sets are biased, unless one resorts to the idea
that everything is socially constructed and therefore subject to bias. As
Popper showed, this is an all-embracing theory to which there is no possible
objection, as even the objections are interpreted as being part of the problem.
This is, in effect, a sociological dead-end.

Bias in statistics and maths

AI is not conscious or aware of its purpose. It is, as
Roger Schank keeps saying, just software, and as such is not ‘biased’ in the
way we attribute that word to humans. The biases in humans have evolved over
millions of years, with additional cultural input. AI is maths, and we must be
careful about anthropomorphising the problem. There is a definition of ‘bias’
in statistics, which is not a pejorative term but precisely defined as the
difference between an estimator’s expected value and the true value of the
parameter being estimated. If this difference is zero, the estimator is called
unbiased. This is not so much bias as a precise recognition of differentials.
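In symbols, for an estimator of a true parameter:

```latex
\mathrm{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta,
\qquad \hat{\theta} \text{ is unbiased when } \mathrm{Bias}(\hat{\theta}) = 0.
```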

However, human bias can be translated into other forms of statistical
or mathematical bias. One must distinguish here between algorithms and data.
There is no exact mathematical definition of ‘algorithm’; in algorithms, bias
is most likely to be introduced through the weightings and techniques used.
Data is where most of the problems arise. One example is poor sampling: too
small a sample, under-representation or over-representation of groups. Data
collection can also carry bias due to faulty gathering in the instruments
themselves. Selection bias in data occurs when it is gathered selectively and
not randomly.
However, the statistical approach at least recognises these biases and adopts
scientific and mathematical methods to try to eliminate them. This is a key
point – human bias often goes unchecked, while statistical and mathematical
bias is subjected to rigorous checks. That is not to say the process is
flawless, but error rates and methods to quantify statistical and mathematical
bias have been developed over a long time, precisely to counter human bias.
That is the essence of the scientific method.
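A small illustrative sketch (with invented numbers) shows how selection bias skews an estimate, and how random sampling – the standard statistical remedy – corrects it:

```python
import random

random.seed(42)

# A toy population: 10,000 incomes, most modest, a wealthy minority.
population = [random.gauss(30_000, 5_000) for _ in range(9_000)] + \
             [random.gauss(150_000, 20_000) for _ in range(1_000)]

true_mean = sum(population) / len(population)

# Selection bias: sampling only the wealthy tail (e.g. one postcode).
biased_sample = [x for x in population if x > 100_000][:500]
biased_mean = sum(biased_sample) / len(biased_sample)

# The statistical remedy: a random sample of the whole population.
random_sample = random.sample(population, 500)
random_mean = sum(random_sample) / len(random_sample)

print(f"True mean:          {true_mean:,.0f}")
print(f"Biased sample mean: {biased_mean:,.0f}")  # far too high
print(f"Random sample mean: {random_mean:,.0f}")  # close to the truth
```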

An aside…

The word ‘algorithm’ induces a somewhat simplistic
interpretation of AI. Some algorithms are not created by humans: code can
create code, and some are deliberately generated in evolutionary AI to create
variation, then selected against a fitness purpose. It’s complex. There are
algorithms in nature that determine genetic outcomes, the way plants grow and
many other natural phenomena. Some think that there is a set of deep algorithms
that determine the whole of life itself. Evolutionary AI allows algorithms to be
promulgated or generated by algorithms themselves, in an attempt to mimic
evolution, by defining fitness and selecting those that work. While it is true
that bias can creep into this process, it is wrong to claim that all algorithms
are created solely by the hand of the coder.
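As a rough illustration of the idea, here is a toy evolutionary loop in Python: variation by mutation, selection against a fitness function. The target string and parameters are invented for demonstration; real evolutionary AI is far more elaborate.

```python
import random

random.seed(1)
TARGET = "HELLO"  # illustrative fitness target
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Count of characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly change one character: variation."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a random population; no human writes the final solution.
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)  # selection
    best = population[0]
    if best == TARGET:
        print(f"Generation {generation}: {best}")
        break
    # The fittest survive; mutated copies of them fill the rest.
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(40)]
```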

AI and transparency

A common observation about contemporary AI is that its inner
workings are opaque, especially machine learning using neural networks. But
compare this to another social good – medicine. We know it works but we often
don’t know how. As Jon Clardy, a professor of biological chemistry and molecular
pharmacology at Harvard Medical School, says, the idea that drugs are the result
of a clean, logical search for molecules that work is a ‘fairytale’. Many drugs
work but we have no idea why. Medicine tends to throw possible
solutions at problems, then observe whether they work or not. Most AI is not like
this, but some is. We need to be careful about bias, but in many cases,
especially in education, we are more interested in outputs and attainment,
which can be measured in relation to social equality and equality of
opportunity. We have a far greater chance of tackling these problems using AI
than by sticking to good, old-fashioned bias in human teaching.

FAIL means First Attempt In Learning

Nass and Reeves, through 35 studies in The Media Equation,
showed that the temptation to anthropomorphise technology is always there. We
must resist it, and recognise it as a bias of our own. When an algorithm, for
example, correlates a black face with a gorilla, it is not that it is biased in
the human sense of being a racist agent. The AI knows nothing of itself; it is
just software. It is merely an attempt to execute code, and this sort of
error is often how machine learning actually learns. Indeed, this repeated
attempt at statistical optimisation lies at the very heart of what AI is.
Failure is what makes it tick. The good news is that repeated failure results
in improvement in machine learning, reinforcement learning, adversarial
techniques and so on. It is often absolutely necessary to learn from mistakes
to make progress. We need to applaud failure, not jump on the bias bandwagon.
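A minimal sketch of this learning-from-failure loop, with invented data: a one-parameter model is repeatedly measured against its error and nudged to reduce it.

```python
# Learning from failure: the error (the 'failure') on each example
# drives the update that improves the model.
# Toy data, invented for illustration: y is roughly 3 * x.
data = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8)]

w = 0.0    # initial guess: wrong on purpose
lr = 0.01  # learning rate

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the parameter to reduce the error

print(round(w, 2))  # converges towards roughly 3.0
```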

When Google was found to stick the label of gorilla on black
faces in 2015, there is no doubt that it was racist in the sense of causing
offence. But rather than someone at Google being racist, or a piece of maths
being racist in any intentional sense, this was a systems failure. The problem
was spotted and Google responded within the hour. We need to recognise that
technology is rarely foolproof; neither are humans. Failures will occur.
Machines do not have the cognitive checks and balances that humans have on such
cultural issues, but they can be changed and improved to avoid them. We need to
see this as a process and not block progress on the back of outliers. We
need to accept that these are mistakes and learn from them. If
mistakes are made, call them out, eliminate the errors and move on. FAIL in
this case means First Attempt In Learning. The correct response is not to
dismiss AI because of these failures but to see them as opportunities
for success.

The main problem here is not the very real issue of
eliminating bias from software, which is what we must strive to do, but the simple
contrarianism behind much of the debate. This was largely fuelled by one book…

Weapons of 'Math' Destruction - sexed up dossier on AI?

An unfortunate title, as O’Neil’s supposed WMDs are as bad as Saddam Hussein’s
mythical WMDs: the evidence is similarly weak, sexed up and cherry-picked. This
is the go-to book for those who want to stick it to AI by reading a pot-boiler.
Rather than taking an honest look at the subject, O’Neil takes the ‘Weapons of
Math Destruction’ line far too literally, unwittingly re-using a term that has
come to mean exaggeration and untruths. The book has some good case studies and
passages, but the search for truth is lost as she tries too hard to be a
clickbait contrarian.

Bad examples

The first example borders on the bizarre. It concerns a teacher who is
supposedly sacked because an algorithm said she should be. Yet the true cause,
as revealed by O’Neil herself, was other teachers who had cheated on behalf of
their students in tests. Interestingly, they were caught through statistical
checking, as too many erasures were found on the test sheets. That’s more man
than machine.

The second is even worse. Nobody really thinks that US College Rankings are
algorithmic in any serious sense. The ranking models are quite simply
statistically wrong. The problem is not the existence of fictional WMDs but
schoolboy errors in the basic maths. It is a straw man: the rankings use
subjective surveys and proxies, and everybody knows they are gamed. Malcolm
Gladwell did a much better job of exposing them as self-fulfilling exercises in
marketing. In fact, most of the problems uncovered in the book, on deeper
analysis, are human.

Take PredPol, the predictive policing software. Sure, it has its glitches, but
the advantages vastly outweigh the disadvantages, and the system, and its use,
evolve over time to eliminate the problems. The main problem here is a form of
bias or one-sidedness in the analysis. Most technology has a downside. We drive
cars despite the fact that well over a million people die gruesome and painful
deaths every year in car accidents. Rather than teasing out the complexity,
even comparing upsides with downsides, we are given over-simplifications. The
proposition that all algorithms are biased is as foolish as the idea that all
algorithms are free from bias. This is a complex area that needs careful
thought, and the truth lies, as usual, somewhere in between. Technology
often has this cost-benefit feature. To focus on just one side is quite simply
a mathematical distortion.

The chapter headings are also a dead giveaway - Bomb Parts, Shell Shocked, Arms
Race, Civilian Casualties, Ineligible to Serve, Sweating Bullets, Collateral
Damage, No Safe Zone, The Targeted Civilian and Propaganda Machine. This is not
9/11, and the language of WMDs is hyperbolic - verging on propaganda itself.

At times O’Neil makes good points on data – small data sets, subjective survey
data and proxies – but this is nothing new and features in any 101 statistics
course. The mistake is to pin the bad-data problem on algorithms and AI –
that’s often a misattribution. Time and time again we get straw men in online
advertising, personality tests, credit scoring, recruitment, insurance and
social media. Sure, problems exist, but posing marginal errors as a global
threat is a tactic that may sell books but is hardly objective. In this sense,
O'Neil plays the very game she professes to despise - bias and exaggeration.

The final chapter is where it all goes badly wrong, with the laughable
Hippocratic Oath. Here’s the first line of her imagined oath: “I will remember
that I didn’t make the world, and it doesn’t satisfy my equations” – a flimsy
line. There is, however, one interesting idea – that AI be used to police
itself. A number of people are working on this, and it is a good example of
seeing technology realistically, as a force for both good and bad, where the
good will triumph if we use it for human good.

This book relentlessly lays the blame for all kinds of injustices at the door
of AI, but mostly it exaggerates or fails to identify the real, root causes.
The book is readable, as it is lightly autobiographical, and it does pose the
right questions about the dangers inherent in these technologies.
Unfortunately it provides exaggerated analyses and rarely the right answers.
Let us remember that Weapons of Mass Destruction turned out to be lies, used to
promote a disastrous war. They were sexed up through dodgy dossiers. So it is
with this populist paperback.

Conclusion

This is an important issue being clouded by often uninformed and exaggerated
positions. AI is unique, in my view, in having a large number of well-funded
entities set up to research and advise on the ethical issues around AI. They
are doing a good job of surfacing issues and suggesting solutions, and they
will influence regulation and policy. Hyperbolic statements based on a few
flawed, meme-like cases do not solve the problems that will inevitably arise.
Technology is almost always a balance of upsides and downsides; let’s not throw
away the opportunities in education on the basis of bias, whether in
commentators or in AI.

Sunday, December 17, 2017

10 uses for Chatbots in learning (with examples)

As chatbots become common in other contexts, such as retail,
health and finance, so they will become common in learning. Education is always
somewhat behind other sectors in considering and adopting technology, but adopt
it will. There are several points across the learner journey where bots are
already being used, and there is already a range of fascinating examples.

1. Onboarding bot

Onboarding is notoriously fickle. New starters arrive at different times and
have different needs, yet the old model of a huge dump of knowledge, documents
and compliance courses is still all too common. Bots are being used to
introduce new students or staff to the people, environment and purpose of the
organisation. New starters have predictable questions, so answers can be
provided straight to mobile, directing them to people, processes or procedures
where necessary. It is not that the chatbot will provide the entire solution,
but it will take the pressure off and respond to real queries as they arise.
Available 24/7, it can give access to answers as well as people. What better
way to present your organization as innovative and responsive to the needs of
students and staff?

2. FAQ bot

In a sense Google is a chatbot: you type something in and up pops a set of
ranked links. Increasingly you may even get a short list of more detailed
questions you may want to ask. Straight-up FAQ chatbots, with a well-defined
set of answers to a predictable set of questions, can take the load off
customer queries, support desks or learner requests. A lot of teaching is
admin, and a chatbot can relieve that pressure at a very simple level within a
definite domain – frequently asked questions.
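A minimal sketch of such a bot, with invented questions and answers, matching queries by simple keyword overlap:

```python
# A minimal FAQ bot: a well-defined set of answers to predictable
# questions. All Q&A pairs below are invented for illustration.
FAQ = {
    "where do i find the course timetable":
        "The timetable is on the student portal under 'My Courses'.",
    "how do i reset my password":
        "Use the 'Forgotten password' link on the login page.",
    "when is the assignment deadline":
        "Deadlines are listed on each course page.",
}

def answer(query):
    """Return the answer whose question shares the most words with the query."""
    query_words = set(query.lower().replace("?", "").split())
    best, best_overlap = None, 0
    for question, reply in FAQ.items():
        overlap = len(query_words & set(question.split()))
        if overlap > best_overlap:
            best, best_overlap = reply, overlap
    return best or "Sorry, I don't know that one. Contact the support desk."

print(answer("How do I reset my password?"))
```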

3. Invisible LMS bot

At another level, the invisible LMS, fronted by a chatbot, allows people to ask
for help and shifts formal courses into performance support, within the
workflow. LearningPool’s ‘Otto’ is a good example. It sits on top of content,
accessible from Facebook, Slack and other commonly used social tools. You get
help in various forms, such as simple text, chunks of learning, people to
contact and links to external resources, as and when you need them. Content no
longer sits in a dead repository, waiting for you to sign in or take courses,
but becomes a dynamic resource, available when you ask it something.

4. Learner engagement bot

Learners are often lazy. Students leave essays and assignments to the last
minute; learners fail to do pre-work and to finish courses – it’s a human
failing. They need prompting and cajoling. Learner engagement bots do this,
with pushed prompts to students and responses to their queries. ‘Differ’ from
Norway does precisely this. It recognizes that learners need to be engaged,
helped, even pushed through the learning journey, and that is precisely what
it does.

5. Learner support bot

Campus support bots or course support bots go one stage further and provide
teaching support in some detail. The idea is to take the administrative load
off the shoulders of teachers and trainers. Faculty response times to student
emails can be glacial. Learner support bots can, if trained well, respond with
accurate and consistent answers quickly, 24/7.

The Georgia Tech bot Jill Watson, and its descendants, respond in seconds.
Indeed, Georgia Tech had to slow the response time down to mimic the typing
speed of a human. The learners, 350 AI students, didn’t guess that it was a bot
and even put it up for a teaching award.

6. Tutor bots

Tutor bots are different from chatbots in terms of their
goals, which are explicitly ‘learning’ goals. They retain the qualities of a
chatbot – flowing dialogue, tone of voice, exchange, human-like behaviour – but
focus on the teaching of knowledge and skills. Straight-up teaching is one
approach, where the bot behaves like a Socratic teacher, asking sprints of
questions and providing encouragement and feedback. This type of bot can be
used as a supplement to existing courses to encourage engagement. Wildfire, the
AI content-generation service, uses bots of this type to deliver actual teaching
on apprenticeship content, as a supplement to courses also built using AI, in
minutes not months. Once the basic knowledge has been acquired, the bot tests
the student as well as getting them to apply their knowledge.
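As a rough sketch of the ‘sprints of questions’ pattern (questions invented for illustration): the bot asks short questions, gives immediate feedback and retries on failure.

```python
# A tiny Socratic 'sprint': ask, check, give feedback, retry on failure.
QUESTIONS = [
    ("What does FAIL stand for here?", "first attempt in learning"),
    ("Is a chatbot conscious? (yes/no)", "no"),
]

def sprint():
    for question, expected in QUESTIONS:
        while True:
            reply = input(question + " ").strip().lower()
            if reply == expected:
                print("Well done.")
                break
            print("Not quite - try again.")  # feedback, then retry

if __name__ == "__main__":
    sprint()
```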

7. Mentor bot

The point of a bot may not be simply to answer questions but to mentor
learners, providing advice on how to find the information on your own, to
promote problem solving. AutoMentor, by Roger Schank, is one such system, where
the bot knows the context and provides not just FAQ answers but advice.
Providing answers is not always the best way to teach. At a higher level,
chatbots could be used to encourage problem solving and critical skills, by
being truly Socratic, acting as a midwife to the student’s behaviours and
thoughts. Roger Schank is using these in defence-funded projects on cyber
security.

As the dialogue gets better – drawing not only on a solid knowledge base, good
learner engagement and focused, detailed feedback, but also on opening up
perspectives, encouraging the questioning of assumptions and the veracity of
sources – so teaching critical thinking could also be possible. Bots will be
able to analyse text to expose factual, structural or logical weaknesses. The
absence of critical thought will be identified, along with suggestions for
improving this skill by prompting further research ideas, sound sources and
other avenues of thought. This ‘bot as critical companion’ is an interesting
line of development.

8. Scenario-based bots

Beyond knowledge, we have the teaching and learning of more sophisticated
scenarios, where knowledge can be applied. This is often absent in education,
where almost all the effort is put into knowledge acquisition. It is easy to
see why – it’s hard and time-consuming. Bots can set up problems, prompt
through a process, provide feedback and assess effort. Scenarios often involve
other people; this is where surrogate bots can come in.

9. Practice bots

Practice bots literally take the role of a customer, patient, learner or any
other person, and allow learners to practice their customer care, support,
healthcare or other soft skills on a responding person (the bot). Bots that act
as revision bots for exams are also possible.

A bot that mimics someone can be used for practice. For example, the boy with
attitude ‘Eli’, developed by Penn State, mimics an awkward child in the
classroom. It is used by student teachers to practice their skills in dealing
with such problems before they hit the classroom. Duolingo uses bots, after you
have gathered an adequate vocabulary, knowledge of grammar and basic
competence, to allow practice in a language. This surely makes sense.

10. Wellbeing bots

If a bot is being used in any therapeutic context, its
anonymity can be an advantage. From Eliza in the 60s to contemporary
therapeutic bots, this has been a rich vein of bot development. There is an
example of the word ‘suicidal’ appearing in a student messenger dialogue that
led to a fast intervention, as the student was in real distress. Anonymity, in
itself, is an advantage in such bots, as the learner may not want to expose
their failings.

Therapeutic bots such as ‘Elli’ and ‘Woebot’ are already being subjected to
controlled trials to examine their impact on clinical outcomes.

Bot warning

The holy grail in AI is to find generic algorithms that can be used (especially
in machine learning) to solve a range of different problems across a number of
different domains. This is starting to happen with deep learning. The idea is
that a teacher bot would replace the skills of a teacher – not just able to
tutor in one subject alone, but a cross-curricular teacher, especially at the
higher levels of learning. It could be cross-departmental, cross-subject and
cross-cultural, producing teaching and learning free from the tyranny of the
institution, department, subject or culture in which it is bound. Let’s be
clear, this will not happen any time soon. AI is nowhere near solving the
complex problems that this entails. If someone is promising that a bot will
replace a teacher – show them the door. Bots will augment, not automate,
teaching.

We have to be careful about overreach here. Effective bots are not easy to
build and have to be ‘trained’ (in AI-speak, ‘unsupervised’). On the other
hand, bots trained on good data sets (in AI-speak, ‘supervised’), in specific
domains, are eminently possible. Another warning: they are on a collision
course with traditional Learning Management Systems, as they usually need a
dynamic server-side infrastructure. As for SCORM – the sooner it’s binned the
better. Bots fit more naturally into the xAPI landscape.
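For illustration, an xAPI statement has an actor-verb-object shape that a bot can emit each time a learner interacts; the names and IDs below are invented.

```python
# A minimal xAPI statement as a Python dict. Actor, object name and IDs
# are made up for illustration; the verb IRI is from the ADL verb registry.
statement = {
    "actor": {
        "name": "Jane Learner",
        "mbox": "mailto:jane@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-GB": "answered"},
    },
    "object": {
        "id": "http://example.com/bots/faq/timetable-question",
        "definition": {"name": {"en-GB": "Timetable FAQ"}},
    },
}
# In practice the bot would POST this as JSON to a Learning Record Store.
```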

Conclusion

Chatbots have real potential in a number of learning activities, all along the
learning journey – not as a general ‘teacher’ but in specific applications
within specific domains. They need to be trained, built, tested and improved,
which is no easy task, but their efficacy in reducing the workload of teachers,
trainers, lecturers and administrators is clear. The dramatic advances in
Natural Language Processing have given us Siri, Amazon Echo and Google Home. It
is a rapidly developing field of AI and promises to deliver chatbot technology
that is better and cheaper by the month.

As a bot does not have the limitations of a human – forgetting, poor recall,
cognitive bias, cognitive overload, getting ill, sleeping eight hours a day,
retiring and dying – once it is on the way to acquiring even limited skills, it
will only get better and better. The more students that use its service, the
better it gets, not only at what it teaches but at how it teaches. Courses will
be fine-tuned to eliminate weaknesses and finessed to produce better outcomes.

We have seen online behaviour move from flat page-turning (websites) to posting
(Facebook, Twitter) to messaging (texting, Messenger). As interfaces (using AI)
have become more frictionless and invisible, conforming to our natural form of
communication – dialogue, through text or speech – the web has become more
natural and human.

Learning takes effort. Personalised dialogue reframes learning as an
exploratory, yet still structured, process where the teacher guides and the
learner has to make the effort. Taking the friction and cognitive load of the
interface out of the equation means the teacher and learner can focus on the
task and effort needed to acquire knowledge and skills. This is the promise of
bots. But the process of adoption will be gradual.

Finally, this at last is a form of technology that teachers can appreciate, as
it truly tries to improve on what they already do. It takes good teaching as
its standard and tries to support and streamline it, to produce faster and
better outcomes at a lower cost. It takes the admin and pain out of teaching.
Bots are here, and more are coming.

Thursday, December 14, 2017

7 solid reasons to suppose that chatbot interfaces will work in learning

In Raphael’s painting The School of Athens, various luminaries stand or sit in
poses on the steps, but look to the left of Plato and Aristotle and you’ll see
a poor-looking figure in a green robe talking to people – that’s Socrates. Most
technology in teaching has run against the Socratic grain – the blackboard, for
instance, turning teachers into preachers and lecturers. With chatbots we may
be seeing the return of the Socratic method.

This return is being enabled by AI, in particular Natural Language Processing,
but also by other AI techniques such as adaptive learning, machine learning and
reinforcement learning. AI is largely invisible, but it does have to reveal
itself through its user interface. AI is the new UI, but because the AI is
doing a lot of the smart, behind-the-scenes work, it is best fronted by a
simple interface – the simpler the better. The messenger interface seems to
have won the interface wars, transcending menus and even social media. Simple
Socratic dialogue seems to have risen, through a process of natural selection,
as THE interface of choice, especially on mobile.

So can this combination of AI and Socratic UI have an
application in learning? There are several reasons for being positive about
this type of interface in learning.

1. Messaging the new interface

We know that messaging, the interface used by chatbots, has overtaken social
media over the last few years, especially among the young. Look at the mobile
home screen of any young person and you’ll see the dominance of chat apps. The
Darwinian world of the internet is the perfect testing ground for user
interfaces, and messaging is what you are most likely to see when looking over
the shoulder of a young person.

So one could argue that for younger audiences, chatbots are
particularly appropriate, as they already use this as their main form of
communication. They have certainly led the way in its use but one could also
argue that there are plenty of reasons to suppose that most other people like
this form of interface.

2. Frictionless

Easy to use, it allows you to focus on the message, not the medium. The world
has drifted towards messaging for the simple reason that it is simple. By
reducing the interface to its bare essentials, the learner can focus on the
more important tasks of communication and learning. All interfaces aim to be as
frictionless as possible and, apart from speculative mind-reading from the
likes of Elon Musk with Neuralink, this is as bare-bones as one can get.

3. Reduces cognitive load

Messaging is simple: a radically stripped-down interface that anyone can use.
It requires almost no learning and mimics what we all do in real life – simply
dialogue. Compared to any other interface it is low on cognitive load. There is
little other than a single field into which you type, so it goes at your pace.
What also matters is the degree to which it makes use of NLP (Natural Language
Processing) to really understand what you type (or say).

4. Chunking

One of the joys of messaging, and one of the reasons for its success, is that
it is succinct. It is by its very nature chunked. If it were not, it wouldn’t
work. Imagine being on a flight with someone: you ask them a question and get a
one-hour lecture in return. Chatbots chat, they don’t talk at you.

5. Media equation

In a most likely apocryphal story, when Steve Jobs presented the Apple Mac
screen to Steve Wozniak, Jobs had programmed it to say ‘Hello…’. Wozniak
thought it unnecessary – but who was right? We want our technology to be
friendly, easy to use, almost our companion. This is as true in learning as it
is in any other area of human endeavour.

Nass & Reeves, in The Media Equation, did 35 studies to show that we attribute
agency to technology, especially computers. We anthropomorphise technology in
such a way that we think the bot is human, or at least exhibits human
attributes. Our faculty of imagination finds this easy, as witnessed by our
ready ability to suspend disbelief in the movies or when watching TV. It takes
seconds and works in our favour with chatbots, as dialogue is a natural form of
human behaviour and communication.

6. Anonymity

If you have qualms about chat replacing human activity, remember that many
learners are reluctant to ask their tutor, lecturer, manager or boss questions,
for fear of embarrassment, as it may reveal their lack of knowledge. Others are
simply quiet, even introverts. Anonymous learning, through a chatbot, then
becomes a virtue, not a vice. Wellbeing bots may also want to preserve
anonymity. In this sense, chatbots may be superior to live, human teachers and
bosses. Time and time again we see how technology is preferred to human contact
– ATMs, online retail and so on. In learning, in some circumstances, we also
witness this phenomenon.

7. Audio possible

The brain is a social organ; it likes to receive material in chunks and to
interact when learning. We are social apes, grammatical geniuses at age three,
and we learn to listen and speak long before we learn to read and write (which
take years). Chatbots such as Siri and Alexa already exist and, with the
addition of text-to-speech and speech-to-text, turn chat into the exchange of
speech. Reading and writing are replaced by listening and speaking.

Conclusion

Of course, one must be careful here, as chatbots have real limitations. They
work best in narrow domains, with a clear purpose. Their ability to deliver
full-blown, sustained dialogue is limited. Nevertheless, they can deliver
learning functions right across the learning journey: onboarding, learner
engagement, learner support, mentoring, teaching, assessment, practice and
wellbeing.

Chatbot interfaces can be fully scripted, using no natural language processing
at all, or they can use varying levels of NLP to allow for variations in input.
At the simplest level this copes with synonyms and different word orders.
Larger services from the big players, such as IBM and Microsoft, offer much
more naturalistic interfaces. Whatever your choice, regard the dialogue
interface as something separate.
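A minimal sketch of that simplest level, with an invented synonym table: inputs are normalised so that synonyms and word order no longer matter before matching against scripted intents.

```python
# The simplest level of 'NLP': normalise synonyms, ignore word order,
# then match against scripted intents. All tables below are invented.
SYNONYMS = {"timetable": "schedule", "tutor": "teacher", "course": "class"}

INTENTS = {
    frozenset({"where", "schedule"}): "show_schedule",
    frozenset({"contact", "teacher"}): "contact_teacher",
}

def normalise(text):
    """Lowercase, strip question marks, map synonyms, drop word order."""
    words = text.lower().replace("?", "").split()
    return frozenset(SYNONYMS.get(w, w) for w in words)

def match(text):
    user = normalise(text)
    for pattern, intent in INTENTS.items():
        if pattern <= user:  # pattern words present; order is irrelevant
            return intent
    return "fallback"

print(match("Where is my timetable?"))  # show_schedule
print(match("Timetable - where?"))      # show_schedule, despite reordering
```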