Saturday, 8 December 2018

At OEB2018 the last session I led was on AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on the subject of philosophy and AI.

As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes follow below; afterwards I embed the slides that preceded the session, to indicate where the idea for the workshop came from.

Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy

The session started off with the choices embedded in any AI, e.g. a Tesla car running into people: will it run into a grandmother or into two kids? What is the ‘best solution’? Further into the session this question got additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters; and we could build into the car that, in case of emergency, it needs to decide that the lives of others are more important than the lives of those in the car, and as such simply crash into a wall, avoiding both the grandmother and the kids.

The developer or creator gives parameters to the AI; with machine learning embedded, the AI will start to learn from there, based on feedback from or directed to the parameters. This is in contrast with computer-based learning, where rules are given and are either successful or not, but are no basis for new rules to be implemented.

From a philosophical point of view, the impact of AI (including its potential bias coming from the developers or the feedback received) could be analysed using Hannah Arendt’s ‘Power of the System’; in her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of the AI system.

The growth of the AI based on human algorithms does not necessarily mean that the AI will think like us. It might derive different conclusions, based on priority algorithms it chooses. As such, current paradigms may shift.

Throughout the ages, the focus of humankind has changed depending on new developments, new thoughts, new insights in philosophy. This means that if humans put parameters into AI, those parameters (which are seen as priority parameters) will also change over time. We can see from where AI starts, but not where it is heading.

How many ‘safety stops’ are built into AI?

Can we put some kind of ‘weighing’ into the AI parameters, enabling the AI to fall back on more important or less important parameters when a risk needs to be considered?

As humans, we can grow from our failures. AI also learns from ‘failures’, but the AI learns from differences in data points. At present the AI only receives the message ‘this is wrong’, whereas at that moment in time – if something is wrong – humans make a wide variety of risk considerations. In the bigger picture, one can see an analogy with Darwin’s evolutionary theory, where time finds what works based on evolutionary diversity. But with AI the speed of adaptation increases immensely.

With mechanical AI it was easier to define which parameters were right or wrong. E.g. with Go or chess you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing among those options is a straight path of considerations. At present humans make many more considerations for each conundrum or action that occurs. This means there is a whole array of considerations that can also involve emotions, preferences… Looking at philosophy, you can see there is an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes gives good solutions for both: workable solutions which can be debated as valuable outcomes, although based on different priorities and even very different takes on a concept. The choices or arguments made in philosophy (over time) also clearly point to the power of society, technology and reigning culture at that point in time. For what is good now in one place can be considered wrong in another place, or at another point in time.

It could benefit teachers if they were supported with AI to signal students with problems. (But of course this means that ‘care’ is one of the parameters important for society; in another society it could simply be that students who have problems are set aside. Either choice is valid, but it builds on different views on whether we care in a ‘supporting all’ way or in a ‘support those who can, so we can move forward quicker’ way. It is only human emotion that makes the difference in which choice might be the ‘better’ one.)

AI works in the virtual world. Always. Humans make a difference between the real and the virtual world, but for the AI all is real (though virtual to us).

Asimov’s laws of robotics still apply.

Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.

Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes the learning from there itself. The machine learns; as such the machine becomes liable if something goes wrong, but…? (e.g. the Tesla crash).

Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or health care worker provides it with additional care and attention to the patient).

Similarly, smart object dialogue took off once a feeling of attention was built into it: e.g. replies from Google Home or Alexa in the realm of “Thank you” when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also refers to the difference between feeling ‘related to’ something and being ‘attached to’ something.

Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.

AI is still defined between good and bad (ethics), and ‘improvement’, which is linked to the definition of what is ‘best’ at that time.

Societal decisions: what do we develop first with AI – the refugee crisis or self-driving cars? This affects the parameters at the start. Compare it to some idiot savants, where high intelligence does not necessarily imply active consciousness.

Currently some humans are already bound by AI: e.g. astronauts, where the system calculates everything.

And to conclude: this session ranged from the believers in AI (“I cannot wait for AI to organise our society”) to those who think it is time for the next step in evolution, in the words of Jane Bozarth: “Humans had their chance”.

The Khan Academy system is a proven system, with one of the best visualisations of how students are advancing, with a lot of stats and graphs. Carlos used this approach for their 0 courses (courses on basic knowledge that students must know before moving on in higher ed).

Based on the Khan stats, they built a high-level analytics system.

Predictions in MOOCs (see the paper by Kloos), focusing on drop-out.

Monitoring in SPOCs (small private online courses).

Measurement of the real workload of the students; the tool adapts the workload to the reality.

FlipApp (to gamify the flipped classroom): remembers and notifies the students that they need to see the videos before class, or they will not be able to follow. (Inge: sent to Barbara.)

Creation of educational material using Google Classroom. Google Classroom sometimes knows what the answer of a quiz will be, which can save time for the teacher.

Learning analytics to improve teacher content delivery.

Use of IRT (Item Response Theory) to see which quizzes are more useful and effective; interesting for selecting quizzes.

Coursera defines skills, matches them to jobs, and based on that recommends courses.

Industry 4.0 (big data, AI…) for industry can be transferred to Education 4.0 (learning analytics based on machine learning). (Education 3.0 is using the cloud, where both learners and teachers go.)

Machine learning infers the rules from answers that are data-analysed (in contrast to computer learning, which is just the opposite: based on rules, it gives answers).
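A minimal sketch of that contrast, using a toy pass/fail example; the data and the threshold-search are purely illustrative stand-ins for a real learning algorithm:

```python
# Classic ("computer learning") approach: a hand-written rule gives the answers.
def passes_hardcoded(score):
    return score >= 50  # the rule is fixed by the developer

# Machine-learning approach: infer the rule from data that already contains the answers.
def learn_threshold(examples):
    """Given (score, passed) pairs, find the cut-off that best separates them.
    A deliberately tiny stand-in for a real learning algorithm."""
    candidates = sorted({score for score, _ in examples})
    def errors(t):
        return sum((score >= t) != passed for score, passed in examples)
    return min(candidates, key=errors)

# Hypothetical labelled data: the answers are given, the rule is not.
data = [(20, False), (35, False), (48, False), (55, True), (62, True), (90, True)]
threshold = learn_threshold(data)
print(threshold)  # prints 55: the rule was inferred from the answers
```

The first function is "rules in, answers out"; the second is "answers in, rules out", which is the inversion the paragraph above describes.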

The Chinese social credit system deducts points if you do something that is seen as not being ‘proper’. Also combined with facial recognition, and monitoring attention in class (Hangzhou Number 11 High School).

Learning analytics: Siemens’ (2011) definition is still the norm. But nowadays it is a lot about analytics, and only a little about learning.

Trust: currently we believe that something is reliable, the truth, or an ability. There are multiple definitions of trust; it is a multidimensional and multidisciplinary construct. Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding. For Luhmann the concept of trust compensates for insufficient capabilities for fully understanding the complexity of the world (Luhmann, 1979, Trust and …).

For these reasons we must be transparent, reliable, and act with integrity to attract the trust of learners. There should not be a black box; it should be a transparent box with algorithms (transparent indicators, open algorithms, full access to data, knowing who accesses your data).

User involvement and co-creation: see the competen-SEA project (http://competen-sea.eu), capacity-building projects for remote areas or sensitive learner groups. One of the outcomes was to co-design MOOCs (and trust), getting all the stakeholders together in order to come to an end product. MOOCs for people, by people. Twitter: #competenSEA

This was a wonderful AI session, with knowledgeable speakers, which is always a pleasure. Some of the speakers showed their AI solutions, and described their process; others focused on the opportunities and challenges. Some great links as well.

Knowledge graph + knowledge space theory: monitoring students’ real-time learning progress to evaluate student knowledge mastery and predict future learning skills, based on a Bayesian network plus Bayesian inference, knowledge tracing and Item Response Theory. The system identifies the knowledge of the student based on their intake or tests. Based on big data analysis the students get a tailored learning path (personalised content recommendation using fuzzy logic and classification trees; personalised paths based on logistic regression, graph theory and a genetic algorithm). Adaptive learning based on a Bayesian network, plus Bayesian inference, plus Bayesian knowledge tracing, plus IRT, to precisely determine students’ current knowledge state and needs.
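The Bayesian knowledge tracing mentioned here follows a well-known update rule: the mastery estimate is revised after each answer using slip and guess probabilities, then bumped by a learning probability. A small sketch with illustrative parameter values (real systems fit these per skill):

```python
def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.3):
    """One step of Bayesian Knowledge Tracing.

    p_known : prior probability the student has mastered the skill
    correct : whether the observed answer was right
    slip    : P(wrong answer | skill known)
    guess   : P(right answer | skill not known)
    learn   : P(skill becomes known during this practice opportunity)
    """
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    # Account for the chance the student learned the skill at this step.
    return posterior + (1 - posterior) * learn

# A student starts with low estimated mastery and answers three items correctly.
p = 0.1
for answer in (True, True, True):
    p = bkt_update(p, answer)
print(round(p, 3))  # estimated mastery rises sharply after three correct answers
```

This is the "precisely determine students' current knowledge state" step in miniature: the running estimate `p` is what an adaptive system would consult before choosing the next item.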

Some experiments and results: the fourth Human versus AI competition, which resulted in the AI being quicker and more adept at scoring students’ tests. Artificial Intelligence in Education conference (AIED18 conference link, look up the video on youtube.com; call for papers deadline 8 February 2019 for AIED19 here).

How we differ: adaptive learning adapts to the individual, only shows content when it is necessary, takes into consideration what the student already knows, and follows up on what the student is having trouble with. This reduces learning time and increases motivation. Impact from adaptive learning: almost 50% reduction of learning time.

Supports the conscious competence concept.

AI is 60% of the platform, but the most important part is the human being: the learning engineers, the team of humans who work together, make it possible.

Tumo is a learning platform where students direct their own development. It is an after-school programme, two hours twice a week, and thousands of students come to the TUMO centre. In Armenia and Paris, and Beirut.

14 learning targets ranging from animation, to writing, to robotics, game development…

Main education is based on self-learning, workshops and learning labs.

Coaches support the students and are present in all the workshops and learning labs.

Personalisation: each student chooses their learning plan, their topics, their speed. That happens through the ‘Tumo path’, an interface which enables a personalised learning path (cf. LMS learning paths, but personalised in terms of the speed and choices of the students). After the self-paced parts, the students can go to a workshop to reach their maximum potential, to learn and know they can explore and learn. These are advanced students (12–18 years, free of charge).

Harnessing the power of AI: the AI solves a lot of problems, as well as providing the freedom to personalise the student’s learning experience. A virtual assistant will be written to help the coaches guide the student through the system.

An AI guide dog: a mascot to help the students.

The coaches, assistants… are there to teach the students to take up more responsibility.

For those learners who are not that quick, a dynamic content aspect is planned to support their learning.

A lot of work has been done around ethics in data. But there are also the algorithms that tweak the data outcomes: how do we prevent biases, guard against mistakes, protect against unintended consequences…?

But what about education: self-fulfilling teacher wishes…

So how do we merge algorithms, big data and education?

With great power comes great responsibility (Spider-Man, 1962, or the French Revolution National Convention, 1793).

An ATS tool built by Facebook, but the students went on strike (look this up).

(Look up papers; starting to get tired, although the presentation is really interesting.)

What is the impact of AI on our children is her main research consideration. How does the interaction between children and smart agents work? And what do we have to do to avoid biases while children are using AI agents?

At present AI biases infiltrate the world as we know it, but can we transform this towards fewer biases?

(The liveblog starts after a general paragraph on the two keynotes that preceded her talk; her talk was really GREAT! And with a fresh, relevant structure.)

First off, a talk on the skill sets of future workers (the new skills needed, referring to critical thinking, but not mentioning what is understood by critical thinking) and collective intelligence (but clearly linking it to big data, not small data, as well described in an article by Stella Lee).

A self-worth idea for the philosophy session: refer to the Google Maps approach, where small companies that offered one particular aspect of what it took to build Google Maps were bought by Google, thus producing something that was bigger than the sum of its parts. But this of course means that identity and the self-versus-the-other come under pressure, as people who really make a difference at some point do not have the satisfying moment of thinking they are on top of the world (you can no longer show off your quality easily… for there are so many others just like you, as you can see when you read the news or follow people online). Feeling important was easier, or possible, in a ‘smaller’ world, where the local tech person was revered for her or his knowledge. So, in some way we are losing the feeling of being special based on what we do. Additionally, if AI enters more of the working world, how do we ensure that work will be there for everyone, as work is also a way to ‘feel’ self-worth? I think keeping self-worth will be an increasing challenge in the connected, AI-supported world. As a self-test, simply think of yourself wanting to be invited onto a stage… it is a simple yet possibly mentally alarming exercise. Our society is promoting ‘being the best’ at something, or having the most ‘likes’; what can we do to install or keep self-worth?

Then a speaker on the promise of online education, referring to MOOCs versus formal education and the increase in young people going to college… which strangely contradicts what most profiles of future jobs seem to be like (professions that are rather labour-intensive). The speaker, Kaplan, managed to knock down people who get into good jobs based on non-traditional schooling (obviously, my eyebrows went up, and I am sure there were more of us in the audience pondering which conservative-thinking label can be put on that type of scolding, stereotyped speech, protecting the norm; he is clearly not even a non-conformist).

Here a person in the line of my interest takes the stage: Anita Schjoll Brede. Anita founded the AI company Iris.ai, and tries to simplify AI, machine learning and data science for easier implementation. So… of interest.

Learning how to learn sets us human beings apart. We are in the era where machines will learn, based on how we learn… inevitably changing what we need to learn.

She presents how AI is seen by most people, and where that model is not really correct.

Machine learning is based on the workings of a human brain. Over time the machine will adapt based on the data, and it will learn new skills. It is a great model to see the difference. One caveat: we are still not sure how the human mind really works.

If we think of AI, we think of software, hardware, data… but our brains are slightly different, and our human brains are also flawed. We want to build machines that are complementary to the human brain.

Iris.ai started with the idea that, with new papers and new research published every day, humans can no longer read it all. Iris.ai goes through the science, and the literature process is relatively automated; the process currently yields a time decrease of 80%. The next step is hypothesis extraction, then building a truth tree of the document based on scientific arguments. Once the truth trees are done, link them to a lab or specific topic, … with an option of the machine learning results leading to different types of research. Human beings will still do the deeper understanding.

Another example is one tutor per child. Imagine that there is one tutor for that child, which grows with that child and helps with lifelong learning. The system will know you so well that it will know how to motivate you, or move you forward. It might also have filters to identify discriminatory feelings or actions (remark of my own: but I do wonder, if this is the case, isn’t this limiting the freedom to say what you want and be the person you want to be… it might risk becoming extreme in either direction of the doctrine system).

She refers to the Watson Lawyer AI, which means that junior lawyers will no longer do all the groundwork. So new employees will have to learn other things, and be integrated differently. But this raises critical questions, of course: you must choose between employing people (but making yourself less competitive) or only hiring senior lawyers (remark of my own: but then you lose diversity and workforce).

She refers to doctors built by machine learning, used in sub-Saharan settings to analyse human blood for malaria. This saves time for the doctors and health care workers… but evidently, it has an impact on health care worker jobs.

Critical thinking: she refers to the source criticism she learned during her schooling.

Who builds the AI? Let’s say Google achieves the first general AI… their business model will still get us to buy more soap.

Complex problem solving: we need to hold this uncertainty and have that understanding, to understand why machines were led to specific choices.

Creativity: machines can be creative; we can teach them this. Rehashing what has been done, and making it into something of your own, is something machines can already do (she refers to a lawyer commercial that was built by AI based on hours of legal commercials).

Empathy is at the core of human capabilities like this. Machines are currently doing things, but are not yet empathic. Yet empathy is also important for building machines that can result in positive evolutions for humans: if we can, we should support machines that will be able to love the world, including humans.

Wednesday, 5 December 2018

After being physically out of the learning circuit for about a year and a half, it is really nice to get active again. And what better venue to rekindle professional interests than at Online Educa Berlin.

Yesterday I led a workshop on using an ID instrument I call the Instructional Design Variation matrix (IDVmatrix). It is an instrument to reflect on the learning architecture (including tools and approaches) that you are currently using, to see whether these tools enable you to build a more contextualized or standardized type of learning (the list organises learning tools according to 5 parameters: informal - formal, simple - complex, free - expensive, standardized - contextualized, and more aimed at individual learning - social learning). The documents of the workshop can be seen here.

The workshop started off with an activity called 'winning a workshop survival bag', where the attendees could win a bag with cookies, nuts, and of course the template and lists of the IDVmatrix.
We then proceeded to give a bit of background on the activity, and how it related to the IDVmatrix.
Afterwards we focused on learning cases, and particularly on challenges that the participants of the workshop were facing.
And we ended up trying to find solutions for these cases, sharing information, connections, ideas (have a look at this engaging crowd - movie recorded during the session).
The workshop was using elements from location-based learning, networking, mobile learning, machine learning, just-in-time learning, social learning, social media, multimedia, note taking, and a bit of gamification.

It was a wonderful crowd, so everyone went away with ideas. The networking part went very well also due to the icebreaker activity at the beginning. This was the icebreaker:

The WorkShop survival bag challenge!
Four actions, 1 bag for each team!

Action 1
Which person of your group has the longest first name? Write down that name in the first box below.

Action 2
Choose two persons prior to this challenge: a person who will record a short (approx. 6 seconds) video with their phone and tweet it, and a person (or persons) who will talk in that video.
Record a 6-second video which includes a booth at the OEB exhibition (shown in the background) and during which a person gives a short reason why this particular learning solution (the one represented by the booth) would be of use to that person's learning environment (either personal or professional).
Once you have recorded the video, share it on Twitter using the following hashtags: #OEB #M5 #teamX (with X being the number of your team, e.g. #team1). This share is necessary to get the next word of your WS survival bag challenge.
Once you upload the movie, you will get a response tweet on #OEB #M5 #teamX (again with the number of your team).
Write down the word you received in response to your video in the second box below.

Action 3
Go to the room which is shown in the 360° picture on Twitter (see #M5 #OEBAllTeams).
Find the spot where 5 pages are lined up, each of them with a sign in another language written on it.
Each team has to ‘translate’ the sign assigned to their team. You can use the Google Translate app for this (see Google Play, the app is free!).
Write down the translation in the third box below.

Action 4
Say the following words into the Google Home device which is located in the WS room:
“OK Google: say word box 1, say word box 2, say word box 3”
If Google answers, you will get your WS survival bag!

And although the names were not always very English, with a bit of tweaking using the IFTTT app all the teams were able to get the Google Home Mini to congratulate them for getting all the challenges right.

Free MOOC report

The report gives an overview of the goals of the project, the methodology, and finishes with the practical recommendations for using online courses to enhance access and progression into higher education and the employment market (for refugees).

MOONLITE multiplier event (part of a EU Erasmus+ project)

The MOONLITE event supports learning without borders; practically, it harnesses the potential of MOOCs for refugees and migrants to build their language competences and entrepreneurial skills for employment, higher education, and social inclusion.

There are bursaries to help cover your travel expenses which you can apply for at the venue!

The Higher Education landscape is changing. As the information economy progresses, demand for a more highly, and differently, qualified workforce increases, and HE Institutions face the challenge of reskilling and upskilling people throughout their lives. The corporate and NGO sectors are themselves exploring the benefits of a more qualified online approach to training, and are entering the education market in collaboration with HE Institutions, but also autonomously or via new certifying agencies. Technology is the other significant player in this scenario. It allows for new, data-driven ways of measuring learning outcomes, new curriculum structures and alternative forms of recruitment strategy via people analytics.

MOOCs represent the crossroads where the three converge. Come to EMOOCs 2019 and explore the impact and future direction of open, online education on a social, political and institutional level.

The EMOOCs summit has four tracks: research, business, policy and experience.

At the MOOC crossroads: where academia and business converge

The Higher Education landscape is changing. As the information economy progresses, demand for a more highly, and differently, qualified workforce and citizens increases, and HE Institutions face the challenge of training, reskilling and upskilling people throughout their lives, rather than providing a one-time in-depth education. The corporate and NGO sectors are themselves exploring the benefits of a more qualified online approach to training, and are entering the education market in collaboration with HE Institutions, but also autonomously or via new certifying agencies. Technology is the other significant player in this fast-changing scenario. It allows for new, data-driven ways of measuring learning outcomes, new forms of curriculum definition and compilation, and alternative forms of recruitment strategy via people analytics.

At the MOOC crossroads where the three converge, we ask ourselves whether university degrees are still the major currency in the job market, or whether a broader portfolio of qualifications and micro-credentials may be emerging as an alternative. What implications does this have for educational practice? What policy decisions are required? And as online access eliminates geographical barriers to learning, but the growing MOOC market is increasingly dominated by the big American platforms, what strategic policy do European HE Institutions wish to adopt in terms of branding, language and culture?

The EMOOCs 2019 MOOC stakeholders summit comprises the consolidated four-track format of Research, Experience, Policy and Business, and will feature keynote speakers, round table and panel sessions as well as individual presentations in each track. The aim is for decision-makers and practitioners to explore innovative and emerging trends in online education delivery, and the strategic policy that supports them. Original contributions that share knowledge and carry forward the debate around MOOCs are very welcome.

The number of HE institutions involved in MOOCs, and the numbers of courses and enrolled students, have increased exponentially in recent years, both in Europe and beyond. One of the results of this growing MOOC movement is an increasing body of research evidence that positions itself within the established research communities in technology enhanced learning, open education and distance learning. Key trends that are accelerating HE technology adoption are blended learning design and collaborative learning, as well as a growing focus on measuring learning and redesigning learning spaces and, in the long term, deeper learning approaches and cultures of innovation.

This track welcomes high-level papers supported by empirical evidence to provide a rigorous theoretical backdrop to the more practical approaches described in the experience track, and particularly invites contributions in the area of these key trends.

When submitting your paper, please indicate type of paper and track in the submission process.

Proceedings

The Work-in-Progress proceedings will be submitted to CEUR-WS.org for online publication. Outstanding short papers may be included in the Springer Proceedings.

Important dates:
25 February 2019: Short Paper submissions for Research Track
25 March 2019: Notification of acceptance/rejection
29 April 2019: Camera-ready versions for online Proceedings with ISBN and copyright form

Thursday, 18 October 2018

In the past year I have been adding some Instructional Design descriptions to my notebook. After a while I realized that something useful could come out of this very varied collection, so now I am putting some of these pages online (the Instructional Design Variation matrix or IDVmatrix). The idea is to grow a compendium of these pages, adding parameters that are meaningful in ID to each of those learning/teaching design elements, and eventually use these parameters as a matrix on the job. I will only write them here, and add the #IDVmatrix hashtag for easy recall once these pages grow. The reason behind these pages is to create a contemporary overview of Instructional Design options that are out there, and to build an instrument that allows you to quickly screen whether other ID options can be used that reflect the same parameters you are looking for (taking into account your target learning population). The collection will have standard ID tools (e.g. authoring tools, LMS, MOOCs...) as well as more contemporary learning and teaching tools (e.g. chatbots, machine learning, ...). The template I will follow is simple: a short description (as brief as possible while allowing the main features to be addressed), a segment on who uses it and how (of course not exhaustive), referring to some examples, important features to keep in mind, and finally a matrix stamp (taking into account the 5 parameters I think are relevant to structuring educational tools). And I will try to add some meaningful, possibly EdTech-critical pictures as a bonus. First one, a classic: the LMS.

Learning Management System (LMS)

Learning Management Systems (LMS, also related to Content Management or Course Management Systems) come in many variations, but generally they offer a digital environment to facilitate, support and design online or blended instruction. An LMS offers content structuring options (put specific modules online, sometimes integrate a learning path into those courses), quiz options (including a question database with a variety of quiz types), and communication services between the learners, the facilitators, the course managers ... or all of the learning stakeholders.

The LMS is pre-programmed. In some cases this means the complete system is programmed (e.g. Blackboard, WIZiq), and you - as a course provider - can only customize specific features; in other cases you can customize a big part of the system (due to open source code), including some programming that you do yourself (e.g. Drupal, Moodle). Some smaller LMSs offer a more specialized and valuable option, e.g. Curatr, which emphasizes the social learning factor. Some LMSs also include course libraries, or you - the institute - can build an open, LMS-supported library to offer support to your learners.

Normally these systems are self-contained, but with options to integrate other tools to align the LMS with contemporary learning realities (e.g. integrate Instagram or Twitter). Although some LMSs are free, you need to consider the cost of server space, programming some features, supporting all users, and keeping the system up and running 24/7.

Who uses it: learners, teachers, trainers, course coordinators, ... each on their own level. Normally user rights can be allocated within the LMS. Depending on the role, the LMS will offer a different experience (back-end mostly for course-delivery people, and front-end for the learner).

Important features to keep in mind while choosing an LMS: security features are very important, as an LMS generates a lot of learner data and communications traffic. A mobile app is a must; test it on multiple devices to estimate the quality of the app. Offline features will make life much easier for learners. SCORM options make life easier for any instructional designer, and xAPI features will allow the educators/facilitators to make meaningful analyses from all the learner data.

IDVmatrix stamp

The call for papers below is for authors researching 'human learning and learning analytics in the age of artificial intelligence' and celebrates BJET's 50th anniversary. But first ... a call for co-authors to realize new books in the Rebus Introduction to Philosophy series.

Seeking Authors & Editors for Introduction to Philosophy Series

The Rebus Community initiative Introduction to Philosophy series has grown tremendously, and a few books are nearing the final stages! Led by Christina Hendricks (University of British Columbia), the series includes eight volumes in total, ranging across themes. We are currently seeking faculty interested in contributing to the series by authoring chapters in the following books:

- Epistemology
- Aesthetics
- Metaphysics
- Social and Political Philosophy
- Philosophy of Religion

Authors should have a PhD in philosophy and teaching experience at the first-year level. PhD students and candidates may also be considered as authors, or can contribute to the book in other ways. If you are interested, please let us know in Rebus Projects. Include your CV, a brief summary of your experience teaching an intro to philosophy course, and the chapters you would like to write.

We’re also looking for a co-editor for the Aesthetics book, and an editor for Philosophy of Science. If you’re interested in taking on one of these roles, read the full job posting and then comment in the activity on Rebus Projects, including some details about your experience and the area in which you are interested.

The editorial team encourages contributions from members of under-represented groups within the philosophy community. Decisions will be made by the team on a rolling basis.

Call for papers on the subject of Human learning and learning analytics in the age of artificial intelligence, a 50th anniversary edition of BJET

For its 50th anniversary, the British Journal of Educational Technology (BJET) invites you to contribute your most current research to BJET as a way to celebrate. Title of the special section: Human learning and learning analytics in the age of artificial intelligence (Critical perspectives on learning analytics and artificial intelligence in education).

Deadline for manuscript submissions: February 10th, 2019
Publication: online as soon as copy editing is complete
Acceptance deadline: 10th August 2019
Issue publication: November 2019
Guest editor: Andreja Istenič Starčič, Professor, University of Primorska & University of Ljubljana; Visiting scholar, University of North Texas. For all information, please contact: andreja.starcic@gmail.com

This special section focuses on human learning and learning analytics in the age of artificial intelligence across disciplines.

In May 2018, a working symposium entitled “The Human-Technology Frontier: Understanding the Human Intelligence 0.2 with Artificial Intelligence 2.0” was organized, sponsored by the Association for Educational Communications and Technology (AECT). Distinguished scholars, including learning scientists, psychologists, neuroscientists, computer scientists, and educators, addressed some urgent questions and issues on the learner as a whole person, with healthy development of the brain, habits, behaviour, and learning in the fast-advancing technological world. The symposium inspired these special issue topics (which are not limited to the following):

1. Learning and human intelligence: Based on what we know of the brain and what we are likely to understand in the near future, how should learning be defined/redefined?
2. Learning and innovation skills, the 4Cs - creativity, critical thinking, communication and collaboration: How could learning technologies support the transformative nature of learning involving all domains of learning - cognitive, psychomotor, affective-social? How could advanced feedback and scaffolds support the transition from “combinational” to “exploratory” and “transformational” creativity and thinking, and what are the potential consequences for communication and collaboration?
3. Towards a holistic account of a person - brain, body, habits, and environment: What would a learning and research design that embraces a whole-person perspective look like?
4. Human intelligence with innovations and advances of technologies: What technologies are most likely to have a positive impact on learning in the short and long term?
5. Properties and units of measures of learning: What are the constructs of learning and beliefs about learners and learners’ needs, given the multilevel technologies, collaborative networks, interaction and interface modalities, methodologies and analysis techniques we have to work with?
6. Learning perspectives: Do we face transitions in theories of learning?

In the past 50 years, BJET has been at the forefront, offering a platform and fostering discussions in the above areas. For the 50th anniversary of BJET, we invite interdisciplinary scholars to contribute their most current research to BJET as a way to celebrate the journal's anniversary.

Please send me the working title of your paper with a short abstract (if you include co-authors, please also provide names of all authors) to my e-mail andreja.starcic@gmail.com by November 30th, 2018.