conversations and learning in the digital world

Since we are talking about rhizomes, I thought tillage is also important. Tillage is the agricultural preparation of soil by mechanical agitation of various types, such as digging, stirring, and overturning. One advantage of tilling the soil is that it helps develop strong, healthy roots with better air circulation.

I am reading through blog posts that have sprung up after I posted this on the rhizo14 group on Facebook:

I find it ironic that people talk about their qualifications and research and their ability to read and understand critical theory when that is not the aim of this uncourse at all. As long as everyone “gets” the generic meaning of it, all is well and we progress as a community. How everyone reaches the end is immaterial. If you get the theory without reading it, you have cheated brilliantly.

Furthermore, I would like to assert my independence and state that I am not an academic and yet wish to be part of this uncourse. Does that make me “un-qualified” to take it up? If we are to question the very foundation of the education system and try to change it so as to include one and all in a whole big community, then it shouldn’t matter whether I am a PhD or a college dropout, should it? This is how a rhizome breaks.

Perhaps that was my way of unsettling the soil to make it healthy again for unrestrained growth.

Did I do it on purpose? No. Did I wish to make jabs at privileged people? No. Did I anticipate such an outbreak? No. Did I want to make people uncomfortable? Probably yes. Perhaps to make them think and take charge. It started a discussion between academics and non-academics, or as my frainger Ary calls them, pragmatists and theorists. It shook things up – the rhizomes multiplied and divided. It made some of us stop and take notice of our actions and behaviours as academics, non-academics, pragmatists, wannabe academics, recovering academics etc. It was an opening of sorts, making people stop and spend some time to self-assess and self re-mediate.

Dave told us in the week 2 hangout that “for rhizomatic learning to work, people need to feel like they are empowered and in control of their objectives. It’s not possible to tell someone to be independent.” He’s right: you cannot just make people empowered, you cannot hand them down their powers or tell them to be responsible. There can only be the right circumstances that make people feel empowered and responsible. You can only create such circumstances or situations (not exactly scaffolding, but something that makes them uncomfortable, perhaps) so as to make them take notice of their actions and behaviour, which will in turn start a process of self-introspection and self re-mediation.

Here’s another example from my personal life. I am an only child and both my parents worked. I was mostly left to my own devices and gained a lot of life experience from being alone and not relying on my parents during that time. It made me independent in that I had to take some everyday decisions without consulting my parents (this was before cell phones were invented). Did my parents actively want to make me independent? Is that why they both worked and left me in the care of my grandparents? No. It was only the circumstances. They made me independent at a very young age.

We cannot ignore the hierarchies in the educational system, or any other system for that matter. One way to feel independent, or to assert your independence, is to take charge and break out of the mould, and you can only do that when you are uncomfortable or in a situation that demands that you stand out and voice your opinion.

Someone posted this in the G+ rhizo community:

“You can’t connect the dots looking forward you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something: your gut, destiny, life, karma, whatever. Because believing that the dots will connect down the road will give you the confidence to follow your heart, even when it leads you off the well worn path.” — Steve Jobs

“Every successful movement throughout human history – whether political, religious or social – has been shaped and driven by a powerful narrative, one that invites participation by many and makes it clear that the outcome hinges on that participation.” — John Hagel

If I try to connect the dots now, I can see how my post enforced this independence on me and on others. It made people uncomfortable – some agreed with it, some chose to ignore it. It fostered a whole new rhizomatic network, with people linking it to feminism, victimisation, cultural differences etc.

Does that achieve my goal for this week? Enforce independence? Take responsibility, self-assess and self re-mediate? Yes… I think so! A small ripple…

Week 1 Challenge – Use cheating as a weapon. How can you use the idea of cheating as a tool to take apart the structures that you work in? What does it say about learning? About power? About how you see teaching? Bonus – Do lots of rhizomatic teaching? Tell us about it.

My rhizomes were all connected in my head, and then with this cheating I felt a push that made them all spill over the floor like noodles, crawling in different directions.

Image source: Wikipedia

I am still uncomfortable with the word “cheating” in the context of learning. Who exactly are we trying to cheat – the educational institutions, their laws and structures, hierarchies or just ourselves? After listening to Dave’s explanation, I do get it but I don’t like it.

I read a lot of blogs, comments and tweets and still can’t get my head around the idea of cheating as learning. Maybe cheating is not the right word – do we mean collaboration, negotiation, exchange of ideas, bending rules, finding leeway, permitted divergence, margin, space, latitude to look at things and find answers in a different and creative way while at the same time conforming to the rules?

All unconnected thoughts going haywire.

For some reason the idea of cheating then led me to recall the fable of the Blind Man and the Lame. The story goes like this: a blind man was walking down a bad road when he met a lame man and asked him for help. The lame man said that he was too weak and couldn’t possibly help him. But the lame man, though weak, could see, and the blind man was strong. The two overcame their disabilities by helping each other out: the blind man carried the lame man on his shoulders, and the lame man, though he couldn’t walk, showed the way. Together they made a perfect whole.

Image source: Wikimedia

By rendering their services to each other, they cheated their disabilities and achieved what would have been impossible individually. They cheated nature (physical disability) for good, thinking in a creative and imaginative way to achieve their goal. There are many variations of this story, but it symbolises the theme of mutual support.

This story in turn made me think of the #fraingers from #edcmooc and how we helped each other on our individual learning paths. I wouldn’t say we had a disability, but we taught each other tools and apps, supported and helped each other in coming up with our artefacts, and acted as sounding boards for each other’s half-baked ideas. We met and connected much like mercury drops and created an intricate labyrinth of ideas which grew like rhizomes. I hope that with this mooc I get to meet more people with whom I can share and exchange my creative (dis)ability and grow my wholesome being.

“individually each fish would be eaten up – but together they are a force”

Much of what we think about the world we believe on the basis of what other people say. But is this trust in other people’s testimony justified? This week, we’ll investigate how this question was addressed by two great philosophers of the Scottish Enlightenment, David Hume (1711 – 1776) and Thomas Reid (1710 – 1796). Hume and Reid’s dispute about testimony represents a clash between two worldviews that would continue to clash for centuries: a skeptical and often secular worldview, eager to question everything (represented by Hume), and a conservative and often religious worldview, keen to defend common sense (represented by Reid).

The Enlightenment was the period in European intellectual history, beginning in the late seventeenth century and running through the eighteenth, characterised by revolutions in the fields of science, politics, society and philosophy. These revolutions swept away the medieval worldview and ushered in our modern western world. In addition to the French Enlightenment, there was a very significant Scottish Enlightenment (key figures were Francis Hutcheson, David Hume, Adam Smith, and Thomas Reid).

Intellectual Autonomy

Hume is probably best known for his naturalistic philosophy: he doesn’t appeal to God or anything supernatural in giving philosophical explanations. He was a sceptic and is noted for his arguments against the cosmological arguments for the existence of God. In his Treatise of Human Nature, he proposed to study human beings with the same experimental scientific method that we use to study the rest of nature.

His essay “Of Miracles” in An Enquiry Concerning Human Understanding has also been highly influential. Hume defined a miracle as a violation of the laws of nature, and argued that we have no reason to believe in miracles and should certainly not consider them foundational to religion. We derive our knowledge of miracles only from the testimony of others who claim to have seen them. Since this is second-hand testimony (the experience of others) and not our own experience, we should not rely on it. Hume said that you should only believe people if they are likely to be right.

What is testimony?

Philosophers use the term ‘testimony’ to refer to any situation in which you believe something on the basis of what someone else asserts, either verbally or in writing. Hume and other philosophers writing about testimony are keen to point out that a lot of what we believe about the world is based on the testimony of other people. As Hume put it in the essay on miracles:

“there is no species of reasoning more common, more useful, and even necessary to human life, than that which is derived from the testimony of men.”

To understand what Hume is talking about, think of some city that you’ve never visited. You’ve got a lot of beliefs about that city: beliefs about who lives there, population size, what it’s like. All these beliefs are based on testimony: they are based on what you’ve read in newspapers about that city, or you’ve talked to someone who has been there, or you’ve read the Wikipedia article about it.

Hume assumes that you should only trust testimony when you have evidence that the testifier is likely to be right. Hume thinks that this assumption follows from what seems like an innocuous assumption, sometimes called evidentialism:

“A wise man … proportions his belief to the evidence.”

Hume argues that you need independent evidence to trust someone’s testimony; the credit we give to testimony “admits of a diminution, greater or less, in proportion as the fact is more or less unusual.”

Hume’s definition of a miracle: a miracle is a violation of the laws of nature, i.e. something that has never happened “in the common course of nature.” For Hume, a miracle is an event that is an exception to a previously exceptionless regularity. It’s just something that has never happened before.

With this definition of a miracle in place, and with Hume’s assumption about testimony, we can now start to see why Hume concludes that you should never believe, on the basis of testimony, that a miracle has occurred. Here’s how Hume articulates his argument in the essay on miracles:

“No testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish.”

That is, a miracle can only be credible if the testimony in its favour is far more forceful than the laws of nature contradicting it.
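One compact way to put Hume’s maxim (a probabilistic gloss of mine, not Hume’s own notation) is as a comparison of two probabilities:

```latex
% Accept testimony T that a miracle M occurred only if the testimony's
% being false is itself less probable than the miracle it reports.
\[
  \text{Believe } M \text{ on the basis of } T
  \quad\Longleftrightarrow\quad
  P(T \text{ is false}) \;<\; P(M)
\]
```

Since a miracle is by definition an exception to an exceptionless regularity, Hume takes \(P(M)\) to be as low as any probability can be, which is why the condition is so hard to satisfy.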

Hume’s last premise is that the falsehood of the testimony is always more likely than the miracle it reports: false testimony is very common, people are often wrong about things, and so even sincere testimony is frequently mistaken. Therefore, Hume concludes, you shouldn’t trust someone’s testimony if what they’re claiming is that a miracle occurred.

Reid’s challenge:

Hume’s contemporary critic, Thomas Reid, challenged his assumption that you should only trust testimony when you have evidence that the testifier is likely to be right. Reid argued that trusting testimony is similar to trusting your senses: believing something on the basis of what someone else says is just like believing something on the basis of seeing it with your own eyes. And we don’t only trust our senses when we have evidence that they’re likely to be right. So if Reid’s argument is right, Hume’s assumption about testimony is false.

Hume and Reid both believed that there were innate principles that governed how we think and how we feel. Reid, however, thought that we were also hardwired to trust other people’s testimony. He thought we had an innate “principle of credulity,” which he defined as:

“a disposition to confide in the veracity of others, and to believe what they tell us.”

For Reid, we are innately “hardwired” to trust other people’s testimony. For Hume, we first have to get evidence that they’re likely to be right – at least evidence that human beings in general are likely to be right – before we can trust other people’s testimony. Reid asks us to think about small children and the extent to which they trust the testimony of other people. And he points out that children are very much disposed to trust the testimony of other people – they’ll believe whatever you tell them. But this, Reid argues, is incompatible with Hume’s picture of testimony:

“[I]f credulity were the effect of reasoning and experience [as Hume claims], it must grow up and gather strength, in the same proportion as reason and experience do. But, if it is the gift of Nature, it will be strongest in children, and limited and restrained by experience; and the most superficial view of human life shews, that the last is really the case, and not the first.”

In other words, if Hume’s picture were right, the principle of credulity would be weakest in children, because they’d not yet have any experience of how likely (or unlikely) people are to assert truths (or falsehoods). But in fact children are the most trusting, and adults the least trusting and the most sceptical. This is the opposite of what you’d expect, if Hume’s picture were right. So rather than being based on experience, our trust in each other’s testimony is a “gift of Nature.”

Reid talks about what children do in fact believe, whereas Hume talks about what people ought to believe. But we can still appreciate Reid’s point here if we take him to be saying: Hume’s view implies that children should not trust the testimony of other people until they have evidence that people are likely to assert the truth. But this seems wrong: there’s nothing wrong with children trusting the testimony of other people, like their parents. Reid claims that if we abided by Hume’s principles,

“no proposition that is uttered in discourse would be believed [, and] such distrust and incredulity would deprive us of the greatest benefits of society, and place us in a worse condition than that of the savages.”

The big dispute between Hume and Reid on testimony: is trusting other people an innate principle (as Reid argues), or do we need evidence of the reliability of testimony before we trust it (as Hume argues)? In addition to the principle of credulity, Reid said that there was a “principle of veracity”, which he defined as: “a propensity to speak the truth … so as to convey our real sentiments.”

“Lying,” Reid goes on to say, “is doing violence to our nature.” For Reid, just as we are naturally trusting creatures, we are naturally honest creatures. Hume, however, challenges this idea that we are “hardwired” for honesty. He gives numerous examples of people testifying falsely. People often have motives to lie, as when “they have an interest in what they affirm.” There are “advantages” to “starting an imposture among an ignorant people.” Hume also says that human beings are prone to believe “the tales of travellers” because human beings generally find the feelings of “surprise and wonder” agreeable. And human beings are prone to testify, regardless of whether they have good evidence for what they’re saying, because of

“[t]he pleasure of telling a piece of news so interesting, of propagating it, and of being the first reporters of it.” This, Hume argued, is why gossip and rumour spread so quickly.

What is Enlightenment?

Enlightenment, according to Immanuel Kant, is man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own understanding without the guidance of another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another. “Sapere aude!” [Dare to know] “Have courage to use your own understanding!” – that is the motto of enlightenment. According to Kant, Enlightenment is the process by which people rid themselves of intellectual bondage, achieved by thinking freely and using one’s own self-determination to act judiciously.

Kant is talking here about the extent to which we form beliefs or opinions on the basis of testimony. To believe something on the basis of testimony is to allow one’s “understanding” to be “guided” by another person. And Kant clearly thinks that not being so guided, not basing beliefs on testimony, is in some sense a virtue – hence the motto “Think for yourself.” Contemporary philosophers have come to call this virtue, if indeed it is a virtue, intellectual autonomy.

Hume’s approach to testimony is that we shouldn’t believe others’ testimony unless we have evidence that it is likely to be right. Hume doesn’t think that we should never believe something on the basis of testimony; he thinks that we often do have enough evidence to trust other people’s testimony. We should only trust other people’s testimony, on his view, when we ourselves have evidence that the person is likely to be right. According to Hume, we shouldn’t trust anyone blindly. According to Reid, intellectual autonomy is a violation of nature: humans are essentially trusting, social creatures, and our beliefs and opinions are naturally guided by others. For Reid, the virtue is not intellectual autonomy but intellectual solidarity. That is the ideal Reid offers against the individualism of Kant and Hume’s “enlightenment.”

Value of intellectual autonomy

Which theory is right? Hume and Kant with their ideal of intellectual autonomy or Reid with his ideal of intellectual solidarity? Why think that it is a good thing to be intellectually autonomous?

Kant appeals to “Sapere aude”, which means “dare to be wise”. He argues that a person whose beliefs and opinions are based on testimony alone doesn’t really have knowledge. When he says “dare to know” he means: dare to base your beliefs on something other than testimony.

Genuine or real knowledge requires what Plato called the ability to “give an account”: the ability to explain, or to situate that knowledge in some broader body of information. That is something, you might think, that you cannot get from testimony. That kind of understanding, or, as Kant might put it, wisdom, can only be gotten on your own: you cannot get it from someone else. Perhaps the value of intellectual autonomy comes from the fact that knowledge (or understanding, or wisdom) is only possible for the intellectually autonomous person.

The second way to defend the value of intellectual autonomy is to think about the social or political implications of intellectual solidarity. The policy of trusting other people’s testimony, without evidence of their reliability, has conservative implications. Think of the extent to which our beliefs and opinions can be shaped by our communities, and in particular the communities that we grow up in. People tend to believe what the people around them believe, and they often inherit religious and political and moral views from previous generations. Our question about the value of intellectual autonomy can be seen as a question about the value of this tendency: are we fans of this tendency to trust other people (like Reid, with his views of the naturalness of trusting testimony), or are we sceptical of this tendency (like Hume)?

How you answer this question depends on how you think about conservatism in intellectual matters. If you value progressive and innovative breaks with tradition, and hope that conventional wisdom will be overturned, you may side with Hume, and see intellectual autonomy as a virtue – a social or political virtue. If you value the conservation of your community’s beliefs, and hope to avoid radical breaks with tradition, you may side with Reid, and see intellectual solidarity as a virtue.

We all live with some sense of what is good or bad, some feelings about which ways of conducting ourselves are better or worse. But what is the status of these moral beliefs, senses, or feelings? Should we think of them as reflecting hard, objective facts about our world, of the sort that scientists could uncover and study? Or should we think of moral judgements as mere expressions of personal or cultural preferences? This week we’ll survey some of the different options that are available when we’re thinking about these issues, and the problems and prospects for each.

Empirical judgements are based on scientific testing or practical experiences. They are derived from experiment and observation rather than theory.

Examples of empirical judgements:

The earth and other planets revolve around the sun.

There are + and – electrical charges.

Some traits in plants are genetically inherited.

The so-called “God” particle is real.

The sky is blue.

The book is on the desk.

Moral judgments are evaluations or opinions formed as to whether some action or inaction, intention, motive, character trait, or a person as a whole is (more or less) Good or Bad as measured against some standard of Good. – http://www3.sympatico.ca/saburns/pg0402.htm

Moral judgements are evaluations of someone or something as good or bad, right or wrong.

Examples of moral judgements:

Giving to charity is morally good.

Taking care of your children is morally required.

Protesting injustice is morally right.

Cain killing Abel out of jealousy was morally wrong.

Oedipus sleeping with his mother was morally bad.

Genocide is morally abhorrent.

Polygamy is morally dubious.

Status of Morality:

We make moral judgements in everyday life. Here, we won’t be discussing whether these moral judgements are correct or incorrect. Rather, we will be asking about the status of these judgments. What are we doing when we make such judgements? Are we representing objective matters of fact? Or are we describing our personal or cultural practices? Are we depicting some element of the universe out there? Are we expressing our emotions toward things? These are the types of questions we ask when we ask about the status of morality.

Three questions about these judgments:

Are they the sorts of judgments that can be true or false – or are they mere opinion?

If they can be true/false, what makes them true/false?

If they are true, are they objectively true?

Three philosophical approaches to the status of morality:

Objectivism: our moral judgments are the sorts of things that can be true or false, and what makes them true or false are facts that are generally independent of who we are or what cultural groups we belong to – they are objective moral facts. It is the view that there are universal moral principles, valid for all people, situations and times. It holds that moral principles have objective validity, independent of cultural acceptance. Moral principles are universal, though they may admit some exceptions.

Relativism: our moral judgments are indeed true or false, but they’re only true or false relative to something that can vary between people.

Cultural Relativism: our moral judgments are indeed true or false, but they’re only true or false relative to the culture of the person who makes them.

Subjectivism: our moral judgments are indeed true or false, but they’re only true or false relative to the subjective feelings of the person who makes them. “X is bad” = “I dislike X”. Subjectivism is a form of relativism.

Objectivism: our moral judgements are the sorts of things that can be true or false, and what makes them true or false are facts that are generally independent of who we are or what cultural groups we belong to – they are objective moral facts.

Challenge to Objectivism: Important difference between

how we determine whether something’s morally right/wrong

how we determine whether an empirical claim is true/false

With a moral judgment, like, genocide is morally abhorrent, or polygamy is morally dubious, if somebody disagrees with you, it seems difficult to know what method we would use to settle the issue. How do we figure out who’s right about the issue? It doesn’t look like we can observe the world and find the moral facts in the same way that we can with the empirical facts. Can Objectivists explain this intuitive difference?

Relativism: our moral judgements are indeed true or false, but they’re only true or false relative to something that can vary between people.

Challenge to Relativism: It seems like there’s such a thing as moral progress. Can Relativists explain this possibility? For example, in the past people thought that slavery was perfectly fine but now we think of slavery as morally abhorrent. That seems like a piece of moral progress, we’ve gone from a bad view to a good view. But if the relativist view is right, somebody in the past said slavery is morally okay, that could be true relative to that culture. Whereas someone now says slavery is, slavery’s morally wrong, that could be true relative to our culture. So, there’s a sort of difference in opinion, but there’s no progress in opinion. So the basic challenge here for the relativists is to explain the possibility of moral progress.

Emotivism: our moral judgments are not the sorts of things that can be true or false; they are expressions of our emotions toward things (“X is bad” expresses disapproval of X).

Challenge to Emotivism: It seems like we can use reason to arrive at our moral judgments just as we use reason to arrive at our empirical judgments. But how can Emotivists explain this intuitive similarity?

We sometimes reason our way to our moral views and opinions. But if emotivism is right, then our moral opinions are emotive reactions, not reasoned responses to questions about morality. Think about the judgement that Oedipus sleeping with his mother Jocasta was morally bad. Someone might initially think that’s right, but then reason: Oedipus didn’t know Jocasta was his mother, so what he did wasn’t culpable, and on reflection wasn’t morally bad. That kind of transition, changing your mind through reason, is really hard for the emotivist to explain, because the emotivist thinks that the moral judgements we make are emotive reactions, not reasoned responses to beliefs about the way things are. So the basic challenge to emotivism is to explain how we can reason our way to our moral views.

In week 3 we contemplate the question: what is it to have a mind? What are the special properties that beings with minds have? What sorts of things have those properties: animals? Infants? Computers? This week we discuss some of the approaches contemporary philosophers have taken to the question of what it is to have a mind. We begin with Cartesian dualism, which claims that the mind is immaterial; continue to identity theory, the view that the mind is identifiable with physical matter; and finish with functionalism, according to which a mental state is essentially identifiable with what it does. In the second part, we concentrate on the problems that the thought experiments of Alan Turing and John Searle pose for accounts of the mind.

One philosophical technique for working out what properties something has is to compare it with something that doesn’t have those properties. For example: compare a day in the life of a tennis ball, a dog and a human to see the characteristics of the daily existence of each.

The human mind can think thoughts about thoughts, imagine unreal states, plan for the future, change the environment for survival, and have conscious awareness: the “what is it like?” quality of experience that accompanies each thought process.

How do we characterise what it is like to have a particular experience? Any story of how the mind works is going to have to explain why there is something it is like to have our experiences, and how it is we’re able to think about things.

Cartesian (or Substance) Dualism:

René Descartes, the iconic 17th-century philosopher of mind, believed minds were made of an immaterial substance, different from the human body. He reached this conclusion by arguing that the nature of the mind (a thinking, non-extended thing) is completely different from that of the body (an extended, non-thinking thing), and therefore it is possible for one to exist without the other. This argument gives rise to the famous problem of mind–body causal interaction, still debated today: how can the mind cause some of our bodily limbs to move (for example, raising one’s hand to ask a question), and how can the body’s sense organs cause sensations in the mind, when their natures are completely different?

Problem of Causation:

Elisabeth of Bohemia, his correspondent, challenged Descartes’ view. She asked: if the mind is an immaterial substance, how does it effect changes in the physical body? This is known as the Problem of Causation. For physical things to move, including humans, there must be some physical impetus changing the physical state so that it/we can move.

Problem of Causation: how does an immaterial substance cause a physical substance to move? Thoughts, beliefs and desires can cause particular behaviours, and behaviours happen in physical bodies.

Interestingly, Elisabeth introduces her own nature as female as one bodily ‘condition’ that can impact reason. While Descartes concedes that a certain threshold of bodily health is necessary for the freedom that characterises rational thought, he disregards Elisabeth’s appeal to the “weakness of my sex”. http://plato.stanford.edu/entries/elisabeth-bohemia/

Minds, Brains and Computers

Physicalism is the view that everything that exists in this world is physical: minds and bodies are made of exactly the same stuff. Physicalism is sometimes known as “materialism”.

There are different views to explain the idea of Physicalism, some of which are:

Identity theory is the idea that mental states or properties are neurological states or properties, i.e. if two things are physically identical, then they will be psychologically identical. It asserts that mental events can be grouped into types and then correlated with types of physical events in the brain. Identity theory is a reductionist view, as it reduces the psychological to the physical.

There are two ways of spelling out the Identity Theory:

Token/Type Identity:Type = category
Token = instance

For example: a class of objects such as cars is a category, a type, while a particular car, say my Audi, is a token: a representative instance of the type "car".
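The type/token distinction maps neatly onto classes and instances in programming. Here is a rough sketch (the class and variable names are purely illustrative):

```python
# Type vs token: a class plays the role of a type (category);
# each object plays the role of a token (instance of that type).

class Car:
    """The type: the general category 'car'."""
    def __init__(self, make):
        self.make = make

# Two tokens of the same type:
my_audi = Car("Audi")
your_audi = Car("Audi")

# Both tokens belong to the same type...
print(isinstance(my_audi, Car))   # True
print(isinstance(your_audi, Car)) # True

# ...but they remain distinct tokens (different instances):
print(my_audi is your_audi)       # False
```

Even two qualitatively identical tokens (both Audis) are still numerically distinct instances, which is exactly the distinction Type and Token Identity theories turn on.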

Type Identity theory claims that for every type of mental phenomenon there is a corresponding type of physical phenomenon, i.e. certain types of brain state are identical with certain types of mental state. So all happy mental states would be identical with a certain sort of brain state: a type of psychological state is identical to a type of physical state. This form of the theory assumes two things:

Every time you are in a certain mood – such as being happy – there is the same corresponding brain state

The same mood/brain state relationship occurs in everyone else

Token identity theory, by contrast, admits only that every instance (token) of a mental phenomenon is identical with some instance of a physical phenomenon; it allows that brains may not function in exactly the same way to produce mental states. So, according to Token identity theory, two people might be happy at the same time while their brain states are different.

A problem for Type Identity theory:

Hilary Putnam argued that type identity theory is too narrow. The theory narrows particular mental states down to particular physical states, and everything is reduced down to the human brain and the human body. But other species also feel pain. The problem arises both for actual species such as fish and for hypothetical species such as aliens. Given that these species have very different ways (very different brain chemistries) of realising a sensation such as pain, how can we assume that such an experience is identical with only a certain brain state? This greatly weakens Identity Theory. There are two options: either we assume that such creatures do not have experiences similar to ours, or we admit that conscious experiences such as pain are "multiply realisable".

The key point for Putnam is that mental states are multiply realisable: a certain mental state such as pain can be realised in a variety of different physical states. Each species has a different chemical make-up and can realise pain in a different manner. Hence a particular psychological state can be realised in many different physical ways. This is the thesis of multiple realisability.

Functionalism

Hilary Putnam thought that we should understand mental states in terms of their function. Instead of thinking about what brains are made of, we should think about what functions they perform. Functionalism is the approach that concentrates on what the mind does, i.e. the function of mental activity. Functionalists claim that trying to give an account of mental states in terms of what they're made of is like trying to explain what a chair is in terms of what it's made of: what makes something a chair is whether it can function as a chair. Putnam's claim was that we should identify mental states not by what they're made of, but by what they do.

Mental states are caused by sensory stimuli and certain beliefs. They also cause behaviours and new mental states.
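Functionalism and multiple realisability have a natural programming analogy: the same functional role (stimulus in, behaviour out) can be realised by completely different implementations. The classes below are invented for illustration, and the "realisations" are schematic, not serious neuroscience:

```python
# Functionalism, sketched: two very different physical "realisations"
# play the same functional role (sensory input -> internal state -> behaviour).

class HumanPain:
    def stimulus(self, damage):
        # Realised in one substrate (schematically, neurons firing).
        self.fibres_firing = damage > 0
        return "ouch!" if self.fibres_firing else "fine"

class OctopusPain:
    def stimulus(self, damage):
        # Realised in a completely different physical substrate.
        self.nociceptors_active = damage > 0
        return "ouch!" if self.nociceptors_active else "fine"

# Same function, multiply realised: a functionalist identifies the
# mental state by this shared role, not by the underlying hardware.
for creature in (HumanPain(), OctopusPain()):
    print(creature.stimulus(damage=5))  # ouch!
```

The internals differ entirely; what they share is the pattern of causes and effects, which is exactly what the functionalist says matters.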

Computational theory of mind

The computational theory of mind is the view that the human mind and/or human brain is an information-processing system and that thinking is a form of computing. One could argue that minds are information-processing machines: they take information provided by our senses and by other mental states, process it, and produce new behaviours and mental states. We equate our minds with computers in their input-process-output function.

Source: Wikipedia

Turing Machines

If minds are computing machines, then how complex does an information-processing system need to be before it counts as a mind? Alan Turing invented the Turing machine, which he called an "a-machine" (automatic machine), in 1936 as an abstract model of computation. Later, in 1950, he proposed the Turing test: a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed this scenario: a human judge engages in natural-language conversations with a human and with a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions; it checks how closely the answers resemble typical human answers.
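A Turing machine itself is surprisingly simple: a tape of symbols, a read/write head, and a table of rules. Here is a minimal simulator, a sketch rather than Turing's own formalism, running an invented rule table that flips every bit on the tape:

```python
# A minimal Turing machine simulator (illustrative only).
# Rules map (state, symbol) -> (symbol_to_write, move, next_state).

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A toy machine that inverts every bit, then halts at the first blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip_rules))  # 0100
```

Despite this austerity, anything any digital computer can compute, some Turing machine can compute, which is why the question "how complex before it counts as a mind?" bites.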

Problems for Turing Test:

It's language-based: all testing depends on intelligence conveyed through language. We cannot test animal intelligence this way, as animals cannot talk.

It's too anthropocentric: we're testing for human intelligence, and it seems chauvinistic to think that the only intelligence worth studying is human intelligence. There could be other forms of intelligence out there.

It doesn't take into account the inner state of the machine. If a machine is able to pass the Turing test, shouldn't we look into that particular machine to see what it's made of? Would this machine pass as having a mind?

The idea that the mind is a computing machine is certainly an attractive one. However, there are problems with that view.

The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centres on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese. The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle’s argument is a direct challenge to proponents of Artificial Intelligence, and the argument also has broad implications for functionalist and computational theories of meaning and of mind.
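Searle's point, that rule-following symbol manipulation need not involve understanding, can be caricatured in a few lines of code. The "rule book" below is invented for illustration; the program produces plausible replies while representing nothing about what any symbol means:

```python
# The Chinese Room, caricatured: a purely syntactic rule book.
# The program matches input symbol strings to output symbol strings;
# at no point does it represent what any symbol *means*.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫小明.",       # "What's your name?" -> "I'm Xiao Ming."
}

def room(symbols):
    # Pure syntax: shuffle symbols according to rules; no semantics involved.
    return RULE_BOOK.get(symbols, "对不起, 我不明白.")  # "Sorry, I don't understand."

print(room("你好吗?"))  # 我很好, 谢谢.
```

From outside, the room appears to converse in Chinese; inside, there is only lookup and string-matching. Whether a vastly more sophisticated version of this could ever amount to understanding is precisely what Searle and his critics dispute.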

We know a lot of things – or, at least, we think we do. Epistemology is the branch of philosophy that studies knowledge; what it is, and the ways we can come to have it. This week, we’ll look at some of the issues that arise in this branch of philosophy. In particular, we’ll think about what radical scepticism means for our claims to knowledge. How can we know something is the case if we’re unable to rule out possibilities that are clearly incompatible with it?

In this week we learn about three important aspects about the theory of knowledge:

All knowledge is information. Not all information is knowledge. In this world of information overload, we need to be able to distinguish between good information and bad information and how we can use it. And that’s why knowledge is so important. Identifying what is good information is one of the reasons why philosophers are very interested in trying to determine exactly what knowledge is.

A fundamental way in which we use the word "knows" is what's called Propositional Knowledge, which is knowledge that something is the case. In order to understand what propositional knowledge is, we first need to understand what a proposition is.

What is a Proposition?

A proposition is what is expressed by a declarative sentence, which is a sentence that declares that something is the case. A proposition is either true or false.

Examples of sentences that express a proposition:

The cat is on the mat.

Your dinner is in the oven.

The moon is made of cheese.

Examples of sentences that don’t express propositions:

Shut that door.

Yes please.

How can I help you?

Propositions can be true or false. A sentence like "Shut that door" is not the sort of thing that can be true or false, because it doesn't describe the world as being a certain way. But a sentence like "the cat is on the mat" could be true (there is a cat on the mat) or false (there isn't a cat on the mat). If you have propositional knowledge of this proposition, then you know that the cat is on the mat.

One way of getting a handle on what propositional knowledge involves is to contrast it to another kind of knowledge called know how or ability knowledge.

Propositional versus Ability Knowledge

Knowledge-that (propositional knowledge):

Knowing that the earth orbits the sun
Knowing that Paris is the capital of France
Knowing that one has a toothache

Knowledge-how (ability knowledge):

Knowing how to drive
Knowing how to play the piano
Knowing how to beat the stock market

Basic constituents of knowledge

1.) Propositional Knowledge – knowledge that something is the case, expressed by a declarative sentence describing the world as being a certain way.

2.) Know-How or Ability Knowledge – contrasts with propositional knowledge, as know-how is manifested in an ability or skill.

Two conditions for Propositional Knowledge

One can know a proposition only if:


That proposition is true;

One believes that proposition.

1.) Truth – if you know a proposition, then that proposition must be true. So, since propositional knowledge requires truth, you can't know a falsehood.

2.) Belief – if you know a proposition, then you must believe that proposition. Knowledge requires belief but is stronger than belief.

When we say that knowledge requires truth, what we mean by that is that you can’t know a falsehood. In particular, we’re not suggesting that when you know you must be infallible, or that you must be absolutely certain.

For example, do you know what you had for breakfast this morning? Are you certain about this? Isn't it possible that you have made a mistake? The moral: while knowledge demands truth, it doesn't require certainty (any more than it requires infallibility).

Knowing that a proposition is true is not the same as knowing that this proposition is probable. Consider the claim that human beings have been to the moon, and compare that with the claim that it’s likely or probable that human beings have been to the moon.

There are two claims to compare here:

I know that human beings have been to the moon.

I know that it is likely/probable that human beings have been to the moon.

The second claim is much weaker than the first: it commits you only to the likelihood of human beings having been to the moon, not to the fact itself.

Also, knowing a proposition is not the same as knowing that it is likely or probable. Sometimes we hedge the things we claim to know: when we're not sure, we claim only to know that something is likely or probable, rather than knowing the proposition itself. Although it is sometimes appropriate to hedge in this way, in lots of cases we do know things outright, without the qualification.

So, knowledge requires true belief. Knowledge requires getting it right: if you don't get it right, you don't have a true belief, and you're not in the market for knowledge.

Is there more to knowing than getting it right? There are all kinds of ways of getting it right, and merely having a true belief doesn't always count as knowing (sometimes you believe truly without knowing). One can get it right in lots of ways that wouldn't suffice for knowledge.

For example, imagine a juror in a criminal trial who believes the defendant is guilty purely out of prejudice. As it happens, he is right; but clearly he does not know that the defendant is guilty. Now consider another juror who formed his opinion by thinking critically about the evidence. Both jurors get it right, but the first got it right because of prejudice (luck), so he doesn't know; the second got it right by sifting through the evidence (anti-luck), so he knows. So, for epistemologists, knowledge requires more than just getting it right, more than just true belief; it requires attending to evidence and thinking things through to reach a correct judgement.

The classical answer to the question of what you need to add to true belief to get knowledge is justification. In addition to true belief, a proposition must also be justified. A belief is said to be justified if it is obtained in the right way. Justification means that beliefs are based on reasoning and evidence rather than luck or misinformation.

It is sometimes called the tripartite or three-part account of knowledge. It goes right back to antiquity, back to Plato.

For some time, the justified-true-belief account was widely believed to capture the nature of knowledge. In 1963, Edmund Gettier published a short but widely influential article that challenged this account. He provided examples of cases in which a subject has a justified true belief and yet, we would agree, does not have knowledge.
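The tripartite account, and the shape of Gettier's objection to it, can be stated almost formally. Here is a toy model; the predicate names are placeholders, not a serious epistemology:

```python
# The tripartite (JTB) account as a predicate:
# S knows that p iff (1) p is true, (2) S believes p,
# and (3) S is justified in believing p.

def knows_jtb(p_is_true, believes_p, justified):
    return p_is_true and believes_p and justified

# The prejudiced juror: true belief, but no justification -> no knowledge.
print(knows_jtb(p_is_true=True, believes_p=True, justified=False))  # False

# The careful juror: justified true belief -> knowledge, on this account.
print(knows_jtb(p_is_true=True, believes_p=True, justified=True))   # True

# Gettier's point: an agent like the stopped-clock victim satisfies
# all three conditions (the belief is true, held, and justified by a
# normally reliable method), yet intuitively lacks knowledge.
# The model still outputs "knows", which is exactly the problem:
print(knows_jtb(p_is_true=True, believes_p=True, justified=True))   # True
```

The code makes the structure of the objection vivid: Gettier cases are inputs on which the JTB definition returns the wrong verdict, which is why epistemologists went looking for a fourth condition.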

So, what constitutes adequate justification so knowledge is not based on luck?


Example 1.) The Stopped Clock Case, first offered by Bertrand Russell. Someone forms a belief about what time it is by looking at a stopped clock. The belief is true only by luck: they happen to look at the clock at the one time of day when it is right. The agent has no reason to doubt the clock's reliability, and this assumption informs the belief. It is a counterexample to the anti-luck intuition: the agent has a justified true belief but no knowledge, only a true belief based on luck.

Example 2.) The Farmer in the Field, offered by Roderick Chisholm. A farmer looks onto a field and believes he sees a sheep; however, what he may actually be seeing is a sheep-like object, while a real sheep hides behind it. He has a justified true belief that could easily have been false. Is the farmer really believing there is a sheep, or wanting to believe the thing out there is a sheep? (Do we perceive what we want to believe?)

There is a two-step formula for constructing Gettier-type problems:

Step 1: have an agent form a belief in a way that would normally yield a false belief. In the farmer and clock examples, the agents convince themselves that what they see is a sheep, or the right time, because of the context; they have no reason not to form these beliefs.

Step 2: turn the false belief into a true belief for reasons independent of the justification the subject has.

Keith Lehrer proposed that we need to add a fourth condition: the belief must not be based on any false assumption, or "false lemma".

The no-false-lemmas view holds that knowledge is justified true belief where the true belief is not based on any false lemmas.

Lemmas (assumptions): on a narrow conception of assumptions, as in the clock example, the agent assumes the clock is working and has no reason to think it has stopped. On a broad conception, an assumption is any false belief germane to the target belief formed in the Gettier case. There are always assumptions at play in forming our beliefs, and assumptions don't always generate the right kind of result.

Two main questions raised by Gettier Problem:

Whether or not justification is necessary for knowledge: what do we require of knowledge over and above true belief?

If the justification condition doesn't eliminate knowledge-undermining luck, that is, if justification by itself can't respond to our intuition or explain how and when our true belief is down to luck, then what kind of condition would? How do we know our true belief isn't down to luck?

Gettier's cases show we can have justified true beliefs that are down to luck. What condition must we add so that we can be confident we have cognitive success: true belief that isn't down to luck?

Radical scepticism is the view that knowledge (at least of the world around us) is impossible; that is, we don't know anything and we couldn't know anything. Sceptics make use of sceptical hypotheses: scenarios where everything is as it usually appears to be, but where we are being radically deceived. The sceptic says that we cannot rule out sceptical hypotheses, and thus argues that we are unable to know anything about the world around us.

Source: Wikipedia

Brain-in-a-Vat Sceptical Argument

The Brain-in-a-Vat hypothesis is similar to what is shown in the film The Matrix. The idea is that what you see around you in this world, and all of your experience, is not real. None of this is taking place. In fact your brain has been harvested: it has been taken out of your skull, put in a vat of nutrients, and is being fed fake experiences. The brain-in-a-vat floats there thinking that it is out in the world interacting with other people, seeing things and doing things, but in fact nothing of the sort is taking place. The brain-in-a-vat has radically false beliefs. And yet its experiences are indistinguishable from the experiences we're having right now, which one would hope aren't brain-in-a-vat-type experiences.

So here's the question the sceptic asks: "How do you know you're not a brain-in-a-vat?" And of course the answer to that is "Well, probably we don't." So the sceptical argument runs:

I don’t know that I’m not a brain-in-a-vat.

If I don’t know that I’m not a brain-in-a-vat, then I don’t know very much.

So, I don’t know very much.

Even if we don’t know that we’re not brains-in-vats, so what? But if you were a brain-in-a-vat, then you wouldn’t have hands (since brains-in-vats are handless by definition). So how do you know that you have hands? (And if you don’t know this, what do you know?)

Epistemic Vertigo


So, the classical theory of knowledge has a flaw, and the sceptic wants us to entertain the possibility that we are brains-in-vats, even though we all know/feel/believe that our lived experiences are real. It seems that if we cannot rule out these hypotheses, then much of what we think we know is under threat.

Epistemic vertigo sets in when you stop to consider that you cannot rule out the possibility that you are a brain-in-a-vat, and that much of what you take yourself to know is thereby thrown into doubt.

My next MOOC has just started. It is called Introduction to Philosophy, taught by Dr. Dave Ward, Professor Duncan Pritchard, Dr. Alasdair Richmond, Dr. Allan Hazlett, Dr. Suilin Lavelle, Dr. Matthew Chrisman, and Dr. Michela Massimi. It is another Coursera course, again from The University of Edinburgh.

The course is divided into seven segments, and each week a different lecturer introduces a new topic.

First week’s lecture introduced the course and had us thinking about what Philosophy actually is.

From the syllabus:

We’ll start the course by thinking about what Philosophy actually is: what makes it different from other subjects? What are its distinctive aims and methods? We’ll also think about why the questions that philosophers attempt to answer are often thought to be both fundamental and important, and have a look at how philosophy is actually practised. Finally, we’ll briefly touch upon two very influential philosophers’ answers to the question of how we can know whether, in any given case, there really is a right way of thinking about things.

At first it seemed a bit difficult to understand and assimilate the content, so here’s my attempt at taking notes for my better understanding. Ary (All the world’s a mooc), my frainger from edcmooc is also taking this mooc and she has created the first week’s notes.

Philosophy is the activity of stepping back and working out the best way of thinking. Philosophy is an activity not a subject. For example, we step back from “doing” the activity of physics and analyse our thinking about the process of doing physics. Thinking about the process of “doing physics” is philosophy. We can think about “the doing” from the inside, the armchair, by looking at data and asking how data confirms or refutes what we already know. Or, we can revise our conceptions from the outside – taking our thinking and testing it against the world. Dr. Ward shares an example about medieval medicine and how through experience, empirical knowledge gained throughout history, we changed our thinking about medicine. We stepped back and worked out the best way of thinking about the science of medicine. So, as Dr. Ward says, we philosophize on all aspects of our lives. We are constantly stepping back and working out the best ways to understand and solve problems in our ever changing world to make the world a more hospitable place for all humanity.


Philosophical questions/problems arise anywhere, in any domain. Some philosophical questions are trivial while others help us reflect on our present behaviour. For example, why are there zippers versus why genocide, slavery or racism? Dr. Ward compares the philosopher to a child who constantly asks WHY? Philosophy helps us articulate and justify our assumptions and pre-suppositions. The philosopher not only asks why but also searches for answers.

Philosophy is fundamental as a subject, though not every activity calls for it in the moment: a brain surgeon or a bomb-disposal expert would not step back mid-task and ask why. There are lots of activities you can do well without philosophy, but philosophical questions are never far away. Philosophy can be frustrating because we always need to give reasons for our ways of thinking. And sometimes we realize our ways of thinking are indefensible, as with slavery, genocide, and racism. However, once we realize we were wrong in our way of thinking, we can step back and look at the "bad ways of thinking/behaving of the past" to help us reflect on, articulate, and justify our assumptions and presuppositions in the present.

What are some present day behaviours/ways of thinking that may appear barbaric to future generations? Some examples: practice of farming animals, irresponsibility to planet, ignoring the suffering of distant societies. Philosophy is therefore fundamental because it helps us not take our thinking or ways of being for granted. We must constantly ask why.

Doing good philosophy, per Hilary Putnam, entails more than just arguments; it requires big-picture vision. But why don't most societies or systems function this way? Maybe because not everyone sees and understands the big picture. Some lack the education, background, or ability, and those who do have the education and ability to see the big picture may take advantage of it for their own personal gain and greed. So it becomes easy to tell society both truths and lies, with lies embedded and hidden behind truths.


Lies create cognitive dissonance because people can’t cope with ideas or situations which may be too complex or uncomfortable for them to understand and accept. Perhaps this is why we have so much frustration and anger in the world today (Freud). We know there’s so much wrong with the world but are helpless and powerless to do anything. It’s so easy to manipulate people with arguments lacking any sort of vision.

Dr. Ward gave an overview of the components of an argument. Arguments provide evidence intended to demonstrate the truth of some conclusion. Premises are the claims an argument makes in support of its conclusion. An argument is valid when the conclusion follows from the premises; when it doesn't, the argument is invalid. A valid argument with true premises is a sound argument. So when we engage with philosophical problems it's critical that we do more than identify and assess the premises; we need to see whether the argument considers the big-picture vision, so we can articulate which aspects we agree or disagree with. We also need to recognize that some philosophical problems are so complex that they can't easily be resolved or expressed as a precise series of premises and conclusions.
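For simple propositional arguments, validity (the conclusion following from the premises) can even be checked mechanically by enumerating truth values. Here is a sketch applied to the sceptic's brain-in-a-vat argument from earlier, encoded schematically; the encoding into two variables is my own simplification:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    # An argument is valid iff no assignment of truth values makes
    # all the premises true while the conclusion is false.
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample
    return True

# Variables: k = "I know I'm not a brain-in-a-vat", m = "I know very much".
premises = [
    lambda k, m: not k,        # P1: I don't know I'm not a brain-in-a-vat.
    lambda k, m: k or not m,   # P2: if not-k then not-m (i.e. k or not-m).
]
conclusion = lambda k, m: not m  # C: so, I don't know very much.

print(is_valid(premises, conclusion, n_vars=2))  # True
```

The check confirms the sceptic's argument is valid; the philosophical work lies in deciding whether its premises are true, which is exactly what a truth table cannot settle.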

Summary

How do we know there is a right way of thinking about things? How do we know that finding the right way of thinking about things can be done through philosophy? Hume took a sceptical attitude toward finding out the truth about the world: causation is never something we can really know (his billiard-ball example), because our minds are prone to add to the impressions we get from the world. All we ever experience of our "self" is our thoughts, experiences, and beliefs. We can't prove the existence of God, given our mind's propensity to draw conclusions based on our impressions. Hume helped wake Kant from his "dogmatic slumber": in the Critique of Pure Reason, Kant argues that the idea of a world that doesn't correspond to the rules that govern our minds is nonsensical.

Has this age still got elegance? Take a photo of 21st-century elegance.

Yes, this age has still got elegance, more so in the field of education than any other. Today’s learner is not just a passive receptor of knowledge but a creator too! Open source is the way forward with collaboration and creation along with fellow learners and fraingers!

That’s why elegance in the 21st century is in the way we learn today….

So how does that connect? In today’s age of e-learning, e-commerce, e-mail, e-everything, e-friends and i-thingies, here’s the new electronic pipe. So here’s e-pipe, no tobacco, no ash, more healthy! And more importantly, something 106ish backwards E-pipe 601!

I know I just flipped the prompt. But that’s the advantage. You can interpret it any way you want. You can be creative and divergent! 🙂