TheOtherDrX's Higher Education bloghttps://theotherdrx.wordpress.com
Although not exclusively Higher Education...Fri, 15 Sep 2017 11:42:57 +0000enhourly1http://wordpress.com/https://s2.wp.com/i/buttonw-com.pngTheOtherDrX's Higher Education bloghttps://theotherdrx.wordpress.com
Black Swans: When you disprove your own hypothesis that all Swans are white.
https://theotherdrx.wordpress.com/2016/11/17/black-swans-when-you-disprove-your-own-hypothesis-that-all-swans-are-white/
Thu, 17 Nov 2016

I wrote a blog post a while back on the White Swan hypothesis, in which I challenge my research students to test the hypothesis that all swans are white, a generally accepted notion in the UK and Europe. Students either suggest finding as many white swans as possible to support the hypothesis, or propose looking for a single black swan to disprove it. As all good scientists know, the latter is ideal, but the former is so often the reality in the ‘publish or perish’ era of academia, and is the cause of so much bad science. When I wrote that post, I promised to highlight some of the Black Swans that I’ve located, and in doing so, disproved my Professor’s hypotheses, made myself mildly unpopular and, in some cases, managed to set the academic record straight.

In the link above (bad science), Amgen, a large biotech company, attempted to replicate 53 papers in the area of bone oncology, and found that the central hypothesis in 47 of them was not supported. Worryingly for me, this is the area in which I research. Explanations for so many cancer studies being impossible to replicate include outright research fraud; scientists who are convinced they are right, fool themselves into believing their results, and so are misled by their own cognitive bias (white swans); and, to some extent, poor-quality reagents.

We were working on Osteoprotegerin (OPG), a protein that Amgen were interested in, partly because it was reported to bind and inactivate the cytotoxic ligand TRAIL. TRAIL was (and still is) being developed by Amgen as an anti-cancer therapy, and the concern in the early 2000s was that OPG might stop their new drug from working if produced at biologically relevant levels, either by the cancer cells themselves or by the supporting mesenchymal/connective tissue cells. So that’s what we worked on.

The plan was simple. Grow cancer cell lines in the lab and allow them to accumulate enough OPG to become ‘protected’ from TRAIL. Alternatively, grow cancer-associated connective tissue cells in the lab and see if they produced enough OPG to protect the cancer cells. The experiment worked. Both breast and prostate cancer cell lines secreted OPG, and in these experiments the in vitro results supported the notion that OPG prevented TRAIL-induced cell death and that the cancer cells could produce it themselves. This produced several papers, and OPG production therefore became the major concern for resistance to TRAIL-based therapies.

Cancer cell lines grown in the lab are a poor proxy for cancer, but in many cases they are the best we have access to. The really important question was: what was happening in tumours within the patient? So we immunostained a series of breast and prostate cancers using a widely used and well-published antibody to OPG, and saw abundant staining in these cells, suggesting that they might produce abundant OPG in vivo, potentially rendering them resistant to TRAIL. So far so good. Many other research groups also found high OPG staining in tumours and other tissues, and crucially, we all used the same antibody.

The antibody in question did indeed bind OPG. It was used as part of an antibody pair in an ELISA assay, whereby one OPG-binding antibody is attached to a plastic well of a microplate and captures the protein (OPG), and the bound protein is then detected with a different, labelled antibody to OPG, so that signal is proportional to the OPG in the sample. One of these antibodies was also widely used for immunostaining in tissues: the antibody is applied to a thin section of tissue on a microscope slide, where it binds wherever OPG is present in the section. This detection antibody is then itself detected via a labelled secondary antibody, allowing visualisation.
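The quantification step described above (signal proportional to OPG in the sample) can be sketched in a few lines of code. This is a toy illustration only: all concentrations and optical-density readings below are invented, and real ELISA analysis typically fits a four-parameter logistic curve rather than the straight line used here.

```python
import numpy as np

# Hypothetical standard curve: known OPG concentrations (pg/ml) and the
# optical-density (OD) readings they produced in the plate reader.
standards_conc = np.array([0.0, 62.5, 125.0, 250.0, 500.0, 1000.0])
standards_od = np.array([0.05, 0.12, 0.20, 0.36, 0.68, 1.32])

# Least-squares linear fit: OD = slope * concentration + intercept.
slope, intercept = np.polyfit(standards_conc, standards_od, 1)

def od_to_conc(od):
    """Invert the standard curve: estimate OPG concentration from signal."""
    return (od - intercept) / slope

# 'Unknown' samples: read their OD, back-calculate the OPG concentration.
sample_od = np.array([0.25, 0.90])
print(od_to_conc(sample_od))  # estimated pg/ml for each sample
```

The same inversion logic applies whatever curve is fitted; only the shape of the standard curve changes.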

We had also been conducting Western blots with the same antibody (separating the proteins by mass and then detecting with the antibody, rather than immunostaining proteins in situ in cells or tissue) and found an abnormally sized OPG band in some breast and prostate cancer cell lines. The notion was that this could be a novel variant: OPG is known to be alternatively spliced, whereby the mRNA can include alternate exons (protein-coding) and retained introns (non-protein-coding). We knew that the OPG mRNA sometimes included an intron that created a premature ‘stop’ codon, potentially accounting for the smaller observed size of OPG on the Western blots. Many other groups had mentioned these mRNA and protein variants in conference presentations but hadn’t yet published their observations. We were awarded funding to investigate this potential novel biomarker. That’s where it all went wrong (or right).

Our plan was to selectively knock down the splice variant mRNA using RNA interference (RNAi) experiments, whilst leaving the full-length coding mRNA intact, to test the hypothesis that the splice variant produced the abnormally sized protein. We showed that RNAi designed to selectively knock down the OPG splice variants failed to affect the abnormally sized product. The hypothesis that OPG had a novel protein variant derived from a splice variant was dead. We then looked at the abnormally sized protein in more detail, to find out what it was. We performed two-dimensional gel electrophoresis, separating proteins first by charge and then by size, to give a better separation such that each protein was isolated into a spot on the gel. By transferring the proteins to a membrane, we could then perform a 2D Western blot using the anti-OPG antibody to localise the abnormally sized protein, without other proteins of the same size contaminating the same spot. We isolated protein from the region where the antibody bound and sent it for peptide sequencing by mass spectrometry. It turned out to be a different protein: carbonic anhydrase-2 (CA-II), an existing tumour marker known to be expressed in many cancers. We published the findings in the International Journal of Cancer. The notion that breast and prostate cancers expressed abundant OPG was now questionable, although our widely used ELISA to detect OPG remained valid.

We showed that the staining pattern observed with the anti-OPG antibody matched CA-II staining in many tumours, but not all, and that some were indeed expressing OPG. The paper cited all the studies that had used the antibody and whose data might be affected, including one of our own.

Most groups working on this wouldn’t have known about the antibody problem, wouldn’t have thought to even look for one, and some still might not know. We could have gone on for years collecting “evidence” that supported our hypothesis, probably publishing more papers whose conclusions might not stand up to retesting by alternate methods. Instead, we tested our hypothesis to destruction, on the basis that if we did all we could to disprove it and failed, then maybe, just maybe, the hypothesis is correct. Our hypothesis that the abnormally sized product was derived from a splice variant of the OPG gene had been disproved. In doing so, we inadvertently highlighted that the immunostaining in a number of papers looked more like the CA-II pattern.

And what happened? Well, the antibody supplier initially wasn’t impressed, but the antibody data sheet now cites our paper and makes the issue clear for future researchers. The academic community surely would cite this paper as a beacon of good practice, letting everyone know that those early OPG-staining papers might be unreliable. Erm… no. The paper has been cited twice. TWICE! In a journal with an impact factor of >5, you would expect five citations per year. However, the papers with the questionable OPG staining are still getting cited, and some of the earliest papers noting differential staining between two different anti-OPG antibodies, including the problem antibody in our study (and citing alternate explanations for this; see Fig. 1 here, and Fig. 1 here), are still being cited (169 times to date for one of them). What is clear is that even if we identify a Black Swan, most people don’t want to accept that it exists. Instead, they insist it is a large, dark Goose.

Debating evolution with an opponent of evolution. Let’s call it ‘public engagement of science’
https://theotherdrx.wordpress.com/2015/03/03/debating-evolution-with-an-opponent-of-evolution-lets-call-it-public-engagement-of-science/
Tue, 03 Mar 2015

As a scientist with an interest in genetics, I firmly oppose the teaching of either creation or intelligent design ideas alongside biology, genetics and science. However, I do find the Creation vs. Evolution debate fascinating: not just because of the beauty of the genetics concepts involved, but also because of the extent to which genetics concepts are misinterpreted both by those arguing for evolution and by those arguing against it. I recently read a review of geneticist Dr Adam Rutherford’s book “Creation: The Origin of Life/The Future of Life” on the website EvoGenesis, which I originally came across via this tweet from Dr Adam Rutherford:

The overall mission of EvoGenesis is to “develop a credible Christian consensus against evolution”, but I gather not necessarily from a typical ‘Creationist’ perspective.

So I asked the reviewer some questions, possibly teasing out some potential misunderstandings. In the resulting well-mannered and polite debate, I had three simple rules: 1) try to get across the beauty of genetics in my answers, 2) provide sound genetics knowledge, backed by peer-reviewed studies where possible, and 3) try to be far more courteous than the other commenters (who are presumably mostly scientists). What resulted was a fascinating insight into how evolutionary genetics is perceived, and how, in the absence of knowledge and conceptual understanding, ‘untestable’ hypotheses can become widely accepted as fact and given credibility. Neither of us shifted our entrenched positions on evolution as far as I can see, but that was never a realistic aim of the discussion. However, I thoroughly enjoyed the resulting exchange, which allowed me to put some of my thoughts into proper context.

Below are my questions and comments, followed by the response from ‘Creation Foundation‘. It should make interesting reading for anyone with an interest in genetics, epigenetics and/or evolution. Oh, and a bit of Ornithology. Where would we be without Darwin’s finches? Or Crossbills for that matter…

Posted by TheOtherDrX (first post in response to the three statements above)

There is a huge gulf between early Darwinian genetics, the allure of the coding sequence being a ‘blueprint’ for complex life, and the current view of genetics. This is largely due to our knowledge of epigenetics. The DNA sequence is not the sole answer. Chromatin is ‘more of an answer’, and chromatin is DNA plus histone proteins. These proteins that DNA is wrapped around (histones) regulate which genes are expressed by which cells in which environment (requiring complex cell-cell and cell-extracellular matrix signals). So having the DNA sequence DOES tell us what protein can be made, but doesn’t necessarily tell us that it WILL be made in a particular cell at a particular time, although we can make some good predictions. Just having the DNA sequence does not allow us to make a complex organism such as a human. Any modern geneticist would agree. We cannot currently ‘sequence’ the epigenome all that reliably, but we can make some inferences: how the histones are chemically modified, and whether a region of chromatin is packaged in a particular manner, tells us about likely gene expression. We cannot predict exactly how a cell in a 10-day-old developing embryo will behave, but we know that it is epigenetically programmed to do what it does, and that the progeny of that cell is destined to be, say, a neuronal cell and not a connective tissue cell. To claim that we have all the answers would be false. For Darwin to claim that he had all the answers would have been false, just as it would be false for a scientist in 50 years’ time to claim they had all the answers.

You state: “As already stated, the simple fact is that the morphological information required to create cells and programme them to work in unison to create skin, nerves, veins and arteries does not exist in the DNA.” You are correct, in part. We used to think it was all in the DNA sequence, but we were wrong. Well, we were partly wrong. This doesn’t mean we discount everything that was proposed previously, as we were only partly wrong. We discount falsified or disproved hypotheses, and we present new evidence to support alternate or ‘evolved’ hypotheses. As we say in the trade: “It’s a little more complex than we thought”.

Reply from Creation Foundation

Thank you for your reasoned comment. I am aware of epigenetics and have another blog item, “Epigenetic antics”. But how do you answer the simple question of how cells at one location somehow know they have to create a finger, and those at another place a foot? Surely there must be some kind of plan or pattern. What kind of epigenetic influence can do that? Hence Sheldrake’s morphic field. I suppose it is reasonable for someone to remain an evolutionist, despite the problems, saying, like Jerry Coyne: “Don’t expect us to explain everything yet. But one day we will find out”.

Posted by TheOtherDrX

You state: “But how do you answer the simple question of how cells at one location somehow know they have to create a finger, and those at another place a foot? Surely there must be some kind of plan or pattern. What kind of epigenetic influence can do that?”

Well, in short, it is not a simple question (for me); however, a well-qualified developmental biologist may disagree!

We know that master regulator genes coding for proteins such as BMPs, Wnts, Shh and Notch all act in concert to create the correct tissue at the correct time. Early signals (e.g. a particular Wnt signal) effectively program the cell to add an epigenetic tag, which gives the cell, in essence, a ‘memory’ of the signals it has received. These epigenetic changes silence or switch on whole arrays of genes. I particularly like the HOX genes, which give identity to body parts/segments. Compare two cells next to each other, one in a slightly different environment from the other. Cell 1 receives signals a, b, c, d due to its position in the developing embryo. Cell 2 receives signals a, b, c but not d. The lack of signal d could be due to its physical location, e.g. on the ‘outside edge’ of a 16-cell ball of cells, resulting in it becoming extra-embryonic tissue, for instance. These two cells will now respond differently to signal X, thanks to their differential receipt of signal d, even though they both receive signal X. This sequential epigenetic programming seems to occur at all stages of development in an ordered and programmed manner.
So where does the finger/foot/thorax decision come in? Read up on HOX genes along with temporal and spatial co-linearity. Fascinating stuff.
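The two-cell example above can be caricatured in code. This is a deliberately crude sketch, with the signal names (‘a’ to ‘d’, ‘X’) and lineage labels invented purely for illustration: each cell records the signals it has received as a set of ‘epigenetic tags’, and its response to the later signal X depends on that recorded history.

```python
# Toy model of epigenetic 'memory' (illustrative only, not real biology).
class Cell:
    def __init__(self):
        self.tags = set()  # record of past signals: the cell's 'memory'

    def receive(self, signal):
        self.tags.add(signal)  # each signal leaves a persistent tag

    def respond_to_X(self):
        # The same signal X is interpreted differently depending on
        # whether the cell previously received signal 'd'.
        if "d" in self.tags:
            return "embryonic lineage"
        return "extra-embryonic lineage"

cell1, cell2 = Cell(), Cell()
for s in ["a", "b", "c", "d"]:  # cell 1: inner position, sees all signals
    cell1.receive(s)
for s in ["a", "b", "c"]:       # cell 2: outer edge, never sees 'd'
    cell2.receive(s)

print(cell1.respond_to_X())  # embryonic lineage
print(cell2.respond_to_X())  # extra-embryonic lineage
```

Both cells run the same ‘program’; only their recorded signal history differs, which is the point of the paragraph above.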

I prefer this explanation (with its supporting evidence) to Sheldrake’s morphic field. He may even be trying to explain the above concepts, including the epigenetic role in generation-to-generation ‘genetic memory’, but without a sound appreciation (or knowledge) of epigenetics, perhaps? I don’t know for sure, as I don’t (and wouldn’t!) claim to have read his books.

Reply from Creation Foundation

Do you really think all the incredible complexity you have been studying came about by the action of environmental pressures on random DNA copying errors? It is becoming clear that each cell is a tiny factory unit, set up to manufacture whatever range of protein building materials the body may require. I think this might be compared to creating paints for an artist or designer. But where is the artistic genius? Clearly not in the DNA. Anyway, that’s my opinion. According to the Bible there is a ‘spirit’ in man and animals, much like Sheldrake’s field. I don’t think science can deal with that. PS If you are the ‘other’ Dr X, then who is the ‘other’ Dr X?

Posted by TheOtherDrX

I have addressed your points below.

1) You state “Do you really think all the incredible complexity you have been studying came about by the action of environmental pressures on random DNA copying errors?”

Yes. The fact you say it is incredible affirms your position. To me it is ‘credible complexity’, but that is central to how we differ in our ideology.

2) You state “It is becoming clear that each cell is a tiny factory unit, set up to manufacture whatever range of protein building materials the body may require. I think this might be compared to creating paints for an artist or designer. But where is the artistic genius? Clearly not in the DNA.”

Every cell has a role to play in the multicellular organism. It knows what it needs to do based on environmental cues, and on epigenetic changes induced by earlier environmental cues. You say “clearly not in the DNA”. Correct, in part. Better to say “clearly not in the chromatin”. Then you get the sequence of the DNA, you get the epigenetics of the DNA (the methyl groups on C nucleotides that you mention in your epigenetics blog post), and you also get the other group of epigenetic changes, on the histones, such as methylation and acetylation, not to mention deimination and phosphorylation, all of which have specific roles depending on which amino acid of the histone they are on. Is this the ‘artistic genius’ that you mention? As I said before, it’s more complicated than we thought, and clearly more complicated than you thought. Since you didn’t engage with any of the ‘further epigenetics’ from my last comment other than to reject it all, I suspect a certain lack of molecular biology beyond the very basic epigenetics. At least engage with this before dismissing it. It really is very interesting…

3) You state “According to the Bible there is a ‘spirit’ in man and animals, much like Sheldrake’s field. I don’t think science can deal with that.”

Science cannot deal with things that cannot be measured. For example, there is currently a highly debated discussion in physics on whether ‘string theory’ should be adopted as scientific fact, or just remain a theory, in the absence of measured evidence. Most rational scientists say not.
See: http://www.nature.com/news/scientific-method-defend-the-integrity-of-physics-1.16535
Why can’t science deal with Sheldrake’s ‘spirit field’? In the same way that many physicists cannot accept string theory as fact: because there is no evidence for it. You can say whatever you want if you can NEVER be disproved. That’s a good reason NOT to accept it as scientific fact. I assume you have read Karl Popper on falsifiability? It is central to scientific enquiry, something the ‘string theory’ lot should probably think about…

4) You ask “PS If you are the ‘other’ Dr X, then who is the ‘other’ Dr X?”

That’s easy: she is not the ‘other’ Dr X, she is Dr X in her own right. She got her PhD before me, hence it is me who is the ‘other’ Dr X.

Reply from Creation Foundation

It has been said that we decide what to believe, then look for arguments to support our opinion. Personally, I cannot accept that the increasing complexity you seem very familiar with could possibly have evolved by the action of environmental factors on randomly generated DNA copying errors.

You are correct about my knowledge of this complexity being limited. Physics is my subject. I am therefore acting in the role of an investigative reporter which is why I get accused of ‘quote mining’.

Evolutionists, like creationists, are clearly not all of one mind. I often quote Derek Hough, for example, a British evolutionist of evangelical zeal who used to actively engage people around him in conversation in the hope of converting them to evolution. Then one day, on the way to work, he realised that Darwinism was infantile nonsense. But, as per my initial comment above, he is still an evolutionist, in search of a credible mechanism. Hence his ‘self-developing genome’ idea, which is actually saying what Genesis says.

Going back to my second point, how can a single cell play a meaningful part in an embryo, such as knowing it has to work on making a right hand rather than a left one? Of course, it is easy to ask questions. So not being able to answer a question does not mean someone is wrong.

I know that some people suspect that cells can somehow communicate, perhaps a bit like birds in a flock, or ants or bees in a colony.

Posted by TheOtherDrX

Similar mechanisms are at play throughout the developing organism at different stages. The cells are able to make a hand (well, part of a hand, in fact; just one tissue type); it just depends which side they end up on. Plenty more evidence where this came from.

You say: “It has been said that we decide what to believe, then look for arguments to support our opinion.”

I would like to think that in science it is the opposite way round: we look at the arguments and evidence, then decide whether or not to accept a hypothesis.

I don’t get the ‘infantile nonsense‘ bit.

Reply from Creation Foundation

Just been looking at Cell.com Thanks for bringing it to my attention. Do you work for them by any chance? I see they publish some 30 journals to help workers in the field keep up with current research? That is ‘complexity’. My point about infantile nonsense was that that is how Hough came to regard current evolutionary theory. However, he still thinks there is a credible mechanism if only he could find it. He is still an evolutionist. So, with regard to your point, is he not looking for arguments to support his opinion — which the facts strongly suggest is wrong?

Posted by TheOtherDrX

You state “Just been looking at Cell.com Thanks for bringing it to my attention. Do you work for them by any chance? I see they publish some 30 journals to help workers in the field keep up with current research? That is ‘complexity’.”

No I don’t work for them, I am an independent scientist, and yes, there are a lot of journals as we have a lot of specialist scientists publishing in what might appear to be very niche areas. However those publishing in Cell are some of the highest-regarded scientists in life sciences. And before you ask, no I haven’t published there (I wish…)

As for Hough, he may be looking for a credible mechanism but overlooking the obvious. Darwin wrote: “But we are far too ignorant to speculate on the relative importance of the several known and unknown laws of variation; and I have here alluded to them only to show that, if we are unable to account for the characteristic differences of our domestic breeds, which nevertheless we generally admit to have arisen through ordinary generation, we ought not to lay too much stress on our ignorance of the precise cause of the slight analogous differences between species.”

He was admitting that he didn’t know the mechanism. I really like that in a scientist: being able to admit that we don’t know, due to a lack of credible evidence, rather than over-stating observations to support an ideology. Darwin’s ‘unknown laws of variation’ are quite credibly DNA copy errors, but it seems unfair to criticise Darwin for this, given that he produced his work before DNA had been discovered. What does one expect? The initiator of a whole new field of science to also master molecular biology 100 years ahead of everyone else? I think DNA copy errors satisfy Darwin’s ‘unknown laws of variation’ very nicely, and given a modern genetics course, I suspect Darwin would have been far more content with his work.

DNA copy errors are a fascinating topic, and having studied them particularly in cancer, where error rates often exceed those in normal cells by over 100-fold, it is easy to observe clonal evolution occurring over short time-spans. Not species evolution, of course, but the ability to apply selective pressure and facilitate the survival of specific phenotypes, albeit at a single-cell scale, gives an insight into what could happen over longer periods of time, and indeed, what does happen during the course of the disease.
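The clonal evolution described here can be caricatured as a toy simulation: cells divide, copy errors occasionally perturb a cell’s ‘fitness’ under some selective pressure, and each generation only the fittest survive. All rates and numbers below are invented purely for illustration; this is not a model of real tumour biology.

```python
import random

random.seed(42)  # reproducible toy run

def simulate(generations=30, pop_size=200, mutation_rate=0.01):
    """Toy clonal evolution: selection acting on random copy errors."""
    population = [1.0] * pop_size  # start as identical clones, fitness 1.0
    for _ in range(generations):
        offspring = []
        for fitness in population:
            for _ in range(2):  # each cell divides into two
                f = fitness
                if random.random() < mutation_rate:
                    f *= random.uniform(0.5, 1.5)  # random copy error
                offspring.append(f)
        # Selective pressure: only the fittest half survive.
        offspring.sort(reverse=True)
        population = offspring[:pop_size]
    return population

final = simulate()
print(f"mean fitness after selection: {sum(final) / len(final):.2f}")
```

Most copy errors are neutral or harmful, yet the rare beneficial ones are amplified by selection, which is the point of the paragraph above.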

Finally, you say: “So, with regard to your point, is he not looking for arguments to support his opinion — which the facts strongly suggest is wrong?”

He may well be looking for facts to support his hypothesis, I don’t know, but exactly which facts are so wrong? I need it spelling out to me.

Reply from Creation Foundation

Thanks for your comments. I have been reading up on histones and chromatin. Talk about complexity!

I think the subtle thing is the difference between micro and macro variation, as noted by Filipchenko(?) back in 1927, i.e. there is an evolutionary mechanism, but it is limited. I imagine you are familiar with Darwin’s second theory of Pangenesis, which seems like an attempt to give a mechanism for Lamarck’s observation that organisms seem to sense their environment and mutate and adapt accordingly in a meaningful way. I mention some of this in my book EvoGenesis. Thus it seems Lamarck was right, but Darwin had it backwards. Lamarck’s theory explains why all those imagined transitional forms cannot be found: because they never existed, for the reason just given. “Nature” does not work in that random hit-and-miss manner.

Perhaps it could help your personal career, and get you ahead of the competition, if you could accept that God or “nature” has engineered limited variation into all organisms. Hence you could forget about chance and random variations and try to work out how the Lamarck/Panspermia/After-their-kind mechanism really works. Darwin tried the gemmule idea, but it got disproved by his cousin Galton.

Don’t know much about it, but people like Mendel apparently understood that you can induce variation by subjecting plants to various pressures. Might be worth studying the methods of plant and animal breeders. Do they have special techniques?

I can see your Nobel Prize already! But you will have to beat Hough to it.

Posted by TheOtherDrX

You state “Don’t know much about it, but people like Mendel apparently understood that you can induce variation by subjecting plants to various pressures. Might be worth studying the methods of plant and animal breeders. Do they have special techniques?”

I have done and published this experiment with cancer cells. I induced variation by exerting the selective pressure of ‘invasiveness’, i.e. the ability of cancer cells to degrade and migrate through a protein membrane as an in vitro model of cancer spread. Interestingly, we see phenotypes and genotypes not seen in the bulk unselected population. Why is this? These invasive cells are less proliferative, so they are relatively rare (DNA copy errors and chromosome aberrations at play) and are not readily seen in analysis of the bulk population. They get outgrown, but they are there all along; I would have to analyse thousands of individual cells to see them. If the selective pressure is for high proliferation (normal cell culture conditions), I cannot detect them easily. If the selective pressure is on invasive ability, I can isolate and observe the invasive cells quite easily.

This is analogous to your statement about Mendel; however, it is clear that there are limitations on the degree of variation and how successful it can be, in part dependent on the low rate of DNA copy errors and a one-year germination-to-germination cycle. This variation can be accelerated by ‘quasi-random’ mutagenesis. I understand that experiments to create ‘drought-resistant wheat’ can only go so far, as shown by the failed Lysenkoism experiments.

Transgenerational epigenetic inheritance can actually be shown experimentally, to some extent, thanks to those pesky histone modifications: events in the current life being passed on via histone modifications to gametes, changing the phenotype of subsequent generations. I am not an expert in this field, but it is likely not DNA in this case, but inherited histone modifications giving transgenerational traits independent of the DNA sequence. This is why, in previous comments, I focus on chromatin, not just DNA, as a means of inheritance.
For a nice example of transgenerational inheritance (in this case to an olfactory stimulus) see: http://www.nature.com/neuro/journal/v17/n1/full/nn.3594.html

Maybe Lysenko wasn’t that far wrong with his failed crop experiments; he was just far too ambitious, and working from the wrong genetic basis. As someone who has published on ‘phenotypic plasticity’ in cultured cells, I do so with a clear idea of the genetic mechanism at play (histone epigenetics), and as yet, I have not been accused of ‘quackery’.

I stick firmly with Lamarck having it backwards. Organisms don’t sense changes and mutate in response, just as my cancer cells don’t sense that I ‘want’ them to invade; DNA mutations allow exploitation of a change in environment (the invasive cells now being isolated from the faster-growing bulk population), and successful mutations (causing the invasive phenotype) predominate in the selected population. The same is most likely true for histone modifications.

Reply from Creation Foundation

Complicated stuff, but you clearly have practical experience. Darwin once suggested that variations already existed in organisms, being “written in invisible ink”. So perhaps variation takes place not by natural selection working on DNA errors, but by favouring constructive variations already existing in a minority of a population. I understand that people working with microbial resistance to antibiotics find Lamarck’s theory fits the facts better than the DNA error model.

With regard to transgenerational inheritance, have you considered the statement in the Ten Commandments about the “sins of the fathers” being “visited on their children to the third and fourth generation”? Sins, or unwise actions, it seems, such as lack of hygiene, sexual sins, etc., could have an epigenetic effect as they affect the DNA. The Bible suggests that the effects pass after two or three generations. Do people get cancer because of the “sin” of eating food polluted with all manner of exotic chemicals, all added in pursuit of the almighty dollar?

I will check the source you mention, but cannot promise to understand it!

Posted by TheOtherDrX

You stated: “Darwin once suggested that variations already existed in organisms, being ‘written in invisible ink’. So perhaps variation takes place not by natural selection working on DNA errors, but by favouring constructive variations already existing in a minority of a population.”

1) It is generally accepted that variations which pre-exist prior to selection are the direct result of DNA copy errors and/or histone modifications and/or DNA methylation, although DNA copy errors are the major and most obvious group to most people. This is where our two views are back to front. I stand by my view of chromatin events (mostly DNA sequence changes) causing variation, and selection acting preferentially on a minority population in a Darwinian manner. We see whatever is preferentially selected for. Some of these changes will result in reduced ‘fitness’ under that selective pressure and so are not seen at all. But this is not the only mechanism, and to present it as such would be slightly misleading. For more, see point 3 below.

You stated “have you considered the statement in the Ten Commandments about the “sins of the fathers” being “visited on their children to the third and fourth generation”? Sins or unwise actions, it seems, such as lack of hygiene, sexual sins, etc., could have an epigenetic effect as they affect the DNA. The Bible suggests that the effects pass after two or three generations.”

You state “Do people get cancer because of the “sin” of eating food polluted with all manner of exotic chemicals, all added in pursuit of the almighty dollar?”

3) This is a predictable interpretation, given the Biblical ‘sins of the father’ idea. But why shouldn’t epigenetic changes in response to a stimulus, in some cases, actually ‘protect’ future generations? Why should we ALWAYS be punished? Doesn’t the theory of evolution promote survival of future generations in response to an event? Well…

Take the studies in Scandinavia, where populations suffered famine in the 1800s. The grandchildren of those affected had altered weight/height which, assuming the famine was still ongoing, could have ‘protected’ the future population. In this case, DNA methylation is at play, not histone modification. Methylation of the Insulin-like Growth Factor 2 (IGF-2) gene promoter limits its expression, and so limits growth. See this excellent link, which is a ‘deleted’ unpublished chapter from David Epstein’s “The Sports Gene” book.

A similar mechanism might be at work in pygmy elephants on islands with limited resources. I seem to remember reading that the last Mammoths, isolated on Wrangel Island, were very small. That’s the only way a sustainable population could have survived as long as it did.

As for whether people might be at an elevated risk of certain cancers in response to a stimulus in a previous generation, I am not going to completely discount your notion, as I know that IGF-2 levels predict certain cancer risks. However, if I am to consider that notion, I must equally consider that the opposite may be true for other stimuli, which might be protective to future generations. This, however, is very different to being punished by God. It is being punished (and, don’t forget, protected) by elaborate genetic evolutionary mechanisms.

So, I present evidence for DNA copy errors creating a rare sub-population that a selective pressure ‘selects for’ with increased survival, and in addition, a complementary mechanism by which organisms can respond to environmental changes (developmental histone modifications and DNA methylation) and affect the next few generations. These latter mechanisms explain some variation that is independent of DNA copy errors. Maybe these latter mechanisms are not usually communicated very well in discussions like this. Maybe this is why a simplistic DNA view of Darwinism appears to be ‘infantile nonsense’, since DNA sequence alone cannot explain everything that geneticists observe. However, some of these observations can be explained independently of the DNA sequence alone, whilst still having a genetic mechanism.
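The first of these mechanisms, a selective pressure acting on a rare pre-existing sub-population, is easy to illustrate with a toy calculation. This is purely illustrative: the survival probabilities and starting frequency below are invented numbers, not measurements from any real population.

```python
def variant_frequency(generations=30, start_freq=0.01,
                      wild_survival=0.4, variant_survival=0.6):
    """Track the expected frequency of a rare pre-existing variant
    when its carriers survive a selective pressure more often than
    the wild type (a deterministic, expected-value model)."""
    freq = start_freq
    history = []
    for _ in range(generations):
        # Each generation, a fraction of each genotype survives.
        surviving_variants = freq * variant_survival
        surviving_wild = (1 - freq) * wild_survival
        # New frequency = variant survivors as a share of all survivors.
        freq = surviving_variants / (surviving_variants + surviving_wild)
        history.append(freq)
    return history

traj = variant_frequency()
print(f"start: 1.0%, after 30 generations: {traj[-1]:.1%}")
```

With these made-up numbers, the variant’s survival advantage multiplies its odds by 1.5 each generation, so a 1% sub-population comes to dominate within a few dozen generations: this is the sense in which selection only ‘sees’ variation that already exists.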

Anyway, it was the opinion of evolutionist Derek Hough (author of ‘Evolution – A case of stating the obvious’) that DNA copying errors cannot explain the complexities of nature. However, like Darwin, he feels that evolution is obviously true: axiomatic, common sense. Thus he is a believer, searching for the right mechanism.

Personally, I find it odd that nature is so complex in an organised fashion that intelligent people like you are struggling to understand how it works, yet you think that it made itself from inanimate matter!

The word for ‘sin’ in the OT comes from the word ‘to miss’, or to err, make a mistake, etc. It seems that what God is warning us of in Exodus is that behaviour can and will affect our children. There then follows some specific no-no’s. In modern parlance, I suppose he could have said, in effect: take care how you live and what you do, because it might methylate your genes, with painful results for your children. However, the methylation will wear off over 3 or 4 (not 2 or 3, as I said last time) generations.

I don’t think Exodus means that God intervenes to punish mistakes, but that he is warning us that this is how the system works: cause and effect. He accepts responsibility for what he created.

Of course, clever people want to find out how it all works. Nothing wrong with that.

Going back to Lamarck, I would suggest that the potential mutations, that ‘invisible ink’, are all of a constructive nature, not just accidental copying errors of the kind of damage that can be inflicted on fruit flies – i.e. micro-evolution. This explains why all those failed intermediate forms cannot be found. I suspect that Darwin realised that, which is why he went for the Pangenesis idea.

It is interesting, from what you said, that there seem to be changes generated internally (micro-evolution?) and those generated externally from the environment (mostly damaging?).

I have noted those articles. Thanks.

Posted by TheOtherDrX

You state “Personally, I find it odd that nature is so complex in an organised fashion that intelligent people like you are struggling to understand how it works, yet you think that it made itself from inanimate matter!”

That, to me, is a logical fallacy: something along the lines of an ‘argument from incredulity’. Nature is so complex that people far more intelligent than me will be struggling with it far into the future. The difference is, I find that credible. Consider Darwin’s situation by comparison. He didn’t even have knowledge of DNA or DNA copy errors, let alone DNA methylation, histone modifications, or the effects of famine on IGF-2 promoters. But I think he did a pretty good job given this deficit.

One final point before this discussion draws to a logical close: What are these ‘failed intermediate forms’ of which you speak?

Reply from Creation Foundation

I think you know what I mean. See chapter 6 of the Origin. One small quote: “First, why, if species have descended from other species by fine gradations, do we not everywhere see innumerable transitional forms? Why is not all nature in confusion, instead of the species being, as we see them, well defined?”

This is the great black hole in Darwin’s theory. Like the crystal spheres of the Middle Ages astronomers that supposedly moved the planets in their orbits, before Newton dispensed with them, those transitional forms cannot be found because they never existed.

I think Darwin realised in his heart that he was wrong, which is why he turned to the ideas of Lamarck, whose theory of inherited characteristics did not require transitional forms, and so fitted the facts better.

Thanks for your posts, which have been very informative.

Posted by TheOtherDrX

Oh THOSE transitional forms! I thought you were talking about the fossil record. It’s interesting how Biblical scholars dissect the text of the Bible, and do exactly the same to Darwin. Absolutely nothing wrong with asking the questions.
It’s a long time since I’ve read Darwin, and I accept that theories have difficult elements that Darwin, at the time, struggled with. It is surprising, however, that this was one of them, as to me, it is such a given.
Species are adapted to a particular ‘niche’, based on their specialisation. Species are also defined by their ability to interbreed only with each other; if different species breed with each other, they cannot produce fertile offspring. Species that cannot breed with each other cannot do so because of genetic differences (consider that we and gorillas have a different number of chromosomes, let alone all the differences at the DNA base-pair level). Typically, but not always, these differences have arisen during a period in which a larger population has been physically isolated or separated, giving time for the differences to accumulate. This may also occur via ‘niche separation’.

Consider the population of, say, Crossbills (a largish finch with a crossed-over beak that feeds on pines; you know the sort). There are 4 species in Northern Europe: Common, Parrot, Two-Barred and Scottish Crossbills. Each has a favoured but somewhat overlapping ‘niche’, and the Scottish and Parrot Crossbills are so similar that until recently, it was not realised that both are resident in the Caledonian Forest in Scotland. They might well represent very rare examples of ‘transitional forms’, but they might be too dissimilar for your definition of that. Scottish and Parrot Crossbills are largely separated by the size of their beak, and the unique call of the Scottish Crossbill, and are clearly distinct from the Common Crossbills, which are smaller. The Parrot Crossbill is a specialist on Scots pine. Scottish Crossbills also favour Scots pine, but take larch and some other conifers. The Common Crossbill seems to favour larch, spruce and, to some extent, Douglas fir. The Two-Barred Crossbill is a larch specialist.
Now consider if all Crossbills in Northern Europe were a continuous intergrade of ‘transitional forms with varying sized beaks’. Within any single ‘niche’, such as northern larch, where the Two-Barred Crossbill is a specialist (aided no doubt by its flashes of white offering camouflage given the frequent presence of snow), if the Crossbills there had variable beak sizes, none, or very few, of them would be an exact specialist. All might be able to get by, given limited competition, but would be by no means brilliant at extracting the seeds. However, those that are brilliant at extracting the seeds predominate, and their genes predominate, giving the Two-Barred Crossbill a distinct phenotype, and a genetic profile that now cannot breed with Common Crossbills or any other Crossbills. Similar mechanisms are at play with the intergrade Scottish/Parrot Crossbills in Scotland, but this is a more subtle and recent ‘split’. Specialisation drives speciation. Specialisation results in isolation (physical, or maybe just ‘niche’), which allows genetic diversification to occur between the two.
There is a hole in my argument that you might have spotted: why are the trees, which are the basis of the ‘speciation’ of Crossbills, so ‘speciated’ themselves, and not a continuous intergrade?
Again, it is environment. Northern larch trees are specialists in areas where other trees cannot grow. If there were gradations of larch all over Norway, why should they all be equally good at surviving in that harshest environment, or any particular environment for that matter? They shouldn’t; they aren’t; they don’t. Only the specialists survive, given the competition for limited resources in a particular niche. The best survivors survive. That population succeeds with limited competition and expands massively until it encroaches on the niche of a different species, and at times the two might then live side by side, in competition, giving a more ‘diverse woodland’. But they cannot interbreed. Genetically, they have diverged from their ‘progenitor’ or ‘ancestor’ populations such that, at some point, they can no longer reproduce with the ancestor population, and are a new species. And those species have a specialist finch that feeds on them.

Reply from Creation Foundation

I think the adaptation to niches that you have described so beautifully is precisely what does happen in nature, as with Darwin’s finches on those islands. But the question is: how does it happen? Does the process have to wait for one, or a range of, coordinated and favourable DNA copying errors to take place?

I would say instead that God engineered ‘micro-evolution’ into organisms so that they can adapt to niches and can be deliberately bred to meet mankind’s needs.

You cleverly suggest that a change of species occurs, but in Genesis speak, those birds are all the same ‘kind’, which is why Darwin had to extrapolate his examples of cattle breeding way beyond the range of his data in order to explain ‘macro-evolution’.

I think we are talking about the twigs on Haeckel’s tree of life, when we need to take a look at the trunk and main branches. As Woese admits, evolution cannot even explain the origin of DNA or ‘life’ in the first place. So all this talk about how one form evolved into another is a bit like me planning a world tour, staying at lavish hotels, when I am old, unemployed and bankrupt, so it will never happen. It is supposition and wishful thinking.

You say you have not read Darwin for some time, and I suggest that is because he is irrelevant. Science is not studying the macro-evolution that Darwin dreamed of, but the micro-evolution God has engineered into all organisms.

P.S. You sound like a very good teacher.

Posted by TheOtherDrX

You state “Does the process have to wait for one, or a range of, coordinated and favourable DNA copying errors to take place?”

It would appear so. A gene that alters craniofacial features in mammals is also altered during the evolution of beak shape in Darwin’s finches. See this paper in Nature, published only this month.

http://www.nature.com/nature/journal/v518/n7539/full/nature14181.html
Many thanks for your kind comments. I think we have both learned something from this, even if we have not moved our views at all. I have certainly got a clearer grasp of the genetic mechanisms, now I’ve spent time thinking about it and catching up with some interesting new research.

PS: I’ve not read Darwin for a while, as my work is not really focussed on environmental-type genetics. It is only tentatively related to what I specialise in, which is cancer.

Creation Foundation replied

Thank you for your helpful comments.

Final note from TheOtherDrX:

I have clarified the order of comments, and corrected some obvious typos and grammatical errors in the original posts. Otherwise, the comments are as originally published on EvoGenesis. Feel free to comment, even if only to correct me on my genetics knowledge, or to debate the evolution of Crossbills!

Bypassing the Remembering and Understanding steps when answering exam questions using Google.

I was challenged recently at a Learning and Teaching conference about how I assess students in Biosciences. Apparently my assessment methods are ‘not authentic’ and ‘overly test knowledge’, on the basis that almost every module on my course is, in part, assessed by a formal exam. There was some mention of a dodgy Einstein quote derived from “[I do not] carry such information in my mind since it is readily available in books. The value of a college education is not the learning of many facts but the training of the mind to think.” I’m sure context is everything in this quote, and I very much doubt he was talking about fundamental knowledge underpinning established theories in physics. That got me thinking: is knowledge worth testing anymore, given that everything is accessible from Google? Furthermore: Seth Godin, in his TEDx talk on education, said, “There is zero value in memorizing anything, ever again. Anything worth memorizing can be looked up.” Sugata Mitra, in “Living in the age where knowing is obsolete”, suggested that answers to Physics tests could be googled and discussed, rather than answered alone in an exam. Eric Mazur suggested that any test whose answer can be googled is not an authentic test. There are many other examples, as outlined very nicely here (sorry, this blog by @webofsubstance has since disappeared). There are two inter-related themes going on here: knowledge is ‘out there’ and accessible, so there is little benefit to memorising it; and even if knowledge is good, testing of knowledge is bad. Related to this final point, all assessments should be ‘authentic’.

As a relative novice to ‘authentic assessment’, I was keen to find out more. In the style of the above experts (excluding Einstein), I Googled it, and came up with a link via Wikipedia to a definition of authentic assessment. A brief search suggested that the article is written by an expert on the topic, so I’m confident in the author being an authority on the subject. I have now read this, and will discuss the document in relation to my assessment methods, which are really quite traditional (also known as ‘bad’). In ‘authentic assessment’, the assessment tool should mimic real-life situations. So having knowledge and applying it to a situation seems OK, but not testing the knowledge directly, which I think is what Eric Mazur is getting at. However, if someone cannot complete an ‘authentic assessment’, what exactly is limiting the student’s progress? Is it an inability to recall the required knowledge to apply it in context, or an inability to apply their acquired knowledge in context? If Google is allowed in the examination, as proposed by some experts, then we are focussing on the application of knowledge. Sorry, did I say knowledge? It’s just that it’s not necessarily knowledge, is it? It’s more of a ‘state of transient observation’ whilst the information is on screen. Either way, the student got to the answer in a real-life, connected-to-the-internet sort of way, so full marks all round. Well done. I’m not convinced.

My view of assessment, and in particular ‘authentic assessment’ in science, is that it should rely upon some known facts, and students should be able to apply those facts to conclude or deduce something, like an authentic scientist. As a research scientist, I have had some ideas over the years, and none of them have come about by not knowing anything about the topic. In fact, I’d go as far as to say that ALL of what I’d call half-decent project ideas have come from deep understanding of two or more disparate topics in great detail, and making some sort of interacting link between them. So the question is: am I assessing in a manner that is authentic to science? The classic inauthentic/traditional test is an MCQ test. It is not very often in real life that you have to take an educated punt at 4 possible answers, three of which are deliberately contrived to trip you up. However, it can test knowledge. I find a close correlation between MCQ scores and scores on more analytic problems, but rarely can MCQs be designed to test the latter. Where does the traditional University essay-style exam fit in? Authentic or not? It depends on the wording and what is being tested, and yes, many exams at lower levels do not really test anything other than pure recall of the lecturers’ notes. However, any scientific or research report, where the introduction is a descriptive knowledge base, ideally with a hypothesis based on knowledge, and the discussion contains critical analysis of evidence, is essentially what many of us aim to replicate in traditional essay-style exams. So consider an essay question along the lines of:

Discuss the role of X and Y in the process of Z. In answering this question, explain the relative contributions of X (and/or Y) to Z in some relevant setting.

This is a fairly typical degree-level essay exam question. It requires some knowledge recall, in that for good marks, you need to define X and Y before putting them in the context of process Z. In the second part, the students have to make some judgements about relative importance based on knowledge, which is not possible from memorising a 2D flow diagram. This structure is used below in one of the cancer-related exam questions that I might ask of final-year degree or Master’s-level Molecular Pathology students. Forgive the technical detail on this, but it is really very, very interesting…

Discuss the role of p53 and pRb in regulating the cell cycle. In answering this question, explain the relative contributions of inactivation of the genes coding for p53 and pRb in cervical cancer vs. colon cancer. (Feel free to skip the next bit, but for those who are bothered or interested:)

p53 and pRb are proteins that stop cells dividing at different stages of the cell cycle (that is, the time from one division to the next) by inter-related but distinct mechanisms. Loss of these (by gene inactivation, or protein degradation) promotes tumour growth. In cervical cancer, the genes coding for pRb and p53 are very rarely inactivated, because Human Papilloma Virus (HPV) inactivates/degrades the p53/pRb proteins, so there is no selective pressure to inactivate the genes coding for them. In contrast, the p53 gene is usually mutated (inactivated) in colon cancer, and pRb is either inactive, or something controlling pRb ‘messes up’ the cell cycle control process…
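For what it’s worth, the logic of the two routes to the same end-point can be caricatured in a few lines of code. This is a deliberate oversimplification (real checkpoint biology is far messier, and loss of either protein alone already deregulates the cycle), but it captures the point that the functional outcome, not the route, is what matters:

```python
def cycles_unchecked(p53_functional: bool, prb_functional: bool) -> bool:
    """Caricature: a cell divides without restraint only when both
    checkpoint proteins have been lost, whatever the route."""
    return not p53_functional and not prb_functional

# Colon cancer route: the genes themselves are mutated/inactivated,
# so no functional protein is made.
colon = cycles_unchecked(p53_functional=False, prb_functional=False)

# Cervical cancer route: the genes are intact, but HPV degrades or
# inactivates the proteins, so the functional state is the same.
cervical = cycles_unchecked(p53_functional=False, prb_functional=False)

print(colon, cervical)  # both True: same end-point by different routes
```

The point of the caricature is that the students must explain why the two tumour types arrive at identical values by entirely different mechanisms.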

Is this an authentic assessment? Well, according to many experts in education, and the expert definition of authentic assessment, no it isn’t, given that I can get the answer from Google and that it is not a ‘real-life’ situation. I can get nicely written reviews on the topic, and even Wikipedia distils this down to the basics quite nicely, so no, this is not authentic at all. Here are the 5 ‘not-at-all-trying-to-induce-a-false-dichotomy’ features of Traditional vs. Authentic assessment by Jon Mueller, as obtained via Wikipedia.

Traditional Assessment vs. Authentic Assessment:

1) Selecting a Response vs. Performing a Task

2) Contrived vs. Real-life

3) Recall/Recognition vs. Construction/Application

4) Teacher-structured vs. Student-structured

5) Indirect Evidence vs. Direct Evidence

In my essay, students are 1) performing a task which is authentic to any scientist. Yes, they are selecting a response from memory and relaying that information, but they must apply their knowledge of why mutations arise in a certain manner. It is a 2) real-life task, in that it could represent the key elements of a justification/introduction for a research proposal or research paper studying these genes in either cancer type. Yes, this is all on Google, but how would you know what to search for without the appropriate knowledge base and an understanding of how to put that information into context? Although the first part of the exam question above relies on traditional 3) recall of knowledge (“Discuss the role of p53 and pRb in regulating the cell cycle”), the question then asks students to apply this knowledge, and hopefully construct their own hypothesis of why certain events occur. They can only properly apply the knowledge if they understand two important concepts that have been taught previously, but are not specifically asked for. They must also recall and relate the causative events in colon cancer. They must then ‘explain the relative contributions of inactivation of genes coding for p53 and pRb in Cervical cancer vs Colon cancer‘. So although both pathways are ‘messed up’ differently in each tumour type, the net effect is the same: cells grow fast and refuse to die. That is an important point to appreciate, and this essay allows me to assess whether the students have grasped this concept. So how about 4) Teacher-structured vs. Student-structured (whatever that means)? As helpfully defined by Jon Mueller, “A student’s attention will understandably be focused on and limited to what is on the test. In contrast, authentic assessments allow more student choice and construction in determining what is presented as evidence of proficiency.” Well, it’s an exam, designed and written by myself.
I really don’t understand how a student-structured task makes it more authentic, other than to tick the ideologically ‘non-traditional’ box. I suppose I could leave it up to the students to write about a tumour type of their choice, or a cell signalling pathway of their choice, but that might miss some conceptual nuances that I want to test. As the teacher/‘expert’, I know that these pathways are relevant to ALL cancers, unlike a student’s possible choice of some random cell signalling pathway. I know that studying these two tumours highlights crucial differences in the mechanisms by which these important regulatory pathways are bypassed in cancer. Studying, say, colon vs. pancreatic cancer wouldn’t cover these nuances, but the students don’t know that. Finally, what about 5) Indirect evidence vs. direct evidence? Again, as helpfully defined by Jon Mueller: “What thinking led the student to pick that answer? We really do not know. At best, we can make some inferences about what that student might know and might be able to do with that knowledge. The evidence is very indirect, particularly for claims of meaningful application in complex, real-world situations.” I know why a student has come to the final conclusion at the end of my essay question, as I have tested the underlying fundamental knowledge first, and seen how they have applied their interpretation of that knowledge to a situation that requires synthesis of ideas from distinct parts of the curriculum. So yes, direct evidence is used to assess the student on application of knowledge. So the startling conclusion is that essay-style questions under test/exam conditions can be ‘quite authentic’, even if the answer can be obtained directly from Google. It also leads us to conclude that being able to Google the answer to tests is not necessarily a good way of assessing students. Finally, if a quote appears to be obviously ridiculous, it probably is ridiculous.
There are so many situations where we cannot reasonably be attached to the internet. Fundamental knowledge recall will always be needed, and hence it’s probably worth ‘testing’ whether students have this prior to academic progression. As a scientist, here are just a few:

1) Listening to a conference paper presentation and wondering how X relates to Y, and whether it might also relate to Z, which the authors had not considered (but will form the basis of my next research project).

2) The PhD viva voce examination. A must for any independent research scientist.

3) Any viva voce examination at any level, so a must for any of my BSc or MSc graduates.

4) Being interviewed by the media on how your recent research findings are relevant to X, and how they might relate to something else (usually quite obscure…).

5) A lab meeting with the professor: “How does X relate to Y? What is Y? Why are you studying Y if you don’t know what Y is…? Get out of my office…”

6) Being interviewed for a job, where getting an interview may in part be based on your knowledge base and ‘decent grades’.

“Oh hang on, I’ll just ask Google”. Yes, that will go down well…

PS: I rarely use Wikipedia as an initial research tool, but given the Godin, Mitra and Mazur quotes, I thought I would give it a go. Sorry. I won’t do it again.

PPS: In future posts I’ll discuss how I get students to answer questions based on interpretation of data from scientific research papers under exam conditions. I’d go as far as to say that this is also ‘somewhat authentic‘.

Should we adopt more active learning at the expense of cutting the STEM curriculum?
https://theotherdrx.wordpress.com/2014/11/28/should-we-adopt-more-active-learning-at-the-expense-of-cutting-the-stem-curriculum/
Fri, 28 Nov 2014 15:47:36 +0000

A few things have been troubling me recently. Do I teach too much stuff? Is teaching on my course too didactic? Am I over-reliant on knowledge transfer and passive learning? Do my students forget everything they have learnt on the course once they have sat their written formal exams? Are my students’ BSc/MSc marks not reflective of their ability/knowledge/skills at graduation? I’m going to go with the answer of ‘probably not’, just for now, despite negative comments about my preferred teaching style.

I have discussed a couple of major studies on active learning vs. passive learning in STEM subjects in my blog here and here. Despite these rather critical post-publication peer reviews, I am certainly not against what are described as ‘active learning’ approaches. The MSc course that I run has no more than 30% of the marks awarded from formal written exams based around ‘scientific lecture content’. This is very low for the sector, so on the face of it, not overly traditional. There is extensive problem-based learning, laboratory work, lab reports, group assignments, presentations and written essays, professional skills and project work, much of which consolidates learning of lecture content. However, for the ‘formal taught scientific content’ parts, I do still tend to give 2-hour lecture sessions, where I am talking for anything up to 80% of the time. When I am not talking, I am encouraging some active learning by asking questions, a bit of peer instruction here and there, debating, doing the odd quiz, getting students to answer questions embedded in the lecture notes, and sometimes testing prior knowledge before I have even started talking. Typically, I upload all lecture materials and supporting reading onto the VLE prior to sessions, so students don’t arrive ‘cold’ and can attempt some activities beforehand. These ‘active learning’ activities within the formal lectures do seem (anecdotally, I admit) to encourage learning within the session, and also highlight to me misconceptions due to the absence of required prior knowledge, or things that I just haven’t explained very well.

So if these ‘active learning’ sessions are so useful in my lectures, why not run the full 2-hour session in this manner? Flipped-classroom teaching has been proposed by some, whereby lectures are pre-recorded and watched prior to the session, leaving the full lecture time free for discussion. Others propose discovery learning and problem-based learning, which promote deeper understanding than passive learning. For me, the problem with 100% ‘active learning’ approaches comes down to content. Yes, there is lots of it in Bioscience, and the lecture is efficient in this regard. However, within STEM subjects, I am slightly troubled by the end-point of many discussions, whereby the proposal is to decrease content in favour of depth of learning. It’s not that my students’ answers in exams are superficial. The best of them already give very deep, reasoned, well-thought-out answers that at times really test the tutors’ own knowledge. I really don’t see broad, superficial learning from those at the top. However, follow any educational conference or L&T Twitter discussion and you will perpetually hear the following negative remarks:

‘the answer (for active learning approaches) in many cases is to cut out huge swathes of content’ Narrative: Too much content

‘try cutting out stuff and everyone suddenly has a vested interest’ Narrative: Too much content, much irrelevant

‘reduce content as this will promote deeper approaches’ Narrative: Too much content, specialise in fewer areas

‘Handing out lecture notes = admitting there is a problem with the lecture’ Narrative: Too much content to remember in lecture session

‘if you believe that lecturing is simply the transfer of information then you will soon be out of a job’ Narrative: Lectures (and lecturers) are rubbish

All of these comments are from discussions of active learning and technology-enhanced learning approaches over more traditional approaches. Probably the most concerning is the notion that these active learning approaches are ‘better’ when we need to cut out vast amounts of content for them to be properly implemented. If 100% active learning approaches cannot deliver the required content to the same learning outcomes, then is it necessarily ‘better’ than a predominantly traditional-based or mixed active-traditional approach, or just different? Do we need to amend the learning outcomes to make the method of delivery ‘fit’?

Let’s put it another way. If my manager asked me to cut out half of the content from my course to make it ‘easier’ for some weak international students to get a ‘deeper understanding’ of specific topic areas, I’d consider this to be dumbing down, and rightly so. Recent events at Anglia Ruskin highlight the potential for falling foul of the QAA on precisely this matter. This case highlights the importance of tightly adhering to validated learning outcomes, if nothing else. If I did the same, and removed half of my content but cited a switch to 100% active learning approaches throughout all of my sessions as the reason, I’d probably get promoted into the faculty L&T team for it. And there lies a problem.

So, the question is: Do we deliver too much ‘taught scientific content’ in STEM? If we don’t, then deliver all sessions by whatever methods work well for you, be it 100% active learning, a traditional lecture, or my preference of a lecture with some active learning ‘nuggets’, and continue to assess to the same standards as existing courses at similar institutions. If we do genuinely deliver too much content, then cut it down, and fully justify that decision based on comparisons with similar established courses at similar institutions. However, cuts to an existing course curriculum should not be made for the sole reason that it doesn’t fit with our new preferred pedagogy.

]]>https://theotherdrx.wordpress.com/2014/11/28/should-we-adopt-more-active-learning-at-the-expense-of-cutting-the-stem-curriculum/feed/5theotherdrxComments on Deslauriers “Improved learning in a large-enrollment physics class”.https://theotherdrx.wordpress.com/2014/11/20/comments-on-deslauriers-improved-learning-in-a-large-enrollment-physics-class/
https://theotherdrx.wordpress.com/2014/11/20/comments-on-deslauriers-improved-learning-in-a-large-enrollment-physics-class/#commentsThu, 20 Nov 2014 12:32:41 +0000http://theotherdrx.wordpress.com/?p=67Continue reading →]]>The role of traditional teaching is taking a beating from some/many quarters in HE. Often the evidence against traditional teaching comes from educational research which, by the standards of a typical scientist, is rather questionable. This is a pity, as the very positive messages about non-traditional methods can become lost. It gives entirely traditional teachers the opportunity to disregard the findings as nonsense, when clearly there is benefit to using methods other than a straight 2 hour Powerpoint monologue. I have discussed the Freeman paper on active/passive learning in STEM, which is very influential but has its issues, as discussed previously here. Another highly influential paper on active/passive learning was published in Science (Deslauriers et al 2011, Improved Learning in a Large-Enrollment Physics Class, Science, 332, 862-864). This is one of the most influential journals in science, so only studies that are highly relevant and conducted using the finest experimental design are even considered for peer review, let alone published. This study therefore probably deserves more attention than your run-of-the-mill education journal article.

The study’s main outcome is that the deliberate practice concept resulted in better scores than traditional teaching in a single group of students after 3 hours of teaching, where deliberate practice was delivered by a young, inexperienced tutor and traditional teaching by an experienced academic. That’s it. One group of students. 3 hours of tuition. Two separate tutors doing two separate things. One MCQ test. No controls. Blimey! I have seen more comprehensive studies proposed for a MEd final project!

So where do we start?

The whole study is based on 3 hours of tuition by a highly rated and experienced academic instructor with above average student feedback scores (so that means good at teaching, right?) giving a traditional lecture vs. 3 hours with an inexperienced post-doc. Now the results show that the post-doc wins hands down, but the supplementary data explains that one of them is highly animated and the other not so. Why have two variables (instructor and method of tuition), especially since the study is not repeated? This is essentially an n=1 experiment with no controls.

Why not have the experienced academic deliver both methods, in part to control against the two-variable problem, and in part to show that the deliberate practice concept is transferable even to experienced traditional academics, and not just to young, enthusiastic post-docs? Even better, why not get the new post-doc to also deliver both methods, either in the same academic year or with the following intake of students? You would then have something that resembles a controlled experiment.

The students were informed that something new would be done, and told why. Since the traditional group would probably expect much of the same, the Hawthorne effect needs to be more adequately controlled for. The experimental group had higher attendance than the traditional control group, which is to be commended, and the test group also engaged better (the Hawthorne effect again?). However, is this the sole reason that scores were higher? There is no mention of any correlation of attendance with outcomes in either group, so was it the increase in attendance from 45% to 85% that increased test scores? If that factor (attendance) could be assessed in isolation, we might start to identify causative effects.

The cohort of 850 was split into two groups of approx. 270. Hmm, what about the others? They might have acted as a nice ‘no intervention’ control, if only for the Hawthorne effect. Seems a bit odd to me.

I really disagree with “amounts of learning” being used in an absolutely quantitative manner, for example “… and more than twice the learning in the section taught using research-based instruction”. Twice the learning? The assessment is by MCQ, so twice the score is twice the learning, yes? Well, no, given that some questions are 50:50, some are 1 in 5, some are rather simple (even I could answer them) and some look really quite tricky. The results may be far more, or far less, impressive than the 2-fold difference that the authors state.
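
The score-vs-learning point can be made concrete with a back-of-the-envelope guessing correction. This is a sketch under a blind-guessing assumption that is mine, not the paper’s: if students guess uniformly on items they don’t know, then once chance is subtracted, doubling the raw MCQ score can mean far more than doubling the underlying knowledge.

```python
def chance_corrected(score, n_options):
    """Estimated fraction of items genuinely known, assuming students
    guess blindly (uniformly) on items they don't know.

    Expected score = known + (1 - known) / n_options, solved for `known`.
    """
    guess_rate = 1.0 / n_options
    return (score - guess_rate) / (1.0 - guess_rate)

# With 5-option questions, a raw score of 40% vs 80% is not "twice the
# learning": the chance-corrected knowledge estimates are 25% vs 75%,
# a three-fold difference.
low = chance_corrected(0.40, 5)   # 0.25
high = chance_corrected(0.80, 5)  # 0.75
```

With a mix of 2-option and 5-option questions, as in the paper, the true ratio could land anywhere around the raw 2-fold figure, which is exactly why “twice the learning” is not a safe claim.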

The study propagates the false dichotomy myth that teaching has to be either traditional or progressive. The Freeman paper (discussed previously in my blog) included any studies where at least 10% of the session was ‘active learning’, so by definition up to 90% could have been traditional passive lecture. I doubt whether either 100% active or 100% passive is best for in-depth assimilation of the required breadth of content. All too often I hear of ‘ditching content’ to facilitate in-depth active learning, using the Freeman paper as evidence. Some of this content-ditching may be valid in favour of some active approaches, but it also gives the impression of an anti-knowledge agenda (I’ll save this for another blog).

What really disappoints me is that this type of study could have been conducted over a couple of academic years to obtain controlled data and remove all the competing variables. I still don’t think it would be anywhere near worthy of publication in Science, even if all of the findings were replicated, but at least it would be done properly. We could then make a sensible decision on what teaching methods work best for our students. As it is, this paper allows the more traditional teaching arm to disregard the evidence as rubbish or unreliable on the basis of n=1 and no controls (and hence, no causative factor identified).

I am a little concerned at the schoolboy error of describing what is clearly non-parametric data using means and (presumably) standard deviations. No attempt at any statistics was presented.
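
For comparison, a rank-based statistic of the kind such data calls for (e.g. Mann-Whitney U) is straightforward to compute. This is a minimal pure-Python sketch of the U statistic, not code from the paper, and a real analysis would also need the associated p-value:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` vs sample `b`.

    Rank-based, so it makes no normality assumption -- unlike
    summarising the groups with a mean and standard deviation.
    """
    combined = sorted([(v, "a") for v in a] + [(v, "b") for v in b])
    values = [v for v, _ in combined]
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1  # find the run of tied values
        avg_rank = (i + j + 1) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg_rank
        i = j
    rank_sum_a = sum(r for r, (_, lab) in zip(ranks, combined) if lab == "a")
    n_a = len(a)
    return rank_sum_a - n_a * (n_a + 1) / 2  # U for sample a
```

In practice one would reach for `scipy.stats.mannwhitneyu`, but the point stands: a test appropriate to the data was available and cheap.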

Let’s save the best ’til last. The authors are the tutors. OK, not ideal, as preconceived bias could creep in, but this is the journal Science. That would not happen, and would be adequately controlled for in some way. Or maybe not. The authors state “…but we believe that educational benefit does not come primarily from any particular practice but rather than from the integration into the overall deliberate practice framework”. This might be OK in a final conclusion, after seeing the data, but NOT in the section describing the design goal. They are effectively saying “We believe our hypothesis to be true, and we will be the tutors who gather the appropriate evidence to prove that we are correct”. This is not how to do science, and I am amazed that the journal Science let this part through. It is the single most disturbing part of the whole paper for me (the lack of controls is almost as bad). I have previously blogged about the white swan hypothesis. I have also written about ideology and belief, and how they should have no place in science. This paper ticks both boxes.

OK, that’s enough criticism for now, as there is a decent probability that the test group genuinely learned more, and we could distil some really good practice from this paper. I do like the subtle hint of Sceptic-Proponent collaboration in here, but that would only be valid if the traditional academic’s view was that traditional was best, and that academic had been converted by the findings.

See my initial thoughts below from when I first read the paper. Let me know if I’ve got the wrong end of the stick on this one.

]]>https://theotherdrx.wordpress.com/2014/11/20/comments-on-deslauriers-improved-learning-in-a-large-enrollment-physics-class/feed/7theotherdrxDeslauriersActive vs. passive learning in STEM: Is a little better than a lot?https://theotherdrx.wordpress.com/2014/07/11/active-vs-passive-learning-in-stem-is-a-little-better-than-a-lot/
https://theotherdrx.wordpress.com/2014/07/11/active-vs-passive-learning-in-stem-is-a-little-better-than-a-lot/#commentsFri, 11 Jul 2014 22:54:22 +0000http://theotherdrx.wordpress.com/?p=64Continue reading →]]>Lecturers who lecture have been getting a lot of stick recently for their ‘sage on the stage’, didactic, boring lectures. I have even heard it said that the brain is more active when asleep compared to in lectures (maybe that’s just my lectures), however I have yet to find convincing evidence in the published literature. Certainly the students who sleep through my lectures don’t learn much. So when Freeman et al (2014) published their study “Active learning increases student performance in science, engineering, and mathematics”, I read it with great interest. It is published in Proceedings of the National Academy of Sciences USA, a peer-reviewed journal that all scientists respect as one of the leading journals in science, not some free, online, non-peer-reviewed education journal that would publish a shopping list if the authors would pay the publication charges. Given the source journal, this is something that all lecturers in STEM, even those that wouldn’t touch an Edu Journal, should carefully read.

So, what does the paper claim? The paper is a meta-analysis: it looked at 225 previous studies on active vs. passive learning and combined the results. The study shows that across the STEM disciplines, results improved by 0.47 standard deviations, and the chances of failing were roughly halved when active learning approaches were used. Active learning worked across all class sizes, but worked better in smaller classes. The ‘active’ arm included studies with as little as 10% active learning, with >90% passive learning (didactic lectures) representing the passive group. I really don’t want to criticize so early, but isn’t this a bit of a false dichotomy?
So, >10% active learning is better than <10% active learning. I accept that the data supports this conclusion, and it fits well with observations of my own teaching where student understanding 'seems better' where I employ some degree of active learning vs. a 2 hour didactic lecture. I am not surprised by this finding and would encourage all lecturers to not just rely on a monologue. However does this mean that we should ditch the lecture? Well, no, as that is not what the data says.
If active learning should entirely replace passive learning, then we should observe a quantitative increase in attainment with increasing proportion of active learning, up to 100% active. The authors state that "we were not able to evaluate the relationship between intensity (or type) of active learning and student performance due to lack of data", which I find surprising, since they had the % of active learning and the effect size for each study. Maybe they just couldn't show a linear response or the clear significance that they expected… In the analysis of outliers with high effect sizes, the percentages of active learning were 25%, 33% and 100%. I find it difficult to comprehend how the data could not be plotted to show effect size vs. % active learning. I suspect that there was no clear link between 10% and 100% active learning and outcomes. Or to put it another way, 100% passive is probably worse than anywhere between 0% and 90% passive, but it does not necessarily follow that 100% active learning results in better outcomes than, say, 50:50 active:passive. This is just me reading between the lines, but there is a gaping hole in the data analysis required to be sure of the conclusion that active learning should represent the 'empirically validated teaching method'. This is not sufficient evidence to ditch the lecture (yet), but it is certainly strong evidence for further studies teasing out the quantitative effect of increasing student engagement within lecture-type sessions to give optimal outcomes.
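
The dose-response check I have in mind is easy to sketch. The numbers below are entirely hypothetical, not data from the Freeman meta-analysis; the point is only that, given each study’s % active learning and effect size, binning (or plotting) them is trivial:

```python
from statistics import mean

# Hypothetical (percent_active, effect_size) pairs -- illustrative only,
# NOT data from the Freeman paper.
studies = [(10, 0.2), (15, 0.3), (25, 0.9), (33, 0.8),
           (50, 0.5), (75, 0.45), (100, 0.6)]

def dose_response(studies, bins=((0, 33), (34, 66), (67, 100))):
    """Mean effect size per band of % active learning.

    A monotonic rise across bands would support 'more active is better';
    a flat or noisy profile would support 'some active beats none'.
    """
    out = {}
    for lo, hi in bins:
        effects = [e for pct, e in studies if lo <= pct <= hi]
        out[(lo, hi)] = round(mean(effects), 3) if effects else None
    return out
```

With the per-study percentages and effect sizes in hand, an analysis like this (or a simple scatter plot) is all it would take to test whether the benefit keeps growing towards 100% active.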

There are other commentaries that discuss other aspects, or weaknesses, of this study. Major concerns include whether active learning itself results in better attainment, or whether teachers who choose to employ active learning are simply more enthusiastic. Is it the method, or the delivery? Furthermore, the file drawer effect could still be at play here. Although effect sizes were symmetrical on funnel plots, there could easily be an absence of either neutral or pro-passive studies. Even if I had such data, I'd probably find it harder to publish a pro-passive learning study than a pro-active learning study.

So in summary, the data says that 100% passive learning is worse than <90% passive. Or to put it another way, some active learning is better than none. Whether this means that the highly efficient lecture should be abandoned in STEM is far from certain, and there are mechanisms to introduce active elements into lectures. What is fairly certain is that any mechanism that can engage students during a lecture is probably worth doing.

]]>https://theotherdrx.wordpress.com/2014/07/11/active-vs-passive-learning-in-stem-is-a-little-better-than-a-lot/feed/3theotherdrxThe White Swan hypothesis: How not to fool yourself in researchhttps://theotherdrx.wordpress.com/2014/07/02/the-white-swan-hypothesis-how-not-to-fool-yourself-in-research/
https://theotherdrx.wordpress.com/2014/07/02/the-white-swan-hypothesis-how-not-to-fool-yourself-in-research/#commentsWed, 02 Jul 2014 15:26:04 +0000http://theotherdrx.wordpress.com/?p=56Continue reading →]]>There is a simple test that I use to determine whether a potential graduate student is in a suitable frame of mind to undertake post-graduate research in science, and this test seems applicable to most disciplines (maybe except Maths). Ask the student how they would ‘test’ the hypothesis that ALL swans are white. Here in the UK, we have three native species of swan. They are all normally white as adults, and any UK/EU student will be familiar with the notion that swans are white. They think they already know the most likely answer before gathering any data, a common problem in research. So how do students go about testing such a hypothesis? There are two (or three) distinct approaches:

The first student would go to a local country park, where they would ‘look for’ white swans. They would find hundreds of white swans. They would go around all of the local parks and find even more white swans. Their preconceived notion of swans being white is supported. They come back to say “I have looked in all of the likely places, looking for white swans, and I counted several hundred. All swans are white, hypothesis proven”.

The second student would also go around the local parks and country parks, taking careful note of the number of white swans, but looking specifically for black swans, or swans of any colour other than white. They return to say “I have looked in all likely places looking for swans, and all of the swans I found were white. If there was a black swan, I think I would have spotted it, but I didn’t see any. Therefore, the hypothesis is supported (for now) that all swans are white”

The third student would go further afield, researching whether there have been any sightings of swans that are not white, following up on such sightings, and gathering evidence that would disprove the central hypothesis. They return to say “I have seen evidence of thousands of white swans, of three different species (as determined by size, and by bill colour), which initially seemed to support the hypothesis, but I’ve got evidence of a black swan, and it is of a different species. I also have evidence of reports of a few pairs breeding elsewhere in the UK. Apparently they are escapees from exotic bird collections, swans are black in Australia, and other species are black and white. The central hypothesis has been disproved. I will now refine my hypothesis and undertake further research on…”

The first student knew what he was looking for and sought data that ‘fitted’ his hypothesis, or belief, that all swans are white, and he gathered as much evidence as possible to convince himself (and his supervisor) that this is the case. I see this confirmation bias first hand all too often in science, and in other sectors too (see Thomas Gilovich‘s work for some excellent studies). The second student went out looking for black or coloured swans, so was seeking to disprove the hypothesis. OK, so he didn’t quite try hard enough, but came to a logical conclusion. The third student also went out looking for coloured swans, but tried that bit harder to seek out data that would disprove his hypothesis. If even he couldn’t disprove it, despite his best efforts, then he could be content that his data was of the highest quality. Reported sightings wouldn’t be enough; he needed to see one for himself. Having disproved the hypothesis, he could now focus on other specific hypotheses armed with this new knowledge: whether swans are mostly white, or whether the ratio of white:black is changing over time, are very different questions/hypotheses that can now be tested by enquiry to further knowledge in this area. Formulating the initial hypothesis is important in designing a study and recognising its limitations, and data from both students 1 and 2 would correctly support the hypothesis that MOST swans in the UK are white, but not that ALL swans are white.

Now, who would you take on for a research project to test whether a proposed intervention works, given that if the intervention works, the research team will probably get a big grant to expand the study, lots of kudos, and you will probably get a promotion? That depends on your aims. If it is to get promotion and you really don’t care whether the intervention works, or alternatively you are of the mind-set that your hypothesis can’t possibly be wrong for whatever reason, take on student 1 every time. You will be happy with the results! If it is to further knowledge, take on student 3, and if he is not available, settle for student 2.

None of this is new. It is a standard scenario (or a variation thereof) used by many scientists to illustrate reliable methods of gathering evidence, and how easily pre-conceived ideas can bias the data generated. The biggest fear is how this mind-set of the scientist can affect the data. Might student 1 see a black swan by chance, but either a) convince himself that it was a large, dark goose, or even worse b) suppress the data for fear of not supporting the hypothesis that the professor/group leader supports? Yes, it happens a lot. Read up on the File Drawer problem, where negative studies, i.e. those with no effect of treatment/intervention vs. control, tend not to get published, whereas studies with a clear effect do get published. If the data ‘fits’ current convention, then it’s even more likely to be published in a higher impact journal. This has a serious impact on subsequent meta-analyses. Read Ben Goldacre’s Bad Pharma. Read up on how studies are being retracted in science for both honest mistakes and dishonest practice. One potential solution is to design studies with individuals who have generated data that opposes your own hypothesis. Such Sceptic-Proponent collaborations should reduce confirmation bias, and maybe bring opposing factions together. At this point I could go on to discuss in detail Karl Popper‘s critical rationalism, and discuss falsificationism, but I won’t.

In future blogs, I will outline some of the black swans that I have found over the years. Some of them have made me unpopular, albeit temporarily in most cases, but all of them have advanced knowledge in some way, and allowed me to focus on something potentially more worthwhile. None of the black swans that I identified have accelerated my career as much as if I had generated data supporting the initial hypotheses, but I have at least published the findings in peer-reviewed journals to outline the problems to other scientists. This is a major part of scientific enquiry. It is not such a bad thing to have formulated a hypothesis that is subsequently disproved, as long as at the time, it was done with the best possible enquiry methods. It is a major part of science that we have to accept.

PS The smaller black birds in the image are coots.

]]>https://theotherdrx.wordpress.com/2014/07/02/the-white-swan-hypothesis-how-not-to-fool-yourself-in-research/feed/2theotherdrx_64744262_photoStrategies for improving exam feedbackhttps://theotherdrx.wordpress.com/2014/06/24/strategies-for-improving-exam-feedback/
https://theotherdrx.wordpress.com/2014/06/24/strategies-for-improving-exam-feedback/#commentsTue, 24 Jun 2014 11:12:49 +0000http://theotherdrx.wordpress.com/?p=49Continue reading →]]>A frequent complaint from students regarding exams is the lack of opportunities to learn from their exams. As such, this has prompted education commentators to question their use, over other assessments where students can learn from the assessment and gain useful feedback from the assessed work. Well, I didn’t learn to drive on my driving test, but I admit, that is a lazy and poor answer. Aside from the weaknesses of exams (which Phil Race has covered very nicely recently) they have a prominent position in STEM subjects at least, a position that will not change as long as accreditation bodies promote their use. Furthermore, exams appear on our Key Information Sets (KIS) and their presence is deemed good. So aside from their perceived weaknesses, let’s make the most of them.

Feedback from every exam that I have ever taken has been in the form of a number or a letter. Actually, not every exam, as I received some excellent feedback from my driving test examiner, but apart from that, nothing. I’ve never seen a script, and don’t know where my strengths and weaknesses were. Why is this? We give copious amounts of feedback on other assignments, so why not here, where students probably make the same mistakes time and time again? It’s largely down to the practicalities of giving it back, and dealing with the fallout of contested marking. GCSEs and A-levels are tricky as they are run by outside exam boards, but what about in Universities? I set the papers, I mark the papers, and the marked scripts sit under my desk from one year to the next, when they are disposed of. I have no excuse for not using them for educational gain.

I recently ran a short trial of giving exam feedback to students. My initial approach was a single sheet outlining the key points on my mark scheme that were met for a basic pass, merit or distinction, and outlining areas that were missed completely, with room for added extras that were relevant but not on a prescribed scheme. A simple marking matrix really, which students preferred to just receiving a grade, and for the first time they received a question-by-question marks breakdown, but they moaned that they would like to see the scripts. I’ll probably do this again, but it does take significantly more time than I am allocated for exam marking. A colleague showed students their scripts on a different module, and they were ‘happier’. However, most students had effectively left the building for that academic year, or even for good, so this is still not logistically ideal.

So, why not make use of the box of scripts under the desk from last year? I frequently get asked “what does it take to pass, a 2:1 or a first in this module?” At the last taught session of the module, I get out last year’s attempts, and after anonymising them, I hand them out. They are sometimes astounded that we really do give 100% for essay answers, but are also shocked by the sheer quality and thought put into the work. They often comment on the poor quality of the 40% scripts, but I make it clear that many in the room will probably produce work of equal or lower quality. I ask them what they could produce now on the same questions. It is hoped that the real hard work of exam revision starts at this point.

So how do students gain from this? Firstly, students realise that they are not going to get a good mark in my modules by regurgitation of my lecture notes, especially when answers are out of context to the question. Secondly, at 2nd year and beyond, marks are not given over first class level without clear evidence of further reading, or independent thought that can be clearly identified on the script. Students see where last year’s cohort did this and were rewarded. Thirdly, students see how the marking scheme is non-linear, and that it is much easier to get the first 40 marks, as defined by the minimum pass level descriptor, than the next 20 marks, which require much more context and argument. It is not just about writing 100 facts and counting up 100 ticks for full marks. Finally, students see how answers can become very good answers if clearly put into context. A well annotated diagram linked to explanatory text, for example, can quickly demonstrate understanding of a complex concept. Seeing evidence of how assessors (well, how I assess them on their module) award marks, and what for, may account for the unusual marks profiles in most of my modules. Many modules in my department have coursework marks that are 10-20% higher than exam marks, and good coursework marks can easily compensate for a failed exam under our current academic awards framework. Since employing the strategy of allowing students to see last years’ scripts, exam marks are catching up with coursework marks, and for the first time I had exam marks exceeding coursework marks in 2 of my modules.

One final point: If only using last year’s scripts, you may ask “you are not letting students learn from their mistakes. Isn’t that central to giving good feedback?”. I disagree, to a point. Students show clear evidence of being able to learn from other people’s mistakes, not just their own. Maybe they will learn better when it is someone else’s mistake.

]]>https://theotherdrx.wordpress.com/2014/06/24/strategies-for-improving-exam-feedback/feed/1theotherdrxMr SchadenfreudeiCard: The low-tech, low cost tool for assessing student engagement in large groupshttps://theotherdrx.wordpress.com/2013/12/17/icard-the-low-tech-low-cost-tool-for-assessing-student-engagement-in-large-groups/
https://theotherdrx.wordpress.com/2013/12/17/icard-the-low-tech-low-cost-tool-for-assessing-student-engagement-in-large-groups/#commentsTue, 17 Dec 2013 12:26:07 +0000http://theotherdrx.wordpress.com/?p=3Continue reading →]]>In this post, I discuss my initial use of a simple coloured-card voting system for assessing student understanding in a large-group University setting. Not a novel concept, but simple and effective.

There have been some amusing Twitter debates recently on the topic of ‘no hands up’ policies, and the contentious use of randomly asking questions to pupils. The issue of tutors assessing understanding of entire groups is not really addressed by either picking random students, or allowing students to elect to answer a question. We get a potentially non-representative snapshot, and by carefully selecting who answers, or what question gets asked, we can convince ourselves that we are doing a great job. Working in a University setting with large groups, I am focussed on assessing whether everyone in the group understands, who doesn’t, and reasons for lack of understanding.

So how can all students be encouraged to participate in Q&A sessions, whilst allowing the tutor to assess the whole cohort’s understanding of key concepts? One technology being used, particularly in Universities, is the mobile device. Apps such as Socrative allow the group to submit answers which can be displayed on screen, but require users’ own devices, which may be OK for Universities but not really applicable to schools. Similarly, Twitter can be used, where students give an answer in class and answers appear on screen. On the downside, the majority of replies are off-task, although once the novelty wears off, maybe this approach will have some merit. Similarly, ‘clicker’-type voting devices are an option, as they do not require the use of students’ own mobiles, although programming individual devices to individual students, if such information is needed, can be a barrier, especially for large cohorts.

I frequently put a question on screen, and then ask groups of up to 200 undergraduate students the following questions (in this order, with typical responses noted):

Hands up everyone who thinks the answer is true (25%)

Now hands up if you think it is false (25%)

Hands up who doesn’t know (25%). And as I lose the will to live…

Hands up who doesn’t care (25%).

If that doesn’t work, I ask for both simultaneously: left hand for true, right hand for false. And so on. Getting large groups to answer questions so that I can gauge levels of understanding of key concepts is not easy, especially in my admittedly didactic teaching sessions.

The iCard voting system: For when Technology-Enhanced Learning seems like an unnecessary evil

Expanding on the concept of left hand or right hand up is the simple use of coloured voting cards. Hold up a red or green sheet of A4 to test understanding of a key point. It allows the tutor to see who understands (or, if we are being pedantic, who thinks they know the answer), who doesn’t, and who is disengaged completely. However, a 50:50 question is not very informative. This is where the iCard comes in. Four (or more) pieces of coloured card (A6 should suffice), linked by a treasury tag, can be given out at the start of the session. Questions can then be asked of the entire group, from simple recall of a straightforward fact that is central to understanding a concept, to a more testing question that requires a few minutes of working out.

Is it useful?

Teaching large groups of anything up to 200 renders ‘hands up…’ pretty useless. Even in a smaller group, getting everyone to consider the question, and seeing evidence of some effort by all students, is near impossible. In contrast, the iCard-type approach does work. My initial use of this was in an end-of-semester informal test with 60 students. Questions were projected via Powerpoint, with each question accompanied by 4 colour-coded answers.

Initially, as anticipated, a minority did not engage well, but 90% were happy to answer all questions. To get the remaining 10% to engage, there has to be an element of bullying. These students soon found out that if they didn’t answer with the rest of the group, I would push them individually for an answer, and then inform the rest of the group whether the individual was right or wrong. These students soon started to answer with the rest of the group.

What did I learn from using the iCard?

1) Student misconceptions on ‘simple’ fundamental points. Some of the questions were deliberately simple, and I anticipated >95% of respondents giving the correct answer. By quickly scanning the room for incorrect answers, I could identify who got the answer wrong and, crucially, what the misconception was. I could explain why the given answer(s) might be incorrect without necessarily highlighting students with wrong answers, by encouraging students to keep their cards ‘close to their chest’.

2) Identifying weaker students. As the majority of students were answering correctly for each individual question, I could focus my attention on the incorrect answers. As expected, some weaker students consistently answered incorrectly. However, some students whom I had down as particularly strong were exposed as having gaping holes in their knowledge, sometimes on fundamental points.

3) Identifying topics that were poorly understood. Two topics out of 11 were particularly poorly answered. I can now look at, and take action on a) how those topics were delivered, and/or b) whether there is some underpinning knowledge that is missing from earlier in the course.

4) Poorly worded questions can trip up students. All questions should have only one correct answer. However, questions can be ‘read’ differently, and in two questions an ‘alternate reading’ would lead the student to answer incorrectly. By discussing why answers are wrong, the students could argue their case. As a result, I will be re-writing a few questions before using them again.

5) Students’ responses can spark debate over contentious points. As noted above, students are happy to argue with me if they think that I am wrong. Although the voting cards do not allow students to express views, or give complex and well-articulated answers, the voting cards are an ideal way to initiate subsequent debate.

I must stress that this is not a new concept, and is certainly not my idea. The cards are so cheap and simple, yet seem to be effective, especially with carefully worded questions or tasks. This type of in-class formative assessment seems to be only sparsely used in universities where large group teaching is common, and where, if anything, mobile technology and clickers are being introduced more widely. With such diverse opinions on how tutors should assess student understanding during sessions before ‘moving on’, maybe it’s time to re-visit some old technology before blindly moving on to a technology-based approach.

Benefits include:

1) They are very cheap and easy to make

2) They are applicable to situations where tutors are assessing a right/wrong answer/MCQ answer, or where specific/defined opinions of a group are being sought, and especially for large groups

3) The bureaucracy of obtaining anything that is remotely costly or technological can be a barrier to implementation; coloured cards avoid this barrier entirely.

4) Some tutors will always remain as technophobic as humanly possible, and a minority are particularly ‘risk averse’. Even technophiles have concerns that the time spent teaching students how to use any technology, and the risk of technology failure, may detract from the learning.

On the negative side:

1) There is no permanent record of who voted for which answer, only the tutor’s judgement on who or what to follow up.

2) You will get it in the neck from advocates of Technology Enhanced Learning.

In summary, all students get the same questions and the same treatment, and there is less need to ‘pick on’ individual students. Please feel free to comment on potential uses, and importantly, limitations of use.