Advice until the censorship of science by the directorate of the Centraal Planbureau is lifted

Archive

Tag Archives: logic

Before the UK Brexit referendum of June 23 2016, I warned that referenda can be silly and dangerous, see here in April 2016. I clarified that the Brexit referendum question was flawed in design. I did not look deeper into this, since, like so many others, I had been wrong-footed by the 2016 poll average that suggested a continuation of the status quo. After the surprise outcome, I advised the youngsters in the UK to focus attention on this design flaw, as it is the clearest issue and the proper ground to argue that the outcome should be annulled, see here in June 2016. When Kenneth Arrow passed away in early 2017, this was an occasion to have a summary text published in the RES Newsletter, which was republished on the LSE Brexit blog. Now, with the 2017 UK general election, I have been looking a bit deeper into these UK election issues.

One result has been the use of the Lorenz curve and Gini coefficient to show the disproportionality in the UK between votes and seats. Almost all EU members have Proportional Representation (PR), with the clear exceptions of the UK and France, which have District Representation (DR). Apparently this is a main reason for the influence of populism in the latter two countries. DR allows politicians to be elected with a minority of the vote, which causes a gap with the majority. Politicians like David Cameron can use a referendum to introduce an element of proportionality, yet referendum questions are quickly flawed.
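As a sketch of how such a disproportionality Gini can be computed: order the parties by seats-per-vote, build the Lorenz curve of cumulative seat share against cumulative vote share, and take one minus twice the area under that curve. The party shares below are made up for illustration only; they are not the actual UK results.

```python
# Minimal sketch of a votes-vs-seats Lorenz curve and Gini coefficient.
# The vote and seat counts below are hypothetical, for illustration only.

def gini_disproportionality(votes, seats):
    """Gini coefficient of seat shares against vote shares.

    Parties are ordered by seats-per-vote; the Lorenz curve then plots
    cumulative vote share (x) against cumulative seat share (y).
    """
    total_v, total_s = sum(votes), sum(seats)
    pairs = sorted(zip(votes, seats),
                   key=lambda p: (p[1] / total_s) / (p[0] / total_v))
    x = y = 0.0
    area = 0.0  # area under the Lorenz curve, accumulated by trapezoids
    for v, s in pairs:
        x1, y1 = x + v / total_v, y + s / total_s
        area += (x1 - x) * (y + y1) / 2
        x, y = x1, y1
    return 1 - 2 * area  # 0 = perfectly proportional

# Hypothetical three-party example: district effects overreward the largest party.
votes = [45, 35, 20]
seats = [60, 35, 5]
print(gini_disproportionality(votes, seats))
```

Under PR, seat shares equal vote shares, the Lorenz curve is the diagonal, and the coefficient is 0; the larger the coefficient, the larger the gap between votes and seats.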

A main confusion

Another surprise for me was the existence of the Re-Leavers who make up some 23% of the electorate, and who are very likely also a major section in the House of Commons that supported the invoking of article 50.

Apparently many British voters are awfully respectful of democracy: while they voted for Remain, they accept the referendum outcome and now let their voting behaviour be guided by Leave. In other words: they no longer operate as voters, who are supposed to express their first preference, but as politicians, who develop policy using such preferences.

Voters would do better not to be confused about the following distinctions:

It is one thing to accept the Brexit referendum outcome as a fact. Please accept facts.

It is another thing to discuss the consequences of that fact.

There is always the distinction between your first preference and dealing with new developments.

Your first preference can change, but preferably only because of arguments, and not merely because of a majority view.

For me it is easy to say this, from a country that is used to PR. In the UK case of DR it may well be that strategic voting requires voters to run with the herd. Nevertheless, the Re-Leavers cause quite a confusion in the voting record. Also for the 2017 general election we can now observe that we don't know what people really want.

The YouGov data of June 12th – 13th 2017

Anthony Wells provided and discussed these data that show the impact of the Re-Leavers. Let me quote the main part, and for this quotation I also moved their copyright sign up.

These early June data are most relevant for judging the June 8 2017 UK general election. Apparently 26% of all adults in Great Britain (UK excl. Northern Ireland), but also 53% of the voters who voted Remain in 2016, reason as follows:

I did not support Britain leaving the EU, but now the British people have voted to leave the government has a duty to carry out their wishes and leave.

I consider this an illogical and rather undemocratic statement.

Logic would require the annulment of the referendum outcome, rather than taking it seriously.

In representative government, it is Parliament that determines policy, not the people by some referendum.

Most of the EU has PR and thus the notion of representative government. The 2016 Remain voters want to remain in the EU, but 53% of them apparently also reject the EU notion of representative government, and instead appeal to the populism of referenda.

More on the design flaw of the Brexit referendum question

A few days ago, I rephrased one aspect as follows: with R for Remain, S for Soft (EEA), T for some Tariffs, and N for No Deal (WTO), there are 6 possible strict preferences over a deal, from R > S > T (Theresa May before the referendum) to T > S > R (Theresa May after the referendum). If S and T are collected in L (Leave), then there arises the claimed binary choice between R and L. Voters in the categories S > R > T or T > R > S would face a hard question. If they expect that R might win, but also that their own preferred option might not, should they still go out and vote? They might decide not to turn out, or develop assumptions about what L actually might become, given what they think about future developments. Similarly for the versions of R. See the voting theory about single-peaked preferences (these are not single-peaked but double-peaked). Overall it is a fallacy that there is a binary choice. Lawyers can argue that one either invokes article 50 or doesn't, yet the referendum isn't such a legal case, for it is an issue of policy preferences.
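The collapse of the 6 strict preferences into the binary question can be sketched in a few lines. The enumeration below merely flags the two orders in which R sits between the Leave options; it is an illustration of the argument, not an analysis of actual voter data (and it leaves out N for brevity).

```python
# Enumerate the 6 strict preference orders over R(emain), S(oft), T(ariffs)
# and flag the orders where R sits between the two Leave options, so that
# the binary R-vs-L question cannot represent the voter faithfully.
from itertools import permutations

for order in permutations("RST"):
    # R ranked in the middle means S > R > T or T > R > S: the hard cases.
    hard = order.index("R") == 1
    print("".join(order), "hard case" if hard else "binary choice OK")
```

On the axis R–S–T these two middle orders are exactly the double-peaked preferences: the voter's ranking dips at R and rises again, which is what breaks the claimed binary choice.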

In fact, the above YouGov poll provides a bit more information on this issue. Look at this section on their page 15:

Look at the column of the total (with 1651 people in the weighted sample). 35% are clearly for Remain, in their first rank. 47% are clearly against Remain, in their last rank. Thus the middle 8 + 9 ≈ 18% (rounding error) is rather confused, for they put Remain between the Leave options. How would they have to vote at a referendum that only allows R or L? We find similar percentages for the subgroups on the right-hand side.

Conclusion

The discussion in the UK would be served by greater awareness of these distinctions:

The difference between voting for your first preference (setting the target) and trying to second-guess politicians (as if you were in the driver's seat).

The difference between a valid question and an invalid or flawed one, like the Brexit referendum question.

The crucial differences between Proportional Representation (PR) and District Representation (DR), linked to the distinction between representative democracy (mostly PR) and populism (mostly DR).

There is also something not discussed in the above, but that is the difference between the failing Trias Politica and improved democracy with an Economic Supreme Court.

Euclid wrote around 300 BC. Much earlier, Hammurabi wrote his legal code around 1792-1749 BC. It is an interpretation of history: Hammurabi might have invented all of his laws out of thin air, but it is more likely that he collected the laws of his region and brought some order into them. Euclid applied that idea to what had been developed about geometry. The key notions were caught in definitions and axioms, and the rest was derived. This fits the observation that Pierre de Fermat (1607-1665) started as a lawyer too.

The two cultures: science and the humanities

In Dutch mathematics education there is a difference between A (alpha) and B (beta) mathematics. B would be “real” math and prepare for science. A would provide what future lawyers can manage.

In the English-speaking world, there is C.P. Snow, who argued about the “two cultures” and the gap between science and the humanities. A key question is whether this gap can be bridged.

In this weblog, I already mentioned the G (gamma) sciences, like econometrics, which combines economics (humanities) with scientific standards (mathematical models and statistics). Thus the gap can be bridged, but perhaps not by all people. It may require some studying. Many people will not study because they may arrogantly believe that A or B is enough (proof: they got their diploma).

Left and right hemisphere of the brain

Another piece of the story is that the left and right hemispheres of the brain might specialise. There appears to be a great neuroplasticity (Norman Doidge), yet overall some specialisation makes sense. The idea of language and number on the left hemisphere and vision on the right hemisphere might still make some sense.

“Broad generalizations are often made in popular psychology about certain functions (e.g. logic, creativity) being lateralized, that is, located in the right or left side of the brain. These claims are often inaccurate, as most brain functions are actually distributed across both hemispheres. (…) The best example of an established lateralization is that of Broca’s and Wernicke’s Areas (language) where both are often found exclusively on the left hemisphere. These areas frequently correspond to handedness however, meaning the localization of these areas is regularly found on the hemisphere opposite to the dominant hand. (…) Linear reasoning functions of language such as grammar and word production are often lateralized to the left hemisphere of the brain.” (Wikipedia, a portal and no source)

For elementary school we would not want kids to specialise in functions, but rather encourage the use of neuroplasticity to develop more functions.

Pierre Krijbolder (1920-2004) suggested that there is a cultural difference between the Middle East (Jews), with an emphasis on language – shepherds guarding against predators at night – and the Indo-Europeans (Greeks), with an emphasis on vision – hunters taking advantage of the light of day. Se non è vero, è ben trovato.

There must have been at least two waves of Indo-Europeans into the Middle East. The first one brought the horse and chariot to Egypt. The second one was by Alexander (356-323 BC), who founded Alexandria, where Euclid might have gotten the assignment to write an overview of the geometric knowledge of the Egyptians, like Manetho got to write a historical overview.

Chariot spread 2000 BC. (Source: D. Bachmann, wikimedia commons)

It doesn’t actually matter where these specialisations can be found in the brain. It suffices to observe that people can differ in talents: lawyers would deal much with language, and for space you might turn to mathematicians.

Pierre van Hiele (1909-2010) presents a paradox

The Van Hiele levels of insight are a key advance in epistemology, for they indicate that human understanding itself is subjected to some structure. The basic level concerns experience and the direct language about this. The next level concerns the recognition of properties. Another level is the recognition of relations between these properties, and the informal deductions about these. The highest level is formalisation, with axiomatics and formal deduction. The actual number of levels depends upon your application, but the base remains in experience and the top remains in axiomatics.

Learning goes from concrete to abstract, and from vague to precise.

Thus, Euclid with his axiomatic approach would be at the highest level of understanding.

We arrive at a paradox.

The axiomatic approach is basically a legal approach. We start with some rules, and via substitution and reasoning we arrive at other rules. This is what lawyers can do well. Thus: lawyers might be the best mathematicians. They might forget about the intermediate levels, discard the ado about space, and jump directly to the highest Van Hiele level.

A paradox but no contradiction

A paradox is only a seeming contradiction. The latter paradox gives a true description in itself. It is quite imaginable that a lawyer – like a computer – runs the algorithms and finds “the proper answer”.
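This "rules plus substitution" picture can be sketched as a toy derivation engine: close a rule set under modus ponens and read off what follows. The propositional rules below are invented for illustration; real axiomatic systems are of course far richer.

```python
# Toy sketch of the lawyer-as-computer image: given some rules,
# mechanically derive the other rules via modus ponens.
axioms = {"p", "p -> q", "q -> r"}

def close(rules):
    """Return the closure of a rule set under modus ponens."""
    derived = set(rules)
    changed = True
    while changed:
        changed = False
        for rule in list(derived):
            if " -> " in rule:
                a, b = rule.split(" -> ", 1)
                if a in derived and b not in derived:
                    derived.add(b)   # from a and "a -> b", conclude b
                    changed = True
    return derived

print(sorted(close(axioms)))
```

The engine finds "the proper answer" without any awareness of what p, q and r mean: precisely the point that proper mathematics also requires switching to the intended interpretation.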

However, for proper mathematics one must be able to switch between modes. At the highest Van Hiele level, one must have an awareness of applications, and be able to translate the axioms, theorems and derivations into the intended interpretation. In many cases this may involve space.

Just to be sure: the Van Hiele levels present conceptual divides. At each level, the languages differ. The words might be the same but the meanings are different. This also causes the distinction between teacher-language and student-language. Often students are much helped by explanations by their fellow students. It is at the level-jump, when the coin drops, that meanings of words change, and that one can no longer imagine that one didn’t see it before.

Thus it would be a wrong statement to say that the highest Van Hiele level must have command of all the lower levels. The distinction between lawyers and mathematicians is not that the latter have command of all levels. Mathematicians cannot have command of all levels, for they have arrived at the highest level, which means that they must have forgotten about the earlier levels (when they were young and innocent). The distinction between lawyers (math A) and mathematicians (math B) is different: lawyers would understand the axiomatic approach (from constitutional law to common law), but mathematicians would understand what is involved in specific axiomatic systems.

Example 1

I came to the above by thinking about the following problem. This problem was presented as an example of a so-called “mathematical think-activity” (MTA). The MTA are a new fad and horror in Dutch mathematics education. First try to solve the problem and then continue reading.

Discussion of example 1

The drawing invites you to make two assumptions: (1) the round shape is a circle, (2) the vertical x meets the horizontal x in the middle. However, why would this be so? You might argue that r = 6 suggests the use of a circle, but perhaps it is still an ellipse.

In traditional math ed (say around 1950), making such assumptions would cost you points. In fact, the question would be considered insoluble. No question would be presented to you in this manner.

In traditional math, the rule would be that the proper question and answer would consist of text, and that drawings only support the workflow. Also, the particular calculation with r = 6 would not be interesting. Thus, a traditional presentation would have been (and also observe the dashes):

A quick observation is that there are three endpoints, and it is a theorem that there is always a circle through three points that are not on one line. So the actual question is to prove this theorem, and you are being helped with a special case.

Given that you solved the problem above, we need not look into the solution for this case.

The reason for giving this example is: In mathematics, text has a key role, like in legal documents for lawyers. Since mathematicians are lawyers of space and number, they can cheat by using supporting drawings, tables and formulas. But definitions, theorems and proofs are in text (formulas).

(Potentially lawyers also make diagrams of complex cases, as you can see in movies sometimes. But I don’t know whether there are particular methods here.)

Example 2

In text it is easy to say that a line has no holes. However, when you start thinking about this, you must define what such a hole might be. If a hole doesn't belong to the line, what does it belong to? How would you know when you pass a hole? Might you not be stepping over holes all the time without noticing?

These are questions that lawyers would enjoy. They are relevant for math B but can also be discussed in math A.
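The point can also be made computationally. The sketch below uses exact fractions to bisect towards the "hole" at the square root of 2: the interval keeps shrinking forever, yet no rational midpoint ever has square exactly 2, because no rational number does.

```python
# A playful sketch of a "hole" in the rational number line: bisection on
# x*x == 2 with exact fractions narrows the interval without end, but the
# midpoint never lands on the hole, since no rational squared equals 2.
from fractions import Fraction

lo, hi = Fraction(1), Fraction(2)
for _ in range(30):
    mid = (lo + hi) / 2
    assert mid * mid != 2          # the midpoint never hits the hole
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid

# The hole sits inside [lo, hi], now of width 1/2**30.
print(float(lo), float(hi))
```

A math A discussion can stop at the observation that we "step over" the hole no matter how finely we search; math B can continue towards the completeness of the reals.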

See the discussion of yesterday, and check that the main steps should be acceptable for lawyers, i.e. math A.

These students should be able to master the symbolism of predicate logic, since this is only another language and a reformulation of common text.

Conclusions

Thus, a suggestion is that students in math A should be able to do more, when better use is made of the legal format.

Perhaps more students, now doing A, might also do B, if their learning style is better supported.

(Perhaps the B students would start complaining about more text, though. Would it still be the same question when only the format of presentation differs? Thus a conclusion can also raise more questions. See also this earlier discussion about schools potentially manipulating their success scores by causing student underperformance.)

Historians of science study the genesis and development of ideas, e.g. the interaction between scientists via the letters between authors. Van Ulsen (2001:1) reports:

“Beth operated at the difficult boundary of disciplines. Philosophers denounced him as mathematician and logician, while the mathematicians and logicians regarded him, neither in a positive sense, as a philosopher.”

My documentation w.r.t. my own results serves this purpose as well. When I protest against maltreatment of my work then this does not imply that I lack good judgement or would be impolite.

On February 18 & 19 2016, prof. dr. H.C.M. (Harrie) de Swart (EUR) (wiki) (born 1944, age 71) blocked my attendance at some colloquia on the history of science, first with the argument that these would be a “closed shop”, and subsequently, when this was shown to be untrue, by refusing to give any kind of argument. This amounts to a breach of the integrity of science. The following is a summary of the case. The email exchanges with a discussion in English are here: part 1 with De Swart and part 2 with prof. dr. F.A. (Fred) Muller, the project manager.

NWO projects 2012-2017 on Mannoury, Beth, Heyting and Van Dantzig

There are (1) an NWO project 360-20-301 running in 2012-2016 on Mannoury and the philosophy of language and (2) an NWO project 360-20-300 running in 2012-2017 on Mannoury, Beth, Heyting and Van Dantzig, with a budget of 617,000 euros.

I discovered the NWO-projects around New Year 2016. The projects mention manager prof.dr. F.A. Muller and researchers PhD-student Mireille Kirkels (Mannoury), dr. Paul van Ulsen (Beth & Heyting) and dr. Gerard Alberts (Van Dantzig). For my current focus Kirkels and Van Ulsen are the relevant contact persons. They wrote that I was welcome to attend (Kirkels January 11 and Van Ulsen January 13 2016).

I do not know what the official position of De Swart is with respect to this NWO project, other than that he apparently manages an email list for the colloquia. I was actually a bit surprised to see his involvement, since the project summary did not mention him.

The breach by De Swart on February 18 & 19 2016

On February 19 2016 there was a colloquium for this project. On February 18 2016, perhaps not coincidentally just the day before, De Swart blocked my attendance at all of these colloquia.

His first motivation was that the colloquia were a “closed shop”.

When I showed De Swart the email by Van Ulsen (preferring the accomplished PhD above the PhD-student) and stated the inference that there is no “closed shop”, whence his statement was untrue, De Swart replied that I was not welcome, refusing to give me a motivation.

This is a breach of scientific integrity. It blocks the flow of information. A colloquium is not organised for naught. De Swart implicitly slanders me to others, as if there were cause to block my attendance. There is a legal distinction between “not welcome” and “forbidden”, but this does not apply here scientifically, given De Swart's original reference to a “closed shop”.

I informed De Swart of these implications, but he did not relent. I decided not to attend, if only to protect myself from further abuse. Perhaps De Swart has given a motivation to others (but not to me). If participants have information on this, it should be forwarded to me, since it concerns my position.

An educated guess at what might have motivated De Swart to breach science

Given the lack of stated motivation, one can only guess about it. The event however must be explained to others.

It is likely relevant to mention an earlier case in which De Swart maltreated my work, namely in 2001 on the subject of social welfare and voting theory. I protested against this maltreatment in 2001; see the documentation on my website. This issue is not resolved yet.

I do not know of a public statement by De Swart that replies to my protest.

I do not know about a public statement on content by De Swart concerning my book “Voting Theory for Democracy” (VTFD) (2001, 2004, 2011, 2014) (stable text, different versions of Mathematica). I would applaud it if he would finally find time to study VTFD, and state explicitly whether or not he sees some of his misconceptions on social welfare and voting theory corrected.

My criticism doesn't apply only to De Swart but also to the Dutch community of researchers on social welfare and voting theory, i.e. that they allow De Swart's misconduct and do not protect me against it. They apparently also neglect VTFD and related work.

Dutch readers can benefit from my webpage that warns about mathematics about social choice and voting theory.

Dutch readers can also benefit from De Swart's valedictory speech on his departure from Tilburg in 2010: speech, Volkskrant May 19, Volkskrant June 5. De Swart sins against science on (at least) two points.

(1) He gives a wrong summary of Arrow's Theorem, suggesting that there would be proof that no voting scheme is ideal. VTFD explains that Arrow cannot tell us what is ideal, and that his words on rationality, consumer sovereignty and moral necessity do not fit his mathematics. De Swart's statement in Dutch, which is scientifically proven false, is on page 10:

(2) De Swart proposes that the electorate does much more work in the ballot box, e.g. by giving report-card numbers (10 to 0) or scores (A to F) to parties, or by ranking political parties in order of preference. Perhaps the effort can be reduced by simply sorting physical logos of the parties, but it is still a significant job, given the empirical numbers of parties. De Swart refers to Balinski and Laraki, 2007, in which 2000 voters scored 12 presidential candidates with apparent relative ease. I have my doubts about this. De Swart may have his personal opinion, but it is not scientific to neglect another proposal that may be even better. De Swart obstructs the current discussion about electoral reform by advocating impractical ways and closing his eyes to a practical approach towards improvement. Again he appears to be an abstractly thinking mathematician without proper attention to empirical matters. My suggestion is that it not only suffices but may even be optimal when people have only one vote. The relevant point is that the professionals in Parliament use the more complex mechanisms. Thus voters form the weight that is attached to the party of their choice. The power of voters can be enhanced by having annual elections. Populism can be checked by having an Economic Supreme Court. Let Parliament investigate these options, so that politicians know what these options actually are. See the Dutch booklet “De Ontketende Kiezer”.

Harrie de Swart, valedictory speech 2010 on voting
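The difference in voter effort between grading every party and casting a single vote can be sketched as follows. The ballots and grades below are invented for illustration; this is not Balinski and Laraki's data, and their full method has a specific tie-breaking procedure beyond the bare median shown here.

```python
# Sketch of the extra voter effort in grading schemes: plurality needs one
# mark per voter, while grading ("majority judgment" style) needs a grade
# for every party. Parties A, B, C and all grades are made up.
from statistics import median

ballots = [
    {"A": 9, "B": 6, "C": 2},
    {"A": 3, "B": 7, "C": 8},
    {"A": 5, "B": 6, "C": 4},
]

# Grading: rank parties by their median grade (each voter filled in 3 numbers).
medians = {p: median(b[p] for b in ballots) for p in "ABC"}

# Plurality: each voter marks only their top party (one mark per voter).
tops = [max(b, key=b.get) for b in ballots]

print(medians)
print(tops)
```

In this made-up example the two schemes can disagree: the median-grade winner need not be any voter's top choice, which is the kind of trade-off that Parliament, rather than the ballot box, could weigh.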

Political economy and social dynamics of having a grudge

It would not be rational for De Swart to link this issue on voting since 2001 to my attendance at these colloquia for this NWO project. However, he may not like that I protest.

My website documents what happens with my findings. This documentation cannot be construed as a grudge on my part (i.e. an emotion that interferes with good judgement). I politely greet De Swart and hope that he finds his way towards science. It would be slander w.r.t. me to suggest that I would confuse the topic of the NWO project in 2016 with the issue on voting since 2001, and that I would not be able to respond in a scientific and civilised manner when my work and person are abused. This present text is another example of a scientific and civilised response to abuse.

When De Swart does not provide a decent motivation, breaches the integrity of science and implicitly slanders w.r.t. my person, then an asymmetry arises. It would be slander to suggest that I have a grudge against De Swart, yet it is not slander but an unavoidable hypothesis, in explaining events, that he might have a grudge w.r.t. this issue of voting since 2001.

De Swart also erred in his management (not necessarily on the content) in 2007-2008 of the thesis by M. Cabbolet. De Swart tried again in Eindhoven without disclosing that it had been rejected in Tilburg, only to be found out later on; see Fiers 2008 and Gerard 't Hooft 2014.

Thus we arrive at the colloquia on Mannoury in 2016, discussed above.

Conclusion

The blockage of my attendance at these colloquia should be lifted. Independently, the breach by De Swart w.r.t. this attendance must be looked into. Resolution of the issue since 2001 w.r.t. voting is required as well. These issues should not be confused. However, De Swart's breach in 2016 may help readers grow aware that I indeed had reason to protest in 2001, and grow dismayed that the Dutch researchers on social welfare and voting did not resolve this in over 15 years, nor the censorship of science since 1990, now lasting at least 25 years, by the directorate of the Dutch Central Planning Bureau (CPB) (see the About page).

What I may need to explain as an author is how this relates to my own work.

A nice introduction to epistemology, at the level of the international baccalaureate (IB) programme is the book by Richard van de Lagemaat (CUP, now a new 2015 edition).

A general principle is that philosophy should use mathematics education as its empirical field of reference. When philosophy hangs in the air, it is at risk of getting lost. Mathematics education provides adequate challenges in dealing with abstract notions.

Some main steps in the diagram are:

Jean Piaget introduced stages of development. Epistemology tends to focus on the last stage, with a fully developed rational being who wonders what can be known and how this can be achieved. It makes sense to distinguish stages in such questions however. Pierre van Hiele removed Piaget’s dependence of stages upon age, and turned the issue into a logical framework for epistemology. With the Definition & Reality methodology this framework is also empirically relevant. This is also very useful for the link of philosophy to education. See Pierre van Hiele and epistemology.

Karl Popper turned Otto Selz's methodology for psychology into a philosophy of science in general. This uses falsifiability as a demarcation between science and non-science. Since the Anglo-Saxon world tends to distinguish science from the humanities (humaniora), the general term “theory of knowledge” (epistemology) will do.

Selz inspired Adriaan de Groot to create his experiments with chess masters. Later De Groot continued in methodology, and it seems that he is the one who introduced the empirical cycle. His book Methodologie ends in the depressing awareness that science cannot establish truth as in mathematics. Thus De Groot advanced the uplifting Forum Theory, which focuses on the rules of conduct within the scientific community. While we may not discover the real truth, we can still ask why we should trust these guys and gals.

De Groot and Van Hiele were also inspired by their UvA math teacher Gerrit Mannoury (1867–1956). See this project about Mannoury and significa.

The dashed arrow from Van Hiele to De Groot is the unfortunate failed transfer of the theory of levels of insight. De Groot refers to the thesis but missed this notion, see this discussion.

My book A Logic of Exceptions (ALOE) (1981, 2007, 2011) is already deep into methodology. ALOE looks into the logical paradoxes and suggests that empirical sense may help to get rid of mathematical nonsense. There is a distinction between Gödel's theorems and the interpretation that he gave to them. For the issue of volition, determinism and chance, there is no experiment that allows us to distinguish what is empirically the case. (I haven't yet looked at the interpretation of the recent experiment with Bell's inequality at TU Delft; see the websites by Ronald Hanson and Richard Gill.)

The abbreviation DRGTPE stands for the book Definition & Reality in the General Theory of Political Economy. This 2000, 2005, 2011 book had a precursor, called Background Papers to DRGTPE, that collected papers from 1989-1992. This essentially gave the framework for political economy, in both mathematical model and empirical methodology. The 1994 book Trias Politica & Centraal Planbureau (TP & CPB) (in Dutch) referred to De Groot's Forum Theory to clinch the argument for an Economic Supreme Court (ESC). Subsequently, DRGTPE 2000 contains a constitutional amendment on how the ESC should satisfy such Forum rules.

The news in November 2015 is that I have grown more aware of the importance of Forum Theory for the selection of definitions for applications. This element is implicit in the earlier development but it is useful to state it explicitly, given the importance of the role of definitions. Research groups might be characterised by the definitions that they select. It can depend upon the quality of the rules how flexible research groups are with experiments and adverse information.

Thus, to restate in text what is depicted in the last box in the diagram: This 2015 GTOK has the standard logic (with ALOE), methodology (with Forum Theory), and epistemology, and has more awareness of:

levels of insight or understanding

Definition & Reality methodology

Forum Theory is especially required for the application of definitions.

Application

Some applications of this GTOK are:

(1) My forecast in 1990 (CPB memo 90-III-38) was that unemployment would continue to be high unless Parliament redesigned both the structure of policy making and some policies and markets. I repeated this forecast in 1992, 1994 and 2000, extending it with other risks, like on the environment and financial markets, and the condition of an Economic Supreme Court. In the period 1990-2007 Holland seemed to have a lower level of unemployment, which might be a cause for people not paying attention to the analysis. This lower level wasn't achieved by better policies but by welfare payments (financed by natural gas) and by exporting unemployment by means of maintaining low wages (beggar thy neighbour). The 2007+ crisis and the return to higher unemployment confirm my analysis. Though a major element relies on definitions, the forecast as a whole was still falsifiable. Of course the forecast was vague, and not specified with the year 2007, but we are dealing with structure. This also explains why I emphasize that Dirk Bezemer misinforms Sweden and the Dutch Parliament: because he keeps silent about the theoretical confirmation given by the empirical experiment of 1990-2007.

(2) The scheme allows us to deal with the confusions by Stellan Ohlsson (abstract to concrete) and Ben Wilbrink (Van Hiele’s theory of levels wouldn’t be empirical).

(3) The scheme allows us to deal with the problem of universals. Van Hiele “demonstrated” the general applicability of the theory of levels by using the example of geometry. (And geometry uses demonstration as a method of proof too.) He mentioned that the theory had general applicability, and mentioned chemistry and didactics as other examples, without working out those examples. Freudenthal neglected Van Hiele's general claim, put him into the box of “geometry only”, and claimed that he, Freudenthal himself, had shown the applicability to mathematics in general. (See here.) Of course, Freudenthal also had the problem that a universal proof is impossible, since you would need to check each field of knowledge. However, now with the Definition & Reality methodology, we can take the levels of insight as a matter of definition, like the law of conservation of energy defines what we regard as “energy”. The problem shifts to application. For this, there is Forum Theory.

The earlier discussion on Stellan Ohlsson brought up the issue of abstraction. It appears useful to say a bit more on terminology.

An unfortunate confusion at wikipedia

Wikipedia – no source but a portal – on abstraction creates a confusion:

Correct is: “Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose.” Thus there is a distinction between abstract and concrete.

Confused is: “For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, eliminating the other characteristics of that particular ball.” However, the distinction between abstract and concrete is different from the distinction between general and particular.

Hopelessly confused is: “Abstraction involves induction of ideas or the synthesis of particular facts into one general theory about something. (…) Bacon used and promoted induction as an abstraction tool, and it countered the ancient deductive-thinking approach that had dominated the intellectual world since the times of Greek philosophers like Thales, Anaximander, and Aristotle.” This is hopelessly confused, since abstraction and generalisation (with possible induction) are quite different. (And please also correct the account of what Bacon suggested.)

A way to resolve such confusion is to put the categories in a table and look for examples for the separate cells. This is done in the table below.

In the last row, the football itself would be a particular object, but the first statement refers to the abstract notion of roundness. Mathematically only an abstract circle can be abstractly round, but the statement is not fully mathematical. To make the statement concrete, we can refer to statistical measurements, like the FIFA standards.
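The difference between abstract roundness and a concrete, measurable statement can be put into a minimal sketch. The tolerance values below are only assumptions for illustration (FIFA's Law 2 specifies a circumference of roughly 68-70 cm for a size 5 ball), and the function name is my own, not from any standard:

```python
# Hedged illustration: "concretely round" as a statistical measurement,
# in contrast with the abstract (mathematical) roundness of a circle.
# The tolerance band and the sample data are assumed values, not FIFA data.

def is_concretely_round(circumferences_cm, lo=68.0, hi=70.0):
    """A ball counts as 'concretely round' if every circumference,
    measured along different great circles, falls within the band."""
    return all(lo <= c <= hi for c in circumferences_cm)

# The abstract circle cannot be measured; the concrete statement
# refers to measurements like these:
measurements = [68.9, 69.1, 69.0, 68.8]
print(is_concretely_round(measurements))  # True for this sample
```

The point of the sketch is only that the concrete cell of the table involves a tolerance and a finite set of measurements, where the abstract cell does not.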

The general statement All people are mortal comes with the particular Socrates is mortal. One can make the issue more concrete by referring to say the people currently alive. When Larry Page would succeed in transferring his mind onto the Google supercomputer network, we may start a philosophical or legal discussion whether he still lives. Mutatis mutandis for Vladimir Putin, who seems to hope that his collaboration with China will give him access to the Chinese supercomputers.

Category (mistake)   Abstract                             Concrete

General              The general theory of relativity     All people living on Earth in 2015 are mortal

Particular           The football that I hold is round    The football satisfies FIFA standards

The complex relation between abstract and general

The table above obscures that the relation between abstract and general still raises some questions. Science (Σ) and philosophy (Φ) strive to find universal theories – indeed, a new word in this discussion. Science also strives to get the facts right, which means focusing on details. However, such details basically relate to those universals.

The following table looks at theories (Θ) only. The labels in the cells are used in the subsequent discussion:

             Science (Σ)        Philosophy (Φ)

Abstract     AΣ                 AΦ

General      GΣ                 GΦ

Relation     RΣ(b), RΣ(c)       RΦ(a), RΦ(b), RΦ(c)

The suggestion is that general theories tend to move into the abstract direction, so that they become universal by (abstract) definition. Thus universal is another word for abstract definition.

A definition can be nonsensical, but Σ strives to eliminate the nonsense, and officially Φ has the same objective. A sensible definition can be relevant or not, depending upon your modeling target.

Let us redo some of the definitions that we hoped to see at Wikipedia but didn’t find there.

Abstraction is to leave out elements. Abstractions may be developed as models for the relevant branch of science. The Van Hiele levels of insight show how understanding can grow.

A general theory applies to more cases, and intends to enumerate them. Albert Einstein distinguished the special and the general theory of relativity. Inspired by this approach, John Maynard Keynes‘s General Theory provides an umbrella for classical equilibrium (theory of clearing markets) and expectational equilibrium (confirmation of expectations doesn’t generate information for change, causing the question of dynamic stability). This General Theory does not integrate the two cases, but merely distinguishes statics and its comparative statics from dynamics as different approaches to discuss economic developments.

Abstraction (A) is clearly different from enumeration (G). It is not impossible that the enumeration concerns items that are abstract themselves again. But it suffices to assume that this need not be the case. A general theory may concern the enumeration of many particular cases. It would be statistics (GΣ) to collect all these cases, and there arises the problem of induction (GΦ) whether all swans indeed will be white.

Having both A and G causes the question how they relate to each other. This question is studied by R.

This used to be discussed by traditional epistemology (RΦ(a)). An example is Aristotle. If I understand Aristotle correctly, he used the term physics for the issues of observations (GΣ) and metaphysics for theory (AΦ & GΦ). I presume that Aristotle was not quite unaware of the special status of AΣ, but I don’t know whether he said anything on this.

Some traditional epistemologists (RΦ(a)) neglect Σ and only look at the relation between GΦ and AΦ. This is the price of specialisation.

Statistical testing (RΣ(b)) shows a similar specialisation in focus: it only looks at statistical formulations of general theories (GΣ).

The falsification theory of Karl Popper may be seen as a philosophical translation (RΦ(b)) of this statistical approach (RΣ(b)). Only theories that are formulated in such a manner that they can be falsified receive Popper’s label “scientific”. A black swan will negate the theory that all swans are white. (1) One of Popper’s problems is the issue of measurement error, encountered in RΣ(b), with the question how one is to determine sample size and level of confidence. Philosophy may only be relevant here if it becomes statistics again. (2) A second problem for Popper is that AΣ definitions are commonly seen as scientific, while only their relevance can be falsified. Conservation of energy might be relevant for Keynes’s theory, but not necessarily conversely.
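The measurement problem of sample size and level of confidence can be made concrete with a standard statistical device, the so-called rule of three: after n observations of only white swans, an approximate 95% upper confidence bound on the frequency of black swans is 3/n. This is a textbook approximation, not anything from Popper himself; the sketch below only illustrates the point that "all swans are white" is never verified, only bounded.

```python
import math

def upper_bound_black_swan_rate(n_white_swans, alpha=0.05):
    """Exact one-sided upper confidence bound on the black-swan
    frequency p, after observing n swans that were all white:
    solve (1 - p)^n = alpha for p."""
    return 1.0 - alpha ** (1.0 / n_white_swans)

def sample_size_for_bound(p_max, alpha=0.05):
    """How many all-white observations are needed before one may
    claim, at confidence 1 - alpha, that the rate is below p_max."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p_max))

print(round(upper_bound_black_swan_rate(100), 4))  # 0.0295, close to 3/n = 0.03
print(sample_size_for_bound(0.01))                 # 299 observations
```

Note how the general statement shrinks to a statistical one: no sample size ever yields "all swans are white", only "the black-swan rate is probably below such-and-such".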

The Definition & Reality methodology consists of theory (RΦ(c)) and practice (RΣ(c)). The practice is that scientists strive to move from the particular to AΣ. The theory is why and how. A possible intermediate stage is G but at times direct abstraction from concreteness might work too. See the discussion on Stellan Ohlsson again.

Conclusions

Apparently there exist some confusing notions about abstraction. These can, however, be clarified, as shown above.

The Van Hiele theory of levels of insight is a major way to understand how abstraction works.

Paradoxically, his theory is maltreated by some researchers who don’t understand how abstraction works. It might be that they first must understand how abstraction works before they can appreciate the theory.

To my surprise, today gives more on psychology. Since high school I denote this as Ψ. I appreciate social Ψ (paper 1996) but am not attracted to other flavours of Ψ.

Last week we looked at some (neuro-) Ψ on number sense, and a few days ago at some cognitive Ψ. Dutch readers may look at some comments last year w.r.t. the work by Leiden Ψmetrist Marian Hickendorff who explains that she is no expert on math education but still presents research on it.

Today I will look at what Dutch Ψist and education researcher Ben Wilbrink states about the work by math education researcher Pierre van Hiele (1909-2010). I already observed a few days ago that Wilbrink didn’t understand Van Hiele’s theory of levels of insight. Let me become more specific.

ME and MER are a mess, but Ψ maybe too

The overall context is that math education (ME) and its research (MER) are a mess. Mathematicians are trained for abstraction and cannot deal well with real existing pupils and the empirical science of MER.

When Ψ criticises this, it is easy for it to be right.

Unfortunately, Ψ appears to suffer from its own handicap: Ψists study Ψ. They do not study ME or MER. Ψists invent their own world full of Ψ theories alpha to omega, but it is not guaranteed that this really concerns ME and MER. We saw this in (neuro-) Ψ and in cognitive Ψ in the weblog texts above. It appears also to hold for Wilbrink. Whether Ψ is a mess I cannot judge though, since I am no Ψist myself.

Ψ itself has theories about how people can be shortsighted, but we don’t need such theory. A main element in the explanation is that Ψists tend to regard mathematicians as the experts in ME, while those are actually quite misguided. A mathematician’s view on ME tends to put the cart before the horse. Then Ψ comes around to advise on ways to do this more efficiently.

This has become an issue of research integrity

I have asked Ben Wilbrink to correct some misrepresentations. He refuses.

He might have excellent reasons for this. My problem is that he doesn’t state them. I can only guess. One potential argument by Wilbrink is that he does Ψ. Perhaps he means that if I were to get a third degree, in Ψ, I might better understand his misrepresentations. This is unconvincing. A misrepresentation remains a misrepresentation, whatever the amount of Ψ you put into it. Unless Wilbrink means to say that Ψ is misrepresentation by itself. Perhaps.

But: Wilbrink’s refusal to provide answers to some questions turns this into an issue in research integrity.

Wilbrink (1944, now 70+) originally worked on the Ψ approach to test methodology (testing people rather than eggs). See for example the Item Response theory of Arpad Elo and Georg Rasch, also discussed in my book Voting Theory for Democracy. The debate in Holland on dismal education in arithmetic causes Wilbrink to emphasize the (neglected) role of Ψ. He also tracks other aspects, e.g. his website lists my book Elegance with Substance (EWS) (2009), but he makes his own selection. Perhaps he hasn’t read EWS. At least he doesn’t mention my advice for a parliamentary enquiry into mathematics education. All this is fine with me, and I appreciate much of Wilbrink’s discussions.

However, now there is this issue on research integrity.

Let us look at the details. The basic evidence is given by Wilbrink’s webpage (2012) on Pierre and Dina van Hiele-Geldof (retrieved today).

1. Having a hammer turns everything into a nail (empirics)

If you want to say something scientifically about mathematics education (ME), then you enter mathematics education research (MER).

When you meet with criticism by people in MER that you overlook some known results, then check this.

Ben Wilbrink overlooks some known results.

But he refuses to check those, even when asked to.

In particular, he states that the Van Hiele theory of levels of insight would not be empirical.

But my books and weblog texts, also this recent one, explain that it is an empirical theory. I informed him about this. Wilbrink should check this, ask questions when he doesn’t understand it, and give a counterargument if he does not agree. But he doesn’t do that. What he does is neglect MER, simply state his view, and ignore this criticism. Thus:

As a scientist Wilbrink should give a counterargument, but he merely neglects it.

3. Having a hammer turns everything into a nail (Freudenthal)

A third case in which Wilbrink (here, w.r.t. p233 ftnt 38 again) shows that he doesn’t understand the subject he is writing about is that he lumps Van Hiele and Freudenthal together, i.e. on the theory of levels. But their approaches are quite different. Van Hiele distinguishes concrete versus abstract, Freudenthal pure versus applied mathematics. Freudenthal’s conceptual error is not to see that you must already master mathematics before you can do applied mathematics. You will not master mathematics by applying it when you cannot apply it yet. Guided reinvention is a wonderful phrase, like sim sala bim.

It is a huge error by Wilbrink to not see this distinction. Wilbrink doesn’t know enough about MER. This turns from sloppy science into an issue of research integrity when he does not respond to criticism on this.

Remarkably, Wilbrink (here, on Structure and Insight) rightly concludes that Van Hiele is critical of Freudenthal and doesn’t actually belong to that approach. Apparently, it doesn’t really register. Wilbrink maintains two conflicting notions in his mind, and doesn’t care. (See also points 10 and 14 below.)

4. Having a hammer turns everything into a nail (Kant)

Wilbrink looks at ME and MER from the angle of Ψ. This looks like a valuable contribution. He however appears to hold that only Ψ is valid, and MER would only be useful when it satisfies norms and results established by Ψ. This is scientifically unwarranted.

There are cases in which Ψ missed insights from MER. See above. I have noted no Ψist making the observations that can be found in Elegance with Substance.

The Van Hiele theory is a general theory in epistemology (see here), and thus also Ψ must respect that. When Wilbrink doesn’t do that, he should give an argument.

A conceivable argument by Wilbrink might be that Van Hiele did not publish a paper in a journal on philosophy (my notation Φ), so that the sons and daughters of Kant could have hailed it as a breakthrough in epistemology. The lack of this seal of approval might be construed as an argument that Ψ and Wilbrink would be justified to neglect it. This would be an invalid argument. When Wilbrink studies MER and Van Hiele’s theory of levels, and reads about Van Hiele’s claim of general epistemological relevance, then every academic worth his or her salt on scientific methodology, and especially Ψists, can recognise it for what it is: a breakthrough.

5. Having a hammer turns everything into a nail (testing validity)

Wilbrink’s question whether there has been any testing of the validity of Van Hiele’s theory at first seems like a proper question from a Ψist, but it neglects the epistemological status of the theory. He would require from physicists that they “test” the law of conservation of energy, or from economists that they “test” that savings are what remains of income after consumption. This is quite silly, and only shows that Wilbrink did not get it. Perhaps his annoyance about Freudenthal caused him to attack Van Hiele as well? Wilbrink should correct his misrepresentation, or provide a good reason why being silly is good Ψ.

6. Having a hammer makes you require that everyone is hammering

Wilbrink suggests that Pierre and Dina van Hiele-Geldof performed “folk psychology”. This runs counter to the fact that Pierre studied Piaget, and explicitly rejected Piaget’s theory of stages. His 1957 thesis (almost 60 years ago) has three pages of references that also include other Ψ. Perhaps Wilbrink requires that they should have studied more Ψ. That might be proper if the objective were to become a Ψist. But the objective was to do MER. Dina did her thesis with Langeveld, a pedagogue, and Pierre with Freudenthal, a mathematician not yet known for the educational theories that he stole from Pierre (and distorted, but it remains stealing).

If the Ψists were to succeed in presenting a general, coherent and empirically corroborated theory, one that every academic could master in, say, a year, then perhaps Ψists might complain that it is being neglected. Now that Ψists instead create a wealth of different approaches, researchers in MER are justified in selecting what is relevant for their subject, and proceeding with the subject.

Wilbrink’s suggestion on “folk psychology” is disrespectful and slanderous.

7. Having a hammer makes you look for nails at low tide (pettifoggery)

Wilbrink reports that Dina van Geldof mentions only the acquisition of insight and does not refer to the relevance of geometry for a later career in society. Perhaps she doesn’t. Her topic of study was the acquisition of insight. Perhaps Wilbrink only makes a factual observation. What is the relevance of this? It is a comment like: “Dollar bills don’t state that people also use them in Mexico.” Since Wilbrink reports this in the context of the disrespectful “folk psychology” above, the comment only serves to downgrade the competence of Dina van Geldof, and thus is slanderous. As if she would not understand it, when Pierre explained to her that his theory of levels had general epistemological value.

8. Having a hammer makes you look for nails in 1957

Wilbrink imposes norms of modern study design and citation upon the work of the Van Hieles in 1957 (when Pierre was 48). The few references in Pierre’s “Begrip en inzicht” (2nd book, not the thesis, also translated as “Structure and Insight”) cause Wilbrink to hold, in paraphrase:

“by not referring, Van Hiele reduces his comments to personal wisdoms, by which he inadvertently downgrades them.”

This is a serious misrepresentation, even though the statement at least grants that Van Hiele’s texts were more than just personal wisdoms.

(a) It is true that Van Hiele isn’t the modern researcher who always refers and is explicit about framework and study design. What a surprise. The observation is correct that norms of presentation of results have changed. Perhaps authors in the USA 1957 already referred, but this need not have been the case in Europe. (See a discussion on this w.r.t. John Maynard Keynes.)

(b) The suggestion as if Van Hiele should have referred is false however. In that period the number of researchers and size of literature were relatively small, and an author could assume that readers would know what one was writing about. Some found it also pedantic to include footnotes.

Thus: (i) The lack of footnotes does not in any way reduce Van Hiele’s comments to “personal wisdoms”. Wilbrink is lazy and if he is serious about the issue then he should reconstruct the general state of knowledge in that period. (ii) The comment must be rewritten in what is factually correct, and the insinuation must be removed.

9. Having a hammer makes you put nails in other people’s mouths

Wilbrink refers to an issue on fractions. He quotes Van Hiele’s suggestion to use tables of proportions, which has been adopted by the Freudenthal Institute, and quotes criticism by modern mathematicians Kaenders & Landsman that those tables block insight into algebra.

This is a misrepresentation.

This is an example of a Ψist quoting mathematicians as authorities, and regarding their misunderstanding as infallible evidence. A student of MER, however, would (hopefully) see that there is more to it.

The very quote by Van Hiele contains his suggestion to look at multiplication. Indeed, the book “Begrip en Inzicht”, chapter 22, contains a proposal to abolish fractions and to deal with them algebraically – which Kaenders & Landsman may not know about.

The true criticism is that the Freudenthal Head in the Clouds Realistic Mathematics Institute mishandled Van Hiele’s work: (a) it selected only an easy part, and (b) it did not further develop Van Hiele’s real approach.

A proposal how Van Hiele’s real approach can be developed is here. I agree with Kaenders & Landsman to the extent that presenting only such tables is wrong, and that also the algebraic relation should be specified. The student then has the option to use either, and learn the shift.

Curiously, Wilbrink comments on this chapter 22 with some approval. Thus he should have seen that he provided a false link between Van Hiele on tables of proportion and the critique by Kaenders & Landsman.

10. Having a hammer makes you hate who refuses to be a nail

Van Hiele wrote:

“Many original ideas can be found in this book. I came upon them in analyzing dubious theories of both psychologists and pedagogues. It is not difficult to unmask such theories: simply test them in practice. Often this is not done because of the prestige of the theory’s proponents.”

Wilbrink’s judgement (my translation):

“The quoted opinion is incredibly arrogant, lousy, or whatever you call such a thing. Van Hiele is a mathematician, and makes the same error here that Freudenthal made throughout his later life: judging the development of psychological theory not in the context of psychology, but in the context of one’s own common sense. This clearly gives gibberish. Thus I will continue reading Van Hiele with extraordinary suspicion.”

My comments on Wilbrink:

Van Hiele was a mathematician but also a teacher, with much attention for the empirics of education. This is quite in contrast with Freudenthal who lived by abstraction. (Freudenthal did not create a professorship in math education for Van Hiele, but took the task himself.)

Van Hiele does precisely what Wilbrink requires: look at Ψ and look at empirics (in this case: practice). What happens is simply that Van Hiele then rejects Ψ, and this is what Wilbrink doesn’t swallow. While Van Hiele does MER, Wilbrink redefines this as Ψ, and then sends Van Hiele to the gallows for not sticking to some Ψ paradigm.

It is useful to mention that Van Hiele does the same thing in the preface of his thesis. He states that Ψ theories have been shown inadequate (his references are three pages) and that he will concentrate on the notion of insight as it is used in educational practice. He opposes insight to rote learning, and mentions the criterion of being able to deal with new situations that differ from the learning phase.

It is incorrect of Wilbrink to distinguish only the categories of either Ψ or “one’s own common sense” or “folk psychology”. It is quite obvious why Van Hiele cannot find in books on Ψ what he is looking for and actually does: he presents his epistemological theory of levels, and it isn’t in those books on Ψ. If Van Hiele did what Wilbrink requires, then he could not present his theory of levels, since Wilbrink’s strict requirements would force him to keep barking up the wrong tree. It beats me why Wilbrink doesn’t see that.

11. Having a hammer turns your foot into a nail

Wilbrink also quotes from page viii:

“Some psychologies lay much stress on the learning of facts. The learning of structures, however, is a superior goal. Facts very often become outmoded; they sink into oblivion because of their lack of coherence. In a structure facts have sense; if part of a structure is forgotten, the remaining part facilitates recall of the lost one. It is worth studying the way structures work because of their importance for the process of thinking. For this reason a considerable part of my book is devoted to structures.”

Wilbrink’s comment on this is (my translation):

“For me this is psychological gibberish, though I rather get what Van Hiele intends (…)”

By which it is established that Wilbrink understands gibberish and may call gibberish what he understands.

12. Having a hammer makes you run in a loop of nails

Wilbrink’s subsequent quote from Structure and Insight:

“In this book you will find a description of a theory of cognitive levels. I show you how levels of thinking demonstrate themselves, how they come into existence, how they are experienced by teachers and how by pupils. You will also see how we can take account of those levels in writing textbooks.”

Wilbrink (my translation):

“You cannot simply do this. At least Van Hiele must show by experiment that intersubjective agreement can be reached about who has demonstrated which level, and when, by operational achievements (because we cannot observe thoughts directly). (…) Indeed, at least for himself it is evident. Can this idea be transferred to others? Undoubtedly, for other people have invited him to make this English translation of his earlier book. But that is not the point. The crucial point is: does his theory survive empirical testing?”

My comment: It is a repetition of the above, but it shows that Van Hiele’s repeated explanation about the epistemological relevance of his theory for educational practice continues, time and time again, to elude Wilbrink’s frame of mind.

Of course, statistical science had already established before 1957 that the gold standard of experimental testing is the double-blind randomized trial. Instead, Van Hiele developed his theory over the course of years as a teacher in practice. He mentions didactic observations already from his time as a student in high school. But we are back in a repetitive loop when we must observe that it is false to require statistics for Van Hiele’s purposes.

13. Having a hammer makes you avoid number 13 for fear that it might make you superstitious

Hermann von Helmholtz, on the law of conservation of energy (source: wikimedia commons)

14. Having a hammer makes you miss a real nail

Wilbrink (2012) refers to the MORE study of 1993 that defined realistic mathematics education (RME) as consisting of:

It is actually nice that Van Hiele is mentioned in 1993, for at least since 2008 he isn’t mentioned in the Freudenthal Head in the Clouds Realistic Mathematics Institute wiki on RME (retrieved today). His levels have been replaced by Adri Treffers’s concept of “vertical mathematization”. Wilbrink might be happy that he doesn’t have to criticise the levels at FHCRMI anymore. It is now a vague mist that eludes criticism.

Wilbrink’s criticisms of Freudenthal’s didactic phenomenology and of Wiskobas are on target. It is indeed rather shocking that policy makers and the world of mathematics teaching went along with the nonsense and ideology. The only explanation is that mathematicians had made a chaos with their New Math. If Pierre van Hiele had been treated in a scientifically decent fashion, his approach would have won, but Freudenthal was in a position to prevent that.

Wilbrink apparently thinks that Van Hiele belongs to the Freudenthal group, even though he observes elsewhere that Van Hiele rejects this. Wilbrink assumes both options, and his mind is in chaos.

Wilbrink doesn’t see that the Freudenthal clique only mentions Van Hiele to piggyback on his success, to manoeuvre him out, and later create some matching phrases so that Van Hiele doesn’t have to be mentioned anymore.

The following is a repetition of point 5, but it can be found on this particular page & section by Wilbrink, and may deserve a comment too. Namely, regarding Van Hiele as a pillar of realistic mathematics education, Wilbrink states (my translation):

“Okay, I can infer that the theory of levels can be found in Van Hiele’s thesis, but that thesis is of a conceptual nature, and it doesn’t contain empirical research. Van Hiele doesn’t deny the latter, see the passage on his pages 188-189; but that is really rather sensational: everyone parrots his theory of levels, without looking for empirical support. Every well-thinking person, who has read his Popper for example, can see that you can do just anything with that ‘theory of levels’: It is in the formulation by Van Hiele 1958 [article following the 1957 thesis ?] a theory that excludes almost nothing. I return to this extensively on the Van Hiele page.”

My comments for completeness:

Van Hiele’s theory is as empirical as the law of conservation of energy or the economic principle that savings are the remainder of income after consumption. This is not pure mathematics but it applies to reality. Thus Van Hiele’s theory is hugely empirical. See the former weblog text.

Van Hiele’s thesis p188-189 indeed mentions the subsequent relevance of statistical testing to work out details. This is something else than testing for falsification. What Van Hiele states is not quite what Wilbrink suggests. It is correct that statistical testing was lacking, but Van Hiele does not subscribe to Wilbrink’s criterion of “empiricism”.

Van Hiele does not expect that there will be much statistical development of the levels. Therefore he judges that his theory will tend to be of more value for teachers in practical teaching.

You can do with the theory of levels as much as with the law of conservation of energy: a bit, but a crucial bit. Whoever has read Popper will see that the idea of falsification requires an amendment for definitions.

Thus, if Wilbrink had had an open mind on epistemology, he could have nailed the FHCRMI for producing nonsense and abusing the wonderful theory by Van Hiele. He missed.

But the key point is that he also misinforms his readership, and refuses to correct this after having been informed about it.

15. Having a hammer means that only masochist nails like you

Wilbrink’s discussion of Van Hiele’s thesis chapter 1 (here, “Wat is inzicht?”) shows a lack of understanding about the difference between a theorem and a proof. Euclid turns in his grave.

Wilbrink makes a distinction between “mathematics and psychology of mathematics”, without explanation or definition, perhaps in the mood of writing for Ψists who will immediately smell the nest and cheer and be happy.

Wilbrink writes “Brrrrr” (check the r’s) when Van Hiele distinguishes insight based upon inference and insight based upon non-inference. Wilbrink does not explain whether his Brrrrr is based upon inference or non-inference.

Wilbrink fears that Van Hiele will base his didactic insight upon “reason” instead of “theory with empirical testing”. He does not explain what is against reasoning and teaching experience and reading in the literature, for developing a new theory. Perhaps Wilbrink thinks that true theories can only be found in books of Ψ ?

Wilbrink’s final judgement on chapter 1 of Van Hiele’s thesis is that it is a “tattle tale”. It is a free world, and Wilbrink may think so and put this on his website. But if he wants to be seen as a scientist, then he should provide evidence. In this case, Van Hiele clearly stated that he found the Ψ theories useless, so that he returned to the notion of insight in educational practice. His discussion of what this means is clarifying. It links up with his theory of levels. Overall it makes sense. As an author he is free in how he presents his findings. He builds it up, from the concrete to the abstract. Wilbrink does not respect Van Hiele’s judgement, but provides no argument other than Brrrrr, the spraying of the label of Ψ, or invoking the spell of the double-blind randomized trial.

16. Having a hammer doesn’t make you a carpenter

Wilbrink (2012) doesn’t comment on Van Hiele’s thesis’s final chapter XVIII about the relevance of the theory of levels for epistemology. An ostrich keeps its head in the sand, where it is warm and dark, like in the womb of its egg.

Conclusion

I had seen some of Ben Wilbrink’s texts on Van Hiele before, and appreciated them for the discussion and references, since hardly anyone else in Holland pays attention to Van Hiele. However, Wilbrink’s reaction to Ohlsson, to the effect that Van Hiele would be wrong about the learning direction from concrete to abstract, caused me to make the evaluation above.

Wilbrink maltreats Van Hiele’s work. Wilbrink doesn’t know enough about mathematics education research (MER) to be able to write about it adequately. He misinforms the public.

I have asked Wilbrink to make adequate corrections, or otherwise to specify his (reply) arguments so that I could look into them. He refuses to do either. This constitutes a breach of the integrity of science.

Mathematics education research (MER) not only looks at the requirements of mathematics and the didactics developed in the field itself, but also at psychology on cognition, learning and teaching in general, at pedagogy on the development of pupils and students, and at other subjects, such as physics or economics for cases when mathematics is applied, or general philosophy indeed. The former weblog text said something about neuro-psychology. Today we have a look at cognitive psychology.

“(…) the human mind also possesses the ability to override experience and adapt to changing circumstances. People do more than adapt; they instigate change and create novelty.” (cover text)

“If prior experience is a seriously fallible guide, learning cannot consist solely or even primarily of accumulating experiences, finding regularities therein and projecting those regularities onto the future. To successfully deal with thoroughgoing change, human beings need the ability to override the imperatives of experience and consider actions other than those suggested by the projection of that experience onto the situation at hand. Given the turbulent character of reality, the evolutionary strategy of relying primarily on learned rather than innate behaviors drove the human species to evolve cognitive mechanisms that override prior experience. This is the main theme of this book, so it deserves a label and an explicit statement:

The Deep Learning Hypothesis

In the course of shifting the basis for action from innate structures to acquired knowledge and skills, human beings evolved cognitive processes and mechanisms that enable them to suppress their experience and override its imperatives for action.” (page 21)

Stellan Ohlsson’s book (2011) (Source: CUP)

Definition & Reality methodology

The induction question is how one can know that all swans are white. Even a statistical statement runs into the problem that the error is unknown. The skeptical position that one cannot know anything is too simple. Economists face the question how one can make a certain general statement about the relation between taxation and unemployment.

My book DRGTPE (2000, 2005, 2011) (PDF online) (though dating from 1990, see the background papers from 1992) proposes the Definition & Reality methodology. (1) The model contains definitions that provide for certainty. Best would be logical tautologies. Lack of contrary evidence allows room for other definitions. (2) When one meets a black “swan” then it is no swan. (3) It is always possible to choose a new model. When there are so many black “swans” that it becomes interesting to do something with them, then one can define “swan2”, and proceed from there. Another example is that in one case you must prove the Pythagorean Theorem and in the other case you adopt it as a definition for the distance metric that gives you Euclidean space. The methodology allows for certainty in knowledge but of course cannot prevent surprises in empirical application or future new definitions. The methodology allows DRGTPE to present a certain analysis about a particular scheme in taxation – the tax void – that causes needless unemployment all over the OECD countries.
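The three steps of the Definition & Reality methodology can be sketched in code, purely as a toy illustration; the bird dictionaries and the names `is_swan` / `is_swan2` are my own hypothetical constructions, not taken from DRGTPE.

```python
# Toy sketch of the Definition & Reality methodology (illustrative only).

def is_swan(bird):
    # (1) The definition provides certainty: whiteness is part of
    # what we *mean* by "swan".
    return bird["shape"] == "swan-like" and bird["colour"] == "white"

def is_swan2(bird):
    # (3) When enough black "swans" turn up to be interesting,
    # define "swan2" and proceed from there, in a new model.
    return bird["shape"] == "swan-like" and bird["colour"] == "black"

birds = [
    {"shape": "swan-like", "colour": "white"},
    {"shape": "swan-like", "colour": "black"},  # (2) by definition, no swan
]

print([is_swan(b) for b in birds])   # [True, False]
print([is_swan2(b) for b in birds])  # [False, True]
```

The point of the sketch is that the definition itself is never falsified by the black bird; what changes is which definition we find relevant enough to include in the model.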

Karl Popper (1902-1994) was trained as a psychologist, and there encountered the falsification approach of Otto Selz (1881-1943). Popper turned this into a general philosophy of science. (Perhaps Selz already thought in that direction.) The Definition & Reality methodology is a small amendment to falsificationism. Namely, definitions are always true; only their relevance for a particular application is falsifiable. A criterion for a scientific theory is that it can be falsified, but for definitions the strategy is to find general applicability and reduce the risk of falsification. As the table below indicates, Pierre van Hiele presented his theory of levels of insight as a general theory of epistemology, but it is useful to highlight his original application to mathematics education, with its special property of formal proof. Because of this concept of proof, mathematics may have a higher level of insight / abstraction overall. Both mathematics and philosophy would also do well to take mathematics education research as their natural empirical application, to avoid the risk of getting lost in abstraction.

Addendum September 7: The above assumes sensible definitions. Definitions might be logically nonsensical, see ALOE or FMNAI. When a sensible definition doesn’t apply to a particular situation, then we say that it doesn’t apply, rather than that it would be untrue or false. An example is an econometric model that consists of definitions and behavioural equations. A definition that has no relevance for the topic of discussion is not included in that particular model, but may be of use in another model.

| (Un-)certainty | Definitions | Constants | Contingent |
| --- | --- | --- | --- |
| Mathematics | Euclidean space | Θ = 2π | ? |
| Physics | Conservation of energy | Speed of light | Local gravity on Earth |
| Economics | Savings are income minus consumption | Institutional (e.g. annual tax code) | Behavioural equations |
| Mathematics education | Van Hiele levels of insight | Institutional | Student variety |

To my great satisfaction, Ohlsson (2011:234) adopts basically the same approach.

“The hypothetical process that supposedly transforms particulars into abstractions is called induction and it is often claimed to operate by extracting commonalities across multiple particulars. If the first three swans you ever see are white, the idea swans are white is likely to come to mind. However, the notion of induction is riddled with problems. How are experiences grouped for the purpose of induction? That is, how does the brain know which experiences are instances of some abstraction X, before that abstraction has been learned? How many instances are needed? Which features are to be extracted? How are abstractions with no instances in human experience such as the infinite, the future and perfect justice acquired?”

Definition of abstraction

There is an issue w.r.t. the definition of abstraction though. Compare:

My definition of abstraction is leaving out aspects, see here on this weblog, and see FMNAI. My suggestion is that thought itself consists of abstractions. Abstraction depends upon experience, since experience feeds brain and mind, but abstraction does not depend upon repeated experience.

Ohlsson (2011:16) takes it as identical to induction, which explains the emphasis upon experience in his title, with experience taken as repetition: “Memories of individual events are not very useful in themselves, but, according to the received view, they form the raw material for further learning. By extracting the commonalities across a set of related episodic memories, we can identify the underlying regularity, a process variously referred to as abstraction, generalization or induction.” For Ohlsson, thoughts do not consist of abstractions, but of representations (models): “In the case of human cognition – or the intellect, as it would have been called in the 19th century – the relevant stuff consists of representations. Cognitive functions like seeing, remembering, thinking and deciding are implemented by processes that create, utilize and revise representations.” and “Representations are structures that refer to something (other than themselves).” (page 29)

Ohlsson has abstraction ⇔ induction (commonality). For me it is dubious whether induction really exists. The two pathways are too different to use equivalence. (i) Comparing A and B, one must first abstract from A and then abstract from B, before one may decide whether those abstractions are the same, and before one can even say that A and B share a commonality. (ii) An abstract idea like a circle might cause an “inductive” statement that all future empirical circles will tend to be round, but this isn’t really what is meant by “induction” – which is defined as the “inference” from past swans to future swans.
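Pathway (i) can be made concrete in a toy sketch, which is my own illustration of the argument and not anyone's formal model: treating abstraction as "leaving out aspects", a commonality between A and B can only be stated after each has been abstracted separately.

```python
# Illustrative sketch (my construction): stating that A and B "share a
# commonality" presupposes that we have already abstracted from each.

def abstract(obj, aspects_to_drop):
    # Abstraction as "leaving out aspects": drop the listed keys.
    return {k: v for k, v in obj.items() if k not in aspects_to_drop}

a = {"colour": "white", "shape": "swan-like", "location": "lake"}
b = {"colour": "white", "shape": "swan-like", "location": "zoo"}

# The raw particulars differ; only the two abstractions can coincide.
print(a == b)                                              # False
print(abstract(a, {"location"}) == abstract(b, {"location"}))  # True
```

The sketch shows the order of operations: the comparison that "induction" relies upon is only defined once the abstraction step has already been performed, so induction cannot be what produces abstraction.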

For me, an abstraction can be a model too, and thus would fit Ohlsson’s term representation, but the fact that he chooses abstraction ⇔ induction rather than abstraction ⇔ representation causes conceptual problems. Ohlsson’s definition of abstraction seems to hinder his understanding of the difference between concrete versus abstract as used in mathematics education research (MER).

Concrete versus abstract

Indeed, Ohlsson suggests an inversion of how people arrive at insight:

“The second contribution of the constraint-based theory is the principle that practical knowledge starts out general and becomes more specific in the course of learning. There is a long-standing tradition, with roots in the beginnings of Western philosophy, of viewing learning as moving in the opposite direction, from particulars to abstractions. [ftnt 38 e.g. to Piaget] Particulars are given in perception while abstractions are human constructions, or so the ancient story goes.” (p234)

“The fundamental principle behind these and many other cognitive theories is that knowledge moves from concrete and specific to abstract and general in the course of learning.” (Ohlsson 2011:434 that states ftnt 38)

If I understand this correctly, and combine it with the earlier argument that general knowledge is based upon induction from specific memories, then we get the following diagram. Ohlsson’s theory seems inconsistent, since the specific memories must derive from specific knowledge but also presuppose it. Perhaps a foetus starts with a specific memory without knowledge, after which a loop of cumulation over time starts, as in the chicken-and-egg problem. But this doesn’t seem to be the intention.

Trying to understand Ohlsson’s theory of knowledge

There is a statement on page 31 that I find confusing, since now abstractions [inductions?] depend upon representations, while earlier they were derived from various memories.

“The power of cognition is greatly increased by our ability to form abstractions. Mathematical concepts like the square root of 2 and a four-dimensional sphere are not things we stumble on during a mountain hike. They do not exist except in our representations of them. The same is true of moral concepts like justice and fairness, as well as many less moral ones like fraud and greed. Without representation, we could not think with abstractions of any kind, because there is no other way for abstract entities to be available for reflection except via our representations of them. [ftnt 18]”

Ftnt 18 on page 402: “Although abstractions have interested philosophers for a long time, there is no widely accepted theory of exactly how abstractions are represented. The most developed candidate is schema theory. (…)”

My suggestion to Ohlsson is to adopt my terminology, so that thought, abstraction and representation cover the same notion. Leave induction to the philosophers, and look at statistics for empirical methods. Then eliminate representation as a superfluous word (except for representative democracy).

That said, we still must establish the process from concrete to abstract knowledge. This might be an issue of terminology too, but there are some methodological principles involved.

Wilbrink on Ohlsson

Dutch psychologist Ben Wilbrink alerted me to Ohlsson’s book – and I thank him for that. My own recent book A child wants nice and no mean numbers (CWNN) (PDF online) contains a reference to Wilbrink’s critical discussion of arithmetic in Dutch primary schools. Holland suffers under the regime of “realistic mathematics education” (RME) that originates from the Freudenthal “Head in the Clouds Realistic Mathematics” Institute (FHCRMI) in Utrecht. This FHCRMI is influential around the world, and the world should be warned about its dismal practices and results. Here is my observation that Freudenthal’s approach is a fraud.

Referring to Ohlsson, Wilbrink suggests that the “level theory by Piaget, and then include the levels by Van Hiele and Freudenthal too” (my translation) are outdated and shown wrong. This, however, is too hasty. Ohlsson indeed refers to Piaget (in the stated footnote 38), but Van Hiele and Freudenthal are missing. It may well be that Ohlsson missed the important insight by Van Hiele, which may explain why Ohlsson is confused about the direction between concrete and abstract.

A key difference between Van Hiele and Freudenthal

CWNN pages 101-106 discusses the main difference between Hans Freudenthal (1905-1990) and his Ph.D. student Pierre van Hiele (1909-2010). Freudenthal’s background was abstract mathematics. Van Hiele was interested from early on in education. He started from Piaget’s stages of development but rejected those. He discovered, though we may as well say defined, levels of insight, starting from the concrete to the higher abstract. Van Hiele presented this theory in his 1957 thesis – the year of Sputnik – as a general theory of knowledge, or epistemology.

Freudenthal accepted this as a thesis, but mistook it for the difference between pure and applied mathematics. When Freudenthal noticed that his prowess in mathematics was declining, he gave himself the choice of devoting the rest of his life to either the history of mathematics or the education of mathematics. He chose the latter. Hence he coined the phrase “realistic mathematics education” (RME), and elbowed Van Hiele out of the picture. As an abstractly thinking mathematician, Freudenthal created an entirely new reality, not caring about the empirical mindset and findings of Van Hiele. One should really read CWNN pages 101-106 for a closer discussion of this. Van Hiele’s theory of knowledge is hugely important, and one should be aware of how it got snowed under.

A recent twist in the story is that David Tall (2013) rediscovered Van Hiele’s theory, but wrongly holds (see here) that he himself found its general value while Van Hiele had the misconception that it applied only to geometry. In itself it is fine that Tall supports the general relevance of the theory of levels.

The core confusion by Ohlsson on concrete versus abstract

The words “concrete” and “abstract” must not be used as absolutely fixed in exact meaning. This seems to be the core confusion of Ohlsson w.r.t. this terminology.

When a child plays with wooden blocks we would call this concrete, but our definition of thought is that thinking consists of abstractions, whence the meanings of the two words become blurred. The higher abstract achievement of one level will be the concrete base for the next level. The level shift towards more insight consists of compacting earlier insights. What once was called “abstract” suddenly is called “concrete”. The statement “from concrete to abstract” indicates both the general idea and a particular level shift.
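The compacting mechanism can be illustrated with a toy sketch; the arithmetic ladder below is my own example, not Van Hiele's. Each level compacts the operation below it, and the earlier abstraction then serves as the concrete base of the next level.

```python
# Illustrative sketch of a "level shift" by compacting: what was abstract
# at one level is used as a concrete given at the next.

def add(a, b):
    # Level 1: counting on, compacted into addition.
    return a + b

def mul(a, b):
    # Level 2: repeated addition, compacted into a product.
    # Here `add` is no longer abstract; it is a concrete building block.
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

def power(a, b):
    # Level 3: repeated multiplication, compacted into a power.
    result = 1
    for _ in range(b):
        result = mul(result, a)
    return result

print(power(2, 5))  # 32
```

At each step the same relabelling occurs that the text describes: `mul` is "abstract" for a pupil still mastering `add`, but "concrete" for one working on `power`.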

Van Hiele’s theory is essentially a logical framework. It is difficult to argue with logic:

(1) A novice will not be able to prove laws or theorems in abstract mathematics, even informally, and may even lack the notion of proof. Having achieved formal proof may be called the highest level.

(2) A novice will not be able to identify properties and describe their relationships. This is clearly less complex than (1), but still more complex than (3). There is no way to go from (3) to (1) without passing this level.

(3) A novice best starts with what one already knows. This is not applied mathematics, as Freudenthal fraudulently suggested, but concerns the development of the abstractions that are available at this level. Thus: use experience, grow aware of experience, use the dimensions of text, graph, number and symbol, and develop thoughts about these.

Van Hiele mentioned five levels, e.g. with the distinction between informal and formal deduction, but this is oriented towards mathematics, and the above three levels seem sufficient to establish the generality of this theory of knowledge. A key insight is that words have different meanings depending upon the level of insight: there are at least three different languages spoken here.

Three minor sources of confusion are:

(1) Ohlsson’s observation that one often goes from the general to the specific is correct. Children may be vague about the distinction between “a man” and “one man”, but as grown-up lawyers they will cherish it. This phenomenon is not an argument against the theory of levels; it is an argument about becoming precise. It is incorrect to hold that “one man” is more concrete and “a man” more abstract.

(2) There appears to exist a cultural difference between on one side Germans, who tend to require the general concept (All men are mortal) before they can understand the particular (Socrates is mortal), and on the other the English (or Anglo-Saxons who departed from Germany), who tend to understand only the particular and to deny the general. This cultural difference is not necessarily epistemological.

(3) Education concerns knowledge, skill and attitude. Ohlsson puts much emphasis on skill. Major phases then are arriving at a rough understanding and effectiveness, practicing, mastering and achieving efficiency. One can easily see this in football, but for mathematics there is the interplay with knowledge and the levels of insight. Since Ohlsson lacks the levels of insight, his phases cover only part of the issue.

Conclusion

I have looked only at parts of Ohlsson’s book, in particular the above sections that allow a bit more clarity on its relevance w.r.t. Van Hiele’s theory of levels of insight. Please understand my predicament. Perhaps I will read more of Ohlsson’s book later on, but this need not be soon.

In mathematics education research (MER) we obviously look at findings of cognitive psychology, but this field is large, and it is not the objective to become a cognitive psychologist oneself.

When cognitive psychologists formulate theories that include mathematical abstraction, as Ohlsson does, let them please look at the general theory of knowledge by Pierre van Hiele, for this will make their work more relevant for MER.

Perhaps cognitive psychologists should blame themselves for overlooking the theory by Pierre van Hiele, but they should also blame Hans Freudenthal, and support my letter to IMU / ICMI asking to correct the issue. They may work at universities that also have departments of mathematics and sections that deal with MER, and they can ask there what happened.

When there is criticism of the theory by Van Hiele, please look first at the available material. There are summary statements on the internet, but these are not enough. David Tall looked basically at one article and misread a sentence (and his misunderstanding was still inconsistent with the article). For some references on Van Hiele look here. (There is the Van Hiele page by Ben Wilbrink, but, as said, Wilbrink doesn’t understand it yet.)