Thursday, March 17, 2011

In its 2010 Speech from the Throne, the Government of Ontario, Canada announced its intention to create an Ontario Online Institute (OOI) to support online learning in the province as part of the Open Ontario Plan. I was asked to provide some initial recommendations for that initiative, in the form of responses to five questions. This is my response. (PDF Version)

What is the biggest challenge facing online and distance learning in general today?

The biggest challenge for online and distance learning is the challenge the field was developed to address in the first place: the provision of access to learning opportunities to those who would not otherwise be able to obtain them. It is a fact that even in a country with as many opportunities as Canada, there are many people who would like to be able to obtain a higher education, but who are unable to because of time, resources or distance. Online and distance learning represents our best, and probably only, solution to this demand.

A secondary but no less pressing issue is the cost of providing access to education generally. The great expansion of Canada's educational sector that has enabled a full 65 percent of the population under 44 to obtain a post-secondary diploma[1] is now under increased stress because of the need to reduce federal and provincial budgetary expenditures. This stress extends across the full educational spectrum, from kindergarten to graduate programs, and in all fields. Though some feel distance and online learning will not reduce costs[2], many are looking to new technologies not only to increase access but also to reduce the load borne by government. The alternative, as we have already seen, is increased tuition, reduced access and reduced services.

This creates a central issue revolving around the strategic design of distance and online learning. If these are viewed as simply the replication of existing educational design in an online environment, it is unlikely costs will be decreased, which decreases the likelihood that they will support any great degree of increased access at all. It is therefore only through the creation of new delivery models that e-learning will achieve both the primary and secondary goal. The challenge of defining this new delivery model is the central issue of the field, and most discussion and research revolves around it.

Without entering into a detailed discussion, the following are some of the approaches and ideas that have been advanced in this direction:

-Open educational resources (OERs) - the suggestion is that the production and distribution of freely accessible learning support materials and services will reduce the overhead created by reliance on commercially published content
-Massive open online courses (MOOCs) - the suggestion is that by opening enrollment in massive online courses, the potential for student co-facilitation can reduce the overhead involved in teaching small and institutionally-bound classes
-Rationalization - the suggestion is that the creation of online courses and programs can eliminate the need to offer the same program in multiple institutions, and make less popular courses (especially at the K-12 level) more widely accessible to geographically dispersed populations

Countering these proposals are arguments that point to the cost of e-learning technology, especially to students, the cost of service provision and bandwidth, the need for additional training and support required for instructional staff, mechanisms to ensure the appropriateness and quality of learning materials, and alignment with national and international standards and curricula.

What is the biggest opportunity that online and distance learning in general has today?

While post-secondary educational attainment in Canada is very high, globally it is much lower[3]. At the same time, internet access is expanding rapidly across the globe, with 5.1 billion mobile subscribers and 1.6 billion internet users[4]. This creates not only an opportunity to provide increased access to learning opportunities, but also much greater potential for online and distance learning providers to expand their existing markets. Already we have seen a rapid expansion of the corporate distance and online learning sector. The market for self-paced learning materials will double to $49 billion over the next five years[5].

It is not difficult to reconcile the rapidly expanding commercial e-learning market with the publicly-mandated (and publicly-funded) K-12 and post-secondary system. The former, simply, requires the latter. The existence of a continually expanding global market in online and distance learning products and services depends crucially on a market well-positioned to consume those products, which presupposes a certain level of education to begin with. In essence, education and educational services represent one of the largest examples of the value-add online services distribution model. Just as Skype offers a free basic service to all customers, education providers in general offer a free basic service to all potential learners[6].

If we understand the value of online and distance learning in this way - as the creation of the essential service that makes possible a commercial marketplace of enhanced products and services - then it becomes clear that the greatest opportunity for online and distance education today is the possibility of the creation of that marketplace, not only in Canada but globally. There is a clear link between educational attainment and economic activity generally[7]. Increasing our capacity as an education provider increases markets not only nationally but also globally.

Though the provision of accessible online and distance learning is often depicted as though it were a charity[8] it is in fact an efficient and effective economic development strategy. The development of expertise, the growth of target markets, and the preparation of a recipient population all flow from the provision of basic and fundamental learning services and products. The first jurisdiction that successfully leverages its capacity to deliver an effective and low-cost online learning model to its own population will be in a position to offer a wide range of goods and services globally.

Keeping in mind the biggest challenge and the biggest opportunity for online and distance learning today, what is the one overriding step that Ontario ought to take as it attempts to take its online learning system to the next level?

It should be possible to obtain an education, from kindergarten to a graduate degree, and be recognized for that achievement, without once ever having to step into a school or attend an in-person class. That is not to say that every student could, would or should learn in this way. There is no end to the number of studies asserting that students are unable to manage their own learning by themselves[9]. But such a change in the depiction of the default model of learning support constitutes an essential first step.

Such a change represents a transition in outlook from one of scarcity of educational services and resources to one of abundance. It represents a change of outlook from one where education is an essential service that must be provided to all persons, to one where the role of the public provider is overwhelmingly one of support and recognition for an individual's own educational attainment. It represents a shift from a centrally-defined determination of how an education can be obtained to one that offers choices, resources and assessment. The Canadian educational system is already moving in this direction[10]. The current proposal represents an alignment of resources around the terminus.

In order to establish the possibility of a completely self-managed education, two major first steps are required:

-The provision of full curricular resources - class outlines, teaching aids and assessment tools, readings and texts, library support, learning tools and activities, and sociality support - in an open online environment freely accessible by any learner.
-The establishment of a mechanism and clearly defined metrics for a system-wide recognition of learning, similar to what exists today under the heading of 'prior learning recognition and competency assessment'[11].

While it is clear that not all, and not even the majority, could obtain an entirely self-managed education through such a mechanism, the remainder of the educational system can and should be viewed as support for this core. In particular:

-Schools and teachers would have the option of accessing and using the freely available curricular resources to support learning (note that it would be contrary to the philosophy of enablement to require that they use these resources; note also that voluntary use is consistent with a recognition for and encouragement of contributions to the curricular resources bank)
-The mechanism for system-wide assessment and competency recognition would be made available to all students, whether or not enrolled in an in-person school. Note that this is not the imposition of system-wide standardized tests. A wide range of possible recognitions is anticipated, generally focused on specialized domains or disciplines.
-A regulated infrastructure of commercial supports and services may be encouraged to develop around the core infrastructure. Services similar to home-study support and test preparation services today may be established. Self-study learning support materials may be published.

Ontario is already recognized as having one of the best educational systems in the world, its graduates among the best-educated citizens of any nation or any era. The development of a system as described above, staged through the implementation of opportunities and supports for the existing system, enables the provision of an 'Ontario education' to citizens of any nation around the world.

Conversely what is the one thing it should absolutely avoid?

The temptation to manage, and especially to manage for outcomes, in the provision of any good or service, is overwhelming. It should and must be avoided.

One of the great strengths, not only of the Canadian educational system, but also systems that fare equally well in international testing, is the generally decentralized nature of the system. Educators and school boards in places like Canada and Finland have a high degree of latitude in how they manage learning and support[12]. Respect for excellence and equity are key to their success[13].

None of this is to say that providers ought not be sensitive to outcomes and attentive to the facilitation of improved outcomes. What makes systems like Ontario's and Finland's worth emulating is the fact that they produce highly competent graduates, capable of excellence not only on standardized international tests but also of flourishing in a complex technologically advanced environment. The success of any educational system is important and must not be ignored. Rather, this caution applies only to a certain approach with respect to that success.

Organizations such as the Canadian military[14] and IBM[15] are realizing that in order to manage a complex system it is not productive to impose direction from the top, even if that direction is motivated to achieve positive outcomes (and it is even less productive when those at the top are not so altruistically motivated). In order to be successful, "Command and control corporations are no longer going to be there. People need to be freed to share what they know." It is through this rapid sharing and response to dynamic and changing circumstances that decentralized and locally managed systems are able to adapt and achieve excellence. Management for outcome does not make the outcome more likely; it makes it less likely.

Which current or emerging technology has the potential of radically transforming online and distance learning?

As predicted in the early days of online learning[16], the personal access device, or 'pad', is proving to be transformative. Apple's release of the iPad in 2010, combined with this year's release of the iPad 2, has resulted in what might be called a tablet boom[17]. In addition to the iPad, Motorola is shipping the Xoom and Samsung is producing the Galaxy Tab, both of which run on Google's Android operating system. Amazon continues to produce the Kindle while Barnes and Noble distributes the Nook. The leading Canadian tablet is RIM's PlayBook.

The impact has been immediate, widespread and game-changing. As one small example, the e-textbook market, which was 1.5 percent of the overall market a year ago, has doubled this year and will reach 25 percent of the market within five years[18]. Far more than simply an e-book reader, the iPad already supports hundreds of educational applications, ranging from games to communication apps to organizers to math and music[19]. It is not possible to measure how much learning is taking place using these new platforms, as the bulk of it is informal. It is however hard to believe it is anything but substantial.

The arrival of pad computing is also significant in that it represents the first major realignment of the technology infrastructure in ten years. From 1995 to 2010 most computer users lived and worked in an environment dominated by the Mac and the PC, the desktop and the laptop. In this environment operating systems manage system communication and storage, and applications are loaded and installed locally, using (and dependent on) the operating system for most user interface and functionality.

The new pad computers change the environment in some significant ways:

-Applications and data are no longer stored locally; more and more they are stored in personally managed or centrally managed services. Consequently, we have begun to shift from localized computing to distributed computing.
-The Windows and Macintosh operating systems are no longer dominant; designers must now work on numerous mobile platforms, including iOS (Apple), Android (Google), WebOS (HP), Windows, Blackberry and more[20]. This is pushing developers toward a common platform composed of HTML5, JavaScript and CSS3, a network-based operating system rather than a platform-based operating system.
-As a result of the increasing prevalence of mobile platforms, and greatly accelerated by services such as Facebook and Twitter, sociality online is an increasingly important feature of online applications and services, changing the meaning of concepts such as 'self study' entirely, from being an experience had by one person, alone, to an experience enjoyed with a network of friends.

If it were not evident before the arrival of pad computers, it must certainly be evident now that network technology will change learning in deep and fundamental ways. Even as learning becomes more individualized and more localized, reaching out of the schools and into the lives, workplaces and hobbies of individual learners, it also becomes more network-based, more dependent on a wide variety of online sites and services, some (such as Facebook and Google) very centralized, others (such as Skype and SMS) much more distributed.

A single model for online learning, even were it desirable, will not be attainable in this environment. The best that funders and providers can hope for is to influence the system, not through regulation and control, but rather by the provision of resources and services that will be both supportive of public and social policy objectives, and found to be useful by the recipient population.

Wednesday, March 09, 2011

Responding to the enquiries posted to this blog post. Originally intended as a comment, but Blogger very arbitrarily limits the size of comments, and I can't seem to change that.

I honestly don't think that most people think of the definition of 'group' when they use it, beyond the obvious connotations of 'more than one person'. When I've asked people, they typically talk about what the members have in common, rather than links between members. So I'm not sure whether my use is in fact an unusual definition of the word. It may be that when critics evaluate their definitions in the light of what I say, they find the same issues, but rather than abandon their use of the word 'group' they reconceptualize it. In any case, the fact of the matter regarding the common meaning of the word 'group' is a matter for empirical study. For myself, I am much less interested in pushing for some sort of commonly accepted definition of the word than I am in getting at the underlying concepts - 'associations based on sameness' vs 'associations based in interactivity'.

On the matter of convergence of vocabulary, I admit that this proposition was the one that gave me the most pause as I wrote the post above. Is progress generated by convergence on common vocabulary or method, etc?

As with everything, it depends on what you mean by 'progress'. It seems to me that many people define 'progress' (if only implicitly, and at least in part) as 'convergence on method, vocabulary, etc'. In such a case the proposition becomes tautological - 'the generation of x results in P because P is x'. And as tautology, such a statement can be effectively removed from consideration.

Therefore, the underlying question is, "does the generation of x result in P for cases (or instances) where P is not x?" Does commonness of method or vocabulary result in progress where progress is *not* defined or composed in some way of commonality of vocabulary or method, etc.? That is a much harder case to be made, and my assertion in the post above is essentially the claim that the case cannot be made.

Why would I say this?

Well, let's look at what 'progress' is a property of, and let's look at what 'commonality' is a property of. 'Progress' is a network phenomenon - it refers to the success of the entire network in obtaining some result. In the case of a social network, 'progress' describes the advancement of society. In the case of a neural network, 'progress' describes the advancement of the person.

In the case of 'commonality', however, we are not talking about a property of the network as a whole, but rather, of the entities that compose the network. Consider, for example, the most oft-used expression of commonality, "convergence on common vocabulary/method etc." We are talking about "use of vocabulary/method etc." by person A as compared to "use of vocabulary/method etc." by person B. In the case of neural networks, the terms 'vocabulary' and 'method' barely apply (in my own mind, therefore, I adopt a much wider construal of terms like 'vocabulary' and 'method', but let's not go there). We are talking about commonality of neural states.

Now, with these terms more precisely described, we can identify a clearer role for 'commonality', or as I'll more accurately describe it here, 'sameness'. There are two key roles:

1. 'Sameness' of neural state as regards learning theory. Instances of Hebbian associationism ("what fires together, wires together") imply that some aspect of a neural state must be the same for two neurons (ie., firing) in order for a connection to be formed. But note that this kind of sameness applies not to an internal sameness - it doesn't matter *what* caused the neuron to fire - but only to an external sameness.

2. 'Sameness' as regards physical substrate required for the possibility of communication. If entity A is a neuron and entity B is a donkey, they are not connecting with each other, because the one does not have the capacity to receive signals from the other. A *physical* compatibility is required for communications. But note that this is a very distinct type of commonality, requiring only *fit* and not identity. A lightbulb need not be the *same* as the socket to connect, only compatible. That said, there are some properties that are indeed the same - 'diameter' and 'thread size', in the case of a light bulb, for example. I've discussed this kind of sameness before. I call it "syntactic" sameness, as opposed to "semantic" sameness - a sameness that addresses only the structure, not the underlying meanings.

Applied to social networks, these two types of sameness amount to (1) expressions or external behaviour, and most significantly, productions of external artifacts. This sort of sameness is the sort that allows for stigmergy. And (2) uses of the same physical medium - spoken word (and the shape and structure of those sounds), written forms, etc. Note that two people can 'communicate' even if they have different 'meanings' (or 'truth values', etc) for two words, provided that (a) they behave the same way when the words are uttered (aka 'language games'), and (b) they use the same words.

All of this has led me to devalue what we think of as a 'common vocabulary', in any ordinary meaning of the term. If we do the same performance with the same entities, it doesn't matter what we think about those entities. You have your interpretation, I have mine, and the world goes merrily along until the inevitable divergence of performance.
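The Hebbian rule invoked above ("what fires together, wires together") can be sketched in a few lines of code. This is an illustrative toy only; the learning rate and firing patterns are my own assumptions, not part of any particular neural model:

```python
# Minimal Hebbian update: when two units fire together, the
# connection between them is strengthened; firing alone changes nothing.
# The rate value (0.1) is an arbitrary illustrative choice.

def hebbian_update(weight, pre_fires, post_fires, rate=0.1):
    if pre_fires and post_fires:   # "what fires together, wires together"
        weight += rate
    return weight

w = 0.0
# External sameness only: it does not matter *why* each unit fired,
# only that both fired at the same time.
for pre, post in [(True, True), (True, False), (True, True), (False, False)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # prints 0.2 - two co-firing events strengthened the link
```

Note how the update inspects only the external firing state, never any internal cause - which is exactly the point about external rather than internal sameness.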

Monday, March 07, 2011

The best learning I've ever done has been on my own, working through a hard problem, by reading and then writing, either text, or software, or derivations. This is also the hardest learning I've done; most of the people I could talk to don't understand it well enough to explain it, and attempting to work it through leads to more confusion than clarity.

Of course, that's just me. And I wouldn't think that what was best for me was best for everyone.

Pat Parslow replied,

I have certainly had some great learning experiences that way too, and most often, 'hardest' is strongly correlated with 'best' in terms of learning outcomes. It may not always correlate quite as well with the idea of 'best experience related to learning', which is where I would say dialogue has been the most important element for me. But on the couple of occasions I have managed to get a prototype conversational agent to chat to me about what I am trying to learn, it has been immense fun and quite productive too, so the people aren't necessarily part of that equation for me.

Here are my thoughts, expanded:

I think one of the things about working with software is that the learning process can be very iterative. Code something, try it, see if it works, code something, try it, see if it works, code something, try it, see if it works. This back-and-forth with the machine is characteristic of the hard learning that I have done.
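That code-try-see loop can be made concrete with a toy example (the function and its checks are hypothetical, chosen only to show the cycle):

```python
# The code-try-see cycle: write a candidate, run it against
# quick checks, and let any failures drive the next revision.

def word_count(text):
    # Second iteration: a first attempt using text.split(" ")
    # failed on runs of whitespace; plain split() fixed it.
    return len(text.split())

# "Try it, see if it works" - the machine gives immediate feedback.
checks = [("one two three", 3), ("  spaced   out  ", 2), ("", 0)]
for text, expected in checks:
    assert word_count(text) == expected
print("all checks pass")
```

Each failed assertion plays the role of the machine talking back, and each revision is one turn of the iterative loop described above.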

This also applies, though less obviously so, in other logico-linguistic domains. Derivations either work, and can be known to work, or they fail. Hence, working on a series of difficult problems can provide the same result. In the case of writing, the evidence of success is less sharply defined, but can be observed nonetheless (for example, by reading the paragraph aloud to test for flow).

There is typically a progression through such iterations. Sometimes the progression is explicitly designed in the assignment itself, as in a series of logic problems or scientific experiments. Sometimes it is defined in terms of overall difficulty, as in the offering of increasingly talented opponents in a game or sport, or in the accomplishment of increasingly difficult welding or woodworking tasks. Sometimes the progression is defined through movement through social circles, such as the progression toward increasing involvement in a scientific or academic community.

The concept of progression in teaching is much less well-defined. Part of this is due to the many roles teachers play, and part of it is due to the variable nature of the object of enquiry. In teaching a student there isn't as clear a 'right and wrong' as there is in the case of solving logic problems or programming a computer; student success or failure is informed by numerous factors; and the students change each year without necessarily increasing or decreasing the challenge. I think this ambiguity in teaching leads professionals to seek measures of progression through other methods, for example, interaction with other teachers.

My own feeling concerning the iteration of learning with a group is that it is very easy (and unfortunately common) to take the wrong measures as indicators of success. In some cases, the indicators can be very clear, as they would be for an individual - solving scientific problems, winning sporting events, building bridges - but in others the indicators are much more vague. Normally a discipline would be defined by its standards for success, but in the case of those with vague standards, the discipline is defined by the domain of enquiry. What is 'success', and what is 'progression through iterations', in religious studies?

My own observation is that 'progression' in these disciplines is often defined by adherence to the standards of practice within the discipline. It is this standard of progress, I believe, that appeals to precisely the wrong set of indicators of success. For example, consider the following standards of success:
- fluency with and use of a certain vocabulary
- exposure to and familiarity with a standard body of literature
- conduct of enquiry in a generally accepted form of discourse
- acceptance of an underlying set of principles

In other work, I have characterized these as typical of what I called a 'group', and suggested that the standard for success in such environments is best characterized in terms of sameness with other members of that environment. Sameness of vocabulary, sameness of curriculum, sameness of process, and sameness of belief set. Among other things. Naturally the opposition focused on the suggestion that my use of the word 'group' was heterodox, rather than on the underlying set of propositions I was trying to express. This is, I would argue, characteristic of this misplaced set of standards for success.

If we look at the other domains where there is some less ambiguous standard for success, it may be argued, then we also see these same practices: agreement on vocabulary, curriculum, discourse and method. This may be true, but this is the *result* of successful enquiry, not the cause of it, and progress in these disciplines is typically accompanied (cf. Kuhn, Laudan) by *change* in these practices, not adherence to them. This is *especially* the case when we consider the progress of an individual's own learning; we cannot imagine how a child could possibly be successful in school using only the vocabulary, curriculum, methods and beliefs he or she had at the age of four.

Learning is change, not sameness, and this is as true for a society as it is for an individual. There is no single 'path' or progression that creates learning by bringing a student into the same place as everybody else. At the very best, achievement of sameness can be seen only as an intermediate step in that progress, analogous to mastering the bubble sort, successfully executing a riposte, or building a chest of drawers. But learning has occurred only if the student is able to go beyond that measure of sameness, and create something unique. And success in the field may be obtained whether or not a person has first achieved that degree of sameness, however likely or unlikely that may be.
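For concreteness, the bubble sort mentioned above, the canonical example of such an intermediate mastery step, a technique every learner is expected to reproduce identically before moving beyond it:

```python
def bubble_sort(items):
    # Repeatedly sweep the list, swapping adjacent out-of-order
    # pairs; each pass bubbles the largest remaining item to the end.
    items = list(items)  # work on a copy, leaving the input unchanged
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # early exit when a pass makes no swaps
            break
    return items

print(bubble_sort([5, 2, 9, 1]))  # prints [1, 2, 5, 9]
```

Every student writes essentially this same code; the learning the post argues for begins where this sameness ends.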

That is what probably most underlies my unease with group interaction as an essential educational principle. It is a methodology that, while it promotes the sort of iterative behaviour that characterizes successful disciplines, nonetheless risks miring students in an environment where success is measured inappropriately, and in the worst case scenario, risks miring an entire discipline in an unsuccessful methodology.

I believe many academic disciplines - philosophy, education, economics - are in this situation now. Which is why I believe that success in these disciplines will come only as a result of breaking out of the conventions current within those domains, rather than continuing to adhere to them.

Saturday, March 05, 2011

David T. Jones asks, "Does connectivism conflate or equate the knowledge/connections with these two levels (“neuronal” and “networked”)? Regardless of whether the answer is yes or no, what are the implications that arise from that response?"

The answer to the first question is 'yes', but with some caveats.

The first caveat is expressed in several of my papers. It is that historically we can describe three major types of knowledge:
- qualitative - ie., knowledge of properties, relations, and other typically sensible features of entities
- quantitative - ie., knowledge of number, area, mass, and other features derived by means of discernment or division of entities within sensory perception
- connective - ie., knowledge of patterns, systems, ecologies, and other features that arise from the recognition of interactions of these entities with each other

(There is an increasing effect of context-sensitivity across these three types of knowledge. Sensory information is in the first instance context-independent, as (if you will) raw sense data, but as we begin to discern and name properties, context-sensitivity increases. As we begin to discern entities in order to count them, context-sensitivity increases further. Connective knowledge is the most context-sensitive of all, as it arises only after the perceiver has learned to detect patterns in the input data.)

The second caveat is that there is not one single domain, 'knowledge', and, correspondingly, not one single entity, the (typically undesignated) knower. Any entity or set of entities that can (a) receive raw sensory input, and (b) discern properties, quantities and connections within that input, can be a knower, and consequently, know.

(Note that I do not say 'possess knowledge'. To 'know' is to be in the state of perceiving, discerning and recognizing. It is the state itself that is knowledge; while there are numerous theories of 'knowledge of' or 'knowledge that', etc., these are meta-theories, intended to assess or verify the meaning, veracity, relevance, or some other relational property of knowledge with respect to some domain external to that knowledge.)

Given these caveats, I can identify two major types of knowledge, specifically, two major entities that instantiate the states I have described above as 'knowledge'. (There are many more than two, but these two are particularly relevant for the present discussion).

1. The individual person, which senses, discerns and recognizes using the human brain.

2. The human society, which senses, discerns and recognizes using its constituent humans.

These are two separate (though obviously related) systems, and correspondingly, we have two distinct types of knowledge, what might be called 'personal knowledge' and 'public knowledge' (I sometimes also use the term 'social knowledge' to mean the same thing as 'public knowledge').

Now, to return to the original question, "Does connectivism conflate or equate the knowledge/connections with these two levels ('neuronal' and 'networked')?", I take it to *mean*, "Does connectivism conflate or equate personal knowledge and public knowledge?"

Are they the same thing? No.

Are they each instances of an underlying mechanism or process that can be called (for lack of a better term) 'networked knowledge'? Yes.

Is 'networked knowledge' the same as 'public knowledge'? No. Nor is it the same as 'personal knowledge'. By 'networked knowledge' I mean the properties and processes that underlie both personal knowledge and public knowledge.

Now to be specific: the state we call 'knowledge' is produced in (complex) entities as a consequence of the connections between and interactions among the parts of that entity.

This definition is significant because it makes it clear that:
- 'knowledge' is not equivalent to, or derived from, the properties of those parts.
- 'knowledge' is not equivalent to, or derived from, the numerical properties of those parts

Knowledge is not compositional, in other words. This becomes most clear when we talk about personal knowledge. In a human, the parts are neurons, and the states or properties of those neurons are electro-chemical potentials, and the interactions between those neurons are electro-chemical signals. Yet a description of what a person 'knows' is not a tallying of descriptions of electro-chemical potentials and signals.

Similarly, what makes a table 'a table' is not derivable merely by listing the atoms that compose the table, and there is no property, 'tableness', inherent in each of those atoms. What makes a table a 'table' is the organization and interactions (which produce 'solidity') between those atoms. But additionally, ascription of this property, being a 'table', is context-dependent; it depends on the viewer being able to recognize that such-and-such an organization constitutes a table.

A lot follows from this, but I would like to focus here on what personal knowledge and public knowledge have in common. And, given that these two types of knowledge result from the connections between the parts of these entities, the question now arises, what are the mechanisms by which these connections form or arise?

There are two ways to answer this:
- the connections arise as a result of the actual physical properties of the parts, and are unique to each type of entity. Hence (for example) the connections between carbon atoms that arise to produce various organizations of carbon, such as 'graphite' or 'diamond', are unique to carbon, and do not arise elsewhere
- the connections arise as a result of (or in a way that can be described as (depending on whether you're a realist about connections)) a set of connection-forming mechanisms that are common to all types of knowledge

Natural science is the domain of the former. Connective science (what we now call fields such as 'economics', 'education', 'sociology') is the domain of the latter.

One proposition of connectivism (call it 'strong connectivism') is that what we call 'knowledge' consists of connections created solely as a result of the common connection-forming mechanisms, and not as a result of the particular physical constitution of the system involved. Weak connectivism, by contrast, allows that the physical properties of the entities create connections, and hence knowledge, unique to those entities. Most people (including me) would, I suspect, support both strong and weak connectivism.

The question "Does connectivism conflate or equate the knowledge/connections with these two levels" thus now resolves to the question of whether strong connectivism is (a) possible, and (b) part of the theory known as connectivism. I am unequivocal in answering 'yes' to both parts of the question, with the following caveat: the connection-forming mechanisms are, and are describable as, physical processes. I am not postulating some extra-worldly notion of 'the connection' in order to explain this commonality.

These connection-forming mechanisms are well known and well understood, and are sometimes rolled up under the heading of 'learning mechanisms'. I have at various points in my writing described four major types of learning mechanisms:

- Hebbian associationism, where connections form between entities that are active at the same time
- contiguity, where connections form between entities that are near to each other
- back propagation, where connections are adjusted in response to feedback sent back through the network
- Boltzmann mechanisms, where the network settles into its most stable configuration

There may be more. For example, Hebbian associationism may consist not only of 'birds of a feather link together' but also associationism of compatible types, as in 'opposites attract'.
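The 'birds of a feather' idea can be made concrete with a minimal sketch (my own illustration, in Python; the function names, learning rate and activation values are hypothetical, not from any particular connectionist model): a connection weight between two units is strengthened in proportion to their co-activation, so units that are repeatedly active together become strongly linked.

```python
# A minimal Hebbian update rule: the weight between two units
# grows in proportion to their co-activation.

def hebbian_update(weight, a, b, rate=0.1):
    """Strengthen the connection when both units are active together."""
    return weight + rate * a * b

# An 'opposites attract' variant would instead reward anti-correlation,
# strengthening the link when one unit is active and the other is not.
def anti_hebbian_update(weight, a, b, rate=0.1):
    return weight + rate * a * (1.0 - b)

# Two units that fire together repeatedly: the connection strengthens.
w = 0.0
for _ in range(10):
    w = hebbian_update(w, 1.0, 1.0)
print(round(w, 2))  # 1.0

# A unit that never fires forms no association: the weight is unchanged.
print(hebbian_update(0.5, 0.0, 1.0))  # 0.5
```

The point of the sketch is only that connection formation here depends on the *pattern* of activity, not on any intrinsic property of the individual units.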

What underlying mechanisms exist, what are the physical processes that realize these mechanisms, and what laws or principles describe these mechanisms, is an empirical question. And thus, it is also an empirical question as to *whether* there is a common underlying set of connection-forming mechanisms.

But from what I can discern to date, the answer to this question is 'yes', which is why I am a strong connectivist. But note that it does place the onus on me to actually *describe* the physical processes that are instances of one of these four mechanisms (or at least, since I am limited to a single lifetime, to describe the conditions for the possibility of such a description).

There is a separate and associated version of the question, "Does connectivism conflate or equate the knowledge/connections with these two levels," and that is whether the principles of the *assessment* of knowledge are the same at both levels (and all levels generally).

There are various ways to formulate that question. For example, "Is the reliability of knowledge-forming processes derived from the physical constitution of the entity, or is it an instance of an underlying general principle of reliability?" And, just as above, we can discern a weak theory, which would ground reliability in the physical constitution, and a strong theory, which grounds it in underlying mechanisms (I am aware of the various forms of 'reliabilism' proposed by Goldman, Swain and Plantinga, and am not referring to their theories with this incidental use of the word 'reliable').

As before, I am a proponent of both, which means there are some forms of underlying principles that I think inform the assessment of connection-forming mechanisms within collections of interacting entities. Some structures are more (for lack of a better word) 'reliable' than others.

I class these generally as types of methodological principles (the exact designation is unimportant; Wittgenstein might call them 'rules' in a 'game'). By analogy, I appeal to the mechanisms we use to evaluate theories: simplicity, parsimony, testability, etc. These mechanisms do not guarantee the truth of theories (whatever that means) but have come to be accepted as generally (for lack of a better word) reliable means to select theories.

In the case of networks, the mechanisms are grounded in a distinction I made above, that knowledge is not compositional. Mechanisms that can be seen as methods to define knowledge as compositional are detrimental to knowledge formation, while mechanisms that define knowledge as connective are helpful to knowledge formation.

I have attempted to characterize this distinction more generally under the heading of 'groups' and 'networks'. In this line of argument, groups are defined compositionally (sameness of purpose, sameness of type of entity, etc.), while networks are defined in terms of the interactions. This distinction between groups and networks has led me to identify four major methodological principles:

- autonomy - each entity in a network governs itself
- diversity - entities in a network can have distinct, unique states
- openness - membership in the network is fluid; the network receives external input
- interactivity - 'knowledge' in the network is derived through a process of interactivity, rather than through a process of propagating the properties of one entity to other entities

Again, as with the four learning mechanisms, it is an empirical question as to *whether* these processes create reliable knowledge-forming networks (I believe they do, based on my own observations, but a more rigorous proof is desirable), and I am by this theory committed to a description of the *mechanisms* by which these principles engender the reliability of networks.

In the case of the latter, the mechanism I describe is the prevention of 'network death'. Network death occurs when all entities are of the same state, and hence all interaction between them has either stopped or entered into a static or steady state. Network death is the typical result of what are called 'cascade phenomena', whereby a process of spreading activation eliminates diversity in the network. The four principles are mechanisms that govern, or regulate, spreading activation.
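A cascade of this kind can be illustrated with a small sketch (my own hypothetical example in Python, not from the original; the ring topology, node count and update rule are assumptions chosen for simplicity). Each node simply adopts the majority state of its neighbours, a pure propagation of properties with no autonomy or diversity, and the network quickly collapses into a single uniform state: network death.

```python
# Illustration of a cascade: each node copies the majority state of its
# neighbours (ties resolved to active), with no autonomy or diversity.

def step(states, neighbours):
    """One round of spreading activation across the network."""
    new = []
    for i in range(len(states)):
        votes = [states[j] for j in neighbours[i]]
        new.append(1 if sum(votes) * 2 >= len(votes) else 0)
    return new

# A small ring network: each node listens to its two adjacent nodes.
n = 8
neighbours = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
states = [1, 1, 1, 0, 1, 1, 0, 1]  # some diversity to begin with

for _ in range(10):
    states = step(states, neighbours)

# The cascade has eliminated diversity: every node is in the same state,
# so further interaction changes nothing.
print(states)
```

In the terms used above, this toy network violates autonomy (nodes do nothing but copy), diversity (differences are erased), and openness (no external input ever perturbs it), so the uniform end state is the expected outcome.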

So, the short answer to the first question is "yes", but with the requirement that there be a clear description of exactly what it is that underlies public and personal knowledge, and with the requirement that it be clearly described and empirically observed.

I will leave the answer to the second question as an exercise for another day.