Top down or bottom up?

You need to deal with the claim that atheism is more associated with ratio-analytic thinking styles.

Rationality is useless if its arguments are not sound. This is what Martin Luther meant when he called reason a “whore”. Pick the wrong premises, and rationality is utterly screwed. Therefore, the mere fact that someone is “rational” means absolutely nothing about whether that person is well-connected to reality.

Which is better:

(1) an argument whose conclusion is true but whose reasoning is invalid (2) an argument which is valid but not sound

? The answer, of course, is neither/unknown.

Now, if we go back to my original point, the idea of thinking styles IS TOTALLY important and was actually where I wanted the debate to go.

Jonathan Pearce: I think the goal is to have a bottom up worldview, where you establish the building bricks and see what building arises. I think top down approaches are dangerous, and I think this is what many people, particularly theists, do. They start with a conclusion, and massage evidence to fit. I will happily throw out conclusions, as I have done many times in the past, if that is where the path leads.

This bottom-up vs. top-down discussion does seem to be very important to thinking style, no? I would also point out Wikipedia’s top-down and bottom-up design for your consideration. I have quite a bit of knowledge about that, being a software architect. I could also throw in Richard Feynman’s “What I cannot create, I do not understand.”—a favorite quotation of mine.

To which I responded:

The first part of your comment is really about justified truths. Is it better to have a conclusion which is correct but poorly reasoned, or an incorrect conclusion which is very well argued (but perhaps from an incorrect axiom)?

This is why I ask: what does “better” mean? What is the goal?

If it is about finding the cure to a disease, which is predicated upon correct truths, then having a poorly argued correct conclusion is better. If it is about fostering better critical thinking etc., then perhaps the other option.

Personally, I prefer the other option of well-argued valid and sound conclusions.

The next part you already know. I prefer bottom up because this is about making sure everything from the bottom up is sound, leaving only the axiomatic foundational brick unable to be fully (deductively) rationalised.

Conclusions are hierarchical. In other words, objective morality, for example, depends upon several other levels, including whether objective ideas in general can and do exist, and whether anything exists outside of our own heads.

Just like Descartes, I reason from what I indubitably know to be true and build from there.

Anything else more probably opens you up to cognitive biases.

This is fundamental, in many senses, to me. It is why I spend some time trying to establish the building bricks of philosophy, such as “what is an abstract idea?” So often we argue on the veneer, and so often this means that our conclusions or claims are propped up with little more than biases and bluster.

For me, the bottom up approach is by far the more justifiable one.

If you look at this image about chip design (or something, it doesn’t really matter), you will get my point:

Here you can see that the left-hand approach has a conclusion which is correct if the whole thing is built correctly. The importance is in the construction, which should ensure truth if built with the correct bricks. The right-hand option smacks of ad hoc. If it doesn’t seem to work (i.e. turn out true), then just mess around with fixing things at the last stages, working under the assumption, nothing more, that the initial building blocks and desires and plans are correct.

At the end of the day, looking down on things is pretty arrogant, and assumes that you know best.

Thanks for writing this! I will respond at more length later, but I have an initial question: is it bottom-up or top-down to talk about “everything that exists”? It seems to me that if I start bottom-up, then I’m not guaranteed that my foundations will describe everything that there is. So it would strike me that the bottom-up builder ought not attempt to talk about “everything that exists”.

johzek

A bottom-up approach is the only way to determine “everything that exists”. Every perception we make implies that we are consciously aware of what exists. This is axiomatic: any attempt to deny this fact must make use of this fact in the attempted denial. Our senses automatically provide us with this awareness of what exists. We do not have to make any assumptions or presuppositions or hold any beliefs to perceive. We simply perceive.

At the conceptual level of consciousness we identify what we perceive and subsequently integrate this knowledge, by the use of logical inference, in a non-contradictory way with previously learned knowledge. In this way we can build our knowledge hierarchically, as Jonathan notes.

Unfortunately it is also at the conceptual level that one can downplay or even ignore what we directly perceive and instead adopt a “mental attitude”, such as presupposing or assuming, but the mental attitude that is liable to cause the most problems in determining what exists is the mental activity of imagining. It is certainly true that at this point one can, to use your words, talk about everything that exists, but of course that is all it amounts to, just talk.

So to answer your question, “is it bottom-up or top-down to talk about ‘everything that exists’?”: it is definitely top-down to simply talk about “everything that exists”, but if one wants to actually determine what exists he must begin at the bottom and scrupulously adhere to reason as his guide, and reason is ultimately grounded in perception.

Luke Breuer

Our senses automatically provide us with this awareness of what exists. We do not have to make any assumptions or presuppositions or hold any beliefs to perceive. We simply perceive.

Are you aware of the claim that all perception is theory-laden? I refer you to Direct and indirect realism. There are some serious problems with your proposition, here.

johzek

Perception precedes conceptualization. How in the world can perception be theory-laden? Like digestion or respiration, perception is an automatic biological process which proceeds without our conscious control. Are digestion and respiration theory-laden also? What I am referring to here is what goes on prior to a person putting on his conceptual thinking cap and starting to weigh down what should be self-evident and, in your words, make it theory-laden.

We perceive what we perceive. The mental image provided by our senses is what it is, and it provides us with an awareness of the world external to our consciousness. Later on one can attempt to describe this process of perception however one thinks is proper, but the process still is what it is no matter how it is described, and it is these perceptions that we experience firsthand without having to adopt a mental attitude.

By the way that illustration on the Wikipedia page sure was funny and more than a little bit misleading. It reminds me of what in Objectivism is called the primacy of consciousness, the idea that consciousness determines the nature and the identities of objects that exist. From your comments here I think that this idea would suit you pretty well.

I am interested in what you think might be serious problems with what I wrote.

Luke Breuer

Perception precedes conceptualization. How in the world can perception be theory-laden?

Grossberg 1999 The Link between Brain Learning, Attention, and Consciousness suggests that we do not become conscious of percepts (sense impressions) unless they match patterns already in our brains. In essence, consciousness is dependent on confirmation bias. Now this is more of a hypothesis than a theory, but you must realize that we probably only have hypotheses right now.

I’ll give you an example. A while ago, I came across a study in which participants were separated into two groups: those who understood the results of Galileo’s Leaning Tower of Pisa experiment, and those who did not, and thought that larger objects fall faster. The participants in each group were shown objects falling at different rates: some objects fell according to the common misconception, while others fell according to Galileo’s discovery. It turns out that each group preferentially saw what it expected to see, and tended to miss what it did not expect to see. Sadly, I could not find the paper again when I spent fifteen minutes looking. However, this is a well-known phenomenon; see Awareness Test – Basketball Passes.

Another way to understand the issue is to try to get computers to see. We find out that we have to load them with the equivalent of ‘concepts’ before they can make any sense of input (e.g. from a video camera). Now, this isn’t a knock-down argument because perhaps there is a better way to do computer vision which does not have these problems. But until that way is found, this idea of yours that “perception precedes conceptualization” seems very open to question.

The mental image provided by our senses is what it is […]

How does this assert anything other than A = A?

johzek

The computer example actually illustrates my point. The video camera is analogous to our perception, as it simply provides the raw material for the computer to act on. Likewise for our senses. There needs to be data present for our brains to act on. It can then make identifications or form new concepts or make logical inferences based on this data. Similar to our sensory organs, the video camera operates essentially automatically: it is pointed at a scene and the data comes pouring in. It is then acted upon by a computer programmed appropriately.

My immediate reaction to Grossberg 1999 is: how are the initial patterns established, if not from sensory input? How is it possible to learn anything? And in novel situations, are we just dead in the water? The falling-object example seems to be just a case of improper identification, although I am not clear as to the procedure here.

I’m not implying that our senses are perfect. For example, visually we only have a rather small focused field of view and a quite large unfocused peripheral field of view. As the basketball pass awareness test shows (I have seen much better demonstrations than this one that had me fooled), we can focus our attention on only a part of what we are perceiving and miss information from the neglected parts. We can watch the video a second time and this time focus our attention on the stormtrooper instead of the passes, because this information is there. We can even rapidly change our focus of attention from one to the other. The sensory data is there to be acted upon if needed.

This is my final comment about this for now but I look forward to reading your response to the question of bottom-up versus top-down.

Luke Breuer

This is my final comment about this for now but I look forward to reading your response to the question of bottom-up versus top-down.

In that case, I don’t think I’m going to put in the time for a thoughtful comment. I prefer dialogues to monologues!

johzek

Luke, I am so happy that you found someone to engage with in a true dialogue, like the one between you and Phasespace. How was I to know that our short exchange of two or three comments and replies would offend your sensibilities, and that one must keep this up for at least several exchanges to qualify as a true dialogue? I think it would be wise for you, and courteous to those who might want to engage with you, to inform them upfront that you simply cannot tolerate anything less than what you consider to be a true dialogue, whatever that might be. Seriously, ‘monologue’! Was it you or I who was engaged in this monologue?

Luke Breuer

Who says my sensibilities were offended? Methinks this is much ado about nothing. It was I who didn’t want to monologue.

Luke Breuer

Jonathan, a while ago you said that you don’t know enough about quantum mechanics to know if e.g. nonlocality messes with your foundation. I’m curious as to why you don’t think it’s important to learn about nonlocality, perhaps through Bernard d’Espagnat’s On Physics and Philosophy. It strikes me that you aren’t actually starting bottom-up; instead, you are going from the middle both up and down, or ignoring the ‘down’ direction altogether. It also strikes me that the shift to structural realism might be important for foundationalists.

I also wonder if there’s a kind of ‘butterfly effect’ with foundationalism: get an axiom ever so slightly wrong, and the error amplifies until it is a huge error. I don’t know enough about foundationalism to know what the response is to this.

Furthermore, isn’t foundationalism itself inferred in a top-down or middle-down fashion? I refer you to my difference between ‘fact’ and ‘truth’, where I argue that we reason from observation → theorem → axiom. A theorem is very much like a conclusion, it seems, opening you up to the accusation of doing conclusion-first thinking. Now, I’m just not sure this is always a problem.

I guess I was essentially aiming the point at philosophy as a whole, starting with abstract ideas: the building blocks of what we do.

But looking at empirical matters, and QM, I guess I would say this. If we start at the most fundamental level, there is an element of no one really knowing what the shit is going on. There are theories, but it is still open. As for QM, we get that there is adequate determinism. My feeling is that probabilities imply some sort of deterministic framework. Otherwise you have 50/50 randomness, and that does not seem to be the behaviour of QM.

Probabilities imply some sort of causal reasoning. Mysterious and chancy are not good enough. For certain radioactive decays to have certain probabilities means that there is causality which is different from one element to the next, right? That is, causality is at play. Adequate determinism hints at this being the case.

So, for me, there simply must be something going on behind and beyond our ken. A deterministic framework of QM is the only approach which makes sense.

So though there is such mystery at the fundamental level, if we look further up the hierarchy, things make sense. But quantum understanding and a TOE or similar is vital to progressing our knowledge.

Phasespace

For certain radioactive decays to have certain probabilities means that there is causality which is different from one element to the next, right? That is, causality is at play.

As someone who has studied a little quantum theory, this statement makes me queasy. On the one hand, the decay probabilities certainly are related to the composition of a given atomic nucleus (which you could argue is causal, at least in terms of defining what the decay probability is), but there doesn’t seem to be any actual cause that you could point to for why a given atom decays at a particular time.

Bell’s Theorem (and the experiments that show huge violations of Bell’s inequality) kind of puts the kibosh on the notion that there is some kind of underlying deterministic framework that we are just unable to see.

Of course, there is the notion of superdeterminism as a response to Bell’s theorem, but so far, this idea hasn’t made much progress beyond raising it as a possibility.

But surely there HAS to be causality. Given that polonium and uranium have different half-lives, there are definite, different probabilities which define these elements. If they were random, all probabilities would tend to 50%.
So many other aspects of science have historically started with gaps in our knowledge…
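Jonathan’s point that different half-lives encode definite, different probabilities can be made concrete with a short exponential-decay sketch. Everything below is my own illustration, not anything specified in the thread; the half-life figures are rough textbook values for polonium-210 and uranium-238:

```python
import math

def decay_probability(half_life, elapsed):
    """Probability that a single atom decays within `elapsed` time,
    given exponential decay with the stated half-life (same units)."""
    decay_constant = math.log(2) / half_life   # λ = ln 2 / t½
    return 1 - math.exp(-decay_constant * elapsed)

# Rough half-lives, both expressed in days.
po210 = 138.0           # polonium-210
u238 = 4.5e9 * 365.0    # uranium-238 (≈ 4.5 billion years)

# Chance of a given atom decaying within one year: wildly different, not 50/50.
p_po = decay_probability(po210, 365.0)
p_u = decay_probability(u238, 365.0)
print(p_po, p_u)   # ≈ 0.84 vs ≈ 1.5e-10
```

Each element’s distinct decay constant λ is exactly the “definite different probability” Jonathan refers to; whether that constant reflects an underlying cause for each individual decay event is the point Phasespace disputes.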

Phasespace

Not necessarily, or perhaps not quite in the way you think. For example, one of the first things you learn in a quantum mechanics course is the notion of superposition of states, which is a fancy way of saying that a given particle is effectively in all states at once until the “wave-function collapses.” At that point, the particle “freezes” into one of those states and the probability of finding the particle in any one of those states when the wave-function collapses is the result of the “relative strength” (for lack of a better description) of the different states.

The protons and neutrons in the nucleus of an atom have similar energy levels such that this same superposition of states principle applies in much the same way to radioactive decay. That’s where the different decay probabilities come from, but that still doesn’t give us the causal event that tells us why a given atom will decay at a given time.

So many other aspects of science have historically started with gaps in our knowledge…

That’s the frustrating thing about quantum theory. Bell’s theorem basically rules out the idea that quantum theory is incomplete. The experiments that show violations of Bell’s inequality essentially confirm that there can’t be any local “hidden variables” buried down in an atom that can be causally linked to (in this case) a decay event.

The only way out of this quagmire seems to be the superdeterminism idea I alluded to, which is still more hypothetical than anything else.

Luke Breuer

I guess I was essentially aiming the point at philosophy as a whole, starting with abstract ideas: the building blocks of what we do.

Aren’t the building blocks inferred from what really have to be called ‘conclusions’? I mean, aren’t you looking at a whole and inferring the parts? If you’re not looking at a whole, how do you even know the pieces connect to one another?

I also wonder if emergentism messes with your project, in the sense described by Massimo Pigliucci in Essays on emergence, part I. The idea here is that higher level structures can be largely agnostic as to the lower-level structures, such that there is a really solid layer of abstraction. This messes with the project of inferring the foundation.

As for QM, we get that there is adequate determinism.

What we really seem to get is a layered cake of determinism and indeterminism, with no idea of whether the bottom layer is deterministic or indeterministic. The Romans struggled with this: see Fortuna vs. Parcae. Who appeared to dominate seemed to fluctuate over time.

Are you merely appealing to there being a large “tower” of lawfulness, such that you get physics, then chemistry, then biology, then psychology, then sociology? These in and of themselves don’t get you a block universe; they seem like they could also be compatible with a growing block universe.

So though there is such mystery at the fundamental level, if we look further up the hierarchy, things make sense.

Then you seem to mostly be appealing to rationality of reality. Is this correct? The rationality need not be finite in description; it could be that no finite-size computer program is sophisticated enough to describe reality as it is—or at least, as it will be.

Phasespace

What we really seem to get is a layered cake of determinism and indeterminism, with no idea of whether the bottom layer is deterministic or indeterministic.

Not exactly. If QM is what’s at the bottom (and it might not be), then the bottom is both deterministic and indeterministic at the same time. It’s deterministic in the sense that the bottom states are well defined, non-chaotic, and no other states aside from those defined states are possible. It’s also indeterministic in the sense that we can only define probabilities for the states that the system will actually be in. We can know the range of states, and we can predict the probability of finding the system in any given state, but it isn’t possible to determine exactly what state we will find the system in when we look at it.
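Phasespace’s “deterministic probabilities, indeterministic outcomes” picture can be sketched with a toy two-state system. This is a standard textbook-style illustration of my own, not Phasespace’s example; the amplitudes are arbitrary:

```python
import random

# A toy two-state system: the amplitudes, and hence the probabilities,
# are fixed and perfectly well defined...
alpha = complex(3, 0) / 5
beta = complex(0, 4) / 5                    # |α|² + |β|² = 1
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2    # 0.36 and 0.64

def measure():
    # ...but each individual measurement outcome is irreducibly random.
    return 0 if random.random() < p0 else 1

random.seed(1)
counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1
print(p0, p1, counts)   # counts hover near 3600 and 6400
```

The probabilities are exactly determined by the state, yet nothing in the state says which outcome any single measurement will produce, which is the “both at once” character Phasespace describes.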

Luke Breuer

It’s deterministic in the sense that the bottom states are well defined, non-chaotic, and no other states aside from those defined states are possible.

I’m not sure this qualifies as ‘deterministic’.

Phasespace

Welcome to the wonderful world of quantum mechanics, where nature pulls our pants down and laughs at our intuitions. I know what you’re getting at there, and I don’t disagree. Quantum mechanics is full of stuff like this, where the thing you’re talking about isn’t quite what it appears to be and defies our intuitive explanations.

It gets both better and worse when you make the leap from quantum mechanics theory to quantum field theory.

I’m not familiar with that particular book, but it is interesting stuff. I’m not sure it necessarily provides much insight into the philosophical questions that have arisen around standard QM and perhaps it raises some new ones, but quantum field theory does rather nicely resolve the wave-particle duality problem. However, it comes at the price of requiring that particles, as we normally think of them, don’t really exist. In QFT, particles are bundles of localized energy within and interacting across different quantum fields… The linkages between quantum theory and string theory start to become more apparent at this level, or at least you can kind of see how QFT was a motivator for string theory.

The actuality of your micro-objects disappears in the process of “renormalization”, and the actuality of your waves into mere curves of probability-distribution in a ψ field. (191)

I don’t know enough to really comment. I will say that I think we need both particulars and universals, and any attempt to smoosh one into the other will inevitably lead to bad places. This is a philosophical value, though; how to apply it to QFT is unclear to me.

Phasespace

Well, I think it’s more accurate to say that QFT results in a destruction of our intuitive concepts. Which, philosophically, can be a bit of a problem.

However, I would say that particles aren’t really going away, but our definition of what they are is changing due to our findings in physics. In that light, I tend to see complaints such as Barfield’s as much ado about nothing. These things that we’ve been talking about still exist, but they aren’t quite what we thought they were, and some people find that unsettling.

Luke Breuer

At some point, I want to trace how many of those intuitive concepts stem from Francis Bacon eliminating formal and final causes in his redefinition of knowledge to efficient and material causes. It strikes me that there might be a way to connect final causes to something superposition-like and formal causes to something nonlocal-like. Now, I definitely need to test these intuitions against something like Bernard d’Espagnat’s On Physics and Philosophy. The general dogma is that nothing quantum-like shows up in the macro scale. I would need to learn a lot more to truly be able to critique this dogma.

As to Barfield, one of his big concerns is that science might be undercutting the very observations that lead to scientific theories. It’s somewhat similar to how Logical Positivism undercut itself with the verifiability criterion. If at some point you conclude something that undermines your observations and theorizing twenty steps back, you really ought to see if you can still get to where you are after said undermining is analyzed. Whether or not science right now is actually doing that is up for debate, but I think realizing this as a potential problem is worthwhile.

Phasespace

The general dogma is that nothing quantum-like shows up in the macro scale. I would need to learn a lot more to truly be able to critique this dogma.

I wouldn’t call that a dogma; it’s more that the probability of seeing quantum effects effectively drops to zero at the macro scale. In my QM course we calculated the probability of a bowling ball quantum-tunneling through a brick wall (and it was a very oversimplified calculation that didn’t account for a number of complicating factors). If memory serves, the number of trials that would be needed for the probability to reach the 1% level was something like 100 times the number of atoms in the universe.
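The spirit of that calculation can be reproduced with a crude rectangular-barrier estimate, T ≈ exp(−2κL) with κ = √(2mΔE)/ħ. All the numbers below are my own illustrative guesses for a bowling ball and a wall, not the ones from Phasespace’s course:

```python
import math

hbar = 1.054e-34        # reduced Planck constant, J·s
mass = 6.0              # kg, a bowling ball
barrier_height = 1.0    # J above the ball's energy (a very modest wall)
barrier_width = 0.1     # m

kappa = math.sqrt(2 * mass * barrier_height) / hbar
# exp(-2κL) would underflow any float, so work with the base-10 logarithm.
log10_T = -2 * kappa * barrier_width / math.log(10)
print(log10_T)   # ≈ -2.9e33: one chance in 10^(2.9×10³³) per attempt
```

For comparison, the observable universe contains only about 10⁸⁰ atoms, so even “100 times the number of atoms in the universe” trials would not move the needle.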

Now, I’m not saying that macro scale quantum effects can’t ever happen, but it seems to me that the circumstances under which it could occur would have to be very unusual.

Luke Breuer

So, I understand that much. The top-level claim, however, is that standard intuitions in the macro-world don’t translate to the quantum-world. For example, the claim would be that there is nothing analogous to superposition or nonlocality. The key here is ‘analogous’, and I mean it in the full sense that intuition does not need identical scenarios (else it probably wouldn’t be ‘intuition’), but sufficiently similar scenarios.

I have discovered that some people do something that might be analogous to superposition when attempting to communicate. When someone says something to me, (a) I assume they meant to convey something meaningful and consistent; (b) if and when I find errors, I search for the smallest possible change I could make to the statement in order to make (a) obtain. One way of doing this is attempting to ‘fuzz’ one word at a time, such that one is essentially constraint-matching the sentence to see what concepts could possibly stand in for a given word. Doing this for more than one word sends you toward combinatorial explosion land, but the idea is that you allow a superposition of meanings and demand that the output makes sense.
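Luke’s word-by-word “fuzzing” procedure can be sketched as code. Everything here, including the toy makes_sense predicate, is a hypothetical illustration of the idea, not an existing tool:

```python
def fuzz_one_word(tokens, vocabulary, makes_sense):
    """Try replacing each word in turn with every candidate from the
    vocabulary, keeping only the variants the predicate accepts."""
    repairs = []
    for i, original in enumerate(tokens):
        for candidate in vocabulary:
            if candidate == original:
                continue
            variant = tokens[:i] + [candidate] + tokens[i + 1:]
            if makes_sense(variant):
                repairs.append((i, candidate, variant))
    return repairs

# Toy stand-in for 'the output makes sense': the sentence must contain
# a verb we recognise. A real constraint-matcher would be far richer.
KNOWN_VERBS = {"runs", "sleeps"}

def toy_makes_sense(tokens):
    return any(t in KNOWN_VERBS for t in tokens)

repairs = fuzz_one_word(["the", "dog", "rnus"], ["runs", "cat"], toy_makes_sense)
print(repairs)   # the garbled 'rnus' can be repaired to 'runs', among others
```

Only single-word substitutions are tried here; fuzzing several words at once is the combinatorial explosion Luke mentions, since the candidate space grows as (vocabulary size) raised to the number of words changed.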

When it comes to nonlocality, I’ve found that culture can exert forces in a way that doesn’t seem traceable to single individuals. Culture can be like the Hydra, where if you cut off a head, another two appear. It’s as if the thing that sustains the culture is not in any individual, but exists instead “spread out” across many individuals. Understanding how this works is very important if you want to change the culture in a directed fashion; the reading I have done indicates that culture change is extremely hard and unpredictable. I wonder if this is because what I am calling ‘nonlocality’ has not been well-understood.

Now at the level I have described, the above are pretty weak analogies. But I wonder if further examination would draw any more connections between quantum phenomena and the macro-scale phenomena I’ve described. Both the above analogies seem to depend on a non-mechanistic understanding; if you merely think in terms of billiard balls, they don’t make sense.

Luke Breuer

Here you can see that the left-hand approach has a conclusion which is correct if the whole thing is built correctly. The importance is in the construction, which should ensure truth if built with the correct bricks.

This seems to possibly match an idea I came up with a while ago: “Generating is knowing.” It is related to Richard Feynman’s “What I cannot create, I do not understand.” However, there seem to be some pretty big problems with this. Here are two:

(1) How do I know my construction well-matches all of reality?
(2) What happens when layers are well-isolated?

My first question relates to my first comment: what guarantee do I have that my bottom-up construction captures anything more than a sliver of reality? I’m reminded of Grossberg 1999 The Link between Brain Learning, Attention, and Consciousness, which argues that we don’t even become conscious of phenomena which do not sufficiently well-match models which already exist in our brain.

My second question alludes to Massimo Pigliucci’s Essays on emergence, part I, where he talks about how emergent systems can be largely agnostic to their substrates. For example, thermodynamics (a phenomenological model) can be largely independent of statistical mechanics (an ontological model). That is, the micro-structure of the constituent particles can be largely irrelevant to the macro-scale behavior. Perhaps the ultimate example of this is Turing machines, which, aside from computing power, care not a whit for what computational substrate is used. It strikes me that a consequence of this isolation is that it presents a barrier to our investigation into the substrate, which puts your bottom-up approach into question.
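That substrate-agnosticism can be illustrated with a small simulation (a toy example of my own, not Pigliucci’s): two completely different micro-rules for a random walk produce the same macro-scale diffusive behavior, so the macro data alone cannot tell you which micro-structure is underneath.

```python
import math
import random
import statistics

def final_positions(step, n_walkers=2000, n_steps=400):
    """Endpoints of many independent random walks built from `step`."""
    return [sum(step() for _ in range(n_steps)) for _ in range(n_walkers)]

random.seed(0)
# Micro-rule A: discrete coin-flip steps of ±1 (variance 1 per step).
coin = lambda: random.choice([-1, 1])
# Micro-rule B: continuous uniform steps on [-√3, √3] (also variance 1).
uniform = lambda: random.uniform(-math.sqrt(3), math.sqrt(3))

# Both walks diffuse identically: endpoint variance ≈ n_steps × 1 = 400,
# regardless of which micro-rule generated them.
var_a = statistics.pvariance(final_positions(coin))
var_b = statistics.pvariance(final_positions(uniform))
print(var_a, var_b)
```

Inferring “the” micro-rule from the endpoint distribution alone is hopeless here, which is exactly the barrier to bottom-up inference being claimed.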

At the end of the day, looking down on things is pretty arrogant, and assumes that you know best.

How is this “looking down on” not 100% equivocation? I’m reminded of Aristotle’s and Bertrand Russell’s description of reasoning from the whole to the parts in the first two pages of de Koninck’s The Unity and Diversity of Natural Science. And yet, how is reasoning from ‘whole’ → ‘parts’ not top-down thinking, or at least middle-down thinking?

The answer to 1) is simple. You build your bricks precisely because the evidence, rational and empirical, leads you to those bricks. Thus your construction should match reality better than any competing brick explanation. That is exactly how it should be, and precisely a result of such methodology.

Things which are linked causally can never be fully isolated from each other. We may not see the connection, but we don’t always have full access to the knowledge. That’s science (and philosophy). They develop. Science more so!

I think it IS arrogant, or at least foolhardy, to argue from conclusions. Especially when these conclusions are desirable. I think Descartes, for all his going off the rails after this, realised that we have to strip away everything we THINK we know and start from what we REALLY DO know to build up a more watertight case for what actually exists.

Luke Breuer

The answer to 1) is simple. You build your bricks precisely because the evidence, rational and empirical, leads you to those bricks.

How do you deal with the claim that all observation is theory-laden? See, for example, Quine’s Neurathian bootstrap. Furthermore, how do you deal with Grossberg’s claim that you don’t even become conscious of sense-data which don’t match preconceived patterns in your brain? Surely you’ve seen the videos of people passing a basketball where a person in a gorilla suit walks through, and you never even see the gorilla suit because you were focusing intently on other things?

Things which are linked causally can never be fully isolated from each other.

I never said “fully isolated”; instead, I said “well-isolated”, “largely agnostic”, “largely irrelevant”. What I’m criticizing is the idea that you could possibly do “bottom-up”, instead of, at best, “middle-down”. Let me put it this way: are you more confident that the table exists, or that atoms exist?

Furthermore, I don’t think you understood the point about micro-structure being largely agnostic to macro-behavior. This means you could infer a wrong micro-structure which still does a good job of generating the observed macro-behavior. This seems to strongly argue for trusting the middle-level (the scale on which you perceive) more than the building blocks. And yet, you seem to want to trust the building blocks more than the middle-level. Is this an incorrect assessment of your view?

Please also respond to the following:

At the end of the day, looking down on things is pretty arrogant, and assumes that you know best.

LB: How is this “looking down on” not 100% equivocation? I’m reminded of Aristotle’s and Bertrand Russell’s description of reasoning from the whole to the parts in the first two pages of de Koninck’s The Unity and Diversity of Natural Science. And yet, how is reasoning from ‘whole’ → ‘parts’ not top-down thinking, or at least middle-down thinking?

Given that everything accordingly is theory-laden, you are choosing one theory-laden approach amongst many, so it matters not. You choose the best theory from those available. If top down is theory-laden, and bottom up is too, I will choose the one which is the most coherent, pragmatically useful, but, above all, accurate. I will try to mitigate my own biases, as I always do. And I seem to do OK, since I have overturned most of my taken-for-granted top-down belief constructions.

This seems to strongly argue for trusting the middle-level (the scale on which you perceive) more than the building blocks. And yet, you seem to want to trust the building blocks more than the middle-level. Is this an incorrect assessment of your view?

So you use the most accurate, lowest-level blocks you can. And if we do not understand QM, then we take the best understanding we have (that is all we can do) and see how that affects the next level, taking into account our best understanding of that level on its own merit.

What is being equivocated here?

Luke Breuer

Hi Jonathan, here’s a ping for you to respond to my comment of 7 days ago. You asked me to do this for comments which slip through the cracks—I know you’re a busy guy.

I think we are stuck with trying to understand the time aspect of space-time, and the role of causality in that, and consequently in how we behave – and how we think is one aspect of how we behave. We’re locked in this apparently time-evolving causal world and we can’t get past that.

I agree with your points on quantum stuff, I think. Even random events have a cause, and of course have causal effects in turn once they occur. It seems unavoidable that we should ask questions like: what CAUSES non-locality?

We seem to have in our heads this notion of randomness that we don’t fully understand. We have what seems like an idea, but ‘true randomness’ seems to be totally at odds with causality. We can think about causal processes that we model as stochastic, and we can even think of complete determinism being so complex that our models of it constitute an indeterminate perspective – we can’t determine (predict) the outcomes of deterministic systems to arbitrary degrees of precision. But I have yet to see any evidence of something that justifies a label of being ‘truly random’.
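The point that fully deterministic systems can still be unpredictable to arbitrary precision can be illustrated concretely. Here is a minimal sketch (the logistic map and the specific perturbation size are my illustration, not anything from the thread): a completely deterministic rule, where two starting points differing by one part in a trillion nonetheless diverge completely after a few dozen steps, so any finite-precision measurement of the initial state leaves the long-run outcome effectively indeterminate to us.

```python
def logistic(x, r=4.0):
    """One step of the logistic map: deterministic, no randomness anywhere."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map from x0, returning the full sequence of states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two initial conditions differing by a trillionth.
a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-12, 60)

# The gap between the runs roughly doubles each step until the
# trajectories are fully decorrelated, despite identical dynamics.
gaps = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gaps[0]:.2e}")
print(f"largest gap over 60 steps: {max(gaps):.2e}")
```

The epistemic moral matches the paragraph above: nothing here is "truly random", yet our model of the system, given imperfect knowledge of the start, can only be stochastic.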

The problem for us seems to be the contingency of our knowledge: how as a species we started to think about stuff; how ‘thinking’ (the primacy of thought) has led us astray, even though it’s our only possible starting point as consciousness emerges, in the species and in the individual; and how the contingent conclusion that we are sensory, empirical creatures comes late to the game but pretty much describes how we are: http://ronmurp.net/thinking/

Thanks for your comments. Will be interested to read your piece. I wonder if @LukeBreuer:disqus is still around to pass comment.

I do struggle with ideas of true randomness – they appear to make no sense for a number of reasons.

What do you mean by “led us astray”?

ronmurp

The emergence of thinking led us to think about thinking, about the mind which intuitively appears distinct from the physical, and we came to believe that thought was something special. That allowed philosophy and religion to become what was thought to be the way to knowledge – even leading to justified-true-belief (JTB) attempts to come to totally rationalist solutions, to ‘prove’ stuff.

But later science exposes our non-thinking history: early brains of a few to a few hundred neurons, how learning in small neurological systems (e.g. Kandel on Aplysia) is associated with neuronal states and connections. Working back to cells generally, they all interact through physical/chemical interfaces. We are empirical creatures. Our brain neurons are sensing each other pretty much as peripheral neurons sense. There is no evidence of anything else. So, our thinking that thinking is so special (our primacy of thought) is challenged by biology, neuroscience, evolution. Our ‘pure thought’, already questioned by some philosophers, now faces a scientific challenge.

But, the history embedded in philosophy and religion still persists, so that theologians, and many philosophers, are still essentially dualists – though some seem to make dualist noises while denying they are. Dualist ideas not only led us astray, they still do for some people.

ronmurp

A particular problem is that by the time one is able to think about thinking with any amount of critical thought that can assess evidence, the non-thinking past is already lost. How do you think about a time when your brain couldn’t think clearly? Can you remember being dumb?

This applies to individuals – childhood memories are slim because the brain was developing and had little to remember, before language and concepts were acquired.

It applies to the species – we have no records of what it was like not to have language, because …