The fuller statement which I prefer is: "What does it formally look like to change someone's mind?"

It seems like persuasion can only succeed when a proposition appeals to, and is compatible with, some premise which is already believed prior to the act of persuasion. This raises questions like, "Can acts of persuasion ever cause global overhauls of a person's entire belief set, or can they only change a subset of those beliefs at a time?" (Think Ship of Theseus)

Most basically, a persuasive act succeeds when it brings a person into a state of belief that they did not previously occupy. However, it is possible for two reasoners whose beliefs match in content to respond differently to the same argument: one is persuaded of some proposition and the other is not. This is only possible if those two people reason according to different rules of inference; they might be different types of reasoners.

So the question naturally follows: "Is it possible to persuade a reasoner of one type to become another type of reasoner (i.e. convince them that one cognitive rule is superior to another which already defines them)?" And what rules could possibly mediate that transition? Would such a process even be coherent, or amenable to any formal description?

Finally, can a change of belief caused by persuasion also cause a change in one's knowledge, or is justification a categorically different ingredient from persuasion? That is, can persuasion ever suffice as a form of justification?

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

First off, I just want to say I love this question. Great topic. Secondly, I've got more comments than I'm even able to properly prioritise, so apologies if this seems a bit free-form. I'll just make a couple of short (and hopefully clear) comments and questions for now, as I'd really like to see where this one goes.

An Ignored Assumption?

It seems to me that the possibility of changing a mind is an implied assumption of the act of discussion, or at least of certain types of discussion. From this we can assume that there are also implied criteria by which we can judge whether this has happened or not. This often seems to be ignored, as it requires not only that the terms of discussion are set (we are talking about the same things) but that the possible significant outcomes of the debate are acknowledged as well. That there are significant outcomes is something we seem not only to assume naturally prior to the debate, but can participate in without deliberate thought. This might be an explicit statement of agreement or an "I think we are talking at cross purposes". It might even be an "I disagree but I see where you are coming from" or something similar.

What doesn't seem to be asked is what these acts represent, what they show us about what the debate has caused to change, and why.

What Changes And Why?

It seems to me there are several types of change that can take place. You can learn a new piece of information that fits your existing position. Is this a change of mind? It seems to me it is not. I can learn a new piece of information in this way even if I set out to confirm my existing position on something, which does not strike me as a change of mind. However, I may learn a new piece of information that changes how I view other pieces of information. The new information here seems to act as a catalyst, not as a change in itself. But this can also happen without any new information at all; I may change my own mind on something that I already know.

So if the information is neither required nor actively subject to the 'change' we are looking for, what is? We change our minds about something we already hold to be 'true' in some sense, but what does it mean for me to go from making the statement "I hold X to be true" to "I do not hold X to be true" or "I hold X to be false"? Are these three distinct cases at all? How do they differ? Most interestingly, what brings them about? How do I best argue so that I will bring about a change in understanding through this exchange? Or, perhaps better phrased, how do I best argue so that a change in understanding is a likely outcome of this exchange?

I would also like to know whether anyone thinks there is a difference between changing a mind on a matter of empirical concern (i.e. evidential questions) and on one of 'raw belief'. Are the statements "I now view this evidence differently" and "I have changed what I believe to be the case in which evidence is not available" expressions of similar types of epistemic-doxastic event or are they actually just the same event happening to different concepts?

What Would Change My Mind?

Not all arguments can be solved within the same frame of reference, but can all arguments be (at least in principle) resolved by application of the correct frame? If a matter of science, what evidence would I require to change my mind? If a more abstract issue, would I use pure reason or also refer to evidence? On grounds of pure belief, would apparent internal inconsistency of my argument necessarily change my position, or can I hold something to be true in spite of this? Is it possible at all to change someone's mind on certain types of ethical-aesthetic position? Can I be persuaded that a painting or a song is not beautiful? Can I see the world or things within it in a different way, and what does it mean if I can('t)? A very important question along this line of reasoning is "what are my justificatory processes, and can they be changed, and if so by what?", as this seems to me to be the assumed ground of debate that we carry with us before ideas are even exchanged.

But I think the most interesting question of all here is: do we have any choice in the matter? Is there the potential that, in any given discussion, we could be persuaded against our will if the correct approach were used? If there isn't, what does this mean for debate as a social tool and intentional process? Are some ideas conceptually impervious to change? Are others the opposite, inevitably having to give way if given a suitable timeframe?

It seems to me that we have to look at these types of questions if we are to understand what it looks like to change a person's mind, as we first need to establish what 'change' in this context means and how it can be brought about. What are we looking at when we see a change of mind, how do we recognise it, and what can we point to as having caused it to happen?

I need to ponder more before I offer any kind of thorough response. But, to help me, would either of you (or anyone else who follows) help me define some things? I will present some examples from my own life that I would consider to be instances of "changed mind" (and I include in that phrase "changed opinion" or "shifting positions", etc.). If you would be so kind, please help me understand how you would define my following "changes":

1. When I was a young child, I "believed" in Santa. At some point (around 8-9 years of age, if I recall) I began to understand Santa in terms of fairytale and myth.

2. At one point (for a few moments) I believed, as a child, that wearing goggles would keep me afloat in a pool (my older brother told me so). When I jumped in, I sank like a rock. My brother grabbed me and set me on the side of the pool, laughing, and insisting it was time I actually learned how to swim. I was, of course, very angry. At least two things changed: my "belief" in the buoyant properties of goggles, and how much I could trust in my brother.

3. For several years (my teens and early twenties), I believed abortion was "wrong" or, at least, a terrible "lesser of evils". My girlfriend calmly introduced me to the notion of why we should blame a woman and expect her to bring a fetus to term if, in fact, she was raped, or, indeed, if she simply was ignorant or did not intend to get pregnant (and maybe even tried to use various birth-prevention methods unsuccessfully). For all my "erudition", I had never actually CONSIDERED this notion. When I finally considered it, I changed my position almost immediately...a position I have stuck with till the present.

4. Until I was a young adult, I believed the Bible was "literally" true, including stories about Jonah being swallowed by a big fish, only to be regurgitated, enabling him to get on with his life. Today I still believe that story is an amazing piece of literature with powerful messages of admonition, but I no longer believe it is a "historical" recounting, but is closer to a fairytale with a moral.

5. As a young adult, I believe I was in love with a particular woman. Eventually, that relationship faded. I believe to this day that I truly loved her at the time (rather than I was just "infatuated" with her), but I am forever thankful that the relationship faded, and I have no particular viable feelings of love for her now (though I remember her with fondness).

If you can help me understand where one or more of these "changes" lies on your radar, I'll be able to respond to your thread more intelligently.

Thanks Graincruncher; I think I'll kick off the first explicit response here:

At 7/21/2013 7:15:24 AM, Graincruncher wrote: An Ignored Assumption? It seems to me that the possibility of changing a mind is an implied assumption of the act of discussion, or at least of certain types of discussion. From this we can assume that there are also implied criteria by which we can judge whether this has happened or not. This often seems to be ignored, as it requires not only that the terms of discussion are set (we are talking about the same things) but that the possible significant outcomes of the debate are acknowledged as well. That there are significant outcomes is something we seem not only to assume naturally prior to the debate, but can participate in without deliberate thought. This might be an explicit statement of agreement or an "I think we are talking at cross purposes". It might even be an "I disagree but I see where you are coming from" or something similar.

What doesn't seem to be asked is what these acts represent, what they show us about what the debate has caused to change, and why.

So, I'm not entirely sure what the overall trajectory of these remarks is, but I will affirm that an act of persuasion does carry the practical assumption that it can persuade your partner. That possibility is a necessary condition for success, but not a sufficient one. For example, the act may fail simply by having the wrong content: perhaps by appealing to a belief premise which your partner doesn't hold or isn't logically committed to, or one from which the target belief can't be derived.

What Changes And Why? It seems to me there are several types of change that can take place. You can learn a new piece of information that fits your existing position. Is this a change of mind? It seems to me it is not. I can learn a new piece of information in this way even if I set out to confirm my existing position on something, which does not strike me as a change of mind. However, I may learn a new piece of information that changes how I view other pieces of information. The new information here seems to act as a catalyst, not as a change in itself. But this can also happen without any new information at all; I may change my own mind on something that I already know.

I think we should focus on belief-changes effected by acts of persuasion, not on experience in general. At least, that was the mechanism of doxastic transition that I was emphasizing.

So if the information is neither required nor actively subject to the 'change' we are looking for, what is?

I think it may be important to realize that information is not persuasive; it is merely descriptive. It doesn't itself constitute an appeal. An appeal is something else entirely; an appeal to the information is something separate from the information itself. Those appellative mechanisms are the ones under question.

We change our minds about something we already hold to be 'true' in some sense, but what does it mean for me to go from making the statement "I hold X to be true" to "I do not hold X to be true" or "I hold X to be false"? Are these three distinct cases at all? How do they differ?

Yes, they are different. Let B be a doxastic operator on some proposition x, such that Bx may be interpreted to mean, "It is believed that x." Then your first utterance equates to (Bx), the second to (~Bx), and the third to (B~x).
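The distinction between (~Bx) and (B~x) is easy to blur in prose, so here is a tiny sketch of my own (nothing from the thread itself) of the three states as distinct values, with the one-way entailment from disbelief to suspension:

```python
from enum import Enum

# Three distinct doxastic states toward a proposition x (toy illustration):
class Doxastic(Enum):
    BELIEVES_TRUE = "Bx"     # "I hold x to be true"
    NO_BELIEF = "~Bx"        # "I do not hold x to be true" (mere suspension)
    BELIEVES_FALSE = "B~x"   # "I hold x to be false"

def lacks_belief_that_true(state):
    """~Bx is satisfied by both suspension and disbelief; Bx excludes it."""
    return state in (Doxastic.NO_BELIEF, Doxastic.BELIEVES_FALSE)

# For a consistent reasoner, B~x entails ~Bx, but not conversely:
assert lacks_belief_that_true(Doxastic.BELIEVES_FALSE)
assert lacks_belief_that_true(Doxastic.NO_BELIEF)
assert not lacks_belief_that_true(Doxastic.BELIEVES_TRUE)
```

The point of the asymmetry: moving someone from Bx to ~Bx is a weaker persuasive feat than moving them all the way to B~x.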

Most interestingly, what brings them about? How do I best argue so that I will bring about a change in understanding through this exchange? Or, perhaps better phrased, how do I best argue so that a change in understanding is a likely outcome of this exchange?

A successful persuasive act must appeal to some pre-argumentative belief premise (meaning one that would not change if the argument succeeded) which has a greater relative value than the belief premise which the act is trying to change.

I would also like to know whether anyone thinks there is a difference between changing a mind on a matter of empirical concern (i.e. evidential questions) and on one of 'raw belief'. Are the statements "I now view this evidence differently" and "I have changed what I believe to be the case in which evidence is not available" expressions of similar types of epistemic-doxastic event or are they actually just the same event happening to different concepts?

I think we only need to deal with propositions and beliefs in propositions here; no further details seem to be needed (unless we get into specifying a contentful model for these things). Again, "evidence" is not persuasive; "evidence" doesn't try to make you believe anything. It is the act of appealing to evidence which is properly persuasive.

But I think the most interesting question of all here is do we have any choice in the matter? Is there the potential that, in any given discussion, we could be persuaded against our will if the correct approach were used?

I don't think so; formally speaking, this would be analogous to forcing Euclidean geometry to derive non-Euclidean theorems. It just doesn't follow unless we change the formal system itself; give up the parallel postulate, say. But that just redefines the person we're dealing with, it doesn't describe a change in the same person's beliefs.

Now, people can be forcibly brought into an apparent state of concession (e.g. confession under threat of torture), but the person hasn't been brought to actually believe in the proposition; they're simply conceding it for extraneous bargaining reasons. We're asking questions about the process of someone's beliefs really changing, not just apparently changing under coercion.

It seems to me that we have to look at these types of questions if we are to understand what it looks like to change a person's mind, as we first need to establish what 'change' in this context means and how it can be brought about. What are we looking at when we see a change of mind, how do we recognise it, and what can we point to as having caused it to happen?

So, let's consider a person who believes that the truth of some proposition p implies the truth of q. This doesn't mean that the person believes in either proposition itself (that is, we may have ~Bp & ~Bq). Rather, we say B(p->q). Now, if this person reasons consistently, then (Bp->Bq) follows. Therefore, if we are to convince this person that q is true, we could do so by persuading them that p is true.

We would then try to find some belief which that person already has (call it Bk), where k->p. We must then convince them that (k->p); this would be sufficient to bring a consistent reasoner to believe in p, and therefore to believe in q.

Thus, some pre-existing belief such as Bk is necessary to persuade someone of a belief which they do not already hold, such as the beliefs in p and q above. Did this help model the scenario?
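The chain above can be run mechanically. This is a toy model of my own (the thread gives no code), treating beliefs as labels and a "consistent reasoner" as one whose belief set is closed under modus ponens:

```python
def close(beliefs, conditionals):
    """Close a belief set under modus ponens: whenever the reasoner
    believes a and believes a -> b, they come to believe b."""
    changed = True
    while changed:
        changed = False
        for a, b in conditionals:
            if a in beliefs and b not in beliefs:
                beliefs.add(b)
                changed = True
    return beliefs

beliefs = {"k"}               # the pre-existing belief Bk
conditionals = {("p", "q")}   # the standing belief B(p -> q)

conditionals.add(("k", "p"))  # the persuasive act: accept that k -> p

print(sorted(close(beliefs, conditionals)))  # ['k', 'p', 'q']
```

Note that the target belief Bq arrives only because Bk was already in place; starting from an empty belief set, adding conditionals alone persuades no one, which is the point about pre-existing premises.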

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

At 7/21/2013 5:31:52 PM, thg wrote:If you would be so kind, please help me understand how you would define my following "changes":1. When I was a young child, I "believed" in Santa. At some point (around 8-9 years of age, if I recall) I began to understand Santa in terms of fairytale and myth.

Our sense experiences are non-propositional processes; the color red is not a proposition. The sound of laughter is not a proposition. Seeing a mall Santa's beard fall off (or whatever) is not a proposition. As such, we may not be able to model this belief transition coherently; how such non-propositional data can become declarative beliefs or statements in the first place is a deeper mystery which we don't need to solve here.

Now, if your parents told you, "Santa is a fairytale," or something like that, then your belief transition would be accounted for by your prior belief in the statement, "My parents wouldn't lie to me." So, more broadly, I also want to point out that persuasion is a belief-changing exercise, not one which can generate beliefs from scratch.

2. At one point (for a few moments) I believed, as a child, that wearing goggles would keep me afloat in a pool (my older brother told me so). When I jumped in, I sank like a rock. My brother grabbed me and set me on the side of the pool, laughing, and insisting it was time I actually learned how to swim. I was, of course, very angry. At least two things changed: my "belief" in the buoyant properties of goggles, and how much I could trust in my brother.

Again, this experience changed your beliefs using non-propositional data, so it doesn't count as an instance of persuasion.

3. For several years (my teens and early twenties), I believed abortion was "wrong" or, at least, a terrible "lesser of evils". My girlfriend calmly introduced me to the notion of why we should blame a woman and expect her to bring a fetus to term if, in fact, she was raped, or, indeed, if she simply was ignorant or did not intend to get pregnant (and maybe even tried to use various birth-prevention methods unsuccessfully). For all my "erudition", I had never actually CONSIDERED this notion. When I finally considered it, I changed my position almost immediately...a position I have stuck with till the present.

Your girlfriend appealed to a belief premise which you already held and to a doxastic implication that you hadn't yet recognized: she first brought your attention to the fact that a belief in the obstetric protection of women would commit a sound reasoner to believe in the allowance of abortion as a protective medical procedure. She then convinced you that since you already believe in protecting women, you should believe that abortion is permissible if you are also a sound reasoner. You wanted to be a sound reasoner, so the consequent belief followed naturally from these premises.

Your other examples, I think, can be answered in the same way. Do you see where I'm going with this now?

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

I just thought of something that I thought I should inject into the conversation.

Given that humans often hold contradictory or ambiguous beliefs, can we think of the change of mind process as an organic and perhaps even evolutionary process?

Migration - Adoption of new beliefs from other sources (can be done without fully exploring the consequences of these beliefs).
Mutation - Changing a belief to better conform to one's other beliefs.
Death - Rejecting a belief on acknowledging its incompatibility with other beliefs.
Reproduction - Applying a belief to a new, unconsidered area of knowledge and observing the results (new beliefs).

Nobody ever investigates all the consequences of their beliefs (is this even possible?) and people almost always hold contradictory beliefs somewhere. Given this, the answer to whether you can become a materially different kind of reasoner is most definitely yes. One can adopt new beliefs that differ from the old and then kill off old beliefs until eventually the new beliefs are distinctly separate from the old (speciation?).
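For what it's worth, two of those four operations can be sketched in a few lines (my framing, not the_croftmeister's; "contradicts" here is a toy stand-in for whatever real inconsistency-detection would be):

```python
def contradicts(a, b):
    """Toy contradiction test over string labels, e.g. "p" vs "not-p"."""
    return a == "not-" + b or b == "not-" + a

def migrate(beliefs, new_belief):
    beliefs.add(new_belief)           # Migration: adopt without vetting

def death(beliefs):
    doomed = {a for a in beliefs for b in beliefs if contradicts(a, b)}
    if doomed:
        beliefs.discard(min(doomed))  # Death: cull one side (arbitrary tie-break)

beliefs = {"p", "q"}
migrate(beliefs, "not-p")             # a contradiction slips in unexamined
death(beliefs)                        # ...and is later noticed and rejected
assert not any(contradicts(a, b) for a in beliefs for b in beliefs)
```

Which side of the contradiction dies is left arbitrary here; in the evolutionary picture that tie-break would itself be governed by the relative fitness (or, in the persuasion framing above, the relative value) of the competing beliefs.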

On whether justification is materially separate from belief, I hold that it is. Justification is a tool that we use to validate the consistency of our beliefs, nothing more.

Yes, I see the force of your premise more clearly now. Thanks for taking the time to respond to my queries.

Again, before I proceed with any more detail, I want to ask another question. Please bear with me, as I can be pretty dense!

My question is: It sounds to me like your premise is marked by a form of circular reasoning (if that is the correct term). That is, any change that we might observe in our opinions is predicated on a dichotomy between new info or newly arranged info (which I will refer to as "subset" material), and the way we perceive info in general (superset material). It sounds to me like you're suggesting that any change that occurs is entirely superficial insofar as it all falls within a larger structure of reason which itself is foundational. I guess I have a couple of concerns:

1. Why shouldn't these "sub-" changes be considered actual changes of mind (hence, choices within a larger, closed system)? Sure, I may be subject to the super (or macro) set of rules when I play chess, but I'm still making valid choices within that system. The rules of the game do not negate my ability to make choices.

2. Couldn't you claim that ANY change, after all, is a subset of a larger consistent view (perhaps this indeed is what you are claiming)? That is, even if we were to produce an example of a change that was, indeed, a "super-" change rather than a "sub-" change, couldn't we easily proceed to find an even bigger construct that would swallow this change...making it now become a "sub-" change that further reinforces your original premise? So, back to the chess example, you could say, "Yes, you made a 'new' move at this point in the game, but that's only because you became aware of more options" or whatever. Or, perhaps a better example would be: "Yes, you added a totally new rule to the game and are now making choices based on that new rule, but you're still playing the game with a set of rules." Isn't this circular?

If you can give me an example of one these "super-" changes and/or show how your premise isn't circular, I would be much obliged and could then maybe respond more meaningfully.

You also asked about the formalisation. I should note here that this would mean that the formal system one used for belief here (your chosen doxastic modal logic) would need to be non-monotonic (defeasible). It would also likely be non-deterministic (i.e. that if we consider the premises in a different order we might obtain a different result). These kinds of systems have been studied, though they are very complicated.
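To illustrate the non-monotonicity with the standard textbook case (not an example from this thread): a default rule licenses a conclusion that gets retracted when a further premise arrives, i.e. learning more can remove a belief.

```python
def flies(facts):
    """Default rule: birds fly, unless known to be a penguin (an exception)."""
    return "bird" in facts and "penguin" not in facts

assert flies({"bird"})                 # Tweety is a bird: conclude it flies
assert not flies({"bird", "penguin"})  # more premises defeat the conclusion
```

Classical consequence is monotonic (adding premises never removes conclusions), which is exactly what this little rule violates; that's the property a doxastic logic of persuasion would need to accommodate.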

At 7/22/2013 7:24:47 PM, the_croftmeister wrote:You also asked about the formalisation. I should note here that this would mean that the formal system one used for belief here (your chosen doxastic modal logic) would need to be non-monotonic (defeasible). It would also likely be non-deterministic (i.e. that if we consider the premises in a different order we might obtain a different result). These kinds of systems have been studied, though they are very complicated.

I'm sensing this more by the moment; would you recommend any particular readings or general resources for that kind of work?

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

Well I think default logic is the most well known system for defeasible reasoning. Perhaps you could start with the Stanford article on non-monotonic logic and look at the sources from there? I confess it's not a subject I'm particularly familiar with either. I haven't really looked much into doxastic logic, I've mostly been concerned with the alethic operators.

At 7/22/2013 7:23:46 PM, thg wrote:My question is: It sounds to me like your premise is marked by a form of circular reasoning (if that is the correct term). That is, any change that we might observe in our opinions is predicated on a dichotomy between new info or newly arranged info (which I will refer to as "subset" material), and the way we perceive info in general (superset material). It sounds to me like you're suggesting that any change that occurs is entirely superficial insofar as it all falls within a larger structure of reason which itself is foundational.

I don't think I suggested its superficiality; my question was whether we can indeed replace every belief someone holds over time purely through the use of persuasion, including their beliefs about which rules they should use cognitively.

I guess I have a couple of concerns:1. Why shouldn't these "sub-" changes be considered actual changes of mind (hence, choices within a larger, closed system)? Sure, I may be subject to the super (or macro) set of rules when I play chess, but I'm still making valid choices within that system. The rules of the game do not negate my ability to make choices.

I'm not sure what you're getting at in particular; I didn't suggest that we don't make choices, or that we don't choose our beliefs, just that changes in belief brought about by persuasion may be amenable to formal descriptions which unfold deterministically.

2. Couldn't you claim that ANY change, after all, is a subset of a larger consistent view (perhaps this indeed is what you are claiming)? That is, even if we were to produce an example of a change that was, indeed, a "super-" change rather than a "sub-" change, couldn't we easily proceed to find an even bigger construct that would swallow this change...making it now become a "sub-" change that further reinforces your original premise? So, back to the chess example, you could say, "Yes, you made a 'new' move at this point in the game, but that's only because you became aware of more options" or whatever. Or, perhaps a better example would be: "Yes, you added a totally new rule to the game and are now making choices based on that new rule, but you're still playing the game with a set of rules." Isn't this circular?

Who's to say that rule choices aren't themselves governed by rules, or that several rules might be concomitantly competing for governance over each other? A rule of doxastic reasoning might be subverted by a more powerful rule which was previously unrealized by the reasoner (if we take the reasoner's value system into account). That isn't circular; it's just a description of a formal system whose own rules are unstable and can subvert and compete with one another. That's what makes this topic so interesting.

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

At 7/22/2013 7:24:47 PM, the_croftmeister wrote:You also asked about the formalisation. I should note here that this would mean that the formal system one used for belief here (your chosen doxastic modal logic) would need to be non-monotonic (defeasible). It would also likely be non-deterministic (i.e. that if we consider the premises in a different order we might obtain a different result). These kinds of systems have been studied, though they are very complicated.

I'm sensing this more by the moment; would you recommend any particular readings or general resources for that kind of work?

Well I think default logic is the most well known system for defeasible reasoning. Perhaps you could start with the Stanford article on non-monotonic logic and look at the sources from there? I confess it's not a subject I'm particularly familiar with either. I haven't really looked much into doxastic logic, I've mostly been concerned with the alethic operators.

Thanks!

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

And by that, I mean: I'll take a look.

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

At 7/22/2013 7:13:08 PM, the_croftmeister wrote:I just thought of something that I thought I should inject into the conversation.

Given that humans often hold contradictory or ambiguous beliefs, can we think of the change of mind process as an organic and perhaps even evolutionary process?

Migration - Adoption of new beliefs from other sources (can be done without fully exploring the consequences of these beliefs).
Mutation - Changing a belief to better conform to one's other beliefs.
Death - Rejecting a belief on acknowledging its incompatibility with other beliefs.
Reproduction - Applying a belief to a new, unconsidered area of knowledge and observing the results (new beliefs).

Nobody ever investigates all the consequences of their beliefs (is this even possible?) and people almost always hold contradictory beliefs somewhere. Given this, the answer to whether you can become a materially different kind of reasoner is most definitely yes. One can adopt new beliefs that are different from old and then kill off old beliefs until eventually the new beliefs are distinctly separate from the old (speciation?)

Very interesting; this would be incredibly complicated to model formally! Really useful for thought experiments and visualization, though.

On whether justification is materially separate from belief, I hold that it is. Justification is a tool that we use to validate the consistency of our beliefs, nothing more.

Do you think persuasion can be a form of justification? Is a justification some persuasive account which will always convince a certain kind of reasoner?

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

At 7/22/2013 7:13:08 PM, the_croftmeister wrote:I just thought of something that I thought I should inject into the conversation.

Given that humans often hold contradictory or ambiguous beliefs, can we think of the change of mind process as an organic and perhaps even evolutionary process?

Migration - Adoption of new beliefs from other sources (can be done without fully exploring the consequences of these beliefs)
Mutation - Changing a belief to better conform to one's other beliefs.
Death - Rejecting a belief on acknowledging its incompatibility with other beliefs.
Reproduction - Applying a belief to a previously unconsidered area of knowledge and observing the results (new beliefs).

Nobody ever investigates all the consequences of their beliefs (is this even possible?), and people almost always hold contradictory beliefs somewhere. Given this, the answer to whether you can become a materially different kind of reasoner is most definitely yes. One can adopt new beliefs that differ from the old, and then kill off old beliefs until eventually the new set is distinctly separate from the old (speciation?).

Very interesting; this would be incredibly complicated to model formally! Really useful for thought experiments and visualization, though.

Well, I'm not sure the human brain is really that amenable to formal reasoning anyway, so I think simple (or even understandable) formal models will always fail somewhere along the line. Formal tools like deductive systems are most useful for studying proofs and the like, which model only part of our reasoning. Belief, I think, is inseparable from the broader conception of cognition, and so an adequate explanation would require much more than just a formal system.

On whether justification is materially separate from belief, I hold that it is. Justification is a tool that we use to validate the consistency of our beliefs, nothing more.

Do you think persuasion can be a form of justification? Is a justification some persuasive account which will always convince a certain kind of reasoner?

Yes, this would work, and I think it is effectively the sense in which justification is used in logic. The problem I have is that the kinds of reasoners present in humanity are so varied as to make a comprehensive study of them very difficult, so we basically narrow it down to 'a reasoner who can be convinced by a formal proof in logic L'. Perhaps we can talk of emotional justifications (kind of like what psychology does)?

"It seems like persuasion can only succeed when a proposition appeals to, and is compatible with, some premise which is already believed prior to the act of persuasion."

This is really interesting. It seems to me that our stances could be held up by a complex web of premises, with one's premises predicated on one's acceptance of other premises, and those in turn predicated on still further premises, and so on. So a successful act of persuasion could 'rearrange' premises, causing one to reject a premise (by means of our premises), and a brand new premise could be derived from this new set of premises, with a change of mind being the consequence. I think for persuasion to be possible, we need the premises to get the premises to get the premises, etc., that would serve as the foundation on which one could change one's mind. So essentially, I would change "some premise" to "some premise ultimately derivable from one's premises." I'm a total layman on this, but that's my first impression.
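The "premises to get the premises" picture can be made concrete as forward chaining over a toy rule set. This is only an illustrative sketch: the rules and propositions below are invented, and real belief webs are of course nothing like this tidy.

```python
# Sketch of "some premise ultimately derivable from one's premises":
# keep applying simple if-then rules until nothing new can be derived.

def derivable(premises, rules):
    """rules: iterable of (antecedents, consequent) pairs."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in known and all(a in known for a in antecedents):
                known.add(consequent)
                changed = True
    return known

# On this picture, a persuasive proposition succeeds only if it hooks
# into the reachable set: everything derivable from the held premises.
rules = [({"p"}, "q"), ({"q", "r"}, "s")]
print(derivable({"p", "r"}, rules))
```

Starting from p and r, the rules yield q and then s; a proposition that connects to none of these would, on this sketch, have nothing to appeal to.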

At 7/22/2013 8:23:41 PM, dylancatlow wrote:"It seems like persuasion can only succeed when a proposition appeals to, and is compatible with, some premise which is already believed prior to the act of persuasion."

This is really interesting. It seems to me that our stances could be held up by a complex web of premises, with one's premises predicated on one's acceptance of other premises, and those in turn predicated on still further premises, and so on. So a successful act of persuasion could 'rearrange' premises, causing one to reject a premise (by means of our premises), and a brand new premise could be derived from this new set of premises, with a change of mind being the consequence. I think for persuasion to be possible, we need the premises to get the premises to get the premises, etc., that would serve as the foundation on which one could change one's mind. So essentially, I would change "some premise" to "some premise ultimately derivable from one's premises." I'm a total layman on this, but that's my first impression.

Yes, this is the intuition that I'm playing on: that a person who believes propositions can be modeled by analogy to a system which proves propositions.

But this is complicated by the fact that belief sets, unlike formal axioms, are not static or constant entities; they can migrate gradually, subvert one another, behave according to internally competing rules, and be changed by data that isn't even propositional at all!

So it's difficult to say how beliefs can be rearranged, discarded, or replaced without defining some specially simplified derivational case for them: this special case, I think, is persuasion. But how can we eliminate the prior kinds of complications? What if someone values one premise more than another? Now we have to weigh the premises comparatively, etc.

Like Croftmeister said, this is perhaps impossible to model completely; it's not impossible to think about, though!
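One way to think about the weighing problem in miniature: give each premise a numeric weight standing in for how "cherished" it is, and let a persuasive challenge to a contradiction remove the lighter side. The propositions and weights below are invented purely for illustration.

```python
# Hypothetical sketch: each premise carries a weight (how cherished it
# is). A persuasive challenge to a contradiction drops the lighter side.

def persuade(weights, p, q):
    """If both p and q are held and conflict, drop the less-weighted one.
    Returns an updated weight table; unchanged if either is absent."""
    if p not in weights or q not in weights:
        return weights
    loser = p if weights[p] < weights[q] else q
    return {k: v for k, v in weights.items() if k != loser}

held = {"free will exists": 0.9, "determinism is true": 0.4}
held = persuade(held, "free will exists", "determinism is true")
print(held)  # the less cherished premise is dropped
```

Even this crude rule shows why the same argument can succeed on one reasoner and fail on another with the same beliefs: everything turns on the weights, which the belief contents alone don't determine.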

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

It raises the question: if this is true, where did our first premise come from?

I think the first premise was injected without reasoning and arises from our growth and the development of our brain itself. Thus it appears as part of the genetic code and its interaction with the environment.

It raises the question: if this is true, where did our first premise come from?

Ok, so I want to establish that I don't think we can account for how beliefs come to exist to begin with (that's a more shadowy process), but that we might be able to analyze how we can change belief sets which are already formed (more precisely, through persuasion).

Now, we might be able to find an answer to your question if we force some kind of nice linear structure on the derivation procedure. However, there are some logic systems (Croftmeister also mentioned these earlier) which prove different theorems depending on the order in which the premises are stated; these systems are non-commutative.

Our beliefs may be like this: if I've believed something longer, that belief might be more cherished to me and more difficult to change. Now rearrange the order of those same beliefs so that it becomes newer and less cherished to me; it might then be easier to change under the new ordering. We get new rules simply by shifting the same content around.

So the question becomes: "Which beliefs must remain invariant under every ordering of my believed premises?"
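That question can be made concrete with a toy revision rule: accept premises one at a time, rejecting any that contradicts one already held. The contradiction relation below is an invented stand-in, and real reasoners are obviously messier.

```python
# Order-dependent ("non-commutative") belief revision: the surviving
# set depends on the order in which the same premises arrive.

def accept_in_order(premises, contradicts):
    held = []
    for p in premises:
        if not any(contradicts(p, h) for h in held):
            held.append(p)
    return held

# Toy contradiction relation: "x" conflicts with "not-x".
conflict = lambda a, b: a == "not-" + b or b == "not-" + a

print(accept_in_order(["p", "not-p", "q"], conflict))  # ['p', 'q']
print(accept_in_order(["not-p", "p", "q"], conflict))  # ['not-p', 'q']
```

Here only "q" survives every ordering: it is the order-invariant belief, which is exactly the kind of thing the question above is asking after.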

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

I'd like to add that even in the same order, the premises might yield different results if applied in a different way. I might use my premises in the same order, but because I use different inference rules to connect them, I end up with different conclusions.

At 7/22/2013 9:08:49 PM, the_croftmeister wrote:I'd like to add that even in the same order, the premises might yield different results if applied in a different way. I might use my premises in the same order, but because I use different inference rules to connect them, I end up with different conclusions.

Yes, this is true.

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker

At 7/22/2013 8:45:30 PM, the_croftmeister wrote:I think the first premise was injected without reasoning and arises from our growth and the development of our brain itself. Thus it appears as part of the genetic code and its interaction with the environment.

Do you think this same process of cognitive premise injection could be active throughout one's life, and be a factor in why people change their minds (or are able to)? Could persuasion be the identification of these subconscious, unchosen and, sometimes, unacknowledged premises? Also, since neither our conscious mind nor our subconscious mind could begin to hold together the very complex web of premises each of us has, do you think that our justification (whether conscious or subconscious) for our premises is a new process each time, and not simply retrieval? Could a successful act of persuasion merely be getting a person to see something in a different way by means of the same premises? Could my rhetorical questions be getting annoying? :)

Also, I was thinking that many of our premises are held by nothing more than the fact that we remember we have them. So a successful act of persuasion could be the act of challenging a premise whose foundation no longer exists, and which, when challenged, cannot be justified by the then present premises.

At 7/23/2013 9:06:01 AM, dylancatlow wrote:Also, I was thinking that many of our premises are held by nothing more than the fact that we remember we have them. So a successful act of persuasion could be the act of challenging a premise whose foundation no longer exists, and which, when challenged, cannot be justified by the then present premises.

I would also submit that trusting in one's own premises as well as trusting in one's own ability to remember and know one's own premises (as well as trusting in another's ability to clarify) all play a role in the persuasion/belief process...which would appear to make the entire modeling process complex indeed.

I will continue reading this thread with great interest. Carry on, gentlemen!

At 7/23/2013 9:06:01 AM, dylancatlow wrote:Also, I was thinking that many of our premises are held by nothing more than the fact that we remember we have them. So a successful act of persuasion could be the act of challenging a premise whose foundation no longer exists, and which, when challenged, cannot be justified by the then present premises.

This is a good insight: a persuasion act can simply consist of a normative appeal to certain rules (such as discarding a belief which can't be inferred from any other beliefs in the same set, as you suggest) without containing any new propositions.

Now, it is often the case that people will begin with such an underived belief but, out of some kind of value commitment to that belief, will go about constructing a belief set from which it can be derived (thus, an enterprise of confirmation bias). That sort of behavior is known as an "escalation of commitment". This and other cognitive biases exist as rules which govern reactions to such challenges as you've described, dylan.

It is likely that some reasoners will respond by reverse-engineering a foundation in order to preserve the belief, causing the persuasive act to fail.

"The book you are looking for hasn't been written yet. What you are looking for you are going to have to find yourself, it's not going to be in a book..." -Sidewalker
