GrahamH wrote: I don't think we disagree. I would add that we don't need humans to perceive violence. An AI trained to recognise violence would likely identify GTA5 as violent. If you want to dig down to what violence means you have to go much deeper than human brains.

Violence is a variable human concept. It has no meaning beyond that. As for training an AI to recognise violence, good luck with that. Really. I mean, all action requires context, and the judgement thereof is probably contingent upon contemporary moral standards. Is shooting a suffering horse, for example, an instance of violence? Not from any perspective I've come across. Although, there could arise a situation in which an individual wanted to shoot a suffering horse for evil/violent reasons. For instance, one might shoot a suffering horse to make certain it died, when one had other reasons for wanting it dead: to make its owner suffer from grief, say, or to guarantee that your own horse won a particular race at the weekend.

One CANNOT programme such indeterminable variables into a non-FEELING processing machine (AI). Computers/robots are not like us precisely because we're not like Spock. And consciousness cannot be explained as though we were.

James, I am not sure that us nubwits think science has all the answers, or even that it can get all the answers. But us nubwits likely do think science gives us a better quality of answer than, say, stretching our sphincter muscle and trying to work something out with or without a pencil.

jamest wrote:The biggie here is that 'computer science' doesn't have a fucking clue what emotions are or how to account for them within their 'logical' assessment of computer technology.

You nubwits think that science has all the answers, yet in terms of explaining 'you' it knows absolutely fuck all.

And yet, here you all are, like it's a Sunday and your faith impels you to do so.

A familiar theme, I think you'd agree.

The problem you have, jamest, is that you have very fixed ideas. Computers apply fixed rules of binary logic to build complex functions that are not logical analysis. Computers are not logical thinkers (and nor are humans); those are just some of the capabilities of the neural networks found in brains and of artificial neural networks.

Granted, it is early days for AI, but it is already being applied to identifying violence and sexually explicit content, and to predicting illegal behaviour. It is already doing things that humans don't know how to analyse logically.

You may know the Cloud Vision API for its face, object, and landmark detection, but you might not know that the Vision API can also detect inappropriate content in images using the same machine learning models that power Google SafeSearch. Since we announced the Google Cloud Vision API GA in April, we’ve seen over 100 million requests for SafeSearch detection.
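As a concrete illustration, here is a minimal sketch of calling that SafeSearch detection. This is not Google's official client library; the endpoint and the `SAFE_SEARCH_DETECTION` feature name come from the public REST API, but the function names and structure here are my own:

```python
import base64
import json
import urllib.request

VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_safe_search_request(image_bytes: bytes) -> dict:
    # The Vision API takes base64-encoded image content plus a feature list.
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "SAFE_SEARCH_DETECTION"}],
        }]
    }

def safe_search(image_bytes: bytes, api_key: str) -> dict:
    req = urllib.request.Request(
        f"{VISION_ENDPOINT}?key={api_key}",
        data=json.dumps(build_safe_search_request(image_bytes)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Each category (adult, violence, ...) is rated on a likelihood
        # scale from VERY_UNLIKELY to VERY_LIKELY.
        return json.load(resp)["responses"][0]["safeSearchAnnotation"]
```

The point is that "is this violent?" comes back as a graded likelihood, not a yes/no rule someone programmed in by hand.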

The principle of deep learning is to automatically build contexts for recognising the likeness of something, and the complexity of that thing, and of what constitutes a likeness, has no definite boundaries. It can be arbitrarily complex. Given input from bio-sensors, such an AI could tell your emotional state, for example. It could use that to predict your behaviour. Until these things have been achieved to human level there is certainly room for doubt about what can be done, but there is no sound reason to assert it cannot be done.
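To make the "trained to recognise" idea concrete, here is a toy sketch of a classifier learning a concept from labelled examples. It is illustrative only: the bio-sensor features and labels are invented, and a single perceptron is vastly simpler than the deep networks being discussed, but the principle is the same: no rule for the concept is programmed in; it is induced from examples.

```python
# Invented features scaled to ~[0, 1]: (heart rate, skin conductance).
# Label 1 = "agitated", 0 = "calm".
data = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
labels = [1, 1, 0, 0]

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # Start with zero weights and nudge them toward each mistake.
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = train_perceptron(data, labels)
```

After training, `classify` labels unseen sensor readings it was never explicitly told about, which is the sense in which such systems "recognise" a concept.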

jamest wrote: Violence is a variable human concept. It has no meaning beyond that. ... One CANNOT programme such indeterminable variables into a non-FEELING processing machine (AI).

I'm interested in the relation of emotion to understanding. What role do you think emotion plays in determining whether violence is being done in your scenarios? It seems more that you are hand-waving at emotional responses that may result from data that is not acquired through emotion. You get upset or angry as a result of working out what's going on; you don't work out what's going on from feeling emotional. Indeed, there seems to be a high probability of drawing wrong conclusions from an emotional response to what might be happening.

David went on about emotions being required for decision making, but that seemed to come down to more of a tie-breaker. None of which could not, in principle, be performed by a machine.

David went on about emotions being required for decision making, but that seemed to come down to more of a tie-breaker. None of which could not, in principle, be performed by a machine.

I think the operative term there is "in principle". I think it is a combination of sheer complexity (in terms of the number of thalamo-cortical loops available) and the role of hormones that make human decision-making different from anything machines can do. There is a point where quantity becomes quality. Thus, machines won't be able to mimic humans accurately any time soon, especially as the complexity of computers is no longer increasing on an exponential scale - the technology is becoming mature.

I've been giving this problem the benefit of my insightful and penetrating mind. Here is what I think is going on:

As organisms evolved, those that developed abilities to sense and react to their environment gained a reproductive advantage over those that could not. They could seek or avoid environments, depending on whether that environment contained risk.

This sort of decision making requires no processing; it can easily be hard-wired. And that works. In its limited capacity, it works. It works well enough to control satellites in space. That approach isn't very adaptable to changing environments, though. For example, some new thing: is it a hazard? Is it a danger? If no "pre-wired" logic exists for the new environmental pressure, how does that new pressure affect existing life? Or, what about many new things? How would such basic organisms avoid or take advantage of those new things in their environment?

Here's where I think interesting things started to happen.

Memory. It all comes down to memory. An ability to record events and recall those events. Without memory, an organism can't easily adapt. Such an organism is limited to its wired behavior. But, with memory, an organism can develop a model of its environment, and make decisions about what the organism senses in its environment that are based on comparisons to its internal model... "Red thing in sight. Have I ever eaten a red thing? No? I won't eat this red thing..."
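The "consult memory before acting" loop above can be sketched in a few lines. This is a toy illustration (the class and method names are my own invention), but it captures the mechanism: record outcomes of past encounters, and fall back to caution for anything unrecorded.

```python
class Organism:
    """Decides about stimuli by comparing them to remembered outcomes."""

    def __init__(self):
        self.memory = {}  # stimulus -> outcome ("good" or "bad")

    def record(self, stimulus, outcome):
        self.memory[stimulus] = outcome

    def decide(self, stimulus):
        # Never seen it? Be cautious, as in the red-thing example.
        if stimulus not in self.memory:
            return "avoid"
        return "approach" if self.memory[stimulus] == "good" else "avoid"
```

A creature with this extra dictionary outcompetes one whose reactions are all hard-wired, because its behaviour can change within a single lifetime.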

An organism with a mechanism that can store events, recall them later and use them in decision making, is going to have a reproductive advantage over those that cannot.

So. Are we machines? Or, are we machines with routines running in our brains? I'm betting on the latter. What about those routines, though? How did that happen? Back to that evolving organism...

I've already described a basic routine that compares. That routine may be what compares multiple extant situations to order the best behavior from the executive routine. Or, that comparison routine may get information from the memory routine, to arrive at a choice that is based on history as well as current events. That sort of comparison would add a dimension of knowledge to decisions that increases the likelihood of a good decision.

What about these routines? A routine in our world is an abstraction. In our world, a good approximation may be a running executable in a computer. That executable gets loaded into working memory, takes inputs, follows its coded purpose, and produces instructions for the executive to carry out. I propose that the routines running in our brains are quite similar to this idea.

What we lack, though, is a means to copy verbatim all of a routine's state in working memory to nonvolatile memory, so we can shut the routine down. When we shut our routines down (die), they can't be restarted. This indicates that these routines are stored in our brains in a fashion that falls apart if the machine shuts off. Not much different from a computer without a hard drive. So long as sufficient working memory exists in a computer, saving that state isn't impossible to do. From my desk, we appear to be the equivalent of computers without hard drives.
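For contrast, here is the thing computers can do that (on this view) brains cannot: serialise a routine's working state to nonvolatile storage, shut down, and resume from the snapshot. The state dictionary is an invented stand-in for a routine's working memory.

```python
import pickle

# A routine's working state, mid-task.
state = {"goal": "find food", "step": 3}

snapshot = pickle.dumps(state)     # copy working memory to the "hard drive"
restored = pickle.loads(snapshot)  # ...and restart from the saved state later
```

The restored copy is a distinct object with identical contents, which is exactly the trick dying brains don't get to perform.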

What about feelings? Emotions? What's going on there? In the context of reproductive success, how do feelings come into the picture? I don't think this is that difficult a problem, actually. What then, is a feeling? I think it's a judgment of value of a comparison. When the comparator routine produces its output (go/no-go, for example), that output could then be run through a values routine that can prioritize the resulting action. Since we do multiple things at the same time, something needs to sort those things out, prioritize them for the good of the organism. Fear of snakes should drive an action that comes before "pick and eat the strawberry". That feeling, or value judgment, about snakes can save your ass. Pretty useful.
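The "values routine" described above amounts to sorting candidate actions by urgency. A minimal sketch (the scores and action names are invented for illustration; the felt value judgment is stood in for by a plain number):

```python
def prioritise(candidates):
    """Order candidate actions by urgency score (lower = more urgent)."""
    return [action for _, action in sorted((u, a) for a, u in candidates.items())]

# Fear of the snake outranks the snack, which outranks idling.
actions = {"flee the snake": 0, "pick the strawberry": 5, "rest": 9}
```

The executive then simply takes the head of the list, so the snake response always pre-empts the strawberry.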

Memory. How, exactly, does that operate in our brains? What is the mechanism by which our brains write, index, and read memories? I have some ideas there, too. I think we're recording all the time. But, we can't read everything we record, forever. I suspect there is some mechanism that records cumulative successes in memory recall. It's possible that a judgment routine affects that "success register" when the memory is laid down. It's also possible that that success register can be incremented as that memory is read. Memories with higher success register indices would be much easier to recall than others. It's also possible that memories with very low success register values eventually become overwritten, though I tend to doubt that (random recall of decades old events, for example).
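The "success register" idea sketches out naturally as a counter attached to each memory: it is set when the memory is laid down, bumped on each recall, and the highest-counted memory is the one that comes to mind first. All names here are invented for illustration.

```python
class MemoryStore:
    """Memories carry a success counter that recall reinforces."""

    def __init__(self):
        self.items = {}  # key -> [success_count, content]

    def write(self, key, content):
        self.items[key] = [1, content]

    def read(self, key):
        entry = self.items[key]
        entry[0] += 1  # reading the memory reinforces it
        return entry[1]

    def most_recallable(self):
        # The memory with the highest success count surfaces first.
        return max(self.items, key=lambda k: self.items[k][0])
```

Overwriting the lowest-counted entries when space runs out would give the forgetting behaviour the post is doubtful about, so that step is deliberately left out.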

So, there it is. My best guess is that we are machines. But, machines with routines that were evolved long ago, and those routines emerge as we develop and grow. If we shut the machine off, we are at the mercy of the non-volatile nature of our memories, and the routines, which are us, halt in an unknown state and cannot be restarted.

proudfootz wrote:The problem being a massless immaterial thing outside of time or space... In what sense can it move?

It does not itself move but, rather, it begins motion.

IIRC Aquinas claims the 'first mover' is 'put in motion':

"it is necessary to arrive at a first mover, put in motion by no other"

Which is a contradiction to this premise in the argument:

"Only an actual motion can convert a potential motion into an actual motion"

So this hypothetical 'first mover' can only impart motion by being in motion itself, and no motion can occur without a previous motion.

A few moments of reflection shows that Aquinas has fatally undermined his own position.

Among the problems with trying to turn a god into an abstraction is that once it is removed far enough from reality there's no sense in which it can impact reality.

The concept of an unmoved mover is paradoxical but, as Aquinas showed, it necessarily must exist to explain the motion we see.

Aquinas may have claimed something. But that is a long way from showing anything.

An unmoved mover is not so much a 'paradox' as it is a self-refuting idea.

There seems to be a lack of scientific precision in this debate, starting with Aquinas, who failed to state whether an exploding, static object was "in motion". Looking at the position of its centre of gravity, you would say it was still static, but looking at its individual components, you would say they were newly in motion.


The_Metatron wrote: ... What about feelings? Emotions? What's going on there? ...

Actually, I was thinking more of the emotion of fear (mediated by the amygdalae), which is the best spur to action. Sorry to leave you guessing, but anyone who knows anything about brains would realise this.

What's with the "...but anyone who knows anything about brains would realise this." crack? Is it necessary? Did that make you feel like you know more about this than I do?

It's a detail that doesn't detract from the point of my post. But, it may be that my post isn't in the best topic for it. I didn't find one specifically discussing the evolution of human intelligence or the rise of consciousness, a thing which eludes a clear definition.

In my post, I purposely omitted possible areas of the brain that seem to be responsible for this or that emotion. It didn't matter to what I was writing. But what you replied does lend weight to the idea that certain groups of neurons have different functions. That isn't to say the neurons themselves work particularly differently, but the functions they collectively perform do different things. That's a good correlation to my ideas, I think.

Ultimately, it has to come down to what neurons can be grown to do, with a basis in DNA directing it.

I wonder how different its neurons are in construction than those in other parts of the brain. Or, is the difference found in how interconnected its neurons are?

At some point, it seems obvious, we want to be able to explain the chain of events from DNA all the way to consciousness. Having read the wiki article on the amygdala, it is apparent that the gaps in our knowledge of what happens between those two endpoints are becoming smaller and smaller.

Does there exist some reason to think we won't put it all together to a point when we understand it pretty well?

The_Metatron wrote: ... My best guess is that we are machines. But, machines with routines that were evolved long ago, and those routines emerge as we develop and grow. ...

You would likely enjoy Dennett's Freedom Evolves.

His main thesis has a lot in common with yours.

Thanks, man. I also have a copy of Sagan's books, Shadows of Forgotten Ancestors, and Broca's Brain. I think I'll grab one of them and review what the good professor had to say about these ideas. Along the lines of what I wrote, I have read those books, but decades ago. I expect to see some memories returning of my reading them, and what I thought about it at the time. There's a high probability that most, or all, of what I wrote is a synthesis of stuff I've learned over the years.


There seems to be a lack of scientific precision in this debate, starting with Aquinas, who failed to state whether an exploding, static object was "in motion". Looking at the position of its centre of gravity, you would say it was still static, but looking at its individual components, you would say they were newly in motion.

I've been told that by "motion" Aquinas means "change". Therefore, an exploding object would be in motion.

I am surprised you would say something like: "That's the point, romansh: that there is NO 'quality' of answer in repeatedly ignoring the fact that we're not Spock." In that, it is like saying ... "this statement is false".

You have disqualified any statement that you [or anyone else] makes as requiring consideration. By definition in your world view you cannot back up your statement.

Thanks, man. I also have a copy of Sagan's books, Shadows of Forgotten Ancestors, and Broca's Brain. ...

Fun stuff to consider.

I enjoyed Dennett's book and just about agree with everything, except the conclusion. This review by Coyne highlights the problems with Dennett's conclusion.

I am surprised you would say something like: "That's the point, romansh: that there is NO 'quality' of answer in repeatedly ignoring the fact that we're not Spock." ... You have disqualified any statement that you [or anyone else] makes as requiring consideration. ...

You've omitted the text and context in which I made that statement, which specifically referenced physicalist/scientific explanations for 'consciousness'. My point being that such narratives cannot repeatedly ignore our emotional disposition and treat us like mere logic machines. I made no reference to there being NO explanations at all for our 'being'.