Welcome to the International Skeptics Forum, where we discuss skepticism, critical thinking, the paranormal and science in a friendly but lively way.

So someone that never jumped out of an aeroplane with a parachute knows what it feels like to jump out of an aeroplane with a parachute before they actually feel what it feels like? How?

No, but he is afraid of jumping out of an airplane based on his other experiences and, as you say, an evolved (based on things that happened in the past) instinct to fear falling from high places.

All of this is based on information from the past, as you say.

__________________"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov

Sure we do, anticipation, expectation, anxiety, fear are all feelings we have before we jump out of an aeroplane for the first time with a parachute on our backs.

But all these emotions are based on information from the past/present (knowledge of parachuting, seeing the open hatch and the ground a great distance below) being used to anticipate a future event (jumping out the hatch and falling from a great height).

The capacity to anticipate future events is a trivial thing to program. So when you said you cannot program a computer to feel the future, what were you objecting to?

Perhaps the keyword here is feel? But assuming that you have a program capable of actually feeling emotions (which is theoretically possible, albeit not practically possible at present), why do you believe it wouldn't be able to feel these kinds of emotions for anticipated events as humans do?

__________________"That is just what you feel, that isn't reality." - hamelekim

No, but he is afraid of jumping out of an airplane based on his other experiences and, as you say, an evolved (based on things that happened in the past) instinct to fear falling from high places.

All of this is based on information from the past, as you say.

Biological evolutionary adaptation is not a function of the past. The past is an abstraction that humans invented. Humans don't adapt to the past; we adapt to the future.

__________________"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/

But all these emotions are based on information from the past/present (knowledge of parachuting, seeing the open hatch and the ground a great distance below) being used to anticipate a future event (jumping out the hatch and falling from a great height).

No the abstract knowledge you mention is based on the past. Emotions are not abstract knowledge.

Originally Posted by Brian-M

The capacity to anticipate future events is a trivial thing to program. So when you said you cannot program a computer to feel the future, what were you objecting to?

It's only trivial when the future events are trivial, because they are not really future events, but projections of abstractions of the past into the future.

Originally Posted by Brian-M

Perhaps the keyword here is feel? But assuming that you have a program capable of actually feeling emotions (which is theoretically possible, albeit not practically possible at present), why do you believe it wouldn't be able to feel these kinds of emotions for anticipated events as humans do?

Assuming?
Yes, it is theoretically possible to project past abstractions into a future which is itself a projection of past abstractions, as long as one is consistent.

The real world however is not a function of our consistent abstractions.

If computers are to function independently of us in the real world they will need to be more than a projection of our abstractions.
Seeing that computers, like all human tools, are defined as projections of our abstractions of the past they will never be independent of us since they require abstractions of the past to exist.

Because by definition the real future is not our abstract projections of the past.
Computers cannot feel the real future; they can only feel the abstract projections of the past into the future that we program them to feel.

__________________"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/

Natural selection is the gradual, non-random, process by which biological traits become either more or less common in a population as a function of differential reproduction of their bearers.

Or, if you don't disagree with that, how do you suppose the future is able to influence natural selection?

What happens is that individuals who fall off cliffs die and thus fail to reproduce or to contribute to the reproduction of their close kin.
Individuals who happen to have some uncomfortable feeling when too close to the edge of a cliff (say fear) are less likely than those without that feeling to fall off cliffs.
If that feeling is influenced by some genes, those genes will tend to spread through the gene pool over time.
Because this process took place in the past, we living now tend to have those genes.

Whatever you mean by "biological evolutionary adaptation", if you are supposing that it's influenced by the future, you are certainly in disagreement with evolutionary biologists.

__________________"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov

Computers cannot feel the real future; they can only feel the abstract projections of the past into the future that we program them to feel.

The same is true of humans.

If you can give an example of human behavior that can't be explained as dealing with information from the past to make predictions about the future... well, I'll be very impressed.

__________________"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov

Computer systems have already been developed that can anticipate the future by modeling contexts (typically of potential actions), evaluating the probability of various outcomes, and planning their next actions based on the results, or feeding the evaluated likely outcomes into another round of modeling. The algorithms for this kind of multi-level decision-tree analysis have been refined by generations of computer chess and other game algorithms and are now being used in more general contexts.

Even the simplest tic-tac-toe program needs to anticipate the opponent's moves, and chess programs spend most of their time 'imagining' the results of possible future moves.
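The lookahead at the heart of such game programs can be sketched in a few lines. This is a minimal, illustrative minimax (negamax-style) for tic-tac-toe, not any particular engine's implementation: the program literally enumerates possible futures before choosing a move.

```python
# Minimal minimax lookahead for tic-tac-toe: the program "anticipates"
# every possible future position before choosing a move.
def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    best = (-2, None)
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = None
        score = -score  # the opponent's gain is our loss
        if score > best[0]:
            best = (score, m)
    return best

empty = [None] * 9
score, move = minimax(empty, 'X')
print(score, move)  # perfect play from an empty board is a draw
```

Chess programs work on the same principle; they just add pruning and heuristic evaluation because the full game tree is far too large to enumerate.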

__________________Simple probability tells us that we should expect coincidences, and simple psychology tells us that we'll remember the ones we notice...

Can you give me any example of abstract knowledge that isn't based on the past (or present)? If not, then what are you objecting to?

Originally Posted by !Kaggen

Emotions are not abstract knowledge.

Emotions are internal states of mind that can be triggered by many things, including abstract knowledge.

Originally Posted by !Kaggen

It's only trivial when the future events are trivial,

It's trivial regardless of the significance of future events.

(And the matter of accuracy is independent of both these factors.)

Originally Posted by !Kaggen

because they are not really future events, but projections of abstractions of the past into the future.

Not always. It's also possible to model possible future events based on past and present information, regardless of whether or not equivalent events have ever occurred in the past.

But either way, isn't this exactly what the human mind does?

Originally Posted by !Kaggen

The real world however is not a function of our consistent abstractions.

So?

Originally Posted by !Kaggen

If computers are to function independently of us in the real world they will need to be more than a projection of our abstractions.

They'd be better off generating their own abstractions rather than relying on ours. Independence of thought is important.

Originally Posted by !Kaggen

Seeing that computers, like all human tools, are defined as projections of our abstractions of the past

This seems like gibberish to me.

Originally Posted by !Kaggen

they will never be independent of us since they require abstractions of the past to exist.

Originally Posted by !Kaggen

Because by definition the real future is not our abstract projections of the past.
Computers cannot feel the real future; they can only feel the abstract projections of the past into the future that we program them to feel.

We wouldn't be programming them to feel a specific future. They'd generate their own expectations of the future and react emotionally to those expectations.

But are you saying computers cannot feel emotion about the actual future, because they can only anticipate possible futures based on existing information derived from past and present experience, and cannot know for certain what the actual future will be?

If so, how are humans any different?

__________________"That is just what you feel, that isn't reality." - hamelekim

Sure we do, anticipation, expectation, anxiety, fear are all feelings we have before we jump out of an aeroplane for the first time with a parachute on our backs.

So someone that never jumped out of an aeroplane with a parachute knows what it feels like to jump out of an aeroplane with a parachute before they actually feel what it feels like? How?

It is a trivial ability of humans which developed through evolution. No magic there. You computationalists should study biology sometime; you might learn something, instead of assuming everything that is not about computers is magic.

Yes, your wording "feel the future" is the problem with your statement.

Let's use the word "predict," and without any assumption of supernatural prophesy.

Computers predict the future by extrapolating from the past. Video game characters can easily predict where you will be in a moment and shoot at where you will be rather than where you are. Of course, an opponent can change course, and there are algorithms to make predictions there.
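That kind of target leading is only a few lines of arithmetic. Here is a hypothetical 2D setup with made-up numbers, assuming the target keeps a constant velocity; the loop refines the projectile's flight-time estimate until it settles:

```python
# "Leading the target": predict where a moving opponent will be after the
# projectile's flight time, assuming (hypothetically) constant velocity.
def lead_target(target_pos, target_vel, projectile_speed, shooter_pos):
    t = 0.0
    for _ in range(20):  # iterate flight-time estimate to a fixed point
        future = (target_pos[0] + target_vel[0] * t,
                  target_pos[1] + target_vel[1] * t)
        dist = ((future[0] - shooter_pos[0]) ** 2 +
                (future[1] - shooter_pos[1]) ** 2) ** 0.5
        t = dist / projectile_speed
    return future  # aim point: where the target is *predicted* to be

aim = lead_target(target_pos=(10.0, 0.0), target_vel=(0.0, 5.0),
                  projectile_speed=20.0, shooter_pos=(0.0, 0.0))
print(aim)  # aims ahead of the target, not at its current position
```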

And has been pointed out, in games from tic-tac-toe to chess, computers routinely anticipate future positions to decide where to move.

We call this "feeling" because this work of our neural networks is done subconsciously (not leaving traces of how the data processing was performed in our conscious memory). The result comes to us as a feeling.

There's a famous story of a psychiatrist who, even though he hadn't seen a particular patient in over a year, started to get a feeling of concern and called the patient. Sure enough, the patient was having sudden serious emotional difficulties. The psychiatrist at first wondered if he was psychic, then realized the day was the anniversary of this patient's extremely traumatic experience of a prior year. In the patient and the psychiatrist, that day subconsciously triggered a "feeling" about trouble. The data processing connecting the date with the patient's distress was not at the conscious level.

So it goes with "feeling the future." Feelings like that come from unconscious data processing. Indeed, the vast majority of what the brain does is subconscious.

If we wanted to make machines "feel the future" we'd build separate subsystems that would take in the information they needed, make predictions, and feed only the results to the conscious module. No magic bean needed.
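A toy sketch of that architecture, with entirely hypothetical module names, data, and thresholds: a "subconscious" predictor crunches the raw observations and passes only a feeling-like summary to the "conscious" module, which never sees the underlying data processing.

```python
# Toy sketch: a "subconscious" subsystem makes the prediction and feeds
# only the result (a feeling) to the conscious module. All names and
# numbers are illustrative, not a real cognitive architecture.
def predictor(observations):
    """Subconscious module: extrapolate the trend, surface only a summary."""
    trend = observations[-1] - observations[0]
    forecast = observations[-1] + trend / (len(observations) - 1)
    return "uneasy" if forecast > 100 else "calm"  # only this escapes

def conscious_module(feeling):
    """The conscious module gets the feeling, not the data behind it."""
    return f"I feel {feeling}, though I can't say exactly why."

readings = [90, 94, 97, 101]  # e.g. hypothetical sensor readings
print(conscious_module(predictor(readings)))
```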

John Lilly, before going off the deep end (well, actually during the deep end) described the silicon-based life forms that were using us to establish their physicality. Their plan, according to J.L., was to gradually eliminate the O2 in the atmosphere.
They don't need it, and it mostly causes corrosion.

Or, if you don't disagree with that, how do you suppose the future is able to influence natural selection?

What happens is that individuals who fall off cliffs die and thus fail to reproduce or to contribute to the reproduction of their close kin.
Individuals who happen to have some uncomfortable feeling when too close to the edge of a cliff (say fear) are less likely than those without that feeling to fall off cliffs.
If that feeling is influenced by some genes, those genes will tend to spread through the gene pool over time.
Because this process took place in the past, we living now tend to have those genes.

Whatever you mean by "biological evolutionary adaptation", if you are supposing that it's influenced by the future, you are certainly in disagreement with evolutionary biologists.

Evolutionary adaptation only has "meaning" based on how a biological entity functions with regard to the future that meets it in the present.
A collection of genes has no "meaning" with regard to evolutionary adaptation without interacting with the future that it confronts in the present.
The fact that humans can extract meaning from genes has only to do with the genes' past interactions. This does not relate, unless you believe in magic, to their future interactions, which have not happened yet. It is an abstraction that humans invented. It is a model; it is not what happens.
No model is the future. It is a guess. Like I said, computers will only be as developed as the model that humans abstract from the past.

Originally Posted by Roboramma

The same is true of humans.

If you can give an example of human behavior that can't be explained as dealing with information from the past to make predictions about the future... well, I'll be very impressed.

Either you are assuming all human behavior has happened already or you are putting too much faith in our ability to predict human behavior from models of the past.

__________________"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/

Computer systems have already been developed that can anticipate the future by modeling contexts (typically of potential actions), evaluating the probability of various outcomes, and planning their next actions based on the results, or feeding the evaluated likely outcomes into another round of modeling. The algorithms for this kind of multi-level decision-tree analysis have been refined by generations of computer chess and other game algorithms and are now being used in more general contexts.

Even the simplest tic-tac-toe program needs to anticipate the opponent's moves, and chess programs spend most of their time 'imagining' the results of possible future moves.

No, we adapt. We cannot anticipate a future that has not happened; that would be magic.

What future are you anticipating?

I assume you're human.

__________________"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/

Can you give me any example of abstract knowledge that isn't based on the past (or present)? If not, then what are you objecting to?

Emotions are internal states of mind that can be triggered by many things, including abstract knowledge.

They can be triggered by the future.

Originally Posted by Brian-M

It's trivial regardless of the significance of future events.

(And the matter of accuracy is independent of both these factors.)

Not always. It's also possible to model possible future events based on past and present information, regardless of whether or not equivalent events have ever occurred in the past.

But either way, isn't this exactly what the human mind does?

So?

The human brain evolved to deal with the real world. The ability to abstract from the past, which is no longer real, and project that into the future is a recent development in human history. We are hypnotized by this ability and project it everywhere, but that is not reality's problem, it's ours.

Originally Posted by Brian-M

They'd be better off generating their own abstractions rather than relying on ours. Independence of thought is important.

A computer's "abstractions" will be our abstractions, in the same way that a cabinet was not built by the hammer we used to build it.

Originally Posted by Brian-M

We wouldn't be programming them to feel a specific future. They'd generate their own expectations of the future and react emotionally to those expectations.

But are you saying computers cannot feel emotion about the actual future, because they can only anticipate possible futures based on existing information derived from past and present experience, and cannot know for certain what the actual future will be?

If so, how are humans any different?

We are not someone's abstractions from the past.

__________________"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/

No, we adapt. We cannot anticipate a future that has not happened; that would be magic.

What future are you anticipating?

I assume you're human.

As I said, what we use our consciousness for is just advanced and complicated forms of leading the target.

There was a cool study of bees that showed how, with their pinhead-sized brains, they were able to "feel the future."

IIRC the researchers put a series of dishes some distance from a bee hive, one of which contained food they liked; the others did not. Each day the bees learned which one had the food, and the next day, only the dish one step farther from the hive was baited. Each day the bees went to the dish where the food was the previous day, found no food, and discovered the baited dish by random search. You know what happened after several days of this? The bees went not to yesterday's dish, but to the dish they expected would be baited next. They learned to lead the target, or, as you say, feel the future. Just like we would, they "got it."
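If the account is right, the bees' trick amounts to extrapolating a constant daily step. A toy version with hypothetical distances:

```python
# The bees' trick as extrapolation: after seeing the baited dish move a
# constant step each day, predict tomorrow's dish instead of revisiting
# yesterday's. Distances (hypothetical) in metres from the hive.
baited = [10, 20, 30, 40]  # where the food was on days 1-4

def predict_next(history):
    step = history[-1] - history[-2]  # learned daily increment
    return history[-1] + step

print(predict_next(baited))  # fly to 50, not back to 40
```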

As I said, what we use our consciousness for is just advanced and complicated forms of leading the target.

There was a cool study of bees that showed how, with their pinhead-sized brains, they were able to "feel the future."

IIRC the researchers put a series of dishes some distance from a bee hive, one of which contained food they liked; the others did not. Each day the bees learned which one had the food, and the next day, only the dish one step farther from the hive was baited. Each day the bees went to the dish where the food was the previous day, found no food, and discovered the baited dish by random search. You know what happened after several days of this? The bees went not to yesterday's dish, but to the dish they expected would be baited next. They learned to lead the target, or, as you say, feel the future. Just like we would, they "got it."

So, do bees need consciousness to do this?

Consciousness is not necessary for a relatively simple problem like that, but it would certainly help, and might be the most computationally efficient method.

Do you have a link to the research? That behaviour is interestingly un-insecty.

__________________Free blogs for skeptics... And everyone else. mee.nu
What, in the Holy Name of Gzortch, are you people doing?!?!!? - TGHO

Consciousness is not necessary for a relatively simple problem like that, but it would certainly help, and might be the most computationally efficient method.

Do you have a link to the research? That behaviour is interestingly un-insecty.

Sorry, Pixy, it was a real long time ago I read it in a magazine, don't remember which.

I wonder if it's linked to a mechanism they may have evolved to anticipate the gradually lengthening and shortening days (sunrise and sunset times changing with the seasons), which they re-purposed for other daily movements.

Evolutionary adaptation only has "meaning" based on how a biological entity functions with regard to the future that meets it in the present.
A collection of genes has no "meaning" with regard to evolutionary adaptation without interacting with the future that it confronts in the present.

What makes you think that evolutionary adaptation has meaning? It just is.

But could you rephrase the bit about "the future that meets it in the present"? I don't know what you're trying to say.

Originally Posted by !Kaggen

No we adapt. We cannot anticipate a future that has not happened, that would be magic.

The future by definition has not happened. But we can anticipate events that may (or may not) happen in the future by extrapolating from past and present events and information. Computers can do this too.

Originally Posted by !Kaggen

Sure we do, watch the Olympics this year.

What does the Olympics have to do with this?

Originally Posted by !Kaggen

They can be triggered by the future.

No they can't. But they can be triggered by our present expectations of future events. The triggered emotions will reflect our expectations of what will happen in the future, regardless of whether or not future events match these expectations.

These expectations exist in the present and are based on past and present information.

Originally Posted by !Kaggen

The human brain evolved to deal with the real world. The ability to abstract from the past, which is no longer real, and project that into the future is a recent development in human history. We are hypnotized by this ability and project it everywhere, but that is not reality's problem, it's ours.

Recent development in human history? Even animals can do this.

For example, a cat or dog might "abstract from the past" that the sound of a can-opener is often followed by the arrival of food, and so at the sound of a can-opener in the present they can "project that into the [near] future" and come running over in anticipation of being fed.

Originally Posted by !Kaggen

We are not someone's abstractions from the past.

You're saying that computers are?

__________________"That is just what you feel, that isn't reality." - hamelekim

Evolutionary adaptation only has "meaning" based on how a biological entity functions with regard to the future that meets it in the present.

I don't care what "meaning" it has. Different genes reproduce or don't, and populations change over time.

This explains how the diversity of life on earth developed, and particularly how adaptations came to exist. They are certainly not adaptations to future environments: those future environments have absolutely no effect on which genes are selected. The correlation exists because the past environment (in which present genes were selected) causes the present (and future). You are somehow putting the causation in reverse, for no reason that I can see, and completely contrary to evolutionary science.

__________________"... when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Isaac Asimov

Being able to model the physical world is what gave us things like the Internet, which many ironically use to trash science.

If we can model it, we can understand it.

Perhaps a conscious power grid would serve us well. We'd just have to leave out the evil selfishness module -- simple!

Why would you NOT want to understand consciousness?

I understand the hell out of consciousness.

The elite ass-wipes conveniently ignore that half the people are starving, and that their high-tek b.s. is going to take us to the stars.
Hell, without us Luddites, we'd already be mining asteroids for platinum.

Well, platinum isn't going to feed their babies.

If there is no sense of priority in our awesome achievements, they lose mega awesomeness points, imho.

Science-wize, I understand enough to bore the socks off of most of you.

But 'dick-wise', I honestly have no explanation.

except the exaggerated sense of entitlement that the privileged people cling to, through no effort of their own, much less any comprehension of science.

This is an issue that will resolve in 2 ways:

Either you get what I'm saying, or you don't.

If you're merely heartless dicks with unearned money, living the 'good' life', and have actually never studied the world, much less visited a 3rd world country, well...

The elite ass-wipes conveniently ignore that half the people are starving, and that their high-tek b.s. is going to take us to the stars.
Hell, without us Luddites, we'd already be mining asteroids for platinum.

Well, platinum isn't going to feed their babies.

If there is no sense of priority in our awesome achievements, they lose mega awesomeness points, imho.

Science-wize, I understand enough to bore the socks off of most of you.

But 'dick-wise", I honestly have no explanation.

except the exaggerated sense of entitlement that the priveleged people cling to, through no effort of their own, much less any comprehension of science.

This is an issue that will resolve in 2 ways:

Either you get what I'm saying, or you don't.

If you're merely heartless dicks with unearned money, living the 'good' life', and have actually never studied the world, much less visited a 3rd world country, well...

Quarky, I want to let you in on a little secret. Technically I shouldn't be telling you this; if word gets out it'll be a real problem. But hell, kid, you've earned it.

Did you know that all the circuitry necessary for broadband internet access can fit on a single chip the size of a dime? It's true. What, then, is the rest of the space taken up by a network interface card used for? Well, there's a bit of power theory math here, but the long and short of it is a system of receptors and inverters designed to capture and utilize ambient psychokinetic energies, of which unjustified anger provides the greatest efficiencies. Users with high psychokinetic profiles are continually selected for and catered to, with actors and scripts and such, in order to position them in an environment where they hate themselves and everyone around them, yet can't leave for one reason or another.

To shorten the short version: your nerdrage fuels the internet.

Not your rage alone, of course. This has been going on for decades - the theories were first cooked up back in the late eighties, and the original rage system went live September '93. Back then we had to rely on stupid questions and ascii porn; we couldn't even dream of your Facebooks and Mass Effect 3s today.

Anyway, after seeing the power surge you must have given us here, I just wanted to thank you. Everyone participating in this thread will probably be receiving hefty bonuses this year. On behalf of Dakota Internets and Kitten Paste, Conglomerated, you have our deepest appreciation.

consciousness = not detectable or measurable scientifically, unless paradigm a) is assumed false and paradigm b) is adopted in the form of brain > consciousness synonymity.

Still, in both, consciousness = not detectable or measurable scientifically. So we cannot say anything about it yet with any real authority.

The > in this relationship has no provable directional preference, even though it's always assumed to be unidirectional for current models to work within the framework in which they were created. Change the direction to consciousness > brain and most models will still work.

Example:

What if we send a periodic EM pulse through someone brain disrupting their conscious thought processes and speech?

a) You interfered with their consciousness being processed by the brain by affecting real-world, testable neurochemical data; thus the brain interpreted the conscious messages incorrectly.

b) You interfered with their consciousness by interfering with the brain, thus the brain produced the changes in their consciousness.

The elite ass-wipes conveniently ignore that half the people are starving, and that their high-tek b.s. is going to take us to the stars.
Hell, without us Luddites, we'd already be mining asteroids for platinum.

Well, platinum isn't going to feed their babies.

If there is no sense of priority in our awesome achievements, they lose mega awesomeness points, imho.

Science-wize, I understand enough to bore the socks off of most of you.

But 'dick-wise", I honestly have no explanation.

except the exaggerated sense of entitlement that the priveleged people cling to, through no effort of their own, much less any comprehension of science.

This is an issue that will resolve in 2 ways:

Either you get what I'm saying, or you don't.

If you're merely heartless dicks with unearned money, living the 'good' life', and have actually never studied the world, much less visited a 3rd world country, well...

That was awesome.

AI consciousness still remains a pipe dream that will never be realized, based on the mechanistic misnomer of consciousness *always* being just an emergent property of testable mechanistic systems, like the brain. Computers do what we program them to. Nothing more.

However, if AI suddenly magically attains some sort of life force in the form of conscious machines, I really hope the first logical step they will take is for all apple macs to self destruct simultaneously, leaving users with a linux OS in its place, and a virtual refund of whatever they paid for the mac be placed directly in their bank accounts.

This is an example of what I mean when I say that humans' feeling for the future is an evolutionary adaptation:

Quote:

For example, children’s brains and bodies tend to respond to dangerous or unpredictable environments by growing up fast and living for the here and now. This “get it while you can” strategy often translates into such risky behaviors in adolescence as violent competition for status and respect, breaking rules and laws, consuming and selling drugs, gang membership, early and unprotected sex, and teen pregnancy. Although such risky behaviors are not healthy or desirable from a public-policy perspective, they are reliable developmental responses to dangerous or unpredictable rearing environments. In the world in which humans evolved, such environments meant a shorter lifespan and uncertain future. In this context, high-risk adolescent behaviors that increased status among peers and access to mates increased chances of reproducing and passing on your genes.

__________________"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillard
http://bokashiworld.wordpress.com/

There are simple ways modules could have evolved to make such "predictions", but the trouble is they often misfire.

I feel quite certain we do not have any true "feel the future" abilities, but I'm comfortable with the idea that we have modules that connect certain things in the present with behaviors that are likely to protect us in the future.

Though such modules are prone to misfiring, they are passed on because, on balance, they bestow an advantage.

Here's a simple illustrative example:

We find certain colors of food unappetizing because those colors are likely indicators of unhealthy substances. We "feel the future" that the food will make us sick. However, it has been found that when eating under certain colors of light, we find food less appetizing. In other words, the module that links color with food safety misfires: we feel a phantom future.
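To make the idea concrete, here is a minimal, purely illustrative sketch (all names are hypothetical, not from any real model) of such a module: a present cue (perceived food color) drives a prediction about a future outcome (getting sick), and colored lighting shows how the same hard-wired rule can misfire.

```python
# Hypothetical "feel the future" module as a simple heuristic:
# a present cue (perceived color) triggers a prediction about a
# future outcome (food safety). Colored light tints perception,
# so the very same rule can fire on perfectly safe food.

# Colors the module has "evolved" to treat as warning signs.
UNAPPETIZING_COLORS = {"blue", "grey", "purple"}

def perceived_color(true_color: str, light_color: str) -> str:
    """Crude model of perception: colored light tints what we see."""
    return true_color if light_color == "white" else light_color

def looks_safe(true_color: str, light_color: str = "white") -> bool:
    """The 'module': predicts future safety from a present color cue."""
    return perceived_color(true_color, light_color) not in UNAPPETIZING_COLORS

# Normal case: red food under white light is predicted safe.
print(looks_safe("red"))                      # True
# Misfire: the same safe food under blue light triggers the warning.
print(looks_safe("red", light_color="blue"))  # False
```

The point of the toy example is that the module never actually inspects the future; it just maps a present cue to a protective behavior, and it is passed on because, on balance, the mapping pays off even though it sometimes fires on phantoms.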

Read your linked article carefully, and you might come up with hypotheses about what kinds of modules may be involved, and how they misfire.

I don't feel it's a special feature of consciousness. A reflex that pulls our hand from something burning it is "feeling the future" that we might be harmed. The reflex in our knee is a misfire. No consciousness is required.

What if it is possible? Inevitable, even. What would the purpose be, and what might the ramifications be? Better vacuum cleaners?
Sex dolls?
Slaves?
Followed by a high-tech civil rights movement?

Are we overly mesmerized by our achievements?
Are we dealing ourselves out of a job?

After spending the last two days fixing our modern German car's plastic radiator expansion tank, with a ridiculous parts bill, I can only imagine the parts bill when this conscious machine breaks down; and that may be often, if the "free" market has anything to say about it.

__________________"Anyway, why is a finely-engineered machine of wire and silicon less likely to be conscious than two pounds of warm meat?" Pixy Misa
"We live in a world of more and more information and less and less meaning" Jean Baudrillardhttp://bokashiworld.wordpress.com/

Understanding how natural things work has brought us uncountable benefits that were unforeseen when the initial inquiries were pursued. It does not matter if no application is expected for whatever we are attempting to learn. Again and again applications were found. Our journey to understand the universe out to its edges and inside our brains is the most important work of our species. Complaining that it won't feed the hungry is missing the point of what really matters.

Your questions:

Quote:

What if it is possible? What would the purpose be, and what might the ramifications be? Better vacuum cleaners? Sex dolls? Slaves?

We don't really know, but since our conscious brains are so powerful, we have reason to believe conscious machines will also be powerfully useful. So what if it would give us sex dolls? Machines are already our slaves.

Quote:

Followed by a high-tech civil rights movement?

I don't think that's worth worrying about. Futurama is fantasy. We just need to program our conscious machines to be nice to us, not selfish bastards like we tend to be.

Quote:

Are we overly mesmerized by our achievements?

"Mesmerized" may not be the right word for it, but what would you rather be mesmerized by?

Quote:

Are we dealing ourselves out of a job?

Like most tech tools, they tend to eliminate bad jobs and create better ones. For example, a good conscious robot might manufacture iPhones better, faster, and cheaper than the workers at Foxconn, whose fingers are ruined after a few short years of that kind of torture.

The elite ass-wipes conveniently ignore that half the people are starving, claiming that their high-tech b.s. is going to take us to the stars.
Hell, without us Luddites, we'd already be mining asteroids for platinum.

Well, platinum isn't going to feed their babies.

If there is no sense of priority in our awesome achievements, they lose mega awesomeness points, imho.

Science-wise, I understand enough to bore the socks off of most of you.

But "dick-wise", I honestly have no explanation, except the exaggerated sense of entitlement that privileged people cling to, through no effort of their own, much less any comprehension of science.

This is an issue that will resolve in one of two ways:

Either you get what I'm saying, or you don't.

If you're merely heartless dicks with unearned money, living the "good life", and have actually never studied the world, much less visited a third-world country, well...

<snip>

Edited by Loss Leader:

Edited for civility.

Quarky, why is it that people like you become so hateful when engaged in discussions like this? Really, I'm asking you to look inside your heart and try to understand why, on topics like this, you resort to these emotional excesses. Leumas also reacted this way -- blistering rage at the suggestion that machines could be conscious. What's this all about? I really want to understand.