Topic: Stolen cobalt-60 found abandoned (Read 730 times)

On average, a half dozen thefts of radioactive materials are reported in Mexico each year, and none have proven to be aimed at the cargo itself, Eibenschutz said. He said that in all the cases the thieves were after the shipping containers or the vehicles.

Unintentional thefts of radioactive materials are not uncommon, said an official familiar with cases reported by International Atomic Energy Agency member states, who was not authorized to comment on the case. In some cases, radioactive sources have ended up being sold as scrap, causing serious harm to people who unknowingly come into contact with them.

In a Mexican case in the 1970s, one thief died and the other was injured when they opened a container holding radioactive material, he said.

The container was junked and sold to a foundry, where it contaminated some steel reinforcement bars made there. Eibenschutz said all foundries in Mexico now have equipment to detect radioactive material.

The carjackers who set off international alarm bells by absconding with a truckload of highly radioactive material most likely had no idea what they were stealing and will probably die soon from exposure, Mexican authorities said at the end of a brief national scare. ... "The people who handled it will have severe problems with radiation," he said. "They will, without a doubt, die."

...we are on the brink of creating machines that will be as intelligent as humans. Specific timelines vary, but the broad-brush estimates place the emergence of human-level AI at between 2020 and 2050. This human-level AI (referred to as "artificial general intelligence" or AGI) is worrisome enough, seeing the damage human intelligence often produces, but it's what happens next that really concerns Barrat. That is, once we have achieved AGI, the AGI will go on to achieve something called artificial superintelligence (ASI) -- that is, an intelligence that exceeds -- vastly exceeds -- human-level intelligence....

To Barrat, and other concerned researchers quoted in the book, this is a lethal predicament. At first, the relation between a human intellect and that of an ASI may be like that of an ape's to a human, but as ASI continues its process of perpetual self-improvement, the gulf widens. At some point, the relation between ASI and human intelligence mirrors that of a human to an ant.

Needless to say, that's not a good place for humanity to be.

And here's the kicker. Barrat argues that the time it will take for ASI to surpass human-level intelligence, rendering us ant-like in comparison, could be a matter of days, if not mere hours, after it is created. Worse (it keeps getting worse), human researchers may not even know they have created this potent ASI until it is too late to attempt to contain it. An ASI birthed in a supercomputer may choose, Barrat writes, to hide itself and its capabilities lest the human masters it knows so much about attempt to shut it down. Then, it would silently replicate itself and spread. With no need to eat or sleep, and with an intelligence that is constantly improving and war-gaming survival strategies, ASI could hide, wait, and grow its capabilities while humanity plods along, blissfully unaware.

Is it wrong that my first thought was my PlayStation would start nagging me about playing the same game over and over? It would sit there and make fun of me when I couldn't get past levels until I smashed the controller in a desperate attempt to inflict some form of physical pain upon it... without smashing my TV, who is probably an asshole too. It would probably not DVR my shows to get me back for not cleaning it.

What a nightmare. I like stupid machines that I can speak down to in a condescending manner. It makes me feel like a big man.