Comments for log.examachine.net
https://examachine.net/blog
Turning science fiction into science fact.
Mon, 04 Dec 2017 13:55:41 +0000

Comment on Why the Simulation Argument is Invalid by Silvia
http://examachine.net/blog/why-the-simulation-argument-is-invalid/#comment-472
Mon, 04 Dec 2017 13:55:41 +0000

Bostrom is just one among many (some of them much more convincing) who have written about the simulation argument: you should refute them all.
In any case, I find it childish to think that a very advanced civilization would use a digital method to simulate universes. Humans are nearly at the point of creating new particles, new biological life, and so on; in a thousand years, will someone still be making things with a computer? Nah. Someone is playing too many video games.
Comment on Simulation Argument and Existential AI Risk by ProgrammingGodJordan
http://examachine.net/blog/simulation-argument-and-existential-ai-risk/#comment-470
Sun, 26 Nov 2017 14:18:59 +0000

Any thoughts on Dr. James Gates's adinkra formulation?
Topic: “Computer Code Discovered In Superstring Equations”

[youtube=http://www.youtube.com/watch?v=bp4NkItgf0E&w=740&h=447]

Comment on Slaughterbots - AI scare PR gone wrong by examachine
http://examachine.net/blog/slaughterbots-ai-scare-pr-gone-wrong/#comment-469
Fri, 24 Nov 2017 19:24:10 +0000

I don't think it does. It tries to show smart munitions as a tool terrorists can easily obtain and use. Such weapons would be illegal, and if any drones are to be used by the CIA, etc., that's just political tomfoolery and has nothing to do with actual uses of smart munitions. In other words, it portrays military applications of AI as if they would cause mass proliferation of very dangerous weapons to be used by every terrorist and shadowy government agency. That's just foolish speculation, nothing more, and it doesn't help raise any awareness; at best it confuses people. If advanced weapons are so easy to deploy, why don't we see them everywhere already?

Saying smart munitions will be used for ethnic cleansing? That's just more foolishness: what would they use them for, to kill more black people in the USA? These charlatans would be more concerned with cops killing black people if they really meant it, but they don't. Is it a weapon of mass destruction? No, it is not.

The film is just someone's foolish imagination, someone who doesn't understand military strategy and current weapons, terrorism, covert operations, weapons sales and use, etc. Maybe those clueless idiots at FHI, like the clown called Anders Sandberg. You don't need face recognition to control a drone to kill a single person if you really want to, and you certainly don't need any AI to control drones with massive killing capability. But is there any opposition to drones here? No. Any real opposition to smart weapons like guided missiles, drones, smart bombs? No.

So this is, by and large, just a foolish and illogical attempt at creating emotional outrage, trying to inflate any dangers from AI, which is their usual schtick. Everything the film shows consists of highly illegal actions that also violate international treaties, and they would remain so with any technology. In other words, this is just a red herring. What's the relevance? None.
Comment on Slaughterbots - AI scare PR gone wrong by Toby Kelsey
http://examachine.net/blog/slaughterbots-ai-scare-pr-gone-wrong/#comment-468
Sat, 18 Nov 2017 22:25:33 +0000

Agreed, trying to ban the technology will not prevent non-state and clandestine use. The US army can kill an equal number of people with indiscriminate drones and missiles, but mass killings cause political blowback. Small UAVs with shaped charges and face recognition are easier to transport and hide than missiles, can be targeted and designed by small groups, and are also suitable for false-flag ops. The film performs a useful service in raising awareness of these technologies among the wider public.
Comment on Slaughterbots - AI scare PR gone wrong by examachine
http://examachine.net/blog/slaughterbots-ai-scare-pr-gone-wrong/#comment-467
Thu, 16 Nov 2017 15:24:45 +0000

Well, given that our societies aren't necessarily getting smarter, that might unfortunately take a long time. My point is that most killing is already automated; there is already a great deal of distance between sophisticated aerial weapons and their operators. I will update the essay later with some interesting information that colleagues pointed out.
Comment on Slaughterbots - AI scare PR gone wrong by Ben3141592653589
http://examachine.net/blog/slaughterbots-ai-scare-pr-gone-wrong/#comment-466
Thu, 16 Nov 2017 00:50:46 +0000

Isn't the obvious evolution of the military towards non-lethal weapons? Drexler envisions a future without wars, since any war would be over within a day with zero deaths.
Comment on Why the Simulation Argument is Invalid by examachine
http://examachine.net/blog/why-the-simulation-argument-is-invalid/#comment-465
Mon, 06 Nov 2017 13:15:46 +0000

Dear Joscha,

For what it's worth, I am indeed analyzing the probabilistic "model". It is no model, of course; there isn't even a distribution. However, the algorithmic-complexity objection is the final argument against all variations of intelligent design, including this one. That is the main argument, and it's quite well explained in the present draft; I suggest you take another, detailed look. There are several arguments in the paper, and that one is the most devastating (it's the most detailed one, anyhow). I glossed over the other objections, since the paper would otherwise be too long.

Comment on Ultimate Intelligence Part III: Measures of Intelligence, Perception and Intelligent Agents by examachine
http://examachine.net/blog/ultimate-intelligence-part-iii-measures-of-intelligence-perception-and-intelligent-agents/#comment-464
Mon, 11 Sep 2017 22:37:29 +0000

Thank you, Michael. I am trying to turn it into a theoretical book on AGI; many other chapters are planned. I hope machine learning researchers and students will enjoy it, and I'm going to put a lot of effort into making it a bit more ambitious than usual.
Comment on Ultimate Intelligence Part III: Measures of Intelligence, Perception and Intelligent Agents by Mitchell Porter
http://examachine.net/blog/ultimate-intelligence-part-iii-measures-of-intelligence-perception-and-intelligent-agents/#comment-463
Mon, 11 Sep 2017 21:41:32 +0000

My opinion is that it's exciting to see another item in this series by you. 🙂
Comment on Simulation Argument and Existential AI Risk by examachine
http://examachine.net/blog/simulation-argument-and-existential-ai-risk/#comment-462
Wed, 26 Jul 2017 13:40:46 +0000

Hi Jason,

I think your concerns are legitimate, but the analogy between nuclear weapons and AI is wrong. AI isn't a destructive thing; it is just cognition, thinking. We seem to want to anthropomorphize it like everything else, but these programs are not humans or animals. Sure, we could try to build a free artificial person; I have a few blog entries about that sort of experiment (take a look at the earlier entries, there are a lot of interesting posts about AI). But then again, would it have to be like a human in any way? And do we really need artificial persons? In all likelihood, AI tech will not be artificial persons, but more like smart tools that get stuff done in the real world, like driving a car or flipping burgers.

Now, let us consider the other possibility. Is AI then like nuclear power: could it turn into fearsome weapons in the wrong hands? Not by its inherent qualities. You can use AI to control a weapon, but you'd need to build such a robot first. Militaries are already building robot armies, which hardly need AI to control; you can operate them remotely. No autonomous decision making is necessary even if AI is used, just point and shoot: if a jarhead can do it, so can a machine. I suggest we worry about militaries, wars, politicians, bankers, etc., more than we worry about AI tech.