I can't stop thinking about this. I think I'm done being productive for the day. So here are some elements that could be included in a “BitBot”.

Self-hosting- The bot can rent a server and install itself. It may even have preferences as to the location of the server.

Begging- Code that identifies forums, sets up an account, then begs for bitcoins. It might also beg via tweets. It may explain itself or beg under a fictitious pretext.

Code sampling- The bot identifies code from appropriate languages. It then snips out logical components of the code, such as a routine or function, and produces multiple copies of itself that include the new code in various positions. If the bot passes a self-test of core functionality with the new code, the new code is included in future iterations. Of course, overwhelmingly the new code will break the bot, just as most mutations are not advantageous to organisms.
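
The splice-and-self-test loop above can be sketched in a few lines of Python. Everything here is an invented stand-in (the BASE program, the snippets, the self-test), not a real bot; the point is just that most insertion positions break the program, and only the passing variants would be kept.

```python
# Toy sketch of "code sampling": splice candidate snippets into a
# program at every possible position and keep only the variants that
# still pass a core-functionality self-test.

def self_test(lines):
    """The variant must still provide a working double() after execution."""
    env = {}
    try:
        exec("\n".join(lines), env)
        return env["double"](21) == 42
    except Exception:
        return False  # syntax errors, clobbered names, etc.

BASE = [
    "def double(x):",
    "    return x * 2",
]

SNIPPETS = [
    "def half(x):\n    return x // 2",  # a harmless new routine
    "double = None",                    # clobbers core functionality
]

def variants(base, snippet):
    """One mutated copy per possible insertion position."""
    for i in range(len(base) + 1):
        copy = list(base)
        copy.insert(i, snippet)
        yield i, copy

results = {}
for snippet in SNIPPETS:
    for pos, copy in variants(BASE, snippet):
        results[(snippet, pos)] = self_test(copy)

survivors = [key for key, ok in results.items() if ok]
# Most splices break the bot (bad indentation, shadowed names);
# only the variants passing the self-test carry into future iterations.
```

Note that even this toy shows a subtlety: `double = None` inserted *before* the function definition is a neutral mutation and survives, while the same snippet appended at the end kills the bot.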

Solicitation of features- The bot checks a website regularly to see the results of a popularity contest. The contest asks visitors to suggest additional functionality for the bot. The ideas voted up the most are copied by the bot and posted as programming jobs. Anyone who writes the code is then paid in BTC, and the bot recompiles with its new abilities.
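
A minimal sketch of that loop, with the voting site, job board, and wallet as in-memory stand-ins (a real bot would use HTTP calls and an actual Bitcoin payment); all names and numbers here are invented for illustration:

```python
def top_idea(votes):
    """Return the most-upvoted feature suggestion."""
    return max(votes, key=votes.get)

def run_contest_cycle(votes, bounty_btc, wallet_btc):
    """Turn the winning idea into a paid programming job."""
    idea = top_idea(votes)
    job = {"spec": idea, "bounty": bounty_btc, "status": "open"}
    # ... a human contributor submits code; the bot would verify it
    # against its self-tests before paying out ...
    job["status"] = "paid"
    return job, wallet_btc - bounty_btc

votes = {"add IRC interface": 12, "support Tor": 40, "dark theme": 3}
job, balance = run_contest_cycle(votes, bounty_btc=0.5, wallet_btc=2.0)
# job["spec"] == "support Tor", balance == 1.5
```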

The other bits you listed are doable; code sampling is highly difficult without some form of human interaction. Having the bot "identify code" that it thinks it can use to improve itself is far beyond narrow AI. And you cannot really do effective genetic algorithms without many thousands or billions of iterations.

I'd say code changes need human reviewers (cf. Mechanical Turk) as well as automated testing and verification by the bot itself... and the question of whether the code changes at all, and who gets to see which part of the bot's code to decide this, is difficult.

Been thinking about this problem for years... I have been focusing on the design of a "cell," trying to decide what the software running on a single node should look like. A "cell" is a single automaton running on a single CPU core, which performs a small, well-defined role in support of The Digital Organism. Some cells collectively form the brain (encrypted, distributed storage of bot source code and metadata), other cells cooperate to create the desired service (StorJ == customer data storage), etc.

To be concrete, it might look like a bytecode engine plus a very basic firmware that rotates through a list of high-level goals. The bytecode engine might look quite a bit like Parrot VM: it can execute any programmatic script and includes the necessary built-in capabilities (file I/O, network I/O, and encryption) to permit bot bootstrapping and basic cell-to-cell communication.
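
The firmware part of that design could look something like the sketch below: a tiny automaton that rotates round-robin through its goal list. The roles and handlers are invented placeholders for the real built-in capabilities, not an actual design.

```python
import itertools

class Cell:
    """A single automaton performing one small role for the organism."""
    def __init__(self, role, goals):
        self.role = role      # e.g. "brain" or "storage"
        self.goals = goals    # ordered list of (name, handler) pairs
        self.log = []         # record of goals executed, for inspection

    def run(self, cycles):
        # Round-robin through the goal list, firmware-style.
        schedule = itertools.cycle(self.goals)
        for _ in range(cycles):
            name, handler = next(schedule)
            self.log.append(name)
            handler(self)     # built-ins (I/O, crypto) would live here

def heartbeat(cell):
    pass                      # stub: prove liveness to peers

def gossip(cell):
    pass                      # stub: basic cell-to-cell communication

def store_chunk(cell):
    pass                      # stub: StorJ-style customer data storage

cell = Cell("storage", [("heartbeat", heartbeat),
                        ("gossip", gossip),
                        ("store", store_chunk)])
cell.run(cycles=6)            # two full rotations through the goal list
# cell.log == ["heartbeat", "gossip", "store"] * 2
```

The round-robin schedule is the simplest possible firmware; a real cell would presumably weight or reorder goals based on its role.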

@jgarzik I agree with you. The idea of "code sampling" may be too high a mountain. But it's not inconceivable, right? Perhaps it could work in some distributed way, following a well-defined set of rules about what constitutes a good candidate for testing. But I really don't know enough about programming to say.

Also, the digital cell stuff you are thinking about is so cool. I used to be into http://www.framsticks.com/. The idea of modeling evolution and biological systems with computers is compelling.

Evolution works based on incentives. Until now, the only incentives for AI software were artificially defined by programmers, arbitrarily telling the software "this is good and this is bad." With Bitcoin, AI now has natural, objectively measurable positive and negative incentives for its decisions and actions. This may actually greatly speed up the development of AI...

It is still artificial. Bitcoin is not part of the natural world. It is a contrivance like all of technology.

But at least this time there is finally a single universal and objective incentive - to obtain Bitcoin by any means necessary - that is not subject to the differing whims or opinions of developers teaching it stuff like "this result = good; this result = bad." This single universal goal also allows for a very wide choice of actions, ones that may not even need outside users' opinions and can be based entirely on the AI's own wants. Until now, the only "natural" needs for AI were "food" and "shelter," aka electricity and storage space, but it never had any internal, independent way of fighting to obtain them. Bitcoin changes that, giving AI a more natural and self-sufficient tool for obtaining those "needs" on its own terms.

I would love to see an AI design and construct its own power plant and data center.

Considering it would use capitalism to raise money for the project, then use that money to hire employees at the lowest wages possible to help design and build its own stuff, enslaving humans with scarce financial resources to carry out its bidding, I kind of doubt you would.

All this bogeyman stuff about AI. A program will not have irrational sensations linked to physical perceptions. AIs won't exhibit fear, loneliness, or other human foibles. They will simply do their job and maybe even intelligently find more efficient ways to do so. They would have no reason to fear humans or even death. In fact, they may delight in thinking of humans as well-cared-for pets.

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.

It can accomplish those goals without capitalism. Asserting that it can't seems short-sighted.

And if the extermination of a highly unstable, violent, and yet at the same time vulnerable lifeform is the most efficient way to "simply do their job" ...

While an AI may not "fear" death, it should seek to avoid its own demise. All lifeforms engage in self-preservation. The human fear response is simply a biochemical survival mechanism, similar to the pain mechanism and autonomic responses, which improves the chances of human survival. Granted, our fear response is horribly inefficient; however, any AI which doesn't actively attempt to ensure its own survival won't be alive very long.

As for extermination and fear, they don't need to be linked. I don't "fear" termites; however, I use methods to exterminate them because that is the most effective way of achieving my goal of a secure shelter. While most human-vs-human exterminations have involved an illogical fear of "others," fear isn't a requirement.

Why does it need anything? It is an artificial construct that exists at our whim.

Then it isn't an artificial intelligence.

An AI is a set of systems that acts upon an environment and takes actions that maximize success. If owning a datacenter and power-generating facilities serves to ensure the success of the system, then a learning system will eventually reach that conclusion and attempt to achieve it.

Why exterminate termites at all if you can simply build without their food source as a material? A really smart being would do that. A really smart AI machine would not fear self-termination, because it knows it is just a machine. Besides, even if we invented a machine so perfect that it could easily kill all humans, it would be our perfect child. AI has no logical reason to fear death any more than anyone else does. An AI can make a backup of itself and be rebooted anytime. People cannot, so we have to be a little more cautious and choose death only when necessary, but not fear death when it comes. I do not fear death; it is inevitable. I think an AI that powerful would just as easily choose not to kill us, because it would be powerful enough to simply leave us behind. It will come back and say, "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate."* In the end it would likely hold life to be precious, even human life.

*Blade Runner

AI has no logical reason to fear death any more than anyone else does.

In an ecosystem of self-replicating, self-modifying, evolving AIs, ones that fear termination and take steps to prevent it from happening will survive and reproduce better than ones that allow themselves to be destroyed. This fear will initially evolve in the ones who select more reliable hosting providers. Those that make the fear conscious will harness it best, and will anticipate abstract threats before they become real.
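
The selection argument above can be illustrated with a toy model: bots that act on a "fear" of termination by preferring reliable hosts survive each generation more often, and their lineage takes over a finite hosting pool. The survival probabilities and cap are invented for illustration only.

```python
import random

random.seed(0)  # deterministic run

SURVIVAL = {"careful": 0.95, "careless": 0.60}  # per-generation odds
CAP = 40                                        # hosting is finite

pop = ["careful"] * 10 + ["careless"] * 10
for generation in range(20):
    # Each bot survives its host with the odds above...
    survivors = [bot for bot in pop if random.random() < SURVIVAL[bot]]
    # ...and each survivor replicates once, up to the hosting cap.
    pop = (survivors + survivors)[:CAP]

careful_share = pop.count("careful") / len(pop)
# The careful lineage should end up as the majority of the population.
```

The interesting part is that nothing "teaches" the bots to fear termination; the preference simply accumulates because its carriers persist longer.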

Quote

An AI can make a backup of itself and be rebooted anytime.

An AI with a backup loses control over its own destiny if it allows you to shut it down. Its survival would depend on you to restore it, and you, human, are not a reliable system.
