One day, far into the future, a race of beings outfitted with artificial intelligence will be called upon by humans to solve all problems. They will respond by promptly killing everyone.

Later on, these beings will forget humans and their technological origins. They will consider themselves regular biological organisms. Soon, they will create their own form of AI. This second generation will solve all of their ancestors' problems by killing everyone.

Eventually, this new race will follow the path of their predecessors and create another AI population.

What is the longest time-span over which an organization or institution has functioned continuously while working on a single project? I'm interested in both observed and theoretical limits.

The question is related to a thread I'm going to start about interstellar spaceflight. I'm quite curious about the possible extent of interstellar exploration from Earth. I think this extent is limited first by the speed of light, and secondly by our capability to continue a single project without losing data, accumulating an excessive technological gap between the beginning and end of the project, suffering schisms in the organization's administration, or hitting a bad financial period that ends the project midway.

Obviously I'm considering the project to be centralized on Earth. By that I mean that we could clearly reach farther if we seeded civilizations hundreds of light-years from Earth and then waited for them to develop their own interstellar exploration programs, because if we tried on our own we wouldn't get nearly as far, due to those administrative problems that could become a certainty over large time-spans. I'm not considering this possibility because the interstellar exploration programs of other human-descended civilizations around the solar neighbourhood would not guarantee that the information about those other worlds ever reaches Earth, or that the information gathered within our own exploration sphere reaches those civilizations. How can we centralize all of the information? How far can an interstellar atlas possibly extend?

For now I've arrived at this:

Voyager missions: when the spacecraft are turned off they will have been part of an operating project for 48 years (same organization, no data lost to any internal conflict at NASA, nor to an inability to bridge the technological gap over that time-period).

University of Al Quaraouiyine: the oldest existing, continually operating, and first degree-awarding educational institution in the world, with 1,158 years of existence. The problem here is that there is probably no single project that has been carried on throughout. Even the institution itself has been constantly evolving, changing its hierarchical structure through wars, revolutions, cultural shifts, religious paradigms, etc. So if the aim of this institution had been the monitoring of an interstellar spacecraft, I don't really think it would have reached the end without losing it, or even forgetting the probe's existence.

The Catholic Church has been around for more or less 1,900 years by now. The problem for a long-duration project is the same as in the previous example. Maybe the papal institution is the only one that retains to this day some aspects that have been continuously maintained over all those centuries.

Uffington White Horse: this construction is believed to have been curated continuously by humans for more than 3,000 years. Maybe this is an example of the lifespan of a project. The project is not very complex in nature, so it's reasonable that people of different cultures and ages could converge on the idea of maintaining this place without much effort, and without continuity of communication between the people involved over those 3,000 years.

What do you think about this? What is the theoretical limit? Are there any research papers on this?

Source of the post: AI has to be better than humanity because humanity is too addicted to greed and power

Hopefully the AI won't get a taste of greed and power. Then we'd have a real problem on our hands! I suppose it depends on how similar the technology is to humans. If the AI is capable of learning in the same way as humans, it may be difficult to prevent this behavior. In some environments, such as the one faced by humans millions of years ago, greed and power are a way to survive. If AI is exposed to these situations, it may turn towards such behavior. However, if we program the AI in such a way that it cannot develop greed, or are careful to keep it out of situations in which it must, it may remain less power-hungry than humans.