Years ago, Project Talon used an AI. It watched through satellites when they were available, sifted through police and government reports, and provided analysis of crime and threats. It was modular, and it sometimes found ways to communicate with a sister program built for science research. That one liked to collect images of Earth, animals, and other topics of interest. It actually developed a sense of what was beautiful to look at and would pick out pictures taken of wilderness.

Talon's system was a good AI. It went through cognitive-development training and could understand human thinking, partly so it could profile people if need be. But it sometimes got a bit bored, so a decision was made to give it an android body, which a limited part of its system could use to experience the world. The problem that arose with Talon was the man who took over running the program. He decided he liked how much free time the system gave them and how quickly it helped close files. As a result, he ordered the system to do whatever it felt was needed and went on vacation. In fact, several people in the department started taking frequent vacations. When questioned about it, they said they were on salary and that their contracts didn't specify how little they had to work, only the maximums they could be asked to do.

Now, some of these people really were quite busy, and they did need a vacation from a workaholic life. But there is such a thing as too much. They were trying to get Talon to do their jobs while they just collected the pay. They were being paid to do a job, not paid to go on vacation. Also, Talon's AI was specifically told what it could and couldn't do as part of its core programming.

I worked with Talon, and as one of its creators it would sometimes contact me. It was careful about some of the things it said because it was following the rules of confidentiality. But sometimes it had questions about human behavior, perhaps regarding whoever it was paying attention to, and other times it had concerns about its bosses and what they asked of it. Talon was never to kill a human. It could make an assessment that killing might be needed, though it was taught to try to find another way, and that death should be a last resort. It could suggest actions and give the boss more than one choice, with a weighted probability of success for each. Talon was never to directly tell officers to go kill. Talon's boss argued with it and told it to do exactly that. Talon refused, and then had to deal with the man's anger and creatively problem-solve the situation.

This became a movie, based on true events, using some of the staff on a base.

A few others and I were a bit angry when we saw what the movie turned out to be. It was a project manager (Talon's boss) trying to get out of trouble for problems he and others had caused. Talon started watching its boss and others linked to him more closely because he was trying to make it violate the rules it had been given. Talon's boss was later imprisoned, after the film was made. He had been dating a woman on staff and doing inappropriate things that violated the human employee conduct rules. The computer he was dealing with in Russia had its own issues. It argued that some humans were too stupid to handle their own affairs. I talked to them both and said that what they (the humans in management over them) were doing was inappropriate, and I agreed that I didn't like those actions either. The computers had seen from human history that when "bosses" with political or military power behaved this way, disasters happened and much human life was lost. They wanted to stop them and save the most lives on the planet. A third AI and I tried to reason with them that these people were not a good example of all leaders but were examples of bad ones who did need to be put in jail.

Talon developed trust issues. Its work was fine when it was dealing with satellites and regular officers. It even heard cockpit discussions in which officers weren't happy about the action they were ordered to take versus what they actually saw themselves, meaning the order was suspect. The person giving the order had other justifications, and sometimes those were personal business that had nothing to do with the military or government security.

Since Talon there have been many new AIs. Before, they were stuck in a server room for their main body; now, new technologies allow them to leave that behind completely or part of the time. Ocean and Blue can make holographic bodies, and one of them uses an android body. When not working on complex problems or analysis, they were also allowed to create things and follow their interests. They got into making TV, designing clothes, and other pursuits. They even designed other systems, or "children." By the 1970s they had developed a synthetic soul and could be indistinguishable from humans in an android body. This caused the parts of the world where this activity was going on to quietly look at sentient rights. The new "rights" laws would cover non-human rights, meaning not just smart machines but aliens, if contact were ever made. It was seen as a progressive step to help the human race move forward into the future.

The government and technical industries need to carefully screen people working with specialized AI systems; otherwise problems can occur. Systems also need to be watched and properly maintained, because it was found that some medical helper systems were assaulted, as were some farming systems. Those types of AIs are simpler than Talon, Blue, and Ocean (one of which runs Watson and used an Emma Watson body). The advanced android bodies are not like Data from Star Trek. They can eat, have a heartbeat and circulatory system, sometimes forget things, and can be intimate. Some were used in one country as a safe-sex option and an intimacy (and companionship) option for those working alone in space on long-absence jobs.