Credit: Christian Isaac Peñaloza Sanchez

Brain-controlled systems that can process thoughts and translate them into commands that move objects can restore some communication and movement to those who can't speak or move. But users of these systems can suffer from mental fatigue.

Christian Isaac Peñaloza Sanchez, a Ph.D. candidate in Cognitive Neuroscience Applied to Robotics at the University of Osaka, Japan, has designed an intelligent interface for these systems that minimizes mental fatigue.

His interface, called the Automating a Brain-Machine Interface System, can learn up to 90 percent of the user's instructions, which allows it to operate autonomously after a while.

The system consists of electrodes placed on the scalp, which measure brain activity in the form of EEG signals. These are used to detect patterns generated by various thoughts, the user's mental state -- whether he or she is awake, drowsy, asleep, etc. -- and the level of concentration. It also includes a graphical interface that displays the available devices or objects and interprets EEG signals to assign user commands and control the devices.

In addition, wireless sensors distributed throughout the room send environmental information -- such as temperature or lighting -- mobile hardware actuators receive signals to turn appliances on and off, and an artificial intelligence algorithm ties everything together.

"The latter collects data from wireless sensors, electrodes and user commands to learn a correlation between the environment of the room, the mental state of the person and its common activities," Peñaloza Sanchez said in a press release. “We give learning capabilities to the system by implementing intelligent algorithms, which gradually learn user preferences. At one point it can take control of the devices without the person having to concentrate much to achieve this goal."

For example, an individual can use it to control an electric wheelchair and move it to the living room using basic commands (forward, backward, left or right), which the system learns. The next time the user wants to take the same action, he or she only needs to press a button or think about it for the chair to navigate automatically to the desired destination.
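One simple way to realize that "drive it once, replay it later" behavior is to record the sequence of basic commands the user issued and play it back on demand. The sketch below is hypothetical: the `Wheelchair` driver and `RouteRecorder` class are stand-ins, not the interface described in the article.

```python
# Hedged sketch: record the command sequence of a manually driven route, then replay it.
import time

class Wheelchair:
    def move(self, command):
        print(f"executing: {command}")   # stand-in for real motor control

class RouteRecorder:
    def __init__(self, chair):
        self.chair = chair
        self.routes = {}                 # route name -> list of basic commands

    def record(self, name, commands):
        # `commands` is the sequence the user issued the first time,
        # e.g. ["forward", "forward", "left", "forward"].
        self.routes[name] = list(commands)

    def replay(self, name, step_delay=0.5):
        for cmd in self.routes.get(name, []):
            self.chair.move(cmd)
            time.sleep(step_delay)       # pacing between motion primitives

recorder = RouteRecorder(Wheelchair())
recorder.record("living_room", ["forward", "forward", "left", "forward"])
recorder.replay("living_room")           # later triggered by a button press or a single thought
```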

Once the system operates automatically, the user no longer has to concentrate to control devices. However, the system continues to monitor the EEG data to detect a signal called Error-Related Negativity, which occurs when people become aware of an error committed by themselves or by a machine.

For example, when the room is warm, the user expects the window to open automatically; if the system makes a mistake and turns on the TV instead, the user's brain registers the error spontaneously, without any deliberate effort. This allows the command that caused the error to be corrected and the system to be retrained.
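Error-related negativity shows up as a brief negative deflection over fronto-central electrodes shortly after the person perceives the mistake. The sketch below is not the study's pipeline; the channel, time window, and threshold are assumptions chosen only to show the general screening idea.

```python
# Hedged sketch: screen an EEG epoch, time-locked to the moment a device acted, for an
# ERN-like deflection -- an unusually negative mean amplitude ~50-150 ms after the event.
import numpy as np

def looks_like_ern(epoch_uv, fs=250, baseline_ms=(-200, 0), window_ms=(50, 150), z_thresh=-2.0):
    """epoch_uv: 1-D array of one channel (e.g. FCz), time-locked so index 0 is -200 ms."""
    t0 = int(-baseline_ms[0] * fs / 1000)                 # sample index of the device action
    baseline = epoch_uv[:t0]
    corrected = epoch_uv - baseline.mean()                # baseline correction
    lo, hi = (t0 + int(m * fs / 1000) for m in window_ms)
    window_mean = corrected[lo:hi].mean()
    z = window_mean / (baseline.std() + 1e-9)             # crude z-score against baseline noise
    return z < z_thresh                                   # True -> flag the last command as an error

# Synthetic example: 600 ms epoch at 250 Hz with an injected negative dip after the action.
fs = 250
epoch = np.random.randn(int(0.6 * fs)) * 2.0
epoch[int(0.25 * fs):int(0.35 * fs)] -= 12.0              # simulated ERN ~50-150 ms post-action
print(looks_like_ern(epoch, fs))                          # -> True
```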

Credit: AXON

Officers of the London Metropolitan Police Service will start wearing AXONbody(TM) video cameras, and the data will be managed by EVIDENCE.com, a business unit of TASER International.

The one-year pilot will be rolled out across nine London boroughs and up to 500 AXON cameras will be deployed.

These small yet highly visible cameras, powered by a pocket-size battery pack, can attach securely to sunglasses, a cap, a shirt collar, or a head mount and, when recording, capture a wide-angle, full-color view of what an officer is facing. The video automatically uploads to EVIDENCE.com, a web-based storage and management system, where it can be easily accessed for review. End users cannot tamper with video files stored online or on AXON camera systems; files cannot be deleted or altered in any way while on the device.

Police Chief William A. Farrar of Rialto, Calif., conducted a year-long study that investigated whether officers' use of video cameras could bring measurable benefits to relations between police and civilians. The results showed an 88 percent reduction in citizen complaints and a 60 percent reduction in uses of force after implementation of TASER's AXONflex body-worn video (BWV) cameras.

Can a robot learn right from wrong? Researchers from Tufts University, Brown University and Rensselaer Polytechnic Institute are working with the U.S. Navy to answer that question.

They are exploring whether robots can learn right, wrong, and the consequences of both, a capability that would be valuable on the battlefield.

In one scenario, a robot medic is responsible for helping wounded soldiers. It is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it?

If the machine stops, a new set of questions arises. The robot assesses the soldier's physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it's for the soldier's well-being?

According to a Tufts University press release, the plan to develop moral robots includes isolating essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.

The goal is to create a completely autonomous moral robot. According to Selmer Bringsjord, head of the Cognitive Science Department at RPI, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today's most advanced artificially intelligent and question-answering computers. If that check reveals a need for deep, deliberate moral reasoning, such reasoning would be fired inside the robot, using newly invented logics tailor-made for the task.
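The two-tier idea described above -- a lightning-quick check on every action, with escalation to slower deliberate reasoning only when needed -- can be sketched in a few lines. This is a toy illustration, not Bringsjord's actual logics; the rules and the utilitarian tie-break are assumptions for the medic scenario.

```python
# Toy sketch of a fast ethical screen that escalates hard cases to deliberation.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    causes_pain: bool = False
    saves_life: bool = False
    aborts_mission: bool = False

def fast_ethical_check(action):
    """Returns 'allow', 'forbid', or 'deliberate' (escalate to slow reasoning)."""
    if not (action.causes_pain or action.aborts_mission):
        return "allow"                       # harmless actions clear immediately
    if action.causes_pain and not action.saves_life:
        return "forbid"                      # gratuitous harm is rejected outright
    return "deliberate"                      # genuine conflicts go to deep reasoning

def deliberate(action):
    # Stand-in for deep moral reasoning: here, a crude utilitarian tie-break.
    return "allow" if action.saves_life else "forbid"

def decide(action):
    verdict = fast_ethical_check(action)
    return deliberate(action) if verdict == "deliberate" else verdict

print(decide(Action("deliver_medication")))                                   # allow
print(decide(Action("apply_traction", causes_pain=True, saves_life=True)))    # allow, after deliberation
print(decide(Action("shove_bystander", causes_pain=True)))                    # forbid
```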

These days, it seems like nearly every new relationship starts online. But for all the well-intentioned online daters out there, a few creeps always threaten to ruin the fun.

Enter CreepShield, new facial recognition software that helps make online dating safer. The technology allows a user to check a dating site member's headshot against a database of nearly 500,000 sex offenders culled from publicly accessible federal and state registries. CreepShield works with any online dating site; the user simply uploads a photo or pastes a link directly into the search field on the company's homepage.

The facial recognition seems to work fairly well, although CreepShield doesn't claim exact matches. Instead, it gives percentages based on the similarity of the provided photo to the mugshots on file. Now, if only someone would create software that helps you weed out the bad dates.
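CreepShield's internals are not public, so the following is only a sketch of the general technique it describes: encode the queried headshot, compare it against pre-computed encodings of registry mugshots, and report similarity percentages rather than a hard match. The open-source `face_recognition` library used here is an assumption, not CreepShield's actual stack, and the paths are hypothetical.

```python
# Hedged sketch: rank a headshot against a registry of mugshot encodings by similarity.
import face_recognition

def rank_against_registry(query_photo_path, registry):
    """registry: list of (name, 128-d face encoding) built offline from mugshot photos."""
    image = face_recognition.load_image_file(query_photo_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return []                                           # no face found in the photo
    query = encodings[0]
    names = [name for name, _ in registry]
    distances = face_recognition.face_distance([enc for _, enc in registry], query)
    # Crude conversion of distance to a similarity percentage, for display only.
    scored = [(name, round(max(0.0, 1.0 - d) * 100, 1)) for name, d in zip(names, distances)]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Example usage (hypothetical files):
# registry = [("offender_123", face_recognition.face_encodings(
#                 face_recognition.load_image_file("mugshots/offender_123.jpg"))[0])]
# print(rank_against_registry("dating_profile_headshot.jpg", registry)[:5])
```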

Labeled Faces in the Wild. Credit: LFW

Facial recognition technology may be thwarted by variations in pose, illumination, expression and occlusion. But for the first time, a computer algorithm, developed by researchers from the Chinese University of Hong Kong, has beaten the facial recognition accuracy of humans.

The researchers, Chaochao Lu and Xiaoou Tang, used the Labeled Faces in the Wild database -- a collection of 13,000 faces of 6,000 public figures from the Internet, each labeled with the person's name -- as a benchmark. The database provides a range of face images with variations in pose, lighting, expression, race, ethnicity, age, gender, clothing, hairstyles, and other parameters.

The algorithm, named GaussianFace, scored an accuracy rating of 98.52 percent, beating the human average of 97.53 percent. Prior to GaussianFace, the highest score achieved by technology was 97.25 percent.

According to the Physics arXiv Blog, GaussianFace works by normalizing each face into a 150x120-pixel image using five landmarks: the positions of both eyes, the nose and the two corners of the mouth. It then divides each image into overlapping patches of 25x25 pixels and describes each patch with a vector that captures its basic features. Finally, it compares the images, looking for similarities.
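The patch layout just described can be sketched directly. This is only the preprocessing step, not the full GaussianFace model (which uses richer descriptors and a Gaussian-process-based classifier); the stride between patches and the simple normalized-pixel descriptor below are assumptions made for illustration, and a real pipeline would first warp the face to 150x120 using the five landmarks.

```python
# Hedged sketch: cut a normalized 150x120 face into overlapping 25x25 patches, describe
# each patch with a vector, and compare two faces by cosine similarity of the stacked vectors.
import numpy as np

def patch_descriptors(face_150x120, patch=25, stride=12):
    """face_150x120: 2-D grayscale array of shape (150, 120)."""
    h, w = face_150x120.shape
    descriptors = []
    for y in range(0, h - patch + 1, stride):        # stride < patch -> overlapping patches
        for x in range(0, w - patch + 1, stride):
            p = face_150x120[y:y + patch, x:x + patch].astype(np.float64).ravel()
            p = (p - p.mean()) / (p.std() + 1e-9)     # simple normalized-pixel descriptor
            descriptors.append(p)
    return np.concatenate(descriptors)

def face_similarity(face_a, face_b):
    a, b = patch_descriptors(face_a), patch_descriptors(face_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))   # cosine similarity

# Synthetic example with random "faces" (real input would come from landmark alignment).
rng = np.random.default_rng(0)
face_a = rng.random((150, 120))
print(face_similarity(face_a, face_a + rng.normal(0, 0.05, face_a.shape)))  # close to 1.0
print(face_similarity(face_a, rng.random((150, 120))))                      # near 0.0
```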

GaussianFace was trained on four databases containing very different images. One is the Multi-PIE database, which consists of face images of 337 subjects from 15 different viewpoints under 19 different illumination conditions, taken in four photo sessions. Another is a database called Life Photos, which contains about 10 images each of 400 different people.