Gurgaon-based Staqu today launched its AI-powered Smart Glasses with an inbuilt camera in India. The glasses combine speech and image recognition, and the company says they can identify potential threats to civil society, such as criminals, intruders, or terrorists. The built-in camera captures input to trigger facial recognition; once a face is matched against the given databases, the Smart Glass projects the results on the glass screen. The entire process happens in real time as the user simply glances around the vicinity.

According to the company, the glasses will work even in wild scenarios, as they fuse speech and image recognition into a hybrid identification technology to uniquely identify anyone. Information is streamed in real time from a centralized server, and the glasses can be controlled from a centralized administrative portal, where specific recognition targets can be set remotely for each pair.

According to an ET report, Staqu will start a pilot of its smart glass platform with Punjab Police and will work closely with them to help identify criminals. The product will be offered to customers on a yearly license-based model. Commenting on the announcement, Atul Rai, Co-Founder & CEO of Staqu, said: “At Staqu, …

Police in the UK are trialling a new “stop and scan” power, which lets them check the fingerprints of unknown individuals against national criminal and immigration databases.

Officers will be able to stop anyone when an offense is suspected and scan their fingerprints using a mobile device if the individual cannot otherwise identify themselves. The scanners will check fingerprints against 12 million biometric records held in two databases: IDENT1, which contains the fingerprints of people taken into custody, and IABS, which contains the fingerprints of foreign citizens, recorded when they enter the UK.

Speaking to Wired UK, project manager Clive Poulton, who is helping oversee the trials for the Home Office, said: “[Police] can now…

Check Point researchers warn that criminals could turn vacuum cleaners and dishwashers from LG into equipment for espionage.

Researchers at IT security company Check Point have found a security flaw in SmartThinQ, the smart home software from South Korean consumer electronics company LG. This, they claim, could enable hackers to take over internet-connected devices such as refrigerators, ovens, dishwashers, air conditioners, dryers, and washing machines. They have outlined their findings in a detailed blog post.

Dubbed ‘HomeHack’, the flaw could even let attackers take control of the built-in security camera mounted on LG’s Hom-Bot vacuum cleaner, which doubles as a home security device. When the Hom-Bot detects movement in the home, it sends an alert to the homeowner’s smartphone, and the homeowner can then switch on the Hom-Bot’s camera.

“However, this camera, in the case of account takeover, would allow the attacker to spy on the victim’s home, with no way of them knowing, with all the obvious negative consequences of invasion of privacy and personal security violation,” write Check Point’s researchers.

Security flaw in login app

Researchers found the flaw in the login process, that is, when users sign into their accounts on the LG SmartThinQ app. To bypass its security protections, hackers would need to recompile the LG application on the client side.

“This enables the traffic between the appliance and the LG server to be intercepted. Then, the would-be attacker creates a fake LG account to initiate the login process,” say Check Point’s researchers.

By manipulating the login process and entering the victim’s email address instead of their own, they explain, it is possible to hack into the victim’s account and take control of all LG SmartThinQ devices owned by the user.
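The class of bug described above, a server that trusts a client-supplied account identifier instead of the identity it actually verified, can be sketched in a few lines. This is a minimal, purely hypothetical illustration; the function and field names are invented and are not LG's actual API:

```python
# Hypothetical sketch of the flaw class described above: a login flow that
# trusts a client-supplied email instead of the authenticated identity.
# Names are illustrative only; this is not LG's real implementation.

def vulnerable_login(server_sessions, authenticated_email, requested_email):
    """BUG: grants a session for whatever account the client names."""
    token = "token-for-" + requested_email   # ignores who actually logged in
    server_sessions[token] = requested_email
    return token

def fixed_login(server_sessions, authenticated_email, requested_email):
    """Fix: bind the session to the identity the server actually verified."""
    if requested_email != authenticated_email:
        raise PermissionError("account mismatch")
    token = "token-for-" + authenticated_email
    server_sessions[token] = authenticated_email
    return token
```

In the vulnerable version, an attacker authenticated as themselves can simply submit the victim's email and receive a session for the victim's account; the fixed version rejects any identifier that does not match the authenticated user.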

Check Point said the flaw highlights the potential for smart home devices to be exploited, either to spy on homeowners and residents and steal their data, or to use the devices as staging posts for further attacks, such as spamming, denial-of-service attacks (as seen with the giant Mirai botnet in 2016), or spreading malware.

The researchers urge users of LG products to download the latest software updates from the LG website.

The era of artificial intelligence is upon us, though there’s plenty of debate over how AI should be defined, much less whether we should start worrying about an apocalyptic robot uprising. The latter issue recently ignited a highly publicized dispute between Elon Musk and Mark Zuckerberg, with Zuckerberg arguing that it was irresponsible to “try to drum up these doomsday scenarios”.

In the near term, however, it seems more than likely that AI will be weaponized by hackers in criminal organizations and governments to enhance now-familiar forms of cyberattack like identity theft and DDoS attacks.

A recent survey has found that a majority of cybersecurity professionals believe that artificial intelligence will be used to power cyberattacks in the coming year. Cybersecurity firm Cylance conducted the survey at this year’s Black Hat USA conference and found that 62 percent of respondents believe that “there is high possibility that AI could be used by hackers for offensive purposes.”

Artificial intelligence can be used to automate elements of cyber attacks, making it even easier for human hackers (who need food and sleep) to conduct attacks at a higher rate and with greater efficacy, writes Jeremy Straub, an assistant professor of computer science at North Dakota State University who has studied AI decision-making. For example, Straub notes that AI could be used to gather and organize databases of personal information needed to launch spearphishing attacks, reducing the workload for cybercriminals. Eventually, AI may result in more adaptive and resilient attacks that respond to the efforts of security professionals and seek out new vulnerabilities without human input.

Rudimentary forms of AI, like automation, have already been used to perpetrate cyber attacks at a massive scale, like last October’s DDoS attack that shut down large swathes of the internet.

“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, to Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.”

The flip side of these predictions is that, even as AI is used by malicious actors and nation-states to generate a greater number of attacks, AI will likely prove to be the best hope for countering the next generation of cyber attacks. The implication is that security professionals need to keep up in their arms race with hackers, staying apprised of the latest and most advanced attacker tactics and creating smarter solutions in response.

For the time being, however, cyber security professionals have observed hackers sticking to tried-and-true methods.

“I don’t think AI has quite yet become a standard part of the toolbox of the bad guys,” Staffan Truvé, CEO of the Swedish Institute of Computer Science said to Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks already is that the traditional methods still work—if you get what you need from a good old fashioned brute force approach then why take the time and money to switch to something new?”
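Truvé's point about the "good old fashioned brute force approach" is easy to illustrate: exhaustive search over a small keyspace needs no intelligence at all, which is exactly why it still works. A toy sketch (the 4-digit PIN and SHA-256 target are invented for illustration):

```python
import hashlib
import itertools

def brute_force_pin(target_hash):
    """Try every 4-digit PIN (only 10,000 candidates) until one
    hashes to the target SHA-256 value; return None if none match."""
    for digits in itertools.product("0123456789", repeat=4):
        pin = "".join(digits)
        if hashlib.sha256(pin.encode()).hexdigest() == target_hash:
            return pin
    return None
```

With only 10,000 candidates, this finishes almost instantly, which is why defenders rely on rate limiting, lockouts, and larger keyspaces rather than assuming attackers need anything cleverer.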

Weapons of Cyber Warfare

Cyber security has become a key issue in our national and international discussions. No longer do cyber attacks concern only email companies and individuals who are unwilling to update their tech. Now, cyber crime has had a major impact on both U.S. mainstream political parties, and almost any organization — even hospitals — should have some concern about the possibility of an attack through a computer network.

In their struggle to fight cyber crime, major companies like IBM are turning to two of the world’s most powerful technologies — artificial intelligence (AI) and quantum computing.

IBM’s AI, Watson, helps human analysts sift through the 200,000 or so “security events” the company has to deal with on a day-to-day basis. It helps determine which events don’t require special attention, such as instances when an employee forgets their password, and which should receive more scrutiny.

“Before artificial intelligence, we’d have to assume that a lot of the data — say 90 percent — is fine. We only would have bandwidth to analyze this 10 percent,” Daniel Driver from Chemring Technology Solutions, a provider of electronic warfare, said in an interview with the Financial Times. “The AI mimics what an analyst would do, how they look at data…It’s doing a huge amount of legwork upfront, which means we can focus our analysts’ time.”
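The triage idea Driver describes, filtering out the routine 90 percent so analysts only see events that need attention, can be sketched with simple rules. This is a minimal illustration, not IBM's or Chemring's actual system; the event types and threshold are invented:

```python
# Minimal sketch of security-event triage (hypothetical rules, not a
# real product): separate routine events from those needing an analyst.

ROUTINE_TYPES = {"password_reset", "successful_login", "scheduled_scan"}

def triage(events):
    """Split a list of event dicts into (routine, needs_review)."""
    routine, needs_review = [], []
    for event in events:
        # Routine event types with few failures are filtered out;
        # everything else is escalated to a human analyst.
        if event["type"] in ROUTINE_TYPES and event.get("failures", 0) < 5:
            routine.append(event)
        else:
            needs_review.append(event)
    return routine, needs_review
```

Real systems replace these hand-written rules with learned models, but the payoff is the same: the bulk of the 200,000 daily events never reaches a human.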

Time for Faster Fighters

Watson is about 60 times faster than its human counterparts, and speed is key for defending against cyber attacks. But even Watson’s impressive rates pale in comparison to those that can be attained with quantum computers.

“The analogy we like to use is that of a needle in a haystack,” Driver said in the interview. “A machine can be specially made to look for a needle in a haystack, but it still has to look under every piece of hay. Quantum computing means, I’m going to look under every piece of hay simultaneously and find the needle immediately.”
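The analogy overstates the effect somewhat: quantum search does not find the needle "immediately". Grover's algorithm, the canonical quantum search procedure, needs roughly (π/4)·√N queries for an unstructured search over N items, versus up to N classical checks, a quadratic rather than instantaneous speedup. A quick comparison:

```python
import math

def classical_queries(n):
    """Worst case for unstructured classical search: check every item."""
    return n

def grover_queries(n):
    """Approximate optimal number of Grover iterations: ~(pi/4) * sqrt(n)."""
    return math.ceil((math.pi / 4) * math.sqrt(n))
```

For a haystack of a million pieces of hay, the classical worst case is a million checks, while Grover's algorithm needs on the order of a few hundred queries, dramatic, but still a search, not a single glance.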

While these technologies are not silver bullets against cyber attacks, they are becoming vital tools in the cyber security industry, which is projected to grow from $74 billion last year to $100 billion in 2020. Part of this growth may be attributed to society’s increasing reliance on the Internet of Things (IoT). As everything from light bulbs to our jackets becomes digitally accessible, every person should be more concerned about cyber security.

As we continue to see advancements in both AI and quantum computing technologies, more businesses and households will have access to these protective tools. AIs are already finding their places in different sectors of our society — including healthcare. Perhaps in addition to diagnosing medical images, AIs can also protect hospitals from future cyber attacks.