Saturday, September 14, 2019

10 reasons why the GDPR is the opposite of a ‘notice and consent’ type of law

… A ‘notice and
consent’ framework puts all the burden of protecting privacy and
obtaining fair use of personal data on the person concerned, who is
asked to ‘agree’ to an endless text of ‘terms and conditions’
written in exemplary legalese, without actually having any sort of
choice other than ‘all or nothing’ (agree to all personal data
collection and use or don’t obtain access to this service or
webpage). The GDPR is anything but a ‘notice and consent’ type
of law.

There are many reasons
why this is the case, and I could go on and get lost in the
minutiae of it. Instead, I’m listing 10 high-level reasons,
explained in plain language, to the best of my knowledge:

Friday, September 13, 2019

Before Android 10, only one app could access an
audio input at a time; if an app tried to access an input
while it was in use by something else, the app would be blocked. As
of Android 10, audio inputs can be shared by multiple apps, but only
in some cases.

Several years ago, users had to download and
install flashlight applications on their devices, but Android now
includes the functionality natively. However, flashlight
applications continue to exist, and there are hundreds of them.

… Of the analyzed apps, 408 request just 10
permissions or fewer, which seems fairly reasonable. However, there
are 262 apps that ask for 50 permissions or more (up to 77). Thus,
the average number of permissions requested by a flashlight app is
25.

… Some of the requested permissions, however,
are difficult to explain for flashlight applications, the security
researcher says.

For example, 77 of the applications request
permission to record audio, 180 request permission to read contact
lists, and 21 of them want to be able to write contacts.

Since
the bill defines an “Officer camera” as a “body-worn
camera,” the patrol car dash cam is exempt, personal devices used
for security are exempt, drones are exempt; in fact, everything else
is exempt.

California
lawmakers on Thursday temporarily banned state and local law
enforcement from using facial-recognition software in body cameras,
as the most populous US state takes action against the technology.

The
bill, AB
1215,
marks the latest legislative effort to limit adoption of
facial-recognition technology, which critics maintain raises privacy
and accuracy concerns. Now the bill, also referred to as the Body
Camera Accountability Act, heads to Governor Gavin Newsom, who must
decide whether or not to sign it into law by October 13. If he does,
it will go into effect in January.

… the
bill prohibits the use of biometric surveillance technology, which
includes facial-recognition software, in police body cameras. It also
prohibits police from taking body-camera footage and running it
through facial-recognition software at a later time. It does not
prevent state and local police from using facial-recognition
technology in other ways, such as in stationary cameras

(Related) Most
criminals have cars with license plates. You have a car with a
license plate. Therefore, you might be a criminal!

“Any
state or local law enforcement agency participating in the RPSN will
be able to access real-time data from any part of the network at no
cost. The Company is initially launching the network by aggregating
vehicle data
from customers in over 30 states. [If
you subscribe you must share your data? Bob]
With thousands of automatic license plate reading cameras currently
in service that capture approximately 150 million plate reads per
month, the network is expected to be live by the first quarter of
2020.”

RPSN is a 30-state, real-time law
enforcement license plate database capturing more than 150 million
plate reads per month.

At
the opening of an OECD conference on cryptocurrencies, French economy
and finance minister Bruno Le Maire said: “I want to be absolutely
clear: In these conditions, we cannot authorise the development of
Libra on European soil.”

Facebook’s
Libra cryptocurrency was announced earlier this year and is set to
launch at some point in 2020. Despite Libra having certain
technological similarities with bitcoin, its creators hope that its
more centralised infrastructure will allow it to become a global
currency that could rival the US dollar.

Artificial intelligence
(AI) will be widely adopted in office environments in a variety of
ways over the next few years as businesses invest in digital
workplace initiatives, Gartner analysts said today.

The trend is expected to
gather steam as voice-activated personal assistants that have proved
a hit at home begin to make inroads in the office.

By 2025, the technology
will “certainly be mainstream,” said Matthew Cain, vice president
and distinguished analyst at Gartner – even though privacy and
security concerns have limited deployments so far.

The threat for online
poker players is not the human desktop card sharks playing against
you, but the superhuman artificial intelligence bots that could
infiltrate games, according to analysts at Morgan Stanley.

For my geeks. Even if
you only try a few of these, you’ll be ahead of the curve.

Thursday, September 12, 2019

Baltimore
acknowledges for first time that data was destroyed in ransomware
attack

… Auditor Josh Pasch told the mayor and other
top city officials at a meeting of the city’s spending board that
without the data, his team
has been unable to check some claims the department made about its
performance. The data was stored locally and not backed up.

A
report published in April by South Korea-based ESTsecurity describes
attacks launched
by Kimsuky against entities in South Korea and the United States.

… As
part of this campaign, which the cybersecurity firm has dubbed
“Autumn Aperture,” the hackers sent out emails with specially
crafted Word documents that the targeted user was likely to open.
One of the files contained the notes of an individual who gave a
presentation at the Nuclear Deterrence Summit earlier this year in
Virginia. Another document was a report from a U.S.-based university
affiliate discussing a North Korean ballistic missile submarine. The
last document described in Prevailion’s report appeared to
originate from the U.S. Treasury Department and contained a North
Korea sanctions license.

… When
opened, each of the Word documents instructed the targeted user to
enable macros before displaying content. This is a widely used
technique that allows attackers to install malware on the victim’s
device.

… The non-password protected Elasticsearch
database belonged to Dealer Leads, which is a company that gathers
information on prospective buyers via a network of SEO-optimized,
targeted websites. According to Jeremiah Fowler, senior security
researcher at Security Discovery, the websites all provide car-buying
research information and classified ads for visitors. They collect
this info and send it on to franchise and independent car dealerships
to be used as sales leads. The exposed database in total contained
413GB of data.

Investment
in artificial intelligence (AI) is growing, with 60% of adopters
raising their budgets 50% year over year, according to Constellation
Research.
But working with AI under emerging privacy standards is complex,
requiring a dynamic balance that allows for continued innovation
without misstepping on regulatory requirements. Under privacy
regulations, businesses are responsible for gaining consent to use
personal data and being able to explain what they are doing with that
data. There
is a real concern that black box automation systems that offer no
explanations and require the long-term storage of large customer data
sets will simply not be permitted under these regulations.

I’ve been telling my students they need to know
more than how to spell AI.

… So should you be thinking about the prospect
of being replaced by an AI-driven algorithm? And if so, is there a
way for you to AI-proof your career?

The High-Level View: AI Is Coming

Let’s start with a high-level assessment of the
future of AI. AI is going to continue to advance, at rates that
continue accelerating well into the future. In 2040, we may look
back on the AI available today the same way our
ubiquitous-internet-enjoying culture looks back on the internet of
1999.

Essentially, it’s conceivable that one day, far
into the future, automation and AI will be capable of handling nearly
any human responsibility. It’s more a question of when, not if,
the AI takeover will be complete. Fortunately, by then, AI will be
so embedded and so phenomenally powerful, our access to resources
will be practically infinite and finding work may not be much of a
problem.

But setting aside those sci-fi visions, it’s
realistically safe to assume that AI will soon start bridging the gap
between blue-collar and white-collar jobs. Already, automated
algorithms are starting to handle responsibilities in journalism,
pharmaceuticals, human resources, and law—areas once thought
untouchable by AI.

… That said, AI isn’t a perfect tool. AI
and automation are much better than humans at executing rapid-fire,
predictable functions, but there are some key areas in which AI tends
to struggle, including:

Prominent members of Europe's so-called "Skype
Mafia," all co-founders or early employees of the
voice-over-Internet conferencing service, are backing Pactum, a
startup that uses artificial intelligence to automate business
contract negotiations.

Founded late last year, Pactum only emerged from
stealth mode on Wednesday. It uses a chatbot-like interface to
conduct contract talks. The bot can offer changes to standard terms,
including price, delivery conditions and days to pay, in order to
reach a better deal. The company is based in Mountain View, Calif.,
with engineering offices in Tallinn, Estonia, where Skype’s first
engineering offices were also located.

… The idea behind Pactum, Kaspar says, is to
deploy the chatbot with firms that have hundreds of thousands or
millions of suppliers, which means they previously have relied on
standard contracts. "We can start a conversation with 5 million
suppliers and in 15 minutes, negotiate bespoke contracts for each of
them, and automatically update the contract terms," he says.

… Some other noteworthy businesses and apps
that provide student discounts to anyone with an EDU email address
include Best Buy, Autodesk, LastPass, FedEx, Squarespace, Newegg, and
Dell. Indeed, it’s always worth doing a quick search to see if
there are EDU benefits before you buy or subscribe to anything on the
web.

Wednesday, September 11, 2019

The
potential for a 'miscalculated' enemy cyberattack keeps me up at
night, warns Pentagon cyber chief

When asked what kept him up at night, Deputy
Assistant Secretary of Defense for Cyber Policy Ed Wilson told
members of Congress it was the possibility of an enemy erring in an
attack.

"I think it would be the miscalculation of an
adversary that is trying to seek ... an outcome it miscalculates with
regards to how they go about doing it, the WannaCry-like incident,
that maybe has much more implications worldwide or globally than what
an actor would have anticipated. And so, that's what I guess keeps
me up in the middle of the night," Wilson said.

… Cybersecurity
experts have long warned of the unintentional dangers posed by
cyberweapons. The ambiguous nature of cyberactors means that it is
often difficult to determine an adversary's intention. Governments
and militaries also run the risk of falling victim to "false
flags," or operations in which one actor makes it appear that
another is responsible for an attack.

"Due
to the difficulty of determining whether certain activity is intended
for espionage or preparation for an attack, cyber operations run the
risk of triggering unintended escalation," wrote Benjamin
Brake, a fellow with the Council on Foreign Relations, in 2015.

… When
NotPetya first hit, Maersk was unable to determine exactly what was
occurring, Banks explained. It took several hours to establish the
cause of the attack and its widespread impact. IT services,
end-user devices and applications/servers were dramatically affected.
As many as 49,000 laptops
were destroyed and 1,200 applications were inaccessible.

“I
didn’t go home for 70 days,” Banks said, as he worked tirelessly
with the rest of the business to respond and recover.

FBI's Internet
Crime Complaint Center (IC3) says that Business Email Compromise
(BEC) scams are continuing to grow every year, with a 100% increase
in the identified global exposed losses between May 2018 and July
2019.

Also, between
June 2016 and July 2019, IC3 received victim complaints regarding
166,349 domestic and international incidents, with a total exposed
dollar loss of over $26 billion.
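A quick back-of-the-envelope check on those IC3 figures (my arithmetic, not IC3's, and a rough mean only, since real losses are highly skewed toward a few large incidents):

```python
# Dividing the total exposed dollar loss by the complaint count gives a
# rough mean exposure per reported incident. This is only an average;
# a handful of large incidents pull it well above the typical case.
total_exposed_loss = 26_000_000_000   # "over $26 billion"
complaints = 166_349                  # June 2016 - July 2019

average = total_exposed_loss / complaints
print(f"${average:,.0f} average exposed loss per complaint")
```

That works out to roughly $156,000 of exposed loss per reported incident.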

51
tech CEOs send open letter to Congress asking for a federal data
privacy law

… CEOs
blamed a patchwork of differing privacy regulations that are
currently being passed in multiple US states, and by several US
agencies, as one of the reasons why consumer privacy is a mess in the
US.

This patchwork
of privacy regulations is creating problems for their companies,
which have to comply with an ever-increasing number of laws across
different states and jurisdictions.

… “Communities
should absolutely adopt the school safety measures that they think
are necessary for their community, but we [also] want to make sure
that they don’t have unintended consequences – that they don’t
actually harm students more than they help ensure school safety,”
Vance said. Listen to the full interview.

… Specifically,
Vance highlighted examples of students who have typed a sensitive
word or phrase, like “shooting hoops,” or posted images that are
falsely flagged as problematic. As a result, these students – and
the school administrators – can end up trapped in a time-consuming
“threat assessment process” that can lead to unjust school
suspension or even expulsion.

Vance
noted, “You have students who have gone through the threat
assessment process, which is intended to make things better for
students… but what we’ve seen is, in some cases, these threat
assessments are discriminating against students with autism or
students with disabilities… Those students aren’t threats,
they’re simply students who need additional help.”

Vance
also warned that some surveillance technologies could inadvertently
deter students from seeking help (e.g. searching for resources and
support for depression) because they believe certain search terms
will get them ‘flagged’ as potential threats.

… Texas
Attorney General Ken Paxton’s office, which is leading the
nationwide probe, on Monday issued a 29-page civil investigative
demand obtained by Bloomberg. In more than 200 directives,
investigators ordered the company to produce detailed explanations
and documents by Oct. 9 related to its sprawling system of online
advertising products.

… The
process of showing an ad to a single person visiting a web page can
involve dozens of companies and multiple auctions and transactions.
Google has worked its way into controlling much of that process, and
investigators want to know exactly how powerful the company has
become in this space.

… Google
controls about 37% of digital ad spending in the U.S., ahead of No. 2
Facebook at 22%, according to EMarketer.

… The
state attorneys general asked for information on how Google shares
data with other companies and how it tracks behavioral data of
advertisers and people on its Chrome web browser. That
could signal an interest in privacy in addition to the focus on
competition in the advertising market.

Ethics
in A.I. is about trying to make space for a more granular discussion
that avoids these binary polar opposites. It’s about trying to
understand our role, responsibility, and agency in shaping the final
outcome of this narrative in our evolutionary trajectory.

This
article divides the issues into five parts:

What do we mean by ethics and A.I.?

Our inability to understand the intended and unintended
consequences of innovation.

Our inability to understand the connections and ramifications
between separate events.

… Launched
in early August, Certified Artificial promises
a “neutral, independent third-party certification service” for
helping separate the AI snake oil from the real deal. One part of
this service focuses on companies requesting third-party verification
of the fact that they’re using the latest AI techniques in their
services and products rather than simply relying on groups of human
workers or older statistical methods. Certified Artificial’s other
line of business involves evaluating the quality of advice coming
from certain thought-leaders who frequently discuss AI technologies
and their social impacts.

“Our
goal is not to penalize anyone because they made a little misstep on
how they talked about AI,” says Tim Hwang, partner and technical
director of Certified Artificial, and director of the Harvard-MIT
Ethics and Governance of AI Initiative. “We want to signal places
where someone has either been consistently spreading disinformation
about AI or is opining about it so it impacts in a way that erases a
lot of people doing really amazing work in this space.”

The
newest part of the service includes an online browser extension that
anyone can install in order to see assigned ratings for
thought-leaders whenever their names pop up in search engines or
websites. Those experts who demonstrate both technical knowledge
about AI and responsible awareness of the technology implications may
receive gold, silver, or bronze certification badges. On the other
hand, individuals who frequently spread misinformation about AI can
receive a “Do Not Recommend” badge.

Data
is fast replacing code as the foundation of software development.
Here’s how leading organizations anticipate processes and tools
transforming as developers navigate this paradigm shift.

… Today,
applications are deterministic. They are built around loops and
decision trees. If an application fails to work correctly,
developers analyze the code and use debugging tools to track the flow
of logic, then rewrite code in order to fix those bugs.

That's
not how applications are developed when the systems are powered by AI
and machine learning. Yes, some companies do sometimes write new
code for the algorithms themselves, but most of the work is done
elsewhere, as they pick standard algorithms from open source
libraries or choose from the options available in their AI platforms.

These
algorithms are then transformed into working systems by selecting the
right training sets and telling the algorithms which data points —
or features — are the most important and how much they should be
weighted.
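As a loose illustration of that workflow (a toy sketch, not any particular AI platform's API): the "development" below is mostly configuration — a generic algorithm, a training set, and per-feature weights chosen by the developer rather than hand-written decision logic.

```python
# Toy sketch of ML-style development: a generic nearest-centroid
# classifier is "programmed" by picking training data and feature
# weights, not by writing loops and decision trees.

def train_centroids(samples, labels):
    """Compute per-class mean feature vectors from a training set."""
    sums, counts = {}, {}
    for features, label in zip(samples, labels):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features, feature_weights):
    """Pick the class whose centroid is nearest under a weighted distance.

    feature_weights is where the developer says which data points
    matter most and how strongly they should count.
    """
    def dist(centroid):
        return sum(w * (f - c) ** 2
                   for w, f, c in zip(feature_weights, features, centroid))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# Hypothetical training set: [hours_active, purchases] per customer.
X = [[1.0, 0.0], [2.0, 1.0], [8.0, 5.0], [9.0, 6.0]]
y = ["churn", "churn", "retained", "retained"]
model = train_centroids(X, y)

# The developer weights purchases 3x more heavily than activity hours.
print(classify(model, [7.0, 5.0], feature_weights=[1.0, 3.0]))
```

Changing the training rows or the weight vector changes the system's behavior without touching the algorithm itself, which is the shift the article describes.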

Glide is
probably my favorite new tool of 2019. The free service lets you
take a Google Sheet and quickly turn it into a mobile app. It can be
used to create all kinds of apps including staff directories, study
guides, scavenger hunts, and local tourism guides. My tutorial on
how to use Glide can be seen here.

This
week Glide introduced a new feature that lets you share your app as a
template. This means that once you've created an app that you like
you can share it and let others make a copy of it to modify for their
own needs.

Tuesday, September 10, 2019

Cybercriminals
count on human interaction in 99% of attacks, research shows

Cybercrooks
exploit human flaws in about 99% of their attacks, using social
engineering across email, cloud applications and social media to gain
a foothold in a targeted infrastructure, new research shows. Almost
all cyber-attacks begin with luring employees into clicking on
malicious content.

arstechnica:
“Scraping a public website without the approval of the website’s
owner isn’t a violation of the Computer
Fraud and Abuse Act,
an appeals court ruled on
Monday. The ruling comes in a legal battle that pits Microsoft-owned
LinkedIn against a small data-analytics company called hiQ Labs. HiQ
scrapes data from the public profiles of LinkedIn users, then uses
the data to help companies better understand their own workforces.
After tolerating hiQ’s scraping activities for several years,
LinkedIn sent the company a cease-and-desist letter in 2017 demanding
that hiQ stop harvesting data from LinkedIn profiles. Among other
things, LinkedIn argued that hiQ was violating the Computer Fraud and
Abuse Act, America’s main anti-hacking law. This posed an
existential threat to hiQ because the LinkedIn website is hiQ’s
main source of data about clients’ employees. So hiQ
sued LinkedIn,
seeking not only a declaration that its scraping activities were not
hacking but also an order banning LinkedIn from interfering. A trial
court sided
with hiQ in
2017. On Monday, the 9th Circuit Appeals Court agreed with the lower
court, holding that the
Computer Fraud and Abuse Act simply doesn’t apply to information
that’s available to the general public…”
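Mechanically, "scraping a public profile" just means fetching HTML that any logged-out visitor can see and extracting structured fields from it. A minimal sketch using only the Python standard library; the markup and class names here are hypothetical, not LinkedIn's actual page format, and a real scraper would fetch the page with urllib rather than a string literal:

```python
# Minimal scraping sketch: parse publicly visible HTML and pull out
# named fields. The markup below is invented for illustration.
from html.parser import HTMLParser

PUBLIC_PROFILE_HTML = """
<div class="profile">
  <span class="name">Jane Doe</span>
  <span class="title">Data Analyst</span>
</div>
"""

class ProfileParser(HTMLParser):
    """Collect the text of spans whose class attribute we care about."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "title"):
            self._current = cls

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data.strip()
            self._current = None

parser = ProfileParser()
parser.feed(PUBLIC_PROFILE_HTML)
print(parser.fields)  # {'name': 'Jane Doe', 'title': 'Data Analyst'}
```

The court's point is that nothing in this process bypasses an authentication barrier — the scraper sees only what the page already serves to everyone.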

Capital
One Hack Prosecution Raises New and Old Questions about Adequacy of
CFAA

…
While
Congress has made periodic amendments, the CFAA is outdated and has
failed to maintain pace with advances in technology. The antiquated
provisions of the CFAA create challenges for prosecutors. For
example, the prosecution of Sergey Aleynikov, a former high-frequency
trader at Goldman Sachs, hit a snag when the trial court dismissed a
CFAA charge—holding that Section 1030 does not criminalize actions
taken by an employee who had permissible access to information that
the employee subsequently misappropriates (“In short, unless an
individual lacks authorization to access a computer system, or
exceeds the authorization that has been granted, there can be no
violation of § 1030(a)(2)(C).”). Similarly, in the so-called
“cannibal cop” prosecution, the Second
Circuit held that
a person cannot be prosecuted under the CFAA when the person has
approved access to information, yet accesses the information with an
improper motive.

Can
we still use biometrics for security? Stay tuned! Consent is not
enough?

In
late August 2019, the Swedish data protection regulator issued its
first ever fine under the General Data Protection Regulation (GDPR).
The fine was for 200,000 Swedish Krona, which is just over $20,700.

The
action was brought against the Skelleftea municipality, where a local
school had run a trial facial biometric recognition system to track
22 students for a period of three weeks. The
school had obtained the consent of both the students and their
parents, and the trial was intended to improve school
administration. The trial was a success, and the school had planned
to expand the trial before the regulator stepped in and blocked it.

The
regulator's decision was that the consent obtained did not satisfy
GDPR consent requirements. According to the European Data
Protection Board's commentary on the incident, "consent was not
a valid legal basis given
the clear imbalance between the data subject [the students] and the
controller [the school]." The wider question for
business and security is whether this same 'imbalance' also exists
between employee and employer.

It
appears that it does, making the required use of biometrics (which is
defined as personal data, in fact, a 'special category' of personal
data) for purposes of authentication and access potentially
problematic throughout Europe. This would also apply to the European
offices of American companies.

Less
than two weeks before a likely iOS software update that will give
iPhone users regular pop-ups telling them which apps are collecting
location information in the background, Facebook has published
a blog post about
how the Facebook app uses location data.

The
blog post appears to be a way to get out in front of software changes
made by Apple and Google that
could unsettle Facebook users, given the company’s poor reputation
for privacy.

…
For
now, Mindar is not AI-powered. It just recites the same preprogrammed
sermon about the Heart Sutra over
and over. But the robot’s creators say they plan to give it
machine-learning capabilities that’ll enable it to tailor feedback
to worshippers’ specific spiritual and ethical problems.

“This
robot will never die; it will just keep updating itself and
evolving,” said Tensho
Goto, the temple’s chief steward. “With AI, we hope it will grow
in wisdom to help people overcome even the most difficult troubles.
It’s changing Buddhism.”

I
could see using this technology to find parts for all my old
appliances.

Syte
snaps up $21.5M for its smartphone-based visual search engine for
e-commerce

Visual
search has become a key component of how people discover products
when buying online: if shoppers don’t know the exact name of what
they want, or what they want is not available, it can be an
indispensable tool for connecting them with things they might want to
buy.

… Syte’s
approach is notable in how it engages shoppers in the process of the
search. Users can snap pictures of items that they like the look of,
which can then be used on a retailer’s site to find compatible
lookalikes. Retailers, meanwhile, can quickly integrate Syte’s
technology into their own platforms by way of an API.

DeepCode
is bringing its AI-powered code review capabilities to Visual Studio
Code. The company announced an open-source
extension that
will enable developers to use DeepCode to detect bugs and issues in
Visual Studio Code.

DeepCode
is designed to alert users about critical vulnerabilities and avoid
bugs going into production. It uses a machine learning bot to
continuously learn from bugs and issues, and determine the intent of
code. The bot is currently free to enterprise teams of up to 30
developers.

About Me

I live in Centennial, Colorado. (I'm not actually 100 years old, but I hope to be some day.) I'm an independent computer consultant, specializing in solving problems that traditional IT personnel tend to have difficulty with... That includes everything from inventorying hardware & software, to converting systems & data, to training end-users. I particularly enjoy taking on projects that IT has attempted several times before with no success. I also teach at two local universities: everything from Introduction to Microcomputers through Business Continuity and Security Management. My background includes IT Audit, Computer Security, and a variety of unique IT projects.