Saturday, June 01, 2019

After
an earthquake tore through Haiti in 2010, killing more than 100,000
people, aid agencies spread across the country to work out where the
survivors had fled. But Linus Bengtsson, a graduate student studying
global health at the Karolinska Institute in Stockholm, thought he
could answer the question from afar. Many Haitians would be using
their mobile phones, he reasoned, and those calls would pass through
phone towers, which could allow researchers to approximate people’s
locations. Bengtsson persuaded Digicel, the biggest phone company in
Haiti, to share data from millions of call records from before and
after the quake. Digicel replaced the names and phone numbers of
callers with random numbers to protect their privacy.
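The pseudonymization Digicel applied can be sketched in a few lines: each caller ID is swapped for a stable random token, so the same (anonymous) person can still be traced from tower to tower without revealing who they are. The record fields and function names below are illustrative, not Digicel's actual pipeline:

```python
import secrets

def pseudonymize(call_records):
    """Replace each caller ID with a stable random token so the same
    (anonymous) caller can still be traced across cell towers."""
    token_for = {}  # caller_id -> random token, reused across records
    out = []
    for record in call_records:
        caller = record["caller_id"]
        if caller not in token_for:
            token_for[caller] = secrets.token_hex(8)
        out.append({"caller": token_for[caller],
                    "tower": record["tower"],
                    "time": record["time"]})
    return out

records = [
    {"caller_id": "+509-1234", "tower": "PAP-01", "time": "2010-01-11T09:00"},
    {"caller_id": "+509-1234", "tower": "LEO-07", "time": "2010-01-14T17:30"},
    {"caller_id": "+509-5678", "tower": "PAP-01", "time": "2010-01-12T08:15"},
]
pseudo = pseudonymize(records)
# Same caller keeps the same token, so displacement is still visible:
assert pseudo[0]["caller"] == pseudo[1]["caller"]
assert pseudo[0]["caller"] != pseudo[2]["caller"]
```

The key property is that tokens are consistent within the data set but meaningless outside it, which is what made the migration analysis possible without exposing identities.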

Bengtsson’s
idea worked. The analysis wasn’t completed or verified quickly
enough to help people in Haiti at the time, but in 2012, he and his
collaborators reported that the population of Haiti’s capital,
Port-au-Prince, dipped by almost one-quarter soon after the quake,
and slowly rose over the next 11 months. That result aligned with
an intensive, on-the-ground survey conducted by the United Nations.

… At
least 20 mobile-phone companies have donated their proprietary
information to such efforts, including operators in 100 countries
that back an initiative called Big Data for Social Good, sponsored by
the GSMA, an international mobile-phone association. Cash to support
the studies has poured in from the UN, the World Bank, the US
National Institutes of Health and the Bill & Melinda Gates
Foundation in Seattle, Washington. Bengtsson co-founded a non-profit
organization in Stockholm called Flowminder that
crunches massive call data sets with the aim of saving lives.

Yet
as data-for-good projects gain traction, some researchers are asking
whether they benefit society enough to outweigh their potential for
misuse.

Bret
Cohen and Stephanie Gold are presenting at the annual conference of
the National Association of College and University Attorneys on the
panel, “Focus on GDPR and Other Privacy Laws: How to Develop and
Implement a Practical Approach to Compliance.” Bret is also
presenting on the panel, “Navigating GDPR Compliance for Research.”

As
the Central Intelligence Agency harnesses machine learning and
artificial intelligence to better meet its mission, insiders are
aggressively addressing issues around bias and ethics intrinsic to
the emerging tech.

“We
at the agency have over 100 AI initiatives that we are working on and
that’s going to continue to be the case,” Benjamin Huebner, the
CIA’s privacy and civil liberties officer, said Friday at an event
hosted by the Brookings Institution in Washington.

… “One
of the interesting things about machine learning, which is an aspect
of our division of intelligence, is [experts] found in
many cases the analytics that have the most accurate results, also
have the least explainability—the least ability to
explain how the algorithm actually got to the answer it did,” he
said. “The algorithm that’s pushing that data out is a black box
and that’s a problem if you are the CIA.”

The
agency cannot just be accurate; it also has to be able to
demonstrate how it got to the end result. So if an analytic isn’t
explainable, it’s not “decision-ready.”
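The accuracy-versus-explainability trade-off can be made concrete with a toy contrast (purely illustrative, with invented data and thresholds; nothing here reflects the CIA's actual analytics): a one-rule model whose decision can be read off directly, versus an opaque score that produces an answer with no human-readable rationale.

```python
# Toy contrast between an explainable model and a "black box".
# Feature names and weights are invented for illustration.

def explainable_classifier(x):
    """Returns the decision and the reason for it, together."""
    if x["intercepts_per_day"] > 50:
        return "flag", "rule fired: intercepts_per_day > 50"
    return "clear", "rule did not fire"

def black_box_classifier(x):
    """Same kind of decision, but from an opaque score with no rationale."""
    score = 0.31 * x["intercepts_per_day"] - 0.07 * x["days_quiet"] + 1.9
    return "flag" if score > 17.0 else "clear"

case = {"intercepts_per_day": 60, "days_quiet": 3}
decision, reason = explainable_classifier(case)
print(decision, "-", reason)          # an analyst can audit this
print(black_box_classifier(case))     # answer only; not "decision-ready"
```

In Huebner's terms, only the first model could back a decision, because its path to the answer can be demonstrated after the fact.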

… One
of the most important economic thinkers of all time, John Maynard
Keynes, wrote in his 1930 essay "The Economic Possibilities for
our Grandchildren" that by the 21st century we could fulfill our
needs and wants with a 15-hour workweek and devote the rest of our
lives to non-monetary pursuits. Fast-forward to 2014, when the late
physicist Stephen Hawking told the BBC that "artificial
intelligence could spell the end of the human race."

… Economists
have debated the effect of technology and automation on jobs for a
long time. The first set of questions regards labor displacement and
whether there is any future for work at all. The second set of
questions has to do with how automation impacts income and wealth
inequality.

… According
to the MIT economist David Autor, between 1989 and 2007 job creation
occurred mostly in low-paying and high-paying jobs, while
middle-class jobs suffered net destruction.

The
Internet Of Things Is Powering The Data-Driven Fourth Industrial
Revolution

The
Fourth Industrial Revolution is data-driven. And a primary reason
for this is the rise of the internet of things (IoT). Connected
devices from the consumer level to the industrial are creating—and
consuming—more data than ever before. Last year, IoT
devices outnumbered the world's population for
the first time, and by 2021, Gartner predicts
that one million new IoT devices will be purchased every
hour.

In
this Extreme Data Economy, businesses, governments, and organizations
need to analyze and react to IoT data simultaneously, in real time.
This requires continuous analysis of streaming and historical data,
location analysis, and predictive analytics using AI and machine
learning.

Machine
learning has been defined by Andrew Ng, a computer scientist at
Stanford University, as “the science of getting computers to act
without being explicitly programmed.”
It was first conceived in the 1950s, but experienced limited
progress until around the turn of the 21st century. Since then,
machine learning has been a driving force behind a number of
innovations, most notably artificial
intelligence.

Machine
learning can be broken down into several categories, including
supervised, unsupervised, semi-supervised and reinforcement
learning. While supervised learning relies on labeled input data in
order to infer its relationships with output results, unsupervised
learning detects patterns among unlabeled input data.
Semi-supervised learning employs a combination of both methods, and
reinforcement learning motivates programs to repeat or elaborate on
processes with desirable outcomes while avoiding errors.
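The supervised/unsupervised distinction can be shown in miniature (a pure-Python sketch with made-up one-dimensional data; real systems use far richer features): a nearest-neighbor classifier learns from labeled examples, while a two-means routine finds structure in unlabeled ones.

```python
# Toy illustration of the supervised/unsupervised distinction.
# Data and labels are invented for the example.

def nearest_neighbor(train, query):
    """Supervised: labeled input data -> predict a label for new input."""
    return min(train, key=lambda xy: abs(xy[0] - query))[1]

def two_means(points, iters=10):
    """Unsupervised: detect two clusters in unlabeled 1-D data."""
    a, b = min(points), max(points)          # initial cluster centers
    for _ in range(iters):
        ca = [p for p in points if abs(p - a) <= abs(p - b)]
        cb = [p for p in points if abs(p - a) > abs(p - b)]
        a, b = sum(ca) / len(ca), sum(cb) / len(cb)
    return a, b

labeled = [(1.0, "low"), (1.2, "low"), (8.9, "high"), (9.4, "high")]
print(nearest_neighbor(labeled, 1.1))   # -> low
print(nearest_neighbor(labeled, 9.0))   # -> high

unlabeled = [1.0, 1.2, 1.1, 8.9, 9.4, 9.0]
print(sorted(two_means(unlabeled)))     # two centers, roughly 1.1 and 9.1
```

The classifier needed the labels; the clustering routine recovered the same two groups without them, which is the essence of the distinction.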

… Four-wheeled,
cooler-size Kiwibots are a familiar sight at UC Berkeley as they
ferry burritos, Big Macs and bubble tea to students. They’re
social media stars, their pictures posted on Instagram, Snapchat and
Facebook. Some students dressed up as them for Halloween. After one
caught fire due to a battery issue, students held a candlelight vigil
for it.

… The
Kiwibots do not figure out their own routes. Instead, people in
Colombia, the home country of Chavez and his two co-founders, plot
“waypoints” for the bots to follow, sending them instructions
every five to 10 seconds on where to go.

As
with other offshoring arrangements, the labor savings are huge. The
Colombian workers, who can each handle up to three robots, make less
than $2 an hour, which is above the local minimum wage.

Another
cost saving is that human assistance means the robots don’t need
pricey equipment such as lidar sensors to “see” around them.
Manufactured in China and assembled in the U.S., Kiwibots cost only
about $2,500 each, Iatsenia said.

I
really have trouble understanding the “big is evil” mindset. I’m
much more an “evil is evil” kind of guy.

The
Justice Department is preparing a potential antitrust investigation
of Google

… The
exact focus of the Justice Department’s investigation is unclear.
The department began work on the matter after brokering an agreement
with the government’s other antitrust agency, the Federal Trade
Commission, to take the lead on antitrust oversight of Google,
according to the people familiar with the matter, who spoke on the
condition of anonymity because the deliberations are confidential.

… Its
expansive, data-hungry footprint increasingly has drawn the attention
of Democrats and Republicans on Capitol Hill, who say that Google —
and some of its peers in Silicon Valley — have become too large and
should potentially be broken up. [Would
that reduce data collection? Do anything for consumers? Bob]

The popular Checkers and Rally’s drive-through
restaurant chain was attacked by Point of Sale (POS) malware
impacting 15 percent of its stores across the U.S.

… “We
recently became aware of a data security issue involving malware at
certain Checkers and Rally’s locations,” said Checkers on a
Wednesday website
advisory.

… The
incident impacted 102 Checkers stores across 20 states, which were
exposed over varying periods, from as early as December 2015
to as recently as April 2019 (a full list of impacted stores is on
Checkers’ data breach security
advisory page).

I
don’t need to spend much time gathering examples for my Computer
Security class.

New
York regulators are investigating a weakness that exposed 885 million
mortgage records at First
American Financial Corp. as
the first test of the state’s strict new cybersecurity regulation.
That measure, which went into effect in March 2019 and is considered
among the toughest in the nation, requires
financial companies to regularly audit and report on how they protect
sensitive data,
and provides for fines in cases where violations were reckless or
willful.

On
May 24, KrebsOnSecurity broke
the news that
First American had just fixed a weakness in its Web site that exposed
approximately 885 million documents — many of them with Social
Security and bank account numbers — going
back at least 16 years. No authentication was needed to access the
digitized records.
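The flaw is a textbook case of serving a document by its record number alone, with no check that the requester is entitled to it. A minimal sketch of the missing control, with entirely hypothetical names and data (this is not First American's actual code):

```python
# Sketch of the missing access check: a document lookup that trusts
# the record number alone (the flaw) versus one that verifies the
# requester's entitlement first. All names and data are hypothetical.

DOCUMENTS = {885000000: {"owner": "alice", "body": "escrow file ..."}}

def fetch_insecure(doc_id):
    """Vulnerable: anyone who can guess or increment doc_id gets the file."""
    return DOCUMENTS.get(doc_id, {}).get("body")

def fetch_secure(doc_id, requester):
    """Fixed: the record is returned only to its owner; deny by default."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != requester:
        return None
    return doc["body"]

assert fetch_insecure(885000000) is not None          # exposed to anyone
assert fetch_secure(885000000, "mallory") is None     # unauthorized: denied
assert fetch_secure(885000000, "alice") is not None   # owner: allowed
```

With sequential document numbers and no such check, enumerating 885 million records is a matter of counting, which is why the exposure was so large.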

… When
Europe first implemented the gold-standard GDPR privacy law, Apple
was one of the first companies to pledge to offer similar protections
to
its customers globally,
not just to EU citizens …

However,
the
company went on to argue that
it’s not enough to rely on companies to voluntarily do the right
thing and that the US needs its own version of GDPR.

Others
have since joined the call, including
Microsoft, Google,
and even
Facebook.
This is less surprising than it might seem even for companies where
users are the product: it’s
better for a company to know ahead of time what it can and can’t do
than to make business decisions based on practices which may later be
outlawed.

… There
seem to be three main sticking points. First, ensuring that the law
doesn’t place too great a burden on small businesses, who are not
as well placed as large companies to absorb compliance costs.
Second, disagreement
between Republicans and Democrats on
the role of the FTC. Third, concern among Democrats in particular
that the federal government would be overriding privacy laws already
being created at the state level.

Before
the new European General Data Protection Regulation (GDPR) went into
effect in May 2018, both small- and mid-sized companies and larger
enterprises found themselves scrambling to comply with a regulation
they found vague and complex, with no clear path to achieving
compliance. Now, one year later, we have a much better view of not
just the GDPR cost to prepare for the new regulatory environment, but
also how much organizations are spending on continuous compliance. A
new report from DataGrail, “The Cost of Continuous Compliance,”
provides valuable benchmarking data on just how much organizations
are spending – both in terms of financial resources and time – in
order to keep up with the demands of continuous compliance.

Last
year Kate
Crawford,
a New
York University professor
who runs an artificial intelligence research centre, set out to study
the “black box” of processes that exist around the hugely popular
Amazon Echo
device.

Crawford did
not do what you might expect when approaching AI – namely, study
algorithms, computing systems and suchlike. Instead, she teamed up
with Vladan
Joler, a
Serbian academic, to map the supply chains, raw materials, data and
labour that underpin Alexa, the AI agent that Echo’s users talk to.

It was a daunting process – so much so that
Joler and Crawford admit that their map, Anatomy of an AI System, is
just a first step. The results are both chilling and challenging.
For what the map shows is that contemporary western society is blind
to the real price of its thirst for technology.

HOW
DO YOU TEACH A MACHINE RIGHT FROM WRONG? ADDRESSING THE MORALITY
WITHIN ARTIFICIAL INTELLIGENCE

In
his new novel, Machines Like Me, the novelist Ian McEwan tells the
story, set in an alternate history in England in 1982, of a man who
buys a humanoid robot.

… One
of the first things Adam says when he is switched on is “I don’t
feel right,” and, typically for cautionary tales about robots, it
only gets worse from there.

… Based
on an archive of ethnographic research on various societies, known as
the Human Relations Area Files, the research has revealed seven
“plausible candidates for universal moral rules” that are
constant among 60 societies randomly chosen around the world, from
bands of hunter-gatherers to industrialized nation states. These
behaviours were regarded as “uniformly positive,” without
exception, in every society studied, from Ojibwa, Tlingit and Copper
Inuit in North America, to Somali, Korea, Highland Scots, Serbs, and
Lau Fijians internationally.

The
rules are: to allocate resources to kin; be loyal to groups; be
reciprocal in altruism; be brave, strong, heroic and dominant like a
hawk; be humble, subservient, respectful and obedient like a dove; be
fair in dividing resources; and recognize property rights.

… McEwan’s
novel opens with a quotation from a Rudyard Kipling poem about the
terrifying promise of the industrial age: “But
remember, please, the Law by which we live, / We are not built to
comprehend a lie…”

The
line that follows in Kipling’s poem seems equally grim today, in
the age of AI, now that robots threaten to live up to all the
good and evil of human behaviour: “We
can neither love nor pity nor forgive. / If you make a slip in
handling us you die!”

The
National Institute of Standards and Technology (NIST) and The
Information Technology and Innovation Foundation (ITIF) Center for
Data Innovation hosted a discussion on setting standards and
oversight for artificial intelligence. Among the panelists were
representatives from federal agencies working on scientific standards
as well as researchers and technology developers working for firms in
the artificial intelligence space. They talked about the benefits to
setting technological standards early for both private companies and
government agencies, and ways the two could work together to expedite
standards.

It’s
a start. Ethics will be a large part of my Security
Compliance class this summer.

… Artificial
intelligence (AI) has the potential to transform our life and work,
but it also raises some thorny ethical questions. That’s why a
team of professors from three different colleges at San Francisco
State University have created a new graduate certificate program in
ethical AI for students who want to gain a broader perspective on
autonomous decision-making.

The
program is one of just a handful focusing on AI ethics nationally and
is unique in its collaborative approach involving the College of
Business, Department of Philosophy and Department of Computer
Science.

… Courses
for the certificate will begin this fall with a philosophy class
focusing on the idea of responsibility, which will also give some
historical context for modern AI and discuss its impacts on labor.

… In
another course, students will learn about how businesses can act
ethically and will consider their responsibility to ensure that
technology — for instance, facial recognition — doesn’t
interfere with the rights of others.

Thursday, May 30, 2019

The
Treasury department called in police this week after the opposition
National Party released parts of the government's annual budget,
which was not due for release until Thursday.

At
the time, Treasury Secretary Gabriel Makhlouf said his department had
fallen victim to a "systematic" and "deliberate"
hack, rejecting "absolutely" any suggestion the information
had been accidentally posted online.

He
was forced into an embarrassing backdown Thursday after police found
no evidence that illegal activity was behind the leak.

"On
the available information, an unknown person or persons appear to
have exploited a feature in the website search tool but... this does
not appear to be unlawful," Makhlouf said in a statement.

He said Treasury prepared a "clone"
website ahead of the Budget's release but did
not realise that entering specific search terms on it revealed
embargoed information. [Did
they test it? Bob]

Interesting question. Do you want an employee who
can’t learn? I am a fan, but I suspect some lawyers might not be.

Would your average Internet user be any more
vigilant against phishing scams if he or she faced the real
possibility of losing their job after
falling for one too many of these emails? Recently, I met
someone at a conference who said his employer had in fact terminated
employees for such repeated infractions. As this was the first time
I’d ever heard of an organization actually doing this, I asked some
phishing experts what they thought (spoiler alert: they’re not fans
of this particular teaching approach).

Another Computer Security resource. If you
misidentify it, you probably won’t secure it properly.

… Understanding the evolving health data
ecosystem presents new challenges for policymakers and industry.
There is an increasing need to better understand and document the
stakeholders, the emerging data types and their uses.

The Future of Privacy Forum (FPF) and the
Information Accountability Foundation (IAF) partnered to form the
FPF-IAF Joint Health Initiative in 2018. Today, the Initiative is
releasing A Taxonomy of Definitions for the Health Data Ecosystem;
the publication is intended to enable a more nuanced, accurate, and
common understanding of the current state of the health data
ecosystem.

In practice, the
proposal suggests a technique which would require encrypted messaging
services — such as WhatsApp — to direct a message to a third
recipient, at the same time as sending it to its intended user.
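What that means mechanically can be sketched as follows (an illustrative model only, not WhatsApp's or GCHQ's real protocol; all names are hypothetical): in group messaging, a copy of each message is encrypted for every listed recipient key, and the "ghost" proposal amounts to the service silently appending one more key.

```python
# Illustrative sketch of the "ghost protocol" concern: the service
# adds an extra recipient key to the fan-out, so one more party can
# read the message while the sender's UI shows nothing unusual.

def encrypt_for(recipient_key, plaintext):
    # Stand-in for real public-key encryption.
    return f"enc[{recipient_key}]({plaintext})"

def fan_out(message, recipients, ghost=None):
    """Server-side delivery: one ciphertext per recipient key."""
    keys = list(recipients)
    if ghost is not None:
        keys.append(ghost)   # added silently; participants are never told
    return {k: encrypt_for(k, message) for k in keys}

normal = fan_out("meet at 6", ["alice_key", "bob_key"])
ghosted = fan_out("meet at 6", ["alice_key", "bob_key"], ghost="le_key")
assert set(ghosted) - set(normal) == {"le_key"}  # extra copy, invisible to users
```

The signatories' objection is visible in the sketch: the encryption itself is untouched, but the trust model breaks, because users can no longer know who the recipient list actually contains.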

… In
an open
letter to
GCHQ (Government Communications Headquarters), 47 signatories
including Apple,
Google and WhatsApp have jointly urged the U.K. cybersecurity agency
to abandon its plans for a so-called “ghost protocol.”

It
comes after intelligence officials at GCHQ proposed a way in which
they believed law enforcement could access end-to-end encrypted
communications without
undermining the privacy, security or confidence of other users.

… The
pair said it would be “relatively easy for a service provider to
silently add a law enforcement participant to a group chat or call.”

Following
the one-year anniversary of the coming into effect of the GDPR, Hogan
Lovells’ Privacy and Cybersecurity practice has prepared a
compilation of key GDPR-related developments of the past 12 months.
The compilation covers regulatory guidance, enforcement actions,
court proceedings, and various reports and materials.

(Related) When will we hit the tipping point,
where the EU goes after these people?

… Apps
often presented users with a consent notice screen and then ignored
the user’s choice, transmitting the data regardless of the user’s
preference.

“The
regulation exists, but is there a body in Belgium looking at the
mobile ecosystem to try and determine which calls from a device are
legitimate or not – hell no, that’s not happening,” said Grant
Simmons, head of client analytics at Kochava.

But even if
there was, this stuff is hard to catch by design, Simmons said.
Around 30% of the data calls transmitted to and from devices are
encrypted and when fraudsters enter the picture, they usually use
transitory domains to obscure their actions, including data
harvesting.

Wednesday, May 29, 2019

It
has been nearly two years since I reported on the dangers of
creating a law enforcement-run Mental
Health Assessment (MHA) program.
In Texas, police use MHAs to “screen” every person they have
arrested for mental illness.

But
the TAPS Act, first introduced in
January, would take law enforcement screenings to a whole new level.
It would create a national threat assessment of children and adults.

A
professor at the University of Colorado’s Colorado Springs campus
led a project that secretly
snapped photos of
more than 1,700 students, faculty members and others walking
in public
more than six years ago in an effort to enhance facial-recognition
technology.

The photographs were posted online as a
dataset that could be publicly downloaded from 2016 until this past
April.

Earlier
this month, Bloomberg published
an article about
an unfolding lawsuit over investments lost by an algorithm. A Hong
Kong tycoon lost more than $20 million after entrusting part of his
fortune to an automated platform. Without a legal framework to sue
the technology, he placed the blame on the nearest human: the man who
sold it to him.

It’s
the first known case over automated investment losses, but not the
first involving the liability of algorithms. In March of 2018, a
self-driving Uber struck
and killed a
pedestrian in Tempe, Arizona, sending another case to court. A year
later, Uber was exonerated of
all criminal liability, but the safety driver could face charges of
vehicular manslaughter instead.

Both
cases tackle one of the central questions we face as automated
systems trickle into every aspect of society: Who or what deserves
the blame when an algorithm causes harm? Who or what actually gets
the blame is a different yet equally important question.

What
if instead of political parties, presidents, prime ministers, kings,
queens, armies, autocrats, and who knows what else, we turned
everything over to expert systems? What if we engineered them to be
faithful, for example, to one simple principle: "human beings
regardless of age, gender, race, origin, religion, location,
intelligence, income or wealth, should be treated equally, fairly and
consistently"?

Here’s
some dialogue – enabled by natural language processing (NLP) –
with an expert system named “Decider” that operates from that
single principle (you can imagine how it might behave if the
principle was completely different – the opposite of equal and
fair). The principle is supported by the data and probabilities the
system collects and interprets. The “inferences” made by Decider
are pre-programmed. In today’s political parlance, Decider is
“liberal.” Imagine the one the American TEA Party or Freedom
Caucus might engineer – which is the essence of this post: first
principles rule.

Will
we ever agree to just one set of rules on the ethical development of
artificial intelligence?

Australia
is among 42 countries that last
week signed up to
a new set of policy guidelines for the development of artificial
intelligence (AI) systems.

Yet
Australia has its own draft
guidelines for ethics in AI out
for public consultation, and a number of other countries and industry
bodies have developed their own AI guidelines.

… Responding
to these fears and a number of very real problems with narrow AI, the
OECD recommendations are the latest of a number of projects and
guidelines from governments and other bodies around the world that
seek to instil an ethical approach to developing AI.

About Me

I live in Centennial, Colorado. (I'm not actually 100 years old, but I hope to be some day.) I'm an independent computer consultant, specializing in solving problems that traditional IT personnel tend to have difficulty with... That includes everything from inventorying hardware & software, to converting systems & data, to training end-users. I particularly enjoy taking on projects that IT has attempted several times before with no success. I also teach at two local Universities: everything from Introduction to Microcomputers through Business Continuity and Security Management. My background includes IT Audit, Computer Security, and a variety of unique IT projects.