Archive for February, 2016

Check Point alerts eBay to an online sales platform vulnerability which allows cyber criminals to distribute phishing and malware campaigns.

eBay, the online auction and e-commerce giant, has locations in over 30 countries and serves more than 150 million active users worldwide. As a successful company with a massive customer base, it’s no surprise that the corporation has been the target of many cyberattacks.

Check Point has discovered a severe vulnerability in eBay’s online sales platform. This vulnerability allows attackers to bypass eBay’s code validation and remotely execute malicious JavaScript code against targeted eBay users. If this flaw is left unpatched, eBay’s customers will continue to be exposed to potential phishing attacks and data theft.

An attacker could target eBay users by sending them a legitimate page that contains malicious code. Customers can be tricked into opening the page, and the code will then be executed by the user’s browser or mobile app, leading to multiple ominous scenarios that range from phishing to binary download.

After the flaw was discovered, Check Point disclosed details of the vulnerability to eBay on Dec 15, 2015. However, on January 16, 2016, eBay stated that it had no plans to fix the vulnerability. The exploit demo is still live. (emphasis added)
…

What other result did you expect?

Where is the incentive for eBay? It’s eBay customers being damaged, not eBay.

No liability for software defects = No incentive for improvement of software security.

For our third project here at NYC Data Science, we were tasked with writing a web scraping script in Python. Since I spend (probably too much) time on Reddit, I decided that it would be the basis for my project. For the uninitiated, Reddit is a content-aggregator, where users submit text posts or links to thematic subforums (called “subreddits”), and other users vote them up or down and comment on them. With over 36 million registered users and nearly a million subreddits, there is a lot of content to scrape.
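Daniel's actual scraper isn't reproduced in this excerpt, but the shape of the data is easy to show. A minimal sketch, assuming Reddit's public JSON listing format (the `data.children[*].data` nesting returned by endpoints such as `/r/<subreddit>/top.json`); the sample payload below is made up:

```python
# Minimal sketch: pull post titles, scores, and comment counts out of a
# Reddit listing payload (the JSON shape served by e.g.
# https://www.reddit.com/r/python/top.json).

def parse_listing(listing):
    """Return (title, score, num_comments) tuples from a Reddit listing dict."""
    return [
        (c["data"]["title"], c["data"]["score"], c["data"]["num_comments"])
        for c in listing["data"]["children"]
    ]

# A tiny inline sample mimicking the API's structure:
sample = {
    "data": {
        "children": [
            {"data": {"title": "TIL something", "score": 1024, "num_comments": 87}},
            {"data": {"title": "Ask me anything", "score": 310, "num_comments": 45}},
        ]
    }
}

for title, score, comments in parse_listing(sample):
    print(f"{score:>6}  {comments:>4}  {title}")
```

In a live scraper the `sample` dict would come from an HTTP GET with a descriptive `User-Agent` header, which Reddit requires for API access.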
…

Daniel walks through his scraping and display of the resulting data.

In case you are short on encrypted core dumps, you can fill up a stack of DVDs with randomized and encrypted Reddit posts. Just something to leave for unexpected visitors to find.

Be sure to use a Sharpie to copy Arabic letters on some of the DVDs.

Who knows? Someday your post to Reddit, in its encrypted form, may serve to confound and confuse the FBI.

This month, we feature yet another patent that takes an ordinary business practice and does it on a computer. Our winner is US Patent No. 8,738,435, titled “Method and apparatus for presenting personalized content relating to offered products and services.” As you might guess from its title, the patent claims the idea of sending a personalized marketing message using a computer.

Claim 1 of the patent is representative (the claims are supposed to describe the boundaries of the invention). It claims a “method of generating a set of personalized communications … with a computer system.” The steps are described at an extremely high level of abstraction, including things such as “accessing a computer-accessible storage medium” using “identifying content to distinguish each person from other persons.” The patent plainly proposes using ordinary computers to achieve this task. In fact, the “preferred embodiment of the apparatus” is illustrated in Figure 1 and includes fascinating, non-obvious details like a “display,” a “keyboard,” and a “mouse or pointing device.”
…

…
These charts use data from the annual Wiretap Reports published by the Administrative Office of the US Courts to display the portion of total reported wiretap orders that have been undermined by encryption technologies from 2001 to 2014. (This dataset only examines domestic wiretap requests. Information relating to wiretap requests regulated by the Foreign Intelligence Surveillance Act of 1978 is not available.) The charts show that, contrary to popular assumption, encryption technologies have only complicated a minuscule percentage of reported wiretap investigations in recent years.
…

Of the 147 wiretaps that encountered encryption (0.45% of the 32,539 wiretap orders reported), 132 were deciphered, so only 15, or 0.046%, went undeciphered by the government.
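For readers who want to check the arithmetic, a two-line sketch using the figures as quoted above:

```python
# Reproducing the percentages from the Wiretap Report figures above.
total_wiretaps = 32539                   # reported wiretap orders, 2001-2014
encrypted = 147                          # wiretaps that encountered encryption
deciphered = 132
undeciphered = encrypted - deciphered    # 15

print(f"encrypted:    {encrypted / total_wiretaps:.2%}")
print(f"undeciphered: {undeciphered / total_wiretaps:.3%}")
```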

For the sake of 15 wiretaps, the FBI and friends would strip over 300 million people of their privacy.

Did someone say that marijuana is no longer illegal in Washington, D.C.?

That’s the only explanation I can imagine for 15 wiretap cases being more important than 300+ million citizens.

…
Ledgett poked his finger at the media even more explicitly. “We track when our foreign intelligence targets talk about the security of their communication,” he said. “And we see a growing number of them, because of what’s in the press about the value of encryption, moving towards that.”

The implication of these statements—that media reports are somehow optimized to help terrorists be better at evading law enforcement—is a dangerous one. Yes, of course terrorists read. But Brennan and Ledgett’s statements situate media support for strong encryption on the side of terrorism. Neither intelligence leader recognized how members of their own communities might also benefit from media reports about encryption. In fact, neither Brennan nor Ledgett bothered to acknowledge that their own agencies rely on encryption as a crucial security measure.

Neither Brennan nor Ledgett specified which reports were believed to be frequently dog-eared by ISIS squatters, but that doesn’t matter. Extremists are interested in privacy tools, and media reports on privacy tools. Saying that they read about which tools to use is just saying that any group with goals attempts to find information that will help achieve those goals. Implying that media reports are aiding and abetting the enemy—not to mention the notion that reports highlighting privacy protections are somehow devious—is just unfair and chilling.

Kate’s right: blaming the media for extremists’ use of encryption is far-fetched, not to mention “…just unfair and chilling.”

But what we are witnessing is the projection (Jung) of ignorance of the speakers onto others.

The witnesses making these statements have about as much expertise in encryption as I do in break dancing. Which is to say, none at all.

They are sock puppets who “learn” about encryption, or at least buzz phrases about encryption, from public media.

Or, in the case of the FBI, from an FBI training manual that shows images of hard-wired connections in a phone junction box.

Comey now wonders why encryption is allowed to defeat such measures. You have to wonder if Comey has noticed that cellphones are not trailed by long phone lines.

Other than summarizing their nonsensical statements, the news media in general should not interview, quote or report any statement by these witnesses without a disclaimer that such witnesses are by definition incompetent on the question at hand.

Members of Congress can continue to bill and coo with those of skills equal to their own, but the public should be forewarned of their ignorance.

Games are a great way to get started writing programs in any language. In Emacs Lisp, they’re even better—you use the same exact techniques to extend Emacs, configuring it to do what you want. In this presentation, Zachary Kanfer livecodes tic-tac-toe. You’ll see how to create a basic major mode, make functions, store state, and set keybindings.

North Carolina State University (NCSU) Libraries recently debuted a free, web-based social media archives toolkit designed to help cultural heritage organizations develop social media collection strategies, gain knowledge of ways in which peer institutions are collecting similar content, understand current and potential uses of social media content by researchers, assess the legal and ethical implications of archiving this content, and develop techniques for enriching collections of social media content at minimal cost. Tools for building and enriching collections include NCSU’s Social Media Combine—which pre-assembles the open source Social Feed Manager, developed at George Washington University for Twitter data harvesting, and NCSU’s own open source Lentil program for Instagram—into a single package that can be deployed on Windows, OSX, and Linux computers.

“By harvesting social media data (such as Tweets and Instagram photos), based on tags, accounts, or locations, researchers and cultural heritage professionals are able to develop accurate historical assessments and democratize access to archival contributors, who would otherwise never be represented in the historical record,” NCSU explained in an announcement.

“A lot of activity that used to take place as paper correspondence is now taking place on social media—the establishment of academic and artistic communities, political organizing, activism, awareness raising, personal and professional interactions,” Jason Casden, interim associate head of digital library initiatives, told LJ. Historians and researchers will want to have access to this correspondence, but unlike traditional letters, this content is extremely ephemeral and can’t be collected retroactively like traditional paper-based collections.

“So we collect proactively—as these events are happening or shortly after,” Casden explained.

…

I saw this too late today to install but I’m sure I will be posting about it later this week!

Do you see the potential of such tooling for defeating would-be censors of Twitter and other social media?

I’m not going to quote any of Stewart’s post because I want to test your powers of deduction on his likely position.

Here is the one clue I will give you:

Stewart A. Baker is a partner in the Washington office of Steptoe & Johnson LLP. He returned to the firm following 3½ years at the Department of Homeland Security as its first Assistant Secretary for Policy. He earlier served as general counsel of the National Security Agency.

That blurb appears next to the post itself. I have no way to verify that information but accept it as true for the purposes of my question:

“On tap at the brewpub. A nice dark red color with a nice head that left a lot of lace on the glass. Aroma is of raspberries and chocolate. Not much depth to speak of despite consisting of raspberries. The bourbon is pretty subtle as well. I really don’t know that find a flavor this beer tastes like. I would prefer a little more carbonization to come through. It’s pretty drinkable, but I wouldn’t mind if this beer was available.”

Besides the overpowering bouquet of raspberries in this guy’s beer, this review is remarkable for another reason. It was produced by a computer program instructed to hallucinate a review for a “fruit/vegetable beer.” Using a powerful artificial-intelligence tool called a recurrent neural network, the software that produced this passage isn’t even programmed to know what words are, much less to obey the rules of English syntax. Yet, by mining the patterns in reviews from the barflies at BeerAdvocate.com, the program learns how to generate similarly coherent (or incoherent) reviews.

The neural network learns proper nouns like “Coors Light” and beer jargon like “lacing” and “snifter.” It learns to spell and to misspell, and to ramble just the right amount. Most important, the neural network generates reviews that are contextually relevant. For example, you can say, “Give me a 5-star review of a Russian imperial stout,” and the software will oblige. It knows to describe India pale ales as “hoppy,” stouts as “chocolatey,” and American lagers as “watery.” The neural network also learns more colorful words for lagers that we can’t put in print.

This particular neural network can also run in reverse, taking any review and recognizing the sentiment (star rating) and subject (type of beer). This work, done by one of us (Lipton) in collaboration with his colleagues Sharad Vikram and Julian McAuley at the University of California, San Diego, is part of a growing body of research demonstrating the language-processing capabilities of recurrent networks. Other related feats include captioning images, translating foreign languages, and even answering e-mail messages. It might make you wonder whether computers are finally able to think.
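Training a full recurrent network is beyond the scope of a blog post, but the "no notion of words" point is easy to demonstrate with the RNN's far simpler ancestor: a character-level Markov chain. This is a sketch of the general idea, not the authors' model:

```python
import random
from collections import defaultdict

# Not a neural network: a character-level Markov chain. Like the
# char-RNN described above, it models text one character at a time,
# with no concept of "words", learning only which characters tend to
# follow which short contexts.

def train(text, order=3):
    """Map each length-`order` context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80):
    """Grow `seed` by repeatedly sampling a successor for the last context."""
    order = len(seed)
    out = seed
    for _ in range(length):
        successors = model.get(out[-order:])
        if not successors:
            break
        out += random.choice(successors)
    return out

corpus = ("pours a deep amber with a thick head. aroma of raspberries and "
          "chocolate. taste is hoppy with a subtle bourbon finish. ")
model = train(corpus * 3, order=3)
print(generate(model, "pou"))
```

A recurrent network replaces the fixed-length lookup table with a learned hidden state, which is what lets it stay coherent over whole sentences instead of three characters at a time.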
…
(emphasis in original)

An enthusiastic introduction to, and projection of, the future of recurrent neural networks! Quite a bit of projection, in fact.

My immediate thought was what a time saver a recurrent neural network would be for “evaluation” requests that appear in my inbox with alarming regularity.

What about a service that accepts forwarded emails and generates a review for the book, seller, hotel, travel, etc., which is returned to you for cut-n-paste?

That would be about as “intelligent” as the amount of attention most of us devote to such requests.

You could set the service to mimic highly followed reviewers so over time you would move up the ranks of reviewers.

I mention Amazon, hotel, and travel reviews, but those are just low-hanging fruit. You could do journal book reviews with a different data set.

Near the end of the post the authors write:

…
In this sense, the computer-science community is evaluating recurrent neural networks via a kind of Turing test. We try to teach a computer to act intelligently by training it to imitate what people produce when faced with the same task. Then we evaluate our thinking machine by seeing whether a human judge can distinguish between its output and what a human being might come up with.

While the very fact that we’ve come this far is exciting, this approach may have some fundamental limitations. For instance, it’s unclear how such a system could ever outstrip the capabilities of the people who provide the training data. Teaching a machine to learn through imitation might never produce more intelligence than was present collectively in those people.

One promising way forward might be an approach called reinforcement learning. Here, the computer explores the possible actions it can take, guided only by some sort of reward signal. Recently, researchers at Google DeepMind combined reinforcement learning with feed-forward neural networks to create a system that can beat human players at 31 different video games. The system never got to imitate human gamers. Instead it learned to play games by trial and error, using its score in the video game as a reward signal.
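The learn-from-a-reward-signal idea scales down to a toy you can run in milliseconds. A sketch of tabular Q-learning on a five-state corridor (nothing like DeepMind's system, but the same principle: no demonstrations, only a score):

```python
import random

# Toy tabular Q-learning on a five-state corridor. Actions move left or
# right; the only reward is +1 for reaching the rightmost state. The
# agent never sees a demonstration, only its own reward signal.
N_STATES = 5
ACTIONS = (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                  # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])   # exploit
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# Greedy policy per non-terminal state: +1 means "move right".
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy moves right in every state; the reward propagated backward from the goal, with no human play to imitate.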
…

Instead of asking whether computers can think, the more provocative question is whether people think at all during a large range of daily activities.

Consider it as the Human Intelligence Test (HIT).

How much “intelligence” does it take to win a video game?

Eye/hand coordination to be sure, attention, but what “intelligence” is involved?

Computers may “eclipse” human beings at non-intelligent activities, as a shovel “eclipses” our ability to dig with our bare hands.

This paper describes the language DIMPL, a domain-specific language (DSL) for discrete mathematics. Based on Haskell, DIMPL carries all the advantages of a purely functional programming language. Besides containing a comprehensive library of types and efficient functions covering the areas of logic, set theory, combinatorics, graph theory, number theory and algebra, the DSL also has a notation akin to one used in these fields of study. This paper also demonstrates the benefits of DIMPL by comparing it with C, Fortran, MATLAB and Python — languages that are commonly used in mathematical programming.

From the comparison, solving simultaneous linear equations:
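The paper's snippet isn't reproduced above, but for a sense of the comparison, here is the same task in plain Python: a from-scratch Gaussian elimination sketch (in practice one would reach for `numpy.linalg.solve`):

```python
# Solving simultaneous linear equations A x = b with Gaussian
# elimination and partial pivoting, in pure Python.

def solve(a, b):
    """Solve A x = b for a square system given as nested lists."""
    n = len(a)
    a = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```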

Much more is promised in the future for DIMPL:

Future versions of DIMPL will have an extended library comprising modules for lattices, groups, rings, monoids and other discrete structures. They will also contain additional functions for the existing modules such as Graph and Tree. Moreover, incorporating Haskell’s support for pure parallelism and explicit concurrency in the library functions could significantly improve the efficiency of some functions on multi-core machines.

Which returns a screen in three sections, left to right: Browse Collections; 21 Matching Collections (“Add collections to your project to compare and retrieve their data”); and a world map (“navigate by grabbing the view”).

Under Browse Collections:

In addition to searching for keywords, you can narrow your search through this list of terms. Click Platform to expand the list of platforms (still in a tour box).

Next step:

Now click Terra to select the Terra satellite.

Comment: Wondering how I will know which “platform” or “instrument” to select? There may be more/better documentation but I haven’t seen it yet.

NASA’s Common Metadata Repository (CMR) is a high-performance, high-quality repository for earth science metadata records that is designed to handle metadata at the Concept level. Collections and Granules are common metadata concepts in the Earth Observation (EO) world, but this can be extended out to Visualizations, Parameters, Documentation, Services, and more. The CMR metadata records are supplied by a diverse array of data providers, using a variety of supported metadata standards, including:

Initially, designers of the CMR considered standardizing all CMR metadata to a single, interoperable metadata format – ISO 19115. However, NASA decided to continue supporting multiple metadata standards in the CMR — in response to concerns expressed by the data provider community over the expense involved in converting existing metadata systems to systems capable of generating ISO 19115. In order to continue supporting multiple metadata standards, NASA designed a method to easily translate from one supported standard to another and constructed a model to support the process. Thus, the Unified Metadata Model (UMM) for EOSDIS metadata was born as part of the EOSDIS Metadata Architecture Studies (MAS I and II) conducted between 2012 and 2013.

…

What is the UMM?

The UMM is an extensible metadata model which provides a ‘Rosetta stone’ or cross-walk for mapping between CMR-supported metadata standards. Rather than create mappings from each CMR-supported metadata standard to each other, each standard is mapped centrally to the UMM model, thus reducing the number of translations required from n × (n − 1) to 2n.
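The counting argument is worth checking. A quick sketch (the function names are mine, not NASA's):

```python
# Counting translation paths: direct pairwise mappings between n
# standards versus routing every mapping through a central model,
# which is the UMM approach.

def pairwise(n):
    return n * (n - 1)   # one directed mapping per ordered pair of standards

def hub(n):
    return 2 * n         # one mapping to the hub and one from it, per standard

for n in (3, 5, 10):
    print(f"n={n}: pairwise={pairwise(n)}, via hub={hub(n)}")
```

Note the break-even point is n = 3; with ten supported standards the hub needs only 20 mappings instead of 90, and every new standard adds 2 mappings rather than 2 × (n − 1).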

Here is the mapping graphic:

Granted, the profiles don’t make the basis for mappings explicit, but the mappings have the same impact post-mapping as a topic map would post-merging.

The site could use better documentation for the interface and data, at least in the view of this non-expert in the area.

Thoughts on documentation for the interface or making the mapping more robust via use of a topic map?

Last month I was over in Norway doing training for ProgramUtvikling, the good folks who run the NDC conferences I’ve become so attached to. I was running my usual “Hack Yourself First” workshop which is targeted at software developers who’d like to get up to speed on the things they should be doing to protect their apps against today’s online threats. Across the two days of training, I cover 16 separate discrete modules ranging from SQL injection to password cracking to enumeration risks, basically all the highest priority security bits modern developers need to be thinking about. I also cover how to inspect, intercept and control API requests between rich client apps such as those you find on a modern smart phone and the services running on the back end server. And that’s where things got interesting.

What the workshop attendee ultimately discovered was that not only could he connect to his LEAF over the internet and control features independently of how Nissan had designed the app, he could control other people’s LEAFs. I subsequently discovered that friend and fellow security researcher Scott Helme also has a LEAF so we recorded the following video to demonstrate the problem. I’m putting this up front here to clearly put into context what this risk enables someone to do then I’ll delve into the details over the remainder of the post:
…

Troy Hunt, located in Australia, controls a Nissan LEAF located in Northern England via a web browser.

Heater on/off and driving (trip) history, nothing more serious, but worldwide accessibility via a VIN is an odd design decision.
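Troy Hunt's post has the actual request details; here the flaw is reduced to its essence. Everything below (the handler, the route in the docstring, the VIN) is hypothetical, illustrating only the anti-pattern of using a guessable identifier as the sole credential:

```python
# The anti-pattern in the abstract: an API whose only "credential" is a
# guessable identifier. Names, routes, and the VIN are all made up,
# not Nissan's actual service.

def handle_request(vin, command, registered_vins):
    """Simulated '/vehicle/<vin>/<command>' handler: the VIN alone gates access."""
    if vin in registered_vins:   # no password, no session, no ownership check
        return f"OK: {command} sent to {vin}"
    return "ERROR: unknown vehicle"

fleet = {"SJNFAAZE0U6012345"}    # hypothetical VIN; a largely sequential
                                 # identifier like this can be enumerated
print(handle_request("SJNFAAZE0U6012345", "climate-on", fleet))
print(handle_request("SJNFAAZE0U6099999", "climate-on", fleet))
```

The fix is equally abstract: bind vehicle commands to an authenticated owner session, never to an identifier printed on the windshield.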

You won’t be able to try this, as Nissan is reported to have taken the service offline as of 25 February 2016.

Don’t be too disappointed. Bad design and implementation decisions are repeated over and over again. Perhaps you will find the next one first.

While Twitter sets up its Platonic panel of censors (Plato’s Republic, Books 2/3)*, I am wondering if conduct/truth will be a defense to censorship for accounts that make positive posts about the Islamic State?

I ask because of a message suggesting accounts (Facebook?) might be suspended for posts following these rules:

Do no use foul language and try to not get in a fight with people

Do not write too much for people to read

Make your point easy as not everyone has the same knowledge as you about the Islamic state and/or Islam

Use a VPN…

Use an account that you don’t really need because this is like a martydom operation, your account will probably be banned

Post images supporting the Islamic state

Give positive facts about the Islamic state

Share Islamic state video’s that show the mercy and kindness of the Islamic state towards Muslims, and/or showing Muslim’s support towards the Islamic state. Or any videos that will attract people to the Islamic state

Prove rumors about the Islamic state false

Give convincing Islamic information about topics discussed like the legitimacy of the khilafa, killing civilians of the kuffar, the takfeer made on Arab rules, etc.

Or simply just post a short quick comment showing your support like “dawlat al Islam baqiaa” or anything else (make sure ppl can understand it

Remember to like all the comments you see that are supporting the Islamic state with all your accounts!

If we were to re-cast those as rules of conduct, not specific to the Islamic State, where N is the issue under discussion:

Do not use foul language and try not to get in a fight with people

Do not write too much for people to read

Make your point easy [to understand] as not everyone has the same knowledge as you about N

Post images supporting N

Give positive facts about N

Share N videos that show the mercy and kindness of N, and/or showing A support towards N. Or any videos that will attract people to N

Prove rumors about N false

Give convincing N information about topics discussed

Or simply just post a short, quick comment showing your support or anything else (make sure people can understand it)

Remember to like all the comments you see that are supporting N with all your accounts!

Is there something objectionable about those rules when N = Islamic State?

As far as being truthful, say for example claims by the Islamic State that Arab governments are corrupt, we can’t use a corruption index that lists Qatar at #22 (Denmark is #1 as the least corrupt) and Saudi Arabia at #48, when Bloomberg lists Qatar and Saudi Arabia as scoring zero (0) on budget transparency.

There are more corrupt governments than Qatar and Saudi Arabia, the failed state of Somalia for example, and perhaps the Sudan. Still, I wouldn’t ban anyone for saying both Qatar and Saudi Arabia are cesspools of corruption. They don’t match the structural corruption in Washington, D.C. but it isn’t for lack of trying.

Can an account that follows the rules of behavior outlined above be banned for truthful posts?

I think we all know the answer but I’m interested in seeing if Twitter will admit to censoring factually truthful information.

* Someone very dear to me objected to my reference to Twitterists (sp?) as Stalinists. It was literary hyperbole and so not literally true. Perhaps “Platonic guardians” will be more palatable. Same outcome, just a different moniker.

The European Journalism Centre (EJC) has just launched LEARNO.NET – a new, cutting-edge resource for anyone interested in creating powerful data-driven or digital stories.

Following the success of the EJC’s renowned Data Journalism Handbook, and its associated ‘Doing Journalism with Data’ MOOC, LEARNO.NET is the Centre’s latest step in its promotion of best practice journalism and digital storytelling.

LEARNO.NET currently features three data-driven training courses, run by world-renowned journalists and data practitioners.

If you have witnessed the casual ease with which the professional liars now running for the office of U.S. President toss off one lie after another, reporting on any of the possible regimes will require a range of data skills.

Check out the courses available now and watch for new courses in the near future!

PS: Casually pass this along to colleagues who may not admit to looking for journalism data skills courses.

The Metasploit Framework has a lot of exploit modules, including buffer overflow attacks, browser exploits, web application vulnerabilities, backdoor exploits, bot pwnage tools, etc. Exploit developers and contributors to the framework have shared a wide variety of interesting and very useful material.

For this article, we will talk about utilizing Metasploit to hack and take over common backdoors and botnets. We will not go into all of the modules, but we will be mentioning some modules that could be of use to your future penetration testing job or work. We will not be doing exploit development so no need to get your debuggers and code editors.

If you are new to the Metasploit Framework, particularly the msfconsole (the framework’s command-line interface), you don’t need to worry: this is a simple step-by-step guide that also covers how to use an exploit module. One thing needed for this tutorial is that you have Metasploit installed on your attacker machine, so I would advise you to use Kali Linux or BackBox Linux, which are penetration-testing distributions with Metasploit pre-installed.

For our target machine, I also suggest that you install Metasploitable 2 on your favorite virtualization platform like VMWare or VirtualBox. Metasploitable 2 is a vulnerable Ubuntu Linux virtual machine which is good for practicing your Metasploit-fu skills because it is built to be insecure and to be your pet.
…

Except for U.S. Presidential primaries and their “debates,” one of which was captured by closed-captioning as:

most of the major sports are between seasons.

No better time than the present to begin acquiring and/or polishing your Metasploit skills!

The Insecure Internet of Things (IIoT) requires the digital equivalent of a “church key” for entry:

Last week’s post (The truth about vaccinations: Your physician knows more than the University of Google) sparked a very lively discussion, with comments from several people trying to persuade me (and the other readers) that their paper disproved everything that I’d been saying. While I encourage you to go read the comments and contribute your own, here I want to focus on the much larger issue that this debate raised: what constitutes scientific authority?

It’s not just a fun academic problem. Getting the science wrong has very real consequences. For example, when a community doesn’t vaccinate children because they’re afraid of “toxins” and think that prayer (or diet, exercise, and “clean living”) is enough to prevent infection, outbreaks happen.

“Be skeptical. But when you get proof, accept proof.” –Michael Specter

What constitutes enough proof? Obviously everyone has a different answer to that question. But to form a truly educated opinion on a scientific subject, you need to become familiar with current research in that field. And to do that, you have to read the “primary research literature” (often just called “the literature”). You might have tried to read scientific papers before and been frustrated by the dense, stilted writing and the unfamiliar jargon. I remember feeling this way! Reading and understanding research papers is a skill which every single doctor and scientist has had to learn during graduate school. You can learn it too, but like any skill it takes patience and practice.

I want to help people become more scientifically literate, so I wrote this guide for how a layperson can approach reading and understanding a scientific research paper. It’s appropriate for someone who has no background whatsoever in science or medicine, and based on the assumption that he or she is doing this for the purpose of getting a basic understanding of a paper and deciding whether or not it’s a reputable study.
…

Copy each of Jennifer’s steps, as you follow them, into a notebook along with the results of applying them. That will help you remember the rules and capture your understanding of the paper.

In 1781, Christian Wilhelm von Dohm, a civil servant, political writer and historian in what was then Prussia, published a two-volume work entitled Über die Bürgerliche Verbesserung der Juden (“On the Civic Improvement of Jews”). In it, von Dohm laid out the case for emancipation for a people systematically denied the rights granted to most other European citizens. At the heart of his treatise lay a simple observation: The universal principles of humanity and justice that framed the constitutions of the nation-states then establishing themselves across the continent could hardly be taken seriously until those principles were, in fact, applied universally. To all.

Von Dohm was inspired to write his treatise by his friend, the Jewish philosopher Moses Mendelssohn, who wisely supposed that even though basic and universal principles were involved, there were advantages to be gained in this context by having their implications articulated by a Christian. Mendelssohn’s wisdom is reflected in history: von Dohm’s treatise was widely circulated and praised, and is thought to have influenced the French National Assembly’s decision to emancipate Jews in France in 1791 (Mendelssohn was particularly concerned at the poor treatment of Jews in Alsace), as well as laying the groundwork for an edict that was issued on behalf of the Prussian Government on the 11th of March 1812:

“We, Frederick William, King of Prussia by the Grace of God, etc. etc., having decided to establish a new constitution conforming to the public good of Jewish believers living in our kingdom, proclaim all the former laws and prescriptions not confirmed in this present edict to be abrogated.”

To gain the full rights due to a Prussian citizen, Jews were required to declare themselves to the police within six months of the promulgation of the edict. And following a proposal put forward in von Dohm’s treatise (and later approved by David Friedländer, another member of Mendelssohn’s circle who acted as a consultant in the drawing up of the edict), any Jews who wanted to take up full Prussian citizenship were further required to adopt a Prussian Nachname.

What we call in English, a ‘surname.’

From the vantage afforded by the present day, it is easy to assume that names as we now know them are an immutable part of human history. Since one’s name is ever-present in one’s own life, it might seem that fixed names are ever-present and universal, like mountains, or the sunrise. Yet in the Western world, the idea that everyone should have an official, hereditary identifier is a very recent one, and on examination, it turns out that the naming practices we take for granted in modern Western states are far from ancient.

…

A very deep dive on personal names across the centuries and the significance attached to them.

Not an easy read but definitely worth the time!

It may help you to understand why U.S.-centric name forms are so annoying to others.


In business, the promise of opportunity is often tempered with the reality of risk.

This formula holds true not only for those working to build and sustain a business, but also for those looking to victimise one.
The story told in our 2016 Global Economic Crime Survey is one with which we are all too familiar: economic crime continues to forge new paths into business, regulatory compliance adds stress and burden to responsible businesses, and an increasingly complicated threat landscape challenges the balance between resources and growth. The moral of this story is not new, but is one that may have been forgotten in our haste to succeed in today’s fast-paced global marketplace.

This work needs to be embedded in your day-to-day decision-making, and supported by strong corporate ethics. Preparing your company for sustained success in today’s world is no longer an exercise in mapping out plans that live out their days in dusty binders on a director’s shelf. Preparation today is a living, breathing exercise; one that must be constantly tweaked, practiced and tended to, so that it is ready when threats become realities.

Understanding the vision of your company and strategically mapping out plans for both growth and defence – plans based on your unique threat landscape and profile – will be the difference between realizing your opportunity or allowing those who want to victimise you to capitalise on theirs.

It wasn’t entirely clear to me what was meant by “economic crime,” aside from possibly a different method of making a profit than the complaining enterprise. It’s all capitalism. Crime is just capitalism that doesn’t follow a particular set of local rules.

I am bolstered in that belief by Fig. 2 from the paper:

I have always puzzled over bribery & corruption, for example. Why piece-work corruption is any worse than structural corruption (the sort preferred in the United States) has never been clear to me.

It isn’t clear how useful you will find the report, especially given graphics like the one found at Fig. 3:

I have puzzled over it and the accompanying text for some time.

Does the 49% for financial services represent its percentage of the 36% of global crime rate? Seems unlikely because government/state owned follows at 44% and retail & consumer at 43%, which puts us at over 136%, without including the other categories.

Is it that 49% of financial services are economic crimes? That’s possible but I would hardly expect them to claim that title.
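One reading that makes the arithmetic work, sketched below with my own interpretation (not anything the report confirms): if each figure is the share of respondents *within that sector* who reported experiencing economic crime, then the figures are independent rates, and summing them across sectors is meaningless.

```python
# Hypothetical reading of Fig. 3: each number is a per-sector
# victimization rate, i.e. the fraction of that sector's respondents
# reporting at least one economic crime.
sector_rates = {
    "financial services": 0.49,
    "government/state owned": 0.44,
    "retail & consumer": 0.43,
}

# Summing independent rates across sectors is meaningless as a share
# of anything -- it happily exceeds 100%.
total = sum(sector_rates.values())
print(f"naive sum: {total:.0%}")

# But each rate is coherent on its own: out of 100 financial services
# respondents, about 49 reported an incident.
for sector, rate in sector_rates.items():
    print(f"{sector}: {rate:.0%} of that sector's respondents affected")
```

If that reading is right, the 136%+ sum isn’t a contradiction, just a category error in adding the columns; the report’s text never says so explicitly, though.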

The Internet Archive’s amazing Pulp Magazine Archive includes all 176 issues of If, a classic science fiction magazine that ran from 1952 to 1974.

Included in the collection are all of the issues edited by Frederik Pohl from 1966-68, three years that netted him three consecutive Best Editor Hugo awards. If‘s Pohl run included significant stories by Larry Niven, Harlan Ellison, Samuel Delany, Alexei Panshin and Gene Wolfe; it was the serialized home of such Heinlein novels as The Moon is a Harsh Mistress, as well as Laumer’s Retief stories and Saberhagen’s Berserker stories.

Caveat: Don’t be confused by the errant page numbering in the table of contents (TOC). I have checked (and you can too) the authorities against the pages where cited. I’m not sure why the TOC is wrong but it is. Total length is sixty-five (65) pages.

To entice you to read the document in full, here is the first paragraph:

This is not a case about one isolated iPhone. Rather, this case is about the Department of Justice and the FBI seeking through the courts a dangerous power that Congress and the American people have withheld: the ability to force companies like Apple to undermine the basic security and privacy interests of hundreds of millions of individuals around the globe. The government demands that Apple create a back door to defeat the encryption on the iPhone, making its users’ most confidential and personal information vulnerable to hackers, identity thieves, hostile foreign agents, and unwarranted government surveillance. The All Writs Act, first enacted in 1789 and on which the government bases its entire case, “does not give the district court a roving commission” to conscript and commandeer Apple in this manner. Plum Creek Lumber Co. v. Hutton, 608 F.2d 1283, 1289 (9th Cir. 1979). In fact, no court has ever authorized what the government now seeks, no law supports such unlimited and sweeping use of the judicial process, and the Constitution forbids it. (emphasis in original)

Now that’s an opening paragraph!

I especially like the “…to conscript and commandeer Apple in this manner” language.

Even if you have to go “blah, blah,” over the case citations, do read this memorandum.

It will leave you with no doubt the FBI has abandoned even lip service to the Constitution and our system of government.

Sure, digital design apps might be finally coming into their own, but there’s still nothing better than pen and paper. Here at Co.Design, we’re notebook fetishists, so we recently asked a slew of designers about their favorites—and whether they would mind giving us a look inside.

It turns out they didn’t. Across multiple disciplines, almost every designer we asked was thrilled to tell us about their notebook of choice and give us a look at how they use it. Our operating assumption going in was that most designers would probably be pretty picky about their notebooks, but this turned out not to be true: While Muji and Moleskine notebooks were the common favorites, some even preferred loose paper.

But what makes the notebooks of designers special isn’t so much what notebook they use, as how they use them. Below, enjoy a peek inside the working notebooks of some of the most prolific designers today—as well as their thoughts on what makes a great one.
…

Images of analog notebooks with links to sources!

I met a chief research scientist at a conference who had a small pad of paper for notes, contact information, etc. Could have had the latest gadget, etc., but chose not to.

That experience wasn’t unique as you will find from reading John’s post.

Notebooks, analog ones, have fewer presumptions and limitations than any digital notebook.

In the wake of non-stop news about identity theft, malware, ransomware, and all manner of information security catastrophes, Americans have educated themselves and are fully leveraging today’s powerful technologies to keep themselves safe… not.

While 67% told Morar Consulting they “would like extra layers of privacy,” far fewer use the technological tools now available to them. That’s the top-line finding of a brand-new survey of 2,000 consumers by Morar on behalf of the worldwide VPN provider “Hide My Ass!”

A key related finding: 63% of survey respondents have encountered online security issues. But, among the folks who’ve been bitten, just 56% have permanently changed their online behavior afterwards. (If you don’t learn the “hard way,” when do you learn?)

According to Morar, there’s still an odd disconnect between the way some people protect themselves offline and what they’re willing to do on the web. 51% of respondents would publicly post their email addresses, 26% their home addresses, and 21% their personal phone numbers.

…

Does this result surprise you?

If not:

How should we judge projects/solutions that presume conscious effort by users to:

Encode data (think linked data and topic maps)

Create maps between data sets

Create data in formats not their own

Use data vocabularies not their own

Use software not their own

Improve search results

etc.

I mention “search results” as it is commonly admitted that search results are, at best, a pig’s breakfast. The amount of improvement possible over current search results is too large to even be guesstimated.

Rather than beating the dead horse of “…users ought to…” (yes, they should, but they don’t), it is better to ask “Now what?”

Why not try metrics?

Monitor user interactions with information and test systems to anticipate those needs. Both are measurable categories.

Consider that back in the day, indexes never indexed everything. Magazine indexes omitted ads for example. Could have been indexed but indexing ads didn’t offer enough return for the effort required.

Why not apply that model to modern information systems? Yes, we can create linked data or other representations for everything in every post, but if no one uses 90% of that encoding, we have spent a lot of money for very little gain.

Yes, that means we will be discriminating against less often cited authors, for example. And your point?

The preservation of Greek literature discriminated against authors whose work wasn’t important enough for someone to invest in preserving it.

Of course, we may not lose data in quite the same way, but if it can’t be found, isn’t that the same as being lost?

Let’s apply metrics to information retrieval and determine what return justifies the investment to make information easily available.

Between the looming enslavement of programmers by the FBI, U.S. presidential candidates competing for how much they hate foreigners/segments of the U.S. population, not to mention poor media reporting on the same, it’s hard to find good news to report.

But, today, thanks to a Facebook post by Simon St. Laurent (of O’Reilly fame), I can point you to:

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en-masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
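The abstract’s classification framing can be sketched in a drastically simplified form. The toy grid below is my own illustration: PlaNet actually uses adaptive, multi-scale S2 cells sized by photo density, not fixed tiles.

```python
def latlon_to_cell(lat, lon, deg_per_cell=5.0):
    """Map a coordinate to a coarse grid-cell label.

    A toy stand-in for PlaNet's adaptive multi-scale cells: the earth
    is cut into fixed 5-degree tiles, turning geolocation into an
    ordinary classification problem over tile labels.
    """
    cols = int(360 / deg_per_cell)          # 72 columns of longitude
    row = int((lat + 90) // deg_per_cell)   # 0..35
    col = int((lon + 180) // deg_per_cell)  # 0..71
    return row * cols + col

# Geotagged training images become (pixels, cell_label) pairs; a deep
# network is then trained to predict the label from pixels alone.
paris = latlon_to_cell(48.86, 2.35)
london = latlon_to_cell(51.51, -0.13)
sydney = latlon_to_cell(-33.87, 151.21)
print(paris, london, sydney)
```

The payoff of the classification framing, per the abstract, is that the network can blend many weak cues (vegetation, signage, architecture) into a probability distribution over cells instead of needing an exact retrieval match.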

You might think that with GPS engaged, the location of images is a done deal.

Not really. You can be facing in any direction from a particular GPS location and in a dynamic environment, analysts or others don’t have the time to sort out which images are relevant from those that are just noise.

Urban warfare does not occur on a global scale, which brings home the lesson that it isn’t the biggest data set but the most relevant and timely one that matters.

Relevantly oriented images and feeds are a natural outgrowth of this work. Not to mention pairing those images with other relevant data.

Seeking to bolster its effort to counter ISIS messaging on social media, the Obama administration is assembling something of a high-tech dream team to battle the terrorist group online.

At a meeting conducted at the Justice Department on Wednesday, executives from Apple, Twitter, Snapchat, Facebook, MTV and Buzzfeed offered their input to top counter intelligence officials, according to an industry source familiar with the meeting.

Nick Rasmussen, Director of the National Counterterrorism Center, told the group the administration is making strides in combating ISIS on social media, where the terrorist army has inspired potential lone wolf assailants to carry out attacks.

“We’ve seen more aggressive takedowns across social media platforms, which is a really good thing,” Rasmussen was quoted as saying by the source at the gathering.

Apple’s participation in the meeting is notable, given the high-tech firm’s clash with the administration over the company’s use of encryption to shield customers’ data on its popular smart phones.
…

Why does the government need tech giants in order to counter Islamic State messaging with the truth?

Oh, I forgot, the U.S. government wants to counter the truthful messages of the Islamic State with lies and false narratives of hope.

Is that too extreme?

Think about it. What do you think the chances are for regime change in say Saudi Arabia? Kuwait? UAE? Any other corrupt and oppressive Arab governments you care to name? Chances of change in any toady governments supported by the United States or Russia?

If you are going to spin narratives of “hope,” shouldn’t those be true narratives of hope? Or is it enough that false narratives of hope fit into U.S. plans to continue to be a colonial power in the Middle East?

The government asked a court to order Apple to create a unique version of iOS that would bypass security protections on the iPhone Lock screen. It would also add a completely new capability so that passcode tries could be entered electronically.

This has two important and dangerous implications:

First, the government would have us write an entirely new operating system for their use. They are asking Apple to remove security features and add a new ability to the operating system to attack iPhone encryption, allowing a passcode to be input electronically. This would make it easier to unlock an iPhone by “brute force,” trying thousands or millions of combinations with the speed of a modern computer.

We built strong security into the iPhone because people carry so much personal information on our phones today, and there are new data breaches every week affecting individuals, companies and governments. The passcode lock and requirement for manual entry of the passcode are at the heart of the safeguards we have built in to iOS. It would be wrong to intentionally weaken our products with a government-ordered backdoor. If we lose control of our data, we put both our privacy and our safety at risk.

Second, the order would set a legal precedent that would expand the powers of the government and we simply don’t know where that would lead us. Should the government be allowed to order us to create other capabilities for surveillance purposes, such as recording conversations or location tracking? This would set a very dangerous precedent.
…
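Apple’s “brute force” point above is easy to illustrate with back-of-the-envelope arithmetic. The try rates below are my assumptions for illustration, not Apple’s or the government’s figures:

```python
# Why electronic passcode entry matters: it turns brute-forcing a
# short numeric passcode from impractical into trivial.

def worst_case_seconds(digits, tries_per_second):
    """Time to exhaust every numeric passcode of the given length."""
    combinations = 10 ** digits
    return combinations / tries_per_second

# Manual entry: optimistically one try every five seconds (0.2/s),
# ignoring iOS's escalating delays and wipe-after-10-tries option.
by_hand = worst_case_seconds(4, 0.2)
print(f"4 digits by hand: {by_hand / 3600:.1f} hours")

# Electronic entry at an assumed 1,000 tries/second.
fast_4 = worst_case_seconds(4, 1000)
fast_6 = worst_case_seconds(6, 1000)
print(f"4 digits electronically: {fast_4:.0f} seconds")
print(f"6 digits electronically: {fast_6 / 60:.1f} minutes")
```

Whatever the exact rate, the order-of-magnitude gap is the point: removing the manual-entry requirement and the retry safeguards is what makes “trying thousands or millions of combinations” feasible at all.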

The first sentence captures all that needs to be said for me:

The government asked a court to order Apple to create a unique version of iOS that would bypass security protections on the iPhone Lock screen.

Suddenly, the “land of the free,” becomes “land of the free, so long as you don’t cross the FBI…”

The government can certainly ask Apple to undertake such a project but Apple (and you) have an absolute right to decline. For any reason.

The FBI wants your freedom to choose to be at the sufferance of the FBI.

That doesn’t fit with my notion of liberty under the U.S. Constitution.