In recent days many people have shown interest in making sure the Wayback Machine has copies of the web pages they care about most. These saved pages can be cited, shared, linked to – and they will continue to exist even after the original page changes or is removed from the web.

There are several ways to save pages and whole sites so that they appear in the Wayback Machine. Here are 6 of them.
…

There’s a new bill in Congress that would threaten your right to free expression online. If that weren’t enough, it could also put small Internet businesses in danger of catastrophic litigation.

Don’t let its name fool you: the Stop Enabling Sex Traffickers Act (SESTA, S. 1693) wouldn’t help punish sex traffickers. What the bill would do (PDF) is expose any person, organization, platform, or business that hosts third-party content on the Internet to the risk of overwhelming criminal and civil liability if sex traffickers use their services. For small Internet businesses, that could be fatal: with the possibility of devastating litigation costs hanging over their heads, we think that many entrepreneurs and investors will be deterred from building new businesses online.

Make no mistake: sex trafficking is a real, horrible problem. This bill is not the way to address it. Lawmakers should think twice before passing a disastrous law and endangering free expression and innovation.
…

Rather than focusing on a known location for sex trafficking, Congress is putting “…small Internet businesses…” in harm’s way.

The large content providers, Facebook, Google, Twitter, already have the financial and technical resources to meet the demands of SESTA. So in a very real sense, SESTA isn’t anti-sex trafficking but rather anti-small Internet business, in addition to being a threat to free speech.

WHEN BIOLOGISTS SYNTHESIZE DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect not humans nor animals but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”
…

Very high marks for imaginative delivery, but at its core, this is shellcode in data.
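The "shellcode in data" framing reduces to an encoding exercise: DNA is a four-letter alphabet, so it can carry arbitrary bytes. A minimal sketch using an arbitrary two-bits-per-base mapping (an illustration, not the encoding the UW team used):

```python
# Sketch: round-tripping a byte stream through a DNA base sequence.
# The mapping below is arbitrary, chosen only for illustration.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: n for n, b in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases, most significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    """Decode each run of four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"\x90\x90\xcc"       # harmless stand-in for attacker shellcode
strand = bytes_to_dna(payload)
print(strand)                   # GCAAGCAATATA
assert dna_to_bytes(strand) == payload
```

Once synthesized and sequenced, the decoded bytes are whatever the sequencing software makes of them, which is exactly the attack surface the researchers exploited.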

Question: Can you name any data pipelines that have been subjected to adversarial pressure?

The reading of DNA and transposition into machine format reminds me that a data pipeline could ingest apparently non-hostile data and as a result of transformations/processing, produce hostile data at some point in the data stream.
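To make that concrete, here is a hedged sketch of such a pipeline; the stage names and the "validation" rule are invented, and the payload is a harmless stand-in. The point is that the hostile bytes do not exist until a routine transformation materializes them mid-stream:

```python
import base64

# Sketch: a two-stage pipeline where the raw input looks innocuous at the
# boundary, but a routine decoding stage produces hostile content mid-stream.

def stage_validate(record: str) -> str:
    # Naive gate: only printable ASCII passes. Encoded payloads pass easily.
    assert record.isascii() and record.isprintable()
    return record

def stage_decode(record: str) -> bytes:
    # Routine transformation -- this is where hostile bytes first appear.
    return base64.b64decode(record)

record = base64.b64encode(b"\x90\x90\xcc<script>alert(1)</script>").decode()
checked = stage_validate(record)    # looks clean at the boundary check
decoded = stage_decode(checked)     # hostile content now in the data stream
print(decoded)
```

Any validation applied only at ingest, before transformations run, misses this class of threat entirely.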

Facebook has censored Fine Art Bourse’s (FAB) adverts for the online auction house’s relaunch sale of erotic art on the grounds of indecency. In 2015, FAB, then based in London, went into receivership shortly before its first sale after running out of funds due to a delay in building the technology required to run the cloud-based auctions. But the founder, Tim Goodman, formerly owner of Bonhams & Goodman and then Sotheby’s Australia under license, has now relaunched the firm in his native Australia, charging a 5% premium to both buyers and sellers and avoiding VAT, GST and sales tax on service charges by running auctions via a server in Hong Kong.

When Goodman attempted to run a series of adverts for his relaunch sale of Erotic, Fetish, & Queer Art & Objects on 12 September, Facebook barred the adverts citing its policy against “adverts that depict nudity” including “the use of nudity for artistic or educational purposes”.
…

Remember to use #FCensor for all Facebook censorship. (#GCensor for Google censoring, #TCensor for Twitter censoring.)

Every act of censorship by Facebook, and every person employed as a censor, is a splash of red ink on the books at Facebook. Red ink that has no profit center offset.

Facebook can and should erase the red ink of censorship from its books.

Providing users with effective self-help filtering, letting them “follow” filters created by others, and empowering advertisers to filter the content in proximity to their ads (for an extra $fee) would move censoring costs (read: Facebook red ink) onto users and advertisers, improving Facebook’s bottom line.

What sane investor would argue with that outcome?

Better filters, and the ability to “follow” them, would enable users to create their own custom echo chambers. Oh, yeah, that’s part of the problem, isn’t it? Zuckerberg and his band of would-be messiahs want the power to decide what the public sees.

I’ll pass. How about you?

Investors! Use your stock and dollars to save all of us from a Zuckerberg view of the world. Thanks!

The Python programming language is a widely used tool for basic and advanced research in astronomy. Watch this amazing presentation to learn the specifics of how astronomers use Python. (Jake Vanderplas, speaker)

The only downside to the presentation is that Vanderplas mentions software being on Github but doesn’t supply the URLs.

For example, if you go to Github and search for “Large Synoptic Survey Telescope” you get two (2) results:

Both “hits” are relevant but what did we miss?

Try searching for LSSTC.

There are twelve (12) “hits” with the first one being highly relevant and completely missed by the prior search.

Two lessons here:

Search is a lossy way to navigate Github.

Do NOT wave your hands in the direction of Github for software. Give URLs.
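For the URL-averse, the searches above can also be run against GitHub's public v3 search API, which makes the lossiness easy to measure. A standard-library sketch (the network-calling helper is included but not invoked here, since it is rate-limited):

```python
import json
import urllib.parse
import urllib.request

# Sketch: querying GitHub's repository search API instead of eyeballing the
# web UI, so both searches from the post can be compared programmatically.

def build_search_url(query: str, per_page: int = 30) -> str:
    params = urllib.parse.urlencode({"q": query, "per_page": per_page})
    return f"https://api.github.com/search/repositories?{params}"

def search_repos(query: str) -> list:
    """Return repository URLs for a query (network call, rate-limited)."""
    req = urllib.request.Request(
        build_search_url(query),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return [item["html_url"] for item in json.load(resp)["items"]]

print(build_search_url('"Large Synoptic Survey Telescope"'))
print(build_search_url("LSSTC"))
# search_repos("LSSTC") would return the hits discussed above.
```

Even with the API, the underlying index is the same, so the lesson stands: search finds what the repo owner happened to name, not what you meant.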

Science-specific tools and extensions for SQL. Currently the project contains user defined functions (UDFs) for MySQL including spatial geometry, astronomy specific functions and mathematical functions. The project was motivated by the needs of the Large Synoptic Survey Telescope (LSST).

Not an attack on Tor per se, but it defeated the use of Tor nonetheless.

Can you spot the suspect’s error?

From the complaint:

…F. Law Enforcement Identifies “Brian Kil’s” True IP Address

51. On June 9, 2017, the Honorable Debra McVicker Lynch authorized the execution of a Network Investigative Technique “NIT” (defined in Cause No. 1:17-mj-437) in order to ascertain the IP address associated with Brian Kil and Victim 2.

52. As set forth in the search warrant application presented to Judge Lynch, the FBI was authorized by the Court to add a small piece of code (NIT) to a normal video file produced by Victim 2, which did not contain any visual depictions of any minor engaged in sexually explicit activity. As authorized, the FBI then uploaded the video file containing the NIT to the Dropbox.com account known only to Kil and Victim 2. When Kil viewed the video containing the NIT on a computer, the NIT would disclose the true IP address associated with the computer used by Kil.

…

57. When Kil viewed the video containing the NIT on a computer the NIT disclosed the true IP address associated with the computer used by Kil.
…

Where did “Kil’s” opsec fail?

“Kil” viewed content of unknown origin on a networked computer.

“Kil” thought the content originated with Victim 2, but all remote content on the Internet should be treated as being of unknown origin.

No one knows if you are a dog on the Internet just as you don’t know if the FBI sent the video you are playing.

Examine content of unknown origin on non-networked computers, and keep it there. Copy text only to networked systems. If you need the original content, well, you have been warned.

One of the most entertaining and informative presentations you are likely to see this year! It includes an opening tip for those common digital safes found in hotel rooms.

From the description:

We’ve built a $200 open source robot that cracks combination safes using a mixture of measuring techniques and set testing to reduce crack times to under an hour. By using a motor with a high count encoder we can take measurements of the internal bits of a combination safe while it remains closed. These measurements expose one of the digits of the combination needed to open a standard fire safe. Additionally, ‘set testing’ is a new method we created to decrease the time between combination attempts. With some 3D printing, Arduino, and some strong magnets we can crack almost any fire safe. Come checkout the live cracking demo during the talk!

This won’t work against quality safes in highly secure environments, but most government safes are low-bidder/low-quality and outside highly secure environments. Use a tool appropriate to the security environment.

If you are working with the Apache TinkerPop™ framework for graph computing, you might want to produce, edit, and save graphs, or parts of graphs, outside the graph database. To accomplish this, you might want a standardized format for a graph representation that is both machine- and human-readable. You might want features for easily moving between that format and the graph database itself. You might want to consider using GraphSON.

GraphSON is a JSON-based representation for graphs. It is especially useful to store graphs that are going to be used with TinkerPop™ systems, because Gremlin (the query language for TinkerPop™ graphs) has a GraphSON Reader/Writer that can be used for bulk upload and download in the Gremlin console. Gremlin also has a Reader/Writer for GraphML (XML-based) and Gryo (Kryo-based).

Unfortunately, I could not find any sort of standardized documentation for GraphSON, so I decided to compile a summary of my research into a single document that would help answer all the questions I had when I started working with it.
…

Bookmark or, better yet, copy-n-paste “Vertex Rules and Conventions” to print on one page and then print “Edge Rules and Conventions” on the other.

Could possibly get both on one page but I like larger font sizes. 😉

Type in the “Example GraphSON Structure” to develop finger knowledge of the format.
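As a starting point for that finger knowledge, here is a sketch of a single vertex built as a plain Python dict and serialized; the field names follow the adjacency-list GraphSON conventions as I understand them, so check them against the “Vertex Rules and Conventions” before relying on them:

```python
import json

# Sketch: one GraphSON-style vertex as a plain dict, then serialized to a
# single JSON line. Field names are my reading of the adjacency-list
# convention (id/label/properties/outE); verify against the post's tables.

vertex = {
    "id": 1,
    "label": "person",
    "properties": {
        "name": [{"id": 0, "value": "marko"}],
    },
    "outE": {
        "knows": [{"id": 7, "inV": 2}],   # edge id and the in-vertex id
    },
}

line = json.dumps(vertex, separators=(",", ":"))
print(line)
```

Because each vertex is one self-contained JSON object, a file of one-vertex-per-line is trivially streamable, which is what makes bulk upload and download in the Gremlin console practical.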

Way back in the 1980s, when I was a young naval officer, the Global Positioning System was still in its experimental stage. If you were in the middle of the ocean on a cloudy night, there was pretty much only one reliable way to know where you were: Loran-C, the hyperbolic low-frequency radio navigation system. Using a global network of terrestrial radio beacons, Loran-C gave navigators aboard ships and aircraft the ability to get a fix on their location within a few hundred feet by using the difference in the timing of two or more beacon signals.
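The timing-difference principle is simple enough to sketch: a fixed difference in arrival times fixes a difference in distances to the two beacons, which confines the receiver to one hyperbola; a second beacon pair gives the crossing hyperbola, and the fix is their intersection. The numbers below are illustrative:

```python
# Sketch of the hyperbolic principle behind Loran-C: each microsecond of
# timing difference between two beacons corresponds to ~300 m of extra
# distance to the farther beacon.

C = 299_792_458.0  # free-space signal speed, m/s (ground waves run slightly slower)

def range_difference(delta_t_us: float) -> float:
    """Extra distance (meters) to the farther beacon for a given delay (microseconds)."""
    return C * delta_t_us * 1e-6

# A 10 microsecond timing difference locks the receiver onto the hyperbola
# where the far beacon is about 3 km farther away than the near one.
print(round(range_difference(10.0)))   # 2998
```

Timing resolution is therefore everything: sub-microsecond measurement is what got Loran-C fixes down to a few hundred feet.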

An evolution of World War II technology (LORAN was an acronym for long-range navigation), Loran-C was considered obsolete by many once GPS was widely available. In 2010, after the US Coast Guard declared that it was no longer required, the US and Canada shut down their Loran-C beacons. Between 2010 and 2015, nearly everyone else shut down their radio beacons, too. The trial of an enhanced Loran service called eLoran that was accurate within 20 meters (65 feet) also wrapped up during this time.

But now there’s increasing concern about over-reliance on GPS in the navigational realm. GPS signals from satellites are relatively weak, so they are prone to interference, accidental or deliberate: portable equipment can easily drown them out (jamming) or broadcast fake signals that make GPS receivers report incorrect positions (spoofing). The same is true of the Russian-built GLONASS system.
…

Sean focuses on the “national security” needs for a backup to GPS but it isn’t North Koreans, Chinese or Russians who are using Stingray devices against US citizens.

No, those are all in use by agents of the federal and/or state governments. Ditto for anyone spoofing your GPS in the United States.

You need a GPS backup, but your adversary is quite close to home.

The new protocol is called eLoran, and Sean has a non-technical overview of it.

You would need unusual requirements to justify a private eLoran, but so you have an idea of what is possible:

…
eLoran technology has been available since the mid-1990s and is still available today. In fact, the state-of-the-art of eLoran continues to advance along with other 21st-century technology. eLoran system technology can be broken down into a few simple components: transmitting site, control and monitor site, differential reference station site and user equipment.

Modern transmitting site equipment consists of a high-power, modular, fully redundant, hot-swappable and software configurable transmitter, and sophisticated timing and control equipment. Standard transmitter configurations are available in power ranges from 125 kilowatts to 1.5 megawatts. The timing and control equipment includes a variety of external timing inputs to a remote time scale, and a local time scale consisting of three ensembled cesium-based primary reference standards. The local time scale is not directly coupled to the remote time scale. Having a robust local time scale while still monitoring many types of external time sources provides a unique ability to provide proof-of-position and proof-of-time. Modern eLoran transmitting site equipment is smaller, lighter, requires less input power, and generates significantly less waste heat than previously used Loran-C equipment.

The core technology at a differential eLoran reference station site consists of three differential eLoran reference station or integrity monitors (RSIMs) configurable as reference station (RS) or integrity monitor (IM) or hot standby (RS or IM). The site includes electric field (E-field) antennas for each of the three RSIMs.

Modern eLoran receivers are really software-defined radios, and are backward compatible with Loran-C and forward compatible, through firmware or software changes. ASF tables are included in the receivers, and can be updated via the Loran data channel. eLoran receivers can be standalone or integrated with GNSS, inertial navigation systems, chip-scale atomic clocks, barometric altimeters, sensors for signals-of-opportunity, and so on. Basically, any technology that can be integrated with GPS can also be integrated with eLoran.
… Innovation: Enhanced Loran, GPS World (May, 2015)

Some people are happy with government controlled services. Other people, not so much.

An intensive review of Internet data has established that Google has severed links between the World Socialist Web Site and the 45 most popular search terms that previously directed readers to the WSWS. The physical censorship implemented by Google is so extensive that of the top 150 search terms that, as late as April 2017, connected the WSWS with readers, 145 no longer do so.

These findings make clear that the decline in Google search traffic to the WSWS is not the result of some technical issue, but a deliberate policy of censorship. The fall took place in the three months since Google announced on April 25 plans to promote “authoritative web sites” above those containing “offensive” content and “conspiracy theories.”

Because of these measures, the WSWS’s search traffic from Google has fallen by two-thirds since April.

The WSWS has analyzed tens of thousands of search terms, and identified those key phrases and words that had been most likely to place the WSWS on the first or second page of search results. The top 45 search terms previously included “socialism,” “Russian revolution,” “Flint Michigan,” “proletariat,” and “UAW [United Auto Workers].” The top 150 results included the terms “UAW contract,” “rendition” and “Bolshevik revolution.” All of these terms are now blocked.
… (emphasis in original)

How can a single person understand what’s going on in a collection of millions of documents? This is an increasingly common problem: sifting through an organization’s e-mails, understanding a decade worth of newspapers, or characterizing a scientific field’s research. Topic models are a statistical framework that help users understand large document collections: not just to find individual documents but to understand the general themes present in the collection.

This survey describes the recent academic and industrial applications of topic models with the goal of launching a young researcher capable of building their own applications of topic models. In addition to topic models’ effective application to traditional problems like information retrieval, visualization, statistical inference, multilingual modeling, and linguistic understanding, this survey also reviews topic models’ ability to unlock large text collections for qualitative analysis. We review their successful use by researchers to help understand fiction, non-fiction, scientific publications, and political texts.
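To see the machinery the survey assumes, here is a toy collapsed Gibbs sampler for LDA (the statistical workhorse behind most topic models) on a four-document corpus. Real work belongs in a library such as gensim or MALLET; the corpus and hyperparameters here are invented for illustration:

```python
import random
from collections import defaultdict

random.seed(0)

docs = [
    "strike union contract wage".split(),
    "union wage strike labor".split(),
    "galaxy star telescope survey".split(),
    "telescope star galaxy orbit".split(),
]
K, ALPHA, BETA = 2, 0.1, 0.01          # topics, doc prior, word prior
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

# z[d][i]: topic of word i in doc d, plus the count tables Gibbs needs
z = [[random.randrange(K) for _ in d] for d in docs]
doc_topic = [[0] * K for _ in docs]
topic_word = [defaultdict(int) for _ in range(K)]
topic_total = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        doc_topic[d][t] += 1
        topic_word[t][w] += 1
        topic_total[t] += 1

for _ in range(200):                   # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]                # remove the current assignment
            doc_topic[d][t] -= 1
            topic_word[t][w] -= 1
            topic_total[t] -= 1
            # resample proportionally to (doc fit) * (topic fit for this word)
            weights = [
                (doc_topic[d][k] + ALPHA)
                * (topic_word[k][w] + BETA) / (topic_total[k] + V * BETA)
                for k in range(K)
            ]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t                # add the new assignment back
            doc_topic[d][t] += 1
            topic_word[t][w] += 1
            topic_total[t] += 1

for k in range(K):
    top = sorted(vocab, key=lambda w: -topic_word[k][w])[:3]
    print(f"topic {k}:", top)
```

On this corpus the sampler usually separates the labor vocabulary from the astronomy vocabulary, which is the "general themes" behavior the abstract describes, scaled down to sixteen words.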

Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world–they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm–Robust Physical Perturbations (RP2)– that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.

…
In this paper, we introduced Robust Physical Perturbations (RP2), an algorithm that generates robust, physically realizable adversarial perturbations. Previous algorithms assume that the inputs of DNNs can be modified digitally to achieve misclassification, but such an assumption is infeasible, as an attacker with control over DNN inputs can simply replace it with an input of his choice. Therefore, adversarial attack algorithms must apply perturbations physically, and in doing so, need to account for new challenges such as a changing viewpoint due to distances, camera angles, different lighting conditions, and occlusion of the sign. Furthermore, fabrication of a perturbation introduces a new source of error due to a limited color gamut in printers.

We use RP2 to create two types of perturbations: subtle perturbations, which are small, undetectable changes to the entire sign, and camouflage perturbations, which are visible perturbations in the shape of graffiti or art. When the Stop sign was overlayed with a print out, subtle perturbations fooled the classifier 100% of the time under different physical conditions. When only the perturbations were added to the sign, the classifier was fooled by camouflage graffiti and art perturbations 66.7% and 100% of the time respectively under different physical conditions. Finally, when an untargeted poster-printed camouflage perturbation was overlayed on a Right Turn sign, the classifier was fooled 100% of the time. In future work, we plan to test our algorithm further by varying some of the other conditions we did not consider in this paper, such as sign occlusion.
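The digital attack the authors extend to the physical world rests on one move: perturb each input feature by a small epsilon in the direction that increases the classifier's loss (the "fast gradient sign" idea). A toy sketch against a made-up logistic model, not the paper's RP2 algorithm:

```python
import math

# Toy fast-gradient-sign perturbation against a logistic classifier.
# Weights and inputs are invented for illustration.

w = [2.0, -3.0, 1.0]
b = 0.5

def predict(x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-s))   # P(class 1)

def fgsm(x, y_true, eps=0.25):
    # d(loss)/d(x_i) for logistic loss is (p - y) * w_i; step along its sign.
    p = predict(x)
    return [xi + eps * math.copysign(1.0, (p - y_true) * wi)
            for xi, wi in zip(x, w)]

x = [1.0, 0.2, -0.5]
print(predict(x))            # confidently class 1 (~0.80)
x_adv = fgsm(x, y_true=1.0)
print(predict(x_adv))        # flips below 0.5 after the perturbation
```

RP2's contribution is making such perturbations survive the physical world (viewpoints, lighting, printing) and constraining them to look like graffiti, but the underlying gradient step is this simple.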

Excellent work, but my question: Is the inability of the classifier to recognize overlapping images similar to the issues encountered with overlapping markup?

To be sure, overlapping markup is in part an artifice of unimaginative XML rules, since overlapping texts are far more common than non-overlapping texts, especially when talking about critical editions or even differing analyses of the same text.

But beyond syntax, there is the subtlety of treating separate “layers” or stacks of a text as separate and yet tracking the relationship between two or more such stacks, when arbitrary additions or deletions can occur in any of them. Additions and deletions that must be accounted for across all layers/stacks.

I don’t have a solution to offer but pose the question of layers of recognition in hopes that machine learning models can capitalize on the lessons learned about a very similar problem with overlapping markup.
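One lesson from the markup side that may transfer: stand-off annotation keeps each layer as offset ranges outside the text, so layers can overlap freely, and an edit becomes offset arithmetic applied to every layer at once. A minimal sketch with invented layers:

```python
# Sketch of stand-off markup: annotations live outside the text as
# (start, end, layer) ranges, so layers overlap freely -- the structure
# XML trees forbid. Layer names below are invented for illustration.

text = "Just the place for a Snark!"

annotations = [
    (0, 27, "line"),     # metrical layer: the whole verse line
    (21, 26, "name"),    # editorial layer: the proper noun "Snark"
    (9, 26, "phrase"),   # syntactic layer: cuts across the others
]

def apply_insertion(annos, at, length):
    """Shift every layer's offsets to account for inserting `length` chars at `at`."""
    out = []
    for start, end, layer in annos:
        out.append((
            start + length if start >= at else start,
            end + length if end >= at else end,
            layer,
        ))
    return out

shifted = apply_insertion(annotations, at=5, length=4)
print(shifted)   # all three layers stay aligned after the edit
```

This is exactly the bookkeeping the post describes: additions and deletions in one layer must be accounted for across all layers, and offsets make that a mechanical operation rather than a tree surgery problem.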

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

Extremely rich article with nearly four (4) pages of citations.

Reading this paper closely and chasing the citations is a non-trivial task, but you will be prepared to understand and/or participate in the next big neuroscience/AI breakthrough.

FOIA requests can and do uncover unfavorable information about government policies and actions, but far too often after the principals have sought the safety of the grave.

It’s far better to expose and stop ill-considered, even criminal activities in real time, before government adds more blighted lives and deaths to its record.

Traditional leaking involves a leaker, perhaps you, delivering physical or digital copies of data/documents to a reporter. That is, it requires some act on your part (copying, email, snail mail, etc.), which offers the potential to trace the leak back to you.

Have you considered No Fault Leaking (NFL)?

No Fault Leaking requires only a public Wi-Fi and appropriate file sharing permissions on your phone, laptop, tablet.

After arriving at a public Wi-Fi location, turn file sharing on. It’s as simple as that. You don’t know who, if anyone, has copied any files. Before you leave the location, turn file sharing off. (This works best if you have legitimate reasons to have the files in question on your laptop, etc.)

No Fault Leaking changes the role of the media from spoon-fed recipients of data/documents into more active participants in the leaking process.

To that end, ask yourself: Am I a fair weather (no risk) advocate of press freedom or something more?

Sessions has thrown down his gage, declaring war on occasional transparency from government leakers. Indirectly, that war will include members of the media as casualties.

Shakespeare penned the best response for taking up Sessions’ gage:

Cry ‘Havoc,’ and let slip the dogs of war;

In case you don’t know the original sense of “Havoc:”

The military order Havoc! was a signal given to the English military forces in the Middle Ages to direct the soldiery (in Shakespeare’s parlance ‘the dogs of war’) to pillage and chaos. Cry havoc and let slip the dogs of war

It’s on all of us to create enough chaos to protect leakers and members of the media who publish their leaks.

Observations – Not Instructions

Data access: Phishing emails succeed 33% of the time. Do they punish would-be leakers who fall for phishing emails?

Exfiltration: Tracing select documents to a leaker is commonplace. How do you trace an entire server disk? The larger and more systematic the data haul, the greater the difficulty of pinning the leak on particular documents. (Back-to-school specials often include multi-terabyte drives.)

There’s a shortage of facts available concerning this hack of HBO (Home Box Office) but 1.5 terabytes is being thrown around as a scary number for the data loss.

While everyone else oohs and aahs over 1.5 terabytes of data, you can smile knowing that a new Dell XPS 27 sells pre-configured with a 2 terabyte drive for $1899.99, shipping, taxes, blah, blah extra. That’s a mid to low range desktop.

Hackers may have gotten 1.5 terabytes of data but that’s no indication of its worth. How do you count emails with dozens of people on the cc: line? Or multiple versions of the same video?

I don’t have time to watch the majority of HBO content on my legitimate subscription so I’m not interested in the stolen content, assuming it includes anything worth watching.

Of greater interest is forensic analysis of how the hack was performed, because post-Sony, one expects HBO avoided the obvious faults that led to the Sony hack. If they did, perhaps there is something to be learned here.

Unlike the Podesta “hack,” which consisted of losing his email password in a phishing attack. That’s not really a hack, that’s just dumb.

This report offers a review of laws regulating the collection of intelligence in the European Union (EU) and Belgium, France, Germany, Netherlands, Portugal, Romania, Sweden, and the United Kingdom. This report updates a report on the same topic issued in 2014. Because issues of national security are under the jurisdiction of individual EU Member States and are regulated by domestic legislation, individual country surveys provide examples of how the European nations control activities of their intelligence agencies and what restrictions are imposed on information collection. All EU Member States follow EU legislation on personal data protection, which is a part of the common European Union responsibility.

If you are investigating or reporting on breaches of intelligence gathering laws in “the European Union (EU) and Belgium, France, Germany, Netherlands, Portugal, Romania, Sweden, and the United Kingdom,” this will be useful. Otherwise, for the other one hundred and eighty-eight (188), you are SOL.

Other than as a basis for outrage, it’s not clear how useful intelligence gathering laws are in fact. The secrecy of intelligence operations makes practical oversight impossible and if leaks are to be credited, no known intelligence agency obeys such laws other than accidentally.

Moreover, as the U.S. Senate report on torture demonstrates, even war criminals are protected from prosecution in the name of intelligence gathering.

If you have the top 1,000 passwords by popularity, you are close to 91% of the “changed” passwords you will encounter. (That link leads to the top 10,000 passwords if you are looking for completeness.)

You could argue that improving the security of the Internet of Things by 9 percentage points (maybe) isn’t nothing.

True but it is so nearly nothing as to not be worth the effort.

PS: There are solutions to the IoT password issue but someone needs to pay money to spark that discussion.
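The attacker's side of that 91% arithmetic fits in a few lines; a sketch with an illustrative five-entry list standing in for the linked top-1,000:

```python
import hashlib

# Sketch: trying passwords in popularity order against leaked hashes.
# The five entries below are illustrative; the linked lists run to
# 1,000 and 10,000 entries.

TOP_PASSWORDS = ["123456", "password", "12345678", "qwerty", "123456789"]

def sha(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def crack(candidate_hashes, hash_fn=sha):
    """Map each matched hash back to the popular password that produced it."""
    hits = {}
    for guess in TOP_PASSWORDS:
        h = hash_fn(guess)
        if h in candidate_hashes:
            hits[h] = guess
    return hits

leaked = {sha("qwerty"), sha("correct horse battery staple")}
print(crack(leaked))   # recovers "qwerty"; the long passphrase survives
```

The cost per guess is one hash, which is why a device that only blocks the top few hundred passwords has bought itself so little.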

Officials from Department of Defense (DOD) components identified advantages and disadvantages of the “dual-hat” leadership of the National Security Agency (NSA)/Central Security Service (CSS) and Cyber Command (CYBERCOM) (see table). Also, DOD and congressional committees have identified actions that could mitigate risks associated with ending the dual-hat leadership arrangement, such as formalizing agreements between NSA/CSS and CYBERCOM to ensure continued collaboration, and developing a persistent cyber training environment to provide a realistic, on-demand training capability. As of April 2017, DOD had not determined whether it would end the dual-hat leadership arrangement.
…

At first I thought it said “ass-hat” leadership and went back to check. 😉

You can read the recommendations if you are in charge of improving that situation (an unlikely outcome) or take the GAO at its word as a place to mine for leaks.

Are dual-hat arrangements “leak patterns” much like “design patterns” in programming languages?

I ask because identifying “leak patterns,” whether in software (buffer overflows) or recurrent organizational security failures, could be a real boon to hounds and hares alike.

This news was well received by developers and end users alike. Well, at least most people liked the demise of Adobe Flash. But it seems that Adobe Flash still has some fans left.

A group of developers at GitHub have come up with a petition to “save Adobe Flash.” Just a few days after the announcement by Adobe, Juha Lindstedt, a web developer with the username “Pakastin” on GitHub, started a petition calling on Adobe to open source Flash, which he thinks is part of Internet history.
…

Losing Flash altogether will impair access to resources developed using Flash, but even as open source, preserving Flash strikes me as the equivalent of preserving smallpox for later study.

If Adobe does open source the necessary components, it could have value as examples of how not to code an application. Or for testing of code auditing tools.

The XML tree paradigm has several well-known limitations for document modeling and processing. Some of these have received a lot of attention (especially overlap), and some have received less (e.g., discontinuity, simultaneity, transposition, white space as crypto-overlap). Many of these have work-arounds, also well known, but—as is implicit in the term “work-around”—these work-arounds have disadvantages. Because they get the job done, however, and because XML has a large user community with diverse levels of technological expertise, it is difficult to overcome inertia and move to a technology that might offer a more comprehensive fit with the full range of document structures with which researchers need to interact both intellectually and programmatically. A high-level analysis of why XML has the limitations it has can enable us to explore how an alternative model of Text as Graph (TAG) might address these types of structures and tasks in a more natural and idiomatic way than is available within an XML paradigm.

Hyperedges, texts and XML, what more could you need? 😉

This paper merits a deep read and testing by everyone interested in serious text modeling.

You can’t read the text but here is a hypergraph visualization of an excerpt from Lewis Carroll’s “The hunting of the Snark:”

The New Testament, the Hebrew Bible, to say nothing of the Rabbinic commentaries on the Hebrew Bible and centuries of commentary on other texts could profit from this approach.

If that sounds wishful, remember Cluley reports the “technique” used by the prankster was: 1) create an email account in the name of a White House staffer, 2) send an email from that account. This has to be a new low bar for “fake” emails.

This is a malware manipulation environment for OpenAI’s gym. OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. This makes it possible to write agents that learn to manipulate PE files (e.g., malware) to achieve some objective (e.g., bypass AV) based on a reward provided by taking specific manipulation actions.
… (highlight in original)
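The Gym interface the project plugs into boils down to reset() and step(). Here is a random agent run against a stub environment standing in for the malware env, which is not reproduced here; the environment name, action list, and reward rule are all invented for illustration:

```python
import random

# Stub environment mimicking the Gym reset()/step() contract. Reward
# semantics are illustrative: 1.0 when the mutated sample evades the
# stub "AV". Actions and observations are invented stand-ins.

class StubMalwareEnv:
    ACTIONS = ["append_bytes", "rename_section", "pack", "add_import"]

    def reset(self):
        self.steps = 0
        return {"entropy": 6.2}            # stand-in observation

    def step(self, action):
        self.steps += 1
        evaded = action == "pack"          # toy detector: packing fools it
        reward = 1.0 if evaded else 0.0
        done = evaded or self.steps >= 10  # episode cap of ten mutations
        return {"entropy": 6.2}, reward, done, {}

random.seed(1)
env = StubMalwareEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = random.choice(StubMalwareEnv.ACTIONS)  # random policy; RL learns a better one
    obs, reward, done, info = env.step(action)
    total += reward
print("episode reward:", total)
```

Swap the stub for the real environment and the random policy for a learner, and you have the manipulate-PE-to-bypass-AV loop the description sketches.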

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

will vary depending upon your objectives.

From my perspective, it’s better for my AI to decide to reach out or stay its hand, as opposed to relying upon ethical behavior of another AI.

Below are 14 contributions on the topic of decentralization and Linked Data. These were shared in reply to the call for contributions of DeSemWeb2017, an ISWC2017 workshop on Decentralizing the Semantic Web.

We invite everyone to add open reviews to any of these contributions. This ensures fair feedback and transparency of the process.