The Free Thought Project was one of 810 accounts booted off Facebook and Twitter for ‘inauthentic activity’, in what seemed more like a co-ordinated act of political censorship. While the full list hadn’t been released, the main targets appeared to have been groups reporting on corruption within politics and law enforcement – you know, things we have a civic duty to discuss on the Web.
Quoting Brittany Hunter, in a Foundation for Economic Education article: ‘What began with the ban of Alex Jones last summer has since escalated to include the expulsion of hundreds of additional pages, each political in nature. […] one thing is absolutely certain: we need more market competition in the realm of social media.’
What’s particularly worrying is that the Silicon Valley corporations aren’t simply private entities exercising their own rights, as is commonly argued in their defence. They represent a giant oligopoly that has a disproportionate amount of control over the means of communication on the Web, an oligopoly that’s engaged in a co-ordinated suppression of political opinion, an oligopoly with more influence on the democratic system and access to politicians than the Russian state could ever hope to gain.

An alternative is needed to democratise social media. For many people in the know, Minds.com seems to be that alternative. Here’s why:

Minds is production-quality, can be deployed as a finished application, and it’s open source.

Users don’t need to provide personal or identifying information when registering an account.

Minds was developed for content creators.

The developers are working on decentralisation solutions.

Minds.com supports crypto currency and monetisation.

The first point is an interesting one. In Ottman’s opinion, a solution released as proprietary software cannot be a viable alternative, because of transparency or some such. I think he might have conflated administrative integrity with software integrity – that open source projects have been pressured into adopting a uniform ‘Code of Conduct’ demonstrates the problem with that reasoning. Personally I don’t think the open/proprietary thing has much bearing on a platform’s viability as an alternative to Facebook, unless there’s a need to verify claims about certain features, such as whether true end-to-end encryption is being provided.
No, what’s more important is that Minds isn’t a half-baked proof-of-concept, but a completed iteration comparable in quality and appearance to any mainstream social media site. This is the deciding factor in whether a solution gains traction. Anyone could clone the software, deploy it on their own server and run their own version of Minds.com.

The option to register accounts anonymously/pseudonymously with Minds.com is probably the most important feature, because I strongly believe we should be setting boundaries between our online and offline lives, and between family, social circle, work colleagues and strangers. Such a thing isn’t really possible on a social network in which everyone’s posting under their real names. Also, I don’t think it’s possible, in our current political climate, to have any meaningful debate without pseudonymity, since it seems fashionable to ensure anyone expressing a dissenting opinion suffers disproportionate ‘social consequences’.

An undersold feature of Minds.com is the ease with which a citizen journalist, blogger, whistleblower, etc. can create and publish content. For the individual user, who wants to protect his/her identity, a Minds.com channel (with publicly-viewable blog posts) is cheaper and easier to maintain than a Web site, and it still provides the same benefits in terms of posting content and getting views.

Problems with the Design and Architecture

Now, for the things I’m not entirely sure about: My main criticism is that Minds.com is not (yet!) actually ‘engineered for freedom of speech, transparency and privacy’ in any tangible sense, as it’s still a centralised service hosted on AWS in the United States. Whether Minds.com defends its principles actually depends on the people running it – people who could sell Minds.com to a corporation, people who might face legal, financial and political pressures, and people who would eventually be hiring others.

When asked by Neoxian, writing for Steemit, whether Minds could truly be considered decentralised, Ottman gave the following answer:
‘Good questions. It’s decentralized in that ultimately, yes, nodes will be able to optionally federate (this is still in dev). It is censorship resistant in that we allow all legal content, and in the future will integrate torrent options.’

This is actually not an empty promise. The Minds developers have already been working on a decentralisation component called ‘Nomad’, which is based on the Beaker browser and the DAT protocol. I’ve experimented with these briefly this weekend, and they really do work. If a P2P system does go mainstream, it’s likely to be this.
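The underlying tech is easy to try without Minds at all. A minimal sketch using the reference dat command-line client – the directory names and the dat:// key below are placeholders, and this is the generic DAT tooling, not anything Nomad-specific:

$ npm install -g dat
$ cd ~/my-site && dat share
(prints a dat://<64-character-key> link and starts seeding the directory)
$ dat clone dat://<64-character-key> ~/my-site-copy
(anyone with the link replicates the site directly from its peers)

No central server is involved once the content is seeded, which is what makes DAT interesting as a censorship-resistance layer.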


Just as I was composing this post about Nick Cohen’s book (‘You Can’t Read This Book’), which addresses the psychology of religiously-motivated censorship, I read about Stephen Fry reportedly being investigated by Irish police under blasphemy laws. Since the existence of such a law, in 2017(!), would be as retarded as Fry’s understanding of theology, I was initially a bit sceptical. Unfortunately it’s true. According to Independent.ie, the complainant, one member of the public, believed that Fry’s remarks were criminal under the Defamation Act 2009. The Act has an entire section (36) on blasphemy, and it’s extremely subjective in its wording. Hard to believe, isn’t it, that such a backward piece of legislation exists in Ireland and in the United Kingdom?

On to Nick Cohen’s book: there are three sections, dealing with religion, money and the state, plus a fourth suggesting solutions that are more abstract than practical. Here I’ll cover the first and add some of my own thoughts – not because of the religious angle per se, but because it’s where we find the most lucid descriptions of why the assumption of our collective liberalism and tolerance is sometimes so difficult to justify. First, a quotation from Jefferson’s Virginia Statute for Religious Freedom:

‘Be it enacted by the General Assembly that no man shall be compelled to frequent or support any religious worship, place, or ministry whatsoever, nor shall be enforced, restrained, molested or burthened in his body or goods, nor shall otherwise suffer on account of his religious opinions or belief, but that all men shall be free to profess, and by argument to maintain, their opinions in matters of Religion, and that the same shall in no wise diminish, enlarge or affect their civil capacities’.

Here Jefferson demanded no less than the right of anyone to express their religion in the public sphere and the right of anyone to criticise a religion. It does not imply that expressions of religion should be banned from the public arena, or that one should keep his/her religious beliefs private – legislating that would be state censorship, essentially, for what is religion but a system of ideas?
Jefferson is essentially trusting in individuals’ ability to reason for themselves, to defend their opinions and beliefs through argument, and to follow their consciences. Christianity is no less valid a basis for morality than whatever the secular world ultimately bases its ideals on, provided most of us believe in the principles of fundamental rights, human dignity and the sanctity of life. We have the intellect to resolve the more challenging questions of applying these principles in the real world.

This freedom is important, because human rights violations, oppression and injustice do indeed happen; they should be exposed and they should be openly discussed. Sometimes they aren’t: overall, Cohen’s book is about how our desire to openly discuss the issues is often outweighed by the fear of retribution, the fear of being sued, the fear of how it would impact our careers – the fear of something consequential. He makes the case for this far better than I ever could.
Cohen argued that mainstream ‘liberals’, maybe for fear of causing outrage among religious zealots, cannot be objective and consistent in criticising oppressive ideology, and he provides real-world examples of established liberals turning on those who criticise the oppressors – the Salman Rushdie drama being just one case in point. This is perhaps why we see only outrage against trivial instances of ‘oppression’ within our Western culture, instead of solidarity with victims of real oppression in other nations where Islam is dominant. And this is only a facet of the underlying problem – ultimately the same kind of fear prevented employees of global banks from warning us of the impending economic crash of 2008, and forces the press to weigh the risks of being sued when holding those with financial power to account.

As you probably know already, I’m rather zealous in my belief that freedom of expression and privacy are fundamental rights, and that they can only be guaranteed with technical safeguards.

Around the same time the Investigatory Powers Act (without opposition from New Labour) granted 40-odd public authorities access to most people’s Web browsing histories, Tory politicians took it upon themselves to submit a bill (also unopposed) to ban online pornographic videos containing anything that wouldn’t be allowed on a commercial DVD. Meanwhile in the United States, The Powers That Be have given themselves a mandate, in the form of what’s referred to as ‘Rule 41’, to maliciously hack any computer on a Tor circuit.
Given the mainstream media’s campaign against alternative media ‘fake news’ and the associations made between non-mainstream opinion and the ‘far right’, I wonder whether tomorrow will see the banning of non-mainstream ideas, and perhaps our browsing histories being made available to private sector organisations.

Obviously the solutions must be open source, they must be resilient against adversaries with advanced resources, and their designs must be beyond the control of The Powers That Be. Subgraph OS provides the operating system security model needed for the current age, in one installation. As well as being a Linux distribution, I like to think of Subgraph OS as a template or pattern that other Linux installations can be configured to emulate with a little work. This is what I’ve done with my own Linux system over the years.

Application Layer
The first security measure is the sandboxing of processes, using Linux namespaces to segregate resource usage. Conventionally a Linux system has a root process and a single process tree that grows as more programs run on the system. Linux namespaces provide a way to logically isolate processes and process trees, so that each uses a separate instance of whatever system resources it needs. Since namespaces have been a native part of the kernel for a while, anyone could set this up on their own system.

As far as a compromised process is concerned, the root process is the first within the container: its root privileges won’t extend to resources outside the namespace, nor would the process be able to navigate beyond its virtual root directory – it shouldn’t, for example, be able to spawn a malicious process capable of installing rootkit components in the system directories. The namespace itself might be compromised, but the effects are isolated.
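This is easy to demonstrate on any recent distribution with util-linux installed, no Subgraph required. A quick sketch, assuming user namespaces are enabled in the kernel:

$ unshare --user --map-root-user --pid --fork --mount-proc bash
# whoami
root
# ps aux
(only bash and ps are listed – the ‘root’ shell is PID 1 in its own namespace,
and its privileges mean nothing outside it)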

Tor
Based on the older YAZ proxy, Metaproxy creates an independent Tor circuit for each application, and handles the session routing between the applications and the Tor proxy.
What protection does this provide? Despite Tor being referred to as an anonymising technology, it only masks the IP addresses of the communicating endpoints; other layers of security are needed to strip identifying data from the payloads themselves.
I should point out there are other options (e.g. VPNs and I2P) to fall back on, for anyone who doesn’t trust Tor.
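I haven’t dug into how Metaproxy does this internally, but stock Tor can approximate per-application circuits through its stream isolation flags. A torrc sketch (the second port number is arbitrary):

# /etc/tor/torrc
SocksPort 9050                                  # default port for general use
SocksPort 9062 IsolateDestAddr IsolateDestPort  # dedicated port for another application

Streams arriving on different SocksPorts never share a circuit by default, so pointing each application at its own port gives each one its own circuits.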

Kernel Security
Although the containers and namespaces at the application layer are good for isolating compromised processes and containing the damage, ideally the first line of defence is to prevent exploits executing in the first place, at a low level.

PaX provides three low-level security measures. Fields are added to the ELF header, so that as the program is loaded into memory the stack can be marked non-executable and the executable section non-writeable; this effectively prevents the functionality of a running program being extended by malicious code. Address Space Layout Randomisation (ASLR) makes it much harder for an exploit creator to predict memory addresses, by assigning a different memory map whenever a process is spawned from an executable file.
Modern operating systems already have similar native components, and existing kernels can be upgraded with versions incorporating the PaX extensions.
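On a PaX-enabled kernel the per-binary flags can be read and toggled with the paxctl tool, and even on a stock kernel you can at least confirm ASLR is active. A brief sketch (the binary path is just an example):

$ sysctl kernel.randomize_va_space
kernel.randomize_va_space = 2
(2 means full ASLR, including the heap)
# paxctl -v /usr/bin/example-app
(view the PaX flags stored in the ELF header)
# paxctl -m /usr/bin/example-app
(disable MPROTECT for a program, e.g. a JIT, that legitimately needs writable code pages)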

Filesystem Encryption
The full disk encryption included with Subgraph is based on the mature and open source dm-crypt. This is an effective defence against anyone who gains physical access to the hardware while it’s switched off – for example, if a laptop is stolen or mislaid. Mainstream distributions (e.g. Linux Mint and openSUSE) often provide this as an option during the installation process.
Storage volumes using dm-crypt can also be mounted on Windows systems using LibreCrypt Explorer, by the way, so that potentially allows for some portability.
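For anyone wanting to retrofit this to an existing system, a secondary volume can be encrypted by hand with cryptsetup. A minimal LUKS sketch – /dev/sdb1 is a placeholder, and luksFormat destroys whatever is on it:

# cryptsetup luksFormat /dev/sdb1
(creates the LUKS container and prompts for a passphrase)
# cryptsetup open /dev/sdb1 secure
(unlocks it as /dev/mapper/secure)
# mkfs.ext4 /dev/mapper/secure
# mount /dev/mapper/secure /mnt
(use it like any other filesystem, then lock it again)
# umount /mnt && cryptsetup close secure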

Payload Anonymisation
One pretty essential layer of security that appears missing from Subgraph is payload/traffic anonymisation, but it should be possible to install Privoxy or Ghostery for this – something I strongly recommend. During the typical browsing session, the payload of browser traffic contains identifying data, and potentially the browser could fetch malicious code from compromised ad servers, even when using Tor.
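Privoxy sits between the browser and the network as a local filtering proxy, and it chains into Tor quite happily. A minimal config sketch, assuming Tor’s default SOCKS port:

# /etc/privoxy/config
listen-address 127.0.0.1:8118
# relay everything through the local Tor SOCKS port (‘t’ = SOCKS5 with Tor-friendly DNS)
forward-socks5t / 127.0.0.1:9050 .

With the browser’s HTTP proxy set to 127.0.0.1:8118, Privoxy’s action files strip headers and ad/tracker requests before anything reaches the Tor network.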

‘The Government has recently been looking to introduce new checks to ensure that adult content can only be viewed by those over 18. To do that, it will introduce age verification schemes, and sites that don’t implement them will be rendered inaccessible from within the UK.’

The problem with this is twofold. First, the government would need to implement some form of Internet ID scheme, and this comes loaded with potential issues. Obviously they can’t use IP addresses, since multiple people could be on the same network or even the same computer. They could use a centralised identity assurance scheme, but that would result in event logs recording who visited what, and I don’t think anyone’s stupid enough not to use Tor or a VPN instead. A likely candidate is Gov.uk Verify, which is actually intended for identity assurance with trusted parties; it could essentially become the National Identity Register we (including yours truly) campaigned against last decade over serious trust issues, and it could pave the way for a general-purpose Internet ID system that should be avoided for the same reasons.

Secondly, the government would have to block access to any sites that didn’t comply with the age verification scheme. The systems for implementing that have been deployed since before 2013, and we know the filtering can readily be applied to other categories of Web content – for example, I found it blocked access to sites related to e-cigarette suppliers, hacking and martial arts for a couple of months. It’s entirely possible that a future government would selectively block access to something like WikiLeaks and campaign groups on the grounds of some ‘hate speech’ or ‘anti-terror’ laws.

Simply censoring stuff doesn’t address the cause, and attempting to enforce morality on a single issue doesn’t work in a society that encourages consumerism, double standards, decadence, moral relativism, self-entitlement and lack of community. If young people spend most of their nights holed up with their pornography and games (instead of going to Church!), there’s a deeper and more serious problem. So, yes, I think there’s a problem related to pornography, but glossing over the situation with a Web filter that could easily be abused and repurposed is even more morally ambiguous.


‘Many decades from now, people could look back at this period and say “that was a very rare moment, the period from the 1990s to 2010 where there was this global communications platform that just disintegrated”, and so that’s why many of us […] are advocating for an Internet protection movement […]’
Ron Deibert, The Citizen Lab.

Exactly as I’ve been warning everyone for the past two years, the Internet ‘pornography’ filter was a political move that had nothing whatsoever to do with pornography and everything to do with simply having the capability to censor stuff. All it took was for the Daily Heil to muster a few easily misled lobbyists to make that happen.
If anyone still thinks I’m a paranoid conspiracy theorist: TalkTalk is now providing roughly the same level of crippled Internet access you’d expect in a primary school library, with sites related to martial arts, electronic cigarettes, alcohol and even nicotine patches – none of them even remotely pornographic – all blocked. And it’s proving rather difficult to (legally) sign into the customer portal to resolve this as a responsible adult, with the freakin’ login details written in black and white.

At the risk of this blog ending up on the shitlist (which might happen anyway), I’ll dedicate this post to the readers who didn’t see this coming, and will provide a solution that requires the least effort. Unfortunately it’s still going to involve learning some technical networky-type stuff.

Proxies and VPNs
The technical situation is basically this: the IP addresses of most sites actually point to shared servers at their hosting providers, so blocking by address is impractical, and the ISP filtering instead seems to work by scanning TCP payloads for blacklisted URLs. What this means is that the only practical countermeasure is to encrypt and proxy our HTTP requests. There are two options available to us.
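It’s easy to see exactly what such a filter sees. Over plain HTTP, the requested host and path cross the wire in clear text – for example:

$ curl -v http://example.com/ -o /dev/null 2>&1 | grep '^>'
> GET / HTTP/1.1
> Host: example.com
> Accept: */*

Any box in the path can match those lines against a blacklist. (Even with HTTPS, the hostname still leaks via the SNI field in the TLS handshake, which is why encrypting the whole session to a proxy or VPN matters.)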

The average web proxy, the type we’d get after a brief Google search, operates by fetching web pages and sending requests on behalf of the client, in effect acting as a relay between the client and destination server. Of course, if the ISPs are scanning TCP payloads for URLs, this method would only be effective when the connection to the proxy server is encrypted.
It’s the quick and easy way to get around filtering, but users might be setting themselves up for a man-in-the-middle attack, with the proxy operators able to read everything that passes through. No doubt some people, thanks to someone’s clever idea of getting ISPs to filter legitimate non-pornographic stuff, will end up conducting financial transactions with a blocked site via a potentially malicious proxy.
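To make the risk concrete: with a typical CGI-style web proxy, the destination URL is just a parameter in a request to the proxy itself, so the TLS session protecting your credentials terminates at the proxy, not at the bank. Both hostnames below are hypothetical:

$ curl 'https://some-web-proxy.example/browse.php?u=https://yourbank.example/login'
(the proxy, not your browser, opens the connection to yourbank.example,
and everything you submit passes through it in a form its operator can read and log)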

Virtual Private Networks play a similar relaying role, but differ fundamentally: connections remain properly encrypted end-to-end between the client and destination server, with another layer of encryption added between the client and the VPN server. In other words, the TCP/IP packets between the client and destination server are re-encapsulated to become the encrypted payload of TCP/IP packets between the client and VPN server.

For the time being, it looks like VPNs are the safest, most practical and reproducible method of getting around censorship in a client-server Internet.

Setting Up OpenVPN
The UWN Thesis blog has several months’ research, some decent walkthroughs and YouTube vids on setting up OpenVPN. There’s also a SANS paper for anyone who’s interested in the technical details.
Here I’ll cover the setup on a Linux system with a Gnome/LXDE/Mate desktop, and go a little more into the background so readers understand what’s happening.

There are actually two steps to getting VPN access. Firstly, we need to install the OpenVPN client on the local system to handle tunneling, encryption, authentication and other back-end stuff. Unlike HTTPS, where data is encrypted by the browser and written to TCP/UDP sockets, the OpenVPN client functions as an intermediary, handling the traffic below the application layer. Therefore we should see encrypted sessions between the browser and web server being tunneled through another encrypted session between the OpenVPN client and whatever VPN service we use.
The second stage involves configuring the OpenVPN client to establish that tunnel with a VPN service.

So, the OpenVPN client back-end needs to be installed first, and most users will want a GUI front-end. Linux users will need to fetch ‘openvpn’, ‘gadmin-openvpn-client’ and ‘openvpn-blacklist’ from the package repositories. The last of these should alert users when a known dodgy certificate is being used.
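On a Debian-based distribution, that would be something like:

$ sudo apt-get install openvpn gadmin-openvpn-client openvpn-blacklist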

When the packages have been successfully installed, the OpenVPN GUI will normally be found somewhere in the System or Administration section in the desktop menu, although I won’t actually be using it here.

The next step is to find a VPN service, and it’s important to choose one that’s reputable. Time to scour the Internet for a provider that supports OpenVPN and download something called a ‘bundle’ – an archive of text files containing the service settings and certificates. For this demo I chose an excellent service called VPNBook (another one recommended by UWN Thesis). Extract the archive.

Now for a tiny bit of command line work. Navigate to the extracted bundle directory in the command line (as root), and enter the following:

# openvpn --config vpnbook-euro2-tcp80.ovpn

The authentication details it asks for are available on VPNBook’s ‘Free VPN Accounts’ page.

This uses just one of the files to set up the VPN connection for web browsing. The other files are for HTTPS and DNS, and I’m assuming the UDP 25000 one is for stateless packets to get around certain firewalls that would otherwise block VPN traffic.
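Going by the naming convention of the file used above and the ports just mentioned, the extracted bundle should look something like this (filenames reconstructed from the port numbers, so treat them as illustrative):

$ ls
vpnbook-euro2-tcp80.ovpn   vpnbook-euro2-tcp443.ovpn
vpnbook-euro2-udp53.ovpn   vpnbook-euro2-udp25000.ovpn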

And that’s it. To prove it worked I was able to access the sites that were previously blocked. I also checked the SSL/TLS certificates on the other sites I was accessing, and am 99% certain that VPNBook was safely tunneling my encrypted connections – always check this anyway.
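A couple of quick checks that traffic really is going down the tunnel (assuming OpenVPN created the default tun0 interface):

$ ip addr show tun0
(the tunnel interface should be up, with an address assigned by the VPN)
$ curl ifconfig.me
(should now report the VPN server’s public address rather than your ISP’s)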


Profile

My name is Michael, and I’m a software developer specialising in clinical systems integration and messaging (API creation, SQL Server, Windows Server, secure comms, HL7/DICOM messaging, Service Broker, etc.), using a toolkit based primarily around .NET and SQL Server, though my natural habitat is the Linux/UNIX command line interface.
Before that, I studied computer security (a lot of networking, operating system internals and reverse engineering) at the University of South Wales, and somehow managed to earn a Master’s degree. My rackmount kit includes an old Dell Proliant, an HP ProCurve Layer 3 switch, two Cisco 2600s and a couple of UNIX systems.
Apart from all that, I’m a martial artist (Aikido and Aiki-jutsu), a practising Catholic, a prolific author of half-completed software, and a volunteer social worker.