Nicolas Christin, a Carnegie Mellon professor who has closely studied Dark Web drug markets, suggests the site's simplicity and lack of its own payment system could reduce its "attack surface"—less code means fewer hackable bugs for law enforcement to attack. "It's more like traditional drug dealing with online support than a real full-fledged anonymous marketplace," says Christin, comparing RAMP to Silk Road's simpler predecessor OVDB, or the Open Vendor Database. "To some extent it's very primitive. But to some extent it clearly works really well, because these guys are still alive and kicking."

From the trusted members on BBS, the idea of a totally public forum developed — the Open Vendor Database (OVDB), a move away from the existing market model of closed forums and privately organised groups.

And many others for my stellar contributions to the transnational struggle against the slave trafficking conspiracy, I've picked up a few things about protecting freedom. I will preface this by saying I know people who are secure who have done everything from illegally looking at all sorts of underage porn for years, to running major drug markets that trafficked millions of dollars worth of drugs, all without running into the slightest problem. You can be free and secure, don't let the government zombies convince you that the government is all powerful.

Note, however, that there is always risk in anything you do. You are not assured of your security by following the presented advice, it is just that you are able to massively decrease your risk. Over time security tends to naturally decrease, which means that the longer you engage in a pattern of activity the more likely it is you will be compromised whilst doing so. After all, entry guards rotate, stylometric datasets increase, computational ability of attackers increases, attacker sophistication increases, opportunities for mistakes multiply, intersection data points are enumerated, etc.

Of course, over time, people also naturally improve their security abilities, meaning they can reduce the probability of being compromised once they are eventually targeted. Additionally, defensive technology tends to strengthen over time. After all, people start using Tor, they start encrypting their drives, they learn about amnesic systems and isolation, people stop using Windows, PBKDF2 gives way to Argon2, Tor introduces the concept of entry guards, etc.

This is meant to be a fairly high quality introductory tutorial, but it is by no means comprehensive. I will likely add to it over time.

I will start this post off with a bit of a preface, seeing as I'm about to cover a lot of information, but you don't really need all of it per se. There are preconfigured systems that provide substantial security out of the box, or with minimal configuration. Using one of these systems already protects you very significantly from a wide variety of attacks. I will cover three such systems initially:

NOTE: All of these distros use Tor Browser for their browser (Tails additionally having an aptly named unsafe browser, which doesn't route over Tor and which is intended only to be used for getting past WiFi access gateways).

Tor Browser has a security slider (the green onion icon by the URL bar takes you to it) for configuring the usability-security trade-off. An example of such a trade-off is whether Javascript is enabled or not. With Javascript enabled you can use the numerous Javascript-dependent features of websites, but you also significantly degrade your security. Not only does Javascript increase complexity (and there is generally an inverse correlation between complexity and security), it also requires executable memory allocations (in the case of JIT compilation), which allow attackers to inject shellcode that can be executed via routes that bypass W^X protections (the nx-bit).

You should set the security slider according to your threat model. In cases where you need the utmost security (i.e., a compromise would be catastrophic), keep the slider set to high; where vital functionality is broken by this, simply do without that functionality and find another way to accomplish your goal. In cases where you just want general protection and a compromise would not be catastrophic, you can set it to lower levels as you feel comfortable. Even with the security slider at low you will have significant protection; setting it higher simply protects you from numerous sorts of hacking attacks that are not yet patched (i.e., zero days). Essentially, it is a browser hardening slider that goes from not hardened (the low setting, which has only the hardenings built into Tor Browser) to extremely hardened (the high setting, which hardens the browser against a wide variety of potentially unpatched vulnerabilities being exploited, and thus makes your browser a much harder target for hackers / NITs [network investigative techniques]).

Tor Browser is a browser that comes preconfigured to route its traffic over Tor. It is the simplest of all to use, and is included as a browser in the operating systems I'm about to discuss in this section. Using Tor Browser alone, although still much more secure than using nothing at all, is less secure than using something such as Tails, Whonix, or Subgraph. This is because Tor Browser only provides a secure browser; it does not provide a secure operating environment in which to run the browser. This means that, absent your own additional configuration, an attacker can bypass Tor with a single successful exploitation of a software vulnerability in the browser (note that this is still a non-trivial task, particularly with the security slider at high).

Additionally, Tor Browser is only configured to route its own traffic over Tor. This makes downloading things with Tor Browser dangerous, because downloaded files may trigger network activity that bypasses Tor trivially. For example, if you were to download an office document with an embedded hotlinked image via Tor, although the download would be anonymous, as soon as you opened the document the document viewer would bypass Tor and leak your real IP address to the hotlinked destination. For this reason, you should simply not download files with plain Tor Browser. Leaks are possible in even unexpected ways, with unexpected file formats having the potential to bypass Tor. In other words, feel free to browse the Internet with regular Tor Browser, but don't download anything unless you are using Tails, Whonix, or have a similar custom configuration or other full operating environment security solution.

Tails is a live, security-oriented OS. Live means that you boot it from a DVD or USB memory stick (henceforth simply 'USB'), and that it doesn't use persistent storage by default. Tails has numerous

Security Advantages

1. It comes with applications preconfigured to route traffic over Tor.

2. Applications not configured to do so will have their traffic dropped by the firewall.

3. It avoids all malware that may be present on your host (provided you boot it from a USB or DVD and not into a virtual machine) [Of course this is ignoring the potential for BIOS bootkits, etc]

4. It isolates the browser with AppArmor, which hardens against hackers attempting to bypass Tor, as law enforcement (LE) tends to do these days.

5. It protects from proxy bypasses via its firewall rules, which drop non-Tor traffic (though AppArmor is required to fully enforce this because of the unsafe browser, which is only for getting online through WiFi access points with captive portal pages).

6. It automatically attempts to spoof your MAC addresses; success depends on the firmware of the WiFi adapter.

7. It has support for securely encrypted persistence (in the form of a single encrypted directory, and provided that you boot it from a USB)

8. It avoids leaving forensic traces (other than possibly in the encrypted persistent directory, should you opt to enable it). This is extremely important for some threat models! After you shut down, there should be no forensic traces of your session. The one exception is when a hacker penetrates your system (through the browser, for example) and bypasses AppArmor such that they can read hardware serial numbers. In that case they can link the system state they observe from their malware to those serial numbers, and upon later recovering the serial numbers from your hardware, they can claim proof that the hardware ran a system with the state their malware observed.

9. It implements the back end of a dead man's switch, which allows for reflexive shutdowns in an emergency situation that leads to your rapid restraint. I will cover this more in the drive encryption section.
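As a concrete illustration of the MAC spoofing in advantage 6: a randomized MAC must keep the "locally administered" bit set and the multicast bit clear in its first octet. Here is a minimal POSIX shell sketch of that generation step (the interface name wlan0 is an assumption, and applying the address requires root):

```shell
#!/bin/sh
# Generate a random locally administered, unicast MAC address, similar in
# spirit to what Tails attempts at boot. In the first octet, the
# "locally administered" bit (0x02) is set and the multicast bit (0x01) cleared.
first=$(( ( $(od -An -N1 -tu1 /dev/urandom) | 2 ) & 254 ))
rest=$(od -An -N5 -tx1 /dev/urandom | tr ' ' ':')
printf '%02x%s\n' "$first" "$rest"

# To actually apply it (as root, with the interface down), something like:
#   ip link set dev wlan0 down
#   ip link set dev wlan0 address <generated-mac>
#   ip link set dev wlan0 up
```

Note that spoofing only helps if it happens before the adapter ever associates with the network, which is why Tails does it at boot.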

Unfortunately, Tails does have some security disadvantages, primarily that, due to its lack of persistence between sessions (booting into it), it does not have the ability to remember the Tor nodes it has previously used, and so it rotates its entry guards every session. Persistent entry guards provide significant extra protection from various traffic analysis (SIGINT) attacks, and Tails does not get as much of this protection as solutions that can use persistent entry guards do. On the plus side, this lack of persistent entry guards (when coupled with successful MAC spoofing) means that Tails users are unlinkable between sessions between different WiFi access points (rather blending into the crowd of all Tor users in a given area).

I find the lack of persistent entry guards to be at least moderately concerning. The more frequently you use Tails the more concerning this becomes; for example, one Tails session per week is less concerning than one per day (in fact there is 7X more entry guard rotation with one session per day than with one per week), and one session per three months is about equivalent to solutions with persistent entry guards. In practice, people seem to be able to frequently rotate their entry guards without much consequence, but in some instances users have been made significantly more vulnerable to attacks in the wild due to this behavior of Tails (the primary reference case being the RELAY EARLY attack, though this actually holds for all attacks of a traffic confirmation nature).
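To make the rotation concern concrete: if an adversary controls fraction p of guard selection probability, the chance of landing on a malicious guard at least once over n independent rotations is 1 - (1 - p)^n. The 1% figure below is purely illustrative, not a measured number:

```shell
# Toy model of entry guard rotation risk. Assumes (for illustration only)
# that an adversary controls 1% of guard selection probability.
awk 'BEGIN {
    p = 0.01
    printf "weekly sessions, 1 year (52 rotations):  %.1f%%\n", 100 * (1 - (1 - p)^52)
    printf "daily sessions, 1 year (365 rotations): %.1f%%\n", 100 * (1 - (1 - p)^365)
}'
```

With a persistent guard you make the selection once and keep it, so your exposure stays near the single-draw 1% for the guard's lifetime rather than compounding with every session.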

Whonix is a security-oriented distro typically meant to be run inside of a virtual machine. It implements a form of isolation known as hypervisor isolation (a hypervisor being essentially another name for a virtual machine manager), also called virtual isolation. The goal is to keep any hacker who compromises an application in the workstation virtual machine from breaking out of it to the host system, and to make it harder for them to bypass Tor (as they must break out of the workstation VM in one way or another in order to bypass Tor). Whonix is preconfigured to route all traffic from the workstation over Tor, or to drop it if it cannot. This is enforced by firewall rules and additionally by the hypervisor isolation.

Whonix has various security (and other)

advantages

1. It protects from proxy bypass attacks with its layer of hypervisor isolation. This makes it significantly more difficult for attackers to get at your IP address via hacking, and is a technique similar to Tails isolating the browser with AppArmor (which is a form of access control isolation).

2. It maintains state, and is meant to be run in addition to anything else on the host OS, which makes it more convenient for day to day use, as well as threat models that do not have a strong emphasis on not leaving any forensic traces of sessions.

Whonix also has various

disadvantages

1. It does nothing to prevent recoverable forensic trace evidence from being left on the drive of the host, necessitating manual FDE (Full Drive Encryption) for some degree of protection from disk forensic analysis.

2. It does not protect from malware already on the host (i.e., if you run your Whonix VM on a host that already has malware on it, Whonix will not be able to protect you from that malware, and that malware can compromise the Whonix workstation trivially).

Subgraph is a next-generation security-oriented distro that builds on the theory of Whonix and Tails, but with a more advanced configuration. Eventually it will likely be the go-to choice, superior to both Tails and Whonix; however, it is currently an alpha version and, due to how young it is, probably not suitable for use when high security is required. It is nonetheless a distro to keep an eye on. Because it isn't ready for security-critical situations yet, I will not enumerate its advantages or disadvantages.

Counter SIGINT (AKA: Online Anonymity)

Every time you send a packet over the Internet, it contains your IP address. This is required so that the destination server can respond to you. Typically, the destination server will keep a log of your IP address. Your ISP will typically keep logs of who each IP address was assigned to at a given time, and will maintain these logs for some period, which is called data retention. Some countries even have laws mandating data retention for certain periods of time.

This allows LE to trace you, because if they get server logs containing your IP address, they can find the ISP that owns the IP address, and then they can subpoena your ISP to get the data retention logs that show who the IP address was assigned to. This process of tracing IP addresses falls under the field of traffic analysis, which is under the discipline of SIGINT, or Signals Intelligence.

NOTE: Traffic Analysis alone is only enough to secure a search warrant, and to result in a raid. By itself it does not stand up as evidence that results in convictions. There is a dichotomy here, the Intelligence-Evidence dichotomy. Intelligence is that which is used to narrow in on the location of evidence, evidence is that which is proof of an action. Intelligence alone is not enough to secure convictions, it is merely enough to narrow in on evidence such that it can be obtained via search warrants.

Additionally, in routing your packets for you, your ISP is able to see the content of your traffic as well as the destinations you are communicating with. Likewise, the ISP of the destination is able to see your traffic and know that you are communicating with a specific one of their clients. Traffic analysis can also take place at these points in various fashions, Deep Packet Inspection (DPI) being one such methodology.

This brings us to Tor. Tor is an anonymity network consisting of several thousand nodes that act essentially as proxy servers. When you use Tor, the Tor client builds various circuits, which are paths through selections of three nodes from the total available set. Tor constructs layer-encrypted tunnels between the nodes on the circuit. Your traffic is then routed from your machine, through the layer-encrypted tunnels, to the destination server. Now your ISP is incapable of seeing the content of your traffic, because it is wrapped in multiple layers of encryption. They also cannot see the destination server for the traffic; they can only see that you are communicating with the first node on the circuit, which is called the entry guard. Likewise, the destination server can no longer see your IP address; it sees only the IP address of the last node on your circuit, which is called the exit node (or the rendezvous point, in the case of connections to hidden services). For this reason, the destination server will not have logs of your IP address; rather, it will have logs of the exit node's IP address (or logs of an internal IP address, such as 127.0.0.1, in the case of hidden services).
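The layered encryption can be sketched with three nested symmetric layers, one per hop. Note the passphrases and the use of openssl's symmetric mode are illustrative stand-ins only; real Tor negotiates per-hop keys with ephemeral key exchange, it does not use shared passphrases:

```shell
#!/bin/sh
# Illustrative onion layering: wrap a message once per hop, then peel.
# NOT how Tor derives keys - real circuits use ephemeral key exchange.
msg="GET /index.html"
enc() { openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$1" -base64 -A; }
dec() { openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$1" -base64 -A; }

# The client wraps innermost-first: exit layer, then middle, then guard.
onion=$(printf '%s' "$msg" | enc exit_key | enc middle_key | enc guard_key)

# Each relay strips exactly one layer; plaintext appears only after all three.
printf '%s' "$onion" | dec guard_key | dec middle_key | dec exit_key
echo
```

The point of the construction is that the guard sees only the outermost layer (and your IP), while the exit sees only the innermost payload (and the destination); no single hop sees both.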

This substantially improves your anonymity on the Internet. Note that today Tor is typically bundled with Tor Browser (and Tor Browser is bundled with Tails, Whonix, etc). Tor Browser is the only browser with which it is secure to use Tor, because it has been hardened against numerous anonymity, forensic, and security issues, which I will spare a comprehensive enumeration of.

Tor does provide extremely good anonymity, and has time and time again resisted substantial compromise even in the face of attackers like the FBI (via traffic analysis or application layer hacking) and the NSA (via traffic analysis; the NSA has extremely good hackers though, and they can probably hack most individual Tor users to deanonymization in the event that they can get an application layer attack path to them). However, Tor is not perfect anonymity, and a subset of Tor users do end up compromised, either via application layer hacking (thus far always due to security errors such as failing to keep the browser up to date, or not setting the security slider to high), or more rarely via sophisticated traffic analysis (typically due to an implementation flaw in Tor, which rarely happens today, with discovered flaws being patched).

Let's cover some of the basics of what an attacker can do against Tor in the event that they own nodes on a target's circuit:

The entry guard is the most valuable node on the Tor circuit (which is why it is discomforting that Tails rotates it between sessions, given that every new selection is another chance of picking a node owned by the attacker). However, merely owning an entry node is not enough to compromise the circuit. Owning an entry node and an exit node (or otherwise being able to watch traffic to the destination server, such as by owning it or being at its ISP) is enough to substantially decrease the anonymity of Tor, and an attacker in this position is thought to be capable of deanonymizing the target, though there is some degree of difficulty involved in carrying out an attack even from this positioning. Of course, an attacker who owns the entire circuit can trivially deanonymize the target.

Thankfully, the Tor network is quite large as I said, consisting of in the area of 7,000 nodes distributed around the world. Even in the event that attackers do flood nodes into the network, it needs to be kept in mind that various attackers will not collude with each other. In other words, if the FBI owns your entry node and the British Police own your exit node, you are still safe unless they share this intelligence with each other, something they are unlikely to do in general. Even the NSA and GCHQ seem to have limited SIGINT intelligence sharing between them, as we can determine from the Edward Snowden leaks. Additionally, undoubtedly a great many of the Tor nodes are not operated by adversaries, but rather by libertarians, cryptoanarchists, privacy advocates, and similar.

Protecting From Hacking (AKA: NIT NOT)

Due to the previously mentioned strong protection from traffic analysis provided by Tor, attackers are finding that they have more success when they hack targets on the application layer, infect their systems with malware, and then bypass Tor.

The attacker is still incapable of tracing the connection on the network layer, because there is a Tor circuit between them and the target. However, they can of course still send traffic to the browser, otherwise there would be little point to using Tor! This means that they can also send malicious traffic to the browser, in the form of attack payloads that exploit vulnerabilities in the browser in order to execute arbitrary code on the target's system. This arbitrary code will typically be instructions to route packets to a server controlled by the attacker without going through Tor, which is called a proxy bypass attack. This is how the FBI's NITs work.

Protecting from this sort of attack is orthogonal to Tor. Rather, it introduces us to the three pillars of computer security; Correctness, Isolation, and Randomization (I would preferably call Randomization 'Mitigations', seeing as there are more of them than ASLR).

Correctness means simply the lack of exploitable vulnerabilities. If there are no security bugs in the browser, then the attacker cannot send attack payloads to compromise it. Unfortunately, the correctness of your browser depends largely upon those who implemented it; there is little you can do to improve the correctness of Tor Browser unless you work on its code yourself. However, there are steps you can take to reduce the complexity available to the attacker.

Complexity is essentially the lines of code in a program, or the lines of code it takes to perform a task. Correctness and complexity tend to have an inverse correlation, in that as complexity increases correctness decreases. This is because the more lines of code there are the more opportunities there are for a mistake to have been made, and additionally because it takes more time to audit all of the lines of code to find mistakes.

Tor Browser can have its complexity reduced by setting the security slider to higher values. For example, at the highest setting, Javascript is disabled. This removes the attacker's access to the lines of code associated with the Javascript implementation, and in doing so removes their ability to exploit security vulnerabilities present in those lines of code. Various other attack surfaces are also removed as the security slider goes higher; another example is that the rendering of arbitrary fonts is disabled, removing the attacker's ability to exploit the font rendering engine. Thus, you can reduce the presented complexity of Tor Browser, and thereby increase the pertinent correctness, by setting the security slider to high. This alone substantially hardens you against the class of attack under discussion.

Of course, another way to ensure correctness is to make sure that all software used is the most up-to-date version available. This entails regularly checking the releases of things such as Tails (if you use it), Tor Browser, etc. One of the primary ways people get pwned is by not updating to the most recent version of their software packages as soon as possible. This is particularly important for your Internet-facing software, because that is the window through which remote attackers come. Making sure that you are using the most recent release of software, be it Tails or Tor Browser, and following the security advisories, is a crucial step in having correct software, and therefore is vital for security. Although keeping the security slider set to high can make some Tor Browser vulnerabilities unexploitable, it is better to patch all security vulnerabilities as quickly as possible, which entails getting the most recent version of the software. Today Tor Browser will eventually update itself, and there are warnings when software is out of date; however, this is not a replacement for checking the current releases and following the security of your software packages yourself, but rather a last-ditch effort to provide some degree of security to the people who fail to do this.

Isolation consists of configuration techniques whereby software applications are contained into security domains. There are two general classes of isolation; software based and hardware based.

One software-based isolation technique is hypervisor isolation (a hypervisor being essentially another name for a virtual machine manager), in which the browser is contained inside a permission-restricted virtual machine that has no knowledge of the user's IP address and is unable to route traffic other than over Tor. This is the technique implemented by Whonix. Now, if the attacker manages to successfully exploit the browser contained in the VM, though they can still execute arbitrary code, they cannot cause packets to be routed around Tor, because the virtual machine they have taken control of is not able to route traffic around Tor. Rather, the attacker must break out of the isolation by exploiting an additional security vulnerability in the hypervisor.

CAUTION: Although an attacker in such a scenario cannot route traffic around Tor, and although they are additionally isolated from hardware serial numbers, they do gain a persistent foothold in the virtual machine, and can exploit the hypervisor at some future time as their ability to do so arises. For this reason, it is suggested to first create a clean base template virtual machine for solutions like Whonix, and then to periodically clone the base template after updating it with new security patches, rather than continuing to use the same instantiation of the virtual machine indefinitely. By doing this, you can periodically push any attackers out of their foothold, forcing them to hack into the virtual machine through the browser again, a feat they may not be able to repeat with their original browser exploit if the vulnerability it exploited has since been patched. Note, however, that this reduces the ability to use persistence, seeing as changes made between clones of the base VM will be lost. Downloaded content can, however, be saved to encrypted media and reloaded into the new virtual machine, and things such as GPG keys can be included in the initial clean base VM template.
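The clone-and-replace cycle might look like the following, assuming VirtualBox and its VBoxManage CLI. The VM names are hypothetical placeholders, and the commands are printed rather than executed so the plan can be reviewed first:

```shell
#!/bin/sh
# Sketch of periodically re-cloning a clean workstation VM (VirtualBox).
# VM names are placeholders. Commands are echoed for review; change run()
# to execute "$@" once you are satisfied with what it will do.
run() { echo "+ $*"; }

# 1. Boot the clean template, apply security updates inside it, shut it down.
run VBoxManage startvm Workstation-Template

# 2. Clone the freshly patched template into a new disposable instance.
run VBoxManage clonevm Workstation-Template --name Workstation-new --register

# 3. Discard the old instance, evicting any attacker foothold inside it.
run VBoxManage unregistervm Workstation-old --delete
```

The security benefit comes entirely from step 3: whatever malware lived in the old instance is destroyed along with it.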

Another sort of isolation is done with hardware, and is named physical isolation. In this setup you have two systems: one connected to the Internet (the Tor Router), and one connected over LAN to the Tor Router, without its own direct connection to the Internet (the Workstation). The Tor Router allows the machines connected to it to route through it over Tor, but does not allow any other traffic to be routed over it. This is essentially the physical manifestation of hypervisor-based isolation.

I've written a detailed guide on configuring this, though this is for advanced users primarily with significant Linux backgrounds: https://pastee.org/t8vaw

Whonix also has the ability to be run in physical isolation mode. Physical isolation is superior to virtual isolation in that it significantly reduces the attack surface and increases correctness via both doing without the hypervisor (thereby reducing the complexity of a hypervisor and removing the jailbreak path through the hypervisor to the host), and additionally by physically separating the hardware of the workstation from the Tor Router (thereby protecting from physical layer attacks such as Row Hammer, wherein an attacker rapidly fluctuates capacitor charge on the RAM in order to cause discharge to flip bits in adjacent capacitors, allowing them to break out of any form of isolation on systems with RAM vulnerable to row hammer).

NOTE: You should make sure that your RAM is row hammer resistant or immune. Most likely this will entail using modern DDR4 RAM, though unfortunately some of that is still vulnerable to row hammer. Doing research on this topic should allow you to find some brands of RAM that resist row hammer; there is a paper that enumerates some of the ones that do and do not, though I don't have the link on hand.

CAUTION: If your RAM is not immune to row hammer, you absolutely must keep the security slider on high, because row hammer can be utilized remotely if javascript is enabled, allowing for remote code execution.

Also be advised that although physical isolation is superior to virtual isolation, it does not isolate hardware serial numbers like virtual machines do. In other words, if an attacker takes control of the workstation with the browser in it, although they cannot bypass Tor without compromising the Tor Router, they can still directly view hardware serial numbers, and additionally they can route traffic back to themselves through the Tor Router (just not bypassing Tor in doing so). For this reason, it is highly suggested to use a virtual machine on the workstation even when running in physical isolation mode, due to the additional isolation from hardware serial numbers, as well as the ease of pushing attackers out of their foothold, as was previously elucidated.

In addition to these two sorts of isolation, there is also access control based isolation, which is a form of software isolation. This is similar to the standard Unix discretionary access controls (DAC), but far more fine grained (i.e., you can restrict even the IP addresses that a given application can communicate with, similar to a traditional network firewall, along with many other fine-grained controls that go beyond the traditional Unix DAC). Examples of such systems include Grsec RBAC, and additionally AppArmor (which is utilized by Tails for isolation of the browser, protecting from proxy bypass attacks when coupled with the firewall rules Tails comes preconfigured with).
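A fail-closed firewall of the kind Tails pairs with AppArmor can be sketched with iptables' owner match: default-deny all outbound traffic, with an exception only for the system user the tor daemon runs as. The debian-tor username is the Debian convention and an assumption here; the rules are echoed for review rather than applied:

```shell
#!/bin/sh
# Fail-closed "Tor only" egress sketch using iptables' owner match.
# Echoed for review; remove the echo (and run as root) to actually apply.
run() { echo "+ $*"; }

run iptables -P OUTPUT DROP                                       # default: drop all egress
run iptables -A OUTPUT -o lo -j ACCEPT                            # allow loopback (local SOCKS port)
run iptables -A OUTPUT -m owner --uid-owner debian-tor -j ACCEPT  # only the tor daemon may reach out
```

With rules like these, a proxy bypass by a compromised application simply results in dropped packets, because only tor's own process may originate traffic.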

Randomization (and other mitigations) entails a wide variety of security techniques that can be implemented at various parts of your stack. Unfortunately you will typically have little control over these. Randomization in particular refers typically to Address Space Layout Randomization (ASLR), which randomizes your memory layout, making it more difficult for attackers to exploit security vulnerabilities in your software (possibly necessitating an additional arbitrary memory read vulnerability in order to bypass ASLR). You will get the most out of ASLR if you have a 64-bit CPU and run a 64-bit OS on it, unfortunately Tails is not a 64-bit OS but it still does get some benefit from ASLR.
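On Linux you can check, and even directly observe, ASLR (assuming a Linux system with /proc mounted):

```shell
#!/bin/sh
# 0 = ASLR off, 1 = partial (stack/mmap), 2 = full (adds brk randomization).
cat /proc/sys/kernel/randomize_va_space

# With ASLR active, each new process gets a different stack placement:
grep '\[stack\]' /proc/self/maps
grep '\[stack\]' /proc/self/maps   # a separate process, hence a new address
```

If the two stack lines match, ASLR is disabled and every exploit payload can rely on fixed addresses.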

Another mitigation is the nx-bit on the CPU, which supports W^X (Write XOR Execute), a technology which prevents attackers from injecting executable shellcode into system memory (most attacks work by the attacker injecting arbitrary executable shellcode and then overwriting a pointer or return address such that it points to their payload, executing the arbitrary code when the pointer is dereferenced or the next instruction is called). You should make absolutely sure that your CPU has nx-bit support (on Linux this is as simple as 'cat /proc/cpuinfo' and looking for the 'nx' label under flags). Additionally, if you use VirtualBox, you should make sure to enable the nx-bit, seeing as it is disabled by default; you can do this by going to the CPU settings of the virtual machine manager for that VM and checking the pertinent box.
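That check can be wrapped so it reports either way (Linux-specific; on a CPU without the feature the kernel simply omits the flag from /proc/cpuinfo):

```shell
#!/bin/sh
# Report whether the CPU advertises the nx bit (no-execute page support).
if grep -qw nx /proc/cpuinfo; then
    echo "nx: supported"
else
    echo "nx: NOT supported"
fi
```

If this reports NOT supported, W^X cannot be enforced in hardware, and the hardening discussed in this section is substantially weakened.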

Note that w^x can be bypassed in certain circumstances, for example by default javascript will use JIT compilation, which necessitates the allocation of executable memory segments, into which an attacker can inject their shellcode. This is yet another reason why disabling javascript is important, so that this w^x bypass doesn't exist (even at lower than the highest security slider setting, JIT compilation is disabled, however keeping it all the way to high is still suggested as it reduces complexity further).

There are various other advanced mitigation techniques as well; however, these are more in the realm of advanced concepts that will probably be of little use for me to mention here. I will say that SECCOMP, shadow stacks, Grsec, Grsec RAP, namespaces, etc., are all noteworthy concepts. In general, you will rely on a preconfigured distro to implement such things for you, seeing as they are quite advanced. Subgraph is unique in that it utilizes several advanced mitigation technologies; however, as previously mentioned, it is still in alpha.

Additionally, although this is security via obscurity, which is a derogatory label seeing as it is not a suggested way to obtain security, using obscure operating systems such as OpenBSD or similar can protect you from dragnet attacks that only have exploits written against the more popular operating systems. However, this does absolutely nothing in itself to prevent targeted attacks against you from being successful, which is why it is security via obscurity. It is noteworthy, though, that in the FBI's Freedom Hosting attack against Tor users, the recovered attack code targeted only Windows users. Windows should simply not be used for anything that is sensitive; at least a Linux VM such as Whonix should be utilized, or preferably booting into Tails. In general you want to be as far away from Windows and OSX as possible, and only be using Linux or BSD distros (or other fully open source solutions).

Note that Enhanced Secure Erase can be done by substituting in the following flag: --security-erase-enhanced

Note that since completely different mechanisms are used by these erasing methodologies, it is suggested to do BOTH of them, in the event that one has a catastrophic implementation error.

Some words on passwords

Preface

This aims to be a fairly comprehensive post on passwords and some of the systems protected by them. It is broad but not very in depth; I'm sparing a lot of the important very technical details and trying to write this article so that a neurotypical person can understand it.

If you are merely interested in making secure passwords, feel free to skip ahead to chapters six through nine.

Chapter One: Misconceptions about passwords

It is very common for people to use inadequate passwords, even when they think their passwords are secure. As this XKCD comic so elegantly explains, people have gotten into the habit of using complex substitution schemes that are difficult for humans to remember, but they are still not hard for computers to guess.

The first thing I should point out is that not all passwords need to have the same security requirements, because they serve different purposes. There are two sorts of password cracking attack, online and offline. In an online attack, the attacker may make a bot that attempts to login to your account on a forum over and over. In an offline attack, the attacker may have a hash of your password that they can test guesses against without having to send data over the network, or they may have something you have symmetrically encrypted that they try various passwords to decrypt offline.

Online attacks are much slower than offline attacks; the XKCD comic assumes an attacker who can only attempt 1,000 passwords per second, which is reasonable for an online attack. There is natural rate limiting in the form of network bandwidth, websites may rate limit login attempts with captchas and the like, and a delay per attempt can be programmed into the login logic.

Chapter Two: Online Versus Offline Attack Models

My Ruby is very rusty but I will use it for various programming examples throughout this post because I find it is pretty easy to read and concise.
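The login script referred to below is not reproduced in this copy; here is a minimal sketch of the idea (the user table and the use of SHA256 via Ruby's Digest are purely illustrative assumptions):

```ruby
require 'digest'

# Hypothetical credential store: username => hash of the password.
USERS = { 'alice' => Digest::SHA256.hexdigest('hunter2') }

def login(username, password)
  sleep(1) # hard-coded one-second logic delay per attempt
  stored = USERS[username]
  !stored.nil? && Digest::SHA256.hexdigest(password) == stored
end
```

The sleep call alone caps a scripted attacker at roughly one guess per second per connection, regardless of how fast their hardware is.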

As you can see, a logic delay of one second has been hard coded into this simple login script example. An attacker who is faced with programming logic like this through an online interface can therefore be arbitrarily limited in the number of password attempts they can make in a given time period.

Offline attacks allow for guesses to be attempted in a much more rapid fashion. To understand how offline attacks work you need to have some basic understanding of how password authentication systems work. Typically with website login password authentication, when you submit your password to a website during registration or whatever, a cryptographic hash of the password is taken.

The server stores this hash in the database under the user's entry. When the user's password is sent to the server again at login, the server hashes it and compares the result to the stored hash.

Now if an attacker hacks into the database of the website they can retrieve the hash of your password, but they cannot use it directly to log in to the website: if they present the hash to the website as your password, the server will hash it again before comparing it to the stored hash of your password, and the hash of the hash does not match the stored hash of the original password.

This is just a very basic overview and not exhaustively explaining the components of adequate login systems (salt has not been mentioned, nor have various other techniques for improving the security of this sort of login system, this is just a bare bones explanation of what is happening).

However, since the attacker has gotten your passwords hash, they don't need to ask the server to compare the H(password_guess) to the H(stored_password) [H being a hash function], because they already have the output of H(stored_password). So now entirely on their own system they can take the hash value of password guesses and compare them to the hash they got from the server, up to the point that they find a collision.
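As a sketch (the stolen hash and the guess list here are hypothetical), the offline attack is just a local loop with no network involved:

```ruby
require 'digest'

# Hash pilfered from a hacked database; the user's password was 'dragon'.
stolen_hash = Digest::SHA256.hexdigest('dragon')

# Test guesses entirely locally: no rate limits, no logic delays.
def crack(stolen_hash, guesses)
  guesses.find { |guess| Digest::SHA256.hexdigest(guess) == stolen_hash }
end

crack(stolen_hash, %w[password letmein dragon monkey]) # => "dragon"
```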

How rapidly can they make guesses if they have the password hash? Very rapidly!

In one demonstration, attackers using graphics card clusters were able to attempt 364,000 guesses per second against a hashing function that was hardened for use with passwords, and 63 billion per second against a hashing function that wasn't hardened in such a way but which is still not uncommonly used for the purpose!

So, in conclusion of this section: the XKCD comic is concerning itself with online attacks when it says a password with 44 bits of entropy is strong. That is adequate for preventing online attacks, but not offline attacks. The same is true for passwords that protect encrypted information. (Hash functions are commonly said to be used for password encryption, but this is not the case: passwords are not usually encrypted but rather obfuscated by being run through a hashing function. The goal is to make it so you cannot reverse the one way function of the hash algorithm, which contrasts with encryption, where you typically want to maintain the ability to decrypt the ciphertext back into a plaintext.) Passwords that protect encrypted information assume an offline attacker, so 44 bits of entropy is inadequate for such a password.

Chapter Three: So how do we determine the strength of passwords anyway?

Password strength is measured in bits of entropy. A bit is a one or a zero (1, 0). Entropy has extremely technical academic definitions and is a challenge to fully appreciate (I certainly still struggle with it, and think of it as a mix of science and art), however it is essentially unpredictability. A fair coin flip should produce one bit of entropy: the result is either heads or tails (1 or 0), and you cannot predict which ahead of time. Note that in reality it is sometimes hard to get such true entropy; for example, the coin is bound by the laws of physics and the result of the flip is arguably not truly random, however it is still unpredictable to you.

A password has a key space that is 2^bits_of_entropy_in_it. This is because a one bit password can be guessed with 2^1 guesses, it is either a 1 or a 0, so two guesses is the entire key space of the password. A password with two bits of entropy in it has a key space that is 2^2, the password is in the following set

[00, 01, 11, 10]

And so on for however many bits are in the password. As you can see, the more bits of entropy in a password the lower the probability that any given guess will correctly be the password. Going back to the XKCD comic, a password with 44 bits of entropy has a key space of 2^44, which is 17,592,186,044,416. At a rate of 1,000 password guesses a second, this will take (17,592,186,044,416 / 1000) / 60 / 60 / 24 / 365 years to guess with absolute certainty, or about 557.8 years. Of course, it is always possible to guess correctly on the first attempt, it's just you are more likely to win the lottery!
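That arithmetic is easy to verify (a quick sketch, using the comic's 1,000-guesses-per-second rate):

```ruby
keyspace = 2**44          # 17,592,186,044,416 possible passwords
rate = 1000.0             # online guesses per second
years = keyspace / rate / 60 / 60 / 24 / 365
puts years.round(1)       # ≈ 557.8 years to exhaust the whole key space
```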

Chapter Four: Ok, so we measure password strength in bits of entropy, but how do we measure bits of entropy in a password?!

There are two answers to this question! We can either use algorithms that were designed so we can estimate password entropy, or we can generate or accumulate pure entropy and encode it into passwords such that we can precisely calculate their entropy!

One system for estimating the entropy in a password is this NIST proposal by Bill Burr.

- For passwords over 20 characters, assume entropy grows at 1 bit per character
- Award an entropy bonus of up to 6 bits for password composition rules
- Award an entropy bonus of up to 6 bits for a dictionary test
  - Bonus declines for long "pass-phrases"
  - They have to contain common words or you can't remember them
  - No bonus for passwords over 20 characters

Here is this password entropy estimation algorithm implemented as a Ruby script, awarding two points of entropy each for character set additions (upper/numeric/special), and using one of the password dictionaries at https://wiki.skullsecurity.org/Passwords for the 6 point dictionary bonus comparison (named 'dictionary' and placed in the same directory as the script). Note that these dictionaries have obscure words but are lacking in common ones; also make sure to remove any blank lines at the end of the file, or they will cause spurious matches.
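That script is not reproduced in this copy. As an illustration only, a sketch of the per-character schedule from the NIST proposal might look like the following; the character set bonuses and the dictionary check described above are omitted, so this alone will not reproduce the full script's figures:

```ruby
# Per-character entropy schedule from the NIST proposal for user-chosen
# passwords: 4 bits for the first character, 2 bits each for characters
# 2-8, 1.5 bits each for characters 9-20, 1 bit per character after that.
def nist_base_entropy(password)
  password.each_char.with_index.sum do |_, i|
    case i
    when 0      then 4.0
    when 1..7   then 2.0
    when 8..19  then 1.5
    else 1.0
    end
  end
end
```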

It estimates the entropy for "correct horse battery staple" at 28 bits! Why would there be such a disparity between the entropy figure given by XKCD and the estimation of this algorithm? Well, the key thing to notice is that the XKCD comic said the phrase was randomly generated, whereas this algorithm is for estimating the entropy of user created passwords.

Chapter Five: On the topic of character sets, length, and complexity

It is easy to compute the exact entropy characteristics of a password that was known to be randomly generated, you merely need to know the length of the password and the character set that it came from! The most basic example would be a binary password, so it is from the character set [1, 0].

101010111100

If this were randomly generated, each character would be one bit of entropy, so the entire password would be 12 bits of entropy. The more characters there are in a password, the less complex the character set has to be for it to be very entropic. As you can see, we could make a randomly generated password of 44 ones and zeros, and it would have entropy equivalent to the XKCD password despite being constructed from a character set of two! However, there are more characters in total to remember; we have reduced our character set at the expense of having to remember more total characters.

The entropy of any character from a given character set is easily calculated as

log2( character_set_size )

log2 being the inverse of raising two to a power.

So a character set size of 2, like binary, has log2(2) bits of entropy capacity per character, or 1 bit of entropy per character. A character set of all ASCII characters contains log2(95) bits of entropy capacity per character, or about 6.57 bits of entropy capacity per character. Note that I say capacity, because each character will only have this much entropy if it is actually truly randomly selected (and humans are bad at randomly selecting things because our minds are associative rather than random).
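In Ruby this is just Math.log2:

```ruby
# Entropy capacity per truly random character, for a few character sets.
puts Math.log2(2)   # 1.0    (binary)
puts Math.log2(26)  # ≈ 4.70 (lowercase letters)
puts Math.log2(95)  # ≈ 6.57 (printable ASCII)
```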

If you make a password from the entire printable ASCII character set, it is very unlikely to completely saturate the entropy holding potential of the individual characters. For example, if you were to use English words you would already be reducing your entropy massively, because of character frequency not being uniformly distributed in English words, and also some characters tending to appear together in English etc. So although if 'correct horse battery staple' were randomly generated it could hold up to 44 bits of entropy (according to XKCD), if a human thought up a phrase that looked similar to it, it is probable that it would contain 28 bits of entropy according to the NIST estimation algorithm.

Fairly recent research from MIT has shown the difficulty of password entropy estimation, as it was discovered that user generated passwords are less entropic than previously estimated.

“It’s still exponentially hard, but it’s exponentially easier than we thought,” Duffy says. One implication is that an attacker who simply relied on the frequencies with which letters occur in English words could probably guess a user-selected password much more quickly than was previously thought. “Attackers often use graphics processors to distribute the problem,” Duffy says. “You’d be surprised at how quickly you can guess stuff.”

Chapter Six: So how do I make good passwords?

There are three general techniques people accept as potentially secure for passwords, each with their own advantages and disadvantages.

---------------------------------------------------------------------

One: English Prose

Advantage: Possible to think it up in your head and not rely on random number generators

Disadvantage: Your head is not good at thinking up random things.

Advantage: Easier to remember due to more or less following English rules and being a regular sentence

Disadvantage: English sentences that follow normal rules have low per character entropy

Disadvantage: At best you can estimate the entropy of such a password, and estimations are usually higher than they should be.

Notes: If you go this route your passphrase should not be particularly related to you, if it is meaningful to you it is less unpredictable to the attacker.

Two: Diceware

Advantage: Possible to precisely compute the entropy in the password [it's log2(word_list_count) bits of entropy per word]

Advantage: Not too hard to remember, although the words are randomly selected they are not uncommon in themselves

Disadvantage: You preferably need to print out the entire word list and roll dice to construct your password, or else rely on a computer program to do it for you; but implementation flaws are possible in such programs, they rely on a source of good entropy to begin with, and they expose your password to the environment that generated it

Notes: It is suggested to actually print the word list and use dice. However, you should destroy the word list after use, to avoid the scenario in which you inadvertently touch certain words, allowing forensic analysis of the printed list to narrow in on the words you likely used. If you do not believe your system to currently be compromised, you may opt not to print the word list out. If you are selecting an FDE password and plan to wipe the drive afterwards, I suggest doing so without an internet connection, to prevent any potential malware from phoning home what is on screen as you scroll through the list selecting words for your password.

Three: Random Characters

Advantage: You can create very short truly random passwords, especially if you use the entire character set of your keyboard

Disadvantage: Despite being very short, these passwords are hard to remember because they are completely nonsensical and only recognizable at the single character level (compared to diceware, which is recognizable at the word level though not the phrase level, and randomly thought up English prose, which is recognizable at the phrase level as well). People typically rely more heavily on muscle memory with these passwords.

Chapter Seven: So what is diceware?

Diceware is a system by which passwords similar to "correct horse battery staple" can be generated in an actually random fashion.

Download the complete Diceware list or the alternative Beale list and save it on your computer. Print it out if you like. Then return to this page.

Decide how many words you want in your passphrase. A five word passphrase provides a level of security much higher than the simple passwords most people use. We recommend a minimum of six words for use with GPG, wireless security and file encryption programs. A seven, eight or nine word pass phrase is recommended for high value uses such as whole disk encryption, BitCoin, and the like. For more information, see the Diceware FAQ.

Now roll the dice and write down the results on a slip of paper [NOTE: DO NOT LEAVE PRESSURE IMPRESSIONS ON ANY MATERIAL BELOW THE PAPER, ONLY AIR SHOULD BE BEHIND IT]. Write the numbers in groups of five. Make as many of these five-digit groups as you want words in your passphrase. You can roll one die five times or roll five dice once, or any combination in between. If you do roll several dice at a time, read the dice from left to right.

Look up each five digit number in the Diceware list and find the word next to it. For example, 21124 means your next passphrase word would be "clip" (see the excerpt from the list, above).

When you are done, the words that you have found are your new passphrase. Memorize them and then either destroy the scrap of paper or keep it in a really safe place. That's all there is to it!

Here is an implementation of diceware in Ruby that can use the word list at http://world.std.com/~reinhold/diceware8k.txt or various other diceware word lists (world.std.com/~reinhold/diceware.wordlist.asc), saved as a file named 'wordlist' in the same directory as the Ruby script. Don't combine lists, though: you want no redundancy of words, as duplicates skew the entropy calculations. Also make sure not to have any empty lines at the bottom of the word list.
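That implementation is not reproduced in this copy; here is a minimal sketch of the same idea (the 'wordlist' file name and the "NNNNN word" line format are assumptions), using Ruby's SecureRandom to simulate the dice:

```ruby
require 'securerandom'

# Parse a diceware list: one "NNNNN word" pair per line, e.g. "21124 clip".
def load_wordlist(text)
  text.each_line.with_object({}) do |line, words|
    id, word = line.split
    words[id] = word if id && word
  end
end

# Pick `count` words, simulating five dice rolls per word with a CSPRNG.
def diceware(words, count)
  Array.new(count) do
    roll = Array.new(5) { SecureRandom.random_number(6) + 1 }.join
    words.fetch(roll)
  end.join(' ')
end

# Usage, assuming the 7,776-entry list is saved as 'wordlist':
#   words = load_wordlist(File.read('wordlist'))
#   puts diceware(words, 6)
```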

It is preferable to actually print the word list out and roll dice though, because there could be implementation flaws in this code or any of the code supporting it, and it still relies on a randomness source (or sources) that could itself be defective or even malicious!

This is really interesting research: "Stealthy Dopant-Level Hardware Trojans." Basically, you can tamper with a logic gate to be either stuck-on or stuck-off by changing the doping of one transistor. This sort of sabotage is undetectable by functional testing or optical inspection. And it can be done at mask generation -- very late in the design process -- since it does not require adding circuits, changing the circuit layout, or anything else. All this makes it really hard to detect.

The paper talks about several uses for this type of sabotage, but the most interesting -- and devastating -- is to modify a chip's random number generator. This technique could, for example, reduce the amount of entropy in Intel's hardware random number generator from 128 bits to 32 bits. This could be done without triggering any of the built-in self-tests, without disabling any of the built-in self-tests, and without failing any randomness tests.

I have no idea if the NSA convinced Intel to do this with the hardware random number generator it embedded into its CPU chips, but I do know that it could. And I was always leery of Intel strongly pushing for applications to use the output of its hardware RNG directly and not putting it through some strong software PRNG like Fortuna. And now Theodore Ts'o writes this about Linux: "I am so glad I resisted pressure from Intel engineers to let /dev/random rely only on the RDRAND instruction."

Yes, this is a conspiracy theory. But I'm not willing to discount such things anymore. That's the worst thing about the NSA's actions. We have no idea whom we can trust.

Chapter Eight: So how many bits of entropy (secure) do my passwords need to be?

Generally all of your passwords should be cryptographically secure. Even passwords that don't need to be cryptographically secure should be, simply because you should be using a password safe application to store them, and the password for the password safe should be cryptographically secure.

Your FDE password is probably your most important password, it is used as an input into a PBKDF (password based key derivation function) to produce a key for a symmetric encryption algorithm (such as AES128 or AES256). You can find suggested symmetric key sizes at the following link:

Keep in mind that we are concerned with bits of entropy only: if your password is 500 bits long but has only one bit of entropy, it is equivalent to a one bit symmetric encryption key. As you can see, the estimates for secure key sizes into the future vary considerably by agency. According to the Lenstra estimates, a 100 bit symmetric encryption key (equivalent to a password with 100 bits of entropy) should be good for security to 2039-2048. However, other agencies suggest more in the area of 128 bits of entropy for the same time frame! In general you should aim for the best passwords you can remember, however you should not drop below 100 bits of entropy for a cryptographically secure password, and should aim for 112+.

Today there are specialized ASICs and graphics card clusters that are capable of doing billions upon billions of hash operations per second.

This device sells for about $2,000 on Amazon and is capable of doing 1,000,000,000,000 SHA256 hash operations per second! So assuming you had a password with 44 bits of entropy hashed with plain SHA256 up against this device, it would take 2^44 / 1,000,000,000,000 seconds to completely exhaust your password's key space (though remember that the password can be guessed before the key space is exhausted).

17592186044416 / 1,000,000,000,000 = 17.59 seconds!

So clearly a 44 bit password is not adequate to protect from offline attacks, as I previously mentioned that XKCD comic was concerned entirely with online attacks.

But how does a 100 bit password stack up? Well, 2^100 / 1,000,000,000,000 seconds to exhaust the key space.

1267650600230000000000000000000 / 1,000,000,000,000 = 1267650600000000000 seconds to exhaust the key space, or 40196936834 years. So a password with 100 bits of entropy is secure from an attacker with one of those machines, however we need to keep in mind that powerful attackers such as law enforcement have millions of dollars at their disposal for cracking passwords with (and other things for bypassing the security of passwords).
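Both figures are easy to sanity-check, assuming the stated trillion-hashes-per-second device:

```ruby
rate = 1_000_000_000_000.0       # SHA256 hashes per second
seconds_per_year = 60.0 * 60 * 24 * 365

puts 2**44 / rate                       # ≈ 17.59 seconds for 44 bits
puts 2**100 / rate / seconds_per_year   # ≈ 4.02e10 years for 100 bits
```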

In its budget request for next year, the FBI asked for $38.3 million more on top of the $31 million already requested last year to “develop and acquire” tools to get encrypted data, or to unmask internet users who hide behind a cloak of encryption. This money influx is designed to avoid “going dark,” a hypothetical future where the rise of encryption technologies makes it impossible for cops and feds to track criminal suspects, or to access and intercept the information or data they need to solve crimes and investigations.

The FBI has this computational power at its disposal both for brute forcing passwords and for fuzzing, fuzzing being an advanced concept related to application layer hacking which I will not cover here.

Chapter Nine: Miscellaneous thoughts on making passwords easier to memorize

Typically I will write down the first character of every word in my password (taking care not to leave pressure impressions on underlying materials: write on the paper with only air behind it). I will reference this paper as needed for a number of days after I create a new password. While this paper exists there is a window of increased threat, since compromise of the paper can greatly reduce the key space of the password, so I always keep it on me. I then use my password normally (or excessively, to get practice!) over the course of the next several days, building it into my muscle memory by typing it in, and also repeating it to myself several times a day, just as I would commit anything else verbatim to memory. You may find visualization techniques useful as well, as the XKCD comic alludes to. Committing your password to memory takes practice and time; if you make a secure password, read it once, and then assume you have it memorized, you are going to forget your password.

However, after you have broken in a new password as previously mentioned, you can keep even very entropic passwords committed to memory indefinitely, such that you will not forget them unless you don't use them for a significant period of time.

Some of your passwords you need to keep entirely in your head. These are usually your FDE password (full drive encryption), a password safe password, and an operating system password. All of your other passwords can be managed by the password safe, such that you don't even need to know them. Keepassx is a popular open source password manager.

Avoid using proprietary password managers, or any that are internet based!

Chapter Ten: Key stretching, or how to get the most out of your users passwords

You will also remember that an attacker who hacks the database is able to get the password hash and bypass the time delay on guessing passwords by doing an offline attack. We can address this by using a PBKDF, password based key derivation function, that builds a time delay into computing the hash of the password guesses. As you can see from the graphic I've posted, now instead of merely hashing the users password a single time, it is hashed some number of iterations, each iteration of course costing computational resources, and therefore taking additional time.
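The iteration idea can be sketched in a few lines. This is illustrative only; a real system should use a vetted PBKDF such as PBKDF2, scrypt, or Argon2 rather than this naive loop:

```ruby
require 'digest'

# Naive key stretching: hash the salted password `iterations` times.
# Every extra iteration costs the attacker one more hash per guess.
def stretch(password, salt, iterations)
  digest = password + salt
  iterations.times { digest = Digest::SHA256.hexdigest(digest) }
  digest
end

key_material = stretch('hunter2', 'per-user-salt', 2000)
```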

In the context of symmetric encryption it is the same situation. I should point out that symmetric algorithms like AES128 and AES256 always take a fixed number of bits for their key, 128 and 256 respectively. Since your password is unlikely to be exactly 256 bits (of entropy or of length), it is typically used as input to a PBKDF rather than directly used as the encryption key. This is essentially hashing the password with something like SHA256, a cryptographic hash function that accepts arbitrary inputs and produces 256 bit outputs that correlate with them in a one way fashion, such that it is easy to compute the output from the input but very difficult or impossible to determine the input from the output. (You would use a PBKDF rather than raw SHA256, though, since SHA256 is fast, and for passwords you want computation to be slow in order to slow down attacks.)

Cryptographic hash functions work as entropy accumulators / entropy distillers, in that when given an arbitrarily sized input with X bits of entropy in it, they will produce a fixed size output (256 bits in the case of SHA256) with as many bits of entropy as the input, up to the size of the output of the hash function. The output also has the entropy distributed throughout it (the exact meaning of this being subtle and relating to statistical independence, it should not be taken to mean that SHA256 fed one bit of entropy produces an output of 256 bits each having 1/256th bits of entropy in them).

Now when an attacker tests "password" against a pilfered hash (or against your encrypted hard drive in the case of FDE!) they are not doing merely one SHA256 operation, but rather two thousand of them. This delay is not even noticeable to a legitimate user, but it adds up for an attacker who experiences it trillions of times, and best of all it is not possible to bypass it the way logical delays on the server side of the exposed interface can be bypassed. Note: PBKDFs do more than just iterations; they also add salts to protect from rainbow table attacks, among various other things. Iterations are merely one component of a PBKDF.

As you can see, we can determine the equivalent entropy strength of a password that has been hardened with a PBKDF2 with the following formula

equivalentEntropy = log2(2^realEntropy * iterations)

This is because with one iteration (like plain SHA256), the attacker needs to do 2^realEntropy operations in order to exhaust the key space. So with a password with 10 bits of entropy and one iteration, the attacker must perform 1,024 hash operations to exhaust the key space. With two iterations per password attempt, though, we double the work to 2,048 hash operations, because now the attacker must hash each guess and then hash the result. log2(2048) is 11, so our password with 10 bits of entropy now has the resistance of a password with 11 bits of entropy for which no PBKDF/key stretching had been utilized.

This gain can be significant, though users still need secure passwords. PBKDFs make the difference between a password that will be immediately broken and one that will be broken in a month; they also make the difference between a password that will be broken in a decade and one that will be broken in a decade and an extra year. So although they should not be relied upon, they certainly provide significant security advantages and should always be utilized with passwords. Assume you have a password with 100 bits of entropy and utilize 20,000 PBKDF iterations:

log2(2^100 * 20,000) = 114.28

Your 100 bit entropy password, which is the minimum strength a cryptographic password is suggested to be, now has security essentially equivalent to a password with 114 bits of entropy, which is more in line with best practices.

PBKDF2 is a previous generation PBKDF, modern alternatives are Catena and Argon2.

Modern PBKDFs were designed with memory bottlenecks in order to memory-bind attackers and reduce the risk of massively parallelized attacks. Note that it is not uncommon for software today that uses PBKDF2 and similar to use only in the area of 500-2,000 iterations, which is certainly better than nothing, but much lower than can realistically be utilized on modern hardware.

We can demonstrate however that PBKDFs are not a replacement for secure passwords. Indeed, iterations logarithmically harden passwords, whereas adding bits of entropy to the password exponentially increases the key space.

Password strength can be measured in bits of entropy. If a password has one bit of entropy, there is a keyspace of 2^1, which is 2. This is because there is one bit of entropy, aka one random bit, and a bit can be a 0 or a 1, so there are two to choose from.

The formula for seeing the strength of a PBKDF2 hardened password, as compared to a password without PBKDF2, is

log2(2^EntropyBits * Iterations)

So if we have a password with 1 bit of entropy and 1 iteration (the minimum number, since it is always hashed at least once)

log2(2^1 * 1) = 1

Since we had one bit of entropy and one iteration, the password is equivalent to one without PBKDF2 being applied.

If we had 1,000 iterations though

log2(2^1 * 1000) = 10.965784285

With 1,000 iterations our 1 bit of entropy password is equivalent in strength to a password with 10.965784285 bits of entropy without PBKDF2 applied to it, because the attacker must perform 1,000 hashes for each guess, meaning it takes 2,000 hash operations to exhaust the key space with certainty, and 2^10.965784285 = 2,000, so log2(2,000) = 10.965784285.

However, the growth from adding iterations is logarithmic, because log2(2^EntropyBits * Iterations) = EntropyBits + log2(Iterations); as you can see, we are adding the log2 of the iteration count to our entropy bits, so doubling the iteration count only ever adds one bit of equivalent entropy.

Conversely adding actual bits of entropy to the password itself has exponential growth, because

pow2(1) = 2
pow2(2) = 4
pow2(3) = 8
pow2(4) = 16

As you can see adding bits of entropy, which follows an exponential growth curve, more rapidly expands the key space than adding iterations, which has a logarithmic growth curve.
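The comparison is concrete with the formula from above:

```ruby
# Equivalent entropy of a stretched password:
# log2(2^bits * iterations) = bits + log2(iterations)
def equivalent_bits(entropy_bits, iterations)
  entropy_bits + Math.log2(iterations)
end

puts equivalent_bits(1, 1000)  # ≈ 10.97
puts equivalent_bits(1, 2000)  # ≈ 11.97: doubling iterations buys only one bit
puts equivalent_bits(2, 1000)  # ≈ 11.97: the same as one extra real entropy bit
```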

Introduction to modern drug trafficking

Traditionally:

Enter Security

Secure Messaging (E-mail and PM)

Overview

Here is an overview of the technical details of how GPG operates

Using GPG is not difficult! There are numerous GUI interfaces to it. Although I will not exhaustively cover them here, they are all quite similar, so here I will give a basic overview of using one of them.

Kleopatra is a cross platform GPG GUI. There may be slight differences between it on Linux and Windows, and I've done this tutorial on Linux, but the general theme should be the same; if anything the Windows version will be easier to use. In any case, this general tutorial is more or less how all the GPG GUIs work, so by following it you should be able to learn essentially any GPG interface (some are worse than others, though). You may need some basic computer competency to figure things out yourself if your GUI deviates by much, but using this as a general reference should get you where you need to be.

You should encrypt your private messages and E-mails with GPG because it allows you to communicate with people through untrusted servers. Even if you trust victory and the others with access to the private messages not to read them, hackers could compromise the server and see what you said. Law enforcement could also seize a server that you use to communicate, obtain your plaintext messages, and spy on your communications.

Even if you think you don't say anything sensitive in private, using GPG is helpful because if everyone gets in the pattern of doing it it makes it harder for attackers to tell sensitive communications from regular communications, and in general you assert your right to privacy by using GPG.

If you engage in sensitive communications, like sending people your address to get drugs shipped to it, then you really need to use GPG for security.

So first download GPG4win, or if you are using Linux get Kleopatra (almost all Linux distros come with GPG itself). On Debian-based distros this is just: sudo apt-get install kleopatra

First Disable Networking

You will not be using keyservers, and you want to make sure not to upload your key to them. If you upload your key to a keyserver without using Tor, it will link your GPG key to your IP address, which is a security risk. Keyservers are not really used by most people I know; I never use them.

On my system kleopatra presents a little icon on the taskbar that can be right clicked on for options. You will likely have a similar icon of some sort, or otherwise may need to check preferences some other way and look for networking section.

Delete all of the preconfigured keyservers.

Now Generate A Key

This is the main interface of kleopatra.

Go to File->New Certificate

Create a personal OpenPGP keypair, we are not interested in actual certificates.

Put your pseudonym here, not your real information. The people who get your public key can see the information you put here so if you don't want to deanonymize yourself be careful. I usually put a fake E-mail address if I'm on a forum, mrz@sluthate.com or something. This helps people with shitty GUIs who can only select keys by E-mail instead of name.

You want to opt to go into the advanced settings and make your key the maximum supported size, which gives you enhanced protection. Today 4,096 bits is the suggested size; 2,048 may still be adequate, though it's hard to say, as it relates to quantum computing progress in stabilized qubits and their ability to run Shor's algorithm (ultimately we need to move away from RSA toward quantum-resistant algorithms).

After you click to continue, you will be presented with basic information about the key you are about to generate.

After you click to create your key you will be presented with a box asking for a passphrase. This passphrase protects your local private key; it isn't used for the encryption of messages, but you should still use a good password for it (one from a password safe is fine!).
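Since the passphrase should come from a password safe anyway, here is a hedged sketch of generating a strong one in Python; the tiny wordlist is a placeholder (a real Diceware-style list such as the EFF's has 7,776 words, giving roughly 12.9 bits per word):

```python
import math
import secrets

# Placeholder wordlist -- a real one has thousands of entries; each
# uniformly chosen word adds log2(len(wordlist)) bits of entropy.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "lantern", "quarry", "velvet"]

def passphrase(n_words: int, wordlist=WORDS) -> str:
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

bits = 6 * math.log2(len(WORDS))
print(passphrase(6))         # eg: "staple orbit horse velvet quarry orbit"
print(f"~{bits:.1f} bits")   # only 18.0 bits with this toy list
```

With the full EFF list, six words gives around 77 bits, which is plenty for a key passphrase.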

Now you will be presented with a box to type into. This is to generate randomness because a lot of random numbers need to be generated to make the key. This can take a while, you should type into the box for a while but you may want to let it run in the background if it takes forever.

When it is done it will show you some basic information about your key and let you know.

Now your key shows up in your main GUI interface.

Right click on it and go to export certificate (this also is export key). At this point it will ask you to pick a file location to export it to. Export it to whichever location you want, and then open that file in a text editor. It will have something in it that looks like this.

This is your public key, you give it to people to encrypt messages to you and can post it publicly even if you want, it doesn't matter if people get your public key.

How To Import Keys

To send people encrypted messages you need their public keys. They simply send you the block of text that I previously showed was a public key. Select it and copy it to your clipboard.

From the taskbar icon go to

clipboard->Certificate Import

It will show you a message saying that it successfully imported the key, along with basic information; for example, it says "0 changed" for me because I already had my own public key before I imported it.

How To Encrypt Messages

Write your message in a basic text editor (or in the box provided by the GUI, if it is good enough to have one). Then copy it to your clipboard; you don't need to save the file, and shouldn't if you don't need it. Note that you should not write the plaintext message into the browser!

From the taskbar icon go to encrypt.

You will be asked to select recipients. Add everyone who you want to be able to decrypt the message. Note that you need to add yourself if you want to be able to decrypt the message, but often you will not even want to add yourself. I've selected to encrypt the message to myself because I only have my public key.

I only want to encrypt the message to myself so I click next.

Encryption was successful, so now my clipboard contents have been replaced with ciphertext that looks like this:

With the ciphertext to decrypt in your clipboard, go to decrypt/verify from the taskbar icon. You will be prompted for your password.

Decryption was successful; no signatures were found because I didn't sign the message prior to encrypting it. The contents of my clipboard have now been replaced with the plaintext:

here is a message I want to encrypt to someone. I will encrypt it to myself since I only have my key right now. Note that I use a simple text editor to write my message. This is important, if you write your message into a browser before encrypting it there are many things that can go wrong, for example automatic unencrypted drafts can be sent to the server prior to encrypting!

I don't even need to save the message before putting it in my clipboard, and I will not, because then it will write it to the disk. The text editor program may save it to the disk itself as an automatic draft though, so use a simple text editor that doesn't do this. Ideally the GPG program would present a secure text box to use, but Kleopatra doesn't seem to. This is a risk of leaving forensic traces of messages sent on the local machine, which is less serious than the risk of unencrypted message drafts being sent over the internet, but should still be avoided.

How To Sign Messages

Message signing allows you to cryptographically verify that the person who controls the private key (you) wrote a message that was signed by it. Usually you will want to sign a message before you encrypt it, though this is actually not done as much as it should be.

With the message to sign already in your clipboard (you should write the message in a basic text editor like Notepad that doesn't save drafts etc, and never save the plaintext), go to the taskbar icon and select OpenPGP sign. You can then select which private key you want to sign the message with.

You will get a message letting you know signing was successful. The contents of your clipboard are now signed, so if I had my public key in the clipboard it would now look like this:

The OTR plugin for Pidgin is quite easy to use, so I will spare myself the effort of giving pictures for it. Essentially you merely go to the plugin menu and activate it, after which the interface for it in instant messages is quite intuitive and seamless. I will add that you should indeed establish shared secrets with your contacts, as otherwise there is a risk of MITM attacks on the encryption. You can keep your shared secrets for each contact stored in a password safe!

I will also note that although OTR provides encryption of instant messages, it does nothing to prevent traffic analysis. For this reason you should additionally configure all of your instant message accounts to use Tor as a proxy. Of course, if anonymity is important for you, you should use Tor only, including when you register the account in the first place.

Unfortunately, Pidgin is notorious for being pretty insecure against hackers. An alternative to using Pidgin with OTR is to use Tor Messenger.

Tor Messenger comes configured to route over Tor, and also has the OTR plugin. It is currently a beta release, which means that it has not been thoroughly tested for correctness. Eventually, it will certainly be suggested over Pidgin, however currently it is a tough call between the two; pidgin is not beta, but it is known to be generally insecure, whereas Tor Messenger is beta and its security properties are not properly understood.

Other options also exist, any client that supports OTR will provide you with the ability to encrypt your instant messages, as well as to engage in shared secret / question-answer authentication, and likely with the ability to configure to route over Tor as well. Unfortunately, none of the OTR supporting instant message clients are particularly suggested at this point in time, however in the event that you need encrypted instant messages, they are what is available, and certainly it is better to encrypt instant messages (and to engage in authentication) than to not, when the threat model requires end to end privacy of communications.

The Correlation & Link Primitives

Correlation and Link are two closely related forensic-intelligence primitives. By primitive I mean they are the fundamental cores of a wide variety of attacks in multiple disciplines. They can be thought of as a framework of sorts, abstract by themselves, but an abstract skeleton that can be fleshed out in innumerable ways. Indeed, these primitives apply to everything from traffic analysis;

Note: Tor has made much progress toward unlinkability, and this is less likely to happen today than it was earlier in the life of Tor; however, care should still be taken to avoid cross contamination between activities linkable to your IRL identity and traffic that you wish to remain unlinkable to your IRL identity. It is still insecure to simultaneously browse both sorts of site, unless various techniques are utilized which I will not yet fully enumerate.

Traffic confirmation, as illustrated above, is one of the most damning attacks against Tor, and is a manifestation of both correlation (in the interpacket arrival timings in this example), and linkability (between the destination server and the client browsing it).

Mail analysis is actually a form of traffic analysis that is separate from network packets. Indeed, pieces of mail are essentially equivalent to packets being routed through the mail network. As the above graphic illustrates, sequential tracking numbers can be used to link packages together. Of course, this is only a probabilistic link, because eventually a sequential tracking number will not belong to the person who sent the original package that was detected. However, this is another illustration of the dichotomy between evidence and intelligence; a package having a sequential tracking number to one which has been seized is not evidence that it is a drug package, however it is intelligence indicating that it has a higher probability of being so. As previously mentioned, intelligence is that which is used to narrow in on evidence (evidence in this case being the drugs inside the package).
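The sequential-tracking-number link described above can be sketched in a few lines of Python; the tracking format and the distance threshold here are invented purely for illustration:

```python
def serial(tracking: str) -> int:
    # Pull out the numeric serial portion of a (hypothetical) tracking number.
    return int("".join(ch for ch in tracking if ch.isdigit()))

def probably_linked(seized: str, candidate: str, window: int = 10) -> bool:
    # Near-sequential serials suggest labels bought or printed in one batch.
    # This is intelligence (a probabilistic link), not evidence.
    return abs(serial(seized) - serial(candidate)) <= window

print(probably_linked("LB000123456US", "LB000123459US"))  # True: serials 3 apart
print(probably_linked("LB000123456US", "LB000987654US"))  # False: unrelated batch
```

The window trades false positives against misses, exactly the noise-versus-coverage trade-off described above.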

To sophisticated hacking forensics;

And many other manifestations (not nearly exhaustively enumerated);

One of the original, if not the original, examples of a correlation attack, was fingerprint correlation, an early forensic attack that allowed for both identifying a suspect (via linking the fingerprints recovered at a crime scene to a human), but also linking events of interest together (via linking fingerprints recovered at events of interest together even prior to linking them to a human).

Great care should be taken in the assessment of correlational intelligence and linkability when engaging in threat modeling. In many cases correlations can be prevented from leaking, for example gloves can be worn over the hands to prevent leaving forensically recoverable fingerprint markings (NOTE: Thin tightly conforming gloves are inadequate for this, seeing as the material will conform to the ridges of the fingers and leave debris impression fingerprints in a rubber stamp fashion!).

The Intersection Primitive

As you can see, Intersection attacks can manifest in numerous fashions. Fundamentally they require the ability to

1. Enumerate crowds (for example, cell phone positioning records, or fingerprint sets, or quite commonly products and such in a photograph that is available to the attacker who then enumerates the customer lists of individual products and intersects them),

2. Link crowds together. This can be done via correlational intelligence (ie: fingerprints left at two different events of interest link the EOIs together; then the cell phone positioning crowds around each EOI are intersected to narrow in on the suspect). Even in cases where there is not a clear link, it may be possible to engage in intersection attacks. For example, imagine there is a string of burglaries in a certain area; even if the burglar hasn't left clear correlational intelligence behind, nothing stops the attacker from merely intersecting the crowd intelligence around all of the burglaries. It's just that more noise may be present in the result set, or more computational power may be required to carry out the intersection.

Two years ago, when the FBI was stymied by a band of armed robbers known as the "Scarecrow Bandits" that had robbed more than 20 Texas banks, it came up with a novel method of locating the thieves.

FBI agents obtained logs from mobile phone companies corresponding to what their cellular towers had recorded at the time of a dozen different bank robberies in the Dallas area. The voluminous records showed that two phones had made calls around the time of all 12 heists, and that those phones belonged to men named Tony Hewitt and Corey Duffey. A jury eventually convicted the duo of multiple bank robbery and weapons charges.
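The Scarecrow Bandits investigation is a plain set intersection; with fictional tower logs it reduces to:

```python
from functools import reduce

# Fictional logs: the phone numbers a cell tower recorded around each heist.
heists = [
    {"555-0101", "555-0150", "555-0199"},
    {"555-0101", "555-0150", "555-0277"},
    {"555-0101", "555-0333", "555-0150"},
    {"555-0101", "555-0404", "555-0150"},
]

# Only phones present at every event survive the intersection.
suspects = reduce(set.intersection, heists)
print(sorted(suspects))  # two phones remain, as in the actual case
```

Each additional event shrinks the surviving crowd, which is why serial activity is so vulnerable to this primitive.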

The Wall Street Journal similarly revealed that “agents spent weeks piecing together who may have sent [the emails]. They used metadata footprints left by the emails to determine what locations they were sent from. They matched the places, including hotels, where Ms. Broadwell was during the times the emails were sent.” NBC added further details, revealing that “it took agents a while to figure out the source. They did that by finding out where the messages were sent from—which cities, which Wi-Fi locations in hotels. That gave them names, which they then checked against guest lists from other cities and hotels, looking for common names.”

Intersection attack vulnerability should always be considered thoroughly when threat modeling. To protect from intersection attacks, care should be taken to avoid leaving correlation intelligence, and additionally to avoid being enumerable via leaking such things as

Etc, that is not a comprehensive list. Note that typically the things that are concerning are those which are intrinsically linked to you (ie: Your car, your telephone number, your credit card, your wireless networking device, etc)...however even if the item is not directly linkable to you it presents concerns.

Bridges are nodes which are not publicly listed, rather you can get them three at a time after filling out captchas. There are also other techniques used to make them troublesome to enumerate, though it isn't impossible to enumerate large numbers of them if you are a powerful attacker (private bridges are also possible to run though, and these are not possible to enumerate in the same fashion). If attackers block access to Tor via refusing traffic to known Tor entry node IP addresses, then bridges will circumvent their censorship so long as they haven't yet enumerated the bridge you utilize.

More advanced attackers do not block access to Tor via IP address though, but rather they do DPI (Deep Packet Inspection) looking for Tor traffic, at which point they block it (as well as blocking the IP address it was going to, since it is then enumerated as a bridge node). To get around this more advanced sort of entry blocking, you need to use pluggable transports; https://www.torproject.org/docs/pluggab ... ts.html.en

Pluggable transports obfuscate Tor traffic to make it more difficult to fingerprint as such, and thus they resist more advanced attempts to block access to Tor. By coupling bridges with pluggable transports, you stand the best chance of circumventing censorship (in a worst case scenario you may need to use a private bridge).

There are actually two closely related subjects here: one is called Entry Blocking Resistance, and the other is called Membership Concealment. They are essentially two ways of looking at the same thing, though. Entry blocking resistance has the primary goal of hiding that you are using Tor in order to get around censorship of Tor, so that, for example, Chinese users can continue using Tor despite China's attempts to block access to it via the Great Firewall.

However, entry blocking resistance isn't the only reason one may desire to use bridges and pluggable transports. Membership concealment is concerned with hiding that you use the Tor network for any reason. One reason you may desire to hide that you use the Tor network, which is orthogonal to entry blocking resistance, is so that you can avoid being enumerated as a Tor user. This may be important if you, for example, live in a sparsely populated region and ship drugs through the postal system, because there is an intersection attack that can be attempted here: the intersection of the crowd of Tor users in a given geographic region (the region being identifiable via the postmark on the shipped products, the Tor users being enumerable via ISP logging) and the people living in that geographic region (which the shipper is known to live in). If there are not many Tor users in this intersection, this attack can produce actionable intelligence. It is perhaps less serious in densely populated regions with many Tor users: if there are thousands of identified people in the intersection of these two crowds, then although the intelligence has most likely narrowed in on the target, it may not be actionable to put 1,000 people under surveillance for activity related to drug shipping (though there is still a much smaller crowd which can be intersected with other crowds to further narrow in on the target).
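The geographic intersection just described can be sketched the same way; the crowd sets and the surveillance-capacity threshold below are entirely made up for illustration:

```python
# Fictional crowds: in reality these would come from ISP logs (Tor users)
# and the postmark's delivery region (residents).
tor_users_in_region = {"resident_02", "resident_07", "resident_19"}
residents_of_postmark_region = {f"resident_{i:02d}" for i in range(25)}

candidates = tor_users_in_region & residents_of_postmark_region
SURVEILLANCE_CAPACITY = 10  # assumed: crowds larger than this aren't actionable

actionable = 0 < len(candidates) <= SURVEILLANCE_CAPACITY
print(sorted(candidates), actionable)
# In a sparse region the intersection is tiny, so the intelligence is actionable.
```

The same intersection in a dense city might leave thousands of candidates, which is the difference between intelligence and actionable intelligence.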

For the previously mentioned reason, it is desirable for drug vendors and others with similar threat models to take measures toward membership concealment. Arguably they may even want to enter through a torrenting VPN or similar to an obfsproxy bridge (such that the protocol through the VPN cannot be identified as Tor traffic, as it otherwise could be), or possibly to use obfsproxy bridges by themselves without a VPN at all. This is an interesting topic indeed, and membership concealment is particularly pertinent to certain threat models. Unfortunately, perfect membership concealment is difficult to obtain, because most people will have at least visited torproject.org without obfuscating this fact; by the time most people learn about the benefits of membership concealment they have already lost it to some attackers (such as the NSA, most likely, which presumably makes a note of everyone who goes to torproject.org, as it certainly does for anyone who goes to tails.boum.org, as revealed in the Edward Snowden leaks). However, it is still good to strive toward membership concealment in the present even if it wasn't utilized in the past, if your threat model benefits from it. (In regard to VPNs, it should be noted that they present a host of security issues of their own, and I would want to discuss this in more depth before actually suggesting anyone use them for any reason at all; I can see a benefit to them for membership concealment in some instances, but whether this outweighs all of the negatives associated with them is open to question.)

Also, it may even be more dangerous to attempt membership concealment than not to, in the event that you fail to actually obtain it. This is because people who utilize membership concealment when it is not needed for entry blocking resistance may be indicating that they are engaged in threat models where having membership concealment is beneficial.

Drugs In Mail 101

NOTE: These are just some things to take into consideration. I don't claim to have definitive answers on these topics. You would be wise to also read drug forum security subforums in relation to shipping (my technical advice is top tier for drug forums, but shipping is not my specialty). There are a lot of trade-offs involved in shipping and receiving mail, and it is essentially a field of its own. On the plus side, many people simply have drugs shipped directly to their houses using their real names and no security, and just check tracking from their own IP address [which is perhaps even superior to checking it with Tor, though probably inferior to using open WiFi from a library or similar, as will be discussed more later, in the event that the package is being sent to your real name and address in the first place], and they still typically run into no problems at all ^_^. So to reiterate, these are just things to think about, to weigh, to take into consideration, and are by no means definitive; I will enjoy discussing them with you more if you like and make a reply.

Here I will cover some of the basics about drugs in the mail, both from a vendor and customer point of view. It is noteworthy that I've not been particularly active in the drug scene for some years now, but most of this information should still be applicable, and I was quite versed in the matter during my operational years.

Many people are hesitant about getting drugs in the mail for a variety of reasons; primarily they are concerned that drug packages will be intercepted. They should realize that the USPS handles 171 billion pieces of mail a year, whereas the USPIS (Postal Inspection Service), the agency primarily tasked with policing the mail system (ICE additionally being involved with international packages, and regular police sometimes contributing to the effort of screening domestic mail), has about 1,200 inspectors. Although the exact number of LE involved with scanning the mail for contraband is not something I know off the top of my head, it goes without saying that they are minuscule in number compared to the number of packages shipped through the USPS alone, not to mention the addition of private couriers.

So obviously they are never going to inspect more than a fraction of packages, and if you know what you're doing you can significantly reduce the probability that your package will be inspected. Their human inspectors use lists of package flags to increase their detection rate, since most people shipping drugs are retards and don't know not to include flags on their packages. Simply by avoiding flags you massively reduce the probability that a package will be inspected. Here is a partial list of package flags (which can be summarized as: make your package look like it wasn't made by a high terrorist):

(Note that one or two flags are often fine in isolation; in general the goal is to reduce the flags to as few as possible)

1. Handwritten address or return address
2. Addresses contain misspelled information (cities, names, etc)
3. Postage is not exact (too little or too much)
4. Package lacks return address
5. Restrictive markings (ie: "Sensitive Content" written on package)
6. Sealed with tape
7. Originates from a foreign country
8. Originates from a drug source state
9. Packaging appears to be re-used
10. Emits any odor, including cover scents (such as perfumes etc)
11. Bulky, rigid, uneven, lopsided, uneven weight distribution
12. Stains, discolorations, crystallizations on package
13. Package looks poorly prepared
14. Addressed as sent from individual to individual
15. Return address ZIP code doesn't match the post office the package was sent from
16. Redistribution of weight is felt when package is shaken, tilted, moved
17. Package makes noise when shaken
18. Makes use of names not connected to either address
19. Sender or receiver use stereotypical common names (ie: John Doe, John Smith)
20. Fake return address used
22. Fast shipping speeds are used (the faster the shipping, the more likely it is to be flagged) [however, the less time it will be in the system, and the harder it will be to rush to get a warrant without delaying the package, which could be a tip-off that it has been intercepted]
23. Sent to PO box or PMB

In addition to taking these flags into consideration, you should generally strive to create a package that looks as typical as possible. The goal is to blend into the crowd.

A. Keeping the weight to a minimum is suggested. The more a package weighs the more inviting of a target it is to a postal inspector, after all they are aiming for big interceptions, and there is a much lower probability that a package that weighs a gram will result in a big interception (ie: LSD at a gram could be a big interception, but not most other drugs) than a package that weighs more.

B. If you can, keep the entire package to a typical envelope (or priority mailer etc, envelope as differentiated from box/package). This is particularly important in countries like Australia, where essentially all non-letter mail is inspected. If it fits in a letter, it is much less likely to be inspected than if it is in a box, or even a DVD case sized package.

C. Rather than using tape, some people would suggest using spray adhesive. This is easier to work with whilst wearing gloves, weighs less, gives a more balanced coverage, and does a better job at securing things to what they are stuck to. It also can be used on package flaps without being a flag, unlike tape. However, care should be taken that it doesn't leave a residue. Of course, the primary thing that would be sprayed is either a vac bag or a moisture barrier bag. Note however that there is some concern about such adhesives potentially picking up hairs and such which means DNA evidence.

D. During packaging, drugs should not be used in the area, as there is a very real risk of trace scent contamination, which dogs could hit on. This means no smoking weed even in the same house that shipments are packaged in.

E. Care should be taken to avoid reusing return addresses, and to avoid sequential tracking numbers, in order to prevent cluster interceptions due to linkability in the event that one package is intercepted. All serial numbers should be taken into consideration here, and sequential encoding should be avoided to the best of your ability, in short just keep in mind the risk of linkability and cluster interceptions.

F. Care should be taken to avoid checking tracking with Tor exit nodes, or with any proxy from outside the area that the package is being shipped to, because this works as a flag against a package; the vast majority of people check tracking only from local IP addresses. However, you do not want to check tracking with your own IP address, nor with a neighbor's IP address via their WiFi, because this creates a link between you and the package you are receiving. There are various solutions to this. You may consider using open WiFi from a library or another place without cameras, though this is risky in itself because it creates a new attack surface that is less secure than Tor. Another technique is to use third-party tracking-checking services via Tor (preferably additionally via an exit PHP proxy in your rough proximity, such as your country); the third-party tracking checker scrapes the tracking results from the USPS website, and in the event that they do not give their IP logs to postal inspectors, you will be able to hide that you are using a proxy whilst still using a service used by many others to check tracking. Another option may be to avoid checking tracking altogether unless an unforeseen delay arises; however, there are downsides to this as well. Primarily, tracking sometimes leaks intelligence that can be used to identify interceptions (I once saw 'Package held by non-customs federal agency' as the tracking status of a package with 5 grams of LSD in it that was intercepted!), and also most people do check their tracking, so in not checking tracking at all you may in fact flag your package. In general, checking tracking is a sensitive topic that warrants more in-depth discussion than this; I will come back and add more here shortly.

G. Drugs that dogs can hit on should either be sealed in traditional vacuum seal bags, or in ultra-low-permeability moisture barrier bags. Note that MBBs (moisture barrier bags) should only be utilized in the event that you can obtain them securely and anonymously; you never want to get any shipping supplies from vendors that serve the function of supplying other vendors with shipping supplies. The choice between an MBB and a traditional vacuum sealed bag is an uncertain one. Some people on drug forums have claimed that MBBs are superior, however I was always hesitant to accept this without a solid theoretical background on the matter. It seems as if MBBs are indeed lower permeability than traditional vacuum seal bags, but it is harder to get vacuum sealers for them and people often neglect to vacuum seal them, which means we need to question whether it is superior to use a lower permeability bagging material without a vacuum seal, or a higher permeability material with a vacuum seal. People have claimed that in tests with drug dogs MBBs performed better than traditional vacuum sealed bags, however these were random people on drug forums, not people publishing in scientific journals or anything. Additionally, another concern I have is the potential that drug dogs could be trained to smell the MBB material itself, and hit on that rather than the drugs. I always viewed the claims that MBBs should be utilized rather than traditional vacuum seal bags with suspicion, due to the fact that we used exclusively traditional vacuum seal bags without issue up to SR1, after which new people suggested the use of MBBs due to their lower permeability. I'm personally still not fully decided one way or the other on this, and always felt a sense of caution about the suggestion that we radically change our packaging methodology. However, in any case, some form of secure scent barrier should be utilized.

H. Masking scents, such as dryer sheets or perfumes, are of course completely worthless: they do not prevent dogs from hitting on drugs in any case, and they are in fact a flag for postal inspectors. So even in the event that they cover drug scent from a postal inspector, by being a cover scent they are a flag regardless.

I. In some cases drugs may be disguised (ie: I've gotten GHB disguised as samples of natural laundry detergent; though it was not particularly made to appear to be such, it did have a fake pamphlet and marketing material included with it to make it seem as if it were laundry detergent, despite really being 500 grams of GHB). This is particularly useful in cases where there is a large shipment; it gives some degree of disguise to the contents of a package in the case it is intercepted and opened, however it is questionable how much it buys in reality. Trojaning drugs inside of objects is typically suggested against, in that it will increase the package's weight and bulkiness, plus it may show up as an irregularity on an x-ray, though in some cases it may come in handy. Although most drug packages will obviously contain drugs upon close inspection, there may be some merit to taking basic measures to disguise the contents of a package such that they do not readily appear to be drugs, or at least so that a plausible alternative to what they are exists.

J. Real return addresses should be utilized in order to avoid the flag of having used a fake return address; however, you simultaneously don't want someone to notice the contents of the package in the case it is returned to sender. I've heard of various strategies for this, including using real street addresses that are still undeliverable, for example apartment complex addresses without specification of an apartment number. However, it is questionable whether this technique should be utilized, due to the fact that it may be possible to discover, in which case it would serve as a flag. It may in fact be best to use random residential return addresses and merely hope the package is not returned to sender. Additionally, it would make sense to avoid using the real name of the tenant of a return address, in the event that it is indeed returned to sender, in the hopes that they would not open the package or have suspicion aroused. However, not having the return address associated with a real tenant of the address may itself flag the package, though this risk may be lessened in the case of apartments or other dwellings in which a large amount of tenant churn takes place; packages sent from such return addresses may also count as a flag in themselves. In other words, this is a tricky topic with no clear-cut answer, and trade-offs in every direction, but it is something that thought should be given to. Note, however, that you do not want to reuse the same return address for many packages, because doing so leads to the risk of linkability between packages and cluster interceptions in the case that any given package utilizing that return address is intercepted.

Vendors should take care to

A. Avoid leaving any fingerprints on the outside or inside of a package. This means not using thin, tightly conforming gloves, seeing as they can conform to finger ridges and leave rubber-stamp-like impressions of fingerprints through them, but rather using thicker, less conforming gloves during every single step of the packaging (you don't want fingerprints on any single part of the packaging, including the drugs themselves!).

B. Nothing on the package should be handwritten in the first place; anything that is handwritten is vulnerable to handwriting analysis, which can be used to correlate it with known handwriting samples from suspects.

C. Keep in mind that multiple things printed from the same printer can likely be linked together via correlations. Use a dedicated printer for printing anything related to drug vending, and treat it as a sensitive item. The printer should be paid for anonymously with cash and obtained generally as securely as possible. Note that some color printers have been known to steganographically encode sequences of yellow dots into the items they print, which allow not only linking items from the same printer together (though this can likely be done with other sorts of correlation anyway) but possibly also identifying the printer itself (which will be a problem if the printer is linked to you). Although this is only known to happen with color printers, it should always be assumed to be the case even when it is not.

D. Note that paper and ink can also be analyzed, as can stamps, etc. Tracking stickers, stamps, papers, and so on can all likely be traced back to where they were sold, or at least to a reduced set of locations they may have come from. For more information on this I suggest reading about the Anthrax mailing case.

E. Be mindful of forensic trace evidence, be it adhesives (which may catch DNA evidence onto them for that matter), hairs, fibers, papers, tapes, etc.

F. When shipping products the vendor should be mindful of intersection attacks, as previously discussed. Driving around to numerous boxes and shipping products from them could result in intersection attacks narrowing in on the vendor, in the event that the attacker can determine the originating box of each package (uncertain if they can do this) and enumerate crowd intelligence from around those boxes in the form of cell phone positioning data, license plate positioning data, etc.

G. People buying drugs should opt for delivery methods where they do not need to sign for the package, and should refuse to sign for packages even if instructed to do so. This is because in controlled deliveries, LE will often pose as mail carriers and coerce a suspect into signing for the package, which is then used as evidence that the person ordered its contents. Typically you do not run into trouble in regards to package signing; additionally, if packages are sent to PMBs, the facility will sign for you.

H. In some cases tracking devices are hidden inside of intercepted packages. The tracking device can be used to follow the recipient back to their home to deanonymize them (in other cases meatspace surveillance may be conducted on such boxes). Of course, this is most useful for the attacker when the box the item is shipped to is unlinkable to the recipient to begin with, either because it is registered with counterfeit documentation or because it is a random residential address. Additionally, in some cases a device is placed in the package that can detect when it has been opened, after which they know to raid the target.

I. In cases where PMBs (private mail boxes) are obtained with counterfeit identification, it is possible to let packages 'cool down' for periods of time prior to attempting to obtain them. This may increase the cost of manned surveillance: to have agents present during pickup, the attacker will need to wait for some unknown period of time (up to the maximum time a package will be held at the PMB) before the target shows up. This increases the man-hours required for manned surveillance, and drains resources from the attacker.

J. All parties involved should double check to ensure that shipping addresses are correct, and that everything is spelled correctly.

K. Saliva should never be used to seal any part of the package nor the stamps. Remember, we want to avoid leaving DNA evidence! This also extends to wearing long sleeved clothing while packaging in an attempt to prevent stray hairs, and hairnets for that matter as well. As previously mentioned, there is concern about spray adhesives also picking up forensic trace evidence such as hairs or such, though tapes are also concerning here as all other adhesives are. In general just be mindful of this.

L. In some cases people opt to double vacuum seal or otherwise double bag the substance, though this does add to the weight of the package. In regards to the MBB versus vacuum seal debate, it should be noted that vacuum sealing also prevents substances from moving about as the package is shaken, which removes that as a flag. This may be harder to accomplish with a non-vacuum-sealed MBB (MBB vacuum sealers being expensive and perhaps difficult to obtain anonymously). Using an MBB inside of a vacuum sealed bag would potentially be ideal.

M. Packaging consists of two stages: one in which the substance is added to a bag, and one after the substance is contained inside of a bag. Care should be taken to avoid trace contaminants getting on the outside of the bag, perhaps residue carried over on the gloves being worn. One suggested technique is wearing thin latex gloves on top of thicker gloves. Latex gloves alone are not adequate to prevent fingerprints, due to the previously mentioned rubber stamp effect, but they may be effective at preventing contamination of the primary gloves used for blocking fingerprints, and they can be removed from over top of the primary gloves after all drugs have been packaged in their bagging material. Two sets of outer gloves may also be utilized (one for initial bagging, and one for manipulating bags after they have drugs secured in them). In general you just want to avoid getting traces of drugs on the outside of the scent barrier bag; how you go about this may vary (perhaps try to avoid even touching the drugs with gloves if possible, such as by using scoops and similar to manipulate them).

N. You should not send packages from post offices, but rather from random drop boxes away from CCTV. One potential technique is the utilization of apartment complex drop boxes. However, care should be taken to avoid reusing the same drop box repetitively; you always want to try to behave randomly and to avoid patterns, though some degree of drop box reuse may be inevitable. Even for the same outgoing batch of packages you may want to distribute them over a few boxes; however, as previously mentioned, care should be taken to avoid being in enumerable crowds, to protect against intersection attacks.

O. Of course, while shipping products, you should not have your phone on you, nor should you have any device which broadcasts a MAC address, fitbits, or any other electronics at all for that matter. You should also avoid using your credit card at any location near where you are shipping, and avoid having your car near where you are shipping as well (perhaps utilizing public transportation to some extent, or otherwise parking quite a distance from where you eventually ship from). This may not always be realistic; in some countries you actually may need to ship from a post office, and if this is unavoidable, weigh the risks.

P. During shipping you should take care to cover any identifying marks such as tattoos, as well as to disguise your facial features to some degree, such as by using large-lens sunglasses or similar. Note that you also do not want to look like the Unabomber, though. Of course you want to avoid getting fingerprints even on the outside of packaging material, but you don't want to be seen, for example, wearing a ski mask and gloves as you dump packages into mail boxes, or else people will think you are a terrorist or something.

Q. In the USA, packages must be 13 ounces or less to be sent from drop off boxes provided by USPS.

R. Recipients may opt to have packages sent to private mail boxes (or potentially PO boxes) registered with fake identification. There is significant variance in the security of these facilities; typically mom and pop box shops (PMB rather than PO box) will have the weakest security, and may even lack CCTV, be lax about the requirement to photocopy identification (when I got a box from a mom and pop place they did not photocopy my fake ID), etc. I would suggest the utilization of PMBs rather than PO boxes; they have numerous other advantages as well, including the ability to receive mail from carriers other than USPS (e.g. FedEx).

S. However, USPS should be utilized rather than couriers or such, because of the sheer volume of mail they handle, and also because they require warrants to open domestic first class mail that looks as if it could be correspondence. In that vein, it is best if mail is first class, domestic (as opposed to international), and looks as if it could be correspondence.

T. Of course, when picking up packages, you want to take care to avoid leaving fingerprints anywhere (you also want to avoid leaving fingerprints when you pay for the box, preferably even on the cash you use to pay for it). At the same time you don't want to look like you are a robber though lol. You also want to avoid revealing identifying marks such as tattoos, to disguise your face and self to the extent that you can do so without appearing suspicious, and to avoid CCTV to the best of your abilities (favoring the PMBs with the worst CCTV, or even no CCTV to begin with).

U. Waiting for random periods of time before pickup (but not so long that the package is returned to sender; I believe there is a set holding limit at both POs and PMBs) can increase the cost of manned surveillance (i.e. agents present waiting for the target, rather than merely electronic tracking devices or such added to the package).

V. An advantage of using a fake-ID PMB and such is that, in addition to the risk of LE, there is the risk of scammers gathering doxing information on you; by using a PMB or similar you likewise protect yourself from this sort of situation. This may be less of a concern today on the public drug markets than it was back on the underground source forums I was on, where people had set pseudonyms and socially interacted with the community, as opposed to the more depersonalized shopping experience of the modern drug markets.

W. Speaking of point V, you should indeed be unlinking your purchases from each other, and from any social presence you have on the forums, to the greatest extent possible. For example, I'm quite well known on drug forums, but if I were to place an order for drugs from anyone other than a friend, I would simply make a new pseudonym and account on the drug market site in order to do so, so that I could unlink the order from my well known pseudonymous identity in the social setting, avoid the risk of doxing, and otherwise unlink my orders to the greatest extent possible.

X. Other options include having packages sent to random residential mail boxes and intercepting them before the rightful resident of the home does, among numerous other techniques. All of these techniques have advantages and disadvantages. One of the primary advantages of having packages sent to random residential boxes and intercepting them before the rightful owner of the residence is that you can massively decentralize the points drugs are sent to, rather than centralizing them to a set of PMBs and such, which may defeat the ability of postal inspectors to flag packages going to known PMBs/PO boxes. The disadvantages are primarily the need for reconnaissance to identify a box that can be subverted for your purposes (and the need to protect from intersection attacks and similar in doing so), and above all the risk that the package will actually be grabbed by the legitimate owner of the box and identified as containing drugs. It's also kind of fucked up in that you are involving random normies in drug shipments, which is somewhat immoral to say the least.

Y. If a recipient does opt to have packages sent to a place connected with themselves (such as their own home), they should make sure that it is 'clean' between shipments. This way, in a worst case scenario, a compromise/interception will get nothing more than that which was intercepted, rather than an additional stash of drugs already possessed. Clean houses of friends may also be utilized, or their homes may be used as stash houses while new shipments are inbound, so that if an intercepted shipment leads to a raid of the delivery address, nothing else will be found other than that which was intercepted. It may be wise to write return to sender on drug packages and hold onto them for a day or two prior to opening them; in the event that a raid is conducted, say, half an hour after picking up a package, this maintains plausible deniability. Note, however, that in sophisticated attacks electronic devices will have been added to the package such that the attacker is alerted as soon as it is opened. Speaking of that, this same technology may be useful for detecting interceptions while a package is in transit prior to picking it up (with it being sent to a box not linkable to the recipient's IRL identity, of course), though this is an advanced topic primarily for large scale traffickers.

Warning about mobile

First of all, it should be noted that if security is paramount one should be using either a laptop or desktop system; mobile devices such as phones and tablets are not considered possible to secure against all pertinent attack methods. Indeed, all phones are essentially backdoored and prone to active attacks which cannot particularly be defended against. However, this assumes that the phone is active when it is targeted, and passive attacks can additionally be largely protected against. In other words, if your phone is turned on and in a vulnerable state, it can be compromised; however, if it is turned off or not yet in a vulnerable state (i.e. encryption passwords have not been entered into it), it can still be secure from a variety of attacks.

Furthermore, having a phone on you can decrease your security in numerous ways. For example, the FBI has turned people's cellphones into roving bugs;

The FBI appears to have begun using a novel form of electronic surveillance in criminal investigations: remotely activating a mobile phone's microphone and using it to eavesdrop on nearby conversations.

Attackers have also remotely activated cameras, and even mapped out homes. A cell phone also serves as a means of geopositioning yourself, which can make you vulnerable to intersection attacks and numerous other things. Your phone may even broadcast the places you have gone to any arbitrary person who cares to listen, as it will broadcast the WiFi access points it has seen before;

Every time you use Google or Apple mobile location services, you’re not just telling the services where you are. You’re also shouting many of the places you’ve been to anyone who happens to be listening around you—at least if you follow Google’s and Apple’s advice and turn on Wi-Fi for improved accuracy.

In summary, you should avoid using phones or tablets for things which require significant security, however in the event that you cannot do this, or even in the event in which utmost security is not required yet you desire to increase your security regardless, there are numerous things you can do in furtherance of this. Consider yourself warned.

Signal supports two forms of encryption. The first is end-to-end in-transit encryption of messages, which means that nobody between you and the person you send a text message to can view its content (this requires them to also use Signal). The second is at-rest encryption, which means that all of your text messages can be stored as ciphertext on your phone such that they cannot be viewed without your passphrase (assuming the encryption is dismounted when the attacker gets your phone; you can configure auto-dismount to require re-entering your password as often as you see fit, and dismount is automatic if the phone powers off).

Secure Phone Voice Encryption

The previously mentioned Signal Messenger also has some support for encrypted voice communications. People report varying success with this, but it is still a noteworthy feature. Of course, it requires both parties to make use of Signal to utilize it. It should be noted that encrypted voice is an extremely complicated area with lots of potential for fingerprinting attacks whereby the attacker gains information about the communications through the encryption, such as this attack for fingerprinting languages through encryption;

Voice over IP (VoIP) has become a popular protocol for making phone calls over the Internet. Due to the potential transit of sensitive conversations over untrusted network infrastructure, it is well understood that the contents of a VoIP session should be encrypted. However, we demonstrate that current cryptographic techniques do not provide adequate protection when the underlying audio is encoded using bandwidth-saving Variable Bit Rate (VBR) coders. Explicitly, we use the length of encrypted VoIP packets to tackle the challenging task of identifying the language of the conversation. Our empirical analysis of 2,066 native speakers of 21 different languages shows that a substantial amount of information can be discerned from encrypted VoIP traffic. For instance, our 21-way classifier achieves 66% accuracy, almost a 14-fold improvement over random guessing. For 14 of the 21 languages, the accuracy is greater than 90%. We achieve an overall binary classification (e.g., “Is this a Spanish or English conversation?”) rate of 86.6%. Our analysis highlights what we believe to be interesting new privacy issues in VoIP.

This is not the only sort of information leakage that is possible. So although encrypted voice is superior to not using encryption for phone calls, keep in mind that it is also an area that is fraught with peril.
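The quoted attack boils down to comparing distributions of encrypted packet sizes, since length-preserving encryption of VBR-coded audio leaves the coder's bitrate choices visible. Here is a toy nearest-profile sketch of that idea; the packet sizes and "training" profiles are invented for illustration, not real codec traces:

```python
from collections import Counter

def length_profile(packet_lengths):
    """Normalized histogram of encrypted packet sizes. VBR coders leak
    bitrate (hence phonetic) structure through these sizes."""
    total = len(packet_lengths)
    return {size: n / total for size, n in Counter(packet_lengths).items()}

def l1_distance(p, q):
    """L1 distance between two packet-size distributions."""
    sizes = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in sizes)

def classify(observed, profiles):
    """Pick the training profile nearest to the observed traffic."""
    return min(profiles, key=lambda label: l1_distance(profiles[label], observed))

# Hypothetical per-language profiles built from labeled traffic:
profiles = {
    "english": length_profile([54, 54, 86, 120, 86, 54]),
    "spanish": length_profile([86, 120, 120, 120, 86, 54]),
}
observed = length_profile([120, 120, 86, 120, 54, 86])
guess = classify(observed, profiles)
```

The real classifier in the paper is far more sophisticated (it models packet-length sequences, not just marginal histograms), but the information source it exploits is exactly this.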

Protecting from Internet traffic analysis (and encrypting Internet from local attackers) on Phones

Android users can route their phone's Internet traffic through Tor with an app that provides protection from traffic analysis and encrypts the traffic such that the phone service provider cannot determine its contents. Note, however, that using Tor from a phone for particularly sensitive things is absolutely not suggested, due to the general inability to properly secure phones. As previously explained, it is still better to take measures to protect your anonymity and privacy than not to; just be warned that phones cannot be adequately secured for particularly sensitive things.

iPhone does not have an approved equivalent.

Secure Instant Messaging From Phones

On Android phones you can look into ChatSecure, which supports OTR instant message encryption for XMPP (Jabber) and can also be configured to use Tor to protect from traffic analysis.

Both Android and, infamously, iPhone support full device encryption, and this should be utilized with strong passwords in order to protect the contents of your phone from attackers who may seize it (though in general you shouldn't have anything particularly sensitive on a phone in the first place).

This is particularly pertinent in the threat model in which you are a drug vendor who takes pictures of his products for upload to market sites.

If you take photographs with a camera and then upload them to the Internet, there are numerous security considerations involved. The most trivial way you can deanonymize yourself with an uploaded photograph is through EXIF data. EXIF data is a form of metadata embedded in the image file by the camera that took it. This could include everything from the serial number of the camera (which may very well be linked to you in purchase records!) to the GPS coordinates at which the photograph was taken (which could be the location of your house, for example). In other cases, metadata may include thumbnails of the full image, which may remain unadulterated even if the actual image itself is modified in an editor program (in other words, attempts to redact part of an image may fail in that the redaction doesn't propagate to the thumbnail in the metadata).

A quick trick for removing metadata from an image is to load the image on your desktop such that it takes up the full screen, then to hit the 'print screen' button on your keyboard to screen shot your full desktop. After saving the screen shot of your desktop with the image in it, you can cut out the picture that you had being displayed and save it as a new image. The newly saved image will look identical to the original, but it will not have metadata in it.

Additionally, and this is useful if you need to scrub metadata from many images, you can use various tools to automate the process on many files simultaneously. I will leave finding such a tool for your operating system as an exercise to you; there are various different ones, though I would suggest doing a more in-depth examination of images after scrubbing them with any given tool, to ensure that the tool actually worked.
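To illustrate what such a tool does under the hood, here is a minimal sketch (my own illustration, not any particular tool) that removes EXIF data from a JPEG. EXIF lives in APP1 marker segments, so walking the file's marker segments and dropping APP1 strips it; a serious tool also handles edge cases this sketch ignores:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (Exif/XMP) segments from a JPEG by walking its markers."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            break                        # malformed; stop rather than corrupt
        marker = jpeg[i + 1]
        if marker == 0xDA:               # SOS: entropy-coded image data follows
            out += jpeg[i:]              # copy the rest verbatim
            break
        seg_len = int.from_bytes(jpeg[i + 2:i + 4], "big")  # includes the 2 length bytes
        if marker != 0xE1:               # keep every segment except APP1
            out += jpeg[i:i + 2 + seg_len]
        i += 2 + seg_len
    return bytes(out)
```

Note that this removes only standard marker-segment metadata; it does nothing about steganographic watermarks embedded in the pixel data itself, which is one reason to inspect output rather than trusting any single tool.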

Note that pictures are not the only sort of file which may contain deanonymizing metadata. This fact was made apparent to the BTK Serial Killer when he sent a floppy disk containing an office document to the police, who quickly found the name of the Church he belonged to (the computer of which BTK utilized to prepare the document), as well as his first name, in the metadata of the office document.

The disk contained one valid file bearing the message “this is a test” and directing police to read one of the accompanying index cards with instructions for further communications. In the “properties” section of the document, however, police found that the file had last been saved by someone named Dennis. They also found that the disk had been used at the Christ Lutheran Church and the Park City library.

Landwehr says Rader had taken pains to delete any identifying information from the disk. But he made the fatal mistake of taking the disk to his church to print out the file because the printer for his home computer wasn’t working.

“It’s pretty basic stuff,” Landwehr says about the reconstruction of the deleted information. “Anybody who knows anything about computers could figure it out.”

Of course there are numerous other risks in terms of photographs;

1. Every sort of item in the photograph can be enumerated, which can lead to intersection attack vulnerabilities in cases where, for example, numerous sorts of product are in the photograph that can have their customer lists enumerated.

2. Even if intersection attacks do not arise, information still leaks, for example even something as trivial as a patterning on a bagging material (presumably containing drugs!) may be used to narrow in on the location of the photograph (ie: that specific bagging material was known to only be distributed in specific areas).

3. Dust pattern correlations (and other correlational intelligence) on the sensor of the camera can be utilized to link photographs taken by the same camera together, as well as to link photographs to the camera that took them.

ABSTRACT: A problem associated with digital single lens reflex (DSLR) cameras is sensor dust. This problem arises due to dust particles attracted to the sensor, when the interchangeable lens is removed, creating a dust pattern in front of the imaging sensor. Sensor dust patterns reveal themselves as artifacts on the captured images and they become more visible at smaller aperture values. Since this pattern is not changed unless the sensor surface is cleaned, it can be used to match a given image to a source DSLR camera. In this paper, we propose a new source camera identification method based on sensor dust characteristics. Dust specks on the image are detected using intensity variations and shape features to form the dust pattern of the DSLR camera. Experimental results show that the method can be used to identify the source camera of an image at very low false positive rates.

Spidering programs can analyze photographs on social media sites comparing them to photographs of interest in dragnet attempts to link photographs of interest to photographs linked to suspects.

4. Fingerprints on camera lenses may remain as artifacts on the produced photographs (uncertain of this, need to find a forensic citation, but this is my conjecture).

5. It is theoretically possible for digital cameras to steganographically embed metadata into the photographs taken with them, such that the removal of trivial metadata does not actually completely remove the metadata from the photograph. This may be difficult to detect. It is possible that some digital cameras will watermark the photographs taken with them with information such as the camera serial number, or perhaps even GPS coordinates and such, in such a fashion that it is impossible for the human eye to detect, very difficult to scrub, and such that a secret key is required to extract and decrypt the information.
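As a toy illustration of how such an invisible embedding could work in principle, here is a minimal least-significant-bit scheme over raw pixel values (my own sketch; real camera watermarks, if they exist, would use far more robust schemes that survive re-encoding and editing):

```python
def embed_lsb(pixels, payload: bytes):
    """Hide payload bits in the least-significant bit of successive pixel
    values; each pixel changes by at most 1, invisible to the eye."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit
    return out

def extract_lsb(pixels, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the pixels' least-significant bits."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )
```

The point of the sketch is that such an embedding survives EXIF scrubbing entirely, since it lives in the pixel data rather than in any metadata segment.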

In general, low-sophistication digital cameras should be utilized (i.e. without even the documented ability to geotag), they should be purchased as anonymously as possible (i.e. not with a credit card), they should be compartmentalized to sensitive activities only, etc.

Anonymous Payment

This is a topic I haven't been paying a whole lot of attention to, but am still somewhat versed in. Back when I was most active we did not use Bitcoin, but rather used Pecunix and Liberty Reserve (and E-gold prior to them). Sometimes we would also use CIM (Cash In Mail), Greendot with reload packs (and identity theft to activate cash out cards), etc. I never actually got very into Bitcoin, though I have followed it somewhat and have some general idea of it. So although I will provide information to the best of my ability, and can assist you in learning the basics, this isn't my specialty.

Today Bitcoin is used almost exclusively, along with some altcoins. Pecunix is still around, though apparently in a new form; E-gold and Liberty Reserve were both busted and shut down for money laundering. In any case, Bitcoin can be thought of as the decentralized evolution of services such as Pecunix.

History of digital currency

To start off I will cover previous generation solutions such as Pecunix, so that I can then explain Bitcoin. Pecunix is essentially a centralized company based offshore that provides two services;

1. They have a vault with gold in it

2. They have a secure website that keeps track of how much gold each account has associated with it (also called simply 'Pecunix' to refer to the gold associated with an account)

Accounts could be obtained with no identification other than an E-mail address, and the website could traditionally be accessed with Tor. Pecunix did not sell gold, nor did they sell Pecunix (the term also used for the gold as represented in an account), other than in very large amounts (they would also allow trading large amounts of Pecunix for large amounts of gold, but would not send small amounts of gold for small amounts of Pecunix).

Pecunix essentially acted as the core of a larger network of layered exchangers. The exchangers would spend large amounts of money to buy pecunix from the main company, and then would be able to break it up into smaller amounts and send it to other pecunix accounts in arbitrary amounts. Most people would buy pecunix from exchangers, and would cash it out via exchangers as well. Only the big exchangers actually interacted with the central Pecunix company in terms of buying/selling gold or pecunix, everyone else worked with the exchangers.

After buying Pecunix from an exchanger you could send it to arbitrary other Pecunix accounts. This is how we used to pay for stuff on the underground forums (E-gold and Liberty Reserve worked in similar fashions). You would buy Pecunix from an exchanger, have it loaded to your anonymous Pecunix account, and then send it to the vendor's anonymous Pecunix account, after which the vendor would cash it out in various fashions (perhaps by selling it to another exchanger, frequently to exchangers that would provide anonymous ATM cards which could be shipped to boxes registered with fake IDs or similar, and then cashed out at ATMs). There were also some services for mixing Pecunix (I will cover mixing in more depth shortly), which attempted to unlink Pecunix that had been sent to a drug vendor's account from that which was loaded to an anonymous ATM card, as an additional layer of financial anonymity on the part of the vendor (various other techniques were also utilized, including cashing out through online casinos and similar).

Enter Bitcoin

Bitcoin is the evolution of these traditional centralized E-currency solutions. However, rather than a single company that keeps gold in a vault and issues digital currency tied to it by keeping track of it on a centralized website they control, Bitcoin has nothing backing it other than its own scarcity, the demand for it, and its use as a digital currency. Rather than being owned entirely by a centralized company that merely keeps track of who is issued how much of it at a given time, Bitcoin is entirely decentralized and can be "mined" by engaging in brute force computation: miners search for cryptographic hash values that fall below a difficulty target, which are found probabilistically by brute force guessing. The more computational power dedicated to guessing, the more guesses you can make, and therefore the higher the probability that such a hash will be found (in the same fashion that if you buy many random lottery tickets you are more likely to get a winner). When a miner successfully mines bitcoins, they are added to their wallet (covered shortly).
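The brute force search miners perform can be sketched in a few lines. This is a deliberately simplified toy (real Bitcoin hashes an 80-byte block header with double SHA-256 and a much harder target), but the lottery-ticket mechanic is the same:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(block_data + nonce) falls below a
    difficulty target -- a toy version of Bitcoin's proof-of-work search."""
    target = 1 << (256 - difficulty_bits)   # smaller target = more work needed
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                    # expected after ~2**difficulty_bits tries
        nonce += 1
```

Each extra difficulty bit doubles the expected number of guesses, which is exactly why aggregate network hash power, rather than any central issuer, controls how fast new coins appear.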

The Bitcoin protocol harnesses the sum computational power of all of the "miners" to protect it from various attacks, such as counterfeiting (called 'double spend attacks' in Bitcoin nomenclature). The Bitcoin protocol, and the network of miners, also keep track of which "wallets" hold which bitcoins. In the context of Bitcoin, a "wallet" is really a set of cryptographic signing keys that allows the holder to sign transactions authorizing the transfer of bitcoins they hold to other wallets; these transactions are then uploaded to the Bitcoin network and recorded in a permanent, massively distributed blockchain, which is the data the miners perform their proof-of-work computations over.
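The "permanent" quality of the blockchain comes from hash chaining: each block commits to the hash of the one before it, so rewriting history anywhere breaks every later link. A minimal sketch of that structure (toy transactions, no proof-of-work or signatures):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of a block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(transactions):
    """Link each block to its predecessor by embedding the previous hash."""
    chain, prev = [], "0" * 64
    for tx in transactions:
        block = {"prev": prev, "tx": tx}
        chain.append(block)
        prev = block_hash(block)
    return chain
```

In real Bitcoin, each block's hash must also satisfy the difficulty target, so rewriting an old block additionally means redoing all the proof-of-work built on top of it, which is what makes double spends computationally impractical.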

Bitcoin essentially serves as the Pecunix Company component of the modern digital currency situation, however there are still exchangers that serve the role of cashing bitcoin in and out. In other words, if you want to spend regular money to buy bitcoin, you will do so at an exchanger, or through some exchange website, that is actually not part of the bitcoin network. These exchangers function essentially the same as the previous generation Pecunix exchangers did (many actually probably additionally exchange Pecunix today still, and will convert between Pecunix and Bitcoin and vice-versa), however there is not a centralized company they can trade the bitcoins into for bars of gold, rather they buy bitcoins and sell bitcoins to arbitrary customers and take a margin of the value that passes through them as their profit.

You don't need a great technical understanding of Bitcoin to use it, and I will spare you an overly technical explanation (indeed, I don't know Bitcoin to very much technical depth; it is quite complex and I've not had reason to learn it so thoroughly). Most of the technical aspects, such as private signing keys, are abstracted away behind fairly intuitive GUI clients, of which there are numerous (Electrum being one popular one). Note that the wallet you use takes the place of the account on a previous-generation centralized digital currency provider's website, so rather than logging into Pecunix with your E-mail and password (and PIN!), you instead run your Bitcoin client program on your desktop, laptop, or other device. Also, the private keys that are the basis of your wallet, and which give you ownership of your bitcoin, are stored on your local computer (make backups!), completely under your own control. You don't even need to provide an E-mail to anything!

Bitcoin Anonymity

Bitcoin anonymity covers numerous distinct concepts: staying anonymous on the Internet with Bitcoin, anonymously converting other forms of currency to Bitcoin in your wallet (cashing in), cashing out bitcoin from your wallet to other forms of currency, and unlinking bitcoins from their source and/or destination.

In regards to Internet anonymity with Bitcoin, realize that your Bitcoin client will indeed connect to the Internet, as it must in order to download and keep up to date the entire blockchain (which is many gigabytes), and to insert signed transactions into the network when you want to send bitcoins to people (they can tell when you have sent them bitcoin because this is logged on the blockchain, of which everyone on the Bitcoin network keeps a complete, up-to-date copy). You must configure your Bitcoin client to access the network via Tor prior to generating any wallets you will use for anything sensitive, and should only ever let it connect to the Internet over Tor, to protect your anonymity.

In regards to anonymously cashing bitcoin in, there are numerous techniques to take into consideration. A good deal of care should be taken to cash in as anonymously as possible, because bitcoins are at most as anonymous as the cash-in methodology. Although I will cover this more in discussing bitcoin unlinking, keep in mind that the entire Bitcoin transaction history is public, so anyone can see which wallet received which bitcoins from which wallet, and which wallet sent which bitcoins to which wallet, making Bitcoin inherently non-anonymous in this sense. In other words, say you have a wallet, even one always run over Tor for Internet anonymity, and you use a bank account linked to your IRL identity to wire funds to an exchanger in return for bitcoins sent to that wallet, and then send bitcoins from that wallet to a drug vendor's wallet. There is now a link between your bank account and the drug vendor's wallet that can be determined by viewing the public blockchain: the wallet of the exchanger sent bitcoin to your wallet, and your wallet sent bitcoin to a drug vendor's wallet. All that is needed is to link your wallet to your IRL identity, which the exchanger can do, since they will have kept a log of the fact that your bank account sent them the original funds (furthermore, mainstream exchangers will want various forms of identification just to make an account on their exchange websites in the first place, though there are some that will not, particularly underground ones).
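The "anyone can walk the chain" point is worth making concrete. Here is a toy sketch (all wallet names are hypothetical) showing that, given a public ledger of who paid whom, following the flow of coins from the exchanger onward is a trivial graph traversal:

```python
# Toy public ledger: each wallet maps to the wallets it has paid.
# This is exactly the information the real blockchain exposes to everyone.
ledger = {
    "exchanger_wallet": ["your_wallet"],
    "your_wallet": ["vendor_wallet"],
}

def reachable(ledger, start):
    """Return every wallet that coins leaving `start` can be traced to."""
    seen, stack = set(), [start]
    while stack:
        wallet = stack.pop()
        for nxt in ledger.get(wallet, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(reachable(ledger, "exchanger_wallet"))
# The exchanger's own records tie "your_wallet" to your bank account;
# the public chain then ties it onward to "vendor_wallet".
```

The hard part for an attacker is never the traversal; it is linking any one wallet to an IRL identity, which is exactly what a non-anonymous cash-in hands them for free.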

It should be noted that when registering on bitcoin exchanger websites, you will want to use anonymously registered E-mail accounts and fake information to the extent possible (or entirely, if anonymity is paramount), and to register from behind Tor (possibly Tor to PHP exit proxies that are not in block lists, or similar Tor -> Other Exit Mechanism techniques, in the case that the exchanger blocks known proxy IP addresses).

It should also be noted that many exchangers require personal information to open accounts, perhaps even ID scans and similar; they are trying to comply with anti-money-laundering laws. There are typically more underground exchangers that don't have these requirements, and usually you can find them if you dig around on Google. I don't know any traditional exchangers like this off the top of my head, though (wm-center was one, but they were shut down by LE; you can see a nice seizure notice on their website wm-center.com, though you may want to use Tor if you give it a look since it is under the control of LE now).

In many cases people do give this information even when their intent is to buy small amounts of drugs, and although I cannot say I would suggest this (and in fact I would not), it usually doesn't end up fucking anyone over if they are only buying small amounts; in general LE is not trying very hard to bust someone ordering ten hits of LSD or similar. I would never actually suggest skipping at least basic measures to anonymize your bitcoins prior to buying drugs with them, but I do know people who have gotten bitcoin without much anonymity in furtherance of buying small amounts of drugs, and nothing bad has happened to them.

With this background covered, I will now explain some mechanisms by which the anonymity of the cash-in may be obtained, in the event that the account on the exchanger is registered anonymously to begin with (which is obviously required for the cash-in to have any anonymity at all).

LocalBitcoins: is different from traditional exchangers in being more of a communication hub through which arbitrary people can hook up to directly sell or buy bitcoins from each other, rather than a more centralized exchanger that buys and sells bitcoins from its clients. I would say this model is superior, and suggest utilizing this exchanger, especially because to my knowledge they don't have strict ID requirements; the various users of the site each have their own requirements for the ID you must provide for them to work with you, making it possible to buy bitcoins without an ID. The people selling bitcoin on this service take payment in a variety of ways, ranging from bank wires to the more anonymity-friendly cash in the mail, etc. Note that some bitcoin exchangers are scammers, as are some of the people on sites like localbitcoins, and you should always do background research before sending money.

Cash In Mail: is one way to anonymize your cash-in, reducing its anonymity to that of posting packages. If anonymity is of the utmost concern you will want to use drug-vendor opsec when obtaining and packaging the cash; in other words, be concerned about fingerprints in particular, including on the bills themselves. I already covered much about drugs in the mail, and cash in the mail is somewhat similar, so I will refer you to that section of this guide.

Western Union / Moneygram: is typically harder to do anonymously than cash in the mail, in that you will usually at least be on CCTV when sending the wire (though some WU locations may not have CCTV if you look around, most of them will). However, you can still give yourself some degree of anonymity. Take care to disguise yourself to some extent while sending the money wire: at the very least cover identifying marks such as tattoos with long-sleeved shirts, long pants, etc. Wigs are a possibility here, perhaps hats if it can be done in a non-suspicious way, the same for glasses. In general you want to avoid looking suspicious whilst simultaneously covering and disguising yourself as much as possible.

The forms you need to fill out are tricky in that they will catch fingerprints, and therefore you must take care to avoid touching them with your fingers; however, wearing gloves needs to be approached with caution, as you don't want to appear to be a robber either. Typically you can grab the forms and fill them out prior to taking them to the counter to be processed; in such a case you can grab the forms with gloves and fill them out in privacy without touching them. Care can even be taken to use stencils to avoid leaving handwriting that can later be analyzed (though a signature is required too, I believe).

Various tricks can be utilized when handing the form to the clerk who processes it; for example, you can keep it in a folder that prevents you from touching it and then drop it out of the folder onto the counter, just as an example off the top of my head. In any case, you may not need an ID to send a Western Union wire; in the past, anyway, only one side of a transaction needed identification for amounts under a certain threshold, so you can maintain anonymity by not needing to present an ID in the first place (of course you will be filling out fake information as well, other than that which is required to be accurate), and using a fake signature and name, etc. In the event that you do need an ID this method is even harder to anonymize, though I've had no issue sending and receiving Western Union with fake IDs in the past ^_^. Keep in mind this is actually considered wire fraud, I believe, even though you aren't stealing anything.

Bank Wires and A Trick: One trick you can use is to buy a traditional E-currency such as Pecunix, and then to cash it out via one exchanger to another exchanger rather than to yourself (when interacting with the first exchanger, you present the cash-in information of the second exchanger as if it were your own cash-out information). This is expensive in that you need to pay multiple exchanger fees, but it allows you to bounce money around multiple jurisdictions, and provides some unlinkability, essentially like a chain of proxies (we will cover financial unlinkability in more depth soon). For example:

This will accumulate three exchanger fees (cash in to exchanger one, cash out to exchanger two, cash in to exchanger three for bitcoins) and is quite expensive, but note that exchanger three (which is where you got bitcoins from) has a log only of the bank account of exchanger two, rather than your bank account, and that exchanger one may be in one jurisdiction, two in another, three in another, and Pecunix in another entirely. This provides some degree of anonymity to the bitcoin that eventually ends up in your wallet, in that the trail leading back to you is scattered across multiple countries (however, it could have been made even more anonymous if the original link between you and exchanger one were not so tangible, perhaps by having sent cash in the mail rather than a bank wire for the initial cash-in!).
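To see what "quite expensive" means in practice, the compounding of per-hop fees is easy to work out. The 5% figure below is purely hypothetical; real exchanger margins vary:

```python
def after_fees(amount: float, fees: list[float]) -> float:
    """Apply each exchanger's percentage fee in sequence.
    Fees compound multiplicatively, not additively."""
    for fee in fees:
        amount *= (1 - fee)
    return amount

# Hypothetical 5% fee at each of the three hops described above:
# 1000 * 0.95 * 0.95 * 0.95, i.e. roughly 857 of the original 1000.
remaining = after_fees(1000.0, [0.05, 0.05, 0.05])
print(f"{remaining:.2f}")
```

So three 5% hops cost about 14.3% overall, not 15%, but each additional layering hop eats a similar slice of whatever is left.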

Utilizing this technique you can cash in to one exchanger with any method another exchanger will cash out with, which allows you to send anonymous bank wires, though as previously mentioned exchanger fees get expensive. Although this arguably provides some anonymity (though I'm not particularly well versed in FININT, and this is very much in the realm of classical FININT rather than the more technical modern realm, so take this for what it is worth), I would not want to rely on it if the utmost anonymity is required for the finances. Rather, I would opt to cash in anonymously to exchanger one in the first place, to remove that initial link entirely rather than just obfuscating it behind layering, since external attackers may be positioned to perform FININT traffic analysis. I don't know the classical financial networks well enough to say who would be positioned to confirm that link, but it seems resistant to analysis unless there is a globally positioned external FININT attacker (I seem to recall a friend indicating there may be one, now that I mention it, so I would want to really look into this; how easily they could correlate it is also in question, but I'm going off on a tangent here).

Other Methods: There are various other payment options people may take, ranging from Greendot reload pack numbers to even meeting up in person with cash (particularly on localbitcoins). These methods of obtaining bitcoin can be anonymized with the same general opsec as previously covered; I think I've given you a bit of an overview, and you can abstract things yourself from this point.

In regards to anonymously cashing bitcoin out, there are numerous techniques. One of the classical techniques was the acquisition of anonymous ATM cards from exchangers (sent either to random residential mailboxes and intercepted prior to the tenant of the home getting their mail, or to PO boxes or PMBs registered with fake identification), which could be loaded with the E-currency and then cashed out at ATMs. Various security considerations can be utilized when cashing out at an ATM, the general goal being, of course, to obfuscate your appearance such that CCTV cannot identify you (while simultaneously not looking like you are about to rob someone) [or being in the middle of nowhere at a deserted ATM, in which case you can look more like you are about to rob someone]. Of course you will want to keep in mind the previously mentioned intersection attacks: even if you are not identifiable when you cash out, the fact that you cashed out with that card at a given ATM is recorded, and if you parked nearby, well, if your city has license plate scanners geopositioning cars, they can draw a radius around the ATM at the time of each detected cash-out, and over multiple such instances they can intersect the sets of cars that were parked nearby to narrow in on your car, which is linked to you.
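The intersection attack described above really is just set intersection. A toy sketch with made-up plate numbers: each cash-out yields the set of plates scanned near the ATM around that time, and repeating the attack shrinks the suspect pool fast:

```python
# Hypothetical plate-scanner logs: plates seen near the ATM around
# each detected cash-out made with the same card.
sightings = [
    {"AAA111", "BBB222", "CCC333"},   # cash-out 1
    {"BBB222", "DDD444", "EEE555"},   # cash-out 2
    {"BBB222", "CCC333", "FFF666"},   # cash-out 3
]

# Only a plate present at every cash-out survives the intersection.
suspects = set.intersection(*sightings)
print(suspects)
```

With realistic crowd sizes the pool still collapses after a handful of observations, which is why repeating the same pattern (same card, driving and parking nearby) is what actually breaks the anonymity, not any single cash-out.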

You can also cash out your bitcoin through exchangers via other means (or via localbitcoins, which is sort of an exchanger but different, as previously mentioned). However, seeing as cashing out generally indicates a high-risk threat model, you will want to ensure that the cash-out is anonymous, which pretty much means you definitely don't want to use exchangers that require identification (unless you use completely fake identification, including pictures that are not of you; seeing as they never see you in person anyway there is no need to use your real picture, and the people who make fake IDs will also do fake scans with arbitrary pictures). This could entail anything from cash in the mail to random residential mailboxes [intercepted prior to the tenant], or a fake-ID PO box/PMB, to Western Union or Moneygram with fake ID, etc.

Additionally, if you know people who are looking to buy bitcoin, you can always avoid exchangers entirely and work with them. This can be to both of your benefit in the situation that you have a trust relationship with them, because it avoids the need for exchanger fees, and it avoids the exchangers entirely. However, there is HUMINT risk here, in that the person may be an enemy agent (in which case they may stake out the PMB etc. in an attempt to compromise you [of course nothing says you need to go pick it up yourself; you can have workers do this while monitoring them for compromise, though this is getting into major-league-tier shit and I'm kind of going off on a tangent again, ugh]).

In terms of unlinking bitcoin: first I should explain the general concept. We've already gone over this actually, in the example of layering through multiple exchangers. However, there are numerous other ways to unlink bitcoins. This draws on the classical traffic analysis primitive known as mixing, and the services that do this are formally called "financial mixes", though colloquially they are sometimes called tumblers (which irritates the shit out of me). There are numerous ways these can be implemented:

Simple financial mixes are websites (or other interfaces) that allow you to deposit bitcoins into a wallet, in return for which they give you an IOU that allows you to withdraw the same amount of bitcoins you put in (perhaps minus a small fee). The theory goes that you and a dozen others insert identical amounts of bitcoins, respectively linked to each of you, into the mix, and get the IOUs. Then, you make new Bitcoin wallets. After this, you withdraw the bitcoins from the mix into your new wallets using the IOUs (maintaining Internet-level unlinkability between withdrawal and deposit by using Tor, and rotating circuits / new identity between the deposit and the withdrawal [you must unlink the wallets on the Internet layer as well as the interactions with the mix interface]). Now an external attacker, who observes the blockchain but not the internal state of the mix, cannot tell which wallet in is associated with which wallet out: it saw a total of 13 wallets put identical amounts of bitcoin into the mix, and 13 wallets withdraw identical amounts of bitcoin from the mix, but it cannot tell which of the 13 wallets that got money out are associated with which of the 13 wallets that put money in. This has unlinked the bitcoins out from the bitcoins in, to an anonymity set size of 13 (ie: any of the 13 people who put bitcoins in could own any given one of the 13 wallets that got bitcoin out).
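The mechanics above can be modeled in a few lines. This is a toy simulation of a simple mix, purely to show why identical amounts matter: the external view is just two unordered sets of equal-valued transactions, with nothing to pair an input to an output:

```python
import random

def run_mix(deposits):
    """Toy simple mix: everyone deposits the same amount, and
    withdrawals go to fresh wallets in shuffled order. An observer
    of the public ledger sees N identical deposits and N identical
    withdrawals, with no way to pair them up."""
    amounts = {amount for _, amount in deposits}
    assert len(amounts) == 1, "amounts must be identical or volumes link you"
    fresh_wallets = [f"new_wallet_{i}" for i in range(len(deposits))]
    random.shuffle(fresh_wallets)
    amount = deposits[0][1]
    return [(wallet, amount) for wallet in fresh_wallets]

# 13 depositors, identical amounts: anonymity set size 13.
deposits = [(f"old_wallet_{i}", 1.0) for i in range(13)]
withdrawals = run_mix(deposits)
```

The assert is the important line: if one participant deposits a distinctive amount, volume analysis alone re-links their output, which is exactly the weakness the Zerocash discussion below returns to.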

Blind financial mixes work by the same principle as simple financial mixes, but they use a cryptographic primitive known as an "unlinkable blind cryptographic signature" for their IOUs, which prevents even the owner of the mix from linking wallets in with wallets out (thus blind mixes also protect from internal attackers, whereas simple mixes only protect from external attackers and require you to trust the owner of the mix).
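The blind-signature trick is worth seeing concretely. Below is a toy RSA blind signature in the style of Chaum's scheme, with tiny illustrative parameters (real systems use full-size keys): the mix signs the blinded IOU without ever seeing the message, yet after unblinding the signature verifies on the original message, so the mix cannot later match IOUs to deposits.

```python
# Toy RSA blind signature: the mix signs an IOU without seeing it.
# Parameters are tiny and purely illustrative.
p, q = 61, 53
n = p * q                                 # RSA modulus (3233)
e = 17                                    # public exponent
d = pow(e, -1, (p - 1) * (q - 1))         # private exponent

msg = 42                                  # stands in for a hash of the IOU
r = 7                                     # user's secret blinding factor, coprime to n

blinded = (msg * pow(r, e, n)) % n        # user blinds the message
blind_sig = pow(blinded, d, n)            # mix signs, seeing only the blinded value
sig = (blind_sig * pow(r, -1, n)) % n     # user strips the blinding factor

assert pow(sig, e, n) == msg              # valid signature on the original message
```

Because the mix only ever saw `blinded`, which looks random without knowledge of `r`, even a fully malicious mix operator cannot link the deposit-time signing to the withdrawal-time IOU; that is the internal-attacker protection that simple mixes lack.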

A third type of mix is essentially a blind zero-knowledge financial mix, of which Zerocash is the only one I'm aware of, but it is theoretical and not completely implemented yet. Zerocash has numerous anonymity advantages, and we need it to be implemented because it is going to massively increase the ability of people to anonymously transfer and receive finances. Zerocash protects from various traffic analysis attacks that both simple and blind financial mixes are weak to, primarily the fact that volume analysis can over time be used to infer things. In other words, attackers can still see total bitcoins in and out with simple and blind financial mixes; they just cannot intrinsically link them, but if volumes differ they can link based on volumes. Multiple wallets can be utilized to try to protect from this, but that turns into an extremely complex field of forensics and traffic analysis, whereas Zerocash obfuscates volumes as well.

Zerocash is a new protocol that provides a privacy-preserving version of Bitcoin (or a similar currency).

Zerocash fixes an inherent weakness of Bitcoin: every user's payment history is recorded in public view on the block chain, and is thus readily available to anyone. While there are techniques to obfuscate this information, they are problematic and ineffective. Instead, in Zerocash, users may pay one another directly, via payment transactions that reveal neither the origin, destination, or amount of the payment. This is a marked improvement compared to Bitcoin (and similar decentralized digital currencies), where every payment's information is made public for the whole world to see.

Zerocash improves on an earlier protocol, Zerocoin, developed by some of the same authors, both in functionality (Zerocoin only hides a payment's origin, but not its destination or amount) and in efficiency (Zerocash transactions are less than 1KB and take less than 6ms to verify).

How does Zerocash work?

Zerocash extends the protocol and software underlying Bitcoin by adding new, privacy-preserving payments. In doing so it forms a new protocol that, while using some of the same technology and software as Bitcoin, is distinct from it. This new protocol has both anonymous coins, dubbed zerocoins, and non-anonymous ones, which, for purposes of disambiguation, we call basecoins. In contrast to Bitcoin's transactions, payment transactions using the Zerocash protocol do not contain any public information about the payment's origin, destination, or amount; instead, the correctness of the transaction is demonstrated via the use of a zero-knowledge proof. Users can convert from basecoins to zerocoins, send zerocoins to other users, and split or merge zerocoins they own in any way that preserves the total value. Users may also convert zerocoins back into basecoins, though in principle this is not necessary: all transactions can be made in terms of zerocoins.

There are also various other styles of modern bitcoin mixes, which I'm not very familiar with; some of them are P2P, for example. Here is an example of one I found on Google (I know nothing about it, but it is worth looking into these modern bitcoin mixes): https://www.comsys.rwth-aachen.de/filea ... nparty.pdf

There are other ways to unlink bitcoins besides layering or mixing. For example, say someone were to send me 10 btc right now, but I don't want them to be linked to "mrz". I could simply middle man 10btc worth of drugs, whereby I find a customer who wants that sort of drug and buy it for him with btc (acting as the original vendor), and in turn he pays me 10btc. The 10btc I got from him are no longer linked to "mrz", and the 10btc linked to "mrz" were spent on drugs going to someone entirely different!

Other techniques we've used include running E-currency through online casinos (maybe even gambling a bit, but not much), and then cashing the E-currency out to a different account. This essentially utilizes the online casino as a simple financial mix, though volume analysis can still link, of course (gambling a bit makes perfect correlation harder for an external attacker watching coins in and out but unable to see the internal state).

Also, I believe there is something to be said for cashing coins of one sort into coins of another sort, and then back again over different wallets. There are many obscure altcoins with exchange markets for them. By cashing through various blockchains you can somewhat obfuscate things from attackers who don't analyze all of the blockchains but rather focus on one, like Bitcoin's. Also, I would always suggest bouncing bitcoins around through various wallets; even though the blockchain is public, so attackers can follow the flow of coins throughout the network, you do gain some plausible deniability. For example:

Provided you keep both of the wallets unlinkable on the Internet level, in the event anyone bothers you about why you sent money to a drug vendor's wallet, you can always claim you actually sent the coins to the wallet of someone else (Your New Wallet), and that you have no idea who it belonged to nor what he did with them (of course you would only talk through an attorney). I think this is superior to the situation in which you send directly from your original wallet to the drug vendor's wallet. Of course, in practice the drug vendor / market is possibly taking measures to avoid the wallet being identified as that of a drug vendor by anyone other than you anyway, but eventually I imagine they are merging their wallets and are vulnerable to traffic analysis piecing things together; it's hard to say, though, this is a very complex subject.

Scam Protection

Bitcoin has the ability for what is called double-signature escrow. This allows coins to be locked by two signatures, such that they cannot be transferred unless two wallets sign for the transfer. This takes the profitability away from scamming, as follows:

1. A customer places an order with a vendor, locking his bitcoins such that they cannot be transferred without a signature from both the vendor and the customer.

2. The customer waits to receive his order. If he never receives the order he never signs to release the bitcoins to the vendor, thereby removing any financial gain from the vendor if he acts as a scammer.

3. If the customer receives his order, he signs for the release of the bitcoins to the vendor's wallet, and the vendor accepts them by signing authorization for the transfer.

4. If the customer doesn't sign to release the bitcoins to the vendor, he cannot spend them anywhere else anyway, because the vendor will never authorize this. Therefore, the customer has no incentive not to release the bitcoins to the vendor, and additionally risks his reputation if he doesn't release them.
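The incentive structure of the four steps above can be modeled with a toy class. This is not how Bitcoin multisig scripts actually work at the protocol level; it just models the 2-of-2 release rule that makes scamming unprofitable for either side:

```python
class EscrowedCoins:
    """Toy 2-of-2 escrow: the coins move only once both the customer
    and the vendor have signed. Until then, neither party can spend
    them anywhere, so neither gains by refusing to cooperate."""

    def __init__(self, amount: float):
        self.amount = amount
        self.signatures = set()
        self.released = False

    def sign(self, party: str):
        self.signatures.add(party)
        if {"customer", "vendor"} <= self.signatures:
            self.released = True  # both signed: coins go to the vendor

escrow = EscrowedCoins(0.5)
escrow.sign("customer")      # order arrived, customer releases
assert not escrow.released   # still locked: vendor hasn't accepted yet
escrow.sign("vendor")
assert escrow.released       # both signatures present, transfer happens
```

The key property is that a lone signature changes nothing: a scamming vendor gets no coins, and a spiteful customer gets no refund, so cooperation is the only profitable path.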

Some Notes On Strategy

When it comes to acquiring sensitive media (such as digitized Bibles in China!), there are various strategies that may be utilized. Some people operate with full amnesia, such that they save nothing and reacquire content every session. I would caution against this strategy because it increases exposure time, and in the case of Tails it increases entry guard rotation as well (which is really just another manifestation of increased exposure, in that the more entry guards you are exposed to, the more likely it is that one of them will eventually be controlled by an attacker). Rather, I would suggest minimizing Internet-connected sessions to the bare minimum, opting instead to stockpile material locally, encrypted, and protected with an adequately entropic password (the characteristics of which are explained in the chapter on passwords). In the case of Tails, this means encrypted persistence. In the case of Whonix, it means FDE, which is separate from Whonix itself and applies to the host on which the Whonix VMs are run.
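For reference, "adequately entropic" is easy to quantify: the entropy of a uniformly random password is its length times the base-2 log of the character set size. A quick sketch (the 128-bit threshold is a common rule of thumb, not a figure from this guide):

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Entropy of a password whose characters are chosen uniformly
    at random: length * log2(size of the character set)."""
    return length * math.log2(charset_size)

# e.g. 20 random characters drawn from the 94 printable ASCII symbols
# gives roughly 131 bits, comfortably past the common 128-bit target.
print(f"{entropy_bits(20, 94):.1f} bits")
```

Note this only holds for genuinely random passwords; human-chosen phrases have far less entropy than this formula suggests.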

The security benefits of minimizing sessions are obvious: in many cases a site will run for months or years prior to being compromised, after which it will function as a poisoned watering hole, giving the attacker a position from which to target those who connect to it (whether they can actually compromise them is another question entirely). Usually poisoned watering holes will only operate for a week or two prior to being taken down. Therefore, an individual who connects to such a site on a daily basis will almost certainly be exposed to it in the event that it becomes a poisoned watering hole, whereas the individual who connects to it only once a month (and stockpiles all content of interest for later access from the local cache) may very well entirely avoid the site while it is operating as a poisoned watering hole.

Additionally, one can look at the RELAY EARLY attack, or really any sybil (node flooding) + confirmation attack, and see that by rotating entry guards rapidly (as will happen if you use Tails and frequently engage in new sessions), you increase the probability that you will eventually use an attacker-controlled entry node. This is quite easy to visualize: if you imagine a bag with marbles in it, some of them red and some of them blue, you are more likely to select a red marble if you pull more marbles out of the bag.
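The marble intuition is just the standard at-least-one-hit probability. A quick sketch, with a purely hypothetical assumption that the attacker controls 5% of guard capacity and that each rotation is an independent draw:

```python
def p_hit_bad_guard(p_bad: float, rotations: int) -> float:
    """Probability that at least one of `rotations` independently
    chosen entry guards is attacker-controlled:
    1 - P(every draw misses) = 1 - (1 - p_bad) ** rotations."""
    return 1 - (1 - p_bad) ** rotations

# Hypothetical: attacker controls 5% of guard capacity.
for rotations in (1, 30, 365):
    print(rotations, round(p_hit_bad_guard(0.05, rotations), 3))
```

With one guard kept long-term you face roughly a one-in-twenty chance; rotate daily for a month and the odds are already worse than three in four, which is exactly why persistent entry guards were introduced.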

Of course there are some security disadvantages associated with stockpiling to a local cache which is then accessed in place of the remote storage to the greatest extent possible; primarily, it results in the local storage of potentially incriminating files, and particularly the accumulation of large amounts of said files. This is, of course, partially mitigated via the utilization of proper encryption with adequately entropic passwords. In other words, stockpiling arguably decreases security (and increases the severity of a compromise) in the face of an attacker who raids (whilst decreasing the probability of being raided in the first place), whereas accessing from a remote store increases the probability of being raided (via increased exposure) but possibly increases security in the face of an attacker who raids (assuming, of course, that forensic trace evidence is adequately mitigated against, as it will be in the event Tails is utilized). My opinion is that this is an acceptable trade-off, and that the benefits of stockpiling outweigh the risks; I would much prefer to reduce sessions by an order of magnitude or more whilst having encrypted content locally, than to have an order of magnitude or more sessions exposing me to risk in the first place. However, you should indeed weigh the benefits and risks of both strategies. Particularly, you should keep in mind that in some countries LE can compel you to decrypt encrypted content, though this can be partially mitigated with plausibly deniable encryption. In the USA, LE may hold you in contempt of court for refusing to decrypt encrypted content; however, this has not made it to the Supreme Court, and in many cases they will not attempt to compel you to decrypt encrypted data unless they have secondary evidence that the encrypted data is illegal, such as forensic trace evidence indicating as much.

This brings me to my next point, which is the importance of compartmentalization. Tails is actually very good in this respect, seeing as it can be booted from a USB when security is needed, without contaminating the primary drive with forensic trace evidence, after which it can be shut down and the main drive can be utilized for normal activities. This reduces the amount of time in which forensic trace evidence is in a vulnerable, non-encrypted state (FDE only protects the contents of a drive while the system is shut down, or booted up but prior to the password being entered). This is a very beneficial property, and a form of compartmentalization: sensitive activities are compartmentalized to the Tails system, which is only booted up and utilized when required, and non-sensitive activities are compartmentalized to the regular system, which can be booted up and in a vulnerable state precisely because it has no relation to sensitive activities.

Whonix is not as oriented toward inherent compartmentalization, in that many people who use it will do so from a machine, and host operating system, which they also use for non-sensitive activities. Even if FDE is utilized on the host system, the entire time it is mounted (ie: the system is powered on and the password has been entered), the user is in a vulnerable state in which the seizure of the powered-on machine can reveal forensic trace evidence associated with the Whonix sessions. To compartmentalize Whonix, one should use a secondary laptop for it, dedicated entirely to darknet activity, and should only power it on when they need to use it, otherwise keeping it powered off and using another machine for their regular Internet activities. In fact, it cannot hurt to compartmentalize Tails in such a fashion either, though this isn't as strictly required, due to the inherent compartmentalization of typical Tails use (ie: when Tails is booted on bare metal rather than in a virtual machine; although it is possible to boot Tails in a virtual machine, this negates many of the security advantages of using it in the first place). One advantage of using a separate machine for Tails is the ease with which it can be physically discarded to unlink it from yourself prior to being compromised; after all, the machine has hardware serial numbers and other identifiers which can be remotely determined and used in a forensic context (in the event of an application-layer compromise).

It should be noted that a local external attacker, such as one at your ISP (or even someone who taps your WiFi or wired Internet), can observe the entry guards you are currently using. Nothing stops them from determining when you are using Tor, nor which Tor entry guards you are connecting to. One artifact of compartmentalization is that such a local external attacker may be able to infer when you are engaging in sensitive activities by monitoring your Tor entry guard selections, or even just by noting when you are using Tor versus when you are not; this may allow them to make intelligent guesses about when your system is in a state vulnerable to raids. In the case that you use Tails, you may weaken their ability to do this by additionally booting Tails in a VM from your non-sensitive machine/OS (note that running Tails in a VM negates many of its security advantages and should not be done when security is actually required), and making some use of the Tor Browser from within it for non-sensitive activities; this works because Tails rotates guards between sessions anyway, so such an attacker will have difficulty distinguishing the sensitive, compartmentalized Tails use from the non-sensitive Tails use on the non-sensitive machine.

Compartmentalization of course extends to more than using separate machines and/or operating systems for sensitive versus non-sensitive activities. Indeed, compartmentalization is also related to the previously mentioned concept of isolation; for example, different applications may be compartmentalized to different virtual machines which isolate them from each other. This is the security strategy utilized by Qubes-OS.

Qubes allows for the construction of various security domains, which are hypervisor-isolated compartments. For an attacker to move from the domain represented by the window with a green border to the one represented by the red border, they will need to break out of the compartment, which requires an exploit for the hypervisor, the underlying hardware, or similar. Using multiple virtually isolated security domains is beneficial in that a compromise of an application in one security domain has difficulty spreading to the others. Though using a single security domain as Whonix does (technically two but practically one) is advantageous in isolating the workstation OS (and the ability to bypass Tor) from the host OS, a compromise of the Whonix workstation results in the compromise of the entire security domain it establishes, which is utilized for numerous sorts of applications (the browser, instant message programs, etc). Conversely, each of these applications may be run in its own security domain, requiring penetration of the hypervisor isolation to move between them, as is the case with Qubes-OS.

Compartmentalization extends to various other things as well. A surprisingly common mistake many people make is using usernames for sensitive activities that they also use for non-sensitive activities, often not engaging in security practices during the non-sensitive activities and thereby linking the username to their IRL identity. Of course, no information, be it a username, E-mail address, or anything else, should be shared between sensitive and non-sensitive activities, and any information linked to sensitive activities must only ever be utilized in a secured context.

Compartmentalization actually has numerous manifestations. In the context of ordering drugs on the Internet, it is wise to compartmentalize your shipping location from your stockpile of drugs, thereby reducing the severity of a compromise (ie: if a given package is intercepted, resulting in a raid, the attacker will only find the package that was intercepted, rather than an additional cache of drugs at the shipping location). Even temporary compartmentalization, in the form of keeping drugs at a safe house while incoming shipments are en route (and maybe for a day or so afterwards), can reduce the severity of a compromise. Of course there should be no intelligence indicating the safe house is being utilized as such (ie: if there are text messages recovered from your phone or similar, they will of course just additionally raid the safe house). Compartmentalization in the form of using numerous fake ID PMBs can also be useful; if various vendors each know only one of your N fake ID PMBs or similar, they will only be able to compromise one of them in the event they turn malicious. PMBs and such can of course be dropped upon learning of a compromise, or after some period of time as they accumulate exposure. Of course if random residential mailboxes are utilized, they can likewise be compartmentalized.

Another aspect of compartmentalization is in the use of pseudonyms on underground websites. For example, if you socially interact on underground forums, there is benefit to be had in placing orders under a separate pseudonym, and even in making new pseudonyms for every order. On one hand this weakens the reputation systems, but on the other hand it compartmentalizes your risk. For example, I'm quite infamous on the drug forum scene due to being a pioneering member of it, heavily involved with it, and pretty much a driving force behind it from like 2006 to 2013, but I am also infamous there for my love of JBs, which pisses some people off, so I would use a throwaway account not linked to my established identity there if I were to place any orders from random vendors, due to the fact that I wouldn't want anyone in the anti-JB crew to know my shipping information or anything about me, even my city etc. This is a manifestation of unlinkability and compartmentalization.

Additionally, the previously mentioned pseudonym unlinkability is important for vendors as well. For one, you should avoid socialization to the extent possible in the first place, because you will end up inadvertently leaking bits and pieces of information about yourself which increase your vulnerability to biographical intersection attacks. In the event that you do socialize though, you should use a pseudonym that is not linked to vending to do so, and take some care to minimize the writeprint associated with your vendor account and to obfuscate it such that writeprint analysis cannot easily link your socialization account to your vending account. Of course, take measures to unlink them on the application layer as well; if you log out of one and into the other they may be linked by cookies, for example. One solution around this would be to compartmentalize such that two separate instantiations of Tor Browser are utilized, one for each separate goal, though it should be noted that by using two such instantiations you increase your number of entry guards, and also that this can be fingerprinted by a local passive external attacker as something you are doing (ie: using two instantiations of the Tor Browser).

Additionally, as the operator of a darknet site, you should simply not socialize on it under the administrative account whatsoever, other than for maybe the bare minimum of administrative announcements. DPR was an idiot to socialize on SR1 under an administrative account like he did, though he was already fucked very early on anyway due to his early mistake of linking his real E-mail address and non-anonymous pseudonyms to SR before it was even operational. DPR actually made a lot of mistakes :<. You also want to minimize the time you spend authenticated to administrative roles, and the same goes for moderator roles too. Of course this has the downside of decreasing the personability of the site, and of hindering the establishment of trust and reputation systems, but I think in the context of the modern drug markets this is less important than it used to be on like BBS and prior back to SL ^_^.

WiFi Anonymity (AKA: Layering Counter SIGINT)

Although Tor is by far the best anonymity solution available, and is typically enough by itself, some words should be said on layering counter SIGINT solutions, seeing as the various fashions in which this can be done present their own advantages and disadvantages. It should be noted that in the Tor community many people suggest not attempting to layer counter SIGINT, and rather relying entirely on Tor; however, I've always taken this advice with a grain of salt (despite some highly skilled people advocating for it).

The fact of the matter is that, although Tor is extremely useful for anonymity on the Internet, and although it is a staple of any gestalt anonymity solution, it does, rarely, have significant failures. The archetypal example of such a massive failure was the RELAY_EARLY attack, wherein an attacker engaged in a combination of a sybil attack and high-accuracy traffic confirmation, via the exploitation of a covert channel in the control packet protocol, to deanonymize significant numbers of Tor users. Those who used only Tor were at significant risk of being deanonymized by this attack (particularly those who used Tails, due to rapid entry guard rotation). Those who layered counter SIGINT solutions with Tor had their anonymity reduced to that of whatever they layered Tor with, which is clearly a superior position to be in. For this reason, it seems pretty apparent to me that there are benefits to be had from layering counter SIGINT solutions with Tor.

One of the counter SIGINT solutions which may be layered with Tor is the utilization of open or cracked WiFi access points (henceforth referred to by the shorthand 'WiFi', with the implication that it is WiFi that is not linked to you). The ultimate goal of using WiFi for anonymity is to give yourself retroactive unlinkability to the access point you utilized. This means that, after a session is over, an attacker who traces the connection up to the access point is incapable of further tracing it to you. WiFi does not give live session protection, meaning that while you are utilizing it an attacker can do a live WiFi trace (which will be covered in more detail shortly). There is nothing magical about wireless packets that makes them impossible to trace; it's just that they need to be traced while the packet stream is active (or in various ways that rely on linkability, as will be covered shortly). In other words, WiFi is buying you nothing more than you would get if you were to run an ethernet cable from your laptop to your neighbor's WiFi or whichever other access point you use, and in fact you would be more secure from an assortment of attacks if you actually were to run a physical cable (though of course your neighbor is far more likely to notice this and be unhappy about it!). The benefit of WiFi, where any exists, is in the ability to "disconnect the cable" (and to be free of other linkability issues, intersection attacks, etc) prior to the attacker tracing it up to the access point; after they trace it up to the access point they can just "follow the cable", even though in this case the cable is invisible to the human eye in being wireless.

There are two general strategies which may be employed when utilizing open WiFi. One is the use of WiFi from a static location; for example, persistently using a neighbor's WiFi. The other is using WiFi from dynamic locations, for example using open WiFi from libraries, motels, etc, and changing up the location. The other defining characteristic is whether the location is linked to you or not; for example, if you use a neighbor's WiFi you are using WiFi from a static location that is linked to you (ie: your house). If you use WiFi from a library persistently you are using WiFi from a static location that is not linked to you. If you use WiFi from a variety of motels that you register at with your real ID, you are using WiFi from dynamic locations that are linked to you, etc. The least advantage you will get from WiFi is using it from static locations that are linked to you, though you will only get slightly more advantage by using WiFi from dynamic locations that are linked to you (particularly in that intersection attacks can be utilized to narrow in on you, in the event the attacker can link multiple sessions to the same entity of interest). Using WiFi from static locations that are unlinkable to you is superior to using WiFi from a static location that is linkable to you, and likewise using WiFi from dynamic locations that are unlinkable to you is superior to using dynamic locations that are linkable to you, though the risk of intersection attacks and such still remains (in other words, the location may be linked to you even if you think it is not; just because you did not register at a motel with your real ID doesn't mean you didn't park your car near it, etc).

Many novices use only WiFi from a static location that is linkable to them in an attempt to protect their anonymity. This is very bad, because LE are adept at tracing such people after targeting them; indeed there are numerous commercial solutions sold to law enforcement exactly for the purpose of doing a live WiFi trace.

If one engages in a pattern of behavior while securing one's anonymity only with open WiFi from a static location, one will be rapidly traced after law enforcement takes an interest in said activity: they will merely identify that the IP address involved traces back to an open or crackable WiFi hotspot, and they will monitor the WiFi spectrum with such devices for a period of time, waiting for the ability to do a live trace to the real target.

Even in the event that you stop engaging in the pattern of activity from the static location prior to LE attempting to compromise you locally, there are numerous forensic methodologies by which they may locate you. For example, your WiFi device has a unique MAC address that it broadcasts with outgoing packets; the MAC address works similarly to an IP address in that it allows other devices to address return packets to your WiFi device. MAC addresses of connecting devices are often stored by routers, possibly even with session information, and therefore if LE analyzes the wireless access point you connected to during the behavior they remotely identified via their SIGINT operation, they may be able to link the activity of interest to your device's MAC address. This is problematic for you in that your device will continue to broadcast this MAC address even when it is not connecting to that specific access point, which will therefore allow LE to geoposition it with the previously shown sort of tool even outside of the context of a live trace (the WiFi device itself being linked to the activity of interest via the logs of the access point it previously utilized). The solution to this MAC address issue is to make sure that you always spoof your MAC address, and to respoof it between all sessions. There are various tools for spoofing MAC addresses, macchanger being one of the most popular. Tails comes with macchanger and will automatically attempt to spoof your MAC address between boots, but it should be noted that the ability to do this is dependent upon the firmware of the wireless networking adapter, and in some cases it will fail to do so. If you desire to ensure that your MAC address is being spoofed, with macchanger installed (as it is by default with Tails), you can issue the following command from a terminal;

macchanger -s wlan0

Make sure that the current and permanent MAC are not identical; if they are, spoofing has failed. You may try various spoofing options manually from macchanger to see if any work, though if one doesn't it is unlikely any will.

The exact device name varies; typically it will be wlan0, but it could be any of the devices in the output of

ifconfig -a

which is actually not available in Tails (if you have multiple WiFi devices, wlan1, etc, is the typical pattern).

Note that for various reasons Tails only partially spoofs the MAC address. They seem to think this is adequate. I would tend more toward wanting to spoof the entire thing, seeing as even a partial match may be useful for intelligence imo (though in not being a full match it degrades the quality of the intelligence). You may spoof the entire MAC address as follows;

macchanger -A wlan0

Note that fully randomizing the MAC address, although possibly something you may want to consider, may also make it stick out in not following the pattern of any known vendor; most vendors have a recognizable prefix in their MAC addresses, and by not having the prefix of a known vendor your MAC address may stand out. Complete MAC address randomization can be obtained as follows;

macchanger -r wlan0

In any case, if you find that your MAC address is not actually being spoofed, you may need to get a wireless networking card that supports MAC address spoofing if you feel that this is important for your security model (and certainly if you are using WiFi as a layer of counter SIGINT you will want to be spoofing your MAC address).
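The distinction between keeping a vendor-style prefix and fully randomizing can be sketched in a few lines of Python. This is only an illustration of the address formats involved, not a replacement for macchanger; the OUI prefix shown is an arbitrary example value, not any specific vendor's.

```python
import random

def random_mac_with_oui(oui=(0x00, 0x1b, 0x77)):
    """Keep a fixed 3-byte OUI (vendor) prefix and randomize only the
    3-byte NIC-specific suffix, so the address still looks vendor-patterned.
    The OUI here is an arbitrary illustrative value."""
    nic = [random.randint(0x00, 0xFF) for _ in range(3)]
    return ":".join(f"{b:02x}" for b in list(oui) + nic)

def fully_random_mac():
    """Fully randomize the address (in the spirit of macchanger -r).
    Set the locally-administered bit and clear the multicast bit of the
    first byte so the result is a valid unicast MAC."""
    first = (random.randint(0x00, 0xFF) & 0xFE) | 0x02
    rest = [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)
```

The first form blends in with real hardware at the cost of claiming a vendor you may not match in other ways; the second avoids claiming any vendor but, as noted above, may itself stand out.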

MAC address is not the only way in which multiple sessions can be linked together. One other example is via the selection of Tor entry guards; there are many thousands of Tor nodes, and therefore the probability that any two Tor users are using the same entry guard is fairly low (though certainly there is reuse between users; after all, there is a pigeonhole effect in there being orders of magnitude more Tor users than entry guards). This allows an attacker who can see the entry guard utilized at one WiFi access point, or during one WiFi session from a single access point, to probabilistically link it to WiFi sessions at other access points (or different sessions at the same access point). Due to the relatively low density of Tor users in any given area, and the fact that different clients typically use different entry guards, this is going to be enough to link access points and sessions together with very high accuracy (indeed, in particularly sparsely populated areas, even using Tor in itself may be enough for an attacker to link such sessions/access points together with high accuracy, seeing as at best you will always fall into the crowd of people using Tor from open WiFi access points, which may be a crowd of one person in some areas, though in others it is likely in the hundreds or thousands, and therefore enough of an anonymity set to blend into). Tails protects from this sort of WiFi access point / session linkability by rotating entry guards between reboots. No other solutions that I'm aware of protect from this, though you can manually force entry guards to rotate in various ways; generally you want persistent entry guards, but in this one instance they can be used to link your sessions together, and you may desire to rotate them between sessions.
Note that in order to be able to link the sessions and/or access points together in such a way, the attacker will need to be able to observe the Tor entry guards being utilized by the users of various access points; in some areas this is well within the realm of an ISP level attacker, though to be precise a regional external passive attacker will certainly be able to do so (in some areas this will consist of a single ISP). This is in contrast to the MAC address linkability risk, which requires an attacker to see the logs of various WiFi access points (or external logs from WiFi sensor networks that monitor all WiFi traffic), as the MAC address stops at the WiFi access point and doesn't propagate to the ISP.
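To get a feel for why a shared entry guard is such a strong linking signal, a back-of-envelope calculation helps. The sketch below assumes, unrealistically, that guards are chosen uniformly at random; real guard selection is bandwidth-weighted, which makes collisions somewhat more likely, and the guard count used is an illustrative placeholder rather than the real consensus figure.

```python
def p_shared_guard(num_guards, num_local_users):
    """Probability that at least two of num_local_users independently
    chose the same entry guard, assuming uniform selection over
    num_guards guards (a birthday-problem style estimate)."""
    p_all_distinct = 1.0
    for i in range(num_local_users):
        p_all_distinct *= (num_guards - i) / num_guards
    return 1.0 - p_all_distinct
```

With, say, 2000 guards and only a handful of local Tor users, an accidental guard collision between two unrelated users is a few-percent event at most, which is exactly why observing the same guard at two access points links them with high confidence.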

Even with a spoofed MAC address and rotated entry guards, WiFi sessions may be linked together with sophisticated SIGINT/MASINT attacks that fingerprint artifacts in the wireless signal induced by unique characteristics of the device's oscillating element. Unfortunately I can no longer find the original forensics paper I read this in, but I remember it nonetheless. If this concerns you then you will need to entirely compartmentalize your WiFi devices between sessions, seeing as an attacker engaging in this degree of SIGINT/MASINT will always be able to link the sessions of the device together so long as they have spectrum analysis capable of picking up the broadcast signal from the device. I'm not aware of LE engaging in such sophisticated attacks, though it is certainly in the realm of true intelligence agencies to do so.

Keep in mind that these attacks are not strictly speaking limited to cross-session linkability, in that they can be used to do live geopositioning attacks against targets through the linkability as well. Also keep in mind that there are various other concerns. For example, even if you spoof your MAC address, use a new WiFi adapter, and rotate your Tor entry guards prior to connecting to an access point, if it is your neighbor's Internet, then even in the event that you stop engaging in a pattern of behavior with that access point (and from that static location entirely!) prior to the attacker coming, you are still going to look mighty damn suspicious if you've also used Tor to any extent from your home Internet connection. This is intelligence that indicates that it was you who utilized the WiFi access point for the event of interest; even though there may not be a clear link between you and it, there is a significant indication that you are the one who used it, in being the only one in the area using Tor.

In addition to the previously mentioned live WiFi traces, which typically manifest as agents using the previously displayed devices in attempts to geoposition their target after going to the proximity of the WiFi access point their trace ended at, there is also substantial risk from historic WiFi geopositioning signals intelligence. This may take various forms; for example, the Seattle police at one point had a WiFi geopositioning sensor network that geopositioned all WiFi devices in its coverage area, and kept the geopositioning intelligence indefinitely;

Last week, the ACLU of Washington raised concerns about a number of white boxes that recently showed up in parts of downtown Seattle.

The boxes are part of a wireless mesh network that was installed by the Seattle Police Department to improve communication. However, there were immediate concerns about the network being used to track people's movements.

"In a democratic society you should be able to move freely without law enforcement tracking your movements unless they have reason to believe you're doing something wrong," ACLU communications director Doug Honig said last week.

I've heard from reliable sources that at any given time a few major cities will have such mesh networks operating in them, though they have a tendency to come and go, as the one in Seattle did. These can be used both to track a person's movements throughout the area of coverage, and to historically geoposition the WiFi devices that were accessing a given WiFi access point at a given time (and possibly to learn even more information about what they were doing on that access point, depending on the details of the logs, which can theoretically be as high resolution as every single packet of the connection).

These can remove the need for attackers to do live geopositioning attacks, and revoke retroactive unlinkability in any case where WiFi was utilized from a location linkable to you (ie: your house, a motel linked to you, etc), by allowing the attacker to retroactively geoposition you to the location that is linkable to you after their trace gets to the WiFi access point you utilized.

Even more concerning is the subversion of mobile devices into massively distributed WiFi geopositioning sensor networks, as is being done by nearly every smart phone in the world;

When I wrote about Google making it possible to opt-out of their Wi-Fi access point mapping program, I made a mistake. I thought Google was still using its StreetView cars to pick up Wi-Fi locations. Nope, Eitan Bencuya, a Google spokesperson, tells me that Google no longer uses StreetView cars to collect location information. So, how does Google collect Wi-Fi location data? They use you.

Or, to be more exact, they use your Android phone or tablet. But, it's not just Google. Apple and Microsoft do the same thing with their smartphones and tablets.

In other words, Google et al. have massive global WiFi geopositioning sensor networks. As far as we are aware they only geoposition access points in a dragnet fashion; this is their claim, and the reason for it is partially that they use these sensor networks to assist their WPS (WiFi Positioning System) services, which act as alternatives to GPS by allowing people to geoposition themselves based on the signal strength of nearby access points that have previously been geopositioned by Google etc. In this context, there is no need to geoposition client devices, due to the fact that they are noise in that they rarely maintain a fixed location (as opposed to access points, which maintain a fixed location, as is required for the WPS systems to utilize them for relational geopositioning). So from a WPS perspective there is actually little reason to geoposition arbitrary devices as opposed to access points; however, from an intelligence perspective there is obvious benefit to geopositioning client devices as well. You should not count on Google and such not geopositioning all devices, and therefore should view WiFi from a location linkable to yourself as always worthless, despite the fact that it might not be (ie: you can still use it and possibly get anonymity benefit from it, but you should never assume that you are not being historically geopositioned by various major corporations through their cell phone based WiFi geopositioning sensor networks). Additionally, their definition of access point is quite broad, and includes devices in tethering modes; therefore you should ensure at least that all WiFi devices used for sensitive things operate strictly as clients to WiFi access points only.
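The relational geopositioning idea behind WPS can be illustrated with a toy weighted-centroid estimate: given the known positions of nearby access points and the signal strength observed from each, a device's position is estimated as a signal-weighted average. Real systems (Google's, Apple's, etc) use far more sophisticated propagation models; this is only a sketch of the concept, with made-up coordinates.

```python
def wps_estimate(ap_positions, rssi_dbm):
    """Toy WPS-style position estimate: weighted centroid of known access
    point (x, y) locations, weighting stronger signals more heavily.
    rssi_dbm values are converted from dBm to linear milliwatts."""
    weights = [10 ** (r / 10) for r in rssi_dbm]
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, ap_positions)) / total
    y = sum(w * p[1] for w, p in zip(weights, ap_positions)) / total
    return (x, y)
```

The key point for this chapter is that the database of geopositioned access points is the valuable asset; once it exists, any observed client near those access points can in principle be geopositioned the same way.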

When using WiFi from locations that are not linkable to you the anonymity provided can increase substantially (in that historic WiFi geopositioning will geoposition the device to a location that is not intrinsically linked to you). In such circumstances the threat model becomes primarily one of linkability and intersection attacks, as were previously explained;

Of course there are other concerns as well, such as falling into patterns with access point selection, CCTV, etc. In any case, people who use decent WiFi anonymity opsec (as opposed to nothing, and without the use of Tor in addition to it) typically last for twice as long as people who use nothing. In other words, WiFi with good opsec makes a target no longer fully soft, and necessitates significant law enforcement resources to compromise. One would not want to rely on this for long term protection, seeing as such targets are typically eventually compromised (usually after approximately six months of LE trying to identify them, the entire time of which they engage in a consistent pattern of activity). But the benefit of layering this strategy with Tor is apparent: in the rare cases where Tor fails for only a short period of time, or even for a single session only (ie: Tails rotated onto a bad entry guard that compromised one session, due to the attacker also having the ability to view exit traffic), the security of the gestalt system is still only reduced to that which LE cannot immediately compromise, thereby buying time in which the anonymity of Tor may be restored (ie: patching the RELAY_EARLY covert channel), etc. Also, typically people are compromised via some form of intersection attack, so they are actually not using perfect opsec to begin with.

People who use less optimal WiFi opsec may still buy themselves some protection. Say that someone uses his neighbor's WiFi for a period of time while engaging in sensitive activities. The attacker eventually traces the Internet connection to the neighbor's access point, but takes six months to get field agents there (this is not an atypical delay; usually you can expect at least a one month delay). Prior to the field agents' arrival, the target has moved to a new location (because he lives in apartments perhaps, and switches them every year or half year). There are no MAC address logs or such that can link his WiFi adapter to the connection. In the event that there is no historic WiFi geopositioning signals intelligence available (which could put the session as originating in the target's apartment, thereby linking it to him), LE may be at a dead end (they could still enumerate the people who lived nearby during the session of interest, however this creates a crowd of potential suspects, and it may be resource prohibitive to investigate all of them individually, especially depending on the reason they are looking for the suspect to begin with). Of course, if they can link two such sessions to the same entity, they can intersect the suspect lists between them to narrow in on their target.
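The suspect-list intersection just described is trivially mechanical for the attacker once sessions are linked; a minimal sketch (the names are obviously made up):

```python
def intersect_suspects(session_suspect_lists):
    """Given per-session candidate lists (e.g. the people living near each
    access point during each linked session), intersect them to narrow
    the suspect pool."""
    if not session_suspect_lists:
        return set()
    pools = [set(s) for s in session_suspect_lists]
    result = pools[0]
    for pool in pools[1:]:
        result &= pool
    return result
```

Two linked sessions with a few hundred nearby residents each can easily intersect down to a handful of candidates, which is exactly why letting sessions become linkable to each other is so damaging even when each individual session looks anonymous.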

One point of interest is that USB WiFi dongles are superior to direct PCI connections, in that if a PCI device itself is compromised by local attackers (after all, the WiFi device is capable of getting inputs from local wireless signals, and essentially anything that can get inputs is at risk of being hacked and subverted), the attacker gains DMA (Direct Memory Access) capabilities, which allow them to compromise the entire machine. This is not as much of an issue if USB WiFi devices are utilized. For this reason, it is preferable to use USB based WiFi dongles, and to avoid having a PCI wireless networking card in the first place, as it is a point of entry that can be targeted by local hackers for the complete subversion of the system, bypassing other security mechanisms. Note that this is mostly theoretical, and it is contested in security circles how likely it is to be utilized in the wild; however, I make sure to avoid using PCI wireless networking cards regardless. (To be compromised by this, the attacker would additionally need to be near enough to the target to send wireless packets to their networking card.)

There are other considerations involved with WiFi as well. One is that it makes wiretaps easier than they are with wired connections, in that the entire contents of the session are broadcast over the airwaves. If Tor is being utilized (as it very much should be!), this is less of a concern, in that Tor securely layer-encrypts all of the session's traffic, and additionally pads packets to invariant sizes to protect somewhat from fingerprinting attacks. However, this is still a noteworthy concern.

Additionally, WiFi can provide some degree of membership concealment (though not typically entry blocking resistance), in that your ISP will not see you connecting to Tor; rather, the ISP of the access point you use will see that someone is using that access point to connect to Tor. This can be useful for hiding that you use Tor, though the degree of membership concealment provided correlates with the degree of anonymity you are buying for yourself by using WiFi in the first place; in other words, not much at all if you are using WiFi from a static location linkable to yourself, but perhaps a good bit if you are using WiFi from dynamic locations that are not linkable to you.

It should be noted that using open or otherwise publicly available WiFi is always superior to cracking WiFi, due to the fact that cracking WiFi is illegal in itself and therefore may draw attention to you, and may even be enough to get you raided in itself. Typically people are not apprehended for cracking WiFi; after all, the people who have insecure wireless networks are typically not going to be sophisticated enough to detect that you've cracked into them. However, imo this is playing with fire and should typically be avoided, except maybe when using dynamic locations that are not linkable to you. Of course you should also keep in mind that using WiFi that is linked to you in any way is not buying you anything; obviously using your own Internet via WiFi does not count as an anonymity technique whatsoever, and the same is also true of using, for example, a university connection with a username:password that is tied to you.

It should also be noted that you should never have traffic that is linkable to you sent over WiFi access points you are using for anonymity. This necessitates the utilization of compartmentalization. If you are using Whonix, for example, and connect to a WiFi access point from the host, all of the host's Internet traffic will now go over this WiFi access point, and that will include things that identify you, from unique identifiers of other programs to your personal Internet traffic, etc. There are so many possibilities for linking traffic that you should simply only utilize WiFi from a compartmentalized OS and/or machine that is only used for sensitive things!

Some people frequently suggest using high powered WiFi setups, with directional antennas and amplifiers. These can indeed extend your range significantly and give you a larger number of WiFi access points to utilize, thereby increasing the probability that one of them will be open. However, there are actually many advantages to using the lowest powered WiFi solution possible; the higher powered the WiFi solution you use, the less granularity sensor networks need to detect and geoposition your signal. If your WiFi setup can only send a signal a dozen feet, it may not even be on the radar of mesh sensor networks that are not densely concentrated enough.
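To see why transmit power matters so much for detectability, a free-space path loss estimate helps. The sketch below solves the standard free-space path loss formula for distance, given a transmit power and an assumed sensor sensitivity (the -90 dBm figure used in the test is an illustrative placeholder); real urban propagation attenuates much faster than free space, so this is an upper bound on how far away a sensor could hear you.

```python
import math

def max_detect_range_m(tx_power_dbm, sensor_sensitivity_dbm, freq_hz=2.4e9):
    """Free-space estimate of how far away a sensor with the given
    sensitivity could detect a transmitter at the given power.
    FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55 (d in m, f in Hz),
    solved here for d given the available link budget."""
    budget_db = tx_power_dbm - sensor_sensitivity_dbm
    log_d = (budget_db - 20 * math.log10(freq_hz) + 147.55) / 20
    return 10 ** log_d
```

In free space, every 6 dB of extra transmit power roughly doubles the distance at which a sensor network can pick up your signal, which is the quantitative version of the point above: amplifiers buy you access points, but they also buy the sensor network range.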

Finally, I do suggest using WiFi access points in addition to Tor when the utmost anonymity is required. You should never count on WiFi buying you much or anything for that matter, however it does have the potential to be an additional layer of anonymity in the event that Tor fails. Just keep in mind everything that has been covered.

VPN Anonymity (AKA: Layering Counter SIGINT)

Another method of attempting to remain anonymous on the Internet is the usage of VPNs. These should never be used by themselves, they are all inferior to Tor. However, the same is true of using open WiFi, but as was explained in the previous chapter, you can use open WiFi to complement the anonymity of Tor. The same is true for VPNs, though numerous security considerations again manifest, and arguably some of them have the potential to be more damning to your security than WiFi does. Many people in the Tor community suggest against using VPNs layered with Tor, however arma (the lead Tor developer) mentions that they can be okay to enter through (and certainly you do not want to enter through Tor and exit through a VPN, at most you would want to enter through a VPN and exit through Tor, otherwise there are major and serious exit traffic linkability concerns amongst other things).

Another advantage here is that it prevents Tor from seeing who you arebehind the VPN. So if somebody does manage to break Tor and learn the IPaddress your traffic is coming from, but your VPN was actually followingthrough on their promises (they won't watch, they won't remember, andthey will somehow magically make it so nobody else is watching either),then you'll be better off.

Even if you pay for them anonymously, you're making a bottleneck whereall your traffic goes -- the VPN can build a profile of everything youdo, and over time that will probably be really dangerous.

Of course, assuming that your VPN is actually safer than your local network is a big leap of faith. There are numerous concerns associated with using a VPN even if they are stellar. For example, there are far more Tor entry guards than there are VPN providers; therefore concentrating your traffic to VPNs is reducing the number of points that need to be monitored by an attacker to be in a position to do traffic confirmation attacks. When you enter through a VPN you are changing the position that an attacker needs to be in to do a confirmation attack from the Tor entry guard to the VPN node (if they are at the Tor entry guard and engage in a confirmation attack they will only trace up to the VPN node rather than your real IP address).

This lessening of decentralization and distribution of entry nodes has the potential to make you more vulnerable to signals intelligence. Also, it is seriously questionable if a random Tor entry node is more or less likely to be logging traffic and/or engaging in attacks in the first place. Certainly you do not want to use any VPN provider that anyone suggests to you, particularly if it is suggesting on an underground forum, because the FBI has been known to run VPN nodes that are advertised on underground forums in order to attack people, the prime example of this being the Shadow Crew case;

Albert Gonzalez (aka cumbajohnny) was an admin and ultimately, one of a handful of others who turned federal evidence to help destroy ShadowCrew. A proficient hacker, he later went on to steal somewhere in the order of 170,000,000 sets of credit card details. He provided users with access to a VPN being monitored by the USSS, as well access to a supposedly hacked/carded phone service which members were encouraged to use.

Therefore you should always do you own searching for a VPN provider through search engines and such, rather than trusting anyone to suggest one to you, and certainly you never will want to use a VPN that is advertised on underground forums.

In terms of the setup of various VPNs, it is essentially impossible to determine how knowledgeable they are or how much you can trust their claims of not logging anything. I would tend more toward trusting VPN providers who are realistic with their claims of the anonymity they can provide, which is not particularly much. There have been numerous examples of VPN providers making egregious mistakes in their configurations, in one example a VPN provider actually did not keep logs, but they had purchased DDOS protection from a third party that did keep logs of all the connections, and when a bomb threat was sent through the VPN the FBI was able to deanonymize the sender with these separate logs. VPN providers typically are known for being fairly incompetent, and also for being snake oil salespeople, in that they often severely exaggerate the abilities of their products. Also it should be noted that even if they don't keep IP logs, simply keeping traffic volume logs can be enough for intersection attacks in some scenarios (ie: if the attacker knows you used a certain amount of traffic in a certain time period, and they trace you back to the VPN, then they can eliminate all clients that did not use at least that much traffic in that period of time, narrowing in on you), the same is true for keeping usage time logs even separate of volume logs. Many "no logs" VPN providers are keeping logs like this even if they are not keeping what are traditionally thought of as logs.

Using a VPN can also decrease your security in various ways. VPN operates at a lower OSI layer than proxy servers / Tor, and can expose your network stack. For example, if you have any applications that listen on *:port, they the application is actually being exposed to the virtual private network as if it were on a real LAN. Some VPN providers will have firewalls on their servers that prevent internal traffic between nodes on the VPN (of which you make your computer one of when you run it on a VPN), but others are not informed enough to be doing this and will expose your network stack to everyone else on the VPN. I heard of a case where HDM (who left IRC crying because I hurt his feelings by calling him LEs bitch ^_^) of metasploit fame deanonymized an IRC troll who was using such a VPN (that also assigned external IP addresses! covered soon) by simply connecting to his exposed netBIOS interface over the VPN and leaking his real IP address!

Some VPN providers are not even putting people behind NAT and rather are assigning them an external IP address, which is particularly bad seeing as it gives the entire Internet a direct path to them and their exposed network stack, as the previous case illustrated the insecurity of. This is not just a risk for deanonymization, rather it is a privacy and security risk in general, in that attackers now have an application layer path to your system through your VPN and all of your exposed network stack (the same being true even if behind NAT in the case that the VPN server doesn't drop internal traffic, in that another user of the VPN will have such a path. In fact, in any case there is such a path created, it's just in some cases it is more egregious than in others). Hackers definitely are known to buy VPN accounts to sniff around and see what they can find/hack/do. So yeah, VPN that is not properly configured by the people running it has the potential to actually decrease your security, and it can be hard for you to determine if it is correctly configured unless you have significant technical competency, particularly before you buy an account. Indeed, even a properly configured VPN can degrade your security via creating application layer attack paths.

Speaking of buying accounts, there are benefits to paying for a VPN anonymously to avoid having it linkable to you, however this comes with the major caveat that if you are connecting to it from your real IP address it is linkable to you anyway via that (though you should still aim to purchase it anonymously, with bitcoin that is anonymized as explained in a previous chapter, or similar). However, if you are using a VPN from open WiFi access points then you absolutely must pay for it anonymously to avoid it being linkable to you, otherwise you will negate the additional benefit of using Open WiFi (seeing as after the trace gets to the VPN the attacker can simply follow financial records to you, rather than having to continue the trace in the SIGINT domain). Also, if you are using it to connect to open WiFi access points, it is able to link your WiFi sessions together, both the VPN provider itself can do this, and passive external attackers can also do this in the same fashion as they can with Tor Entry Guard selection (which as previously mentioned is mitigated by Tails due to lack of persistent entry guards between reboots, though lacking persistent entry guards also reduces security from traffic confirmation attacks).

So as you can see there are a lot of negative aspects to entering through a VPN, even though arma did suggest this can be okay (especially as opposed to exiting through a VPN, which is almost always a horrible idea). So, you may ask yourself, with all of these negatives, why on Earth would anyone opt to enter through a VPN in the first place? It should be noted that many people will say "One shouldn't enter through a VPN in the first place", including many people with substantial security skills. However, there are some potential benefits as well. For one case study we can look at the RELAY_EARLY attack, where a combination of sybil (node flooding) and high accuracy traffic confirmation (via the exploitation of a covert channel vulnerability which has since been patched) resulted in the deanonymization of a substantial number of Tor users. Had they entered through VPNs they would have been deanonymized up to their VPN only, which is undoubtedly a superior scenario to being deanonymized up to their real IP address, though how superior it is is questionable. In other words, had people entered through a VPN, the RELAY EARLY attack would have reduced their anonymity to that provided by a VPN.

How much anonymity does a VPN provide from SIGINT in the first place? This varies, but typically the answer is not a whole lot. In some cases the answer is essentially none at all. However, in other cases, we've seen the use of VPNs to result in LE taking several months longer than they would have otherwise had to take in order to deanonymize targets. Of course, essentially all VPNs will start logging when instructed to do so by LE, despite their claims to the contrary. Even in the event that they did refuse to log (which is simply not going to happen), their external infrastructure could just be instructed to log externally anyway. However, if we know anything it is that LE has fairly slow response times, after deanonymizing someone it takes anywhere from about a month to several months (even as long as a year or even a bit longer) before LE focuses on them, this is a period of time in which, provided the VPN is not already logging in the first place, and particularly so if there is already some degree of unlinkability between you and the VPN (ie: they know nothing that identify you, or only your real IP address but without having logged) switching to a new VPN provider and cutting ties to the previous VPN provider can unlink you from the partially deanonymized session prior to you being fully deanonymized. In this sense a VPN to enter through can be similar to WiFi in that it can potentially provide some degree of retroactive unlinkability after you stop engaging in a pattern of activity with it, provided it really did not log etc (which you simply do not know, though there is potential that they did not, and that their external infrastructure did not, etc). So you go from "certainly deanonymized" in the face of successful attacks against you such as RELAY EARLY, to "fucking hell that was close" rather than "certainly fucked".

In the event that you are traced to your VPN by such an attack, even in the event that no logs were kept that are capable of tracing that session back to you, there are various mechanisms via which your current sessions can be linked to the previous session, making it possible to link that previous session to you after logging is enabled at the VPN. The most obvious of these mechanisms is the entry guard you are using for Tor, in the case of RELAY EARLY you would have had to use a malicious entry guard to be vulnerable to it in the first place, so the attacker controlled your entry guard but the entry guard only knew your VPN node. However, you keep using that entry guard for a significant period of time (unless you are using Tails, in which case entry guards rotate between reboots, though this actually makes you more vulnerable to eventually using an attacker controlled entry guard in the first place, it does reduce the time for which you will do so, which typically is not much of an advantage considering it only takes one deanonymized session to fuck you in most cases, though in this specific instance it can actually be beneficial in this specific way). Due to keeping the entry guard for a significant period of time, this means that, although the original compromised session may not have a clear link to you in itself due to the VPN not keeping logs, if the attacker gets to the VPN while you are still using that entry guard, they can merely filter all the connections for the ones using that entry guard, and then they can find your real IP address and the fact that you are using that VPN to connect to that entry guard, thereby deanonymizing you with very high probability, despite not being able to link the original session to you.

Even in the case where your entry guard has rotated prior to LE getting to the VPN and logging, you still have your anonymity set reduced to that of the people using that VPN to connect to Tor, which may be a fairly small set of people (in the worst case it will be a set of one). This is probabilistic deanonymization in a sense, in that the actual session of interest is not actually linked to you, but it is still strong intelligence (to the anonymity set size, of course if 1,000 people are using the VPN to connect to Tor then you have a crowd of 1,000 people to blend into and it is weak intelligence that doesn't actionably narrow in on you).

For this reason I suggest periodically rotating entry guards (which you can do as easily as fully deleting the Tor Browser and unzipping it over again, rather than automatically updating it. Tails automatically rotates them between reboots. On Whonix there are a few ways to do it as well) if you use a VPN to enter, in furtherance of breaking this linkability. However, rotating entry guards also increases your vulnerability to being exposed to RELAY EARLY style attacks in the first place 0_0 and this should be done with caution.

I personally would exercise significant caution when it comes to using a VPN to enter Tor with, though in the event of the RELAY EARLY attack I wouldn't have minded to have used a VPN (unlinkable to me) to enter in addition to open WiFi. Another thing you need to take into consideration is that you should constantly be engaging in OSINT to try to identify compromises as soon as possible. OSINT is Open Source Intelligence, and it simply means closely following the news in this context (in a broader sense it also means reading everything pertinent you can, .pdf files from LE that leak, things like snowden leaks, etc, just everything you can related to what you are doing and security and LE in general), and also following the IRCs and forums and such for any news related to a compromise that may have exposed you to attackers. This way you can intelligently know when to rotate entry guards manually, when to drop your VPN and get a new one, etc. OSINT is quite important, [b]indeed although of course I never went to playpen due to not being in a country without laws against it while it was operational, I learned of the playpen compromise as soon as it was first reported in the media (even prior to it being identified as playpen by name), and would have been able to take countermeasures to the expectation of a raid even as raids were continuing for many months after my initial awareness of the compromise. This illustrates the importance of OSINT.

VPN also has some other potential advantages. It gives you some degree of membership concealment (and possibly entry blocking resistance in the scenario in which you are on a network that blocks connections to Tor but not to your VPN) which makes it harder for many attackers to determine you are a Tor user. This can be particularly useful for the drug vendor threat model (seeing as they leak their rough geolocation when they ship products, and thereby make themselves vulnerable to an intersection attack wherein the crowd of enumerated Tor Users is intersected with the crowd of people known to live in a given geographic region in order to narrow in on the vendor). By themselves VPNs only provide IP level membership concealment, Tor traffic can still be fingerprinted through their encryption, and therefore for actual strong membership concealment obfsproxy bridges should also be utilized in furterance of protocol obfuscation.

Note that if you use a VPN you should additionally be using firewall rules that drop all traffic that doesn't go over the VPN. This can be accomplished simply by dropping all traffic that isn't to or from the IP address of the VPN (make sure to do this for IPV4 and IPV6). You should also change your /etc/hosts file to use only the VPN for DNS and also chattr +i this file because other applications like to clobber it.

Last edited by mrz on Fri Jun 03, 2016 11:09 pm, edited 48 times in total.

I admit I'd watch JB porn if it was legal, if nothing else, probably at least out of sheer curiosity.

I've watched gore, shemales, bestiality, scat etc

I've watched everything, I'd probably watch JB porn too if it was legal, but it's nowhere near my list of priorities, so it's much easier to just refrain from watching it and not give a shit about all that.

Anal prolapse wrote:I admit I'd watch JB porn if it was legal, if nothing else, probably at least out of sheer curiosity.

I've watched gore, shemales, bestiality, scat etc

I've watched everything, I'd probably watch JB porn too if it was legal, but it's nowhere near my list of priorities, so it's much easier to just refrain from watching it and not give a shit about all that.

It's addictive as fuck, once you see legit peak fertility females your brain doesn't want anything else. Like I am indeed attracted to teleio age range as well, but I would always rather fuck a JB from ~13-15 (or 12-17), or look at porn of them, whereas before I saw JB porn (or NNs) it never even dawned on me that they are preferable and I did just fine with adult porn for the first many years of my porn career. Today it just is boring as fuck to me though, like I can watch it and sometimes I still do, but really all I want to watch is JB porn, and I fantasize exclusively about JBs typically 13/14, but in porn I like 11-17, I wouldn't do shit irl with 11 though like they can be hot but imo that is too young at least for my tastes to do anything with actually. Even most 12 are too young to actually fuck but other things still.

Last edited by mrz on Sun May 29, 2016 6:30 am, edited 1 time in total.

mrz wrote:Those who argue against me are invariably religiously delusional with propaganda, or otherwise they are simply sociopaths, those are the only two possible reasons that anyone would argue against me.

Anal prolapse wrote:I admit I'd watch JB porn if it was legal, if nothing else, probably at least out of sheer curiosity.

I've watched gore, shemales, bestiality, scat etc

I've watched everything, I'd probably watch JB porn too if it was legal, but it's nowhere near my list of priorities, so it's much easier to just refrain from watching it and not give a shit about all that.

It's addictive as fuck, once you see legit peak fertility females your brain doesn't want anything else. Like I am indeed attracted to teleio age range as well, but I would always rather fuck a JB from ~13-15 (or 12-17), or look at porn of them, whereas before I saw JB porn (or NNs) it never even dawned on me that they are preferable and I did just fine with adult porn for the first many years of my porn career. Today it just is boring as fuck to me though, like I can watch it and sometimes I still do, but really all I want to watch is JB porn, and I fantasize exclusively about JBs typically 13/14, but in porn I like 11-17, I wouldn't do shit irl with 11 though like they can be hot but imo that is too young at least for my tastes to do anything with actually. Even most 12 are too young to actually fuck but other things still.

IDK, never seen JB porn.

I only watched primejailbait.com several times but could never fap to it.

I'd probably watch it mostly for the novelty.

But the penalties are so ridiculously disproportional, it's really not worth it.

That motherfucker got like 600 years for some motherfucking innocent pics of naked 14 year olds, because they counted each pic as a separate crime.

Normally he would get 1-2 years for posession of illegal porn.

But motherfucker had several hundreds of them, most being from the same series as I recall, so it was really just one case. But he got separate sentences for each pic, and the result was over 600 years in prison!

That's just beyond ridiculous.

He would get much lighter sentence if he actually raped a toddler or even if he RAPED AND KILLED a toddler in a country without death penalty.

Fuck my ass - watching JB porn is one of the least worthwhile crimes in terms of risk-profit ratio. The profit is miniscule and the penalty is tremendous.

mrz wrote:It's addictive as fuck, once you see legit peak fertility females your brain doesn't want anything else. Like I am indeed attracted to teleio age range as well, but I would always rather fuck a JB from ~13-15 (or 12-17), or look at porn of them, whereas before I saw JB porn (or NNs) it never even dawned on me that they are preferable and I did just fine with adult porn for the first many years of my porn career. Today it just is boring as fuck to me though, like I can watch it and sometimes I still do, but really all I want to watch is JB porn, and I fantasize exclusively about JBs typically 13/14, but in porn I like 11-17, I wouldn't do shit irl with 11 though like they can be hot but imo that is too young at least for my tastes to do anything with actually. Even most 12 are too young to actually fuck but other things still.

IDK, never seen JB porn.

I only watched primejailbait.com several times but could never fap to it.

I'd probably watch it mostly for the novelty.

But the penalties are so ridiculously disproportional, it's really not worth it.

That motherfucker got like 600 years for some motherfucking innocent pics of naked 14 year olds, because they counted each pic as a separate crime.

Normally he would get 1-2 years for posession of illegal porn.

But motherfucker had several hundreds of them, most being from the same series as I recall, so it was really just one case. But he got separate sentences for each pic, and the result was over 600 years in prison!

That's just beyond ridiculous.

He would get much lighter sentence if he actually raped a toddler or even if he RAPED AND KILLED a toddler in a country without death penalty.

Fuck my ass - watching JB porn is one of the least worthwhile crimes in terms of risk-profit ratio. The profit is miniscule and the penalty is tremendous.

How much trouble you get in varies massively, you could get anything from probation to a life sentence really. You would certainly be sex offender though, which is the thing that is really the worst part of it cuz even if you get probation you will be fucked forever anyway.

OSTB wrote:mrz, have you ever had any nightmares about being caught by the FBI?

you must of had atleast a few by now over the years..

Hm I've had some panic for sure but nothing too bad usually, more just a persistent frustration that I can't just fap to whatever the fuck I want to, and a burning rage at the slave trafficking conspiracy. I would say I feel more frustration and rage than panic or fear.

Like, they want to fuck my life up......for absolutely no reason. Because of a bunch of logical fallacies that are religious delusions, disproved by science, traced back through citation chains to the assholes they were pulled out of. Well not really because of that though, the reason they want to do that is because they get paid to do that seeing as they are slave traffickers, but that is what they justify their slave trafficking conspiracy with.

dsar9012 wrote:@mrz, you ever watched Daisy's Destruction? Is that legit or just a myth? Is it true a girl dies in that video. I am not interested in pedo porn, but I think watching snuff videos and shit would be interesting, I already watched the typical cartel/ISIS beheading vids you can find on gore sites and the Maniacs with a hammer video. But I heard most snuff vids are on Tor. I just got no fucking idea how to use it.

and thank god for that

And Daisy's Destruction is a medium CP with an underage girl with some bondage and light torture but she doesn't die in the video.

The death scenes are from the fake Japanese snuff movie "Guinea Pig 2" and it perpetuated as a myth on 4chan.

You actually should stop sperging out on the Internet, and just go to Uruguay. Don't talk about it; JUST DO IT.

But don't go there for jailbait.

Go there for the culture, go there for the experience, and go there for the ADULT women. I'm sure that the Uruguayan women there are more slender and more traditional than their American counterparts. Maybe you can stop spazzing out for once, and find yourself a Uruguayan wife.

Sacrificial Lamb wrote:I didn't read any of that aspie word salad. Do you know why?

Because you're not free.

Seriously, dude....you're miserable. Just stop it with this shit.

Your brain is so damaged, that you'll never be normal.

And you will never go to Uruguay, and ruin life for a 14-year-old girl....by inflicting her with your autistically depraved presence and deformed penis.

Just stop watching child pornography, and grow the fuck up.

Why should I stop watching the porn that I enjoy the most simply because some people have delusional religious beliefs about it? There is no legitimate reason for why I shouldn't watch it. All of the arguments against it are bullshit, which you are simply incapable of realizing in the same fashion as a typically religious person is incapable of seeing the fallacies of their faith.

There is no reason why I should not look at whichever porn I prefer. You have, as everyone else has, completely failed in providing a coherent argument as to why it is bad to look at underage porn. You have presented, as has everyone else, baseless assertions, logical fallacies, debunked claims, and religious delusions.

I see no reason why I should not fap to what I get the most enjoyment from. I know it greatly offends you, but this is only because you have been indoctrinated into thinking that CP is bad to look at, and now you rationalize this belief with whichever fallacy you can, but the one thing you have been completely incapable of doing is presenting an actual logical argument for why it is bad to look at CP. You have presented empty rhetoric, that is unsubstantiated, what you have done is called arguing with ipsedixitisms

a declaration that is made emphatically (as if no supporting evidence were necessary)

That is what all of your arguments are. They are baseless assertions, in many cases disproved. Your inability to recognize this is equivalent to the inability of the religious to recognize the absurdity of their beliefs. You are close minded to the fact that you could be wrong, because your entire life you have been told that it is bad to look at CP, but you have completely failed to critically analyze this information, rather you have merely mindlessly accumulated it as fact, because it is the doxa of your culture;

Doxa (from ancient Greek δόξα, "glory", "praise" from δοκεῖν dokein, "to appear", "to seem", "to think" and "to accept" [1]) is a Greek word meaning common belief or popular opinion. Used by the Greek rhetoricians as a tool for the formation of argument by using common opinions, the doxa was often manipulated by sophists to persuade the people, leading to Plato's condemnation of Athenian democracy.

The word doxa picked up a new meaning between the 3rd and 1st centuries BC when the Septuagint translated the Hebrew word for "glory" (כבוד, kavod) as doxa. This translation of the Hebrew Scriptures was used by the early church and is quoted frequently by the New Testament authors. The effects of this new meaning of doxa as "glory" is made evident by the ubiquitous use of the word throughout the New Testament and in the worship services of the Greek Orthodox Church, where the glorification of God in true worship is also seen as true belief. In that context, doxa reflects behavior or practice in worship, and the belief of the whole church rather than personal opinion. It is the unification of these multiple meanings of doxa that is reflected in the modern terms of orthodoxy[2] and heterodoxy.[3][4] This semantic merging in the word doxa is also seen in Russian word слава (slava), which means glory, but is used with the meaning of belief, opinion in words like православие (pravoslavie, meaning orthodoxy, or, literally, true belief).

Had you the ability to think rationally, you would come to the conclusion that this belief that looking at certain pictures is somehow immoral, is completely absurd. You have disintegrated your belief system, you simultaneously think that looking at videos of ISIS beheading people is not bad, but it is bad to look at pictures of naked 14 year olds, because according to you "Looking at such pictures causes demand, and demand leads to supply, which leads to increased child sex abuse rates", which is not only an ipsedixitism, but which is a claim that is contradicted by the totality of scientific analysis of the situation;

Could making child pornography legal lead to lower rates of child sex abuse? It could well do, according to a new study by Milton Diamond, from the University of Hawaii, and colleagues.Results from the Czech Republic showed, as seen everywhere else studied (Canada, Croatia, Denmark, Germany, Finland, Hong Kong, Shanghai, Sweden, USA), that rape and other sex crimes have not increased following the legalization and wide availability of pornography. And most significantly, the incidence of child sex abuse has fallen considerably since 1989, when child pornography became readily accessible – a phenomenon also seen in Denmark and Japan. Their findings are published online today in Springer’s journal Archives of Sexual Behavior.The findings support the theory that potential sexual offenders use child pornography as a substitute for sex crimes against children.

Your rationalizations of this logic are completely nonsensical. Oh, you say, it is different because people orgasm to underage porn! As if the act of orgasming to something or not has any relevance even, that is simply non sequitur. Your arguments are not only without foundation in any science, but are additionally refuted by the scientific evidence, and additionally are completely irrational and non-following to begin with.

So why is it that I should stop looking at underage porn, when it is the porn that I like the most to look at, when all of the arguments against me doing so are irrational unsupported religious delusions? I see no reason to make my life less enjoyable just because you are insane, in the same fashion I would not take issue with working on a Sunday even if this were offensive to the delusions of those who think that this is immoral to do. The primary reason to not look at underage porn is identical to the primary reason to not work on Sunday; insane people will possibly harm you for violating their insanity! However, provided I can secure myself from the attacks of the insane, I see absolutely no other reason to not look at underage porn. You have completely failed to make any argument against this belief, rather you have merely repeated the same tired old unsupported contradicted verbatim arguments that you have accumulated from your society.

The fact of the matter is there is nothing wrong with looking at underage porn. It is completely a victimless crime. It does not lead to increased child sex abuse rates, in fact it leads to lower child sex abuse rates when people look at underage porn, as confirmed by every single scientific study ever done on the matter. You are rejecting science in favor of dogma, but I do not believe in the same faith as you do, and in fact I am entirely atheistic to the magical pictures faith.

Sacrificial Lamb wrote:mrz, can I give you some advice?

You actually should stop sperging out on the Internet, and just go to Uruguay. Don't talk about it; JUST DO IT.

But don't go there for jailbait.

Go there for the culture, go there for the experience, and go there for the ADULT women. I'm sure that the Uruguayan women there are more slender and more traditional than their American counterparts. Maybe you can stop spazzing out for once, and find yourself a Uruguayan wife.

Adult by the standards of USA? You realize that it is legal to fuck JBs in Uruguay right? But still I should stick to the age of consent laws of the USA according to you, because Uruguay is an immoral country that has legalized the rape of children according to you. You think that only the USA has the correct age of consent, all other countries with lower age of consent are immoral! Just like a follower of Christianity, exposed to it via his culture in his youth, thinks the Hindus of India are immoral, for they have likewise mindlessly accumulated the religion of their country.

I've seen much fucked up shit on Tor, however I wouldn't want you to think I intended to see all of it. As far as things that I actually enjoy that are fucked up go, it's pretty much

1. I like to watch some blackmail videos of people being coerced into stripping, they typically cry and beg not to have to get naked on web cam and show their pussy and such, and I find this super arousing

2. I like some rape videos, where the girl struggles and tries to resist the rape but is immobilized and forced to endure it.

3. I like some spanking videos, where the girl pretty much struggles against her pants being pulled down, and is put over knee or similar, and then has ass beat with belt, brush, or similar, as she attempts to fight this off and cries and begs to not be spanked and such. However, I don't like severe spanking.

4. I like some generally non-consenting stuff that is not rape, like pussy touched without consent, etc.

5. I only like people who look like 12.5+, and prefer them to be even older than that, like I primarily like tanner stage 4 and 5, though I do like some tanner stage 2 and 3 as well.

That is all the fucked up shit that I like. However, I've seen very much fucked up shit on Tor, typically thumbnails or such, seeing as I would never actually want to see these things and only incidentally saw them in looking for the previously mentioned things (and softcore JB porn as well), like babies being raped, or children being beaten severely with whips in sex dungeons, and tortured with significant injury, once I saw what looked like human remains of a dumped corpse, sex slaves in chains and locks, etc.

Jesus fuckign christ, how much shit did you write here? You have WAY too much free fucking time.

Also stop looking at CP you faggot cuck. Also the reason not to look at CP is because it encourages the exploitation of children and people that have yet to be fully capable of making a rational choice, without coercion. children naturally do as adults want and are very unsure how to react in situations. so by looking at these images and videos you're encouraging the exploitation of children.

I know you see no problem with LOOKING at it and you somehow absolve yourself of any responsibility. But you must understand where people are coming from?

Reformation wrote:Jesus fuckign christ, how much shit did you write here? You have WAY too much free fucking time.

Also stop looking at CP you faggot cuck. Also the reason not to look at CP is because it encourages the exploitation of children and people that have yet to be fully capable of making a rational choice, without coercion. children naturally do as adults want and are very unsure how to react in situations. so by looking at these images and videos you're encouraging the exploitation of children.

I know you see no problem with LOOKING at it and you somehow absolve yourself of any responsibility. But you must understand where people are coming from?

Does looking at ISIS beheading pictures encourage terrorism?Does looking at gore videos of people being stabbed to death encourage violent attacks?Does looking at anything else encourage anything else?

Why is CP magical? Why is your belief about CP completely disintegrated from your other beliefs? You believe that CP, and CP alone, is the one sort of image, that by merely looking at, causes what is in it to happen. Nothing else.

I, on the other hand, have an integrated belief system, wherein looking at pictures of anything, be they ISIS cutting peoples heads off, people being stabbed to death in attacks, JBs flashing their tits for their camera phones, or even JBs being sexually abused, causes no effect whatsoever on external reality.

It's really simple to understand. You already understand this about every single other type of media. You just need to extend your beliefs about every single other type of picture to also include CP, because there is absolutely nothing special about CP, nothing magical about CP, that makes it any different from anything else.

In before completely non sequitur logic to the contrary:

"People don't jack off to videos of people being stabbed to death!"

* So fucking what? What relevance does it have if you jack off to a picture or not? Does jacking off to pictures magically make what happened in them happen again? That is completely illogical, completely nonsensical, it is actually completely incoherent and absolutely non sequitur.

* Yes, people actually do jack off to videos of people being stabbed to death! Class 4 sadists are into that sort of shit, even if it doesn't seem sexual to you it is sexual to others, and they are allowed to sexually gratify themselves to people being stabbed to fucking death, but I am not allowed to look at pictures of JB pussies, even if they are not even exploited in the production of said pictures, because people like you are fucking absolutely insane.

"The demand for CP leads to supply and children are abused to produce CP!"

* Why is CP magical? Why doesn't it lead to supply of ISIS beheading videos when people look at them? How did it end up that naked pictures of people under 18 just happened to be the one sort of image that looking at them inherently increases the supply of them? But nothing else, only CP.

* IN EVERY SINGLE COUNTRY STUDIED, LEGALIZING THE VIEWING OF CP DID NOT LEAD TO INCREASED CHILD SEX ABUSE RATES, BUT RATHER LEAD TO DECREASED CHILD SEX ABUSE RATES, MEANING THAT THE LAWS CRIMINALIZING THE VIEWING OF CP DIRECTLY LEAD TO INCREASED CHILD SEX ABUSE RATES

* This argument is empty fucking rhetoric. Where is anything in support of it? Where is a fucking scientific study showing that demand for CP leads to increased child sex abuse rates? There are none! All of the studies say there is an inverse correlation between CP viewing rates and child sex abuse rates. This is a completely baseless claim, completely unsupported, it is the dictionary definition of an ipsedixitism.

Seriously, this is nothing short of a religious fucking delusion on your part. It is a disintegrated belief system. You are literally religiously insane to actually believe. Think of this, you look at a 9-11 image, and someone starts screaming at you that you are causing terrorism. Wouldn't you think that person is fucking insane? That is you. You are the insane person screaming at someone for looking at the 9-11 images and telling them that they are causing terrorist attacks. But you cannot see this. Because you are religiously delusional. And religious people cannot see their own delusions, they can only see the delusions of others. This is why you have non-mormon Christians, who believe a man lived in the stomach of a whale for three days, telling Mormons they are insane for believing a warrior continued to fight after he was decapitated. Of course you cannot fight after being decapitated. Of course you cannot live in the stomach of a whale for three days. But the people who believe that a man lived in the stomach of whale for three days think that people are insane for thinking you can fight after being decapitated.

This is the same with you. You think a person would be insane to think that looking at a 9-11 image causes terrorism. But you are equally insane to think that looking at CP causes people to abuse children. If you were to pay people to produce new terrorist attack images, then you would be contributing to terrorism. There are laws against paying people to commit terrorist attacks, it is highly illegal. If you pay for CP you can argue that you are indeed promoting the production of it (and conveniently ignore that the vast majority of commercial CP is non abusive artistic nude pictures of naked teenagers made with full consent of them and their parents legally in the Ukraine). But looking at it is no more promoting the production of it than looking at 9-11 images is promoting terrorism.

God, it is so frustrating that you are incapable of recognizing this. Like, if you want to be religiously delusional that is your right. But nobody has right to ruin the life of others over their religious delusions. You people are no better than someone who goes around ruining people their life for looking at 9-11 images. Absolutely no difference between you. And guess what, if you grew up in a country that said looking at 9-11 images causes terrorism, and treated people like they are terrorists for looking at 9-11 images, etc....you would indeed think that 9-11 images are fucking magical!. Because your are completely without a mind. You merely accumulate the doxa of your society, no matter how outlandish it is. Wouldn't it be outlandish to think that looking at 9-11 images causes terrorism?! Of course it would! And it is equally outlandish to think that looking at CP causes child abuse.

Fuck, I know it is hopeless to convince you of this, I would have as much luck to convince a Christian that Christianity is bullshit. But make no mistake, you are religiously delusional in having the beliefs you do, and in supporting these laws you are committing crimes against humanity, and additionally you are increasing the child sex abuse rates.

You're a very sick man if you dont see the exploitation going on. And you enjoy looking at these videos and images of children getting raped? Wow...

Images of 9-11 and CP are 2 very different things. Also looking at videos of people having their heads cut off is exactly what the perpetrators want you to do, it instills fear and makes a point. By not viewing it you take away their power to intimidate you, therefore by not viewing CP you take awy the market for it.

But you're a sick man who enjoys seeing children get fucked. So you will say whatever it takes to allow you to keep doing that. Its YOU that is the delusional religious person that cant see anything but his own viewpoint. I will concede that you make some compelling arguments but you cant hide the fact you're a disturbed person that likes seeing others exploited for their own sexual gratification.

Thats what YOU are and i hope you understand what you're doing one day.

Reformation wrote:You're a very sick man if you dont see the exploitation going on. And you enjoy looking at these videos and images of children getting raped? Wow...

Images of 9-11 and CP are 2 very different things. Also looking at videos of people having their heads cut off is exactly what the perpetrators want you to do, it instills fear and makes a point. By not viewing it you take away their power to intimidate you, therefore by not viewing CP you take awy the market for it.

But you're a sick man who enjoys seeing children get fucked. So you will say whatever it takes to allow you to keep doing that. Its YOU that is the delusional religious person that cant see anything but his own viewpoint. I will concede that you make some compelling arguments but you cant hide the fact you're a disturbed person that likes seeing others exploited for their own sexual gratification.

Thats what YOU are and i hope you understand what you're doing one day.

Reformation wrote:You're a very sick man if you dont see the exploitation going on. And you enjoy looking at these videos and images of children getting raped? Wow...

Images of 9-11 and CP are 2 very different things. Also looking at videos of people having their heads cut off is exactly what the perpetrators want you to do, it instills fear and makes a point. By not viewing it you take away their power to intimidate you, therefore by not viewing CP you take awy the market for it.

But you're a sick man who enjoys seeing children get fucked. So you will say whatever it takes to allow you to keep doing that. Its YOU that is the delusional religious person that cant see anything but his own viewpoint. I will concede that you make some compelling arguments but you cant hide the fact you're a disturbed person that likes seeing others exploited for their own sexual gratification.

Thats what YOU are and i hope you understand what you're doing one day.

Oh I completely realize it is sort of fucked up to enjoy seeing others being abused, though only within the context that a significant proportion of males enjoy rape and it is so prevalent as to not be considered a mental disorder. A conservative estimate is that 25% of males would rape if they could get away with it. Biastophilia (attraction to rape) was rejected as a mental illness in no small part due to the significant prevalence of it in the male population. Certainly it would be better if this weren't the case, but it simply is the case, and so long as people don't do anything bad from it IRL I see no problem with it at all, and there is nothing wrong with looking at images of people being raped, any more than there is anything wrong with looking at images of people being burned alive in cages, decapitated, or any other bad thing. And just as it is not bad if a class 4 sadist jacks off to videos of ISIS murdering people, it is not bad if a biastophilic hebephile jacks off to videos of 14 year olds being raped. It is only bad to murder people, or to rape people. Preferably nobody would want to murder people, nor would they want to rape people, but their want is irrelevant separate from an action that victimizes others, and it is not victimizing to look at pictures of people being murdered nor is it victimizing to look at pictures of people being raped.

So I take it that you want to criminalize ISIS beheading videos? Because that is the logical conclusion of your rhetoric. Or do you for some reason not want to criminalize ISIS beheading videos? Also you don't understand the dynamics of the CP world. It is not really market driven. People produce CP because it satisfies psychological urges in themselves, they are not molesting children to meet some demand for images of children being molested, any more than people are stabbing people to death to meet the demand for the images of it. There are rare exceptions to this where it is market driven. In the overwhelming majority of those cases it is not abusive CP, it is aesthetic artistic nudes of JBs. I see nothing wrong with taking pictures of naked teenagers in theater costumes and theatric backdrops, ie: angel wings on a nude 14 year old who is posing with a backdrop that looks like heaven. That is not abusive, it is artistic erotica, and the girls never give a fuck that they did it when they get older, they always consent to it, their parents always consent to it, they have fun being in it, they are paid for their participation, and it is legal in the countries it is produced in. That is literally like 90% of commercial CP, so that is the CP that actually is market driven in the overwhelming majority of cases, and I personally don't give the slightest fuck if people pay for such things to be produced, it is completely non-abusive and it is art really that just happens to also be arousing.