Ah, Internet drama. So a bunch of kids have decided to "destroy" the Church of Scientology by DDoS'ing the Scientology website and making lots of prank calls to the various church buildings. Now, I'm thoroughly anti-Scientology and think that it's an incredibly dangerous and subversive cult; however, the rhetoric being thrown around by the members of "Anonymous" is almost as hilarious as the idea that a multi-million dollar business is going to be "destroyed" by a few kids ordering pizzas to the Scientology buildings and flooding their website off the Internet.

Perhaps the stupidest part of this whole affair is that it's quite possibly the worst action they could have taken. Scientology likes to smear its critics as "suppressive persons", effectively labelling them as hopelessly mentally ill people with anti-social and destructive tendencies. By "attacking" Scientology, the members of "Anonymous" are fitting themselves exactly into the role in which Scientologists would like to portray them: "The antisocial personality supports only destructive groups and rages against and attacks any constructive or betterment group". Now it's easy for Scientology to dismiss any Internet criticism as having been concocted by antisocial "suppressives".

While people continue to believe in Hubbard's teachings, Scientology will continue to exist. The way to destroy Scientology is to destroy those beliefs, to show the lies that the church propagates and all the crazy stories about aliens found in the upper levels. The greatest weapon against Scientology is the truth, and the Internet is the most effective way to disseminate it. Of course, now, the church has an excuse to get more of its members running censorship software - "protect yourself from dangerous Internet subversives, out to destroy Scientology!". David Miscavige himself couldn't have come up with such an effective scheme.

There is obviously a large group of people participating in the "war". What a shame that so much energy has been put towards such an utterly counterproductive effort.

The minimum price for a Macbook Air is £1199. For this, you get a slow processor, 2 gig of RAM with no option to upgrade ever, mono speakers - although I guess it doesn't need decent speakers, since there is no DVD drive to watch movies on anyway, tiny (and slow) hard drive (just in case you thought you could download movies to watch instead), no Ethernet port, and a single USB port just to fuck you over in case you thought you could plug in a USB ethernet dongle and external USB hard drives and DVD drives to work around the above inadequacies.

The best part of all is that if you pay £2000, you can get the higher spec model, which has a slightly faster processor and even less storage.

< AlexMax_> Oh fuck yes
< AlexMax_> my bash kung fu is still strong
< AlexMax_> heh this is getting messy, windows svn doesnt like being called from a shell script so now I'm using the batch file to update and shell script for everything else
< AlexMax_> heaven forbid anyone else try to replicate what I'm doing
< AlexMax_> OK this is really weird
< AlexMax_> If I put in a command at the bash command line, it runs fine
< AlexMax_> but if i put in that same command into a shell script, the command acts like it doesnt recognize the paramitors
<@fraggle> sh != bash
< AlexMax_> I'm using winbash
< AlexMax_> sh is winbash
< AlexMax_> wait a minute
<@fraggle> do you have #!/bin/sh at the top of your file?
< AlexMax_> what?
< AlexMax_> No, but why should i have to, I involke it using sh autobuild.sh
< AlexMax_> actually fuck
<@fraggle> try bash autobuild.sh
< AlexMax_> yeah, i could have sworn bash and sh were the same on this system
<@fraggle> i think it can behave differently depending on whether you invoke it as sh or bash
< AlexMax_> i know that sh and bash are usually distinct on linux
< AlexMax_> but i just remembered that sh is the msys sh and bash is winbash
<@fraggle> your bash kung foo may be strong but my psychic debugging powers are stronger

First of all, the article analyses "crime clearup rate", which is not a measure of the amount of crime, but of how much crime is solved. So what it is really claiming is that "CCTV cameras do not help police to solve crimes". It's important to make this distinction, because it's easy to misinterpret this as meaning "CCTV cameras do not deter criminals", which, indeed, is what the submitter to Slashdot thought.

Secondly, the figures themselves are used in a way that is practically meaningless. "Police in [District X] only have a clearup rate of 20%, despite [N] cameras!". Now, I'm not discounting that there may be a relationship between CCTV cameras and crime clearup rate, but I'm sure there are plenty of other factors that are likely to be much more significant when comparing clearup rates between districts - the number of police officers, their competence, and the actual crime rates in those districts, for example. We're also given no indication of what a "good" crime clearup rate is supposed to be, or how those rates have changed over time since the introduction of CCTV.

I'm always skeptical about stories about CCTV cameras (especially ones where they are described as a "publicly funded spy network"), because a lot of people seem to have an irrational fear of them. Whenever CCTV is mentioned, cries of "Big Brother" and "invasion of privacy" abound. In fact, there's an interesting parallel to Godwin's Law here: any discussion of CCTV cameras will inevitably descend into comparisons with Orwell's Big Brother. "Big Brother" has become a reason unto itself to bash CCTV: a book exists depicting a dictatorial world, and it features CCTV, therefore CCTV is bad.

Similarly, I'm not quite sure how filming a public place constitutes an invasion of privacy. Nobody I've talked to has yet been able to answer this. If there were a policeman standing on the street in place of the camera, would that also constitute an "invasion of privacy"? The funniest answer I've had so far is that people would no longer be able to commit the minor crimes that they would previously have been free to commit.

Of course, I don't believe that there are no potential issues whatsoever surrounding the use of CCTV cameras, but I really detest the sensationalism and irrational paranoia that surrounds them.

Bill Dougherty has posted Part 2 of this "It's the latency, stupid" article. Sadly, this is filled with as many factual errors as the previous one.

Where do I start? First of all, HTTP: "HTTP 1.1 signals the web server to use gzip compression for file transfers". This is simply wrong. Go and read the HTTP/1.1 specification. Although gzip is mentioned, there's no requirement that an HTTP/1.1 server use gzip compression. I'd say that no browser shipped in at least the last five years uses HTTP/1.0, so this is a totally irrelevant suggestion to make. Even then, switching to HTTP/1.1 will not magically add gzip compression: compression is negotiated, and it's up to the web server whether to send you compressed data instead of the normal uncompressed data. 99+% will not do this.
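To make the negotiation point concrete, here's a toy sketch (not real server code, and the header handling is deliberately simplified): the client advertises gzip support, and the server may, or may not, take it up on the offer.

```python
import gzip

def build_response(body: bytes, accept_encoding: str) -> tuple[dict, bytes]:
    """Toy model of HTTP content negotiation: the server MAY compress the
    body, but only if the client advertised support via Accept-Encoding,
    and even then it is free not to bother."""
    if "gzip" in accept_encoding:
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body

body = b"hello " * 1000

# Client opts in: the server may respond with a gzipped body.
headers, payload = build_response(body, accept_encoding="gzip, deflate")
assert gzip.decompress(payload) == body

# Client never mentioned gzip: the server must send it uncompressed,
# regardless of which HTTP version is in use.
headers, payload = build_response(body, accept_encoding="")
assert payload == body
```

The point is that compression hinges on this opt-in exchange, not on the protocol version number.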

Using HTTP/1.1 CAN provide an advantage, but for reasons entirely unrelated to compression. The major difference between HTTP/1.0 and 1.1 is that HTTP/1.1 can reuse an existing connection to retrieve more files, whereas HTTP/1.0 closes the connection as soon as a download has completed. Reuse helps because of the way the congestion control algorithms work: a connection starts off with a small TCP window size, which is increased in order to determine the available bandwidth of the channel. With HTTP/1.0, this process restarts for every file downloaded. HTTP/1.1 lets you reuse an existing connection that has already settled at a reasonable TCP window size. This matters for modern websites with lots of images and other embedded content. As I mentioned before, though, the advice is utterly irrelevant, because all modern browsers already use HTTP/1.1 by default.
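A back-of-the-envelope model shows why this matters. The numbers here are invented, the window is assumed to simply double each round trip, and handshake details are reduced to a flat one-RTT cost, so treat it as a sketch rather than a simulation:

```python
def rtts_to_send(segments: int, cwnd: int = 1) -> tuple[int, int]:
    """Round trips to send `segments` segments when the congestion window
    doubles each RTT (an idealised slow start).  Returns (rtts, final cwnd)."""
    rtts = 0
    while segments > 0:
        segments -= cwnd
        cwnd *= 2
        rtts += 1
    return rtts, cwnd

FILES, SEGMENTS = 10, 32   # hypothetical page: 10 resources of 32 segments each

# HTTP/1.0: every file pays for a fresh connection (one RTT of setup)
# and restarts slow start from a window of 1.
http10 = sum(1 + rtts_to_send(SEGMENTS)[0] for _ in range(FILES))

# HTTP/1.1: one connection setup, and the grown window carries over
# from file to file.
cwnd, http11 = 1, 1
for _ in range(FILES):
    r, cwnd = rtts_to_send(SEGMENTS, cwnd)
    http11 += r

print(http10, http11)   # 70 16 -- reuse saves most of the round trips
```

The absolute numbers are meaningless; the gap between them is the point.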

Then Bill comes up with this gem: "One effective method is to change the protocol. Latency is a problem because TCP waits for an acknowledgement". This is also wrong. He seems to be under the mistaken impression that TCP is a stop-and-wait protocol: that each packet is sent, an acknowledgement waited for, and only then the next packet sent. What actually happens is that TCP sends a whole window of packets across the channel, and as the acknowledgement for each packet is received, the next packet is sent. To use the trucks analogy again, imagine twenty trucks, equally spaced, driving in a circle between two depots, carrying goods from one depot to the other. Latency is not a problem, just as the distance between the depots is not a problem: provided that you have enough trucks, the transfer rate is maintained. The TCP congestion control algorithms automatically determine "how many trucks to use".
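The trucks analogy can be put into (invented) numbers. Steady-state throughput with a full pipe is one window per round trip, so doubling the distance while doubling the data in flight leaves the delivery rate untouched:

```python
def throughput(window_segments: int, segment_bytes: int, rtt_s: float) -> float:
    """Steady-state throughput with a full pipe: one window delivered per RTT."""
    return window_segments * segment_bytes / rtt_s

# Double the "distance" (RTT) and double the "trucks" (segments in flight):
# the rate at which goods arrive is unchanged.
near = throughput(window_segments=20, segment_bytes=1460, rtt_s=0.02)
far  = throughput(window_segments=40, segment_bytes=1460, rtt_s=0.04)
assert abs(near - far) < 1e-9
```

Only if the number of trucks were fixed would latency cut the transfer rate, and that is precisely the stop-and-wait behaviour TCP does not have.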

TCP will restrict the rate at which you can send data. Suppose, for example, you're writing a sockets program and sending a file across a TCP connection: you cannot send the entire file at once. After you have written a certain amount of data into the pipe, you cannot write any more until the receiving end has read the data. This is a good thing! What is happening here is called flow control. You physically can't send data faster than the bandwidth of the channel you're using can support: on a 10KB/sec channel, you can't send 50KB/sec of data. All that TCP is doing is limiting you to sending data at the physical limit of the channel.
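You can watch this back-pressure happen on any machine with a local socket pair (a stand-in for a real TCP connection, which behaves the same way at the API level):

```python
import socket

# Fill the kernel's buffers on one end of a socket pair.  Once the data
# in flight hits the buffer limit, send() refuses to accept any more
# until the receiver drains it: flow control in miniature.
a, b = socket.socketpair()
a.setblocking(False)

queued = 0
try:
    while True:
        queued += a.send(b"x" * 65536)
except BlockingIOError:
    pass

print(f"kernel queued {queued} bytes before pushing back on the sender")
a.close()
b.close()
```

The sender isn't being throttled by some artificial wait-for-ack rule; it's simply not allowed to outrun the reader.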

"If you control the code, and can deal with lost or mis-ordered packets, UDP may be the way to go". While this is true, it's misleading and potentially very bad advice, especially for programmers writing networked applications. If your application mainly involves transferring files, the best thing to do is stick with TCP. The reason is that TCP already takes care of these problems: they've been thoroughly researched, and many tweaks and optimisations have been applied to the protocol over the years. One important feature is the congestion control algorithms, which automatically determine the available bandwidth. If you don't use this kind of algorithm, you can end up with the kind of congestion collapse that Jacobson describes in his original paper on network congestion. If you use UDP, you're forced to reinvent this and every other feature of TCP from scratch. As a general rule of thumb, stick with TCP unless there is some specific need for UDP.

Finally, I'd like to examine his list of "tricks that network accelerators use":

"1. Local TCP acknowledgment. The accelerator sends an ack back to the sending host immediately. This ensures that the sender keeps putting packets on the wire, instead waiting for the ack from the actual recipient". This is nonsense. TCP keeps putting packets onto the wire in normal operation; it doesn't stop and wait for an acknowledgement. TCP acknowledgements are already being transmitted correctly. If you interfere with the normal transmission of acknowledgements, all you're doing is breaking the fundamental way the protocol and its sliding window algorithm work.

"2. UDP Conversion. The accelerators change the TCP stream to UDP to cross the WAN. When the packet reaches the accelerator on the far end, it is switched back to TCP. You can think of this a tunneling TCP inside of UDP, although unlike a VPN the UDP tunnel does not add any overhead to the stream." I fail to see what possible advantage this could bring; if anything, it throws away TCP's congestion control across the very link where it is most needed.

"3. Caching. The accelerators notice data patterns and cache repeating information. When a sender transmits data that is already in the cache, the accelerators only push the cache ID across the WAN. An example of this would be several users accessing the same file from a CIFS share across your WAN. The accelerators would cache the file after the first user retrieves it, and use a token to transfer the subsequent requests." This is useful in the very specific case of CIFS, because SMB has known performance issues when running over high latency connections - it was designed for use on LANs, and the protocol suffers because of some assumptions that were made in its design. This doesn't apply, however, to the majority of other network protocols.

"4. Compression. In addition to caching, network accelerators are able to compress some of the data being transmitted. The accelerator on the other end of the WAN decompresses the data before sending it to its destination. Compressed data can be sent in fewer packets, thus reducing the apparent time to send." Amusingly, what this actually does is decrease the bandwidth used, and has nothing to do with latency.

I saw this blog entry linked on Digg (it currently has over 2000 diggs), and felt that I should respond to it.

The author claims that poor latency is causing problems with TCP congestion control algorithms. Basically, this entire article is based on a flawed understanding of how TCP works.

TCP has built-in congestion control algorithms that attempt to determine the amount of available bandwidth between two hosts on a network, and from that, the rate at which to transmit information. If you transmit data faster than the link can handle, you end up with lost packets, whereas if you transmit data too slowly, you aren't using the full capacity of your network, so it's important to find the optimum point. These algorithms aren't based on latency: latency can affect them in some ways, but in general it does not determine the available bandwidth they discover.

The author uses the analogy of passing sand scoops over a wall to explain his point. Unfortunately, it's a false analogy. A better analogy would be trucks driving between cities. Imagine that you have two warehouses, one in Southampton and one in Manchester. You want to transport things from Southampton to Manchester, so you put the things on a truck, the truck drives to Manchester and then drives back again.

Suppose you move the Manchester depot to Edinburgh instead. Now the trucks have to drive a lot further. If you only have one truck, doubling the latency halves the transfer rate. However, the point to realise is that with TCP, there is more than one truck. The author says, "As distance increases, the TCP window shrinks". This is the exact opposite of what happens in TCP. To use the trucks analogy again, if you increase the distance between depots, the logical thing to do is to increase the number of trucks to sustain the same throughput. This is exactly what TCP does. TCP window size = number of trucks. Latency increase leads to window size increase.
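To put numbers on it (a hypothetical link, not figures from the article): the window needed to keep a pipe full is the bandwidth-delay product, so a longer round trip demands a larger window, not a smaller one.

```python
def window_needed(bandwidth_bps: int, rtt_ms: int) -> float:
    """Bandwidth-delay product: how many bytes must be in flight
    to keep a link of the given bandwidth and round-trip time full."""
    return bandwidth_bps / 8 * rtt_ms / 1000

# Hypothetical 8 Mbit/s link: a tenfold increase in RTT calls for a
# tenfold increase in the window -- more trucks for the longer road.
print(window_needed(8_000_000, 10))    # 10 ms RTT  -> 10000.0 bytes in flight
print(window_needed(8_000_000, 100))   # 100 ms RTT -> 100000.0 bytes in flight
```

Window size scaling up with latency is exactly the "more trucks" behaviour described above.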

There are flaws in the existing congestion control algorithms. For example, there is a problem that people are experiencing on very high bandwidth connections where TCP window size does not scale up fast enough. However, this only affects very high bandwidth networks: 10 gigabits or more. This isn't something that will affect users on a home DSL line.

Finally, yes, latency is important for certain applications. Gaming and video conferencing are two examples where low latency is critical: data that arrives late is useless. Arguably, the popularity of Web 2.0 applications, where users need fast updates from web servers, also gives latency increased importance. However, when speaking about download speeds, latency is irrelevant. Here, bandwidth is all that matters.

WorldPay has launched VoicePay, a voice-authenticated system for making secure payments.

The problem with new technologies like this one is that they seem deceptively secure simply because they look "hi-tech". We're used to seeing such systems in James Bond movies or in Star Trek, and that gives them a false veneer of security. We need to stop and think about whether such a system is actually a good idea in the real world, and examine its actual security issues properly.

Consider how a voice authentication system must inevitably work. The system takes a sample of the user's voice, and extracts certain characteristic features of the voice (vocal tract properties, for example). In effect, the combination of those particular features is being used as the user's password. When the user comes to authenticate, they speak to the system, those same features are extracted again and compared with the user's profile.
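Stripped of the signal processing, the scheme reduces to something like this toy sketch (the feature values and tolerance are invented; real systems extract far richer features, but the shape of the comparison is the same):

```python
# Toy model: the enrolled "profile" is just a vector of extracted voice
# features, and authentication is a tolerance comparison against it --
# functionally, a reusable password.
def matches(profile: list[float], sample: list[float],
            tolerance: float = 0.05) -> bool:
    """Accept if every extracted feature is within tolerance of the profile."""
    return all(abs(p - s) <= tolerance for p, s in zip(profile, sample))

enrolled = [0.31, 0.74, 0.12, 0.58]   # features from the enrolment recording
genuine  = [0.30, 0.76, 0.13, 0.57]   # same speaker, natural variation
replayed = [0.30, 0.76, 0.13, 0.57]   # same features, synthesised from a recording

assert matches(enrolled, genuine)
assert matches(enrolled, replayed)    # the comparison cannot tell these apart
```

Anything that reproduces the features closely enough passes, which is precisely why the "password" analogy holds.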

The problem here is that this is basically no better than a password-based system. In fact, it's worse. It's vulnerable to the same attacks that a password-based system is vulnerable to (phishing/spoofing, keyloggers can be replaced by voiceloggers, etc). Now take into account that in effect, whenever you speak, you're broadcasting your password to anyone in the vicinity. If someone knows the voice features used by the authentication system, it's not very difficult to get a recording of someone's voice, extract those same features and feed them into a voice synthesiser.

I'm just imagining a mugging of the future, where a thief holds up a man in an alleyway, takes his credit cards, then produces a gun and a dictaphone and says, "now, beg for your life!"