I happen to manage an LDAP directory. It contains users with French names, and those names contain accents (you know: é, è, à, and so on...). LDAP has nice support for this: it encodes such entries in UTF-8 and re-encodes the result in base64 to stay compliant with the original LDAP format.

By default, Fedora Directory Server uses cn= as the first part of the DN. As a result, when a user happens to have an accent in his/her name, it produces a base64-encoded DN!
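To illustrate, here is a minimal Python sketch of the LDIF convention (the helper name is mine, not from any library): values that are plain ASCII are written as-is after a single colon, while anything else is UTF-8 encoded, then base64 encoded, and marked with a double colon.

```python
import base64

def ldif_dn_line(dn: str) -> str:
    """Render a DN as an LDIF line, base64-encoding it when it
    contains non-ASCII characters (the RFC 2849 convention)."""
    try:
        dn.encode("ascii")
        return "dn: " + dn
    except UnicodeEncodeError:
        encoded = base64.b64encode(dn.encode("utf-8")).decode("ascii")
        return "dn:: " + encoded  # the double colon marks a base64 value

print(ldif_dn_line("cn=John Smith,ou=people,dc=example,dc=com"))
print(ldif_dn_line("cn=Hélène Bérard,ou=people,dc=example,dc=com"))
```

The first DN stays readable; the second comes out as an opaque base64 blob, which is exactly what you see in the directory when a cn contains an accent.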

I have finally found the link to this famous comment about implementing RFC 1349. I reproduce here the relevant extract of the comment.

One suggested solution that has not worked so far, and is unlikely to work in the foreseeable future, is voluntary bandwidth allocation protocols such as RSVP. Few people bother to use them, partly because they are poorly understood and partly because they are not widely implemented at the consumer end.

However, there is room for QoS in a limited, but still useful form. This uses an existing mechanism in IP, the TOS field. Some applications already set TOS to sensible values, which is commendable - it would be nice if some notice was taken of it at the points where it really matters.

This simple priority scheme can be implemented within each queue in the above traffic shaper, perhaps combined with SFQ or similar just to give things an extra boost. It still doesn't require packet inspection beyond the TCP header, even with SFQ - and not even that if it's just a P-FIFO.

What it achieves is isolation of consumers' typical usage patterns from each other, within their own traffic streams. At the same time, applications have an incentive to correctly mark their traffic - if they just shove it in Low Latency regardless, they will get limited bandwidth, which isn't good for P2P! Malicious abuse is also avoided by the isolation of the intra-user priority from other users' flows.

That was easy, wasn't it?

I notice that Comcast is implementing a functionally similar scheme with this year's round of cable equipment updates. This appears to be just the bandwidth equalisation, which is implemented at the edge modems with help from the cable's upload receiver. I can't help guessing that this solution occurred to them only when they talked to BitTorrent Inc. I wonder whether the latter suggested it? Enquiring minds want to know.

I am not a proponent of maximal network neutrality, which states that "every IP packet should be treated equally". This egalitarian approach works well when a network is quasi-empty but fails when it comes close to saturation. However, the need for uniform rules is obvious, and I am sympathetic to arguments that would prevent some content providers from getting more priority on the network than others just because they can pay for it. The calls for more investment in Internet infrastructure are not going to be answered unless there are additional revenue streams. To me, managing Internet bandwidth is about addressing the following issues:

how to prevent 5% of Internet users from using 50% of the bandwidth

how to prioritize legitimate content over peer-to-peer file exchange

how to decrease spam

how to address real-time media transfer

I recently read some material about TCP algorithm evolution and noted an interesting proposal in the comments (see my next post). All this led me to the following four steps to regulate Internet bandwidth:

Step 1 - define three types of service uniformly across all DSL and Fiber connections on the user side

This is the most sensible proposal I have ever read on the Web. Let us imagine a typical European DSL line with 8 Mbit/s download and 800 kbit/s upload. The idea is to provide worldwide support for the type of service (TOS) field as defined in RFC 1349.

Under this system, applications that choose the TOS 'maximize throughput' would have access to the full capacity of the DSL link (upload and download) but with lower priority. By default, all FTP and SMTP traffic would be assigned this TOS.

Applications that choose 'minimize delay (latency)' would have access to a fraction of the DSL bandwidth (e.g. 2 Mbit/s download and 600 kbit/s upload) but with higher priority. It is expected that common Web surfing would use this.

Applications that choose 'maximize reliability' would have access to a low symmetric bandwidth, such as 500 kbit/s, but with guaranteed throughput.
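On the application side, choosing one of these classes is already possible today: a program simply sets the IP_TOS socket option with the RFC 1349 value it wants. A minimal sketch (Linux honours IP_TOS; other platforms may silently ignore it):

```python
import socket

# RFC 1349 TOS values (also exposed as socket.IPTOS_* on some platforms)
IPTOS_LOWDELAY    = 0x10  # 'minimize delay'      : interactive traffic
IPTOS_THROUGHPUT  = 0x08  # 'maximize throughput' : bulk transfer (FTP, SMTP)
IPTOS_RELIABILITY = 0x04  # 'maximize reliability'

def make_socket_with_tos(tos: int) -> socket.socket:
    """Create a TCP socket whose outgoing IP packets carry the given
    TOS value, so the network can classify the traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return s

bulk = make_socket_with_tos(IPTOS_THROUGHPUT)
print(bulk.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
bulk.close()
```

The whole proposal only asks that ISPs and exchanges actually honour this field instead of ignoring it.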

The same principle could be extended to Fiber and SDSL links. That would enable the ISPs to do bandwidth provisioning.

The GIX nodes and peering interconnections would be critical in managing these TOS values across the Internet. It would be the GIX's responsibility to charge TOS traffic differently or, in the case of peering agreements, to weight IP traffic differently depending on the TOS. It is also possible to imagine bandwidth limits tied to the high-priority TOS values.

It is expected that, on the user side, every application would have the ability to choose its TOS, although there would be default settings fitting the average user's needs. This would dramatically alleviate the P2P bandwidth issue, as such applications would probably select the low-priority / high-bandwidth type of service to take advantage of the whole capacity offered. Normal traffic would then be processed with higher priority, making the network appear more responsive. If a P2P application chose the high-priority type of service, its bandwidth would be limited.

Step 2 - separate e-mail traffic from the rest

The other source of bandwidth waste is spam. A lot of smart systems (SPF, cryptography, ...) have been designed to reject spam, mainly when messages reach the destination mail server. They do not prevent the spam from being transferred over the network. I know that this proposal will hurt every person with a libertarian approach, but what we need is to go back to an administered e-mail system.

Today, anybody can set up a mail server, hook it up to the Internet, and send e-mail after a few DNS lookups on MX records. Instead, I propose:

to remove Mail User Agents' access to the SMTP protocol; SMTP would be used solely between post offices (POs) and MTAs;

to extend the IMAP protocol to support e-mail sending by Mail User Agents, as well as roaming (the ability to connect to any PO and have the connection relayed to the user's home post office);

to have a simple (and free) administrative procedure to connect a private mail server to the ISP.

In that case, the ISP and the global exchange MTAs would simply be able to pinpoint spam sources and block them, by shutting down the transfer link between the offending e-mail server and the next MTA once a certain quota is exceeded.
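The quota mechanism I have in mind is nothing more than per-server accounting at the relaying MTA. A hypothetical sketch (class and method names are illustrative, not from any real MTA):

```python
from collections import defaultdict

class TransferLinkGuard:
    """Sketch of the quota idea: an upstream MTA counts the messages
    relayed from each connected mail server and shuts down the
    transfer link of any server that exceeds its quota."""

    def __init__(self, quota: int):
        self.quota = quota
        self.counts = defaultdict(int)
        self.blocked = set()

    def accept(self, server_id: str) -> bool:
        """Return True if a message from `server_id` may be relayed."""
        if server_id in self.blocked:
            return False
        self.counts[server_id] += 1
        if self.counts[server_id] > self.quota:
            self.blocked.add(server_id)  # shut down the transfer link
            return False
        return True

guard = TransferLinkGuard(quota=2)
print([guard.accept("mx.spammer.example") for _ in range(4)])
# [True, True, False, False]
print(guard.accept("mx.example.org"))  # True: other servers unaffected
```

A real deployment would of course need time windows, appeal procedures and whitelists, but the point is that an administered topology makes the choke point obvious.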

Step 3 - manage media delivery differently

All the above does not address the needs of service providers offering VOD services. What we need is the ability to open IP virtual circuits from client to server and to negotiate the bandwidth along the whole path, so that the delivered media is available at the requested quality.

For ToIP and media delivery, we need to go back to the congestion model of the telephone network, which rejects new circuits under overload without degrading the quality of established circuits.

These circuits would be high-priority bandwidth that would have a cost for the carrier. Content providers would need to share some of their revenue stream to ensure this kind of high-quality transport; it is expected that the ISP would charge per minute and per kbit/s and share the revenue with the content provider.
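The telephone-style admission control described above can be sketched in a few lines. This is an illustrative model only (names and units are mine): a link of fixed capacity admits a new circuit only if its requested bandwidth fits, and a rejected request never degrades the established circuits.

```python
class CircuitAdmission:
    """Telephone-style admission control for a link of fixed capacity."""

    def __init__(self, capacity_kbps: int):
        self.capacity = capacity_kbps
        self.circuits = {}  # circuit id -> reserved bandwidth (kbit/s)

    def request(self, cid: str, kbps: int) -> bool:
        """Try to open a circuit; reject it if the link cannot fit it."""
        if sum(self.circuits.values()) + kbps > self.capacity:
            return False  # busy signal: reject, never degrade
        self.circuits[cid] = kbps
        return True

    def release(self, cid: str) -> None:
        self.circuits.pop(cid, None)

link = CircuitAdmission(capacity_kbps=1000)
print(link.request("call-1", 600))  # True
print(link.request("call-2", 600))  # False: would exceed capacity
link.release("call-1")
print(link.request("call-2", 600))  # True: capacity freed
```

This is the opposite of best-effort IP, where every new flow is accepted and everybody's quality degrades together.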

Step 4 - multi-tiered Internet

All the above tends to outline the need to separate four types of traffic:

This was recently posted on Slashdot (in the Firehose only). Here are my own views on things:

The HTTP protocol was originally created to enable document transfer. In the original design, HTTP is essentially stateless: it consists of a series of transactions that are supposed to be more or less unrelated. Authentication was added as a feature to enable simple access control on documents.

Of course, with the evolution of the Web, merchant sites appeared, which needed to:

Keep a context attached to each user in order to enforce a given page flow (i.e. forbid deep linking).

Maintain session-based data (e.g. the user's basket)

Perform banking transactions for payment

The second evolution of the Web is AJAX and the idea of Web-based applications (e.g. GMail).

All this runs on the overstretched old HTTP protocol. In the document linked from the post, the protocol evolution suggestions are very shy. I suggest much bolder changes:

To define a true sessionful mode for HTTP that:

Is based on an explicit persistent TCP connection; closing the connection would be equivalent to ending the session.

Is authenticated only once, at connection time.

Allows several transactions to be open at once, each identified by a sequence number.

Offers a standard transaction format / RPC protocol, such as AMF, alongside the inefficient XML format that I personally dislike for such applications.
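The sequence-number idea above is easy to demonstrate. Here is a minimal Python sketch (framing format and helper names are mine, not a proposed standard): each transaction is a length-prefixed JSON frame carrying a sequence number, so several transactions can be multiplexed on one persistent connection and matched to their responses.

```python
import json
import socket

def send_txn(sock: socket.socket, seq: int, payload: dict) -> None:
    """Frame one transaction: a 4-byte length prefix, then JSON
    carrying the sequence number that ties request to response."""
    frame = json.dumps({"seq": seq, "body": payload}).encode("utf-8")
    sock.sendall(len(frame).to_bytes(4, "big") + frame)

def recv_txn(sock: socket.socket) -> dict:
    """Read one complete framed transaction from the connection."""
    size = int.from_bytes(sock.recv(4), "big")
    data = b""
    while len(data) < size:
        data += sock.recv(size - len(data))
    return json.loads(data.decode("utf-8"))

# Two transactions in flight on the same connection, told apart by seq.
client, server = socket.socketpair()
send_txn(client, 1, {"op": "get", "doc": "/index"})
send_txn(client, 2, {"op": "get", "doc": "/style"})
first, second = recv_txn(server), recv_txn(server)
print(first["seq"], second["seq"])  # 1 2
client.close(); server.close()
```

Authentication would happen once, right after the connection is opened, and the session would simply end when the TCP connection closes.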

This would be the proper foundation for Web-based applications. Note that in the proprietary world, Flash Media Server and its open source counterpart, the red5 project, are an interesting model or source of inspiration for such standardization work.

The truth is that such a session-based protocol already exists: SIP. With some amendments, we could embed it in Web browsers and use SIP over TCP as a control protocol for Web-based, session-based applications. This would be a pretty neat alternative and would avoid the usual standard duplication that we witness at the IETF so often.

While setting up Web access to a directory outside the document root, I encountered a 403 Forbidden error. In error_log, this gives:

Symbolic link not allowed or link target not accessible:

This error message occurs in several situations. There is no tool to troubleshoot them apart from manual inspection.

1. If you are running SELinux, make sure the SELinux configuration allows httpd to access the targeted directory.

You can check /var/log/messages for SELinux access violations, and/or temporarily disable SELinux using /usr/sbin/setenforce 0 to discriminate the case.

2. Check that the symlink sits in a directory (or a subdirectory) that has the FollowSymLinks option enabled in httpd.conf.

3. Check whether any Directory directive restricts this option using -FollowSymLinks. Check this in httpd.conf and all included files.

4. Last but not least, check that your target directory and files are accessible to the apache user. This means that every parent directory of the target must have the proper permissions, most probably read and execute for the whole world.

Some may know that France will hold parliamentary elections next month. I happen to be a happy resident of the Grenoble area and an entrepreneur. As such, it would seem natural that I would be inclined to vote for the candidates proposed by the party of our newly elected President, Mr Sarkozy.

No way. Not because I disagree with the ideas, but because one of the local candidates is trying to ride the current wave of enthusiasm for conservatives to get elected. The problem is...

This guy, Mr Carignon, was convicted a couple of years ago in a corruption scheme involving overcharging for local water distribution. Yes, he has served his sentence and so on.

No Niko, I will never vote for any local UMP candidate. Unless... your Kärcher is hanging somewhere around and can be used to put a bit of pressure on the guy and remove him from these elections. Remember all those speeches about morality in politics and the value of merit. I take you at your word.

And by the way, his local challenger, Mr Cazenave, had interesting opinions on the DADVSI law and is pro Open Source.

PS: Mr Sarkozy and Mr Carignon started in politics in the same circles (they were known as the 'quadras' at the time). So it is very likely that Mr Carignon will be the local candidate for the MP position. Alas.

Already submitted to Slashdot, but I put it back here in my journal for the record. Some guys are trying to think about a new design for the Internet. There is no chance of this being implemented, but some of the ideas are nice.

In fact, I fully agree about the flow and circuit handling. I really believe that we need a core mechanism like virtual circuits to finally handle real-time media (I mean video and audio here) end to end, especially in congestion situations.