This patch adds a version of urlopen that uses available encoding
information to return strings instead of bytes.
The main goal is to provide a shortcut for users who don't want to
handle the decoding themselves in the easy cases[1]. One added benefit
is that the failures of such a function would make it clear why the 2.x
style "str is either bytes or text" model is flawed for network IO.
Currently, charset detection simply uses addinfourl.get_charset(), but
optionally checking the HTTP headers might be more robust.
[1]
http://groups.google.com/group/comp.lang.python/browse_thread/thread/b88239182f368505
[Executive summary]
Glenn G. Chappell wrote:
"2to3 doesn't catch it, and, in any case, why should read() return
bytes, not string?"
Carl Banks wrote:
It returns bytes because it doesn't know what encoding to use.
[...]
HOWEVER... [...] It's reasonable that IF a url request's
"Content-type" is text, and/or the "Content-encoding" is given, for
urllib to have an option to automatically decode and return a string
instead of bytes.
Christian Heimes wrote:
There is no generic and simple way to detect the encoding of a
remote site. Sometimes the encoding is mentioned in the HTTP header,
sometimes it's embedded in the <head> section of the HTML document.
Daniel Diniz wrote:
[... A] "decode to declared HTTP header encoding" version of urlopen
could be useful to give some users the output they want (text from
network io) or to make it clear why bytes is the safe way.
[/Executive summary]

> Christian Heimes wrote:
> There is no generic and simple way to detect the encoding of a
> remote site. Sometimes the encoding is mentioned in the HTTP header,
> sometimes it's embedded in the <head> section of the HTML document.
FWIW for HTML pages the encoding can be specified in at least 3 places:
* the HTTP headers: e.g. "content-type: text/html; charset=utf-8";
* the XML declaration: e.g. "<?xml version="1.0" encoding="utf-8" ?>";
* the <meta> tag: e.g. "<meta http-equiv="Content-Type" content="text/html; charset=utf-8">".
Browsers usually follow this order when searching for the encoding, meaning that HTTP headers have the highest priority. The XML declaration is sometimes (mis)used in (X)HTML pages.
Anyway, since urlopen() is a generic function that can download anything, it shouldn't look at XML declarations and meta tags -- that's something parsers should take care of.
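For the HTTP-header case, the charset can already be extracted from the Content-Type value with the stdlib email machinery; a minimal sketch (the helper name is illustrative, not part of the patch):

```python
from email.message import Message

def charset_from_content_type(value, default=None):
    # Parse a Content-Type header value such as
    # "text/html; charset=utf-8" and return its charset parameter.
    msg = Message()
    msg['Content-Type'] = value
    return msg.get_param('charset', default)

print(charset_from_content_type('text/html; charset=utf-8'))   # utf-8
print(charset_from_content_type('application/octet-stream'))   # None
```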
Regarding the implementation, wouldn't it be better to have a new method on the file-like object returned by urlopen?
Maybe something like:
>>> page = urlopen(some_url)
>>> page.encoding # get the encoding from the HTTP headers
'utf-8'
>>> page.decode() # same as page.read().decode(page.encoding)
'...'
The advantage of having these as new methods/attribute is that you can pass the 'page' around and other functions can get back the decoded content if/when they need to. OTOH other file-like objects don't have similar methods, so it might get a bit confusing.
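The proposed interface could be sketched as a thin wrapper; everything below is illustrative (the class name and the charset plumbing are assumptions, not the actual patch), with io.BytesIO standing in for the network response:

```python
import io

class DecodingResponse:
    # Hypothetical wrapper sketching the proposed page.encoding /
    # page.decode_content() interface.
    def __init__(self, raw, charset=None):
        self._raw = raw
        # With a real urlopen response, the charset would come from the
        # HTTP headers; here it is passed in explicitly.
        self.encoding = charset or 'utf-8'

    def read(self, *args):
        return self._raw.read(*args)

    def decode_content(self):
        # Same as read().decode(self.encoding).
        return self.read().decode(self.encoding)

page = DecodingResponse(io.BytesIO('héllo'.encode('utf-8')), charset='utf-8')
print(page.encoding)          # utf-8
print(page.decode_content())  # héllo
```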

- page.encoding is a good idea.
- page.decode_content sounds definitely better than page.decode, which can be confusing since page is not a bytes object but a file-like object.
I am wondering whether a parameter to urlopen would be better? Not exactly like the mode attribute of the builtin open, but something like decoded=False.
The downside is that the parameter would expose an implementation detail of the method in py3k; the upside is that it gives users an idea of what return value they can/should expect.

If you add the encoding parameter, you should also add at least errors and newline parameters. And why not just use io.TextIOWrapper?
The problem with page.decode_content() is that it forces reading and decoding all of the data at once, while io.TextIOWrapper returns a file-like object and allows you to read line-by-line or in other chunks.
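The io.TextIOWrapper approach might look like this sketch (io.BytesIO stands in for a real response; with urlopen you would wrap the response object itself and take the charset from its headers):

```python
import io

# Stand-in for the bytes stream urlopen returns.
raw = io.BytesIO('first line\nsecond line\n'.encode('utf-8'))

# With a real response the charset would come from the HTTP headers.
charset = 'utf-8'

# Wrap the byte stream in a text stream; decoding happens incrementally.
text = io.TextIOWrapper(raw, encoding=charset)
print(text.readline())  # reads and decodes lazily, one line at a time
print(text.read())      # remaining decoded text
```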