Month: July 2003

Right now we’re watching everyone find power outlets and other necessary connections for their presentations – presenters, Stanford A/V, Charlie Nesson and Larry Lessig. And, I am sure to Les Valdasz’ continuing dismay, lots of Mac power adapters are in evidence <G>

And the lights go down, while the music comes up – and we’re off…

Larry: As we have discussed so far, we are working with a layered model of the network. Today, our focus is the content layer. Moreover, today will be about creative content, while tomorrow will be about programs, etc. So, today’s about copyright, IP and creative content.

Today’s panelists are all Berkman Center/Harvard Law School graduates, so it’s sort of an alumni day of those at the formation of the Berkman Center.

C. Nesson: Wendy and Alex were the programming core at the outset of the Center, working on what is now H2O. Glenn is of the second generation of the Berkman Center, focusing us on music.

So, today’s focus is going to be on content – raising questions and suggesting issues, rather than trying to answer/resolve them. So, let’s look at a couple of examples.

A series of parodic political statements is played in digital/MP3 form, to the great amusement of the audience – examples of sampling, downloading of digital music, and the responses that have been articulated to these various items.

Alex Starts off: Since we’re all disciples of Larry, let’s talk about code as law. So what is peer-to-peer? The concept comes out of client-server models, where you have a more powerful server that supplies information on demand to a client machine – consider, for example, the web browser.

In peer-to-peer, there’s no server – everyone is both a client and a server – a democratization of the client server model.

Of course, there are complex applications. Puretunes, for example, has a server with the music and the index to the music, so the client can just query it. Napster had a central index of what each client had, and Napster just mediated a matchmaking process among the clients, who actually shared the files. At the most purely peer-to-peer end, we have Gnutella, where a query involves a network of peers passing along queries until a set of hits appears.
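The Gnutella-style flooding query described above can be sketched roughly as follows. This is a hypothetical, much-simplified model (the real protocol uses wire messages, message IDs, and so on), but the core idea is the same: each peer answers from its own files, then forwards the query to its neighbors until a time-to-live counter runs out.

```python
# Toy sketch of Gnutella-style query flooding (simplified, hypothetical names).

class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbors = []

    def query(self, keyword, ttl, seen=None):
        """Return (peer, filename) hits reachable within `ttl` hops."""
        seen = seen if seen is not None else set()
        if self.name in seen:
            return []                      # already visited: avoid loops
        seen.add(self.name)
        hits = [(self.name, f) for f in self.files if keyword in f]
        if ttl > 0:                        # pass the query along to neighbors
            for n in self.neighbors:
                hits.extend(n.query(keyword, ttl - 1, seen))
        return hits

# Small demo network: a - b - c
a = Peer("a", ["satisfaction.mp3"])
b = Peer("b", [])
c = Peer("c", ["satisfaction_live.mp3"])
a.neighbors = [b]
b.neighbors = [a, c]
c.neighbors = [b]

print(a.query("satisfaction", ttl=2))
# both copies are found within two hops of peer "a"
```

Note how the TTL bounds the flood: with `ttl=1`, peer "a" would never learn about the copy held by "c".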

Note that the user doesn’t see anything particularly different among each of the processes, except for performance.

KaZaA is similar to Gnutella, but certain computers are identified/promoted into being supernodes through performance and access considerations. This reduces the number of queries needed when searching for a piece of music or a file, since you can start with the supernodes rather than with simple peers.
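The supernode idea can be sketched similarly (again hypothetical and simplified): ordinary peers register their file lists with a supernode, and a search consults the supernode’s index instead of flooding every peer.

```python
# Toy sketch of a KaZaA-style two-tier lookup (simplified, hypothetical names).

class Supernode:
    def __init__(self):
        self.index = {}                    # filename -> set of peer names

    def register(self, peer, files):
        """An ordinary peer announces what it is sharing."""
        for f in files:
            self.index.setdefault(f, set()).add(peer)

    def search(self, keyword):
        """One query against the index, instead of many hops across peers."""
        return {f: peers for f, peers in self.index.items() if keyword in f}

sn = Supernode()
sn.register("alice", ["satisfaction.mp3", "paint_it_black.mp3"])
sn.register("bob", ["satisfaction.mp3"])

print(sn.search("satisfaction"))   # maps the file to both peers holding it
```

The trade-off is visible even in the toy: the search is a single dictionary scan, but the supernode must carry the index and the query load for its peers – which is why KaZaA recruits well-connected machines for the role.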

So, there are lots of ways to do this – the question is which approach makes sense, and how does the law treat it?

Back to Charlie: So, here’s the KaZaA home page. And here’s how to find a song – Satisfaction by the Rolling Stones – lots of successes in the search with characteristics of the file, the node and the number of downloads. We can also monitor progress in download.

KaZaA has been quite successful; after the legal “crashing” of Napster, Gnutella arose with a good concept, but a relatively weak implementation because of the pure P2P. KaZaA’s supernode addition to the search strategy has led to a good compromise with strong performance. And, here’s the song…. (plays Satisfaction).

So, here’s a clear threat – as Jonathan notes, this is a brazenly illegal act that I just committed – I downloaded the song and played it as a public performance – the “distinguished university exception”.

Comment: the shared folder exposes you as now a new potential source of Satisfaction, so you are now actively joining the network, making Stanford a potential flagged infringer to the RIAA spiders.

Charlie: Exactly – and KaZaA comes with default setting not only to share everything that you download, but you are also potentially giving away all your bandwidth to support your sharing behavior. Moreover, the system prefers hardwired/broadband connections, making universities a key element of this network.

Currently, there are 4 million users, offering up 6 million gigabytes.

Now, instead of looking for a classic, let’s look for something new – Where is the Love by the Black Eyed Peas and Justin Timberlake. As we wait for the download, Charlie points out that the KaZaA folder is pretty much an open book – anyone can see what you are offering up.

And we find that Charlie has downloaded something that sounds like the song, but suddenly degrades into a cacophony of sound – spoofed files (Z prefers the cacophony to the snippet that appeared).

So a quick poll: P2P will change the industry (a few hands); the RIAA will respond with a number of legal and technical mechanisms that will allow them to hang on (Hatch, Berman, etc. cited).

Now Glenn: I’m going to talk about sampling – and I am about to cause some real legal trouble, I fear. Several names: mash-up, collage, bootleg remix, bastard pop – really, all collages. So can we characterize them according to their legal status?

Dreamworks Collage or Hollywood Collage. Starting with the Austin Powers character as a parody: Myers has signed a contract giving him access to essentially all of their catalog, so that he can put himself into these films. “Think of me as the Puff Daddy of film” – the rap art of movies. Sampling becomes something of a rich man’s game.

Beyonce Knowles is also cited, using samples in her music. She has also been put into several mash-ups. These mash ups are all copyright violations, illegal art. Since they cannot pay to get the samples, they are out of luck.

(A side point: Beyonce’s video has several NASA images, so it was assumed that there was public domain art used in the video. When contacted, however, the researcher for the producer indicated that, since they were in a hurry to get the video out, they simply paid for everything, rather than go to the time and expense of tracking down the specific owners of the public domain stuff. So, it’s come to the point that even real public domain stuff is effectively unavailable.)

Negativland art: U2 + Casey Kasem – a bootleg recording of a Casey Kasem outtake mixed with a U2 song – “The Forbidden Single” – a voice-over on top of U2’s music and Casey Kasem’s voice announcing U2. Illegal art – U2 sued and an injunction was awarded – the physical records all had to be supplied to the copyright holder for destruction, plus the equipment, etc. Get it from Negativland today – Illegal Art.

Another one – the littlest mermaid – a bootleg that explicitly features copyright lawyers’ threats issuing from the mouth of the littlest mermaid (see illegal-art.org)

The Hellraiser Collage: Christina Aguilera and the Strokes (a mashup) – and a video mashup was made once it was released – here it is. (Aside: Start with “Genie in a Bottle” –> “A Stroke of Genius” – a completely reimagined tempo, key, etc. A very, very clever thing to hear – stunning skill demonstrated.) Is it a parody? Not exactly – it seems to work out pretty well – Illegal Art. The Strokes’ response – "it was a funny idea; but I’m not that impressed" – is it the case that, because they’re on the same label, no litigation emerged?

The White & Red Stripes Collage: The White Stripes as a focus – an artist added a bass line to a White Stripes track (an uninvited bassist adds a bass track – Neil Strauss in the Times). But, by skipping the intermediaries (in this case, the bassist met the White Stripes’ leader, who gave permission to the bassist to continue), the creativity was allowed.

The Creative Commons Collage: but we also might be able to work around the instantaneous copyrighting of a creative work by getting the creator to formally relinquish certain parts of copyright rights in advance – by declaring a specific legal form, such as those defined by creative commons.

Some themes/keywords: parody, licensing, esthetics, consent, availability, and some other ideas to carry into the afternoon.

Alex: Sampling takes place all the time, even if you aren’t a musician, mashup artist, etc. The WWW and links means that many WWW sites are able to sample from other sites. Also, let’s consider OPML, a distributed writing tool. Or weblogs, picking and sampling from all over the web.

Then, there are large corporations doing this, too. E.g., Google. And Google has a number of services that collect, organize and deliver massaged content.

Questions this poses: everyone samples. What problems does this lead to? Who’s the author, what are the boundaries, who gets to specify them, and when will we get together and recognize that creative works are always based on a certain kind of sampling?

Glenn: Why are we living with a “wink-wink” legal system, and how hard is it to work within it?

Wendy: I want to broaden the question of sampling. Starting with music, expanding to text and into the domain of cultural sampling –

Let’s look at DaveZilla – a WWW page; and we have Sony’s Godzilla – and we have a legal response by Toho claiming infringement, as well as potential trademark infringement claims. Dave fights back, and an Internet response emerges to the apparently nonsensical assertion that Toho “owns” Godzilla – the Zilla Liberation Front emerges and keeps pushing, still today.

Who’s right – the Toho Corp. or the internet community?

The Aibo and Sony – a programmable robotic dog. An independent programmer, AiboPet, learned how to reprogram the Aibo so that it could dance. He also received a cease and desist, citing hardware circumvention, etc. Again, a group rises up, arguing that it is the right of the owners to manipulate the owned device. (Although it is now the LegitiMutt.) Sony negotiates a middle position.

Premier Press, an imprint of Course Technology, the worldwide leader in computer education products and services and part of the Thomson Corporation, the world’s largest provider of corporate and professional learning solutions, today announces one of their upcoming titles, Hacking the TiVo, which is releasing into bookstores throughout the country in late July. This book fits into the continued efforts of Premier Press to publish books that enable enthusiasts to enhance their consumer technology experience.

Listen.com on Tuesday said it has seen a nearly 100 percent increase in CD burning among subscribers to its Rhapsody online music service since cutting its fee to 79 cents from 99 cents per track.

Rhapsody would not disclose how many tracks were actually burned in June, but said that on-demand streaming has increased 45 percent to more than 11 million songs, or more than 350,000 songs per day, in June.

What does this crackdown mean, in the long term, for file-sharing services? How are file-sharing companies/users reacting to this campaign?

Expect a high-tech arms race between record companies and file swappers. Already, many former users of the P2P services have switched to fledgling alternatives that allow for more privacy. Some are invitation-only, to keep out investigators — but these have the downside of a limited selection of music. Others let people share files without connecting directly, which makes it more difficult to detect the users’ IP addresses. And some break down files among dozens of computers, so no one computer is supplying copyrighted files. The legality of that practice is unclear, as it hasn’t been tested yet in court.

If the file-sharing networks see a big drop in traffic, they could adopt some of these technologies. Sharman, Kazaa’s parent, declined to comment. “The next wave of P2P technology is this masking of identities,” says Mr. Gonzalez of Zeropaid.

In addition to the high-tech arms race, some file swappers may turn to an older technology: CD burning. “The most obvious alternative for the kids will be the CD burner and the ‘sneaker net,’ ” or physically handing out copies of CDs, says Mr. Leigh, the Raymond James analyst.

Aussiechip released for free to the internet last week details of how to make “mod chips” – microprocessors that alter the internal workings of a console – under a licence that requires anyone downloading the plans to issue proceedings in its home jurisdiction of Queensland, should they wish to sue.

Aussiechip founder Grant Sparks says there have been several downloads of the plans from Microsoft’s corporate network in Redmond, Washington, agreeing to the click-wrap agreement. Microsoft Xbox spokesmen failed to return calls.

Intellectual property lawyer Simon Minahan says Microsoft is bound by the actions of its workers, who agreed to download the designs under the terms of the click-wrap agreement. “In actual fact it would be a little disappointing if they couldn’t sue me,” says Sparks, known as “Donatus” in the mod chip underground. “You see, I’m quite happy for them to take us to court, I just want to see it happen under conditions where we win.”

A group of Xbox-security researchers say they have found a way to run Linux on the Xbox games console without a mod chip and will go public with the technique if Microsoft won’t talk to them about releasing an official Linux boot loader.

[…]

The researchers say they want Microsoft to release a “signed” Linux boot loader which would allow Xbox users to run the open-source operating system on the console without installing a chip.

A signed Linux boot loader will not allow users to load pirated games, they say. However, the release of the new Xbox-exploits they claim to have developed to run Linux on the console would have the side-effect of allowing rampant piracy without the need to install a mod chip, something the hackers say they would like to avoid.

Kazaa distributor Sharman Networks and partner Altnet hope their new group, called the Distributed Computing Industry Association (DCIA), will help legitimize the much-maligned peer-to-peer industry, which has come under fire from Hollywood, politicians and the recording industry for being a haven for pirates.

Martin Lafferty, the DCIA’s chief executive, said the group is hoping to provide a neutral forum where companies that are affected by or involved in peer-to-peer or distributed computing technology can meet to establish business practices, to encourage the adoption of standards and to help shape public policy.

Vivendi’s decision to exclude Universal Music from the sale was made after the board concluded that it would otherwise be selling the unit — the recording industry leader — at the bottom of the market. Sagging CD sales and concerns about online piracy have devalued all music companies, and the Vivendi board is hoping it can attract a higher price for Universal Music later, once the industry sorts out the piracy problem, the executives said.

The removal of Universal Music from the bidding contest is likely to benefit Liberty Media, which had been uncomfortable with the state of the music industry and may now be inclined to bid more for the other entertainment assets. MGM, NBC and Viacom were never interested in the music unit.

Z: We are going to talk about internet governance today, in the effort to tie everything together.

ICANN – have you heard of it; are you on it? (Ray, next to me). Domain names, why do people care, and so what?

How the mess came about: Start with Cerf, John Postel, et al. Dave Clark: “Well it started out with 12 people in a room…” The nerds having a good time, and trying to keep the rest of us away.

Requests for Comments, within the IETF. Iteration through the comments until it converged to a protocol. RFC 2555 – “These notes were the beginning of a dialog, and not an assertion of control.”

So we have an engineering problem: the internet is driven by numbers. Let’s associate a mnemonic name with each number, and then we just appear to use easy-to-manage names. A so-called namespace: a lookup list.

The original list didn’t scale too well, so the dot structure was added. And, at the same time, the list gets distributed according to the dot names. And the top level domains (edu, com, net) are found by asking the .root list. A hierarchy of distributed domain names dynamically resolved into numbers.
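As a toy illustration of that hierarchy (the zone contents and addresses below are made up for the example), resolution walks the labels of a name from right to left, with each zone delegating to the next one down:

```python
# Toy sketch of hierarchical name resolution. Each nested dict stands in for
# a zone that knows only about the next label down; only the leaf holds an
# address. All names and addresses here are hypothetical.

ROOT = {
    "edu": {
        "stanford": {"www": "171.67.0.1"},     # made-up address
    },
    "com": {
        "example": {"www": "93.184.216.34"},   # made-up address
    },
}

def resolve(name):
    """Walk the dotted name from the rightmost label down to an address."""
    zone = ROOT
    for label in reversed(name.split(".")):
        zone = zone[label]      # delegate to the next zone in the hierarchy
    return zone

print(resolve("www.example.com"))
```

In the real system each of those dict lookups is a query to a different server – the root servers, then the .com servers, then example.com’s own servers – which is exactly what lets the list be distributed rather than kept in one place.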

In 1993, Jon Postel decided the job was boring and hard. The NSF had been funding him, so they generated a RFP for someone to maintain the list of names. Jon might maintain .root, and the contractor (eventually Network Solutions Inc.) maintains the list. NSI figures out that a fee for renting the name from them would offset NSF fees.

Problems emerge: NSI is making a lot of money; why can’t others? We need more top level domains. And cybersquatting starts to take place, to the horror of corporations.

Terry Fisher: Clean up #1 legal intervention – Lawsuits as a solution to the problem. The first pass at this was based on the use of trademark law (1993-1999).

Four kinds of trademark infringement: (a) identical marks on competitive products; (b) similar marks on competitive products (Areo instead of Oreo) – “will this confuse the consumer?” becomes the operant question, measured against the marketing environment; (c) similar marks on non-competitive products – an even more detailed showing is required to establish consumer confusion – possible confusion includes confusion as to the source of the product, confusion as to endorsement (Rolls Royce radio tubes), and confusion post-sale (bolt-on parts to make a car look like a Ferrari; prestige drops, thus confusing the consumer); (d) dilution – the defendant’s behavior dilutes the power of the trademark, either by blurring it or tarnishing it.

Back to Terry Fisher: In the US, the dilution doctrine is coming into its own (originated in Germany). Trade Related Aspects of Intellectual Property agreement (TRIPS) leading to coordination across boundaries.

Trademark law is not designed for domain names, so some manipulation is required. The Toeppen cases suggest that squatting might confuse and certainly dilutes. Other countries move in this direction.

Problems and limitations of trademark law as applied to this problem: First, it is expensive to bring suit – thus settlements moved money. Second, jurisdictional variations make it hard. Third, you really have to strain to make it work. Dilution, in the US, applies only to famous marks used in commerce, and requires a showing of blurring or tarnishing, which may be hard to accomplish. Problematic as this problem gets revved up.

J. Zittrain: Clean up #2 political intervention – taking over the domain name system.

Jon Postel still struggling with the problem, so he thought about changing the technology.

Jon could add some new names

Put out an RFC, add some new names

Forms a committee

The International ad hoc committee – IAHC is formed. And they generate the gTLD-MOU – Generic Top Level Domain Memorandum of Understanding. A big boring document in search of consensus, but it did have some good ideas.

A conflict emerged between NSI and Postel. Jon found it was time to stop running the A .root, so he gave it to NSI – immediately regretting it. So, NSI was the keeper, but Jon was the administrator. And NSI made it clear to Jon that if the gTLD-MOU process was finalized, NSI would not abide by it.

Jon got half the roots to conduct a test – use the B.root instead of the A.root. A huge brouhaha. The US Government stepped in: Ira Magaziner moved into Commerce, and into the problem. “The White Paper” – a statement of policy that says it’s time to privatize domain names.

Attempts within the internet community during the summer of 1998 to develop something that will work. The International Forum on the White Paper (IFWP) forms to discuss a solution to this initiative. First meeting held in Reston, VA – home of NSI. No real agenda, yet things start to percolate. Jon Postel, meanwhile, starts to think about a new IANA and the rules thereof.

NSI again wants Postel out of the picture, so they also generate a set of rules. One suggestion was to negotiate at Harvard, canned by Magaziner, so the meeting was held in DC. Thus was formed ICANN (the name IANA was rejected for obscure reasons).

At the same time, we have changes in the kind of disputes that arise. It’s not just simple cybersquatting anymore. So we get typosquatting – exploiting the misspellings of users (microsoft.com); conflicts between non-competitors; retailers exploiting name variants; commercial vs. non-commercial users (pokey.org); fan sites; parody and commentary (peta.org v peta.com – introducingmondy.co.uk).

Two new dispute resolution mechanisms were constructed. One is the UDRP – the Uniform Domain-Name Dispute-Resolution Policy. ICANN recognizes certain registrars, and requires that its licensees sign a contract that includes the agreement to be bound by the UDRP – a contract of adhesion that is global in its reach.

Ostensibly it governs abusive registrations and use of domain names – “legitimate interest” and “bad faith” as key terms of art that govern success in recovery (largely derived from trademark law). Imposes an expeditious process favoring plaintiffs; done by e-mail; 20 days to respond to a complaint; 14 days to decision. Limited remedies: no monies involved, but cancellation of registration or transfer of the name to the plaintiff. You have 10 days to take it to court, which can be difficult.

In addition to the UDRP, we see an addition to trademark law, known as the Anticybersquatting Consumer Protection Act – the UDRP on steroids, offering up a civil cause of action using much the same language as the UDRP. Similar terms of art, but also a safe harbor provision based on intent. The key difference is that the remedies are serious money – $100,000 per name, potentially.

Courts seem to be willing to work this act in favor of plaintiffs.

How well did it work? – the state of ICANN today. What did we learn, if anything.

ICANN constructs a board, many committees, and some at large members. The at-large membership was called for by Magaziner – the Boston Working group.

One worldwide election was held; a disaster according to the board; the public interest groups thought it went OK, or at least was fixable. The board adopted the results, and immediately went to work making sure that there would never be another election. “Succession by right of kings,” as it now stands – no real electoral process.

When asked if they care about this disenfranchisement, the audience sits in (stunned) silence. Not many are worried about this, although Ray London does care. (Note: Barbara Roseman is here, refining some of the description of ICANN – the ITU is acting to try to be a part of the process.)

Alternative roots/other competitive issues

The changing organizations and the changing rules that are produced out of these groups. Against this a set of legal evolutions, moving us through a set of new approaches.

goals?

First possession

Avoid consumer confusion

Provide incentives to establish good will

Freedom of expression

Identity/Community/Equality

Efficient Web Navigation

In a world of Google, do we really need domain names anymore? Two puzzles: (a) if you believe the architecture matters at a deep level, and that markets are a problem, yet we see that ICANN could, in fact, act like a government if they wanted to, and would have an effect on this intrinsically important architecture. (b) Or who’s in charge of this anyway? And how can we build a business upon something that is so oddly managed?

A number of possible actions that affect you a great deal are listed; yet governance is not generally viewed as the path to a resolution of conflict over these actions – other recourses are selected/applied. Why is the issue of how ICANN is governed supposed to be the answer to the problem of domain names? The second puzzle in Jonathan’s list.

Sorry, I think I lost track of something at the close here. I’m sure to come back to this.

(While waiting for the session to start, read this Slashdot flamefest: Bill Gates On Linux, responding to this USA Today article: Gates on Linux – an astounding rewriting of the past 20 years of PC history!)

A panel discussion today from Les Valdasz (ex-Intel) and Reed Hundt (ex-FCC chair, currently at McKinsey). This afternoon we turn from theory to practice. Larry and Yochai will also cross-examine.

(Sorry – Reed Hundt has asked that the postings of his statements be taken down, so I have to put in some work to sanitize this posting. It will be back, but I won’t get to it until tonight…)

Reed Hundt’s office called to request that his remarks be removed from blogs, because apparently he did ask not to be quoted. I realize this is a toughie and that most people don’t like to remove things they’ve blogged, but it would be great if you could consider removing his portion of your entry.

OK – here’s an attempt – note that I will be running this past the Berkman’s to make sure they’re OK with it. But, for the moment at least, my notes are back.

(Since Reed Hundt asked not to be quoted, the following is going to be an interpretation of some of the key things that I thought he said. It should be noted that these are my thoughts on what I thought I heard, and I am the only person responsible for the ideas expressed – you have to rely on my interpretation, which could certainly be wrong, and Mr. Hundt should feel free to contradict me as he sees fit – FRF)

Reed Hundt’s opening remarks were largely focused upon a set of concerns arising out of what appears to be a wholesale reversal of course in the FCC’s actions over the past three years. While the period preceding those three years was certainly not perfect, there were many actions taken that strove to ensure that a competitive market for communication services would be maintained.

In spite of these efforts, the telecomm business has seen essentially no real revenue growth, outside of wireless. Compared with wireless, internet revenue is not even on the radar. Moreover, most of the overall revenue growth in telecomm has not been captured by the carriers; rather, the big winners have been the hardware companies with good technology assets.

The lesson from wireless is that the traditional carriers didn’t make much from the new technology; yet the internet business is being constrained to support the entrenched network owners (cable and telephone) in direct contravention of the lessons of the success of the wireless market. Moreover, a duopoly has been created, where wireless was about offering access to many more firms. And, we have allowed these established firms to use revenues from other business units to support their internet businesses, violating yet another lesson learned in wireless.

The reasons for the creation of this structure remain cloudy at best – there doesn’t appear to be a clear rationale, or a policy imperative that has been articulated that this set of actions would require.

Moreover, there are some policies that should be revisited. In no particular order, these include:

Mandate interconnection at the broadband provider level. Access should not be something that the provider can turn on or off at the provider’s discretion

Universal service for broadband should be promoted rather than subsidizing voice interconnection

The Antitrust Division of the DoJ should revisit the current presumption that vertical integration in telecommunication is not harmful. (Aside: Certainly not if you believe Yochai’s layer model!)

And maybe spectrum should be auctioned to hardware makers, who then supply hardware to intermediary service providers – then the price of the auction is reflected in the price of the hardware and the technologies embedded to employ that spectrum.

====== End of Summary of Reed Hundt’s Opening Remarks =====

Les Valdasz: I have never been in a room with so many Apple logos facing me as I speak – *laughter*

I want to talk about how one firm has worked to push wireless. And you should know that I was not a believer in wireless for a long time. Seeing the rise of the voluntary hot spot, however, I have seen that there is something real there.

A surprise that the innovation has led to this kind of reliable access. I’m not going to argue that it’s not great to have wireless for your PC – or even your Mac! It is, of course, but getting there is not terribly straightforward. It’s not just designing, building and supplying a product. We already had 10,000,000 cards out there; yet there are still lots of ad hoc issues facing those who want to get on the wireless nets today – even with this standard.

Participation in standards committees is needed, so Intel is now a part of them. But you also need to engage the community of users. We do this in several ways. First, we put lots of money into firms in the wireless ecosystem, to promote development of much of the necessary glue – security, antennas, etc.

Before we got to Centrino, we had 15,000 easy access points installed that would work without hassle with our tech. We want to get many more of these access points out.

While we’re doing this, the government issued a report that wireless was unsafe, insecure and unreliable. Terribly helpful, of course. So Intel had to invest in a host of associated technologies, and we made sure that it would work by ensuring that Intel ate its own dog food – wireless is available at every Intel site. Secure enough for work, and probably better than many wired locations in that respect.

Regulatory agencies are unavoidable in communications. There are some needs that emerge out of this effort. We need more unlicensed spectrum. It would be nice if this were reconciled across jurisdictions, but we could cope. It would be really nice, however, if spectrum policy were to reflect the state of the technologies available – in particular, the concept of the non-interfering use of spectrum otherwise allocated is an opportunity that the FCC does seem to be moving toward, albeit slowly.

Ubiquitous access is thus possible; where does that take us? 802.11 is short range, something else is longer distance (I missed the number). This could upset the tragic inadequacy of the current home broadband capabilities; and thus opening up a host of new applications – VoIP, entertainment, etc. It should be that one can just buy transport, with applications supplied by a wider range of competitors than we see today.

With luck Centrino pushes us toward this future.

Larry: OK, Les, tell me what you think about what Reed had to say about how the dinosaurs are defeating the innovators.

Reed told a terrible story; will wireless be allowed to blow these dinosaurs away? Will the political fight go your way?

Les: I am not politically savvy; and you ask a great question. The current participants see the world as it is, rather than as it might be. The Internet may very well need its own infrastructure (!). Municipalities are probably going to be the needed base for this sort of political action.

But, it may also be that the Verizons of the world will get interested, and the land grab will be on. We have an opportunity, perhaps.

Yochai: The DoD seems to be playing around with Congress in spectrum allocation, negotiating some sort of non-interference/operating-restrictions game. Similarly, the unused TV spectrum in particular localities could be shared right now, so a software radio can be used on otherwise allocated spectrum. And we have Verizon giving out WiFi spots at payphones, used as a selling point to get you to subscribe to Verizon DSL. Three areas where there is action. Are there opportunities/pitfalls?

Les: This is clearly an example of the chaos I spoke about. Intel should do a deal with Verizon – more access is better; I have no idea if they are doing so. More spectrum is nice, but we need to make the most of what we have, too. Since it takes a long time to move the government, and a short time to innovate the technology, we need to do both.

And a small plug, again, to get away from the nonsense of spectrum auction.

Reed: Here he says something about the idea that technology owners who can put that technology into hardware make the money these days. And he doesn’t see that giving away spectrum works any better than selling it.

Larry: Let’s get back to the point Les made about life within the technological world of rationality. But how to get Washington to understand the land grab that he fears?

Reed: Here he says that almost anything could happen – maybe even the cable and telecomm companies can learn to innovate in this space. And maybe spectrum is not getting priced correctly, not to mention that unlicensed spectrum still is costly to manage, a cost being borne by the hardware buyer.

Les: I know more about the computer side than the dynamics of the communication area. But the mainframe companies would never have gotten into the PC business without competition. It’s very hard to engineer change from within a business. How can one create competitive pressures to get the firm behavior that is needed to innovate? We need market pressures.

Reed: He agrees on the need for market pressure – that’s what leads to innovation. But the government should intervene as little as possible as it works to induce that innovation.

Larry: But we’ve failed, as you’ve said, in the broadband context.

Reed: We’ve collectively reacted oddly to the flight of capital from this market. Consider that the response to the Depression was the creation of agencies to create scarcity, so that there would be a return on capital investment in communication (that’s what the creation of the FCC did). Do we want to create a situation where there are more opportunities to lose money in this market?

A question on vertical integration: Reed points out that the antitrust perspective today doesn’t believe that there is a real economic loss to vertical integration. In network markets, this ought to be revisited.

A question about capital flight from telecomm; it seems to be about foolish decisions, not about interconnection: Reed suggests two models; the government creates a ubiquitous network, or one where interconnection is required. In the latter, more capital is saved.

Q: We spoke of how pipe owners could exert control over content. What might the FCC do today to avoid this problem? Open access or something else? Reed says this is just too hard to solve – there haven’t been good answers to managing this problem at the regulatory level. This may come down to a question of what sort of distribution model should exist.

I’m not sure I really managed this one well, and I didn’t get to ask my question, so you get to see it here: Les talks about the need for market pressure to promote innovation, and Reed is really happy about markets. Is it possible that part of our problem today is that we are seeing the development of markets that don’t exert the necessary pressure?

(Larry’s brought the lights down, so it must be time to start the next session) (Donna’s notes are here)

1935 – Edwin Howard Armstrong broadcast an organ recital from a transmitter located on Long Island, demonstrating FM radio. A novel technology, and in the face of AM radio. Demonstrably a better technology for sound transmission: no static, higher fidelity, and it penetrates the ionosphere, so lower power is required to transmit over the same distance.

An employee of RCA, Armstrong had invented something that threatened the company’s stake in AM – and Sarnoff of RCA fought it all the way. Sarnoff co-opted the FCC and fought Armstrong’s patents for six years, until Armstrong was bankrupted and killed himself.

1964 – Paul Baran of RAND came up with “packet switching.” AT&T was shown the technology and hated it: we doubt it will work, and we won’t help create a competitor to ourselves. Thus, the internet was delayed.

How about another – let’s stream video over the internet. But we are bandwidth-limited until broadband comes into being. In 1998, Excite and AT&T joined up, and some thought to try to do this. Somers of AT&T says no way – why compete?

Innovators: internet – Cerf/Kahn – students

WWW – CERN/Swiss grad student

ICQ – Israeli kid

Hotmail – Indian immigrant

Napster – BU Students

Note: all foreigners and kids. Whom we might call “outsiders.”

Does the architecture help or hinder this sort of innovation? The key idea is the end-to-end character of the logical layer. "Intelligence at the edge; simple at the core." A design concept.

Contrast with switched networks. If AT&T liked your innovation, it would get deployed. If they didn’t like it, AT&T would keep you out. If profits are challenged, you’re out; if they are enhanced, you’re in. (cf. Baran and the video story).

In an end-to-end network, this sort of control cannot be exerted. The network is blind to the use of the packets – it just routes and delivers the packets. Thus, it’s driven by what the users want, rather than what the network allows.

David Isenberg introduced this idea to AT&T – he noted that the smart network the company built had limitations built into the way that network could evolve. He wrote about it and circulated it within the company. Isenberg’s 1997 “stupid network” paper meant he had to go; he left once he was vested, and went off to sell this idea.

This was a reinvention of some MIT ideas by Jerry Saltzer, David Clark, and David Reed (see the van Schewick dissertation – versions of stupid nets). The notion is that network functions can often be done best by letting the application complete the intent of the transmission. E.g., rather than checking data within the network, let the application manage the problem of data integrity – it will have to anyway.
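That integrity argument can be sketched in a few lines of code: whatever checking the network may do hop-by-hop, only the application knows whether the whole transfer arrived intact, so it verifies for itself. (A minimal illustration of the principle, not anything from the talk; the hash choice and data are mine.)

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest the sender computes before transmission."""
    return hashlib.sha256(data).hexdigest()

def receive_and_verify(payload: bytes, expected_digest: str) -> bool:
    # The application re-checks integrity end to end, regardless of
    # any per-hop checks the network did or didn't perform.
    return sha256_of(payload) == expected_digest

original = b"some file contents"
digest = sha256_of(original)
assert receive_and_verify(original, digest)                     # clean transfer
assert not receive_and_verify(original + b"!corrupt", digest)   # damage caught
```

The point of the design argument is that this check must live at the endpoint anyway, so duplicating it inside the network buys little.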

This evolves into the notion of a preferred design – a bias in the design of the network. To the extent possible, make choices that put functionality at the edge of the network, rather than within the network. Not an explicit rule, but a working design argument.

Implications far beyond the architecture of the machines. The original notion of the internet IP protocols is that it’s as simple as possible.

First, the technical consequences.

Flexibility in the way that the network develops, in part because:

No coordination among network users is needed to try something new (e.g., voice over IP: just take sound, digitize it, packetize it, drop it into the net, and reverse the process at the other end – sound) – innovation without coordination.
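That “digitize, packetize, reverse the process” loop is the whole trick – no router in the middle needs to know the packets carry sound. A toy sketch (the sample values and packet size are invented for illustration):

```python
# Toy "voice over IP" pipeline: split digitized samples into packets,
# ship them through the network (a plain list stands in for the net),
# and reassemble at the far end.
def packetize(samples, size=4):
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def reassemble(packets):
    return [s for pkt in packets for s in pkt]

audio = [0, 3, 7, 2, -1, -4, 0, 5, 9]   # pretend digitized sound
network = packetize(audio)               # "drop it into the net"
assert reassemble(network) == audio      # sound comes back out
```

Nothing here required the network’s permission or cooperation – which is exactly the innovation-without-coordination point.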

Fast evolution of network applications (e.g., gopher: from 1991 to 1993 it took off, with massive growth; in 1993 the first browser is released and UMn decides to charge money for gopher; gopher dies – “dead gophers anywhere” – no network administrator intervention; demand and innovation took care of it) (gopher manifesto: http://www.scn.org/~bkarger/gopher-manifesto). The network cannot defend itself.

Competitive consequences:

Maximizes competition: We start with the commons – a resource to which everyone has access, priced or not. All, however, are free in the sense that no one has proprietary control over access to that resource (e.g., language is a commons). Garrett Hardin’s “The Tragedy of the Commons” poisons the concept of a commons in most of the US educational process.

Lessig’s refutation of Hardin: the tragedy requires a rivalrous resource (if I use it, you can’t). Not all resources are rivalrous (e.g., ideas, language). Language, for example, actually becomes MORE valuable as more people use it – a “comedy” of the commons. So, is the resource one that invites a tragedy, or not? Is this a question of tangible vs. intangible assets?

L: Economists keep seeing the commons aspect of the internet and assuming it will die (Larry fails/elects not to consider the network economists, of course).

Larry proposes an “innovation commons” as a product of the end-to-end architecture. Everyone has an equal right to innovate in this space. The need is to maintain the commons, although there are many efforts to break it, generally by introducing property rights – which Larry asserts is only appropriate for rivalrous goods, or, more carefully, property is only appropriate when the benefits of using property exceed the costs of the system.

Competitive power to innovate is maximized in this space, therefore, because no one is locked out.

Minimizes strategic threat: Strategic behavior, in the law, is behavior that undermines the intent of the competitive marketplace. One such strategy is defensive monopolization – the core of the government’s case against Microsoft.

(In the case, Netscape and Java could possibly have changed competition for applications on the PC platform. With Netscape/Java, applications could be written once and run everywhere – not technically successful, it appears, but a real threat. The case asserts that Microsoft decided to attack this strategy by displacing Netscape with something MS controls, like IE. Thus, MS could protect itself from this insidious plan by closing the platform to competition in applications.)

The US courts found that defensive monopolization is certainly illegal. End-to-end takes this kind of protection out of the game – the network cannot protect itself from innovation.

This lowers the cost of innovation – barriers to entry go away, so the cost of entry falls.

Consumer financed growth: If you think like a utility company and you think about innovation, your notion of innovation is how expensive is it to deploy this innovation, and how much will you benefit from doing so? However, some of the most rapid innovation (e.g., the internet) takes place in domains where the consumers invest in the deployment of new technology. (Note: always using the internet as the example of rapid innovation is a bit of intellectual monoculture that weakens this discussion).

Consider 3G v. 802.11. 3G was conceived in the old model of innovation deployment; invest and make the 5 year plan. Now, it’s obsolete at the point of deployment. 802.11 is much faster, cheaper and better for at least some things. The market pull for 802.11, operating in unlicensed space, led to rapid deployment of innovation.

If this is possible with just this small innovation commons in spectrum, what might happen if there were more space within which to work?

So, “e2e is heaven.” But the pessimism – we’re “on our way to hell.”

What is happening is that the e2e layer is being pressured by the owners of the physical layer and the content layer. Corrupting the core:

Policy based routing: a layering on of a new technology that allows the physical layer to treat certain packets differently than others (“All pigs are equal, but some pigs are more equal than others”).
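What “more equal than others” means mechanically is a classifier sitting in the middle of the network, sorting packets by whose they are. A toy sketch (the packet fields and the policy table are invented for illustration – this is the generic shape of the idea, not any vendor’s implementation):

```python
# Toy policy-based router: packets from a favored source get the fast
# queue; everyone else waits. This is the kind of discrimination a
# purely end-to-end core cannot perform, because it never looks.
PRIORITY_SOURCES = {"partner-cdn.example"}   # hypothetical policy table

def classify(packet: dict) -> str:
    return "fast" if packet["src"] in PRIORITY_SOURCES else "slow"

packets = [
    {"src": "partner-cdn.example", "data": "video"},
    {"src": "garage-startup.example", "data": "video"},
]
queues = {"fast": [], "slow": []}
for p in packets:
    queues[classify(p)].append(p)

assert len(queues["fast"]) == 1 and len(queues["slow"]) == 1
```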

Xbox and cable as an example. Microsoft is now favoring the e2e network: MS wants to use the Xbox to allow game playing on the network. Cable companies want to permit (for a fee) gaming on their networks. So, cable companies want to extract some rent.

Content begins to eat the conduit: Consider media concentration – one company controls/coordinates messages, stifling independent voices. Diller attacked it because of the corruption of the message. Turner attacked it because he could never have competed in this space today.

The FCC response is that the Internet will solve this problem – it’s an independent source. Of course, this is true only to the degree that the internet remains e2e. But if companies can influence the architecture so that this is no longer true, then the internet solves nothing.

Does the FCC act to keep the internet e2e? No – the FCC says that businesses should be allowed to innovate in whatever way they want in the internet space – including changing the architecture as they see fit.

Possible solutions: maintain e2e like the electric power grid, rather than turning it into the cable TV network, which is allowed to discriminate in service based on fees or worse. Three debates:

Physical layer – open access: ISPs shouldn’t be able to block participation in the network hardware. Ensure competition among providers. It failed in the US, but is successful in Japan (100 megabits/sec for $50/month). NTT was not facing competition, so they did what they were told. In the US, where the Baby Bells were competing to exist, they barely complied and fought all the way.

A logical-layer push for regulating a “neutral network”: the FCC regulates the net to make sure that pricing discrimination won’t take place among net participants.

So, the layers are all represented in the issues that we face, and this framework leads us to a set of policy discussions that we have to consider with care. Changes at any of these layers, leading to lockup, will finish off the end-to-end objective/benefits.

Do any governments get this? So far, the basic claim is that there is “no need” to preserve this – the “market will take care of it(self).” So, end to end is at risk….

Q&A

Doesn’t the notion of the innovation commons ignore the effects of scale upon innovation? Aren’t there some innovations that require scale that only companies can supply?

Larry agrees that these firms certainly had great innovators, but there seems always to have been a conflict within these firms between the innovators and the business divisions. Moreover, Larry argues that complex systems should be allowed to self organize, rather than be managed. (This is an interesting hypothesis; and it’s something that Larry defends, although he conflates self-organization with simplicity, I think.)

What happens to the notion of property? Investment was made, either in monies or effort, and the sacredness of property influences all of this discussion.

Larry argues that the notion of property emerges out of a social need, rather than an absolute right. We have lost the idea that property exists to satisfy certain social needs – we have the cart before the horse. There are many assets that we elect not to “propertize,” and we have to be careful about what we elect to call property.

In the Eldred case, there was a brief signed by 17 economists, including several Nobel laureates, who asserted that property as a concept requires careful consideration of what the property is expected to accomplish for society. It’s not just efficiency – there are other social objectives that should be served – and a balance has to be struck.

(Hubris, I know, but I’m going to try it again today….) (Donna’s links)

(Yochai’s now at Yale, according to Larry’s intro)

Title: The Technical is Political: Access to an open information environment. An extension of the architecture of communication and its implications for the political economy of communication.

Models of Communication – who gets to say what to whom, and who gets to control that exchange of information. At the extremes we have the broadcast model (special presenters speaking from the core to a receptive edge) and the internet model (multiple presenters throughout the system, distributed capital) – with the telephone in between, where content is at the edges, but the distribution follows specific sorts of rules mandated by the capital owners.

Yochai then puts up a painful graphic, mostly to depict complexity in the modalities of communication. The chain is (a) noise/signal conversion, (b) intelligence production, (c) message production, (d) transmission, and (e) reception. Yochai shows that the number of these elements that “belong” to carriers vs. users varies widely along this spectrum of communication models.

The stakes of architecture – at stake is democracy, in particular political democracy. Jonas of IDT is quoted as saying that he wants to control content by controlling the pipe. (Note that not everyone agrees that this happens.) Versus the internet as a domain where everyone can be a pamphleteer – access makes it possible to change/form opinion from the bottom up.

Another stake: personal autonomy. Because you can work the architecture, you can manipulate quality of service to move people from one source of information to another – essentially offering preferential capability to make one source more attractive than another. (This page takes forever to load – let’s try another site that is faster.)

Another stake: innovation. I don’t need permission to innovate in an open network, but I do in controlled networks.

Another stake: efficiency – in the economic sense that monopoly is less efficient in the marketplace than competition. Also, the efficiency of flexibility – by optimizing a network to achieve one task, you make it suboptimal for another task. So, deadweight losses as we wait for the network to change.

The state of play at the physical layer – one way to think about the policy problems is to think of the layered model of communication. Yochai chooses three layers to construct his policy discussion: the content layer, the logical layer, and the physical layer. (Shades of the class we’re putting together for CMI! This is the working model employed in many communications policy research activities at MIT.) (Yochai points out that the logical layer is also the standards layer – the other two layers should be self-evident; the wires at one and the content at the other.)

At the physical layer in the transition to broadband, we have a little wireless, a little more satellite, a little more DSL, and a lot of cable – on Yochai’s plot, it looks like about 70% of the bar, though not to scale according to Yochai. Statistics from the FCC’s Third 706 Report (Feb 2002) are cited. Looks like growth, especially in cable and DSL. But maybe not.

Let’s look at homes – now it’s entirely cable and DSL – all the other wired methods (T1, etc.) are used by bigger institutions. If you go to SOHO, cable is dominant, followed by the Bells’ DSL, then a sliver of other sources of broadband. Is this competition? Not necessarily, according to Yochai. Just the phone companies and the cable companies run the physical layer that runs into the home and small office.

Historically, of course, communications has been considered a natural monopoly – in particular, multiple providers would be more costly than a single one, because of the capital costs of constructing parallel networks. Thus, we had monopolies, but we needed regulators to make sure that the monopolist didn’t exert its power to extract monopoly rents.

In the 1990s, a move takes place to change telecommunications law to accommodate the new technologies of wired networks – cable TV and telephones. We need to get them to upgrade their networks to provide broadband. How to do this – with competition or monopoly? Given that the regulators seem to have failed to save us from the inefficiencies of a regulated monopoly, the experiment is to try a little competition instead of an imperfectly regulated monopoly. (A glossing of much of network economics here.)

The 1996 Telecommunications Act: aggressive regulation to require sharing of bottlenecks in the network – to construct intra-modal competition among telephone companies. However, cable won’t be required to share the same way that the telcos are.

The phone companies have fought this in the courts, and have managed to slow the degree to which the Bells have relinquished market to other participants. But the cable companies get a free ride, even though they have to deal with local requirements as cable is deployed. Local jurisdictions tried to set rules in these agreements that match the requirements that the telcos face. This too was fought in court, and the rulings have tended to distinguish cable from communication – thus constructing a loophole that keeps the cables out of this sharing regime.

In the past year and a half, the requirements to share are being increasingly gutted by the FCC and the DC Circuit Court of Appeals. While there was a vision that there would be competitors on each wire, there are going to be two parallel networks – the telephones and the cables. The shift is to say that competition between the two modes (cable and telcos) is all that’s needed – we don’t need competition within the distribution mode – duopoly.

So – cable will be controlled by one company in each location; DSL will be controlled by one company in each location; competition will be between the two modes (assuming you have two to choose). Is duopoly enough?

Q&A:

A refinement of Yochai’s claim of efficiency in monopolies – it’s more efficient in that the cost to each user is as low as it can be, assuming the regulators do their jobs to keep the monopolist from extracting excess rent. Eventually, this becomes an explanation of natural monopoly – a consequence of the nature of the technologies available to supply the good.

You’re talking about the cable and the telcos, what happened to the internet? Can’t that be regulated too?

Yochai points out that the internet is not the physical layer – it’s the logical layer. So there’s something else we have to worry about.

Issues

Let’s talk about another possible physical layer – the open wireless network. Wi-Fi plus something. The plus is that the end-user devices have to route, as well as receive and transmit. An ad hoc infrastructure that can scale without necessarily relying upon substantial fixed infrastructure – end-user needs drive the development of the last mile (the OWL network).

As you look at the topology, a key point is that there have to be at least some members on the OWL who are also connected to the hardwire internet network.
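That topology point can be sketched as a reachability check over an ad hoc graph: each device relays for its radio neighbors, and at least one node bridges to the wired net. (The node names and links here are invented; real mesh routing protocols are far more involved.)

```python
from collections import deque

# Each OWL node relays for its radio neighbors; "gw" is the one
# node that is also wired to the internet backbone.
links = {
    "a": {"b"}, "b": {"a", "c"}, "c": {"b", "gw"}, "gw": {"c"},
}

def can_reach(start, goal, links):
    """Breadth-first search: can packets hop node-to-node to the goal?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in links[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

# "a" has no wire of its own, but its packets hop a -> b -> c -> gw.
assert can_reach("a", "gw", links)
```

Remove the gateway, or the relays between it and a given user, and that user falls off the internet – which is why the wired-connection members matter.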

How to consider whether this is a good thing to do or not? Note first that there is no owner – the end users make the network by buying the hardware they need. Moreover, there is no license – just equipment that can be acquired without a license.

Some consequences: the end user equipment will probably be more expensive than it might be otherwise. Because the equipment does not require the user to subscribe to an ISP, the consumer will be willing to pay more for the equipment than they would for something that connects to leased network lines. Also a new notion of valuation is needed to assess the worth of wireless.

Which brings us to open spectrum – (I’m only going to summarize, because this is pretty widely available online already). Interference in spectrum is partly an issue of the “dumbness” of the receivers. Licensing is set up to protect stupid radio devices, in this scheme.

Cheap processing means that we can make smart receivers, so they can think harder about what the signals mean. Also, new theories, especially Shannon’s information theory (look up link). So, interference can be tolerated.

Also, cooperative transmission routing can improve efficiency, by load balancing and by working to minimize transmission inside the range of other transceivers – again minimizing the effects of interference.

A counterintuitive effect: the addition of users does not reduce the capacity of the network – rather, it increases it – each new user is a new repeater in the network and a new cooperator/path opportunity. And there are even more technological strategies out there – multiuser detection, spatial diversity, and sharing information about signal structure to ease filtering.
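The repeater-and-path-opportunity part of that claim can be illustrated by counting relay paths in a toy topology (invented nodes; the real capacity results rest on the multiuser-detection and cooperation theory just mentioned, not on path counting alone):

```python
def simple_paths(links, src, dst, path=None):
    """Enumerate all loop-free paths from src to dst."""
    path = path or [src]
    if src == dst:
        return [path]
    found = []
    for nxt in links.get(src, set()) - set(path):
        found += simple_paths(links, nxt, dst, path + [nxt])
    return found

# Two nodes with one direct radio link...
links = {"a": {"b"}, "b": {"a"}}
before = len(simple_paths(links, "a", "b"))

# ...add a node "r" in range of both: it consumes nothing from the
# a-b link but contributes a second, relayed path.
links = {"a": {"b", "r"}, "b": {"a", "r"}, "r": {"a", "b"}}
after = len(simple_paths(links, "a", "b"))

assert before == 1 and after == 2
```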

Note: Much of this is still at the theoretical/model stage. Lots of things to learn to find out whether this will really work.

So what does this do to displacement in the network? In elements of this technology, we reach a point where there is no displacement of the network resource by a particular communication – and if there is no displacement/loss of network function, then the marginal cost of the communication is $0 – and it should be priced accordingly, i.e., free.

IOW, spectrum can only be valued once you think about the technology and the local deployment of the technology – meaning that markets in spectrum are possibly far too expensive for what they sell and introduce too much inefficiency into the market for communication.

The transactions costs in this OWL network are stupendous; Yochai argues that this means there shouldn’t be a market. (But there are also arguments that would say that this means that we need a property/market construct to reduce these transaction costs.)

Yochai argues that there are open and closed pieces of the logical layer – TCP/IP open, MS O/Ses are closed, Linux open. Trusted systems introduce a closed layer; as might CBDTPA. Similarly, there are pieces in the content network that are open or closed. Copyright, KaZaA, censorship, free sharing.

Yochai argues that we need an OWL or something like it to ensure that there is at least one path, from the physical layer to the content layer, for communication that cannot be blocked through ownership/permissions.

Q&A

Where does the connection to the global internet fit into this? Doesn’t that still mean that there’s a potential block?

I don’t know; it might work, but we are currently facing the last mile as the ugliest problem. So far, we don’t see abuses at the backbone, so we may just get lucky.

How might the current providers block this development?

One thing is the push to make spectrum property, thus making this technology illegal. So far, the FCC seems to have made some substantial changes in the direction toward open spectrum, rather than away from it. Several windows in spectrum are being opened up, and we will see whether this really works.

But there will definitely be regulatory tricks. Look at the Verizon activity in NYC, where their pay phones are going to be wireless access. This may lead to some interesting market plays, either giving Verizon ownership of the OWL (thus allowing them to make you one of their subscribers) or getting you used to having this sort of access, so you’ll start paying more for (Verizon) hardware that will give it to you in the form that you like most.

Wired News covers the Intel v. Hamidi decision, the 4-3 California Supreme Court decision that limits the degree to which trespass can be used to keep individuals from using company e-mails: Ex-Intel Coder Wins E-Mail Case. Here’s the NYTimes article: Intel Loses Decision in E-mail Case [pdf] (note the keyword in the NYTimes URL – with which side of the case do you think the Times’ sentiments lie?)

I noted three interesting things in the opinion. First, the court seemed unimpressed with Aimster’s legal representation. At several points the opinion notes arguments that Aimster could have made but didn’t, or important factual questions on which Aimster failed to present any evidence. For example, Aimster apparently never presented evidence that its system is ever used for noninfringing purposes.

Second, the opinion states, in a surprisingly offhand manner, that it’s illegal to fast-forward through the commercials when you’re replaying a taped TV show. “Commercial-skipping … amounted to creating an unauthorized derivative work … namely a commercial-free copy that would reduce the copyright owner’s income from his original program…”

Finally, the opinion makes much of the fact that Aimster traffic uses end-to-end encryption so that the traffic cannot be observed by anybody, including Aimster itself. Why did Aimster choose a design that prevented Aimster itself from seeing the traffic? The opinion assumes that Aimster did this because it wanted to remain ignorant of the infringing nature of the traffic. That may well be the real reason for Aimster’s use of encryption.

Blubster developer Pablo Soto of Madrid said his music-swapping service relaunched today as a secure, decentralized system providing users with anonymous accounts.

The MP2P network (short for Manolito Peer-to-Peer) on which Blubster is based consists of more than 200,000 users sharing over 52 million files, according to Soto. The update is also said to include a new, streamlined file-distribution method that disassociates transfers from specific users.

[…] "The biggest privacy weakness of our previous version was the ability to query a list of shared songs for any user — now that can be disabled," Soto said. "It may be possible to gather IP addresses from the network, but not data about what content specific users are sharing."

Blubster uses an Internet data transfer protocol known as UDP for content look-up and transfer negotiating. Unlike the TCP protocol that serves this function in other file-sharing networks, UDP is a so-called “connectionless” method that doesn’t reveal links between nodes or acknowledge transmission in an identifiable manner.
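The TCP/UDP contrast is concrete: TCP sets up a handshake that ties two endpoints together for the life of the transfer, while a UDP query is a single fire-and-forget datagram. A minimal sketch of the datagram side using Python’s standard socket API (loopback addresses and a made-up query string for illustration – this is not Blubster’s actual wire protocol):

```python
import socket

# A UDP "lookup" is one datagram out, maybe one back - no connection
# state links the querier to the responder.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                 # OS picks a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"QUERY norah jones", addr)     # no connect() handshake needed

data, who = server.recvfrom(1024)             # datagram arrives, sender addr attached
assert data == b"QUERY norah jones"
server.close()
client.close()
```

The source address does ride along with each datagram (that’s the `who` above), which is presumably why Soto concedes that IP addresses can still be gathered even though per-user share lists cannot.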

(Note that I will be doing this multiple postings/day thing on the assumption that I can hack the innards of Personal Weblog enough to give me breaks by day in the output – but that isn’t going to happen this week unless I have a really down day)

A good day – if for no other reason than to see that Larry can be optimistic. His closing comments in the last session today, where he reinforced Terry Fisher’s suggestion that blogging’s benefits lie not in the answers it might generate, but in the fact that it is evidence of participation by individuals who might otherwise be unconnected to the political process, were a stunner to me. Last year he purposely adopted the negativist perspective (and he promises it for the rest of the week as well), so this was a welcome change. Below are a few pieces of www news that I didn’t get to today.

Optisoft S.L., provider of popular peer-to-peer program Blubster, today announced the launch of Blubster 2.5 in the wake of the latest litigious efforts by the RIAA and MPAA to erode consumer privacy and monopolize control of the P2P entertainment market. As Verizon has been handed a court decision forcing the company to reveal the identity of Internet subscribers accused of music piracy, Blubster has re-launched with a new secure, decentralized, self-assembling network that provides users with private, anonymous accounts. (www.blubster.com).

[…]

"If other means of delivering media files could be compared to a postal system with an identifiable sender and receiver, then Blubster’s proprietary MP2P network could be likened to throwing a bottled message into the vast ocean,” said Pablo Soto. “The message may get to a destination, but no one knows the full path of its journey nor what is in each bottle."

The Bertelsmann AG division, which produces contemporary artists including Norah Jones, Avril Lavigne and No Doubt, said it plans to begin selling CDs in the United States protected with SunnComm’s MediaMax CD-3 product.

MediaMax CD-3 is a collection of technologies that provides copy management for CDs and DVDs while simultaneously enhancing and expanding the consumer’s experience. MediaMax CD-3 is tightly integrated with Microsoft’s (NASDAQ:MSFT) Windows Media Platform and the Digital Rights Management capabilities associated with the latest Windows Media Platforms. The company licenses and uses Windows Media Audio DRM capabilities from Microsoft as the security feature for these files.

On June 26, Rep. Martin Sabo, a Minnesota Democrat, introduced the Public Access to Science Act, a bill intended to rectify the situation. The act would amend U.S. copyright law to deny copyright protection to all “scientific work substantially funded by the federal government.” Since the U.S. government is the world’s largest sponsor of scientific research — the White House asked for more than $57 billion for science in 2003 — Sabo’s bill would have profound implications for scientific publishing. If passed, it would instantly put a huge swath of newly published research into the public domain, upending the journals’ pay-for-access business models.

Copyright law and the principles of equitable relief are quite complicated enough without the superimposition of First Amendment case law on them; and we have been told recently by the Supreme Court not only that “copyright law contains built-in First Amendment accommodations” but also that, in any event, the First Amendment “bears less heavily when speakers assert the right to make other people’s speeches.” Eldred v. Ashcroft, 123 S. Ct. 769, 788-89 (2003). Or, we add, to copy, or enable the copying of, other people’s music.

A second read suggests that Posner didn’t really like anyone’s claims on either side, but felt that Deep failed to suggest there were any noninfringing uses of the tool, and that the recording industry faced irreparable harm, so he sustained the injunction. I’m sure I’ll hear more about it this week. See also the CNet coverage: Court: Anonymous P2P no defense.