<h2>You Infinite Snake</h2>
<p><i>Brighten Godfrey's blog.</i></p>
<h3>On getting rejected (February 16, 2014)</h3>
<p>One of the most frustrating things when starting a career in academic research is getting your paper rejected.</p>
<p>Even with more experience, <a href="http://researchinprogress.tumblr.com/post/33946389387/we-regret-to-inform-you-that-your-paper-has-not-been">no one enjoys a rejection</a> and we all <a href="http://researchinprogress.tumblr.com/post/33884075941/we-are-pleased-to-inform-you-that-your-paper-has-been">prefer to see good news</a>.</p>
<p>Quite often this happens on the first project you tackle. A new student might work for a year on a project, pouring in effort and passion and producing something that seems to have real merit ... only to be hit with a cold, hard rejection.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-MjKMeYIq6Ts/UwCREkF45SI/AAAAAAAAAWs/WCIZzcaRh3Q/s1600/rejection.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-MjKMeYIq6Ts/UwCREkF45SI/AAAAAAAAAWs/WCIZzcaRh3Q/s1600/rejection.png" /></a></div>
<p>And then, with the computer science conference submission schedule, you have no opportunity to respond to the reviewers, and you might have to wait six months or more for another appropriate chance to submit. Science is a slow process.</p>
<p>But there's one bit of silver lining: <b>A paper's rejection doesn't mean your research is bad.</b></p>
<p>In fact, many or most of the papers you see in their final polished form in top venues went through rejections -- even the best papers. Here's a thought experiment. Among a set of published papers, some will have gotten in on the first try, some were rejected first, others were rejected multiple times. Ultimately, how impactful are the papers that got in on the first try, compared to the rejected ones?</p>
<p>To answer that let's (imperfectly) measure impact in terms of citations. Here are my own published research projects, showing the number of times the project received a rejection (X axis), and the number of citations per year (Y axis), as of a few weeks ago:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-vG7uOyVQCic/UwCdxJ1TX0I/AAAAAAAAAXA/ydnyENdCjdc/s1600/impact_vs_rejections.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-vG7uOyVQCic/UwCdxJ1TX0I/AAAAAAAAAXA/ydnyENdCjdc/s400/impact_vs_rejections.png" /></a></div>
<p>First you'll note that most of my projects have been rejected, with a failed workshop, conference, or journal submission, before reaching successful publication later. And furthermore, among these published papers, <b>there is apparently no correlation between whether the project has been rejected and its eventual impact</b>. In fact, if we were to draw a best-fit line:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-RAtFKZKl25c/UwCcuwI9IqI/AAAAAAAAAW4/F72f7sUK2lk/s1600/impact_vs_rejections-fit.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-RAtFKZKl25c/UwCcuwI9IqI/AAAAAAAAAW4/F72f7sUK2lk/s400/impact_vs_rejections-fit.png" /></a></div>
<p>...then we see that my published projects have received 3.96 more citations per year, per rejection. Not bad, considering the average number of citations per year here is 13.4. This is not a very robust method, given the small sample and skewed distribution of citations. But a 2012 study by Calcagno et al. of 923 journals similarly showed that <a href="http://www.sciencemag.org/content/338/6110/1065">rejected papers received "significantly" more citations</a> than those accepted on the first submission.</p>
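<p>For the concretely minded, that best-fit line is just ordinary least squares on (rejections, citations-per-year) pairs. Here is a minimal sketch; the numbers below are made-up placeholders, not my actual data:</p>
<pre>
# Least-squares fit of citations/year against number of rejections.
# The data here are illustrative stand-ins for the real per-project numbers.
import numpy as np

rejections = np.array([0, 0, 1, 1, 2, 3])           # rejections per project
cites_per_year = np.array([8, 15, 10, 22, 18, 25])  # citations/year per project

slope, intercept = np.polyfit(rejections, cites_per_year, deg=1)
print(f"{slope:.2f} extra citations/year per rejection")  # ~4.2 on this toy data
</pre>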
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-4Mf2sQEll_c/UwECQGh_wMI/AAAAAAAAAXg/DgV__bDQeFI/s1600/rejection2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-4Mf2sQEll_c/UwECQGh_wMI/AAAAAAAAAXg/DgV__bDQeFI/s1600/rejection2.png" /></a></div>
<p>This might be counterintuitive. Doesn't a rejection mean that the project is less exciting or less well executed, which would certainly imply lower impact? Perhaps, but there are at least two other factors:</p>
<ul>
<li>A rejection can improve the quality of a project's next submission, due to feedback from peer review and time to improve the manuscript.</li>
<li>Authors might decide, based on the rejection, not to bother resubmitting. These dead manuscripts dropped out of my sample.</li>
</ul>
<p>Here's what I think this says: <b>You should let your best judgement, not the reviewers' decision, guide your research</b>. Certainly you should give careful consideration to how reviewers reacted to your paper, but don't automatically take a rejection as an indication of the quality of a project. If you still believe that this thing is a good thing, then you are probably right. It is a good thing, with at least as much impact potential as an immediately accepted paper.</p>
<p>OK ... now <a href="http://researchinprogress.tumblr.com/post/33946389387/we-regret-to-inform-you-that-your-paper-has-not-been">go have some ice cream</a>.</p>
<!--
<br><br>
<p><i>* I'm sure there are many other factors and biases pushing the number of citations in either direction. First venue selection bias: I might submit a less impactful project to a lower-ranked venue, where it is more likely to be accepted immediately. Second venue selection bias: a rejected paper may be resubmitted to a lower-ranked venue, which receives less attention and thus fewer citations regardless of its true quality. Prehistory: a rejected project has been around longer than its publication date indicates, so more researchers might have learned about it through talks and such, leading to more citations.</i></p>
-->
<h3>Proposal: CoolNets 2014 (November 6, 2013)</h3>
<p>The SIGCOMM <a href="http://conferences.sigcomm.org/sigcomm/2014/cfw.php">workshop proposal</a> deadline is coming up. But the community already has many workshops devoted to popular hot topics and multiple sessions on SDN at every top networking venue. Here's my proposal.</p>
<h4>CoolNets 2014:
The First Workshop on Cool Topics in Networks</h4>
<p>The First Workshop on Cool Topics in Networks (CoolNets 2014) seeks papers describing totally groovy contributions to the field of computer communication networks that are not related to presently anointed hot topics. We invite submissions on a wide range of networking research, including, but not limited to, results which are awesome, neat-o, nifty, keen, swell, rad, da bomb, badass, and slick as a greased herring in a butter factory, while being refreshingly orthogonal to:</p>
<ul>
<li>software-defined networking</li>
<li>cloud computing</li>
<li>content-centric networks</li>
<li>virtualization of networks, network functions, and machines</li>
<li>big data</li>
</ul>
<p>Submissions on the above topics will be held without review for five years, and then fast-tracked to <a href="http://www.universalrejection.org">a prestigious journal</a> unless IP-based QoS has achieved 80% deployment. Such submissions are considered not cool, dude.</p>
<p>Strong submissions may break ground in new research directions, reveal insights about longstanding problems, develop alternative solutions whose clean design will have lasting value, forge connections across fields, deconstruct misconceptions, contribute solid technical advances in important areas, or build networked systems that are just pretty darned cool.</p>
<p>Multi-part series of totally tubular papers will be considered.</p>
<h3>Creating research ideas over time (September 2, 2013)</h3>
<p>Research is built on ideas: identifying questions to investigate, problems to solve, or new techniques to solve them. Before I started as faculty one of my biggest doubts was whether I would have good enough ideas to lead a research group and help shape five or six years of each student's career. There is no deterministic procedure to sit down and generate an idea.</p>
<p>However, we can think about how to improve the conditions for them to pseudorandomly appear.</p>
<p>Since sometime in my first year of grad school (2003), I've kept a document logging ideas for research projects. The criterion for including an idea is simple and has remained, at a high level, fairly consistent: when I have an idea that I think would have a reasonable chance at leading to a publishable paper, I jot down a description and notes. This helps me remember the idea, and the document is also a convenient place to record notes over time, for example if I notice a related paper months later.</p>
<p>Having grown over almost exactly ten years to 169 entries, this document is now an interesting data set in its own right.</p>
<h4>The data</h4>
<p>Of course, this is not a uniform-random sample of ideas. There are various biases; not every idea makes it into the document, my inclusion standards might have changed over time, and so on. And many of these ideas are, in retrospect, terrible. But let's take a look anyway.</p>
<p>Here is the number of ideas in the document per year. (The first and last were half years, so the value shown is twice the actual number of ideas.)</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-vpj2O9PRTRY/UiK3XWXS0nI/AAAAAAAAASk/C4FHLUlpyEA/s1600/ideas.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="http://2.bp.blogspot.com/-vpj2O9PRTRY/UiK3XWXS0nI/AAAAAAAAASk/C4FHLUlpyEA/s400/ideas.png" width="400" /></a></div>
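<p>Incidentally, the tallying itself is trivial once each entry carries a date. A sketch, with hypothetical dates standing in for the real log:</p>
<pre>
# Count logged ideas per year; the dates below are hypothetical examples.
from collections import Counter
from datetime import date

ideas = [date(2003, 9, 1), date(2004, 2, 10), date(2004, 11, 3),
         date(2013, 5, 20)]  # ... one date per logged idea

per_year = Counter(d.year for d in ideas)
first, last = min(per_year), max(per_year)
per_year[first] *= 2  # logging started mid-year, so double the half-year count
per_year[last] *= 2   # data collected mid-year, likewise
print(sorted(per_year.items()))
</pre>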
<p>Now let's probe a bit deeper. The number of ideas might not tell the whole story. Their quality matters, too. To investigate that, I annotated each idea with whether (by 2013) it successfully produced a published paper. I also tagged each of the 169 ideas with an estimate of its quality in retrospect (as subjectively judged by my 2013 self), using a scale from 1 to 10 where</p>
<ul>
<li>5 = dubious: maybe not publishable, or too vague to have much value</li>
<li>6 = potential to result in a second-tier publication</li>
<li>8 = potential to result in a top-tier publication (e.g. SIGCOMM or NSDI in my field)</li>
<li>10 = potential to result in a top-tier publication and have significant additional impact (e.g., forming the basis of a student's thesis, producing a series of papers, a startup, etc.)</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-h6yIxmUXtFU/UiLABovmMCI/AAAAAAAAAS0/EO8G-Yr5E7c/s1600/ideas-paper-producing.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="http://2.bp.blogspot.com/-h6yIxmUXtFU/UiLABovmMCI/AAAAAAAAAS0/EO8G-Yr5E7c/s400/ideas-paper-producing.png" width="400" /></a></div>
<p>The number of reasonably high-quality ideas and the number that produced papers both show significant jumps in 2008-2009, though with different behavior later. Also, the plot below shows perhaps a gentle increase in the mean idea quality over time, and a bigger jump in the quality of the top five ideas in 2008. Note that even a one-point quality difference is quite significant.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-tyH1dau_OT8/UiLBAepWcgI/AAAAAAAAAS8/3EkqoHzj1js/s1600/ideas-quality.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="http://1.bp.blogspot.com/-tyH1dau_OT8/UiLBAepWcgI/AAAAAAAAAS8/3EkqoHzj1js/s400/ideas-quality.png" width="400" /></a></div>
<p>The most prominent feature of this graph is an enormous spike in number of ideas in the 2008-2009 timeframe, and a corresponding increase in higher-quality ideas. What might have caused this? And what can we conclude more generally?</p>
<h4>Ideas need time for creative thought</h4>
<p>A significant change in my life during 2008-2009 was my transition from PhD dissertation work to a postdoc year (split between working on post-dissertation projects at my PhD institution of UC Berkeley, and working with Sylvia Ratnasamy at Intel Labs).</p>
<p>This appears to show the value of having time to step back and think -- and also the opportunity to interact with a new set of people. By May 2008, I was largely done with my dissertation work (though I did work on it more later and finally graduated in May 2009). I had accepted a position here at the University of Illinois and deferred for a year. So I was largely free of responsibilities and concerns about employment, and had more time to be creative. While there are <a href="http://cra.org/postdocs/">reasons to be concerned</a> about the surge of postdocs in computer science, I think this indicates why this particular kind of postdoc can be extremely valuable: providing time and space for creative thought, and new inspiration.</p>
<p>If that is the explanation, then it seems I was not sufficiently proactive about creating time to be creative after the flood of professorial tasks hit in late 2009.</p>
<p>There are alternative explanations. For example, knowing that I was about to enter a faculty position, I might have more proactively recorded ideas for my future students to work on. However, that would not explain another observation -- that my creative expression at this time in other areas of my life outside computer science seemed to increase as well.</p>
<p>John Cleese has argued that creativity takes time, and it's more likely to happen in an "open mode" of playful, even absurd thought, rather than in a "closed mode" of efficiently executing tasks:</p>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/f9rtmxJrKwc?feature=player_embedded' frameborder='0'></iframe></div>
<p>His talk makes other points relevant to academic research. In particular, you are less likely to get into an "open mode" of thought if you are interacting with people with whom you're not completely comfortable. This should certainly affect your choice of collaborators.</p>
<p>It's worth noting that having time to enter an "open mode" of creative thought does not mean that one is thinking free of any constraints whatsoever. I personally find that constraints in a problem domain can provide some structure for creative thought, like improvising around a song's fixed chord changes in jazz.</p>
<h4>Ideas need time to germinate</h4>
<p>In fact, some ideas need years.</p>
<p>You'll note from the second plot above that the number of paper-producing ideas is zero in 2012 and 2013. This is not just random variation: it's actually fairly unlikely to have an idea and immediately turn it around into a paper. In fact, it has happened fairly often that an idea takes a year or two to "germinate". I might write down the seed of an idea, and at that time not recognize whether it is valuable and what it might become. In coming back to it occasionally, combining it with other ideas, and bouncing it off other people, the context and motivation and focus of the idea gradually take shape until it is something much stronger, something I can recognize as a worthwhile endeavor.</p>
<p>And that is all before the right opportunity appears to begin the project in earnest -- such as a PhD student who is looking for a new project and is interested in the area -- and the project is actually developed and the paper written and submitted (and resubmitted ...) and finally published. The most extreme example I've been involved with was a 2005 idea-seed that was finally published in a top conference seven years later. In fact, in processing this data I realized there was a second idea from 2005 which lacked sufficient motivation at the time and got somewhat lost. In 2011 it combined with a new take on a similar idea that grew out of a student's work, and was published in 2012. The plot below shows 14 lines, each corresponding to a project, with points at the inception year of the seed idea, intermediate ideas (if any) which combined with it, and finally the year of publication.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-UdHnWd2Xwa0/UiL32dOtJJI/AAAAAAAAATM/oF4zlxsudLg/s1600/germination_time.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="http://3.bp.blogspot.com/-UdHnWd2Xwa0/UiL32dOtJJI/AAAAAAAAATM/oF4zlxsudLg/s400/germination_time.png" width="400" /></a></div>
<h4>Ideas from connections</h4>
<p>Reading over the document made it clear that very few if any of the ideas sprang out of nowhere. They come from connections: with a paper I read, or with a previous project, or in chatting with collaborators. Some of these connections can be quite unexpected. For example, one project on future Internet architecture indirectly inspired a project on network debugging.</p>
<p>Many of the ideas on the list in fact owe at least as much to collaborators as they do to me. This is likely a big part of the rise in the number of ideas after becoming faculty. Although I lost some of my open creative time after beginning as faculty, I gained a set of fantastic students.</p>
<h4>Conclusions</h4>
<p>Generating and selecting among ideas is an art, one of the most important arts to learn over years of grad school. I will never feel that I've truly mastered that art. But studying my own history has suggested some strategies and conditions that seem to help, or at least seem to help me.</p>
<p>Ideas do not simply appear for free; they are more likely to appear when I have time, or create time, to think creatively.</p>
<p>They often need to germinate over a period of months or years.</p>
<p>And perhaps most importantly, they are most likely to grow out of connections with other work and other people.</p>
<h3>Live-blogging HotNets 2012, Day Two (October 30, 2012)</h3>
<p>This is Day Two. <a href="http://youinfinitesnake.blogspot.com/2012/10/live-blogging-hotnets-2012.html">Day One is here</a>.</p>
<h4>Mobile and Wireless</h4>
<p>Calum Harrison presented work on making rateless codes more power-efficient. Although rateless codes do a great job of approaching the Shannon capacity of the wireless channel, they're computationally expensive, and this can be a problem on mobile devices. This paper also optimizes for the cost of decoding, measured in CPU operations, achieving 10-70% fewer operations at a competitive rate. <i>[Calum Harrison, Kyle Jamieson: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final46.pdf">Power-Aware Rateless Codes in Mobile Wireless Communication</a>]</i></p>
<p>Shailendra Singh showed that there isn't one single wireless transmission strategy that is always best. DAS, Reuse, Net-mimo &mdash; for each there exists a profile of the user (are they moving, how much interference is there, etc.) for which that scheme is better than the others, which this paper experimentally verified. TRINITY is a system they're building to automatically get the best of each scheme in a heterogeneous world. <i>[Shailendra Singh, Karthikeyan Sundaresan, Amir Khojastepour, Sampath Rangarajan, Srikanth Krishnamurthy: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final86.pdf">One Strategy Does Not Serve All: Tailoring Wireless Transmission Strategies to User Profiles</a>]</i></p>
<p>Narseo Vallina-Rodriguez argued for something that may be slightly radical: "<i>on</i>loading" traffic from a wired DSL network onto wireless networks. We sometimes think of wireless bandwidth as a scarce resource, but actually your wireless throughput could easily be twice your DSL in some situations. If there is spare wireless capacity, why not use it? 40% of users use less than 10% of their allocated wireless data volume. They tested this idea in a variety of locations at different times and can get order-of-magnitude improvements in video streaming buffering. Apparently the reviewers noted that wireless providers wouldn't be a big fan of this &mdash; but Narseo noted that his coauthors are all from Telefonica. <i>Interesting question from Brad Karp:</i> How did we get here? Telefonica owns the DSL and wireless; if you need additional capacity is it cheaper to build out wireless capacity or wired? The answer seems to be that wired is way cheaper, but we need to have wireless anyway. Another commenter: this is promising because measurements show congestion on wireless and DSL peaks at different times. Open question: Is this benefit going to be true long term? <i>[Narseo Vallina-Rodriguez, Vijay Erramilli, Yan Grunenberger, Laszlo Gyarmati, Nikolaos Laoutaris, Rade Stanejovic, Konstantina Papagiannaki: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final101.pdf">When David helps Goliath: The Case for 3G OnLoading.</a>]</i></p>
<h4>Data Center Networks</h4>
<p>Mosharaf Chowdhury's work addressed the fact that the multiple recent projects improving data center flow scheduling deal with just that &mdash; flows &mdash; treating each flow in isolation. On the other hand, applications create dependencies: for example, a partition-aggregate workload may need all of its flows to finish, and if one flow finishes early, that alone is useless. The goal of Coflow is to expose that information to the network to improve scheduling. One question asked was what the tradeoff is with the complexity of the API. <i>[Mosharaf Chowdhury, Ion Stoica: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final51.pdf">Coflow: An Application Layer Abstraction for Cluster Networking</a>]</i></p>
<p>Nathan Farrington presented a new approach to build hybrid data center networks, with both a traditional packet-switched network and a circuit-switched (e.g., optical) network. An optical switch provides much higher point-to-point bandwidth but switching is slow &mdash; far too slow for packet-level switching. Prior work used hotspot scheduling, where the circuit switch is configured to help the elephant flows. But performance of hotspot scheduling depends on the traffic matrix. Here, Nathan introduced Traffic Matrix Scheduling: the idea is to repeatedly iterate between a series of switch configurations (input-output assignments), such that the collection of all assignments fulfills the entire traffic matrix. Q: Once you reach 100% traffic over optical, is there anything stopping you from eliminating the packet switched network entirely? Still, there is latency on the order of 1 ms to complete one round of assignments; 1 ms is much higher than electrical DC network RTTs. Q: Where does the traffic matrix come from? Do you have to predict, or wait until you've buffered some traffic? Either way, there's a tradeoff. <i>[Nathan Farrington, George Porter, Yeshaiahu Fainman, George Papen, Amin Vahdat: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final135.pdf">Hunting Mice with Microsecond Circuit Switches</a>]</i></p>
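<p>If you squint, this resembles a Birkhoff-von Neumann-style decomposition: repeatedly pick a port matching, hold that circuit configuration long enough to drain some demand, and move on. Below is a rough sketch of that general flavor of algorithm, as a generic illustration under simplifying assumptions (balanced demand, zero reconfiguration time), not the paper's actual scheduler:</p>
<pre>
# Sketch: decompose a traffic demand matrix into a sequence of circuit-switch
# configurations (perfect port matchings), Birkhoff-von-Neumann style.
# Generic illustration only; the paper's algorithm may differ.
import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule(demand, eps=1e-9):
    """Yield (duration, matching) pairs that together serve all of `demand`.

    `demand` is an n x n matrix of traffic between input and output ports,
    assumed to have equal row and column sums so that a sequence of perfect
    matchings can cover it exactly.
    """
    remaining = np.asarray(demand, dtype=float).copy()
    while True:
        # Choose the perfect matching carrying the most remaining traffic.
        rows, cols = linear_sum_assignment(remaining, maximize=True)
        # Hold this configuration until its lightest port pair drains.
        duration = remaining[rows, cols].min()
        if duration > eps:
            remaining[rows, cols] -= duration
            yield duration, list(zip(rows, cols))
        else:
            break  # demand is drained (or not coverable by a full matching)

# Toy example: uniform all-to-all demand over 4 ports decomposes into
# 4 equal-duration configurations.
for duration, config in schedule(np.full((4, 4), 1.0)):
    print(duration, config)
</pre>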
<p>Mohammad Alizadeh took another look at finishing flows quickly in data centers. There are a number of recent protocols which are relatively complex. Their design is beautifully simple: each packet has a priority, and routers simply forward high priority packets first. They can have extremely small queues since the dropped packets are likely low priority anyway. End-hosts can set each packet's priority based on flow size, and perform very simple congestion collapse avoidance. Performance is very good, though with some more work to do for elephant flows in high-utilization regimes. <i>[Mohammad Alizadeh, Shuang Yang, Sachin Katti, Nick McKeown, Balaji Prabhakar, Scott Shenker: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final151.pdf">Deconstructing Datacenter Packet Transport</a>]</i></p>
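<p>The flavor of that design is easy to capture in a few lines. A toy sketch (the class name, buffer size, and packets are illustrative, not from the paper):</p>
<pre>
# Toy model of priority forwarding: always transmit the packet with the
# smallest priority value first; when the buffer is full, drop the packet
# with the largest value, since it would have been sent last anyway.
import heapq

class PriorityPort:
    def __init__(self, buffer_packets=8):
        self.buffer_packets = buffer_packets
        self.queue = []  # min-heap of (priority, packet)

    def enqueue(self, priority, packet):
        heapq.heappush(self.queue, (priority, packet))
        if len(self.queue) > self.buffer_packets:
            self.queue.remove(max(self.queue))  # drop the lowest-priority packet
            heapq.heapify(self.queue)

    def dequeue(self):
        # Forward the highest-priority (smallest value) packet first.
        return heapq.heappop(self.queue)[1] if self.queue else None

# End hosts could set priority to remaining flow size, so short flows
# effectively preempt long ones.
port = PriorityPort(buffer_packets=2)
for priority, packet in [(100, "elephant-1"), (3, "mouse-1"), (2, "mouse-2")]:
    port.enqueue(priority, packet)
print(port.dequeue())  # "mouse-2": the elephant packet was dropped
</pre>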
<h4>Lunch!</h4>
<h4>Routing and Forwarding</h4>
<p>Gábor Rétvári tackled a compelling question: How much information is actually contained in a forwarding table? Can we compress the FIB down to a smaller size, making router hardware simpler and longer-lasting? Turns out, there's not so much information in a FIB: with some new techniques, a realistic DFZ FIB compresses down to 60-400 Kbytes, or 2-6 bits per prefix! A 4 million prefix FIB can fit in just 2.1 Mbyte of memory. Now the interesting thing is that this compression can support reasonably fast lookup directly on the compressed FIB, at least asymptotically speaking, based on an interesting new line of theory research on <i>string self-indexing</i>. One problem: They really need more realistic FIBs. The problem is that widely-available looking glass servers obscure the next-hops, which affect compression. "We are inclined to commit crimes to get your FIBs." Before they turn to a life of crime, why not <a href="http://lendulet.tmit.bme.hu./fib_comp">send them FIBs</a>? They have a demo! Question for the future: Can we use compressed forwarding tables at line speed? <i>[Gábor Rétvári, Zoltán Csernátony, Attila Körösi, János Tapolcai, András Császár, Gábor Enyedi, Gergely Pongrácz: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final13.pdf">Compressing IP Forwarding Tables for Fun and Profit</a>]</i></p>
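<p>That headline number is easy to sanity-check with back-of-the-envelope arithmetic:</p>
<pre>
# Sanity check of the quoted compression figure.
prefixes = 4_000_000
fib_bytes = 2.1e6                # 2.1 Mbyte compressed FIB
print(fib_bytes * 8 / prefixes)  # 4.2 bits per prefix, within the quoted 2-6 bit range
</pre>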
<p>Nikola Gvozdiev wins the award for best visualizations with some nice animation of update propagation among iBGP routers. Their work is developing the algorithms and systems necessary to propagate state changes in iBGP without causing any transient black holes or forwarding loops. <i>[Nikola Gvozdiev, Brad Karp, Mark Handley: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final38.pdf">LOUP: Who's Afraid of the Big Bad Loop?</a>]</i></p>
<p>Vasileios Kotronis's work takes SDN-based routing a step further: Don't just centralize within a domain, outsource your routing control to a contractor! One cool thing here, besides reduced management costs, is that you can go beyond what an individual domain can otherwise do &mdash; for example, the contractor has interdomain visibility and can perform cross-domain optimization, debug policy conflicts, etc. <i>[Vasileios Kotronis, Bernhard Ager, Xenofontas Dimitropoulos: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final78.pdf">Outsourcing The Routing Control Logic: Better Internet Routing Based on SDN Principles</a>]</i></p>
<h4>User Behavior and Experience</h4>
<p>Rade Stanojevic presented results from a large data set of mobile service plans (roughly a billion each of calls, SMS/MMS messages, and data sessions). The question: Are economic models of how users select bandwidth and service plans realistic? What choices do real people make? In fact, only 20% of customers choose the optimal tariff. The mean overpayment is 37%, the median 26%. Another interesting result: use of service peaks immediately after purchase, and then decays steadily over at least a month, even with unlimited service (so it's not just because people are conservative as they near their service limits). Several questions: Do these results really demonstrate irrationality? Users may buy more service than they need, so they don't need to worry about (and pay) comparatively pricey overage fees. Comment from an audience member: One has to imagine the marketing department of Telefonica has that exact same CDF of "irrationality" as their metric of success. <i>[Rade Stanojevic, Vijay Erramilli, Konstantina Papagiannaki: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final77.pdf">Understanding Rationality: Cognitive Bias in Network Services</a>]</i></p>
<p>Athula Balachandran presented a study working towards a quantitative metric to score user experience of video delivery (in particular, how long users end up watching the video). The problem here is that predicting user experience based on quantitative observables is hard: it's a complex function of initial startup delay, how often the player buffers, buffering time, bit rate, the type of video, and more. The paper analyzes how well user experience can be predicted using several techniques, based on data from Conviva. <i>[Athula Balachandran, Vyas Sekar, Aditya Akella, Srinivasan Seshan, Ion Stoica, Hui Zhang: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final116.pdf">A Quest for an Internet Video Quality-of-Experience Metric</a>]</i></p>
<p>Vijay Erramilli presented a measurement study of how web sites act on information that they know about you. In particular, do sites use price discrimination based on information they collect about your browsing behavior? Starting with clean machines and having them visit sites based on certain high- or low-value browsing profiles, they could subsequently measure how a set of search engines and shopping sites present results and prices to those different user profiles. They uncovered evidence of differences in search results, and some price differences on aggregators such as a mean 15% difference in hotel prices on Cheaptickets. Interestingly, there were also significant price differences based on the client's physical location. Q from Saikat Guha: How can you differentiate the vendor's intentional discrimination from unintentional? For example, in ad listings, having browsed a certain site can cause a Rolex ad to display, which bumps off an ad for a lower priced product. <i>[Jakub Mikians, László Gyarmati, Vijay Erramilli, Nikolaos Laoutaris: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final94.pdf">Detecting Price and Search Discrimination in the Internet</a>]</i></p>
<p>That's it! See you all next year...</p>
<h3>Live-blogging HotNets 2012 (October 29, 2012)</h3>
<p>Note: This blogging might be rather bursty. If you want something more deterministic, here's the <a href="http://conferences.sigcomm.org/hotnets/2012/program.shtml">HotNets program</a>.</p>
<p>This is Day One. <a href="http://youinfinitesnake.blogspot.com/2012/10/live-blogging-hotnets-2012-day-two.html">Day Two is here</a>.</p>
<h4>Session 1: Architecture and Future Directions</h4>
<p>Teemu Koponen spoke about how combining the ideas of edge-core separation (from MPLS), separating control logic from the data plane (from SDN), and general-purpose computation on packets (from software routers) can lead to a more evolvable software defined Internet architecture. <i>[Barath Raghavan, Teemu Koponen, Ali Ghodsi, Martin Casado, Sylvia Ratnasamy, Scott Shenker: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final76.pdf"> Software-Defined Internet Architecture</a>]</i></p>
<p>Sandeep Gupta discussed rather scary hardware trends, including increasing error rates in memory, and how this may affect networks (potentially increasing loss rates). <i>[Bin Liu, Hsunwei Hsiung, Da Cheng, Ramesh Govindan, Sandeep Gupta: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final107.pdf"> Towards Systematic Roadmaps for Networked Systems</a>]</i></p>
<p>Raymond Cheng talked about how upcoming capabilities which will be widely deployed in web browsers will enable P2P applications among browsers, so free services can really be free. Imagine databases in browsers, or every browser acting as an onion router. <i>[Raymond Cheng, Will Scott, Arvind Krishnamurthy, Tom Anderson: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final141.pdf">FreeDOM: a new Baseline for the Web</a>]</i></p>
<h4>Session 2: Security and Privacy</h4>
<p>Scott Shenker examined how to build inter-domain routing with secure multi-party computation (SMPC), to preserve privacy of policies. The idea is that interdomain routing really is a multi-party computation of global routes, and participants want it to be secure. The benefits of using SMPC: autonomy, privacy, simple convergence behavior, and a policy model not tied to the computational model. The last item should be emphasized: there's a lot more potential policy flexibility here with a much easier deployment story, just changing software at the set of servers running the computation. For example, do other classes of policies have different or better oscillation properties? Part of this (convergence) seems to connect with <a href="http://static.usenix.org/event/nsdi08/tech/john.html">Consensus Routing</a>. Jeff Mogul mentioned an interesting point: by adding the layer of privacy it may be very hard to figure out what's going on inside the algorithm and debug why it arrived at a particular result. <i>[Debayan Gupta, Aaron Segal, Gil Segev, Aurojit Panda, Michael Schapira, Joan Feigenbaum, Jennifer Rexford, Scott Shenker: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final55.pdf">A New Approach to Interdomain Routing Based on Secure Multi-Party Computation</a>]</i></p>
<p>Katerina Argyraki spoke about how we can change the basic assumption of secure communication: creating a shared secret not based on computational difficulty, but on physical location. The idea is to exploit how wireless interference differs across locations. Security is more robust than you might think, in that you just need a lower bound on how much information Eve misses, rather than knowing which pieces of the message Eve missed. An implementation generated secret bits at 38 Kbps among 8 nodes. However, in a few corner cases Eve learned a substantial amount about the secret. There is some hope to improve this. <i>[Iris Safaka, Christina Fragouli, Katerina Argyraki, Suhas Diggavi: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final92.pdf">Creating Shared Secrets out of Thin Air</a>]</i></p>
<p>Saikat Guha linked the problem of data breaches to money and proposed data breach insurance ("Obamacare for data"). In a survey, 77% of users said they would pay, a median of $20. (Saikat thought this may be optimistic.) They're working to develop a browser-based app to monitor user behavior, offer individuals incentives to change to more secure behavior, and see if people actually change. <i>[Saikat Guha, Srikanth Kandula: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final119.pdf">Act for Affordable Data Care</a>.]</i></p>
<h4>Lunch!</h4>
<h4>Session 3: Software-Defined Networking</h4>
<p>Aaron Gember spoke about designing an architecture for software defined middleboxes, taking the idea of SDN to more complex processing. Distributed state management is one challenge. <i>[Aaron Gember, Prathmesh Prabhu, Zainab Ghadiyali, Aditya Akella: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final31.pdf">Toward Software-Defined Middlebox Networking</a>]</i></p>
<p>Monia Ghobadi has rethought end-to-end congestion control in software-defined networks. The work observes that TCP has numerous parameters that operators might want to tune &mdash; initial congestion window size, TCP variant, even AIMD parameters, and more &mdash; that can have a dramatic effect on performance. But the effects they have depend on current network conditions. The idea of the system they're building, OpenTCP, is to provide an automatic and dynamic network-wide tuning of these parameters to achieve performance goals of the network. This is done in an SDN framework with a central controller that gathers information about the network and makes an educated decision about how end-hosts should react. Experiments show some very nice improvements in flow completion time. Questions: Did you see cases when switching dynamically offered an improvement? And in general, how often do you need to switch to get near the best performance? Some of that remains to be characterized in experiments. <i>[Monia Ghobadi, Soheil Hassas Yeganeh, Yashar Ganjali: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final85.pdf">Rethinking End-to-End Congestion Control in Software-Defined Networks</a>]</i></p>
<p>Eric Keller, now at the University of Colorado, spoke about network migration: Moving your virtual enterprise network between cloud providers, or moving within a provider to be able to save power on underutilized servers, for example. Now, doing this while keeping the live network running reliably is not trivial. The solution here involves cloning the network and using tunnels from old to new, and then migrating VMs. But then, you need to update switch state in a consistent way to ensure reliable packet delivery. Some questions: How do you deal with SLAs, how do you deal with networks that span multiple controllers? <i>[Eric Keller, Soudeh Ghorbani, Matthew Caesar, Jennifer Rexford: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final133.pdf">Live Migration of an Entire Network (and its Hosts)</a>]</i></p>
<h4>Session 4: Performance</h4>
<p><a href="http://web.engr.illinois.edu/~vulimir1/">Ashish Vulimiri</a> presented our paper on making the Internet faster. The problem: Getting consistent low latency is extremely hard, because it requires eliminating all exceptional conditions. On the other hand, we know how to scale up throughput capacity. We can convert some extra capacity into a way to achieve consistent low latency: execute latency-sensitive operations twice, and use the first answer that finishes. The argument, through a cost-benefit analysis and several experiments, is that this redundancy technique should be used much more pervasively than it is today. For example, speeding up DNS queries by more than 2x is easy. <i>[Ashish Vulimiri, Oliver Michel, P. Brighten Godfrey, Scott Shenker: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final34.pdf">More is Less: Reducing Latency via Redundancy</a>]</i></p>
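<p>The core of the technique fits in a few lines. A minimal sketch (the replica names and latencies below are made up):</p>
<pre>
# Issue the same latency-sensitive operation twice and keep whichever
# answer arrives first. Replica names and delays are illustrative.
import asyncio

async def fetch(replica, delay):
    await asyncio.sleep(delay)  # stand-in for a real query with variable latency
    return f"answer from {replica}"

async def redundant_fetch():
    tasks = [asyncio.create_task(fetch("replica-a", 0.120)),
             asyncio.create_task(fetch("replica-b", 0.030))]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # abandon the slower copy; the extra load is the price paid
    return done.pop().result()

print(asyncio.run(redundant_fetch()))  # "answer from replica-b"
</pre>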
<p>The questions are getting interesting. Where is Martha Raddatz?</p>
<p>Udi Weinsberg went in the other direction: redundancy elimination. This is an interesting scenario where a kind of content-centric networking may be a big help: in a disaster which cuts off high-throughput communication, a DTN can provide a way for emergency response personnel to learn what response is most effective, through delivery of photos taken by people in the disaster area. But in this scenario, as they have verified using real-world data sets, people tend to take many redundant photos. Since the throughput of the network is limited, smart content-aware redundancy elimination can more quickly get the most informative photos into the hands of emergency personnel. <i>[Udi Weinsberg, <a href="http://web.engr.illinois.edu/~qli10/">Qingxi Li</a>, Nina Taft, Athula Balachandran, Gianluca Iannaccone, Vyas Sekar, Srinivasan Seshan: <a href="http://conferences.sigcomm.org/hotnets/2012/papers/hotnets12-final145.pdf">CARE: Content Aware Redundancy Elimination for Disaster Communications on Damaged Networks</a>]</i></p>
<p><a href="http://youinfinitesnake.blogspot.com/2012/10/live-blogging-hotnets-2012-day-two.html">Onward to Day Two...</a></p>
<h3>Notes on ACM, Open Access, and Copyright (May 14, 2012)</h3>
<p>My last post listed the <a href="http://youinfinitesnake.blogspot.com/2012/04/statements-of-acm-candidates-on-open.html">comments on open access and copyright</a> of the candidates in the <a href="http://www.acm.org/acmelections">2012 ACM Council Election</a>. Since I first posted, several more responses came in, so you might be interested to check it out. Vicki Hanson's note, in particular, provided a concise summary of the rationale for ACM's current policies.</p>
<p>So what did the candidates think? There are at least two important issues:</p>
<ol>
<li><b>Not preventing access to papers:</b> This is a question of the copyright or licensing policy. Does it inhibit researchers from distributing their own work?</li>
<li><b>Actively facilitating greater access to papers:</b> This implies that ACM itself would somehow openly distribute papers.</li>
</ol>
<h4>Not preventing access to papers</h4>
<p>The candidates' statements differed fairly significantly on this point &mdash; so you have a meaningful choice in your vote!</p>
<p>Many candidates noted that already the ACM allows authors many rights. However, it still prevents uses such as posting on <a href="http://arxiv.org/">arXiv</a> and commercial distribution.</p>
<p>The co-chairs of the ACM Publications Board <a href="http://cacm.acm.org/magazines/2011/10/131401-acms-copyright-policy/fulltext">explained ACM's copyright policy</a> in the October 2011 <i>CACM</i>. Regarding copyright transfer, they write:</p>
<blockquote>One might wonder, given the generous rights retained by authors, why ACM requires authors to transfer copyright to ACM at all. In fact, the transfer of copyright to ACM provides substantial benefit to the computing research community and to authors.
By owning exclusive publication rights to articles, ACM is able to develop salable publication products that sustain its top-quality publishing programs and services; ensure access to organized collections by current and future generations of readers; and invest continuously in new titles and in services like referrer-linking, profiling, and metrics, which serve the community. Furthermore, it allows ACM to efficiently clear rights for the creation, dissemination, and translation of collections of articles that benefit the computing community that would be impossible if individual authors or their heirs had to be contacted for permission. Ownership of copyright allows ACM to pursue cases of plagiarism. The number of these handled has been steadily growing; some 20 cases were handled by ACM in the last year. Having ACM investigate and take action removes this burden from our authors, and ACM is more likely to obtain a satisfactory outcome (for example, having the offending material removed from a repository) than an individual.
</blockquote>
<p>My summary of this is that ACM gets the following from holding the copyright:</p>
<ul>
<li>More revenue. <i>Question:</i> how much more?</li>
<li>Easier dissemination without contacting individuals. <i>Question:</i> wouldn't this be fixed with a non-exclusive perpetual license to distribute the work?</li>
<li>Ability to pursue plagiarism. <i>Point of comparison:</i> 20 papers represents a fraction 0.000065 of the <a href="http://www.acm.org/about/annual-reports-current-fy/ACMARFY11.pdf">307,000</a> articles in the Digital Library, i.e., one in every 15,350.</li>
</ul>
<h4>Actively facilitating greater access to papers</h4>
<p>Exactly <i>zero</i> of the candidates fully endorsed open access in the sense of ACM providing all publications freely online, though Radia Perlman came closest.</p>
<p>Open access does not necessarily mean that all the Digital Library's services would be free &mdash; only that papers would be distributed freely somehow (for example, many ACM conferences already distribute their proceedings freely online). Still, full open access certainly could impact revenue, perhaps significantly. Here are some <a href="http://www.acm.org/about/annual-reports-current-fy/ACMARFY11.pdf">interesting numbers</a>. In 2011, the ACM DL grew by over 31,000 full-text articles, or 11%, to a total of 307,000 (up from <a href="http://www.acm.org/about/annual-reports-archive/fy-10-annual-reports/ACMARFY10.pdf">21,000</a> new articles in 2010). In 2011, from publications, ACM earned $18,275,000 in revenue (28% of its total) and incurred $11,750,000 in expenses. Thus, <i>for each new publication last year</i>, ACM took in $590 and spent $379, leaving about $211 to support numerous other activities beneficial to the community.</p>
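<p>(Those per-publication figures follow directly from the totals above:)</p>
<pre>
# Per-new-publication revenue and cost from the 2011 totals above.
new_articles = 31_000
print(18_275_000 / new_articles)  # ~$590 revenue per new article
print(11_750_000 / new_articles)  # ~$379 expense, leaving ~$211 per article
</pre>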
<p>I assume those numbers include not only digital but also print distribution of some papers and articles. It would be interesting to have ACM's digital-only costs as a comparison to the arXiv. In 2010 <a href="http://arxiv.org/help/support/whitepaper">arXiv wrote</a>:</p>
<blockquote>The annual budget for arXiv is $400,000. With over 60,000 new submissions per year one may think of this as an effective cost of &lt;$7 per submission. Alternatively, with over 30,000,000 full-text downloads per year this is an effective cost of &lt;1.4 cents per download.</blockquote>
<p>The one-time cost of $7 per submission is as much as three orders of magnitude lower than some other estimates of the cost of providing open access per paper. In 2009 <a href="http://cacm.acm.org/magazines/2010/2/69353-open-access-to-scientific-publications/fulltext">Michel Beaudouin-Lafon wrote</a> in <i>CACM</i>:</p>
<blockquote>But how much are authors ready to pay to publish an article? A few hundred dollars? The most prominent Open Access publisher, the Public Library of Science (PLOS), is a nonprofit organization that has received several million dollars in donations. Yet <a href="http://www.plos.org/publish/pricing-policy/publication-fees/">it charges between $1,350 and $2,900 per paper</a>, depending on the journal. In fact, many in the profession estimate that to be sustainable, the author-pay model will need to charge up to $5,000–$8,000 per publication.</blockquote>
<p>Some of these numbers might include additional services such as editing, but the arXiv numbers and <a href="http://blogs.law.harvard.edu/pamphlet/2012/03/06/an-efficient-journal/">similar numbers from JMLR</a> imply that the cost of archiving and distribution is far lower than the thousand-dollar estimates. Indeed, <a href="http://scholarlykitchen.sspnet.org/2012/05/08/the-open-access-price-wars-have-begun/">PLOS ONE publisher Peter Binfield left</a> to found <a href="http://peerj.com/">PeerJ</a>, which will apparently charge authors a $99 lifetime membership fee to publish open-access papers starting in fall 2012.</p>
<p>Remember the ACM election runs just a few more days, till May 22.</p>
<h3>Statements of ACM candidates on open access and copyright policy (April 24, 2012)</h3>
<style type="text/css">
div.fullresponsebtn {color:#aa5500; border:3px solid #888; float:left; cursor:pointer; margin-left: 20px; padding-left:5px; padding-right:5px;}
div.fullresponse {display:none; border-left:3px solid #888; margin-left: 20px; padding:10px;}
</style>
<script type="text/javascript">
// Toggle a candidate's full response between hidden and shown
// (used by the "Show/hide" buttons below).
function toggle_visibility(id) {
  var o = document.getElementById(id);
  o.style.display = (o.style.display == 'block') ? 'none' : 'block';
}
</script>
<p>In the May issue of CACM, <a href="http://dl.acm.org/citation.cfm?id=2160718.2160719">Moshe Vardi</a> argues that the interests of authors and commercial publishers have irreconcilably diverged. But "in the case of publishing by a professional association, such as ACM, the authors, as ACM members, are essentially also the publishers", so when choosing a publishing model, "the decision is up to <i>us</i>: ACM members, authors, publishers."</p>
<p>Good point! With <a href="http://www.acm.org/acmelections">the 2012 ACM Council Election</a> happening now through May 22, what are the candidates' positions on <a href="https://freedom-to-tinker.com/blog/dwallach/tinkering-ieee-and-acm-copyright-policies/">progressive copyright policies</a>? I asked the candidates the following:</p>
<p><i>Do you have a position on the appropriate <a href="http://www.acm.org/publications/policies/copyright_policy/copyright-policy-version-6">copyright policy for ACM's publications</a>? Specifically, should the copyright on published research papers be assigned to ACM, or should the authors retain the copyright with ACM holding a non-exclusive license to distribute the work, similar to <a href="http://static.usenix.org/event/consent_sample.html">USENIX's policy</a>? What is your position on moving ACM's publications to an open-access model?</i></p>
<p>Here are the candidates and their responses, filled in as they come.</p>
<p><b>Update May 14 2012:</b> <a href="http://youinfinitesnake.blogspot.com/2012/05/notes-on-acm-open-access-and-copyright.html">Additional notes and thoughts over here</a>.</p>
<h4>President</h4>
<p><i>Barbara G. Ryder</i>, Virginia Tech: "The ACM Digital Library (DL) has been designed and constructed by ACM, led by the vision of computing researchers in the SIGs. It now has become THE repository to go to for computing publications, having listings for many more than only ACM publications. This effort was undertaken for and supported by the computing research community; more recently, ACM has enhanced the DL with author metrics, additional search capabilities, the Authorizer tool, etc -- all in support of the research community. So the ACM DL is an important resource for computing. But ACM is a membership organization, not a for-profit company which can choose to invest in services for the community, funded by other revenue streams. At this time, the ACM DL generates a significant income stream for ACM and its SIGs, which, in part, supports further DL development as well as other activities. Any discussion of Open Access publication and ACM has to consider the financial consequences of the choices to be made. It is not just a philosophical discussion. ps These comments have already been posted on the Web, after answering similar questions from Matt Welsh:
<a href="http://cacm.acm.org/magazines/2011/12/142550-nominees-for-elections-and-report-of-the-acm-nominating-committee/fulltext">link</a> <i>[<b>Updated:</b> Regarding question about copyright policy:]</i> please look at <i>[<a href="http://www.acm.org/publications/policies/copyright_policy">ACM copyright policy</a>]</i> ... This allows non-commercial personal use by an author of her/his paper after the copyright has been signed away to ACM <i>[and]</i> the right to post a unique link using the Author-Izer ACM Linking Service on either the author’s homepage or Institutional Repository (wherever the author’s bibliography is maintained) which enables free access from that location to the definitive version of the work permanently maintained in the ACM Digital Library."</p>
<p><i>Vinton G. Cerf</i>, Google: "I much prefer a kind of creative commons method or licensing method that leaves the authors with copyright and ACM with sufficient privilege to carry out its work."</p>
<h4>Vice President</h4>
<p><i>Mathai Joseph</i>, Tata Consultancy Services (excerpt of longer response): "... I am quite happy with the ACM copyright policy because it represents a sensible balance between the rights of the author and the rights of a publisher who has invested time and effort in making the publication available to the community. ACM is competing with commercial publishers with far more restrictive policies and has to protect its rights in a fairly predatory market. ... Thank you for raising this important question. Some time back I talked to people in ACM HQ about it, thought of alternatives and then decided that the ACM policy is actually fairly sane."
<br><div class='fullresponsebtn' onClick="toggle_visibility('MathaiJoseph');">Show/hide Mathai's full response</div><br>
<div id="MathaiJoseph" class="fullresponse">
<p>Thank you for your message and question about my stand on a copyright policy for ACM.</p>
<p>First, you refer to the USENIX policy as having 'a non-exclusive license'; in fact USENIX asks for exclusive rights for a specified period (12 months) and rights to continue to maintain its copies with public access after that period.</p>
<p>More broadly, I think the important question is the expected period of interest in a publication. I may be wrong, but I would guess that material published by USENIX has immediate interest for a specific community that diminishes over time as the important ideas of the content become part of a more permanent repository for long-term reference. In that context, 12 months is probably the period when there is most interest and it is covered by exclusive rights.</p>
<p>In contrast, journals provide a long term repository for material that has been carefully selected, refereed by the community and published as part of the accepted knowledge of a field (accepting of course that errors may be found at a later time). A paper like the one by Fischer, Lynch and Paterson on 'Impossibility of Distributed Consensus with One Faulty Process' which appeared in J. ACM in April 1985, has now had over 3000 citations, many of which have appeared in the last decade, or over 15 years after original publication. So rights have to be preserved over a very long time.</p>
<p>I am quite happy with the ACM copyright policy because it represents a sensible balance between the rights of the author and the rights of a publisher who has invested time and effort in making the publication available to the community. ACM is competing with commercial publishers with far more restrictive policies and has to protect its rights in a fairly predatory market.</p>
<p>The ACM Digital Library took a very large investment from ACM members to create. It not only holds the final version of a publication, it allows it to be seen along with other similar or related publications by the same author, or on similar topics. So the value of the DL should not be seen in the context of a single publication but over a range of publications that may be of interest at the same time. If the DL did not exist, it would have to be created in order to give us all the facilities that are needed for research. ACM does have consortium agreements for access to the DL and this brings down the cost for access (to zero, in most cases, since it is the institution and not the individual who has to pay the consortium charges).</p>
<p>I would like to turn the question around and ask you what the ACM policy prevents an author from doing: in what important way is the author unable to make use of his or her publication because of ACM's policy?</p>
<p>Thank you for raising this important question. Some time back I talked to people in ACM HQ about it, thought of alternatives and then decided that the ACM policy is actually fairly sane.</p>
</div></p>
<p><i>Alexander L. Wolf</i>, Imperial College London: "Obviously, this is a very important and timely issue. ... I can tell you that the USENIX licensing model, the IEEE Security and Privacy licensing experiment, and related ideas are all under active study by various ACM volunteer groups. One thing I've learned from these discussions is that open access is a deceptively and desperately complex issue ... Personally, I subscribe to the general principle that outcomes of activities supported through public funds ... should be available for use by all citizens. ... ACM provides a staggeringly rich set of services (not just the management of professional conferences within a restricted intellectual domain, which is the predominant role of USENIX) to its members and to the larger (non-member) community. Those services cost money. ... How do we compensate for the loss of <a href="http://dl.acm.org/">DL</a> revenue, the funds that effectively subsidize many of ACM's other activities? Should we raise member dues? Should we raise conference fees? (BTW: dues and fees would have to be raised substantially, to the point that we would seriously risk the viability of both our organization and our conferences. Have you looked at <a href="https://www.usenix.org/conference/nsdi12/registration-information">the fees being charged for NSDI 2012</a> this week? And that's just to cover the conference costs, a bit of USENIX staff time, and a small share of maintenance cost for the USENIX content servers.) ... For instance, ACM is able to provide substantial financial and organizational aid to CSTA <i>[... which supports CS K-12 school teachers. Alex also mentioned ACM's role in policy, developing nations, inclusion of women, curricula guidance, the 35 ACM SIGs, and student participation.]</i> The overall point here is that we face a difficult trade off. ... The aim is to find a balance between the potentially conflicting goals of giving individuals easy access to the information generated by the community at the same time as helping guarantee a revenue stream for an organization that, frankly, plays a key role in sustaining the community. ... I urge you to take a look at several articles that have appeared in CACM related to the open access issue if you haven't already done so: [<a href="http://cacm.acm.org/magazines/2009/7/32075-open-closed-or-clopen-access/fulltext">1</a>, <a href="http://cacm.acm.org/magazines/2010/2/69353-open-access-to-scientific-publications/fulltext">2</a>, <a href="http://cacm.acm.org/magazines/2012/5/148564-fair-access/fulltext">3</a>]. I largely agree with them, and as such they also represent my position on the topic. Of course, the environment is dynamic, and new ideas are likely to emerge. I think the important thing for an officer of our association to do is maintain an understanding of and appreciation for the full context of the situation."
<br><div class='fullresponsebtn' onClick="toggle_visibility('AlexanderWolf');">Show/hide Alex's full response</div><br>
<div id="AlexanderWolf" class="fullresponse"><p>Thanks for getting in touch. Obviously, this is a very important and timely issue. It is one that is discussed and debated regularly by the ACM Council and ACM Publications Board. I take that as a healthy sign: the serious thought and effort that ACM volunteers are putting into consideration of the issue. I can tell you that the USENIX licensing model, the IEEE Security and Privacy licensing experiment, and related ideas are all under active study by various ACM volunteer groups.</p>
<p>One thing I've learned from these discussions is that open access is a deceptively and desperately complex issue, and one for which there is a lot of mis-information floating about. For example, the notion that one needs to "mov[e] ACM's publications to an open-access model". We should begin with the question: by what definition of "open access"? ACM publications are already considered "Green Open Access" as defined by various leading advocates of open access. So, we need to understand in what way GOA might not be sufficient or appropriate for ACM publications. Consider, too, ACM's new Author-izer service, which gives authors a mechanism for granting non-DL subscribers cost-free access to their publications. Access can be granted from a personal web page or from an organizational corpus (e.g., a university's publication repository). And, of course, the standard ACM copyright agreement already permits various forms of free dissemination.</p>
<p>Personally, I subscribe to the general principle that outcomes of activities supported through public funds (whether directly through government research grants, or indirectly through the education, training, and employment of people who carry out research at public institutions no matter the sponsor of that research) should be available for use by all citizens. (As a general principle it leaves aside many thorny issues, of course, such as what about partial support, what about certain specific and potentially harmful dual-use outcomes, how do we best promote industrial innovation, are not-for-profit organizations such as MIT and ACM "public" institutions, etc. Let's accept that we don't have answers to those questions for the moment.)</p>
<p>Now, how does that principle relate to your questions? It could be that this principle is exactly what you had in mind. Or it could be that you believe authors should have exclusive rights to what they produce, which could very well be in conflict with the principle outlined above. (Consider, for example, that if one follows the principle above, then by accepting public funds one has already given up certain rights.) And, then, which perspective is supported by the notion of licensing to which you alluded? I would suggest the latter (exclusive author rights), in which case we may well disagree. You see, some people may think that licensing, as opposed to copyright transfer, better supports public access, when in fact it may instead simply support exclusive author rights, at which point we must then trust each individual author (or the organization that employs the author) to make the works publicly available, and on a continuing basis. So perhaps it is actually the detail of the agreement that is put in place that is important, not so much the vehicle (license or copyright transfer) that is used to carry it. See, for example, this commentary on the IEEE Security and Privacy license experiment:</p>
<a href="https://freedom-to-tinker.com/blog/dwallach/ieee-blows-it-security-privacy-copyright-agreement/">https://freedom-to-tinker.com/blog/dwallach/ieee-blows-it-security-privacy-copyright-agreement/</a>
<p>There are many, many other issues to consider. Here is a sampling:</p>
<ul>
<li>ACM is a not-for-profit, volunteer, member organization. The decisions that ACM takes are decisions made by you and me, the members of the organization, not the headquarters staff.</li>
<li>Why is it that libraries and library consortia are willing to pay ACM for DL access? Two simple answers: (1) because ACM content is not only of the highest quality, it is far, far less expensive than the fees charged by commercial publishers -- value for money in an extremely tight economy; and (2) because it is a managed-access corpus supported by a professional organization. We must be very careful to consider this value model.</li>
<li>ACM provides a staggeringly rich set of services (not just the management of professional conferences within a restricted intellectual domain, which is the predominant role of USENIX) to its members and to the larger (non-member) community. Those services cost money. Do we believe that these services are valuable? Then we must find ways to generate the money to fund them. Should we shut down the ACM DL and let authors take full responsibility for making their papers publicly accessible? Should authors be charged a fee for ACM to provide the DL service? How do we compensate for the loss of DL revenue, the funds that effectively subsidize many of ACM's other activities? Should we raise member dues? Should we raise conference fees? (BTW: dues and fees would have to be raised substantially, to the point that we would seriously risk the viability of both our organization and our conferences. Have you looked at the fees being charged for NSDI 2012 this week? And that's just to cover the conference costs, a bit of USENIX staff time, and a small share of maintenance cost for the USENIX content servers.)</li>
<li>We need to consider that there are multiple constituencies involved in this issue. Authors, yes, but also readers, other ACM members, research sponsors, practitioners, governments, companies, teachers, students, the public at large, and libraries and library consortia. Of particular concern to me, I must admit, are those benefiting from the other services made possible in part by the revenue generated by the ACM DL. For instance, ACM is able to provide substantial financial and organizational aid to CSTA, the Computer Science Teachers Association, which is an activity (started by the ACM) to support K-12 teachers ("school teachers" in the UK) around the world. ACM operates USACM, which provides informed technical opinions to US policy agencies and law makers, whose decisions, like it or not, have huge impact around the world. ACM is helping developing nations, such as India, organize their computer science education and research communities. ACM is promoting the inclusion of women in the profession through ACM-W and related activities. ACM provides curricula guidance used in establishing educational programs and accreditation criteria. The 35 ACM SIGs and their members receive substantial support from the ACM DL revenue, again effectively subsidizing their operations, such as to promote student conference attendance. There are many other examples.</li>
<li>Should we allow this issue to be resolved on a case-by-case basis by individual authors? By that I mean, should authors decide for themselves what rights to assign or not? My feeling is that such an approach is not viable, much in the same way that (health or car) insurance as a concept only works if the society as a whole is compelled to participate. We are in a society of sorts, a computer-professionals society, and as such we must also consider what is required of the individual to maintain the viability of the society. Of course, this is the essence of the debate, and we must resolve opposing viewpoints on that question.</li>
</ul>
<p>The overall point here is that we face a difficult trade off. Any action we take in one direction with respect to this issue must certainly be taken in consideration of its impact on the others. Facile solutions and proposals must be considered suspect.</p>
<p>The trade off, and the ACM response to it, are well represented by the emerging notion of "fair access", which is obviously an allusion to the related DRM notion of "fair use". The aim is to find a balance between the potentially conflicting goals of giving individuals easy access to the information generated by the community at the same time as helping guarantee a revenue stream for an organization that, frankly, plays a key role in sustaining the community. As ACM volunteers, let's be careful not to let our not-for-profit, professional association get caught up in the swirl surrounding the for-profit, commercial publication companies, such as Elsevier. Yes, the ACM volunteers want to maintain a revenue stream, but to support and sustain good works for the community, not to generate a "profit".</p>
<p>I hope I've answered your questions. I urge you to take a look at several articles that have appeared in CACM related to the open access issue if you haven't already done so:</p>
<p><a href="http://cacm.acm.org/magazines/2009/7/32075-open-closed-or-clopen-access/fulltext">http://cacm.acm.org/magazines/2009/7/32075-open-closed-or-clopen-access/fulltext</a></p>
<p><a href="http://cacm.acm.org/magazines/2010/2/69353-open-access-to-scientific-publications/fulltext">http://cacm.acm.org/magazines/2010/2/69353-open-access-to-scientific-publications/fulltext</a></p>
<p><a href="http://cacm.acm.org/magazines/2012/5/148564-fair-access/fulltext">http://cacm.acm.org/magazines/2012/5/148564-fair-access/fulltext</a></p>
<p>I largely agree with them, and as such they also represent my position on the topic. Of course, the environment is dynamic, and new ideas are likely to emerge. I think the important thing for an officer of our association to do is maintain an understanding of and appreciation for the full context of the situation.</p>
</div>
<h4>Secretary/Treasurer</h4>
<p><i>George V. Neville-Neil</i>, Neville-Neil Consulting: "At the moment this entire question is being gone over by the Publications Board of ACM. They are meeting this June to talk about this issue as well as others. This has not been an area of ACM policy that I have been involved with in the past, but I agree that it's extremely important, not only to authors, but to the organization as a whole. I remain open minded about what the policy ought to be in the future, and am interested in seeing what the publications board comes up with as a recommendation. Having published several articles, and a monthly column, with ACM I have to say that I do not find the current system to impose unnecessary strictures on my ability to share my work or for others to gain access to it. <i>[After a short exchange concerning arguments for open access:]</i> Thanks for the pointers, I've looked them over and they're certainly food for thought. I'll keep these in mind as and when I get to see what the publications committee comes up with. I suspect that if ACM does move to a similar model to USENIX that this will take time as there are actual financial questions to deal with in this area. While the cost of publishing has diminished, there remain costs other than printing and shipping paper that ACM has to deal with. Figuring out a path from the current model to a more open one is certainly something I'd be involved with if I were elected as Secretary/Treasurer."</p>
<p><i>Vicki L. Hanson</i>, University of Dundee: "I appreciate your thoughtful questions put to ACM candidates for election. The issues you raise have been, and continue to be, extensively discussed within ACM. ACM’s Publications Board regularly considers questions of licensing and open access and strives to continue with its high quality service while providing authors rights to their published work.</p>
<p>As you are likely aware, the <a href="http://cacm.acm.org/magazines/2011/10/131401-acms-copyright-policy/fulltext">Pubs Board Chairs published an editorial</a> in the October 2011 issue of CACM about ACM’s copyright policy. Since that editorial, ACM has made available the <a href="http://www.acm.org/publications/acm-author-izer-service">Author-Izer service</a> that allows authors to put a link on their personal or institutional web page that will enable anyone to download the definitive version of published papers from ACM’s Digital Library (DL) at no charge. This service also makes available the display on these personal and institutional pages of ACM's up-to-date download and citation statistics for the publications.</p>
<p>ACM is exploring the implications of allowing authors to retain copyright, transferring a license to ACM for archiving, indexing, and electronic distribution. It is worth noting that such a change, according to my understanding, would make it somewhat easier for authors to distribute their work but would preclude ACM from protecting those works from plagiarism and unauthorized distribution by other entities, including for-profit ones. The current policy must be reviewed, weighing the importance of such protections and other author needs.</p>
<p>The fully open access issue is more difficult still and requires a careful consideration of business practices and organizational sustainability. There are substantial costs involved in publishing and maintaining the high quality archival collection of materials provided by ACM’s DL. I agree with the Pubs Board’s resistance to the author-pays model of open access in that this does not allow poorly-funded authors to have the same access to publishing as well-funded ones. An economic model that places the financial burden on conferences for proceedings publications similarly tends to place financial roadblocks to publication for those less able to pay. This latter model also does not address larger questions of how the DL would be funded to support journals, educational materials, and other non-conference content. The current ACM business model attempts to give authors flexibility and rights to make their work available to the community while, at the same time, being able to provide the DL service for aggregating articles, collecting bibliometrics, and investing in further development of the DL as a resource for the computing community.</p>
<p>I realize that the above answers are not the definitive answers you might have sought in your questions to me. At this point in time, the issues you raise are critical ones for the future of ACM and continuing dialog is needed to consider the best way forward in terms of meeting the needs of authors and readers of DL materials as well as determining a sustainable business model that will allow authors and readers continued access to the DL, an important resource for ACM’s community of researchers and practitioners."</p>
<h4>Members at Large</h4>
<p><i>Radia Perlman</i>, Intel: "I'd like to hear arguments on all sides before having a cast-in-stone position. Some companies have worked out an agreement with IEEE and ACM for something like what you said...that ACM has non-exclusive right, but the authors also get to post and distribute. So that implies, I think, that it wouldn't be totally detrimental to ACM to do
that for everyone. Some conferences post the papers online, freely accessible. That seems like the right thing to do. Going beyond the rights of authors (and/or the company they worked for at the time they wrote the paper) having the right to post and distribute, I think the model of only letting people see the title and abstract of papers, and then having to pay to download the article, is really bad for facilitating research. When one is doing research, and browses on the web, and finds a 15 year old paper that looks like it might be relevant, but you have to pay $25 to download the paper, only to find it really is not relevant... A lot of companies and most universities have a blanket access to ACM and IEEE publications, so people at those companies probably don't notice the issue. I wonder how much ACM depends on revenue from people downloading papers. Especially really old ones. Perhaps a compromise might be to say that after, say, 3 years, the articles should be free. Anyway, my heart is in having everything easily accessible on the web, for free. I wouldn't care, as an author, whether I could distribute the paper or just a link to the paper, as long as the link allows the person to see the whole paper. For facilitating research, my inclination would also be for anyone to access all the published papers, without having to get a link from the author. [...] But as I said, I'd like to hear other points of view and legal/economic issues that I may not fully appreciate, before getting too entrenched in a position."</p>
<p><i>Ricardo Baeza-Yates</i>, Yahoo! Research, Barcelona/Santiago: "In general I am in favor of open access models and giving the author more control of their copyrights. On the other hand we need to do this without jeopardizing the financial stability of ACM."</p>
<p><i>Feng Zhao</i>, Microsoft Research Asia, Beijing: "My platform is primarily around building a sustained and quality engagement between ACM and the regional computing community in China and the rest of Asia, building on the tremendous momentum of the Council's China and India initiatives. As part of that, I felt it was important to lower the cost of access for people from the developing regions.</p>
<p>I have not really thought through the copyright issue at any depth. But one thing is clear: the old model of publication, dissemination, and monetization is broken in the online world today. If elected, I will work with the Council to study and innovate on ways that can expand the ACM reach and at the same time ensure the financial sustainability of the society."</p>
<p><i>Eric Allman</i>, Sendmail: "This is neither my area of expertise nor do I have all the information (particularly about finances), so I do not (as yet) have a strongly held position on this. However, I don't understand why it is necessary for ACM to actually hold copyright as long as it retains the rights to use the materials in the ways that it already does. In particular, as I read the copyright policy, the authors retain the right to privately publish the materials on non-ACM web sites, so the usual financial argument about the Digital Library doesn't seem to fit here.</p>
<p>It also seems clear to me that research that was funded with public money should be available to that public with no more than a cost recovery fee. Obviously not all authors are funded by government grants, and the ACM audience transcends any particular government, but trying to sort articles on this basis seems excessively complex. I'm also a supporter of the concept of replication to maintain long-term integrity and retention of archival material, which is antithetical to centralized administration.</p>
<p>Note that I'm not saying that the DL is superfluous or needs to be free. The DL provides value through indexing, providing a stable reference copy (URLs are notoriously unstable), and assisting ease of access. Maintaining the DL is not without costs which need to be recovered, and any reductions in revenue resulting from changing the copyright policy must be balanced in some way. Fiscal responsibility is important."</p>
<p><i>Mary Lou Soffa</i>, University of Virginia: [not yet responded]</p>
<p><i>PJ Narayanan</i>, IIIT-Hyderabad: "I personally believe the authors should have all rights to distribute their work and hence should hold the copyright. ACM as the publisher and maintainer of the electronics library should have non-exclusive rights to distribute the content."</p>
<p><i>Eugene H. Spafford</i>, Purdue University: "Well, I'm not expert in ACM's policies, so I am not sure I am the best person to ask right now. However, I'll try to answer. My understanding is that there is a publications board that considers ACM policy for copyright. It is regularly reviewed. I know there have been many changes during the time I've been a member, in response to changing times, needs, and user requests. I haven't heard of any problems with the current policy, and things seem to be working okay. So, I'll assume that the current policy is appropriate until presented with evidence indicating otherwise. <i>[In response to whether authors should retain the copyright:]</i> ACM is not like USENIX -- I know, as I was a member of Usenix for 25 years. ACM publishes journals and maintains a curated digital library that must be supported over a long time to be of real value. The Usenix model is okay for some conferences, and for authors to maintain for a limited period of time, but that is not the same as immutable copies maintained in a curated collection, indefinitely. The current model seems to work fine, so, that gives a proof by example. <i>[In response to whether publications should be open access:]</i> Please define 'open access' and what it provides that the current model does not. Does it provide the necessary support and resources to maintain and enhance the ACM digital library in a global environment for an indefinite time? I'd then want to see a response from someone in ACM about the current model. I'm open to considering changes, but I need complete information to understand the issues and potential effects."</p>
<br><br><p><b>Update:</b> <a href="http://realtimerendering.com/open_access.html">Candidates' positions on open access from two years ago</a>. A couple candidates are running this year as well.</p>
<br><br><p>Notes: You can read more recent discussion of open access <a href="http://thecostofknowledge.com/">here</a> and <a href="https://freedom-to-tinker.com/blog/appel/copyright-in-scholarly-publishing-2012-edition/">here</a>. Thanks to all the candidates for taking the time to reply. Thanks to George Porter for suggesting this.</p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com0tag:blogger.com,1999:blog-8216930037889709643.post-66222666810358278492012-04-24T11:04:00.000-07:002012-06-19T22:39:12.472-07:00Jellyfish: Networking Data Centers Randomly<p>People have been designing communication network topologies for <a href="http://youinfinitesnake.blogspot.com/2011/12/like-spiders-web.html">more than 150 years</a>. But usually their structure is quite constrained. Building a wide area network, one has to follow the locations of cities or railroads. Building a supercomputer, a regular structure enables simple, deadlock-free routing. Building a traditional data center network, one might use a tree-like design amenable to the Spanning Tree routing protocol. Even state-of-the-art high-capacity data center networks might use a multi-rooted tree like a fat-tree.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-8uKSOcP-FDQ/T5ZhZpef-wI/AAAAAAAAAOE/lw-Pw0aMed4/s1600/FatTree.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="400" width="400" src="http://2.bp.blogspot.com/-8uKSOcP-FDQ/T5ZhZpef-wI/AAAAAAAAAOE/lw-Pw0aMed4/s400/FatTree.png" /></a></div>
<p style="text-align: center; font-size:80%; color:#808080;"><a href="http://ccr.sigcomm.org/online/?q=node/378" style="color:#808080;">3-level fat-tree</a> &middot; 432 servers, 180 switches, degree 12</p>
<p>In our <a href="http://www.cs.illinois.edu/~pbg/papers/jellyfish-nsdi12.pdf">Jellyfish</a> paper appearing this week in <a href="https://www.usenix.org/conference/nsdi12">NSDI 2012</a>, we're proposing a slightly radical alternative: a completely random network.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-fES5sXO_rF4/T5Zhy7AeinI/AAAAAAAAAOQ/KMZ0g6eMv8s/s1600/Jellyfish.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="400" width="400" src="http://2.bp.blogspot.com/-fES5sXO_rF4/T5Zhy7AeinI/AAAAAAAAAOQ/KMZ0g6eMv8s/s400/Jellyfish.png" /></a></div>
<p style="text-align: center; font-size:80%; color:#808080;">Jellyfish random graph &middot; 432 servers, 180 switches, degree 12</p>
<p>This project, the work of Ph.D. students <a href="http://www.cs.illinois.edu/~singla2/">Ankit Singla</a> and <a href="http://cyhong.projects.cs.illinois.edu/doku.php?id=homepage">Chi-Yao Hong</a>, along with <a href="http://www.hpl.hp.com/people/lucian_popa/">Lucian Popa</a> of HP Labs and myself, has two goals. First, <b>high bandwidth</b> helps servers avoid bottlenecks while streaming big data across the network, and gives cloud operators the <a href="http://perspectives.mvdirona.com/2010/10/31/DatacenterNetworksAreInMyWay.aspx">agility to place virtual machines</a> on any physical host without worrying about bandwidth constraints between hosts.</p>
<p>Second, we want a network that is <b>incrementally expandable</b>. Cloud service providers <a href="http://www.datacenterknowledge.com/archives/2009/10/13/facebook-now-has-30000-servers/">continually expand</a> and modify their networks to meet increasing demand and to reduce up-front capital expenditure. However, existing high-capacity network interconnects have relatively rigid structure that interferes with incremental modification. The fat-tree, hypercube, butterfly, and other proposed networks can only be built from a limited menu of fixed sizes and are difficult to expand by, for example, adding a rack of servers at a time. Of course, there are some workarounds: one can replace some switches with ones of larger port-count or oversubscribe them, but this can make network bandwidth constrained or uneven across the servers. One could leave ports free for future network connections but this wastes investment while they sit idle.</p>
<p>Our solution is to simply give up on structure, and build a random network among the network routers. This sloppiness yields significantly more flexibility than past designs: a few random link swaps are all it takes to incorporate additional components, yielding a new random network. It can naturally support varying port-counts, and scales to arbitrary sizes rather than limiting the network to coarse design points. In fact, we show in the paper that Jellyfish reduces the cost of incremental expansion quite substantially over a past expansion heuristic for fat-tree-like (Clos) networks. Intuitively, Jellyfish makes network capacity less like a structured solid and more like a fluid. Coincidentally, it also looks like a jellyfish.</p>
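<p>To make the construction concrete, here's a minimal sketch in Python &mdash; my own toy code using the networkx library, not the implementation from the paper, and with made-up sizes. Each switch dedicates some ports to servers and wires the rest into a random regular graph; expansion removes random existing links and reconnects their endpoints through the newcomer:</p>
<pre><code>import random
import networkx as nx

def jellyfish(n_switches, ports, servers_per_switch):
    # Ports not used for servers are wired into a random
    # regular graph among the switches.
    net_degree = ports - servers_per_switch
    return nx.random_regular_graph(net_degree, n_switches)

def expand(g, net_degree):
    # Incorporate one new switch: repeatedly pick a random
    # existing link (u, v), remove it, and rewire both of its
    # endpoints to the newcomer. Assumes net_degree is even.
    new = max(g.nodes()) + 1
    g.add_node(new)
    while g.degree(new) != net_degree:
        u, v = random.choice(list(g.edges()))
        if new in (u, v) or g.has_edge(new, u) or g.has_edge(new, v):
            continue
        g.remove_edge(u, v)
        g.add_edge(new, u)
        g.add_edge(new, v)

g = jellyfish(n_switches=180, ports=12, servers_per_switch=4)
expand(g, net_degree=8)  # add a switch without rebuilding anything</code></pre>
<p>Each swap consumes one existing link and two free ports on the new switch, which is exactly why expansion is cheap: nothing else in the network has to move.</p>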
<div class="separator" style="clear: both; text-align: center;">
<a href="http://photography.nationalgeographic.com/wallpaper/photography/photos/underwater-oddities/transparent-jellyfish-curtsinger" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="320" width="400" src="http://3.bp.blogspot.com/-Zw1PRq7lgV0/T5ZchvyMEAI/AAAAAAAAAN4/AWPsUV1kpEw/s400/Arctapodema-Curtsinger-National_Geographic.jpg" /></a></div>
<p style="text-align: center; font-size:80%;"><a style="color:#808080;" href="http://photography.nationalgeographic.com/wallpaper/photography/photos/underwater-oddities/transparent-jellyfish-curtsinger">Arctapodema jellyfish &middot; Bill Curtsinger, National Geographic</a></p>
<p>At this point, one natural reaction is that a completely random network must be the product of a half-deranged intellect, somewhere between 'perpetual motion machine' and 'deep-fried butter on a stick'. Won't network capacity decrease, due to the sloppy interconnections? How does one route packets through a completely unstructured network? Isn't it possible to randomly pick a bad network, or for failures to cause problems? How do you physically cable up a network that bears more than a passing resemblance to a bowl of spaghetti? <i>How could it possibly work?</i></p>
<p>The first surprise is that rather than sacrificing bandwidth, Jellyfish supports roughly 25% <i>higher</i> capacity than a fat-tree of equal cost. That is, a completely random network makes more efficient use of resources than a carefully-structured one. The intuition is that Jellyfish's diverse connections &mdash; in theoretical terms, it is a good <a href="http://en.wikipedia.org/wiki/Expander_graph">expander graph</a> &mdash; give it low average path length, which in turn means that sending each packet across the network takes less work. In fact, there is reason to believe that the random graph is pretty close to the best possible network for maximizing throughput, perhaps within 10%.</p>
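<p>You can see the short-paths effect in a couple of lines. Here's a quick experiment of mine (not from the paper; I'm using a hypercube rather than a fat-tree because it makes matching size and degree trivial):</p>
<pre><code>import networkx as nx

g_cube = nx.hypercube_graph(7)            # structured: 128 nodes, degree 7
g_rand = nx.random_regular_graph(7, 128)  # random: same size, same degree

print(nx.average_shortest_path_length(g_cube))  # about 3.53 for the hypercube
print(nx.average_shortest_path_length(g_rand))  # around 2.6 in typical runs</code></pre>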
<p>Routing is another interesting question. Chi-Yao and Ankit found that load-balanced routing works well as long as the routing protocol provides sufficiently high path diversity, as can be obtained with OpenFlow, MPLS, or other recent proposals that go beyond <a href="http://en.wikipedia.org/wiki/Spanning_Tree_Protocol">STP</a> or simple shortest path routing. In addition, it turns out that the questions of consistent performance, resilience, and cabling have favorable, and we believe reasonably practical, answers. There are some very interesting theoretical questions that come up as well, which we're now looking into.</p>
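<p>For a flavor of what "sufficiently high path diversity" can mean in code, here's one hypothetical way to get it with networkx &mdash; spread each flow over the k shortest paths rather than pinning it to a single shortest one:</p>
<pre><code>import itertools
import networkx as nx

def k_shortest_paths(g, src, dst, k=8):
    # The k shortest simple paths between a pair of switches; a
    # load balancer can then spread traffic across all of them.
    return list(itertools.islice(nx.shortest_simple_paths(g, src, dst), k))</code></pre>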
<p>Jellyfish is sufficiently unlike past designs that implementation challenges will certainly arise. But so far it seems like an unstructured network just might work. And happily, rather than running into a tricky tradeoff, the two design goals of high bandwidth and incremental expansion appear to be satisfied by one network.</p>
<p>Congratulations to Ankit and Chi-Yao who have done (and are continuing to do) great work on this project. Don't miss Chi-Yao's talk in the Thursday afternoon data center networking session! Finally, thanks to NSF for supporting this research.</p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com1tag:blogger.com,1999:blog-8216930037889709643.post-30269872031784680432011-12-06T16:39:00.001-08:002011-12-06T21:23:31.675-08:00Like a spider's web<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/--c55g623q2k/Tt64M7xCgjI/AAAAAAAAAL8/B5TaB_E3CPU/s1600/LondonAnecdotes1848.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="600" src="http://2.bp.blogspot.com/--c55g623q2k/Tt64M7xCgjI/AAAAAAAAAL8/B5TaB_E3CPU/s600/LondonAnecdotes1848.jpg" /></a></div>
<p style="font-size:200%; line-height:150%"><i>“It is anticipated that the whole of the populous parts of the United States will, within two or three years, be covered with net-work like a spider's web.”</i></p>
<p align=right>&mdash; Illustration and quote from <a href="http://books.google.com/books?id=AU5UaumQJvICe"><i>The London Anecdotes</i></a>, 1848<br>Quoted in <a href="http://tomstandage.wordpress.com/books/the-victorian-internet/"><i>The Victorian Internet</i></a></p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com0tag:blogger.com,1999:blog-8216930037889709643.post-19487930419015162182011-12-04T22:22:00.000-08:002011-12-04T20:24:28.068-08:00Google+ vs. Facebook engagement<p>Here's a little statistically-insignificant self-experimentation, based on 51 near-simultaneous posts to both Google+ and Facebook, from the beginning of September 2011 to the present.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-ODR21Q6_OJI/TtxDf_nwFTI/AAAAAAAAALM/XjzzqtCcvh4/s1600/scatterplot.jpeg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="400" src="http://2.bp.blogspot.com/-ODR21Q6_OJI/TtxDf_nwFTI/AAAAAAAAALM/XjzzqtCcvh4/s400/scatterplot.jpeg" /></a></div>
<p>"Engagement" is the number of unique people (excluding me) who responded, either by commenting, liking or +1'ing the post, or liking or +1'ing a comment on the post. (The scatterplot points are perturbed slightly from their true integral values so they don't completely overlap.) What's remarkable here is how coincidentally similar the engagement is on the two networks &mdash; the difference is under 2 percent (!) despite the fact that my social network on Facebook is currently 2.25 times as large as on Google+.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-Gk8i-zVwumk/TtxD3vwRA6I/AAAAAAAAALY/Kuw0Bb3efe0/s1600/means.jpeg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="400" src="http://2.bp.blogspot.com/-Gk8i-zVwumk/TtxD3vwRA6I/AAAAAAAAALY/Kuw0Bb3efe0/s400/means.jpeg" /></a></div>
<p>Google+ has very slightly higher engagement on STEM-related posts (science, technology, engineering, and mathematics), while Facebook is slightly higher for other posts, but the differences are well within 95% confidence intervals.</p>
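<p>For anyone who wants to run the same comparison on their own posts, the computation is roughly this (the counts below are placeholders, not my actual data):</p>
<pre><code>import numpy as np
from scipy import stats

def mean_ci(counts, level=0.95):
    # Mean engagement with a t-based confidence interval,
    # appropriate for a small sample of posts.
    x = np.asarray(counts, dtype=float)
    half = stats.t.ppf((1 + level) / 2, len(x) - 1) * stats.sem(x)
    return x.mean(), x.mean() - half, x.mean() + half

gplus = [3, 5, 2, 8, 4, 6]  # hypothetical per-post engagement counts
fb = [4, 4, 3, 7, 5, 6]
print(mean_ci(gplus))
print(mean_ci(fb))</code></pre>
<p>Heavily overlapping intervals, as in my data, are consistent with there being no real difference.</p>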
<p>It's possible my social network has somewhat shifted to Google+. Here is the post set split into five chronological partitions with 10 or 11 posts in each.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-CfayN7x9xTU/TtxEBFXuK7I/AAAAAAAAALk/4T8u1SO1bvY/s1600/timeplot.jpeg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="400" src="http://1.bp.blogspot.com/-CfayN7x9xTU/TtxEBFXuK7I/AAAAAAAAALk/4T8u1SO1bvY/s400/timeplot.jpeg" /></a></div>
<p>Engagement as defined above excludes re-shares of posts, because I wasn't confident the two social networks report these in the same way (e.g., do they both report recursive shares?). But there is one interesting and apparently significant difference: on Google+, STEM posts saw nearly <em>seven times</em> as much sharing as non-STEM posts in this very small data set, an effect which didn't appear on Facebook.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-pGaliJVHCVA/TtxEG_fdbII/AAAAAAAAALw/nsaH-v2D4kc/s1600/share-means.jpeg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="400" src="http://3.bp.blogspot.com/-pGaliJVHCVA/TtxEG_fdbII/AAAAAAAAALw/nsaH-v2D4kc/s400/share-means.jpeg" /></a></div>
<p>Of course, all of this is specific to my social network, and really, the sample size is too small to draw any conclusions at all. Now, if someone were to compare posts for a large number of people that cross-post publicly to Facebook and Google+, that could start to get interesting...</p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com3tag:blogger.com,1999:blog-8216930037889709643.post-2940691463984179792011-12-02T15:37:00.001-08:002011-12-02T16:03:45.400-08:00Matrix multiplication algorithms over time<p>The asymptotically fastest algorithm for matrix multiplication takes time O(<i>n</i><sup>ω</sup>) for some value of ω. Here are the <a href="http://blog.computationalcomplexity.org/2011/11/matrix-mult-you-heard-it-here-third.html">best known upper bounds</a> on ω over time.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-Dotb2FPFxpk/TtliwwjPt5I/AAAAAAAAAK0/8kdqlaD_-uk/s1600/multiplication.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="400" src="http://2.bp.blogspot.com/-Dotb2FPFxpk/TtliwwjPt5I/AAAAAAAAAK0/8kdqlaD_-uk/s400/multiplication.jpg" /></a></div>
<p><a href="http://www.scottaaronson.com/blog/?p=839">The latest improvements</a>, the first in over 20 years, are due to Andrew Stothers and Virginia Vassilevska Williams. The latter gave an O(<i>n</i><sup>2.3727</sup>)-time algorithm for multiplying matrices.</p>
<p>When will the sometimes-conjectured ω = 2 be reached? Certainly nothing wrong with taking a linear fit of this data, right?</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-vkOcwOMlhio/Ttljne9rQ1I/AAAAAAAAALA/Lk2QZfuPQfY/s1600/multiplication-fit.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="240" width="400" src="http://4.bp.blogspot.com/-vkOcwOMlhio/Ttljne9rQ1I/AAAAAAAAALA/Lk2QZfuPQfY/s400/multiplication-fit.jpg" /></a></div>
<p>So that would be around the year 2043. Unfortunately, the pessimist's exponential fit asymptotes to ω = 2.30041...</p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com0tag:blogger.com,1999:blog-8216930037889709643.post-89836265112316905862011-11-16T00:21:00.001-08:002011-11-16T09:33:47.374-08:00Live-Blogging HotNets 2011 (Day Two)<p>(OK, not quite live...)</p>
<p><a href="http://youinfinitesnake.blogspot.com/2011/11/live-blogging-hotnets-2011.html">Day One was over here.</a></p>
<h4>Session 5</h4>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final85.pdf">Christopher Riederer</a> spoke about auctioning your personal information. Unfortunately I missed the talk, but it must have been a good one since there was quite a bit of discussion.</p>
<p>Next up was <a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final131.pdf">Vincent Liu</a> speaking about Tor Instead of IP &mdash; which is just what it sounds like, Tor as an Internet architecture. Of course you can't just use Tor unchanged; they have proposals for controlling incoming traffic, resisting DoS, and getting better efficiency (lower stretch) given enough jurisdictional diversity and plausible routing-policy compliance. Similar to what <a href="https://telex.cc/">Telex</a> does with network-layer steganography, the general approach here is to make Internet connectivity an all-or-nothing proposition: if you can get <em>anywhere</em> outside a censored region, you can get <em>everywhere</em>, so all the censor can do is block the entire Internet.</p>
<p>Last was Gurney et al.'s <a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final30.pdf">Having your Cake and Eating it too: Routing Security with Privacy Protections</a>. The notion of security here is that the neighbors of an AS can verify that the AS in question selected and advertised routes according to its agreed-upon policy. The privacy is that they can verify this without revealing any more information than the verification itself. The paper presents protocols to verify several particular policies (e.g., if the AS was offered some route, then it advertised one). Could be useful for debugging interdomain connectivity issues.</p>
<h4>Session 6</h4>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final60.pdf">Steven Hong</a> presented Picasso which provides a nice abstraction, hiding the complexity of full duplex signal shaping to utilize discontiguous spectrum fragments.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final112.pdf">Mohammad Khojastepour</a> spoke about using antenna cancellation to improve full duplex wireless.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final157.pdf">Souvik Sen</a> asked, Can physical layer frequency response information improve WiFi localization? Yes. And it involves driving Roombas around cafeterias.</p>
<h4>Session 7</h4>
<p>Poor old TCP felt snubbed at this workshop until <a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final100.pdf">Keith Winstein</a> started roasting it. TCP is designed to work well in steady-state with very particular assumptions. It fails under messy real-world conditions &mdash; hugely varying RTTs, stochastic rather than congestion-induced loss, very dynamic environments, and so on. Keith's goal is to make transmission control more robust and efficient by modeling network uncertainty. So the end-host has a model of the network (potentially including topology, senders, queues, etc.), but any of these elements can have unknown parameters describing their behavior. Then it maintains a probability distribution over possible parameter values and updates its beliefs as evidence comes in from the network (e.g., ACKs). At any given time, it takes whatever action maximizes expected utility (= some throughput / fairness / latency metric) given the current distribution of possible situations. The action is the delay before sending the next packet. It's a beautifully intuitive idea, or as a certain reviewer put it,</p>
<blockquote>"This is the craziest idea I've heard in a very long time."</blockquote>
<p>Keith showed a simulation in one small example where this approach decides to use slow-start behavior &mdash; without that being hard-coded in &mdash; but then, after getting a little info, immediately sends at the "right" rate. But there are big challenges. Some discussion touched on state explosion in the model, the danger of overfitting with an overly-complicated model, and how much the sender needs to know about the network.</p>
<p>Q: What would be the first variable you'd add to the network model beyond what TCP captures? A: Stochastic loss.</p>
<p>Q (Barath): Should we add a control plane to disseminate information about the network to assist the model? A: anything that gets more info is good.</p>
<p>Q (Ion): When won't this work? A: Model mismatch &mdash; if the truth is something the model didn't consider.</p>
<p>Q: Do the smart senders coexist? If you have 2 versions of the algorithm, do they coexist? A: Good question.</p>
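<p>To give a flavor of the approach, here's a toy sketch of the inference half of the idea &mdash; mine, not Keith's code. A single unknown parameter, the stochastic loss rate, gets a discretized prior that is updated from ACK feedback; a real version would model queues, RTTs, and competing senders, and would then pick the send time maximizing expected utility under the posterior:</p>
<pre><code>import numpy as np

p_grid = np.linspace(0.0, 0.5, 51)           # candidate loss rates
belief = np.ones_like(p_grid) / len(p_grid)  # uniform prior

def observe(acked):
    # Bayes update after one packet: an ACK has likelihood (1 - p),
    # a drop has likelihood p, under candidate loss rate p.
    global belief
    belief = belief * ((1.0 - p_grid) if acked else p_grid)
    belief = belief / belief.sum()

for outcome in [True] * 9 + [False]:  # nine ACKs, then one drop
    observe(outcome)
print((belief * p_grid).sum())        # posterior mean: about 0.16</code></pre>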
<p>Next, a movie. The <a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final40.pdf">BitMate paper</a> &mdash; "BitTorrent for the Less Privileged" &mdash; by Umair Waheed and Umar Saif was one of the few entirely non-US papers. Unfortunately visa issues prevented their attendance, but Umar presented via a movie. The problem is that high-bandwidth BitTorrent nodes form mutually helpful clusters, but no one wants to upload to low-bandwidth nodes. The solution included several interesting mechanisms, but Umar said the one that got 80% of the benefit was something they call "Realistic Optimistic Unchoke," which improved unchoking of low-bandwidth peers. BitMate gets dramatic improvements in efficiency and fairness.</p>
<p>Umar took questions via Skype. It was so 21st century.</p>
<p>BitMate was <a href="http://www.nytimes.com/external/gigaom/2011/02/28/28gigaom-bitmate-brings-bittorrent-to-the-developing-world-504.html">written up in the NYTimes</a>.</p>
<p>Q: what's your experience with real deployment? A: 40,000 users from 170 countries &mdash; a lot from US "for reasons that escape me" (~40% from North America). Many users from Iran, probably to circumvent censorship.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final19.pdf">Vyas Sekar</a> told us that a particular real 80,000 user network with tens of sites had 900 routers and <em>636 middleboxes</em> (Firewalls, NIDS, Media gateways, load balanceers, proxies, VPN gateways, WAN optimizers, Voice gateways). Problem with middleboxes: device sprawl. Another problem: dealing with many vendors (Bruce Davie, Cisco: <em>that</em> problem we can fix!). Result: high CapEx and high OpEx. (Just network security cost $6B in 2010, $10B in 2016.) Also, middleboxes today are inflexible and difficult to extend.</p>
<p>So most net innovation happens via middleboxes, but it doesn't come easily. And middleboxes have been missing from the research community's discussion of innovation in networks. The Vision: Enable innovation in middlebox deployments. Approach: (1) software-centric implementations, (2) consolidated physical platform, (3) logically centralized open management APIs. There are also opportunities for reduced costs (via multiplexing) and improved efficiency (via reuse of functions like session reconstruction of TCP flows).</p>
<h4>Session 8</h4>
<p>This was the data center session with three more cool papers, but unfortunately I had to leave early.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/program.shtml">See the full program.</a>
<p>Overall I was impressed with the ideas and engaging presentations. Thanks to the chairs <a href="http://cs.wisc.edu/~akella">Aditya</a> and <a href="http://www.cs.berkeley.edu/~istoica/">Ion</a> for a very well-run workshop. </p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com0tag:blogger.com,1999:blog-8216930037889709643.post-11489473674446898902011-11-14T13:41:00.000-08:002011-11-16T00:25:28.646-08:00Live-blogging HotNets 2011<p>Lots of exciting talks and discussion at <a href="http://conferences.sigcomm.org/hotnets/2011/">HotNets</a>. Here are a few highlights.</p>
<h4>Session 1</h4>
<p>The first session was on Internet architecture. <a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final11.pdf">Ali Ghodsi</a> spoke about three unanswered questions for the burgeoning area of {data/information/content}-{centric/oriented} networking. These were privacy, data-plane efficiency, and whether ubiquitous caching (a key feature of nearly all the proposals) actually provides quantitative improvement. For the latter point, the argument is that work on web caching from the late 1990's indicated that if you have caching near the edge (as already exists in present-day web caches), then adding ubiquitous caching to the architecture does not provide much more benefit due to heavy-tailed access distributions. So, does the caching advantage of information-centric networking warrant such a large-scale architectural change?</p>
<p>In later one-on-one discussions, Dan Massey (who later gave an interesting talk on <a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final33.pdf">the IPv4 grey market</a>) argued that at least for the NDN project, ubiquitous caching is not the focus. Something more important is being able to do policy-aware multipath load balancing &mdash; in a very dynamic way in the network, by shipping content requests optimistically to multiple locations and seeing what ends up working well. A kind of speculative execution for forwarding. This may not be specific to content-awareness, but Dan argued that if you want to make this work, you end up needing something like NDN's Pending Interest Table mechanism. (The discussion was brief, but hopefully I restated the argument accurately.)</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final17.pdf">Dave Andersen</a> and <a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final12.pdf">Scott Shenker</a> argued that the primary goal of a future Internet architecture should be to accommodate evolution within the architecture itself, rather than just adding new functionality. The XIA approach introduces the notion of data-plane fallbacks so the sender can ask for new functionality and if it isn't supported everywhere, things still work. Scott focused on bringing evolvability to the architecture by applying the principles of extensibility and modularity.</p>
<p>There were several questions about what would be the incentives to deploy either approach. Scott responded that while incentives are important, first we need to understand what <em>technical</em> mechanisms we need to make evolvability feasible &mdash; which previously we have not understood. Ion Stoica asked, What would be the first thing that would drive deployment of one of the future Internet architectures? Some of the answers included <a href="http://en.wikipedia.org/wiki/SCADA">SCADA</a> networks which need extreme security, content caching (despite the first talk!) where content providers have monetary incentives, and (from Hari) the ability to deploy differential pricing by having more information about applications' intent (though users may not like this!).</p>
<p>Another question was whether these architectures would actually have fixed the processes which led to ossification in practice (e.g., via middlebox problems); and whether they'd aid deployment of protocols like secure BGP which have had problems in practice.</p>
<h4>Session 2</h4>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final4.pdf">Haitao Zheng</a> from UCSB spoke about building wireless data centers, where directional wireless interfaces on racks of servers can be dynamically steered to connect pairs of racks that need to communicate. If you want to connect racks of servers with high-bandwidth wireless links, interference is a big problem. Their approach is 3D beamforming: Rather than aiming the radio directly (in the 2D plane at the top of racks), bounce it off a reflective ceiling and put an absorber around the target. This direction of reception reduces interference. In addition to having many pretty pictures of interference patterns, this is part of a <a href="http://ccr.sigcomm.org/online/?q=node/645">line</a> <a href="http://www.cs.illinois.edu/~singla2/hotnets10.pdf">of</a> <a href="http://conferences.sigcomm.org/sigcomm/2011/papers/sigcomm/p38.pdf">work</a> (in wireless and optical) that has a very cool approach &mdash; we always think of changing the traffic flow to match the topology; now we can <em>change the physical topology</em> to match the traffic.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final35.pdf">Abhinav Pathak</a> talked about finding energy bugs in mobile devices &mdash; as he said, that hits <em>three</em> hot keywords.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final32.pdf">Jonathan Perry</a> spoke about Rateless Spinal Codes. My main question: why do coding schemes get to have such <a href="http://en.wikipedia.org/wiki/Tornado_codes">cool</a> <a href="http://en.wikipedia.org/wiki/Raptor_code">names</a>?</p>
<h4>Session 3</h4>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final3.pdf">Mark Reitblatt</a> spoke on "Consistent Updates for Software-Defined Networks: Change You Can Believe In!". Here is the problem they are solving: As you are reconfiguring your network how can you be sure your policy (like availability or security) is preserved even during the transition? Traditionally, this is hard because of the inconsistency of having one set of forwarding rules deployed some places, and another set deployed other places. Actually, it might seem impossible. Even if you magically deploy a change everywhere instantly, you can still get policy violations because <em>packets travel across non-negligible time</em>. Can you solve it? Yes you can!</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final103.pdf">Junda Liu</a> should get some sort of award for giving the most entertaining talk that also featured a state machine diagram.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final56.pdf">Barath Raghavan</a> calculated the energy and <a href="http://en.wikipedia.org/wiki/Emergy">emergy</a> of the Internet, which has been getting some <a href="http://www.newscientist.com/blogs/onepercent/2011/10/307-gw-the-maximum-energy-the.html">press</a> recently and which generated a lot of discussion on the complexities and implications of measuring society's energy use.</p>
<p>Awesome feature of this session: all the talks finished early!</p>
<h4>Session 4</h4>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final26.pdf">Jon Howell</a> spoke on a proposed refactoring (and narrowing) of the API for web applications executing on user machines.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final55.pdf">Ethan Katz-Bassett</a> spoke about Machiavellian Routing. The coolness here is a trick by which ISPs can control <em>inbound</em> routing, so if they notice there is a connectivity problem at some AS they can induce other senders to avoid the problem.</p>
<p><a href="http://conferences.sigcomm.org/hotnets/2011/papers/hotnetsX-final33.pdf">Dan Massey</a> noted that a grey market is emerging for IPv4 addresses and argued that we need a way not to prevent the market from existing outside the traditional Internet governance, but instead to verify what transactions happen. This would make the market more honest and efficient. Most interesting point from discussion (I think from Jon Howell): Why do we want to improve the IPv4 market? This will allow more efficient use of available IPv4 addresses ... but if we let the market be as baroque and inconvenient as possible, it will encourage deployment of IPv6 sooner!</p>
<p><a href="http://youinfinitesnake.blogspot.com/2011/11/live-blogging-hotnets-2011-day-two.html">Onward to Day Two...</a></p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com1tag:blogger.com,1999:blog-8216930037889709643.post-37119952021362072332011-08-03T01:12:00.000-07:002012-05-01T23:12:53.999-07:00What's wrong with computer science reviewing?<p>There is a sense among some researchers in computer science that <b>many peer reviews in our field are bad</b> &mdash; in particular, too often unfairly slanted against papers in various ways that do not encourage good science and engineering. Why might this be happening and what can we do about it?</p>
<p>[Aside: you can now <a href="https://plus.google.com/102845740959911684459/posts">follow me on Google+</a>. Short posts there, long posts here.]</p>
<a name='more'></a>
<h4>The problem</h4>
<p>First of all, let me be clear: (1) I think most reviews, regardless of whether they recommend acceptance or rejection, are well done and reflect care and significant time that the reviewers have invested. (2) Exciting, impactful research still manages to get done, so the system does generally work pretty well. Still, that doesn't mean we can't improve it. (3) Despite the fact that I try to take care with each review, statistically speaking I have probably committed each of the problems discussed here.</p>
<p>With the fine print out of the way ... what is this possible problem? The evidence is almost all anecdotal and biased. But since that is all we have, let me supply some anecdotes, first that reviews are often negative:</p>
<ul>
<li><p>This July's edition of Computer Communication Review <a href="http://ccr.sigcomm.org/online/files/p3-v41n3ed-keshav-editorial.pdf">rejected all its peer-reviewed submissions</a>, thus matching (at least for one issue) the Journal of Universal Rejection as the <a href="http://www.universalrejection.org/">most prestigious journal as judged by acceptance rate</a>.</p></li>
<li><p>Jeffrey Naughton critiqued the state of research in the databases community with bad reviewing ("Reviewers hate EVERYTHING!") as a key problem, giving the anecdote that in SIGMOD 2010, out of 350 submissions, <a href="http://pages.cs.wisc.edu/~naughton/naughtonicde.pptx">only paper 1 had all reviews rate it "accept" or higher; and only 4 had an average rating of "accept" or higher</a>.</p></li>
<li><p>Taking a recent major systems-and-networking-related conference as a representative, presumably-normal example: papers generally received 4 or 5 reviews and were scored on a scale of 1 to 5. Out of about 177 submissions, only six accepted papers received <em>any</em> 5's. Only one received more than a single 5, which also says something about variance.</p></li>
<li>"...there is a pervasive sense of unease within these communities about the quality and fairness of the review process and whether our publication processes truly serve the purposes for which they are intended. ... It was clear from the reaction to the panel that concerns with the reviewing process cut across many, if not all, fields of computer science." (<a href="http://portal.acm.org/citation.cfm?id=1462581">Panel summary</a> from a session at the 2008 CRA Conference at Snowbird)</li>
</ul>
<p><b>Is it just a reweighting problem?</b> If reviews were just conservative numerically, any problem could be fixed by "curving" the scores up. That would be nice, but ... another anecdote:</p>
<ul>
<li>A survey asked authors of SIGCOMM 2009 submissions whether they agreed that "The reviews were technically correct". <a href="http://www.sigcomm.org/sites/default/files/SIGCOMM%2009%20Comm%20FB.pdf">Roughly one-third of respondents disagreed or strongly disagreed</a>, about a third agreed or strongly agreed, and another third had no opinion.</li>
</ul>
<p>Even taking into account the fact that the survey included authors whose submissions were rejected and might just be grumpy, those numbers seem undesirable. Reviews are sometimes wrong or emphasize unimportant or subjective problems, even when the paper gets accepted. (And that's not necessarily the reviewer's fault.)</p>
<p><b>But this is true in every field, right?</b> After all, authors have been complaining about criticisms for centuries. Here, by the way, are a couple favorite criticisms:</p>
<blockquote>"Your manuscript is both good and original. But the part that is good is not original, and the part that is original is not good." (unattributed)
</blockquote>
<blockquote>"In one place in Deerslayer, and in the restricted space of two-thirds of a page, Cooper has scored 114 offenses against literary art out of a possible 115. It breaks the record." (Mark Twain, How to Tell a Story and Other Essays.)
</blockquote>
<p>Getting back to the point, there is plenty of precedent for reviewers not seeing the light. But there is anecdotal evidence that this is a <b>bigger problem in CS</b> than in certain other areas:</p>
<ul>
<li><p>A study of NSF panel reviews found that reviewers in computer science give lower scores on average than in other areas. (<b>Note:</b> I read this in CACM or some other magazine but now I can't find it; if you can, please let me know.) <b>Update:</b> <a href="http://cacm.acm.org/blogs/blog-cacm/134743-yes-computer-scientists-are-hypercritical/fulltext">Here's the data</a>: CISE proposals average 0.41 points lower than other directorates.</p></li>
<li><p>While it appears to be a common (but not universal) belief in CS that reviewers are too often wrong and frustrating, I'm told by at least one physicist that this is not the general feeling about reviewers in that field. They are "rarely out to actively find problems with your paper", and while they may often misunderstand parts of the paper, the authors can respond, and usually the reviewers or the journal editor will accept the response. Publication is still competitive and often annoying for other reasons, but reviewers are generally reasonable.</p></li>
</ul>
<p><b>So what? Does this have any negative impact?</b> As pointed out by others: </p>
<ul>
<li><p>Researchers may be discouraged. (I know of at least one top PhD graduate who went to industry citing weariness with "selling" papers as one cause.)</p></li>
<li><p>It puts CS at a disadvantage with other fields, if we are generally more negative in grant proposal reviews. As Naughton wrote, <em>"funding agencies believe us when we say we suck"</em>.</p></li>
<li><p>More speculative or unusual work (with dozens of <em>potential</em> challenges for the approach that a reviewer could cite) is at a disadvantage compared to work with well-known quantitative metrics for evaluation.</p></li>
<li><p>Variance in reviews may make papers more likely to return for another round of reviewing at another conference, increasing time to publication and reviewer workload.</p></li>
</ul>
<h4>Causes of the problem</h4>
<p>Naughton suggested that <em>"Reviewers are trained by receiving bad reviews from other reviewers who have received bad reviews in the past"</em>. Keshav <a href="http://ccr.sigcomm.org/online/files/p3-v41n3ed-keshav-editorial.pdf">suggested</a> human failings and increasing reviewer workloads. Without disagreeing with those possibilities, I'm wondering: <b>what about CS in particular might exacerbate the problem?</b> Here are two ideas.
</p>
<ol>
<li><p><b>No author response to reviewers.</b> As a consequence of CS's focus on conferences, most venues (in my area) have no opportunity for authors to answer reviewer criticism. The communication is author ---&gt; reviewer ---&gt; author, with no feedback to the reviewer. It's a little like putting papers on trial without a defense team. As a result:</p>
<ul><li>Bad reviews are more likely to happen, because the reviewer typically never learns that they have submitted a bad review, and is not really held accountable.</li>
<li>Once a bad review does happen, there's no chance to fix it.</li>
</ul></li>
<li><p><b>Focus on bugs.</b> (This is extremely speculative.) As computer scientists, we are <em>really great</em> at spotting bugs, and that's a good thing when you're writing code. Possibly, some of that carries over into reviewing more than it should. Maybe bug-finding is easier than thinking carefully about contributions of the paper &mdash; especially if, once you honestly think there's a bug, you don't have to do any more work even if you're wrong. (Just noticed that <a href="http://www.usenix.org/event/wowcs08/tech/full_papers/crowcroft/crowcroft.pdf">someone else</a> had the same idea.)</p></li>
</ol>
<h4>Fixing the problem</h4>
<p>I'm suggesting these as possible directions to discuss, not as solutions I think are guaranteed to work.</p>
<ol>
<li><p><b>Allow authors to respond to reviewers.</b> Just as in TCP's three-way handshake, one would hope that both involved parties get feedback. Responses, at least in theory, (1) create incentive for better reviews and feedback to help improve, (2) allow authors to point out simple misunderstandings in reviews. (Note that some venues, like ASPLOS, have rebuttals. And in fact, <a href="http://www.sigcomm.org/publications/computer-communication-review">CCR</a> has reasonably fast turnaround and allows responses to reviewer comments as they arrive. Apparently that didn't help the July issue, though...)</p>
<p>One could argue that reviewers already have an incentive to do well, because they have their reviews looked at (or even voted upon!) by other program committee members. But other reviewers don't know the paper and its area as well as the authors; and reviewers have at least as much incentive to maintain a friendly relationship with other reviewers as they do to argue the case for a specific paper. Arguing a case after one reviewer has taken a negative stand involves extra effort and to a certain extent puts one's reputation on the line. I suspect <b>the most effective response comes from the authors.</b> They have the needed incentive and knowledge.</p></li>
</ol>
<p>That seems like the most obvious approach, but it does require organizational change. There are some smaller steps that might be easier on an individual level.</p>
<ol start=2>
<li><p><b>Avoid <a href="http://pages.cs.wisc.edu/~naughton/naughtonicde.pptx">Naughton</a>'s checklist for bad reviewing.</b> Quoting him directly:
<blockquote>
<ul>
<li>Is it "difficult"?</li>
<li>Is it "complete"?</li>
<li>Can I find any flaw?</li>
<li>Can I kill it quickly?</li>
</ul>
</blockquote></p>
</li>
<li><p><b>Focus on what a paper contributes, not on what it doesn't contribute,</b> which is always an infinitely long list. Focusing on the absent results will inevitably lead any paper to the wastebin of rejection and any author to a pit of misery.</p>
<p>In particular, it seems to me that <em>"This paper didn't show X"</em> is, by itself, not a valid criticism. It is an irrelevant factoid unless it negates or diminishes some other contribution in the paper. If it is fair to argue that particular results are absent, then my first beef with every paper is going to be that it fails to resolve whether P &ne; NP.</p>
<p>Of course, a paper should get more "contribution points" for a better and more thorough evaluation, but perhaps it's OK to leave some questions unanswered, particularly since it's often hard to predict which dimension or potential inefficiency the reviewers will be interested in. Leaving certain questions unanswered is entirely compatible with the paper making other useful contributions.</p></li>
<li><p><b><a href="http://arxiv.org/corr/home">Submit to arXiv</a></b>, bypassing the reviewing process entirely and letting other researchers judge what they want to read. Subscribe to arXiv RSS feeds so you find out about other people's work more quickly. Of course, arXiv currently has limited value for CS systems and networking researchers, since other such researchers tend not to look for papers there. More on that later.</p></li>
<li><p><b>Adopt policies that tolerate some reviewer pessimism.</b> As an example of what seems to me like a bad idea, a recent workshop had a reviewing policy that allowed a single reviewer to effectively veto a paper if they strongly disliked it.</p></li>
<li><p><b>Implement feedback yourself.</b> If a conference doesn't provide a means for author feedback to reviewers, the reviewer could implement this herself by including in the review a way to provide feedback, e.g., a link to a Google Docs form that could preserve the anonymity of the reviewer and authors. Disadvantage: This only fixes a piece of the problem and might seem strange to PC chairs and authors. </p></li>
</ol>
<p>Other past suggestions include reducing PC workloads, making reviews public, maintaining memory across conferences (so resubmissions are associated with old reviews), and much more; see links above and below.</p>
<p>The open question is, which of these will best improve the quality of reviews and, ultimately, CS research? My guess is that any good solution will include some form of author response to reviewers, but there are several ways to do that.</p>
<em>There's voluminous past discussion on this topic. Related links:</em>
<ul>
<li><a href="http://www.usenix.org/event/wowcs08/tech/">Workshop on Organizing Workshops, Conferences, and Symposia for Computer Systems</a>, in particular, see papers/talks in the 11:00 a.m. session. One became <a href="http://cacm.acm.org/magazines/2009/1/15664-viewpoint-scaling-the-academic-publication-process-to-internet-scale/pdf">an article in CACM</a>.</li>
<li><a href="http://ccr.sigcomm.org/drupal/?q=node/348">Open Issues in Organizing Computer Systems Conferences</a> (Jeffrey Mogul and Tom Anderson, CCR July 2008)</li>
<li><a href="http://portal.acm.org/citation.cfm?id=1462581">Paper and proposal reviews: is the process flawed?</a> (summary of a panel session at the 2008 CRA Conference at Snowbird)</li>
<li><a href="http://www.nature.com/nature/journal/v473/n7348/full/473452b.html">Peer reviews: make them public</a> (Nature correspondence)</li>
<li><a href="http://go.nature.com/witfzb">Peer reviews: some are already public</a> (Nature correspondence)</li>
<li><a href="http://cacm.acm.org/blogs/blog-cacm/100284-how-should-peer-review-evolve/fulltext">How Should Peer Review Evolve?</a> (Ed Chi, blog@cacm)</li>
<li><a href="http://cacm.acm.org/magazines/2009/5/24632-conferences-vs-journals-in-computing-research/fulltext">Conferences vs. Journals in Computing Research</a> (Moshe Vardi, CACM 2009)</li>
</ul>
<p><b>Update:</b> SIGCOMM 2012 <a href="http://conferences.sigcomm.org/sigcomm/2012/cfp.php">will have rebuttals</a>. Also, <a href="http://cacm.acm.org/blogs/blog-cacm/123611-the-nastiness-problem-in-computer-science/fulltext">Bertrand Meyer</a> has something to say about CS reviewing.</p>

<h3>Attractive scientific plots with gnuplot (2011-02-12)</h3>
<p>I use <a href="http://www.gnuplot.info/">gnuplot</a> for nearly all my graph-drawing for academic publications. On the whole, it's clean and relatively flexible, and that combined with inertia has been enough to keep me from trying interesting alternatives like <a href="http://matplotlib.sourceforge.net/">matplotlib</a>, <a href="http://plot.micw.eu/">Plot</a>, <a href="http://ploticus.sourceforge.net/doc/welcome.html">ploticus</a>, and <a href="http://www.r-project.org/">R</a>. However, gnuplot's default output is not especially pretty. I often see graphs in papers that look like this...</p>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-08e5_JzkcIk/TVeOED2iosI/AAAAAAAAAHY/_eNeaC6XEDU/s1600/default_gnuplot.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 280px;" src="http://4.bp.blogspot.com/-08e5_JzkcIk/TVeOED2iosI/AAAAAAAAAHY/_eNeaC6XEDU/s400/default_gnuplot.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5573079264298967746" /></a>
<p>...or worse, if it's been bitmapped rather than using EPS or PDF. With some tweaking, however, one can produce much more attractive output. I would much rather look at plots like this:</p>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-S6i5b6GJZ1c/TaCn_Iviu1I/AAAAAAAAAHo/YXlEltnqovQ/s1600/beautiful-gnuplot.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 241px;" src="http://1.bp.blogspot.com/-S6i5b6GJZ1c/TaCn_Iviu1I/AAAAAAAAAHo/YXlEltnqovQ/s400/beautiful-gnuplot.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5593655440314645330" /></a>
<p>In fact it looks better. Blogger doesn't seem to support any vector image format, but here are the <a href="http://brighten.bigw.org/youinfinitesnake/gnuplot/template.pdf">pdf version</a> and the <a href="http://brighten.bigw.org/youinfinitesnake/gnuplot/template.svg">svg version</a>. To produce the PDF version, you need gnuplot 4.4's pdfcairo terminal. Below, you can see the gnuplot files for the above two plots.</p>
<center>
<table border="0" cellspacing="1" cellpadding="0" width="500" bgcolor="#C6C6C6">
<tr>
<td bgcolor="#E0E0E0" align=center>
<button onclick="document.getElementById('boring_default_gnuplot').style.display = 'inline'">Expand boring default</button>
</td>
</tr>
<tr>
<td bgcolor="#E0E0E0">
<div id="boring_default_gnuplot" style="display:none;">
<pre>
set terminal postscript eps color
set output "boring_default.eps"
set xlabel "x axis label"
set ylabel "y axis label"
set key bottom right
set xrange [0:1]
set yrange [0:1]
plot "template.dat" \
index 0 title "Example line" w lp, \
"" index 1 title "Another example" w lp
</pre>
</div>
</td>
</tr>
</table>
<br>
<table border="0" cellspacing="1" cellpadding="0" width="500" bgcolor="#C6C6C6">
<tr>
<td bgcolor="#E0E0E0" align=center>
<button onclick="document.getElementById('beautified_gnuplot').style.display = 'inline'">Expand beautified gnuplot (PDF version)</button>
</td>
</tr>
<tr>
<td bgcolor="#E0E0E0">
<div id="beautified_gnuplot" style="display:none;">
<pre>
# Note you need gnuplot 4.4 for the pdfcairo terminal.
set terminal pdfcairo font "Gill Sans,9" linewidth 4 rounded fontscale 1.0
# Line style for axes
set style line 80 lt rgb "#808080"
# Line style for grid
set style line 81 lt 0 # dashed
set style line 81 lt rgb "#808080" # grey
set grid back linestyle 81
set border 3 back linestyle 80 # Remove border on top and right. These
# borders are useless and make it harder
# to see plotted lines near the border.
# Also, put it in grey; no need for so much emphasis on a border.
set xtics nomirror
set ytics nomirror
#set log x
#set mxtics 10 # Makes logscale look good.
# Line styles: try to pick pleasing colors, rather
# than strictly primary colors or hard-to-see colors
# like gnuplot's default yellow. Make the lines thick
# so they're easy to see in small plots in papers.
set style line 1 lt rgb "#A00000" lw 2 pt 1
set style line 2 lt rgb "#00A000" lw 2 pt 6
set style line 3 lt rgb "#5060D0" lw 2 pt 2
set style line 4 lt rgb "#F25900" lw 2 pt 9
set output "template.pdf"
set xlabel "x axis label"
set ylabel "y axis label"
set key bottom right
set xrange [0:1]
set yrange [0:1]
plot "template.dat" \
index 0 title "Example line" w lp ls 1, \
"" index 1 title "Another example" w lp ls 2
</pre>
</div>
</td>
</tr>
</table>
<br>
<table border="0" cellspacing="1" cellpadding="0" width="500" bgcolor="#C6C6C6">
<tr>
<td bgcolor="#E0E0E0" align=center>
<button onclick="document.getElementById('beautified_gnuplot_svg').style.display = 'inline'">Expand beautified gnuplot (SVG version)</button>
</td>
</tr>
<tr>
<td bgcolor="#E0E0E0">
<div id="beautified_gnuplot_svg" style="display:none;">
<pre>
set terminal svg size 320,240 fname "Gill Sans" fsize 9 rounded dashed
# Line style for axes
set style line 80 lt 0
set style line 80 lt rgb "#808080"
# Line style for grid
set style line 81 lt 3 # dashed
set style line 81 lt rgb "#808080" lw 0.5 # grey
set grid back linestyle 81
set border 3 back linestyle 80 # Remove border on top and right. These
# borders are useless and make it harder
# to see plotted lines near the border.
# Also, put it in grey; no need for so much emphasis on a border.
set xtics nomirror
set ytics nomirror
#set log x
#set mxtics 10 # Makes logscale look good.
# Line styles: try to pick pleasing colors, rather
# than strictly primary colors or hard-to-see colors
# like gnuplot's default yellow. Make the lines thick
# so they're easy to see in small plots in papers.
set style line 1 lt 1
set style line 2 lt 1
set style line 3 lt 1
set style line 4 lt 1
set style line 1 lt rgb "#A00000" lw 2 pt 7
set style line 2 lt rgb "#00A000" lw 2 pt 9
set style line 3 lt rgb "#5060D0" lw 2 pt 5
set style line 4 lt rgb "#F25900" lw 2 pt 13
set output "template.svg"
set xlabel "x axis label"
set ylabel "y axis label"
set key bottom right
set xrange [0:1]
set yrange [0:1]
plot "template.dat" \
index 0 title "Example line" w lp ls 1, \
"" index 1 title "Another example" w lp ls 2
</pre>
</div>
</td>
</tr>
</table>
</center>
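<p>The plot commands above pull two datasets from one file via gnuplot's <tt>index</tt> keyword, which selects blocks separated by <i>two</i> blank lines. The original <tt>template.dat</tt> isn't shown here, but a minimal stand-in (made-up values in the plots' [0:1] range) would look like this:</p>
<pre>
# template.dat -- hypothetical data for the scripts above.
# Block 0: "Example line"
0.0   0.05
0.25  0.30
0.5   0.45
0.75  0.60
1.0   0.70


# Block 1: "Another example"
0.0   0.10
0.25  0.20
0.5   0.50
0.75  0.85
1.0   0.95
</pre>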
<p>Now here's something for which I would pay (some) real money: a gnuplot terminal which outputs directly to Keynote. Then, for example, during a presentation, one could have lines in the plot appear one at a time, explaining each without the distraction of showing irrelevant objects. This should actually be quite doable since Keynote's format is just a zipped XML.</p>
<p><b>Update 2011.04.09:</b> Mac users of macports may note that the default install of gnuplot for some reason <a href="https://trac.macports.org/ticket/27893">excludes pdfcairo</a>. Abhinav Bhatele writes with instructions for enabling pdfcairo in macports:
<blockquote>
<pre>
$ sudo port edit gnuplot
Add these lines somewhere in the file (I added them before the lua variant):
variant pangocairo description "Enable pdfcairo" {
depends_lib-append port:pango
configure.args-delete --without-cairo
configure.args-append --with-cairo
}
$ sudo port info gnuplot
Just to check that pangocairo variant exists. And then:
$ sudo port uninstall gnuplot
$ sudo port install gnuplot +pangocairo
You'll need to keep in mind that if you do port selfupdate,
the edited version of the portfile might get overwritten.
</pre></blockquote>
<p><b>Update to the update 2011.12.04:</b> Looks like macports now includes the pangocairo variant, but still does not install it by default; so it should work if you run just the last two lines.</p>
<p><b>Update 2011.04.09:</b> Added SVG version and made it slightly more beautiful.</p>
<p><b>Update 2013.06.21:</b> Added <tt>fontscale 1.0</tt> in PDF version. Also, it seems the SVG output now looks somewhat different in a more recent gnuplot ... will have to fix that sometime.</p>
<h3>A peer-reviewing horror story (2010-12-28)</h3>
<p><a href="http://www.gutenberg.ca/ebooks/james-runes/james-runes-00-h.html">A peer-reviewing horror story</a>. Don't let this happen to you.</p>
<p>Also <a href="http://www.gutenberg.org/ebooks/9629">available</a> in various ebook formats.</p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com2tag:blogger.com,1999:blog-8216930037889709643.post-2631848648291970082010-10-23T22:05:00.000-07:002010-10-24T00:47:16.990-07:00Google Frequency Plotter<script src="http://www.google.com/uds/api?file=uds.js&amp;v=1.0" type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
function trim(s) {
return s.replace(/\s+$/, '').replace(/^\s+/, '');
}
var EXTENDED_MAP=
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-.';
var EXTENDED_MAP_LENGTH = EXTENDED_MAP.length;
// Encodes an array of values using the Google Chart API's "extended"
// encoding, scaling them so that maxVal becomes the highest value.
function extendedEncode(arrVals, maxVal) {
var chartData = 'e:';
for(var i = 0, len = arrVals.length; i < len; i++) {
// In case the array vals were translated to strings.
var numericVal = new Number(arrVals[i]);
// Scale the value to maxVal.
var scaledVal = Math.floor(EXTENDED_MAP_LENGTH *
EXTENDED_MAP_LENGTH * numericVal / maxVal);
if(scaledVal > (EXTENDED_MAP_LENGTH * EXTENDED_MAP_LENGTH) - 1) {
chartData += "..";
} else if (scaledVal < 0) {
chartData += '__';
} else {
// Calculate first and second digits and add them to the output.
var quotient = Math.floor(scaledVal / EXTENDED_MAP_LENGTH);
var remainder = scaledVal - EXTENDED_MAP_LENGTH * quotient;
chartData += EXTENDED_MAP.charAt(quotient) + EXTENDED_MAP.charAt(remainder);
}
}
return chartData;
}
var app;
var searchControl = new GSearchControl();
var counts = new Array();
var pendingQueries = new Array();
pendingQueries[0]="new york";
pendingQueries[1]="champaign";
var currentQuery;
var isRangeQuery = false;
var maxPoints = 51;
var template;
var domain;
var underscores = /_+/g;
function OnLoad() {
app = new App();
}
function App() {
// tell the search control to call us back when each search completes
searchControl.setSearchCompleteCallback(app, App.prototype.OnSearchComplete);
var searcher = new GwebSearch();
var options = new GsearcherOptions();
searchControl.addSearcher(searcher, options);
searchControl.draw(document.getElementById("searchcontrol"));
}
App.prototype.OnSearchComplete = function(sc, searcher) {
if (searcher.cursor === undefined) {
counts[currentQuery] = 0;
appendStatus("!");
}
else {
counts[currentQuery] = parseInt(searcher.cursor.estimatedResultCount);
appendStatus(".");
}
executePending();
}
App.prototype.OnSearchStarting = function(sc, searcher, query) {}
App.prototype.OnKeep = function(result) {}
function method_closure(object, method, opt_argArray) {
return function() {
return method.apply(object, opt_argArray);
}
}
GSearch.setOnLoadCallback(OnLoad);
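// Pop queries off the stack one at a time; OnSearchComplete calls back
// into executePending, so the whole stack drains asynchronously.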
function executePending() {
while (pendingQueries.length > 0 && counts[pendingQueries[pendingQueries.length - 1]] !== undefined) {
// Skip over the ones we've done
pendingQueries.pop();
appendStatus(".");
}
if (pendingQueries.length > 0) {
q = pendingQueries.pop();
currentQuery = q;
searchControl.cancelSearch();
searchControl.execute(q);
}
else {
allDone();
}
}
function message(m) {
document.getElementById('messages').innerHTML += m + "<br>";
}
function alertMsg(m) {
document.getElementById('alertBox').innerHTML += "<font color=red>[!]</font> " + m;
}
function clearAlert() {
document.getElementById('alertBox').innerHTML = "";
}
function appendStatus(m) {
document.getElementById('statusBox').innerHTML += m;
}
function setStatus(m) {
document.getElementById('statusBox').innerHTML = m;
}
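// Substitute a domain value for the underscore placeholder(s) in the
// user's template, e.g. "I was sick on _" with "monday".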
function formatQuery(x) {
return template.replace(underscores, x);
}
function execute() {
pendingQueries = new Array();
template = document.getElementById("inputTemplate").value;
clearAlert();
if (template.indexOf("_") < 0) {
alertMsg("You didn't enter a _ in the first text field.");
}
setStatus("Getting data");
document.getElementById('loadingImg').innerHTML = '<img src="http://brighten.bigw.org/youinfinitesnake/loading_indicator.gif" border=0>';
domainString = document.getElementById("inputDomain").value;
if (domainString.match(/^([0-9]+)-([0-9]+)$/)) {
isRangeQuery = true;
var endpoints = domainString.match(/[0-9]+/g);
var start = parseInt(endpoints[0]);
var end = parseInt(endpoints[1]);
var stepSize = Math.max(1, Math.ceil((end - start + 1) / maxPoints));
if (stepSize > 1) {
alertMsg("Exceeded limit of " + maxPoints + " data points; I'll plot " + start + ", " + (start + stepSize) + ", " + (start + 2*stepSize) + ", ...");
}
domain = new Array();
var j = 0;
for (i = start; i <= end; i += stepSize) {
domain[j] = i;
pendingQueries[j] = formatQuery(i);
j++;
}
}
else {
isRangeQuery = false;
domainString = domainString.replace(/,\s*$/, '');
domain = domainString.split(",");
for (i = 0; i < domain.length; i++) {
domain[i] = trim(domain[i]);
q = formatQuery(domain[i]);
pendingQueries[i] = q;
}
}
pendingQueries.reverse();
executePending();
}
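// All queries have completed: scale the counts and build a Google Chart
// API URL (line chart for numeric ranges, bar chart for lists).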
function allDone() {
appendStatus("done.");
document.getElementById('loadingImg').innerHTML = '';
max_y = 0;
var y_values = new Array();
for (i = 0; i < domain.length; i++) {
y_values[i] = counts[formatQuery(domain[i])];
if (y_values[i] > max_y) {
max_y = y_values[i];
}
}
var imgURL;
if (isRangeQuery) {
imgURL = "http://chart.apis.google.com/chart?chxr=0,0," + max_y + "|1," + domain[0] + "," + domain[domain.length - 1] + "&chxt=y,x&chs=400x300&cht=lc&chco=3D7930&chd=" + extendedEncode(y_values,max_y) + "&chg=-1,-1,1,1&chls=3,4,0&chm=B,C5D4B5BB,0,0,0&chtt=" + escape("Google results for " + template);
}
else {
var xlabels = "";
var maxLen = Math.max(1, Math.floor(70 / domain.length));
for (i = 0; i < domain.length; i++) {
if (domain[i].length > maxLen) {
xlabels += "|" + domain[i].substring(0,maxLen-1);
}
else {
xlabels += "|" + escape(domain[i]);
}
}
imgURL = "http://chart.apis.google.com/chart?chxr=1,0," + max_y + "&chxt=x,y&chxl=0:" + xlabels + "&chbh=a,0,2&chs=400x300&cht=bvg&chco=A2C180&chd=" + extendedEncode(y_values,max_y) + "&chtt=" + escape("Google results for " + template);
}
document.getElementById("resultPlot").src = imgURL;
document.getElementById("plotPermalink").href = imgURL;
}
function checkEnter(e){
var characterCode;
if (e && e.which) { characterCode = e.which }
else { e = event; characterCode = e.keyCode }
if (characterCode == 13) { execute(); }
return (characterCode != 13)
}
//]]>
</script>
<p>Here's an app version of <a href="http://xkcd.com/715/">this xkcd comic</a> that lets you plot the frequency of phrases according to Google searches.</p>
<center>
<table border=0 cellspacing=1 cellpadding=0 width=400 bgcolor=#C6C6C6>
<tr><td align=center bgcolor=#E0E0E0>
<table border=0 width=400>
<tr>
<td align=right valign=middle>Plot</td>
<td align=left valign=middle><input type=text value="&quot;my favorite number is _&quot;" size="40" maxlength="80" id="inputTemplate" onKeyPress="return checkEnter()"></td>
</tr><tr>
<td align=right valign=top>for _ in</td>
<td align=left>
<textarea rows=2 cols=40 onKeyPress="return checkEnter()" id="inputDomain">0-30</textarea><br>
<font color="#555555" size="-1">Enter a range like 1-10 or<br>a list like monday,tuesday,wednesday</font>
</td>
</tr><tr>
<td colspan=2 align=center><input type=button onclick="execute();" value="Execute"></td>
</tr>
</table>
</td></tr>
<tr><td bgcolor=#E0E0E0>
<table border=0 width=400><tr><td width=30>
<div id="loadingImg" style="background-color:#E0E0E0; width:30px; height:30px;"></div>
</td><td>
<div id="statusBox" style="background-color:#E0E0E0;"></div>
</td></tr>
<tr><td></td><td><div id="alertBox" style="background-color:#E0E0E0;"></div></td></tr>
</table>
</td></tr>
<tr><td align=center>
<img id="resultPlot" src="http://chart.apis.google.com/chart?chxr=0,0,151|1,0,30&chxt=y,x&chs=400x300&cht=lc&chco=3D7930&chd=e:CHPrOaqYXvYKdq8L..dPLBPrQGoQIeMtHoLcGWEqC9KLIeYKhDHoDZGWEqBsBs&chg=-1,-1,1,1&chls=3,4,0&chm=B,C5D4B5BB,0,0,0&chtt=Google%20results%20for%20%22my%20favorite%20number%20is%20_%22" width=400 height=300>
<br><a href="http://chart.apis.google.com/chart?chxr=0,0,151|1,0,30&chxt=y,x&chs=400x300&cht=lc&chco=3D7930&chd=e:CHPrOaqYXvYKdq8L..dPLBPrQGoQIeMtHoLcGWEqC9KLIeYKhDHoDZGWEqBsBs&chg=-1,-1,1,1&chls=3,4,0&chm=B,C5D4B5BB,0,0,0&chtt=Google%20results%20for%20%22my%20favorite%20number%20is%20_%22" id="plotPermalink">Permalink to this chart</a>
</td></tr>
</table>
</center>
<p>Some examples below the fold.</p>
<a name='more'></a>
<h4>Sanity checks</h4>
<center><p><img src="http://chart.apis.google.com/chart?chxr=0,0,512|1,1,15&chxt=y,x&chs=400x300&cht=lc&chco=3D7930&chd=e:__IgFoGQ..DgCgHwCwlAEAFYAoAQAI&chg=-1,-1,1,1&chls=3,4,0&chm=B,C5D4B5BB,0,0,0&chtt=Google%20results%20for%20%22i%20have%20_%20fingers%22"></p>
<p><a href="http://www.youtube.com/watch?v=x4o-TeMHys0"><img border=0 src="http://chart.apis.google.com/chart?chxr=1,0,167000&chxt=x,y&chxl=0:|heat|price|cost|tuition|rent&chbh=a,0,2&chs=400x300&cht=bvg&chco=A2C180&chd=e:AAAAAAAA..&chtt=Google%20results%20for%20%22the%20_%20is%20too%20damn%20high%22"></a></p></center>
<h4>How convenient.</h4>
<center><img src="http://chart.apis.google.com/chart?chxr=1,0,81&chxt=x,y&chxl=0:|sunday|monday|tuesday|wednesday|thursday|friday|saturday&chbh=a,0,2&chs=400x300&cht=bvg&chco=A2C180&chd=e:h-..W6fmeBxxgZ&chtt=Google%20results%20for%20%22I%20was%20sick%20on%20_%22"></center>
<h4>Politics</h4>
<center>
<p><img src="http://chart.apis.google.com/chart?chxr=1,0,123&chxt=x,y&chxl=0:|obama|mccain|palin|toomey|sestak&chbh=a,0,2&chs=400x300&cht=bvg&chco=A2C180&chd=e:..LcBC__CF&chtt=Google%20results%20for%20%22i%20donated%20to%20_%22">
<p><img src="http://chart.apis.google.com/chart?chxr=1,0,134&chxt=x,y&chxl=0:|republican|democrat|conservati|liberal|independent|tea%20party&chbh=a,0,2&chs=400x300&cht=bvg&chco=A2C180&chd=e:..xMKgCYBbA9&chtt=Google%20results%20for%20%22i%20will%20vote%20for%20a%20_%22"><br>[Thanks: Bryan]</p>
</center>
<h4>Most frequent birth year: 1982</h4>
<center><img src="http://chart.apis.google.com/chart?chxr=0,0,2100|1,1880,2010&chxt=y,x&chs=400x300&cht=lc&chco=3D7930&chd=e:AjAfAnAVATAXANA0AdAdBAAdBIAbAyBfBhA0AwBVBnCaBnBfBbBrBtBzCGBfD5DBCMCxD.DSCxDaD.EYGGE.PiFuFbG6GEGyH1HfKJHEIeJLUgLJJyLTOUOQQRPaRXUUQDSjbRa6YQaoh1aifsZYgnbljqcdcSdYoOeHgnauiIfYbricfslLu7gnjqexj9o1g6rRkRnA2Po1..oOlyqqhhoidKcvmGcSYnSxSlQhNOIiFhDgCSBvBdA4A0BABEAqBKAbA0&chg=-1,-1,1,1&chls=3,4,0&chm=B,C5D4B5BB,0,0,0&chtt=Google%20results%20for%20%22I%20was%20born%20in%20___%22"></center>
<h4>What you've got</h4>
<center><img src="http://chart.apis.google.com/chart?chxr=1,0,15000&chxt=x,y&chxl=0:|normal|beautiful|thin|smart|stupid|fat&chbh=a,0,2&chs=400x300&cht=bvg&chco=A2C180&chd=e:A3BiGGNLVa..&chtt=Google%20results%20for%20%22I%27m%20too%20_%22"></center>
<h4>Mind your phone</h4>
<center><img src="http://chart.apis.google.com/chart?chxr=1,0,48500&chxt=x,y&chxl=0:|nerve|house|job|keys|money|phone|wife|husband|mind|wallet&chbh=a,0,2&chs=400x300&cht=bvg&chco=A2C180&chd=e:CICz..DYEAoPF-HvZNII&chtt=Google%20results%20for%20%22i%20lost%20my%20_%22"></center>
<h4>Procreation</h4>
<center><img src="http://chart.apis.google.com/chart?chxr=0,0,17400|1,0,15&chxt=y,x&chs=400x300&cht=lc&chco=3D7930&chd=e:AGAQ..79fHPqFsC.ByBFBAAjAgARAKAM&chg=-1,-1,1,1&chls=3,4,0&chm=B,C5D4B5BB,0,0,0&chtt=Google%20results%20for%20%22I%20have%20_%20children%22"></center>
<center><img src="http://chart.apis.google.com/chart?chxr=0,0,498|1,0,15&chxt=y,x&chs=400x300&cht=lc&chco=3D7930&chd=e:____00..8h9Ct4aFeMRmW4RmOpKaHtIG&chg=-1,-1,1,1&chls=3,4,0&chm=B,C5D4B5BB,0,0,0&chtt=Google%20results%20for%20%22I%20have%20_%20grandchildren%22"></center>
<p>Post permalinks to your favorites in the comments below.</p>
<p>Disclaimers: The number of search results reported by Google's search API is known to be occasionally bogus and is not a reliable indicator of anything in particular. Also, there's some bug here if you do a range query with only a single value. Finally, an exclamation point in the status message indicates an error in one query or the lack of any results.</p>
<div id="searchcontrol" style="display:none;"></div>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com9tag:blogger.com,1999:blog-8216930037889709643.post-26613487247088975492010-10-19T18:11:00.000-07:002010-10-19T18:14:26.508-07:00Ig Nobel candidate"University of Ljubljana researcher Borut Povse is conducting experiments in which a robot limb repeatedly hits human volunteers on the arm to evaluate human-robot pain thresholds in order to facilitate adherence to Isaac Asimov's first law of robotics, which prohibits robots from injuring people." [<a href="http://technews.acm.org/archives.cfm?fo=2010-10-oct%2Foct-18-2010.html#487652">ACM TechNews</a>]Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com1tag:blogger.com,1999:blog-8216930037889709643.post-13628492836967927262010-10-02T23:27:00.000-07:002010-10-02T23:29:38.977-07:00Public reviewing<p>The New York Times has a <a href="http://www.nytimes.com/2010/08/24/arts/24peer.html">piece</a> on public review, even to the point of crowdsourcing, as a partial alternative to peer review for scholarly publications. The interesting bit is that at least one journal, Shakespeare Quarterly, has tested this open review process. You can <a href="http://mediacommons.futureofthebook.org/mcpress/ShakespeareQuarterly_NewMedia/">view the submitted papers and discussion</a> (including a paper providing an information-theoretic analysis of Shakespeare). The interface seems well designed and allows commenting on individual paragraphs.</p>
<p>There are doubtless situations where this opens the reviewing process to trolls, flamewars, or inerudite remarks. On the other hand, assuming the comments are used to help inform a final judgement by experts, there could be advantages. Reviewing a paper is sometimes like an <a href="http://en.wikipedia.org/wiki/NP_(complexity)">NP</a> search problem: the hard part is finding the contributions and weaknesses. Public review could be seen as using crowdsourcing to tackle the search problem. Certain comments would be easily verifiable by an expert, even without relying on the trustworthiness of the anonymous commenter, yet would not necessarily have been noticed by any particular expert. (The same easy-verification property is one reason Wikipedia is useful even when you're looking for a reliable answer to a question.)</p>
<p>In computer networking research, the closest we come to a collaborative, real-time form of reviewing is <a href="http://www.sigcomm.org/learn/computer-communication-review/">Computer Communication Review</a> which has just recently started returning reviews to authors as they are submitted, and allowing authors to comment on the reviews.</p>
<p><b>Unrelated fun ...</b> here's <a href="http://seaquence.org/about/">Seaquence</a>, a clever visualization of musical composition.</p>

<h3>Programming Language Wars: The Movie (2010-09-14)</h3>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load('visualization', '1', {'packages':['motionchart']});
google.setOnLoadCallback(initChart);
var data;
var chart;
var options;
function initChart() {
data = new google.visualization.DataTable();
data.addColumn('string', 'Language');
data.addColumn('number', 'System');
data.addColumn('number', 'Source code length (gzipped)');
data.addColumn('number', 'CPU time');
data.addColumn('number', 'Real time');
data.addColumn('number', 'Memory');
data.addRows([
['Ada 2005 GNAT', 1, 2.49046321525886, 2.22030051296019, 2.22079455164586, 3.24460431654676],
/* ['ATS', 1, 5.87123287671233, 1.08113257127841, 1.07997505907062, 1.54545454545455], */
['Ruby MRI', 1, 1.08327085079169, 203.635186997043, 203.538525997549, 6.86930576321142],
['C GNU gcc', 1, 2.76319610000612, 1.11410201082652, 1.11390393223971, 1.83789151219567],
['Lua LuaJIT', 1, 1.33319342945553, 3.41951865501443, 3.41769056131967, 4.76855234557221],
['Smalltalk VisualWorks', 1, 1.96385542168675, 15.5231700050419, 15.5201649862511, 45.5957446808511],
['Python CPython', 1, 1.23698090162114, 40.2360585552232, 39.8718332039757, 6.36814107571503],
['Scala', 1, 2.1135225375626, 2.25225838667211, 2.25580022701476, 48.1914893617021],
['Python PyPy', 1, 1.1320286308243, 29.4562501597268, 29.4520817853089, 21.1568942474479],
['Python IronPython', 1, 1.1320286308243, 94.9420970783002, 94.5160076268887, 60.2374645737955],
['Clean', 1, 1.94939740998577, 2.9739290914029, 3.06056335619509, 6.79450757575757],
['Erlang HiPE', 1, 1.68373493975904, 7.91854975477838, 7.91700274977085, 9.66906474820144],
['Fortran Intel', 1, 2.12891986062718, 2.17351598173516, 2.17579908675799, 1.65151515151515],
['C# Mono', 1, 2.34243784817494, 3.95123977478984, 3.89482115951512, 8.37798219922087],
['Perl', 1, 1.15963855421687, 91.2764887857695, 91.4549845440495, 9.79982948663297],
['C++ GNU g++', 1, 3.27330417323964, 1.09552032289321, 1.11630135378967, 2.37266787264421],
['Pascal Free Pascal', 1, 2.29385964912281, 2.1025641025641, 3.06829515679148, 1],
['Ruby 1.9', 1, 1.07594936708861, 58.5446160603177, 58.5044629792215, 7.63829787234043],
['F# Mono', 1, 2.07123287671233, 3.19433474056925, 3.19645856980704, 23.6515151515152],
['OCaml', 1, 1.45297121855758, 4.18870111198282, 4.22536601853578, 3.10347466707485],
['Go 6g 8g', 1, 1.83561643835616, 3.70685840707965, 4.54052511415525, 3.51063829787234],
['JavaScript TraceMonkey', 1, 1.13003095975232, 30.9263698630137, 30.9252283105023, 10.7279577258049],
['Java 6 -server', 1, 3.03791469194313, 2.35840707964602, 2.38126843657817, 34.3723404255319],
['Ruby JRuby', 1, 1.09059233449477, 38.4696163447642, 38.4476896823313, 290.431654676259],
['Java 6 -Xint', 1, 2.87616099071207, 19.8946886446886, 27.8594867094409, 27.8191489361702],
['Racket', 1, 1.69250180245133, 6.26768946882237, 6.26642330106626, 38.0824275362318],
['Lua', 1, 1.23847655430083, 24.5546778561873, 29.9254086889846, 4.54654460638845],
['JavaScript V8', 1, 1.16470588235294, 7.56996837328689, 7.5685609532539, 11.6702127659574],
['Mozart/Oz', 1, 1.49122807017544, 47.8086809151681, 47.7472258792552, 18.2727272727273],
['Python 3', 1, 1.25829127613554, 43.6493394271517, 43.340599133295, 7.99808391518318],
['Haskell GHC', 1, 1.92622950819672, 1.95268361581921, 1.95079088999737, 5.38693386515697],
['PHP', 1, 1.39859835219679, 81.6943635676469, 81.6026414244733, 17.4845261121857],
['Java 6 steady state', 1, 2.72054942133695, 1.57164189296655, 1.65501018035128, 59.060309199449],
['Lisp SBCL', 1, 2.71052631578947, 3.98901098901099, 3.9006266786034, 20.8769470404984],
['Perl', 4, 1.28631051752922, 107.31740123509, 58.9649489216799, 11.3617021276596],
['Ada 2005 GNAT', 4, 3.77710843373494, 2.22102682827182, 1.17410632447296, 3.30215827338129],
['C# Mono', 4, 2.34243784817494, 3.99070834864628, 2.56176821668685, 8.58570534876447],
['Fortran Intel', 4, 2.12891986062718, 2.16666666666667, 2.1689497716895, 1.66666666666667],
/*['ATS', 4, 5.87123287671233, 1.08034423860202, 1.01035747021082, 1.70798601774671],*/
['Pascal Free Pascal', 4, 2.41232227488152, 3.08586414008967, 3.08334037697574, 1],
['C++ GNU g++', 4, 2.95041588986089, 1.12677344898508, 0.876784005951452, 2.39157330125182],
['C GNU gcc', 4, 2.76319610000612, 1.11587410119731, 1.00217762489001, 1.82102835665051],
['F# Mono', 4, 2.07123287671233, 3.19560579236461, 2.02868927589368, 23.6515151515152],
['Python CPython', 4, 1.23698090162114, 52.9688056076098, 21.4093599182969, 6.37170774920165],
['OCaml', 4, 1.79966611018364, 3.6324200913242, 2.89757103574702, 2.74820143884892],
['Go 6g 8g', 4, 1.83561643835616, 3.69898769803441, 4.53995433789954, 3.46808510638298],
['Ruby JRuby', 4, 1.07594936708861, 77.8747765503965, 46.967596506058, 195.932806324111],
['Java 6 -server', 4, 3.03791469194313, 3.09523809523809, 1.63989856297549, 34.1936758893281],
['Scala', 4, 2.13080168776371, 2.71660084434155, 1.51229148375768, 49.1808510638298],
['Java 6 -Xint', 4, 2.87616099071207, 19.8397435897436, 19.4565801253357, 21.498023715415],
['Python 3', 4, 1.42714007655028, 52.5521304242726, 27.1868700121705, 7.35390461152788],
['Haskell GHC', 4, 1.92622950819672, 3.90042536736272, 2.57593688362919, 4.73381294964029],
['PHP', 4, 1.42718060455142, 81.8553290850075, 44.2544899554814, 17.47695035461],
['Erlang HiPE', 4, 1.68373493975904, 18.1739071224295, 8.35889092575619, 11.2446043165468],
['Java 6 steady state', 4, 2.68717067474295, 1.60217854697841, 1.33917800512992, 52.6532671768565],
['Lisp SBCL', 4, 3.02397260273973, 4.003663003663, 3.91136974037601, 19.5606060606061],
]);
chart = new google.visualization.MotionChart(document.getElementById('chart_div'));
options = {};
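// The serialized 'state' strings below configure the Motion Chart: by
// default, x = gzipped source length (column 2), y = real time (column 4),
// bubble size and color = memory (column 5), with trails enabled.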
options['state'] = '{"xZoomedDataMax":3.77710843373494,"yLambda":0,"yAxisOption":"4","sizeOption":"5","orderedByX":false,"yZoomedDataMax":203.538525997549,"orderedByY":false,"duration":{"multiplier":1,"timeUnit":"Y"},"nonSelectedAlpha":0.4,"time":"1901","colorOption":"5","iconKeySettings":[],"yZoomedIn":false,"xZoomedDataMin":1.07594936708861,"yZoomedDataMin":0.876784005951452,"playDuration":2000,"xZoomedIn":false,"uniColorForNonSelected":false,"xAxisOption":"2","showTrails":true,"iconType":"BUBBLE","xLambda":1,"dimensions":{"iconDimensions":["dim0"]}}';
options['width'] = 600;
options['height'] = 400;
options['showChartButtons'] = true;
chart.draw(data, options);
}
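// Re-draw with x = CPU time (column 3), keeping y = real time, so the
// animation highlights which implementations exploit multiple cores.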
function plotCPUvsTime() {
options['state'] = '{"xZoomedDataMax":203.635186997043,"yLambda":0,"yAxisOption":"4","sizeOption":"5","orderedByX":false,"yZoomedDataMax":203.538525997549,"orderedByY":false,"duration":{"multiplier":1,"timeUnit":"Y"},"nonSelectedAlpha":0.4,"time":"1901","colorOption":"5","iconKeySettings":[],"yZoomedIn":false,"xZoomedDataMin":1.09552032289321,"yZoomedDataMin":0.876784005951452,"playDuration":2000,"xZoomedIn":false,"uniColorForNonSelected":false,"xAxisOption":"3","showTrails":true,"iconType":"BUBBLE","xLambda":0,"dimensions":{"iconDimensions":["dim0"]}}';
chart.draw(data, options);
}
function resetToDefaults() {
options['state'] = '{"xZoomedDataMax":3.77710843373494,"yLambda":0,"yAxisOption":"4","sizeOption":"5","orderedByX":false,"yZoomedDataMax":203.538525997549,"orderedByY":false,"duration":{"multiplier":1,"timeUnit":"Y"},"nonSelectedAlpha":0.4,"time":"1901","colorOption":"5","iconKeySettings":[],"yZoomedIn":false,"xZoomedDataMin":1.07594936708861,"yZoomedDataMin":0.876784005951452,"playDuration":2000,"xZoomedIn":false,"uniColorForNonSelected":false,"xAxisOption":"2","showTrails":true,"iconType":"BUBBLE","xLambda":1,"dimensions":{"iconDimensions":["dim0"]}}';
chart.draw(data, options);
}
</script>
<p>In computer science and hacker circles, the programming language wars have, it seems, been raging since the <a href="http://www.fh-jena.de/~kleine/history/languages/Kernighan-WhyPascalIsNotMyFavoriteProgrammingLanguage.pdf">beginning of time</a>. A little electronic archaeology reveals some amusing exchanges:</p>
<ul>
<li><tt>"By all means create your own dialect of FORTH. While your at it, you can add the best features of PL-I, F77 and CORAL66. Then, look me up when you get out of college and we'll show you how it's done when you have to make a living"</tt> [<a href="http://groups.google.com/group/net.works/browse_thread/thread/5e34456a547c318e/374288b615fa6436">1985 thread</a>]</li>
<li><tt>"This debate ... is very much like two engineers engaged in building a three-mile bridge arguing over the brand of blue-print paper they use."</tt> [<a href="http://groups.google.com/group/comp.sys.apple/browse_thread/thread/3c701294e71201a7/b53de53f0ef838be">1987 thread</a>]</li>
</ul>
<p>Passionate arguments can often be improved by actual measurements. How fast, expressive, and efficient is a particular language? That's what <a href="http://shootout.alioth.debian.org/">The Computer Language
Benchmarks Game</a> set out to provide, measuring time, source code length, and memory use of several dozen languages across a set of benchmarks.</p>
<p>If you have measurements, why not improve them with a visualization? And so I present to you an interactive, multi-dimensional, dynamic, whizbang-o-matic rendering of the Programming Language Wars.</p>
<a name='more'></a>
<div id="chart_div" style="width: 550px; height: 400px;"></div>
<p>Each circle is a language. Its horizontal position represents the <b>gzipped source code size</b> used to implement the benchmarks, which is intended to measure the language's "expressiveness". Its vertical position represents the <b>real time</b> used to execute the
benchmarks, and its size (and color) indicate how much <b>memory</b> was used.</p>
<p>The cluster of languages in the top left are slow but expressive scripting languages. At the bottom right you will find C and C++, the fastest languages, but which take quite a bit more coding to get the job done. In between there is a tradeoff between speed and expressiveness, where lie languages like <a href="http://en.wikipedia.org/wiki/Objective_Caml">OCaml</a> (which I happen to use whenever possible).</p>
<p>Actually, each point is only a summary of the language's performance: Consider some metric, like real time, and some particular language L. The Benchmarks Game folks ran implementations of a set of about 12 benchmarks (FASTA, Mandelbrot, ...) in L. L's time for each benchmark is divided by the best time across all languages for that benchmark. This gives us a normalized score for each benchmark; we take the median of these to produce a summary real time score for L. Then we do the same for the other metrics: CPU time, source code length, and memory.</p>
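<p>In code, that summary is short. Here's a sketch in JavaScript (the data layout is mine, and it assumes every language has a time for every benchmark, which isn't quite true of the real data):</p>
<pre>
// times[language][benchmark] = seconds on that benchmark.
function summaryScores(times) {
  var langs = Object.keys(times);
  var benchmarks = Object.keys(times[langs[0]]);
  // Best (smallest) time per benchmark across all languages.
  var best = {};
  benchmarks.forEach(function(b) {
    best[b] = Math.min.apply(null, langs.map(function(l) { return times[l][b]; }));
  });
  // A language's score is the median of its normalized ratios.
  var scores = {};
  langs.forEach(function(l) {
    var r = benchmarks.map(function(b) { return times[l][b] / best[b]; })
                      .sort(function(x, y) { return x - y; });
    var n = r.length;
    scores[l] = n % 2 ? r[(n - 1) / 2] : (r[n / 2 - 1] + r[n / 2]) / 2;
  });
  return scores;
}
</pre>
<p>The same computation is repeated for each of the other metrics to place the bubbles.</p>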
<p>The plot shows data for a <b>single-core</b> x86 box (assuming you haven't yet messed around with the controls). If you press the movie button in the bottom left, it will transition to results on a <b>quad-core</b> box. (Still normalized by the best single-core score. The labels say 1901 and 1904 since Google's API wants dates.) <b>TIP:</b> When you play the animation, select a few languages you're interested in and check the Trails checkbox, so the movement stands out.</p>
<p>To better visualize which languages' implementations took advantage of parallelism, <INPUT TYPE=BUTTON OnClick="plotCPUvsTime();" VALUE="plot real time vs. CPU"> and then click Play. The languages that move downward have improved their real time. Some stay in the same spot, probably indicating that the Computer Language Benchmarks Game doesn't have the best implementations. <INPUT TYPE=BUTTON OnClick="resetToDefaults();" VALUE="Reset to defaults"></p>
<h4>Fine. Just tell me which language is best.</h4>
<p>These benchmarks are <i>almost certainly not representative of what you want to do.</i> There are various flaws in this approach &mdash; how we choose to summarize (the median here) will affect the ordering of languages; the implementations are not perfect; some languages are missing implementations for some benchmarks; even for one language there are many possible implementations with different tradeoffs and only the fastest was tested; and so on. Perhaps most significantly, we're completely lacking important metrics like programmer time, maintainability, and bugginess.</p>
<p>Thus, just as someone out there thinks <a href="http://www.spanglercandy.com/spangler/products/circuspeanuts.php">Circus Peanut Gelatin pie</a> is a good idea, so most of these languages are the right tool for <i>some</i> job. We can't use these benchmarks to brand a language as useless. What I think the benchmarks and visualization <i>can</i> do is introduce you to general-purpose languages that may be a better solution for many tasks.</p>
<p>In particular, you might want to take a gander at the <a href="http://en.wikipedia.org/wiki/Pareto_efficiency">Pareto-optimal</a> languages: those which, for every other language L, are better than L in at least one metric under consideration; equivalently, no other language beats them on every metric at once. If we consider source code length and real time as the two metrics, then the Pareto-optimal languages are:</p>
<table align=center cellspacing=10>
<tr><th>1 core</th><th></th><th>4 cores</th></tr>
<tr>
<td>
<table border=0>
<tr><th style="background-color:#000000;color:#FFFFFF" width=200>More expressive</th></tr>
<tr><td bgcolor="#EFFF6F">Ruby 1.9</td></tr>
<tr><td bgcolor="#DFFF7F">Ruby JRuby</td></tr>
<tr><td bgcolor="#CFFF8F">Javascript TraceMonkey</td></tr>
<tr><td bgcolor="#BFFF9F">Python PyPy</td></tr>
<tr><td bgcolor="#AFFFAF">JavaScript V8</td></tr>
<tr><td bgcolor="#9FFFBF">Lua LuaJIT</td></tr>
<tr><td bgcolor="#8FFFCF">Haskell GHC</td></tr>
<tr><td bgcolor="#7FFFDF">Java 6 SteadyState</td></tr>
<tr><td bgcolor="#6FFFEF">C GNU gcc</td></tr>
<tr><th style="background-color:#000000;color:#FFFFFF" width=200>Faster</th></tr>
</table>
</td>
<td width=30></td>
<td>
<table border=0>
<tr><th style="background-color:#000000;color:#FFFFFF" width=200>More expressive</th></tr>
<tr><td bgcolor="#FFFF6F">Ruby JRuby</td></tr>
<tr><td bgcolor="#EFFF7F">Python CPython</td></tr>
<tr><td bgcolor="#DFFF8F">Erlang HiPE</td></tr>
<tr><td bgcolor="#CFFF9F">OCaml</td></tr>
<tr><td bgcolor="#BFFFAF">Haskell GHC</td></tr>
<tr><td bgcolor="#AFFFBF">F# Mono</td></tr>
<tr><td bgcolor="#9FFFCF">Scala</td></tr>
<tr><td bgcolor="#8FFFDF">Java 6 Steady State</td></tr>
<tr><td bgcolor="#7FFFEF">C GNU gcc</td></tr>
<tr><td bgcolor="#6FFFFF">C++ GNU c++</td></tr>
<tr><th style="background-color:#000000;color:#FFFFFF" width=200>Faster</th></tr>
</table>
</td>
</tr>
</table>
<p>From top to bottom, these languages trace the best points in the tradeoff between expressiveness (at top) and speed (at bottom). Perhaps what this does best is to illustrate why it is hard to pick a "best" language. For the single-core case, 27% of the languages are on the list above; for quad-core, 48% made the cut. <b>Even with just two simple metrics, many languages might be considered "optimal"!</b></p>
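<p>Computing that two-metric frontier is itself only a few lines. A sketch, again with a data layout of my own choosing (lower is better on both axes):</p>
<pre>
// points[lang] = [codeSize, realTime]. Keep a language unless some other
// language is at least as good on both metrics and strictly better on one.
function paretoFrontier(points) {
  var langs = Object.keys(points);
  return langs.filter(function(l) {
    return !langs.some(function(m) {
      if (m === l) return false;
      var a = points[m], b = points[l];
      return a[0] <= b[0] && a[1] <= b[1] && (a[0] < b[0] || a[1] < b[1]);
    });
  });
}
</pre>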
<p>Coming soon: a visualization of the 3D tradeoff space.</p>
<h4>Notes</h4>
<p>Last year, <a href="http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependability-of.html">Guillaume Marceau</a> produced informative plots based on the The Computer Language Benchmarks Game, similarly showing the tradeoffs between metrics. The CLBG has now <a href="http://shootout.alioth.debian.org/u32q/code-used-time-used-shapes.php">included similar plots on their site</a>. <i>[Updated: the CLBG didn't use Marceau's plots directly.]</i> The visualization here summarizes more (which can be good and bad), includes the memory metric and the quad-core data, and lets you interactively play with the data thanks to <a href="http://code.google.com/apis/visualization/documentation/gallery/motionchart.html">Google Motion Charts</a>. A chat with <a href="http://www.cs.brown.edu/people/rfonseca/">Rodrigo Fonseca</a> several years ago got me interested in plotting these tradeoffs. Finally, my apologies to those without Flash for the chart.</p>Brighten Godfreyhttps://plus.google.com/102845740959911684459noreply@blogger.com20tag:blogger.com,1999:blog-8216930037889709643.post-6049529652126409392010-08-30T04:54:00.000-07:002012-07-17T11:13:21.883-07:00What happened to the Internet on Friday<p style="border-style:solid; border-width:1px;"><b>Note to readers:</b> <i>Judging from the past, this blog will have posts related to both computer science and politics. If you like, you can view <a href="http://youinfinitesnake.blogspot.com/search/label/cs">just CS</a> or <a href="http://youinfinitesnake.blogspot.com/search/label/politics">just politics</a> posts, or subscribe to feeds for <a href="http://youinfinitesnake.blogspot.com/feeds/posts/default/-/cs">just CS</a> or <a href="http://youinfinitesnake.blogspot.com/feeds/posts/default/-/politics">just politics</a>.</i></p>
<p>On Friday, a large disruption of Internet traffic <a href="http://www.networkworld.com/news/2010/082710-research-experiment-disrupts-internet-for.html">made the news</a> as an experiment gone awry. What actually happened? It's a good lesson in how fragile and insecure the Internet's routing protocol can actually be.</p>
<a name='more'></a>
<p>There was indeed a major event on Friday. A plot by Earl Zmijewski of Renesys <a href="http://www.renesys.com/blog/2010/08/house-of-cards.shtml">shows</a> that at the moment the experiment started &mdash; 8:41 GMT &mdash; about 3,000 IP prefixes became unstable. That is, the routes to these prefixes were quickly changing or being advertised and withdrawn. (An IP prefix is a chunk of destination IP addresses, the basic unit on which Internet routing operates.) Since there are roughly 300,000 prefixes announced globally, this is about 1% of the prefixes on the entire Internet.</p>
<p>We can also observe the effects by looking at the total amount of "chatter" in the Internet's global routing protocol, BGP. I created the following graphs based on raw data from the <a href="http://www.routeviews.org/">Route Views project</a>.</p>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_QqyK36WvRNs/THuc8SrNpPI/AAAAAAAAAG8/IZi1kJC_0N4/s1600/updates-linx.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 280px;" src="http://4.bp.blogspot.com/_QqyK36WvRNs/THuc8SrNpPI/AAAAAAAAAG8/IZi1kJC_0N4/s400/updates-linx.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5511171128637105394" /></a>
<p>This plot shows the rate of BGP messages received by one particular router, located at the <a href="http://en.wikipedia.org/wiki/London_Internet_Exchange">London Internet Exchange</a>. Routers are continually exchanging messages about new, changing, or unavailable routes to destinations all around the world. However, as you can see, the event in question vastly increased the rate of routing updates, exceeding the "background radiation" of messages by about a factor of 6.</p>
<p>The event was visible globally. Here is the same plot, for a router at Equinix in Ashburn, VA.</p>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_QqyK36WvRNs/THudHsm92SI/AAAAAAAAAHE/HKqqaTwKte0/s1600/updates-eqix.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 280px;" src="http://1.bp.blogspot.com/_QqyK36WvRNs/THudHsm92SI/AAAAAAAAAHE/HKqqaTwKte0/s400/updates-eqix.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5511171324577175842" /></a>
<p>How can a disruption of this magnitude happen? Based on a <a href="http://www.merit.edu/mail.archives/nanog/msg11505.html">note</a> from RIPE, it went something like this:</p>
<ol>
<li>Researchers at <a href="http://www.ripe.net/info/ncc/index.html">RIPE</a> and Duke create a BGP announcement message, which advertises the availability of an IP prefix under their control. The message uses an unusual format, but one which complies with the BGP protocol format.</li>
<li>The message begins to propagate from router to router on the Internet, as normal, until...</li>
<li>...it reaches some router running the Cisco IOS XR software. These routers have (or had) buggy software which, upon receipt of the unusual message, corrupted the message before propagating it to other neighboring routers.</li>
<li>A neighboring router (call it N) has now received a malformed message from the Cisco router (call it C). N then follows the BGP protocol specifications which <a href="http://tools.ietf.org/html/rfc4271#section-6">require</a> that N terminates its BGP connection to C. This disrupts traffic to <b>any destination</b> which N reached via C (and vice versa) &mdash; not just traffic to the prefix originally announced by RIPE!</li>
<li>It is likely (depending on router configuration, so I'm not sure how common this is) that either C or N then attempts to re-establish the BGP connection. In this case, C re-advertises every route it knows about to N &mdash; perhaps all 300,000 of them. And one of these would presumably be the corrupted message, causing the connection to again be terminated, and the process to repeat indefinitely.</li>
</ol>
<p>It's always a good idea to isolate security problems to contain damage. And many BGP problems can be isolated close to the origin of the bad announcement. This event, on the other hand, apparently caused (brief) widespread damage for two reasons.</p>
<ul>
<li>It spread <b>geographically</b> because the original announcement message was entirely valid, and was handled correctly by many routers. Thus, the message could reach buggy routers anywhere on the planet.</li>
<li>It spread to <b>many IP prefixes</b> beyond the original announced prefix, because the BGP protocol spec asserts that if a router sends a bad message for one prefix, it's unsafe to communicate with that router for destinations in <i>any</i> prefix.</li>
</ul>
<p>Similar events <a href="http://www.renesys.com/blog/2009/02/the-flap-heard-around-the-worl.shtml">have</a> <a href="http://www.uknof.org.uk/uknof12/Davidson-4_byte_asn.pdf">occurred</a> in the past.</p>
<p>Despite the headlines ("Research experiment disrupts Internet") on <a href="http://tech.slashdot.org/story/10/08/28/1319211/Duke-Research-Experiment-Disrupts-Internet-Traffic">Slashdot</a> and <a href="http://www.networkworld.com/news/2010/082710-research-experiment-disrupts-internet-for.html">Network World</a> and the Renesys <a href="http://www.renesys.com/blog/2010/08/house-of-cards.shtml">post</a>'s point of view, I find it hard to place blame on the researchers. I assume it was not their intent to stress-test the live Internet. Clearly, one problem was the software bug which Cisco <a href="http://www.cisco.com/en/US/products/products_security_advisory09186a0080b4411f.shtml">quickly acknowledged and fixed</a>.</p>
<p>But we can also think about the protocol design. One way to better isolate the damage might have been for the router N to discard only the single malformed message from C, logging an error message but not terminating the entire BGP session between N and C. The counter-argument is that receipt of one malformed announcement raises the probability that other announcements are malformed, too. Indeed, in this bug, C <a href="http://www.merit.edu/mail.archives/nanog/msg11505.html">apparently</a> declared an incorrect header length; something similar to this could plausibly confuse N's parsing of all subsequent messages from C.</p>
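<p>In pseudocode terms, the two error-handling policies differ by only a couple of lines. This is a sketch of the tradeoff, not of any real router's code; the message-parsing and session helpers are hypothetical:</p>
<pre>
// What RFC 4271 requires: a malformed UPDATE tears down the session,
// implicitly invalidating every route learned from that neighbor.
function onUpdateStrict(session, msg) {
  if (!parseUpdate(msg)) {         // parseUpdate: hypothetical validator
    session.notifyError('UPDATE Message Error');
    session.close();               // all routes via this neighbor are lost
    return;
  }
  session.applyRoutes(msg);
}

// The alternative discussed above: log and discard only the bad message.
// The risk: with corrupted framing (e.g. a bogus header length), every
// subsequent message from this peer may be misparsed as well.
function onUpdateLenient(session, msg) {
  if (!parseUpdate(msg)) {
    session.log('discarding malformed UPDATE');
    return;
  }
  session.applyRoutes(msg);
}
</pre>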
<p>Looking into the much more distant future, a very different approach would be to base routing decisions on end-to-end observable behavior (do my packets actually get through to the destination along this path, or not?) rather than on relatively uninformative and attack-prone control plane announcements. This robustness is one potential benefit of designs like our <a href="http://www.cs.uiuc.edu/homes/pbg/pathlets/">pathlet routing</a> and Xiaowei Yang's <a href="http://www.cs.duke.edu/nds/wiki/user_selected_routes">NIRA</a>.</p>

<p><b>SIGCOMM 2009</b> (June 13, 2009)</p>

<p>For readers interested in networking, I note that the <a href="http://conferences.sigcomm.org/sigcomm/2009/program.php">SIGCOMM 2009 program</a> is now available.</p>
<p>Also available is <a href="http://www.cs.berkeley.edu/~pbg/pathlet_routing-sigcomm09.pdf">Pathlet Routing</a>, our paper with Igor Ganichev, Scott Shenker, and Ion Stoica. Pathlet routing is a new Internet routing architecture which can improve scalability by enabling very small forwarding tables, and can allow senders to choose between multiple paths for improved reliability and path quality. The idea is basically to do <a href="http://en.wikipedia.org/wiki/Source_routing">source routing</a> over a virtual topology whose nodes are arbitrary virtual nodes (vnodes) and whose links are sequences of vnodes (pathlets). Intuitively, this architecture is highly flexible because vnodes can represent arbitrary granularities, and because pathlets can represent policy constraints on routing while simultaneously enabling a large number of path choices. This is because sources can stitch together pathlets to form an end-to-end route in potentially exponentially many ways.</p>
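<p>Here is a back-of-the-envelope sketch of the stitching idea in Python. The vnodes and pathlets below are invented, and the real protocol involves much more (dissemination, forwarding state, policy); this only shows how a source can enumerate end-to-end routes by concatenating pathlets whose endpoints match.</p>
<pre>
# Sketch of pathlet stitching (vnodes and pathlets invented for illustration).
from collections import deque

pathlets = [
    ("a", "b"), ("a", "c"),
    ("b", "d"), ("c", "d"),
    ("d", "e", "f"),            # a multi-hop pathlet
]

def routes(src, dst, max_pathlets=10):
    # Enumerate end-to-end routes by stitching together pathlets whose
    # endpoints match, breadth-first from the source vnode.
    frontier = deque([(src, [])])
    while frontier:
        at, used = frontier.popleft()
        if at == dst:
            yield used
            continue
        if len(used) >= max_pathlets:   # guard against cycles
            continue
        for p in pathlets:
            if p[0] == at:
                frontier.append((p[-1], used + [p]))

for r in routes("a", "f"):
    print("  +  ".join("-".join(p) for p in r))
# a-b  +  b-d  +  d-e-f
# a-c  +  c-d  +  d-e-f
</pre>
<p>Even in this six-vnode toy there are already two end-to-end routes; as pathlets multiply, the number of possible stitchings can grow exponentially while each router stores only its own few pathlets.</p>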
<p>An interesting property of the design is that it doesn't impose a global requirement on what "style" of routing policy is used, but rather allows multiple styles to coexist. One router could choose to have routes like in today's Internet, with a giant forwarding table specifying only a single allowed route to each destination. And the next router could have a tiny forwarding table that still gives the network owner some control, but provides a high degree of path choice for the senders. I think of this as being very much in the spirit of the principle of <i>designing for variation in outcome</i> advocated by Clark et al. in their <a href="http://conferences.sigcomm.org/sigcomm/2002/papers/tussle.pdf">Tussle in Cyberspace</a> paper.</p>

<p><b>Recent results in nonlinear peer-to-peer reviewing algorithms</b> (September 11, 2008)</p>

<p>On Monday I received spam, as many periodically do, from the World Multi-Conference on Systemics, Cybernetics and Informatics, WMSCI '09. Their peer review process having been previously <a href="http://www.nature.com/nature/journal/v434/n7036/full/nature03653.html">demonstrated</a> by Stribling <i>et al.</i> to be susceptible to <a href="http://pdos.csail.mit.edu/scigen/">random paper generation</a>, they have a new strategy:
<blockquote>Submitted papers or extended abstracts will have three kinds of reviews: double-blind (by at least three reviewers), non-blind, and participative <b>peer-to-peer</b> reviews.</blockquote>
(Emphasis mine.) The conference web site further <a href="http://www.iiis2009.org/wmsci/Website/Pptpr.asp?vc=1">describes</a> the peer-to-peer review process as
<blockquote>Informal, nonlinear, systemically interactive methods, for the achievement of what is called bottom-up quality[.]</blockquote></p>
<p>This is great news; as a sometime peer-to-peer researcher myself, I'm eager to see nonlinear peer-to-peer reviewing technology adopted.</p>
<p>I understand that as future work WMSCI is developing an <a href="http://www.nist.gov/dads/HTML/obliviousAlgorithm.html">oblivious algorithm</a> for ad-hoc low-power reviewing.</p>

<p><b>Computer Science Research Trends</b> (August 22, 2008)</p>

<script type="text/javascript">
// Image locations for each plot in the menu below. The preview image uses
// Blogger's s400 size; the click-through link uses the full-size s1600-h
// version of the same file.
var trendGraphBases = {
  "2": "http://2.bp.blogspot.com/_QqyK36WvRNs/SLJPA1Ny8yI/AAAAAAAAADg/iukebqttMQo",
  "3": "http://1.bp.blogspot.com/_QqyK36WvRNs/SLJPBIsQ_CI/AAAAAAAAADo/Ecimn_CPti0",
  "4": "http://2.bp.blogspot.com/_QqyK36WvRNs/SLJPBG99SiI/AAAAAAAAADw/kFF0Wde5k_I",
  "5": "http://2.bp.blogspot.com/_QqyK36WvRNs/SLJPBcmMJII/AAAAAAAAAD4/ahYJVTsXOcs",
  "6": "http://2.bp.blogspot.com/_QqyK36WvRNs/SLJPBZTEkbI/AAAAAAAAAEA/3q_eEabNNio",
  "7": "http://3.bp.blogspot.com/_QqyK36WvRNs/SLJP-gvgNtI/AAAAAAAAAEI/XAMYSGoyAkY"
};

// Swap the displayed trend graph to match the current menu selection.
function setTrendGraph(form) {
  var key = form.research_trend_graph.value;
  var base = trendGraphBases[key];
  if (!base) return;
  document.getElementById("trend-browser-img").src = base + "/s400/trends" + key + ".gif";
  document.getElementById("trend-browser-link").href = base + "/s1600-h/trends" + key + ".gif";
}
</script>
<p>Computer science is a fast-changing field of research. Can we track some of its changes in the past? To answer this or any other question, experimental computer science researchers have essentially three options:
<ol>
<li>Write a program.</li>
<li>Search the Internet.</li>
<li>Write a program that searches the Internet.</li>
</ol>
We'll take the last approach in this post. We can pick a term and look at <b>term frequency</b>, that is, the fraction of abstracts containing the term, as a function of time. Conveniently, the ACM Digital Library and Guide have an <a href="http://portal.acm.org/advsearch.cfm">advanced search</a> that we can exploit to analyze term frequency across decades of abstracts. We'll start with a generic term, <i>performance:</i></p>
<a href="http://4.bp.blogspot.com/_QqyK36WvRNs/SLJN3_TsTEI/AAAAAAAAADQ/VEz9Pn2MX0Y/s1600-h/needs_normalization.gif"><img style="display:block; margin:0px auto 10px; text-align:center; cursor:pointer; cursor:hand;" src="http://4.bp.blogspot.com/_QqyK36WvRNs/SLJN3_TsTEI/AAAAAAAAADQ/VEz9Pn2MX0Y/s320/needs_normalization.gif" border="0" alt="" /></a>
<p>Let me say now, in case it becomes unclear later, that all the data presented here is actually derived from the ACM's database; it is not made up. (As for the <i>interpretation</i> of the data, well, ...) Moving on, we can plot a couple of other terms, <i>which</i> and <i>that</i>:</p>
<a href="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJN3zWGayI/AAAAAAAAADY/wF2VKnVy4do/s1600-h/needs_normalization1.gif"><img style="display:block; margin:0px auto 10px; text-align:center; cursor:pointer; cursor:hand;" src="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJN3zWGayI/AAAAAAAAADY/wF2VKnVy4do/s320/needs_normalization1.gif" border="0" alt="" /></a>
<p>The naive reader will conclude that conjunctions went out of fashion in the mid 80's, and came back after the dot com crash. However, the inappropriately perspicacious reader realizes that this conclusion is subtly flawed, because the word "that" might be a pronoun, adjective, or adverb: not just a conjunction. Fortunately the astute reader realizes we can look at relative instead of absolute frequencies to help clean up the data set.</p>
<p>The next group of figures is normalized by the term <i>performance</i>. For example, the blue line in the first plot below shows freq(<i>mobile</i>) / freq(<i>performance</i>).</p>
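<p>For the record, the normalization itself is trivial once you have the counts. A sketch follows, with made-up placeholder numbers; the real counts came from manually querying the ACM advanced search, year range by year range.</p>
<pre>
# Sketch of the normalization. All numbers below are made-up placeholders,
# NOT ACM data; the real counts came from the ACM advanced search.
abstracts_per_year = {1995: 11000, 2000: 16000, 2005: 24000}
hits = {   # number of abstracts containing the term at least once
    "performance": {1995: 2400, 2000: 3500, 2005: 5200},
    "mobile":      {1995:  150, 2000:  600, 2005: 1800},
}

def freq(term, year):
    # Term frequency: fraction of that year's abstracts containing the term.
    return hits[term][year] / float(abstracts_per_year[year])

for year in sorted(abstracts_per_year):
    print(year, freq("mobile", year) / freq("performance", year))
</pre>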
<table border=0 cellspacing=1 cellpadding=0 width=400 bgcolor=#C6C6C6><tr><td align=center bgcolor=#E0E0E0>
<form name=trend_browser_form>
Select plot:
<select name="research_trend_graph" onChange="setTrendGraph(this.form)">
<option value="2">performance, interactive, mobile, ad hoc</option>
<option value="5">proof, simulation, experiment</option>
<option value="3">TCP, RAID, BGP, DHT</option>
<option value="4">supercomputer, cluster, data center</option>
<option value="6">simple, efficient, correct, reliable</option>
<option value="7">system, algorithm, network, database</option>
</select>
</form>
Click on the image below for a larger version.
</td></tr>
<tr><td bgcolor="#FFFFFF" width=400 height=220 align=center valign=center>
<a id="trend-browser-link"
href="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJPA1Ny8yI/AAAAAAAAADg/iukebqttMQo/s1600-h/trends2.gif"><img
id="trend-browser-img" style="border:0; margin:0;" src="http://3.bp.blogspot.com/_QqyK36WvRNs/SLJPA1Ny8yI/AAAAAAAAADg/iukebqttMQo/s400/trends2.gif" border=0></a>
</td></tr></table>
<p>Now some "matchups", where <i>x</i> v. <i>y</i> = freq(<i>x</i>) / freq(<i>y</i>). Click on an image to enlarge.</p>
<table><tr><td align="center" valign="center"><b>distributed<br>v.<br>centralized</b></td><td><a href="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJMd58uMuI/AAAAAAAAAC4/lqyqcC-SVKw/s1600-h/distributed-centralized.gif"><img style="cursor:pointer; cursor:hand;" src="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJMd58uMuI/AAAAAAAAAC4/lqyqcC-SVKw/s320/distributed-centralized.gif" border="0" alt="" /></a></td></tr>
<tr><td align="center" valign="center"><b>flat<br>v.<br>hierarchical</b></td><td><a href="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJMmnWj1jI/AAAAAAAAADA/SigzTeQanYw/s1600-h/flat-hierarchical.gif"><img style="cursor:pointer; cursor:hand;" src="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJMmnWj1jI/AAAAAAAAADA/SigzTeQanYw/s320/flat-hierarchical.gif" border="0" alt="" /></a></td></tr>
<tr><td align="center" valign="center"><b>Berkeley<br>v.<br>Stanford</b></td><td><a href="http://1.bp.blogspot.com/_QqyK36WvRNs/SLJMBJAkWLI/AAAAAAAAACw/ebvhRT0Sdv4/s1600-h/berkeley-stanford.gif"><img style="cursor:pointer; cursor:hand;" src="http://1.bp.blogspot.com/_QqyK36WvRNs/SLJMBJAkWLI/AAAAAAAAACw/ebvhRT0Sdv4/s320/berkeley-stanford.gif" border="0" alt="" /></a></td></tr>
<tr><td align="center" valign="center"><b>Us<br>v.<br>Them</b></td><td><a href="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJMunpD8SI/AAAAAAAAADI/_ETDCS36h3Y/s1600-h/us-them.gif"><img style="cursor:pointer; cursor:hand;" src="http://2.bp.blogspot.com/_QqyK36WvRNs/SLJMunpD8SI/AAAAAAAAADI/_ETDCS36h3Y/s320/us-them.gif" border="0" alt="" /></a></td></tr></table>
<p>They're winning, but we're gaining on Them.</p>
<p>Now our final, and perhaps most important, plot.</p>
<table><tr><td align="center" valign="center"><b>Good<br>v.<br>Evil</b></td><td><a href="http://4.bp.blogspot.com/_QqyK36WvRNs/SLHvC93xGMI/AAAAAAAAACI/Jgqzs2DH2dI/s1600-h/good-evil.gif"><img style="cursor:pointer; cursor:hand;" src="http://4.bp.blogspot.com/_QqyK36WvRNs/SLHvC93xGMI/AAAAAAAAACI/Jgqzs2DH2dI/s320/good-evil.gif" border="0" alt="" /></a></td></tr></table>
<p>While computer science is solidly on the side of good, several abstracts in this search were disturbingly megalomaniacal, such as J. R. Landry's "<a href="http://portal.acm.org/citation.cfm?id=1355238.1355256">Can computing professionals be the unintentional architects of evil information systems?</a>", in ACM SIGMIS-CPR, 2008. The author "discusses how the technical rational paradigm supports the creation of systems that embody administrative evil" and aims to "determine if information systems can harm or be evil, the frequency of harm, and response to harm by designers and users."</p>
<p>We can be thankful that according to its abstract, the paper is only "a research-in-progress".</p>
<p><i>This post was originally an Outrageous Opinion at SIGCOMM'08.
Here's a PDF of the <a href="http://www.cs.berkeley.edu/~pbg/sigcomm08-outrageous_opinion.pdf">original presentation</a>.
</i></p>

<p><b>Graphing English</b> (August 6, 2008)</p>

<script type="text/javascript">
// Ask the server for the synonym chain between the two words in the form,
// and display the resulting graph in the iframe below.
function connect(form) {
// Show a progress indicator; hidePleaseWait() clears it once the iframe loads.
document.getElementById('waitmessage').innerHTML = '<table><tr><td><img src="http://brighten.bigw.org/youinfinitesnake/loading_indicator.gif" border="0"></td><td valign=center><b><font color="#cc6600">finding connection; just a moment...</font></b></td></tr></table>';
var wordvis = document.getElementById("wordvis");
// Special case: the second-largest component is served as a precomputed image.
if (form.word1.value == 'predaceous' && form.word2.value == 'prehension') {
wordvis.src = 'http://brighten.bigw.org/youinfinitesnake/second-largest-component.gif';
}
else {
// Otherwise, wordgraph.cgi computes and renders the chain on the fly.
wordvis.src = 'http://brighten.bigw.org/youinfinitesnake/wordgraph.cgi?word1=' + form.word1.value + '&word2=' + form.word2.value;
}
}
// Trigger the lookup when the user presses Enter (key code 13), handling
// both the W3C event model and the old IE global-event model; returning
// false suppresses the default form submission.
function checkEnter(e){
var characterCode;
if (e && e.which) { characterCode = e.which }
else { e = event; characterCode = e.keyCode }
if (characterCode == 13) {
connect(document["wordgraphform"])
}
return (characterCode != 13)
}
// Fill in the magic word pair and run it (used by the "show it above" link).
function showSecondLargest() {
document.wordgraphform.word1.value = 'predaceous';
document.wordgraphform.word2.value = 'prehension';
connect(document.wordgraphform);
return false;
}
// Called when the iframe finishes loading.
function hidePleaseWait() {
document.getElementById('waitmessage').innerHTML = '';
}
</script>
<p>Given two words, can we connect them by a chain of synonyms? For example: <i><b>minuscule</b> &mdash; little &mdash; short &mdash; poor &mdash; wretched &mdash; ugly &mdash; frightful &mdash; tremendous &mdash; <b>enormous</b></i>. Try some:</p>
<p><b>[Note, the tool's offline right now due to a system reinstallation ... should come back shortly ... -- Dec 6, 2011]</b></p>
<a name="wordapp"></a>
<center>
<table border=0 cellspacing=1 cellpadding=0 width=550 bgcolor=#C6C6C6><tr><td align=center bgcolor=#E0E0E0>Connect two words with a chain of synonyms:<br><form
name=wordgraphform>
<input type=text name=word1 size="12" maxlength="80" onKeyPress="return checkEnter(event)" value="black">
<input type=text name=word2 size="12" maxlength="80" onKeyPress="return checkEnter(event)" value="white">
<input type=button value=connect onClick="connect(this.form)"><div id=waitmessage></div></form></td></tr><tr><td bgcolor="#FFFFFF"><iframe frameborder=0 width=550 height=400 src="http://brighten.bigw.org/youinfinitesnake/black-white.gif" name="wordvis" id="wordvis" onload="hidePleaseWait()"></iframe>
<tr><td align=center bgcolor=#E0E0E0><form method="get" action="http://wordnet.princeton.edu/perl/webwn" enctype="multipart/form-data" name="f">Look up a word:<input type="text" name="s" size=15 maxlength="500"><input type="submit" name="sub" value="WordNet it"></form></td></tr></table>
</center>
<p>In the graphs above, each node is a word and an edge connects each pair of displayed words that can be synonymous. The raw data on semantic relationships is from Princeton's <a href="http://wordnet.princeton.edu/">WordNet</a> project.</p>
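<p>If you want to play with this offline, a minimal sketch using NLTK's WordNet interface is below. (The tool above runs my own code against the raw WordNet data; this sketch is an independent reimplementation, so the chains it finds may differ from the ones shown above.)</p>
<pre>
# Sketch: shortest chain of synonyms between two words, using NLTK's
# WordNet interface (run nltk.download('wordnet') once beforehand).
from collections import deque
from nltk.corpus import wordnet as wn

def synonyms(word):
    # Every word that shares at least one synset with `word`.
    return {name.replace('_', ' ')
            for syn in wn.synsets(word)
            for name in syn.lemma_names()} - {word}

def chain(start, goal):
    # Breadth-first search, so the first chain found is a shortest one.
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in synonyms(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None   # different components: no chain exists

print(chain('minuscule', 'enormous'))
</pre>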
<p>Many word pairs are not linked. How large is the largest connected component&mdash;that is, the largest set of words such that any pair in the set can be connected by a chain of synonyms? An interesting graph theoretic question.</p>
<p>The answer is that the largest component has <b>25,105 words</b>, or 17% of all words in the database. Meanwhile, the <i>second largest</i> component is over six hundred times smaller, with only <b>38&nbsp;words</b> (<b><a href="#wordapp" onClick="showSecondLargest()">show it above</a></b>).</p>
<p>Is this structure bizarre? Actually it's roughly what one would expect knowing random graph theory, which I'll now attempt to explain in one paragraph. A classic (Erdős–Rényi) <a href="http://en.wikipedia.org/wiki/Random_graph">random graph</a> consists of <i>n</i> nodes, connected by a bunch of edges chosen uniformly at random. Suppose the number of edges is such that on average, each node has <i>d</i> neighbors (for us, <i>d</i> synonyms). Imagine exploring a connected component by starting at one node and expanding outward: first looking at the nodes that are one step away from the starting point, then two steps away, and so on. What happens to the size of this frontier? If <i>d</i>&nbsp;<&nbsp;1, then the frontier tends to <i>shrink</i> by a certain percentage in each step, so with high probability it dies out before the component gets very large. On the other hand, if <i>d</i>&nbsp;>&nbsp;1, then the frontier tends to <i>expand</i> by a certain percentage in each step. Chances are pretty good that the frontier just gets bigger and bigger until the component includes a good fraction of all <i>n</i> nodes in the graph. That's the giant component. The second largest component must be very small, specifically <i>O</i>(log&nbsp;<i>n</i>) nodes, since otherwise it's likely to intersect with, and thus be absorbed by, the giant component.</p>
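<p>You can check this story numerically: for an Erdős–Rényi graph, the expected giant-component fraction <i>s</i> satisfies <i>s</i> = 1 &minus; <i>e</i><sup>&minus;<i>ds</i></sup>, and a quick simulation shows the giant/runner-up gap directly. A sketch (the 150,000-node figure is just a round stand-in for the size of the WordNet vocabulary):</p>
<pre>
# Sketch: sanity-check the random-graph story. The giant-component
# fraction s satisfies s = 1 - exp(-d*s); we also sample a graph directly.
import math, random
from collections import defaultdict

def giant_fraction(d, iters=100):
    s = 0.5
    for _ in range(iters):          # fixed-point iteration
        s = 1 - math.exp(-d * s)
    return s

print(giant_fraction(2.92))         # about 0.93

def component_sizes(n, d):
    # Union-find over roughly d*n/2 uniformly random edges.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for _ in range(int(d * n / 2)):
        parent[find(random.randrange(n))] = find(random.randrange(n))
    sizes = defaultdict(int)
    for v in range(n):
        sizes[find(v)] += 1
    return sorted(sizes.values(), reverse=True)

# 150,000 nodes is a round stand-in for the size of the WordNet vocabulary.
print(component_sizes(150000, 2.92)[:2])   # a giant component + tiny runner-up
</pre>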
<p>Our graph of English has <i>d</i> = 2.92 synonyms per word. But of course English is not a random graph&mdash;which, with the same <i>d</i>, would have a largest component of 93% of its nodes and a second largest of about 5 nodes. Considering that we get within an order of magnitude without modeling any of the structure of the language except the number <i>d</i>, this is not so bad. And WordNet doubtless does not perfectly represent English: for example, it's plausible that common words are better annotated with synonymy relations than, say, <i>abulia</i> or <i>vituperation</i>.</p>
<p>I was led to think of this topic during conversation with some folks from <a href="http://www.cs.ucl.ac.uk/">UCL</a> and <a href="http://www.bme.cmu.edu/">CMU</a>. But others have built a graph out of WordNet too (after all, it is called Word<i>Net</i>). A paper by Kamps, Marx, Mokken, and de Rijke, <a href="http://staff.science.uva.nl/~kamps/publications/2004/kamp:usin04.pdf">Using WordNet to Measure Semantic Orientations of Adjectives</a>, scores words based on their relative distance to "good" vs. "bad".</p>