This isn’t bad as a list of possible actions for UI designers to keep in mind, or even as a rule of thumb for how to approach data analysis. I think it’s sort of silly, however, to assert that this is the natural, immutable order of operations, which the author more or less does.

When the user is even loosely familiar with a dataset, he or she may well want to start off by filtering unwanted data (for example, narrowing the scope to a given region or demographic of interest). And if the dataset is relatively unfamiliar, the user may conceivably want to start by looking at some details (for example, examining a random sample of individual responses before trying to summarize a survey). The author doesn’t show why either of these would be bad practice.

It also bothers me that ‘history’ appears in this list, in an undifferentiated way. “History” — the ability to undo steps and go back to prior states — is more of a feature than a discrete _task_. And while it’s a useful function to have, it doesn’t have a natural place in the sequence of analysis tasks: you use it whenever you need it, which may be at any time (if you need it at all).

Finally, I was surprised this article appeared in a journal, because I don’t see it as having a scholarly purpose. The content is fine, as observation, but it doesn’t advance an argument per se, or offer any discussion of alternative methods (let alone explain why this method is superior).

Indeed, “history,” and even “relate” and “extract,” are more like features. That is why the infovis community nowadays does not include these three in the information-seeking mantra.
I think it actually has a lot of scholarly value. By 1996, this HCI group from the University of Maryland had pioneered several interactive visualization systems, and this article was their first attempt to summarize all the “features” and lessons learned from real user studies.

My comment is that my Win8 system really frustrates me; there are a lot of things I could not get done successfully. I saw that D3 and other platforms provide links on GitHub. Would it be possible for this course to teach some GitHub skills, or to permit its use? I think I could do better with that.

The article on Vizster highlights new ways of visually depicting networks, which has become a particularly salient issue in the study of political science (e.g., networks of donors, cosponsors of bills, etc.). Being able to clearly represent these relationships is important in order to convey findings to a broad audience. There is, however, a problem: with a large dataset and large networks to characterize (e.g., the 535 members of Congress), it can be difficult to make sense of the broader patterns for systematic analysis. And while you can zoom in to identify particular linkages, doing so requires that the person either engage in barefoot empiricism or have a clear prior about the relationship (though interactive highlighting can make the patterns clearer and convey this information more easily).

I am amazed by the Many Eyes website, which provides us with a set of visualization creation and publishing tools. The article ‘Many Eyes: A Site for Visualization at Internet Scale’ gave us a very good sense of how the website is designed as well as how we can use it for our data visualization. There is another website that I have used before and would like to recommend to our classmates: http://flowingdata.com. It offers several free as well as paid tutorials on data visualization, provides the necessary code for the specific type of visualization you are working on, and walks you through it step by step.

In Grevet, Mankoff, and Anderson, the proposal the authors outline is quite innovative in its approach, but it fails to account for certain limitations. You can scour the internet and come across a myriad of interactive consumption tools that allow you, as a consumer, to map your usage relative to others. I like the notion that more effective reduction schemes could exist that would leverage a de-anonymized environment and scale to a community or city level. Accountability is a great way to realize savings, and an effective visualization of it is a great way to highlight high performers and shame poor-performing ones.

My hesitance (from a public policy perspective) has to do with the high risk of self-selection bias. One of the examples highlighted in the paper is http://www.carbonrally.com, which allows users to challenge each other to carbon reduction goals (such as going vegan, bicycling to work, and so on). Perhaps this is a misreading, but to encourage a less anonymous approach to aggregating data on energy use is to invite self-selection bias. In other words, those who are conscious of their energy use are more likely to report the steps they are taking to reduce it, whereas those who are not are unlikely to report anything. That is not to say that those who are less conscious are somehow poor consumers of energy; on the contrary, they may be far more efficient consumers as a matter of socio-economic circumstance.

The point is that when it comes to community standards of reporting, the authors should take care to account for the fact that different communities will exhibit different levels of reporting. This could be due to a number of factors, including some that fall outside the scope of the project. Communities that underreport may, in fact, be more efficient consumers than those more likely to report, owing to the very factors that also cause them to underreport.