Ever noticed an item of clothing you looked at on one website showing up as an ad while you browse another website? What is going on here? How did a web page show you ads for products you viewed on a totally different website?

Partly this is the work of those Facebook "Like" buttons and Google's +1 buttons. Let us say you are logged into Facebook in one browser tab. Now you visit many other pages in other tabs, and some of those pages may have "Like" buttons. Here is the deal: every time you visit a page, the browser makes a series of HTTP GET requests to fetch the elements (images and so on) on that page. Facebook knows from its cookie who you are. When they get the HTTP GET request for the button, that cookie comes along with it, together with the identity of the website the button appears on, and so they know you visited that page.


Here are a few plug-ins I use with Iceweasel (that's the name of the Firefox browser on the Debian GNU/Linux system) that help make web browsing a pleasant experience.

1. Adblock Edge

Adblock Edge (ABE) is a fork of the excellent Adblock Plus (ABP). ABP sold out to ad companies like Google and included a bunch of ads in its whitelist; ABE is a fork from before that change. We are still indebted to the ABP author for the great contribution. ABE with the "EasyPrivacy" and "EasyList" filter lists makes the web browsing experience a lot nicer! To see the difference, try browsing a few popular websites with and without ABE for a day.

2. HTTPS Everywhere

HTTPS Everywhere is a plugin that forces the HTTPS protocol whenever it is available, for safe and secure browsing. Most websites which require one to log in (email, banking, etc.) implement HTTPS. But some still don't, or offer a choice between HTTP and HTTPS. In such cases, this plugin forces the use of HTTPS.

3. Duck Duck Go search widget

I have been trying to move away from Google for most of my daily browsing needs, including search. Duck Duck Go's search quality has been improving steadily and is very much usable for most purposes. DDG explicitly has the privacy of its users as one of its goals. It is a company like Google, so it can change its policies (the way Google did with the "don't be evil" motto). So, watch out. Until then, enjoy DDG. Unlike Google, DDG does not wrap the URLs in search results with a redirector to track clicks.

4. Greasemonkey + NoScript

It is interesting to see the amount of code we execute on our machines without explicitly invoking a program. Every webpage includes a number of JavaScript files which get downloaded and executed when we visit it. What do those JavaScript files do? Some of them are libraries like jQuery. Some of them are explicitly there to track users (like the Google Analytics scripts). We, the users, should have control over what runs on our machines, and tracking should be opt-in rather than opt-out.

It is also well known that a user can be identified with high probability from the browser's user-agent string, especially when it is combined with other browser attributes (so-called browser fingerprinting).

A number of websites work quite nicely without any JavaScript at all. GMail has a mode which works well without JavaScript. But unfortunately many sites don't work well without it (Amazon.com, for instance). With NoScript, one can make this experience less painful.

5. RefControl

Every time one clicks a URL on a webpage, which takes us to another page on the same website or on a different website altogether, the HTTP request also carries a Referer header which tells the destination website where the request came from. This is a crucial piece of the puzzle in constructing a graph of anyone's web browsing habits. We can turn off those Referer headers with the RefControl plugin.

6. Disconnect

There is yet another privacy plugin called "Disconnect" that promises to keep trackers (Twitter, Facebook and G+ buttons, tracking cookies, etc.) away. Since I use it in conjunction with other plugins, I don't know how well it works on its own. Disconnect looks like a well-funded company.

Apparently there are many extensions in this category being developed by funded companies, like Ghostery, DoNotTrackMe and so on. I used Ghostery and DoNotTrackMe in the past, but currently I use Disconnect, as its code is freely available.

7. Other Misc settings

A few other tips:

Turn on the browser's private browsing mode if you don't want to store history. Some people like to keep their history to make browsing easier; it has its own merits and demerits. I visit Facebook only in a browser window in private browsing mode. Even this is not enough: one also needs to make sure that no other websites are visited while the Facebook page is open in a tab. One need not worry about logging off, though: if one closes a browser in private browsing mode, no cookies are stored, so the "Like" buttons on other websites cannot track the identity. (Remember, they can still profile a user based on the user-agent string.)

I also clear history and cookies when I quit the browser. This can be set up in Firefox's preferences.

Turn on the "Do Not Track" option. Both Firefox and Chrome have this option, but make sure you actually enable it; it may not be on by default.

Use a browser that has its source code published as Free Software. That means Firefox or its variants, Chromium, or one of the WebKit derivatives like Epiphany. Note that Google Chrome is not Free Software, but Chromium is. Mozilla is a non-profit corporation, and I trust it more to protect web users than a for-profit corporation that explicitly wants to know everything about everyone.

Google has access to your emails (isn't it ironic that they filter out email spam and then show you spam in the form of ads on the side?), your likes, dislikes and opinions, your location, and even your DNA. They also want to know what you see, and to track your eye movements on the screen and elsewhere. The Moto X phone from Motorola/Google has its microphones on all the time, reportedly to take voice commands. This is the new stark reality: in the name of convenience, people are enticed to give up their privacy.

The Tor onion router is one of the best guards against censorship and tracking. There are many ways to use Tor along with Firefox, at the cost of a bit of latency. I like to use the OS distribution called Tails on a USB stick when browsing from an internet cafe. Tails is a special GNU/Linux-based distribution that can be installed on a USB stick and has a bunch of privacy tools built in, including a special version of Firefox with the Tor button enabled.

Turn on the "Block pop-up windows" option to block those annoying popups.

Install only those extensions that have their source code published. It is a bit hard to find this on the Firefox add-ons site: one has to go to the specific page for an add-on and look under "Version Information". Choose only those extensions that are made available under a Free Software license. Remember that the browser is a very critical piece of software in anyone's daily workflow, and it is extremely important that we don't leave decisions about privacy to others.

YouTube has become as annoying as the regular idiot box these days, with a lot of ads before and in between the videos. I use YouTube Center to get rid of them; it also gives me a few other features, like downloading videos for offline viewing. This is not related to privacy per se, but it makes YouTube video viewing a better experience. It is highly likely that YouTube will break this extension at some point by changing their protocol so that the ads show, and the developer will have to play a catch-up game.

There is another Firefox plugin called RequestPolicy that catches cross-site requests. It is recommended for the security-paranoid. It reports the connections a website makes to other domains (e.g., http://foobar.org making connections to the Google Analytics website), and these connections can be blocked as well.

If you are conscious about your privacy on the Internet (and every Internet user should be), you should read the articles at the Electronic Frontier Foundation.

The powerset of a set S is the set of all subsets of S. For example, the powerset of {a, b, c} is { {}, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c} }. For any set of size n, the powerset has size 2^n.

Writing a program to find the powerset is easy, if we can visualize it. A simple inductive way to think about it is that each subset of S either contains a given element of S or it does not. We apply this rule recursively, the base case being that the powerset of the empty set is the set containing only the empty set. This can be visualized as a binary tree as shown below.

The members of the powerset are the leaves of this tree. You can now easily come up with a relation: powerset(S) = { {first} ∪ x : x ∈ powerset(rest) } ∪ powerset(rest), where first is the first element of S and rest is S without that element.

In other words, take the powerset of S without its first element; let us call this set S'. Now, for each element x in S', take the union of the first element with x. This is the first part of the powerset. The other part consists of the subsets that do not contain the first element, and that is exactly S', which we have already computed. The final answer is the set union of the first part and the second part.
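The binary-tree picture translates almost directly into code. Here is a minimal C sketch (the names are mine; the set is a string of single-character elements): each element either gets excluded or included, and each leaf of the recursion prints one subset.

```c
#include <stdio.h>

/* Enumerate the powerset of set[i..n-1]; chosen[0..k-1] holds the
   elements picked so far.  Each element branches the recursion twice
   (exclude it, then include it), so the leaves are exactly the 2^n
   subsets.  Returns the number of subsets printed. */
int powerset(const char *set, int i, int n, char *chosen, int k)
{
    if (i == n) {                       /* a leaf: print the chosen subset */
        printf("{");
        for (int j = 0; j < k; j++)
            printf("%s%c", j ? ", " : "", chosen[j]);
        printf("}\n");
        return 1;
    }
    int count = powerset(set, i + 1, n, chosen, k);    /* without set[i] */
    chosen[k] = set[i];
    count += powerset(set, i + 1, n, chosen, k + 1);   /* with set[i] */
    return count;
}
```

Calling it as powerset("abc", 0, 3, chosen, 0), with chosen being a buffer of at least 3 bytes, prints all eight subsets of {a, b, c}.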

Compiling a C program on a GNU/Linux system involves a lot of magic under the hood. One thing that is taken for granted is that the kernel version running on a system can be different from the version of the kernel header files used to compile a program. The Linux kernel developers work really hard to give this guarantee to userspace programs. Read on for a case where that guarantee got broken.

ioctl

ioctl(2) is the standard Unix way of controlling a device file from userspace. For example, let us say that, for debugging, we want to read and write some registers of an I2C device. One way to do this is to provide an experimental ioctl command to read/write the registers.

The ioctl call in the userspace has the following prototype:

int ioctl(int fd, unsigned long cmd, ...);

The driver API is usually implemented using a table of function pointers. The kernel-side ioctl function pointer's signature is a little different from the userspace API, but for this discussion that doesn't matter. The key point is that the second parameter, cmd, is passed unchanged into the kernel's ioctl handler.

What is cmd?

cmd is the ioctl command code. It can be thought of as a 32-bit bit-field derived from a few pieces of information that together make it unique. Here is what goes into a command code:

a magic number (defined by the kernel for each subsystem in Documentation/ioctl/ioctl-number.txt).

a sequential number that the programmer assigns for the command.

the type of the command (is it a read, a write, or a read-write command?)

the size of the data being read/written.

These four pieces of information are used by the macro _IOC to create the bit-field.
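A sketch of the packing (the macro names are mine; the field widths follow the common asm-generic/ioctl.h layout, which a few architectures change):

```c
/* How an ioctl command code is packed, roughly (asm-generic layout):
   bits 0-7   sequential number
   bits 8-15  magic ("type") character
   bits 16-29 size of the argument struct
   bits 30-31 direction: 0 none, 1 write, 2 read, 3 read/write */
#define MY_IOC(dir, type, nr, size) \
    (((unsigned)(dir) << 30) | ((unsigned)(size) << 16) | \
     ((unsigned)(type) << 8) | (unsigned)(nr))

#define MY_IOC_READ 2u   /* the kernel copies data out to userspace */
```

Because the size of the argument struct is baked into the command code, two builds that disagree about sizeof(struct v4l2_event) compute two different values for VIDIOC_DQEVENT; that is the heart of the bug described below.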

The bug

I am writing a Video4Linux driver for an HDMI input device. Unfortunately, it is supposed to work with a two-year-old kernel (v3.0) shipped with the Android Jelly Bean release running on a TI OMAP4 device. For some reason, the kernel headers shipped with AOSP are a bit different from those in kernel version 3.0.

The particular command code of interest to me is VIDIOC_DQEVENT, which is defined as follows:

#define VIDIOC_DQEVENT _IOR('V', 89, struct v4l2_event)

I have the following code snippet in a simple userspace application (not showing the entire code here):

I observed that the select call was succeeding, but the ioctl call with the command VIDIOC_DQEVENT was failing with errno ENOTTY. A bit of grepping in the driver source revealed that the ENOTTY was coming from my own driver's default handler. This means that the switch statement didn't match the command code we passed. That was strange! It clearly showed that VIDIOC_DQEVENT has different values in the kernel and in userspace. Printing its value made it clear that this was indeed the case.

A bit more printing revealed that struct v4l2_event, which is used to calculate the command code VIDIOC_DQEVENT, differs in size by exactly 8 bytes between userspace and the kernel. This was very strange, because it means that the kernel ABI guarantee is broken.

The kernel header file include/linux/videodev2.h has the struct v4l2_event defined as follows:
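Abridged, it looks like this (reconstructed from the mainline headers of that era; consult the actual header for the full layout, and note that the AOSP copy differed in struct v4l2_event_ctrl):

```c
#include <linux/types.h>
#include <time.h>

struct v4l2_event_vsync {
        __u8 field;
};

struct v4l2_event_ctrl {
        __u32 changes;
        __u32 type;
        union {
                __s32 value;
                __s64 value64;   /* the 64-bit member discussed below */
        };
        __u32 flags;
        __s32 minimum;
        __s32 maximum;
        __s32 step;
        __s32 default_value;
};

struct v4l2_event {
        __u32                           type;
        union {
                struct v4l2_event_vsync vsync;
                struct v4l2_event_ctrl  ctrl;
                __u8                    data[64];
        } u;
        __u32                           pending;
        __u32                           sequence;
        struct timespec                 timestamp;
        __u32                           id;
        __u32                           reserved[8];
};
```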

Now comes the interesting part. Notice the union u in struct v4l2_event? The largest member of the union is a 64-byte array. If you do the math, you can see that no other member of the union exceeds this size, so even though userspace has some extra structures in the union, in theory we are not going to exceed 64 bytes. But struct v4l2_event_ctrl has another union inside, which contains a 64-bit value.

The compiler decided to align this value on a 64-bit boundary and also to align the reserved array by another 4 bytes, growing struct v4l2_event_ctrl by 8 bytes. With that growth it exceeds 64 bytes, making it the largest member of the union.
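The effect is easy to reproduce with a toy pair of structs (hypothetical types, not the real v4l2 layout) on ABIs where 64-bit integers get 8-byte alignment:

```c
#include <stdint.h>

/* The only difference between these two is the 64-bit member in the
   inner union, yet the second struct is 8 bytes larger: the union
   grows from 4 to 8 bytes, and 4 bytes of padding appear after
   `type` so that the union lands on an 8-byte boundary. */
struct ctrl_old {
    uint32_t type;
    union { int32_t value; } u;
    uint32_t reserved[2];
};                               /* typically 16 bytes */

struct ctrl_new {
    uint32_t type;
    union { int32_t value; int64_t value64; } u;
    uint32_t reserved[2];
};                               /* typically 24 bytes */
```

The same kind of 8-byte growth is what pushed the userspace struct v4l2_event_ctrl past the 64-byte data array, changed the size of the enclosing union, and therefore changed the computed value of VIDIOC_DQEVENT.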

The fix

I fixed it in my system by copying the relevant portion of the userspace header into the kernel header, so that the struct v4l2_event definitions match. I could do that because I know there is no other user of the Video4Linux events in my system.

C, being a portable assembler, lays out these data sequentially in memory and aligns them appropriately. What if you want to find each member's offset from the base of the structure?

The following macro does the trick. There are other, more explicit ways to calculate it, but I found this macro very neat.

#define OFFSET(x, y) &((x *)0)->y

We use it this way (with a hypothetical struct T that has a member bar):

struct T { int foo; char c; int bar; };
size_t offset = (size_t) OFFSET(struct T, bar);

How does this work? The idea is based on the fact that if the structure were placed in memory starting at address 0, a pointer to a member inside the structure would be numerically equal to that member's offset, since the base address is 0. So we cast the integer 0 to a pointer to the structure and take the address of the member whose offset we are interested in. That address is the offset, as the structure is assumed to be laid out in memory starting at address 0. That's it. (Strictly speaking, this is undefined behaviour in standard C; it just happens to work with common compilers.)

The Linux kernel defines a similar macro, called offsetof, in include/linux/stddef.h:
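It is the same trick with an explicit size_t cast (this is the classic form; newer compilers replace it with __builtin_offsetof to keep the undefined behaviour out of sight):

```c
#include <stddef.h>   /* userspace gets the standard offsetof from here */

/* The classic definition, same idea as OFFSET above: pretend the
   struct sits at address 0 and read off the member's address. */
#define my_offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
```

It agrees with the standard offsetof from stddef.h (the my_ prefix is just to avoid clashing with it here).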

I have been doing a few MOOC courses in the past few weeks. Given that I have to juggle family, work, commute and the courses, I decided to put some brakes on my information intake.

I turned myself off from email, news, etc. (well, not completely, but perhaps ~90%) in the past few weeks by unsubscribing from mailing lists and by closing browser tabs that I was unlikely to read or benefit from in the short term. I consistently kept the number of open tabs in the browser to five or fewer. I also installed browser plugins to warn me and block Twitter, Hacker News and the like after 10 minutes of usage per day between 6 AM and 10 PM (I didn't put restrictions on the hours after 10 PM). Overall my information intake was much lower.

The result is that I was much happier and got a lot of things done, which made me even happier. I didn't miss anything that would have helped me. I had a lot of withdrawal symptoms initially when I unsubscribed from some of the mailing lists I had been reading for the past ~4 years. But I really didn't miss anything at all that was of immediate use to me.

I felt a big void when the course ended. For a day or two, I didn't know what to do with the newfound free time, and then I quickly filled it with trivia. But I learnt a bit from those intense periods of learning. I have now closed all those PDFs and tabs I had open and am getting back to work. I was tempted to re-join the mailing lists, but I deliberately decided not to.

Working on something every day and getting into the "flow" helped me greatly in getting things done and contributed to some happiness. There are many sources of unhappiness in my life that I cannot do much about. But there are a handful that I can do something about, and I felt that working on an interesting, mind-bending problem was certainly worth it.

About a year ago, while casually browsing, I wandered into the Oracle Labs website and found Guy Steele's page. If you don't know who Guy Steele is, watching this video is perhaps a good start. In summary, Guy Steele was a student of Prof. Gerald Sussman, and together they invented the legendary language Scheme. I am a big fan of Scheme and its simplicity. I think I have watched almost every Guy Steele lecture video freely available out there. In particular, I am a big fan of the Dan Friedman 60th-birthday lecture and the "Growing a Language" lecture. YouTube is your friend.

Coming back to the topic: I couldn't resist emailing him just to say that I am a big fan of Scheme. With his permission, I am reproducing the email conversation we had and his great advice.

> On Jul 6, 2012, at 6:53 AM, Ramakrishnan Muthukrishnan wrote:
>
> Thank you for the reply. Delighted to see a reply from you. I think I
> have watched all the public videos of talks you have given on various
> things Scheme related (Growing a language, Dan Friedman 60th birthday
> lecture etc) and am a big fan! Scheme has totally changed my
> perception about programming. I still have to learn a lot of things
> more deeply but I think I finally found something that I seem to
> really like -- Programming languages and thanks to you and your work
> for that. I also plan to read the "Lambda, the Ultimate" AI memos too.
> Enough to keep me busy for the next few years! Any advice from you in
> my endeavour in Programming language research will be highly
> appreciated.
>
> Thanks again
> Ramakrishnan
The main advice I have is what you are already doing: keep reading!
The "Lambda, the Ultimate" papers have held up pretty well over the years,
I think, but there is much, much more. I recommend any paper that has
Phil Wadler, Simon Peyton Jones, or Charles Leiserson as a co-author.
Also, study more programming languages. Any will do, but there seem
to be a lot of good ideas nowadays in Haskell, Clojure, Python, and Scala.
You are much better off, I think, knowing three good programming languages
than one really great one. And read code in each of these languages, maybe the
code for their standard libraries as well as an application. Good luck!
Yours,
Guy Steele

Okay, I decided to throw away my previous blog posts and start afresh. I plan to write more frequently (let us see how that goes). Last time I put a lot of restrictions on myself about what to write, and I don't think I was very successful with that.

I decided to try Hakyll, a static webpage generator written in Haskell. I have always been very bad at creating "eye-candy" web pages, so this is a bare-bones first version; I haven't even bothered to change the default CSS file. Instead, I want to spend my energy on writing some content.

The nice thing about Hakyll is that it is so easy to build and install using cabal. The website itself compiles into a binary, which is very convenient.

I have made several attempts at learning Plan9, none of which succeeded. About two months ago, I tried it on my work MacBook Air with VMware Fusion but didn't get it going. I decided to try again, this time on my GNU/Linux box. Installing Plan9 is just one small step; the real fun starts once one starts to use it. I have just started and am already learning a lot, fondly bringing back memories of installing Slackware on my 486/DX66 with 4 MB of RAM in 1996.

Which Plan9?

There are a few versions of Plan9.

the original from Bell Labs

9front, a fork from the Bell Labs version.

9atom (supports a lot of hardware)

9legacy (Bell labs + patch set)

NxM (designed for multicore)

I got onto the #plan9 IRC channel, and mischief (of Noisebridge, SF) was kind enough to offer some suggestions. He suggested that I install 9front. So, I went off and downloaded the ISO image.

Host machine setup

I installed Plan9 as a guest on a Debian GNU/Linux host. All I had to install was QEMU.

Booting the ISO

First you need to create a qcow2 image (a virtual hard disk) for QEMU to install Plan9 into.

Just follow the instructions in the 9front install wiki page. You start the installation by running inst/start. Once the installation is done, stop QEMU instead of letting it reboot, and start it again with the following switches:

$ qemu -hda 9front.qcow2.img -boot c -vga std -m 768

I configured my screen as 1920x1080x16, which works just fine. I also chose ps2intellimouse as my mouse type, which gives me nice scrolling.

To release the mouse grab from QEMU, just press CTRL+ALT.

Mouse

To make good use of the Plan9 graphical console (rio is the graphical shell of Plan9), one needs a 3-button mouse. acme needs a real 3-button mouse for some of its "chording" operations. One can simulate button 2 (the middle button) with SHIFT+right-click, but it is not very pleasant. So, I got a Microsoft Notebook Optical Mouse (from ebay.in for Rs. 1000) which has a side button that acts as the third button.

Unix to Plan9

Moving to Plan9 as a regular OS can be daunting (at least that was, and is, the case for me, but I am slowly making my way in). There is a nice Unix-to-Plan9 command transition page; just keep it handy. Plan9 takes the Unix philosophy to the extreme, and that shows in the commands.

The terminal does not have a separate pager like more or less; in fact, there is an automatic pager, and command output blocks until one scrolls the terminal (though this behaviour can be changed using the menu). There is no command-line history. The shell is rc, and it is very nicely integrated with Plan9 in all respects.

Unlike most Unix/GNU programs, Plan9 programs need very little customization. In many ways, GNU programs are the antithesis of the Unix philosophy of doing one thing well. I realized this only after I read about and used Plan9 a bit.

TODO

get an irc client working inside 9front.

get email client working with upas.

9fs works like a charm. Play more with it.

how to compile C programs?

APE layer

plumber

read all the papers on the bell labs website.

ACME, sam, structured regexps, rc.

… many more.

Community and Support

There is a small but very passionate community around Plan9. The 9fans list and the #cat-v IRC channel on freenode are great places to hang out with other Plan9 enthusiasts.

Compiling programs

I tried to compile a few programs. I could rebuild the Plan9 kernel and bootloader just fine. Mercurial seems to be the de facto version control system used by Plan9 folks.

I tried to compile the Go compiler, but that didn't succeed. I needed a bunch of patches, and even after applying those it didn't compile, and I finally lost interest. A lot of Plan9 folks use the paste service called sprunge for sharing error messages and patches. One can send error messages or receive patches using the hget/hpost commands. For example, to post a file containing an error message to sprunge, this works:

Summary

Installing and using Plan9, and reading the motivations behind it, makes one understand how broken our day-to-day computing infrastructure is. Plan9 was a nice attempt at fixing Unix and bringing computing into the modern age. In Plan9, everything is a file; even the environment is a file. Each process has its own private namespace, which means that one can get rid of the ugly hack called sudo from the system. Venti, the network file storage system, is content-addressable and works similarly to git.

However, the strengths of Plan9 also turned out to be its weakness. The world had already moved deep into Unix, and making something too different from Unix is not for someone without enough marketing muscle. Had the ANSI/POSIX Environment (APE) been completed, and had there been a Linux emulation layer, things might have looked quite different. Plan9 also doesn't have a C++ compiler; almost every web browser out there is written in C++, and the lack of a C++ compiler means that one cannot use the modern web from Plan9.

I am extremely sorry to see Plan9 die a silent death. Or dare I say it is already dead? There are only a handful of people using it at the moment. But hopefully, Plan 9 from User Space will live on.

Some time ago, I started digging into the Linux graphics and display subsystem after reading some LWN articles (I highly recommend an LWN subscription if you are interested in the Linux kernel). But I found the going tough. The information is scattered. Even though the software is all Free, it was impossible to comprehend why certain things were done the way they were. It required not only reading articles (paying close attention to the date on which each was written), but also digging into the past to see how things looked before and how they changed. Most of the time, a long list of APIs is given, which can only be comprehended by those who have worked on them for a long time. It was very frustrating. I even wondered how anyone new could contribute to such projects in a few years, once the current crop of experts have all lost interest or passed away.

If you find yourself in the same position as me, here are some pointers to gems I found in my journey that do give a big picture of the GNU/Linux graphics/display subsystem. No, I am not competent enough to explain it myself yet; I would rather leave that to the masters who have actually worked on it.

To get an overview of the various terminologies involved (DRM, DRI2, KMS, EGL, X, XRender, Wayland, pixman, cairo and the rest of the alphabet soup), start by reading the overview article "The Linux Graphics Stack". Another great overview is a little PDF file with short explanations of all the key pieces of the graphics stack, from the hardware bits up to the application. Once you have read those, head straight to the Wayland Architecture page, which explains how X draws to the screen and how Wayland simplifies the picture. Pay particular attention to the journey of an event and its effect on the screen.

Now you are ready to watch this great LCA 2013 video on X and Wayland by the X/Wayland hacker Daniel Stone, and to look at the corresponding slides.

And then we have the great LWN, which is an essential reference for every Linux kernel programmer. There is a bunch of links to the relevant LWN articles, discussions and slides on the Linaro Memory Management page.

Graphics

Again, I am a journeyman in graphics, trying to make sense of the various terminologies. There are two pages that I found particularly helpful.

Hopefully, these links will give a good "big picture" view of the low-level parts of rendering/video/graphics inside a modern GNU/Linux desktop. Also remember to check the date on which this post was written (the Internet does not forget anything, and you, the reader, may be reading this page many months or years later). The display side of things, being the most user-visible and sensitive part of the system, is ever-changing; the picture may look entirely different after a few years.

Last year, when the online AI class was announced by Sebastian Thrun and Peter Norvig, I was thrilled and immediately signed up. Soon two other courses were offered. At work, I do low-level software; I have not formally studied AI, databases or machine learning, nor did I really see a need to apply them in my job in the immediate future. Nevertheless, I was thrilled at the possibility of hearing from and working with Stanford professors through these courses, and enrolled for the AI course.

The AI class started, and I think I kept my motivation up for the first 3 or 4 units. There were other distractions, like family and work. I had to do a bit of extra work to learn some background mathematics and to read the text to keep up with the lectures. Somehow, at that point in time, it all couldn't fit together in my scheme of things, so I decided to discontinue. When I think back, I believe I could have completed the course with a bit of extra effort, which I was not really putting in at that time; instead I came up with excuses! One of my cynical friends had predicted that I, and some others at work who enrolled with me, would all discontinue the course, and I was sad that he was right.

Then the creation of Coursera and Udacity was announced. When I saw the announcement for the Design and Analysis of Algorithms I course from Stanford, I was extremely thrilled. I had always wanted to learn about analysis of algorithms but had never taken a formal course on it. I enrolled and started working through the lectures. Tim Roughgarden, the lecturer for the course, was going a bit faster than I could keep up with, but somehow I caught up by working on the lectures late at night and early in the morning. I took notes as I went along, which meant watching the same lecture two or three times in some cases. That quickly blew up the time required to complete one week's worth of lectures, and it sometimes spilled over into the next week. But for me, it was like playing a game. The problem sets and programming assignments were staggered by a week, so I could submit them on time. I was looking forward to the lectures and to whatever new stuff Tim was going to throw at us students. I did not find much time to participate in the forums.

The programming assignments were mostly easy and something I was really looking forward to. I used Racket for my programs, and it turned out that some others taking the course were also using Racket. It was a joy to program in Racket throughout the course. During the last week of the course, I was with my parents and didn't have a working Internet connection. After struggling with the phone company and wasting a lot of time on it, I decided to download the videos from elsewhere and work offline. In the end, I used my phone to connect to the Internet over GPRS and tethered my laptop to it to submit answers to the problem set and programming assignments. Overall, I think I did the tests very well.

Here are some things I liked and disliked about the course:

The teacher is the most important element in a class. If the teacher is uninteresting, everything else is, and no amount of technology can save the situation. Tim is a great teacher. He talks a bit fast, but after a while I started loving his style of talking and teaching. Some other classes (I don't want to point fingers at any specific course) didn't have as good a teacher as Tim.

Free-style writing on a white/black board, instead of PowerPoint slides, was one of the highlights of the course; I think it was crucial to its success. Many other courses I signed up for at Coursera used slide decks, with the teachers (some of the greatest names in CS) reading out from the slides. I couldn't sustain interest in such courses, however great those teachers are. The way Tim taught the class is a role model, and it brought back memories of some of the best classes I took in a real classroom years ago, when I was a full-time student.

The timing and difficulty level of the exercises within the lectures is another extremely important element.

A good teacher is far, far better than self-learning from a book. I learnt more new things in these 5 weeks than I would ever have learnt in 5 weeks of reading.

The importance of taking notes cannot be overstated; it was the single best decision I took. I carried the notebooks around along with my laptop and used them whenever I got free time (sometimes even at work, while waiting for a compilation to finish, or in the evenings). The notes were handy while doing the problem sets and programming assignments, for quickly revisiting a particular lecture or looking up a specific algorithm.

I didn't use any textbook, though Tim recommended a few. I have CLR with me, but surprisingly I didn't use it much while doing the course.

If I have seen one use of technology in recent times that positively affects human beings, it is this new experiment of online teaching.

Overall, it was a great experience, and I would like to thank Tim and Coursera for offering this great course online. I am looking forward to part 2 of the course.

I also signed up for some of the new courses offered at Udacity. The one I am really excited about is the web design course by Steve Huffman. I really like the style of presentation at Udacity: it is direct, it is short, and the listener is tested at the end of (almost) every video. That makes it extremely interesting, just like playing a video game!