This post is about my unsuccessful and incomplete attempt at hacking the Wemo Insight Switch for a side project whose purpose was to measure the power consumption of devices. Unfortunately, I just don't have the time to spend on it. It might be useful for folks with a bit more motivation to use this switch for its intended purpose (after all, it is worth 50 bucks!).

I was able to open the switch and locate the serial ports, visible in the bottom-right corner of the photo below:

Unfortunately, the serial (write) port seems to be encrypted, so I am unable to type any command on the command line. The next line of attack might be to find the datasheet, dump (snarf) the flash from the chip using UART or JTAG, and locate the offset of the file system. We already know from the output trace that the device uses U-Boot, and hence what the memory layout looks like. There is enough material on the web to help anyone proceed, given time and the relevant hardware (e.g. CH340-based adapters with 3.3 V and 5 V headers, a Pomona 5250 clip, etc.).
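If you do get as far as dumping the flash, locating the file-system offset is mostly a matter of scanning the image for known magic bytes. Here is a minimal sketch in Python; the magic values (SquashFS's "hsqs", JFFS2's node marker, UBI's header) are standard, but the script itself is just my illustration, not something I ran against this device:

```python
# Scan a raw flash dump for common embedded file-system magic bytes.
MAGICS = {
    b"hsqs": "SquashFS (little-endian)",
    b"sqsh": "SquashFS (big-endian)",
    b"\x85\x19": "JFFS2 node marker",
    b"UBI#": "UBI volume header",
}

def find_fs_offsets(image: bytes):
    """Return a sorted list of (offset, description) for every magic found."""
    hits = []
    for magic, desc in MAGICS.items():
        start = 0
        while (pos := image.find(magic, start)) != -1:
            hits.append((pos, desc))
            start = pos + 1
    return sorted(hits)

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1:
        with open(sys.argv[1], "rb") as f:
            for off, desc in find_fs_offsets(f.read()):
                print(f"0x{off:08x}  {desc}")
```

Tools like binwalk do this far more thoroughly, but the point is that the idea is simple once you have the dump.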

There is an excellent write-up of the firmware and PCB layout, and of installing a clean, new hardware interface for probing, which I found interesting!

I have come across this debate about interdisciplinary research on multiple occasions, the most recent being at the Cyber Security Retreat at Princeton University. As part of the discussion, faculty members and students from Computer Science and Electrical Engineering discussed the need to actively engage in it. The event was invitation-only and was a full house.

A few days later there was a lecture at the Computer Science Department, Princeton University, by Ben Shneiderman (a pioneer in the field) on Visualization, but student attendance seemed low.

This raised a couple of questions. Did the graduate students think
a) visualization is not that hard a problem?
b) it is unrelated to their research, and hence skip it?

How should graduate students think about research that is not closely related to their field?

First, I will talk about the thought process of a newly joined graduate student, and then give my perspective on some of the discussion around the above questions.

When I started my PhD in Computer Science, I thought having a field like HCI (Human-Computer Interaction) was actually a waste of government funds (oops, I said it), because I was taught in my undergraduate school that building a good interface to your system is the job of a good system developer (yes, everything is the dev's job for a student at IIT Madras). So why have HCI? Why don't system developers just do their job a bit better?

It was only when I took a course in HCI (taught by Gregory Abowd and Thad Starner) that I could appreciate the need for a whole area of research. This might be a personal realization, but I think a lot of graduate students carry the same views throughout their PhD, downplaying other research fields. I think it is important to understand the need for different disciplines (not as far afield as arts and music, but something similar to HCI).

My recommendation for all graduate students is to attend as many different talks on campus as possible: not just the proposal and defense talks of senior graduate students, but also talks by visiting faculty. They give you a very good idea of the research in each area and help you question your beliefs. They also give you a vision of how research can fundamentally affect the future course of countries, international politics, law, or something totally unrelated.

As much as PhD students have to be grounded in the concepts and principles essential to their field, I think they should be equally understanding of the long-term non-scientific implications of many things that do not relate to their research. As a PhD student myself, I find this incredibly difficult, but I would say that at least I am aware it should be possible. This has of course been a realization over my years as a graduate student, but I would have been glad if someone had told me this when I was starting out.

I would like to call out two aspects of research:

1) Interdisciplinary research (as discussed in the Cyber Security Retreat)
2) Intra-disciplinary research (for lack of a better term), which is creating a new specialization within a discipline

Let me start by saying that the first one is a no-brainer; the real question is how to do it effectively. There were good ideas, like creating spaces where sociology or policy students can sit in the Computer Science department, or a Computer Science graduate student can work in the Electrical Engineering department (similar to how I landed in the EE department through a joint project between Prof. Mung Chiang (EE) and my adviser (CS)). It is not difficult to see that fundamentally different streams such as Sociology and Computer Science do have many interesting problems at their confluence.

Obviously, one needs faculty members with the vision to recognize which colleges or institutions have the right expertise to bring together students from different disciplines, and to secure appropriate funding for such difficult (one might even say highly experimental) graduate research topics.

This also raises the point of how hard it is for students from either major to develop a taste for the other field early in their graduate life, and to be adequately placed between the two to carve out their future: to learn deep concepts in both fields and then make observations. If they can do it successfully, though, their hybrid thinking can be a huge asset to society in the future.

There are thoughts on having faculty collaborate and having a student work on a project combining multiple areas (I have been pursuing research across electrical engineering and computer science, with the perspective of building real systems using an information theory background). I find it an incredibly hard job, as it takes a lot of effort to build strong concepts in both areas and still produce research papers. On the other hand, there are graduate dorms, which could in principle help develop interdisciplinary skills as graduate students interact over dinner and so on, but this does not give them enough grounding to do a research project combining ideas (unless the fields are quite close).

Intra-disciplinary research is in most cases (as I have observed) an individual pursuit. Your research is so cutting-edge and niche that after exploring the idea for quite some time, you stumble upon a novel, well-defined technique that solves a cluster of problems. Thanks to developments in methodologies, technologies, etc., things just seem right for a new take on an age-old problem. The history of Software Defined Networking (SDN) might be an excellent example to visit in this context!

There will be lots of experiences in your graduate life; it is up to you what to make of them!

The Internet of Things is an umbrella term for all sorts of embedded devices that were previously on their own, or at most connected in ad-hoc mesh networks. The difference is that the devices are now connected to the Internet. Many of these devices collect data from their surroundings and upload it to cloud services, which run analytics and surface valuable trends to the users. In certain cases, the devices might just fetch information (like photographs or music) from the web to be displayed or played.

There are expected to be 25 billion such devices by 2025, and they are going to be incredibly cumbersome not just to manage, but to secure. It's worth understanding what's underneath the box. There are plenty of websites with news about devices getting hacked, but no one tells you how to do it. This post is mostly about the latter: we will break into a Pixstar photo frame, a highly rated IoT device.

At some level, I believe, "If I own the hardware, I must 0wn the software too!" Since these devices run some variant of Linux, it's easy to make them do things you like. Some aspects of computing machines have not changed substantially since their inception, and one of them is the booting process (fast boots aside!).

If you look at an old desktop, the first screen that pops up tells you about the POST check (video card, asynchronous communication, and other peripherals/ports) done by the BIOS, followed by the loading of your bootloader and then the operating system. Interesting security research continues to happen on securing this procedure against stealthy malware and rootkits.

We tap into an IoT device just as it is powered on and boots up, and communicate with its asynchronous communication port using a UART (universal asynchronous receiver/transmitter) device. We can do this over RS-232-style serial communication and hijack the device even before it loads the bootloader. For this, we need a USB-to-TTL adapter like the one shown below.

UARTs most commonly operate at 3.3 volts, but can also be found operating at other standard voltages (5 V, 1.8 V, etc.). You have to use a multimeter to find the voltages on the PCB/motherboard before you plug this device in, or you might damage your IoT device! Once you have the above components, you have the right hardware for the task.

Next, we look at the PCB, figure out the voltages, and connect the IoT device to your laptop using the USB-to-TTL adapter.

PCB of photoframe

The photo above shows the PCB connected to the USB-to-TTL adapter with wires.

You can expect to perform these same simple steps for almost all IoT devices before you get to the real deal of actually breaking in. These devices will almost always have serial communication ports for debugging purposes that you can tap into, unless the FCC decides to eliminate them with new rules (Google it to know more!). This is a bit more primitive than hacking into your wireless router (on the higher end), where you can use JTAG to do the exact same thing.

Next, install minicom on your Linux box and configure it to talk on the USB serial port. Once you have done this, power on the IoT device and watch the boot messages in minicom. Make sure your connections are good and the baud rate is correct, or you will see binary blobs on screen!
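Incidentally, the "binary blobs" symptom is easy to check for programmatically: boot logs are mostly printable ASCII, garbage is not. A quick heuristic sketch in Python (the 0.7 threshold is my own guess, nothing standard):

```python
def looks_like_wrong_baud(data: bytes, threshold: float = 0.7) -> bool:
    """Heuristic: boot logs are mostly printable ASCII; garbage is not.

    Returns True when fewer than `threshold` of the bytes are printable
    characters, tabs, or newlines -- i.e. the capture looks like the
    "binary blobs" you get with a mismatched baud rate.
    """
    if not data:
        return False
    printable = sum(1 for b in data if 32 <= b < 127 or b in (9, 10, 13))
    return printable / len(data) < threshold
```

Feed it a few hundred bytes captured from the port to decide whether to try the next standard baud rate (115200, 57600, 38400, 9600, ...).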

It seems this hardest part is actually the easiest one for the photoframe: keep pressing Esc and you will land at a command prompt before the bootloader is loaded. Following that, type bootcmd to run the bootloader.

0wned by yet another haX0r

Above is a shell on the device, from which you can mount a USB drive and run your own binaries or shell code as you wish.

I attended a seminar on the academic job search at Georgia Tech with an esteemed panel, and I thought I should share it with others who fancy their chances in academia!
Panelists:
Lance Fortnow
Nick Feamster
Taesoo Kim
Mustaque Ahamad
Moderator: Hyesoon Kim

I have briefly written up the points that were made during the discussion and the question-and-answer session below.

While writing your research statement, think about the spectrum of your research; you want to have good stories on the research side. When you ask for recommendation letters, give the writer an outline so that s/he can amplify your statements, rather than talk about things that might be totally different from what you expected. It is usually not a good choice to get a recommendation letter from famous people, as they write a lot of letters of recommendation, unless they have something special to say. Doing a postdoc is a good option, as it improves your maturity as a researcher and lets you get mentoring from your co-workers; you can think of it as a PhD without courses, and you should talk to your advisor about it.

You might already know the rankings of the schools; how do you choose which ones to apply to? Honestly self-assess where you stand: MIT, Stanford, Georgia Tech, etc. You should have good reasons to interview at a university; otherwise you should not interview there.

Assume that the teaching statement matters; in fact, assuming "everything matters" is a good attitude. Teaching is not just lecturing, but also TA hours and extra time spent on concepts. Mentoring an undergraduate while you are in graduate school, or teaching a course, is very good experience to share when you apply for academic jobs, as it demonstrates experience in handling students independently.

Many recruiters ask professors the same question: "Do you have someone who is really great?" Hence it is important to have engaging conversations with other professors while you are in graduate school, so that in the future they might have interesting things to write in your letter of recommendation.

To become a faculty member at a top college you have to do good, visible research, which might take time beyond your PhD, and cultivate strong references. You will have to take risks in research to publish at top (not just good) conferences! Network well at conferences and let people know who you are. It is a good option to ask your advisor and other people close to you in your area to introduce you to new opportunities.

The job talk will be the deciding factor, so practice it a lot, keep your energy level high, and demonstrate that you really want to join the university.

In your one-on-one discussions with faculty, you should be able to verbalize your research and interests in 15 minutes, so that you can hear the other side, connect with them about their own research, and possibly interest them in collaborations; hence you should be excited about your own research. You might be hanging out with these people for the next 20-30 years, which makes it important to be at your best so that they value your presence. Extend the feeling that you are a positive person to the staff and students. Dress formally, as it displays your sincerity and respect for the position you are trying to get. While visiting the school, convince others that you are special and different, able to connect across research areas, and try to make connections with them. Ideas that cut across research areas are the most important! Your research problems should sit at the intersection of talent, passion, and impact (all three). Remember, you will be your own CTO and CFO: spawn new ideas, engage in discussion, and let them know you!

When you are on the job market, you need a good attitude when talking with everyone at the university you visit. Try to find the niche of the department and shape your discussion around it, so that there are more chances of collaboration. You might need to act as if the job market is treating you well, so that things turn in your favor and you don't look like you are performing badly in the academic job hunt.

The last schools in a run of job interviews can be exhausting, so always make a well spaced-out schedule; as a corollary, never schedule two interviews in one week, and give yourself time to recover. Do your homework by reading the web pages of all the faculty you have meetings with; the embarrassing moments are when candidates come back with uninformed questions. Treat every place as your *only* interview, and don't be overconfident that you will definitely end up as faculty somewhere.

If you are rejected by a school after a job talk, it is imperative to improve your resume and take your CV to the next level before re-applying.

There are rare cases of systems people who created a startup and later went into academia, but this is highly unlikely and shouldn't be thought of as a probable career path while you are in graduate school.

I get plenty of emails asking questions about tweaking the 802.11 stack in Linux.
I spent a lot of time working on and understanding it in the hope of a publication, but unfortunately I never got a deployment on actual machines from which I could collect and analyse data. I wish I could have written something about modifying Qualcomm's firmware, as I did during my internship, but I can't talk about it for proprietary reasons.

Most of my understanding was developed by reverse-engineering hardware registers and the driver codebase. I am glad Atheros made the code open source so that it can be tweaked. Adrian Chadd (FreeBSD developer) and Felix Fietkau (OpenWrt) gave valuable input, as did some unknown IRC hackers (including Daniel Smith) who were wonderful at suggesting ideas while I tried modifications in the darkest of nights :-).

In this post I shall discuss some important things about 802.11 and provide a kernel patch. I have also included a Python library (tcpdump-like) that can parse the modifications made to the packet headers.

The Linux wireless stack currently does not expose much of the 802.11 PHY-layer information available from the hardware, because many of the flags and fields needed in the radiotap header are not standardized. It is also true that different wireless chipsets do not provide the same level of granularity or information, which is hard to address in the mac80211 kernel API.

The 802.11a/b/g/n ath9k driver codebase is massive compared to the older ath5k and MadWifi drivers and can be a pain to work with! I appended fields to the radiotap header, staying compatible with tools like Wireshark and tcpdump, to get per-frame information, rather than opening a /proc interface, which would add significant overhead on an embedded device if one had to sample at per-packet granularity.

There is plenty of information that the 802.11 driver keeps to itself but does not expose to userland. It is vital, and can be used to detect a lot of problems your wireless network might be facing without your knowledge.
For example, hardware frame-transmission timestamps are not provided to userland, although this information is present in the driver. The timestamps currently provided by tools like tcpdump and Wireshark come from the network stack rather than from the hardware device. Hence, they should not be trusted at microsecond granularity, nor compared with the timestamps provided by radiotap headers for received frames. There are additional points to consider when looking at hardware timestamps: 802.11n transmits A-MPDUs, and the hardware returns a callback only when transmission of the whole A-MPDU is complete. So you might see the same timestamp for more than one frame, and you have to check a related flag in the radiotap header to detect that condition!
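To make the A-MPDU caveat concrete: any analysis has to treat frames sharing a hardware timestamp as one transmission event. A small sketch, where the (timestamp, frame_id) tuples are a made-up representation rather than the actual radiotap layout:

```python
from itertools import groupby

def group_ampdu_frames(frames):
    """Group (timestamp_us, frame_id) records that share a hardware
    completion timestamp -- frames in the same A-MPDU all report the
    timestamp of the aggregate's completion callback."""
    frames = sorted(frames)  # order by timestamp
    return [
        [fid for _, fid in grp]
        for _, grp in groupby(frames, key=lambda f: f[0])
    ]
```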

One piece of PHY-layer information that can help in detecting microwaves and other non-WiFi interfering devices is the pair of CCK and OFDM error counters maintained by the ANI system in the driver. Their frequency increases in the presence of such devices, as the hardware fails to decode frames at the PHY layer. There are other error reports which, when turned on, increase the work DMA'd by the device into main memory and might degrade the router's performance, so use them with caution, or use oprofile to profile performance under high workloads. I have primarily worked with ARM-based embedded systems; if you are using an x86 machine, you don't have to worry about this.
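A crude way to use those counters: sample the cumulative CCK/OFDM error counts periodically and flag intervals whose delta jumps well above the running baseline. Reading the counters is driver-specific, so this sketch just takes the samples as input; the factor-of-five threshold is an arbitrary choice of mine, not something I have validated:

```python
def flag_interference(samples, factor=5.0, min_baseline=1.0):
    """Given cumulative PHY-error counter samples, return the indices of
    intervals whose error delta exceeds `factor` times the running mean
    of previous deltas -- a rough non-WiFi-interference heuristic."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    flagged, seen = [], []
    for i, d in enumerate(deltas):
        if seen:  # need some history before judging a spike
            baseline = max(sum(seen) / len(seen), min_baseline)
            if d > factor * baseline:
                flagged.append(i)
        seen.append(d)
    return flagged
```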

If one wants to know which bitrates were tried, the order in which they were tried, and the number of retransmissions each experienced, that information is also appended to the modified radiotap header.
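To illustrate what parsing such an appended field looks like, here is a sketch that unpacks a hypothetical rate/retry chain with Python's struct module. The layout (four one-byte rate-index/retry-count pairs, 0xff-padded when fewer rates were tried) is invented for this example; the real layout is whatever the kernel patch defines:

```python
import struct

def parse_rate_chain(blob: bytes):
    """Parse a hypothetical 8-byte rate/retry chain: four (rate_idx,
    retries) byte pairs, 0xff-padded when fewer rates were tried."""
    pairs = struct.unpack("8B", blob)
    chain = []
    for rate, retries in zip(pairs[0::2], pairs[1::2]):
        if rate == 0xFF:  # padding: no more rates in the chain
            break
        chain.append({"rate_index": rate, "retries": retries})
    return chain
```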

I have also written a complete parser, similar to Wireshark/tcpdump, for the additional fields appended to the radiotap headers.
I would be happy if someone actually collected real data using this patch, which could help us understand wireless networks better!
I have done some very basic data collection, the results of which can be found in my PhD presentation, which was also presented at IS4CWN 2013.

Well, many people are getting started as web developers these days and find it an amazing thing to work on (as a fun project)!

It's one of the coolest things to do, but there are certain things one might want to consider while doing it.

I got some experience in web design during my undergraduate years at IIT Madras. We developed an online notice board for the student community; later I was the web operations coordinator for our cultural festival, and during an internship part of my work was to expose our IP through a web interface.
This was in 2007-2008, when we used Perl/PHP and MySQL as the backend scripting languages and database. We built a server from scratch, used a LAMP stack, and ran dedicated servers for the college festival work. Many things have changed since then, which I learned only recently while helping an undergrad at GaTech with a financial startup (unfortunately, we didn't get funding for the idea), and I will compare them later in the post.
Although there are many ready-made themes available, I would like to share some thoughts on what one should consider while creating a good website. It's easy to just drop a lot of text onto a webpage, but it is equally difficult to manage and present it well.

The first and most important rule is the golden rule: everything should be at most three clicks away.
This helps users find information quickly without getting confused and lost in the site. It is violated by so many websites today that it is hard to keep track.

Organization of the webpage: the content should not be cluttered on the landing page, making it hard for the reader to find information. By dumping everything on the landing page, you make it hard for visitors to locate what they need, and they might as well leave. Keeping minimal information on the landing page is one of the best strategies, followed by startups looking to validate their ideas.
One should use a soothing color scheme that strains the users' eyes the least, and should always try to use sans-serif fonts (e.g. simple Helvetica or Arial) on a website. Note that Google follows both of these!

Unless you are building something really fancy, you should assume that the end user has minimal resources and capabilities in their browser. This means using as few libraries as possible, and making the fewest assumptions about what the remote browser supports. Think about someone in Siberia who wants to see your webpage with a Netscape browser and a 256 MB RAM card! Your webpage should render well there too (an extreme case, admittedly, but some variant of it is quite plausible).
A lot of people ignore this, but it is worth taking care of.

Digital media is an offshoot of print media, which was the primary means of communicating ideas for several centuries, and there are certain things it has inherited (everything before Web 2.0, at least). For example, lines should not be longer than 70-80 characters, so the mind can keep pace easily while reading. 720 x 1280 pixels was a very good resolution with the least strain on the eyes (imo), but as resolutions keep increasing over the years, one should always design a website with that in mind.

As I suggested, there are plenty of server-side scripting languages: PHP, Ruby, and Python (Perl is apparently outdated, and I guess it has performance issues too, because its usual server implementation forked a new process per request, apart from it being a hacker language).
There are a couple of environments I grappled with, spending about a day's worth of time on each: Node.js, Meteor, and Bootstrap (which is not exactly an environment, more like the old Yahoo YUI). Node.js and Meteor use JavaScript as the server-side scripting language (not intuitive to me when I first learned about it).
Apparently, current developers find JavaScript an awesome language (personally, I think it is a horrible experience and must be avoided; for one, it was created and released in ten days for a Netscape release back in 1995), as both server and client side can be written in the same scripting language, saving them context switches.
Node.js was created in 2009, and the main motivation for writing servers in JavaScript is that these frameworks use an event-driven programming model (it is well established that a web server's processor spends most of its time waiting for IO rather than doing other work). Google's V8 engine has been phenomenal for this kind of development: exposing its runtime on servers makes them quite fast (relative to other dynamically typed languages).
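The same event-driven idea can be demonstrated in Python with asyncio, which borrows the same model: IO-bound "requests" overlap on one thread instead of queueing, which is exactly the workload Node.js targets. A toy sketch:

```python
import asyncio
import time

async def fake_request(n, delay=0.1):
    # Simulates an IO-bound request: while this coroutine waits,
    # the event loop runs the other coroutines.
    await asyncio.sleep(delay)
    return n

async def main():
    start = time.monotonic()
    results = await asyncio.gather(*(fake_request(i) for i in range(10)))
    elapsed = time.monotonic() - start
    return results, elapsed

if __name__ == "__main__":
    results, elapsed = asyncio.run(main())
    print(f"{len(results)} requests in {elapsed:.2f}s")
```

Ten 0.1-second waits finish in roughly 0.1 seconds of wall-clock time, not 1 second, because the waits overlap on a single thread.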

Meteor.js is another framework, a bit more advanced and still in development (as of May 2014). It has a different philosophy, which goes beyond just writing everything in JavaScript: the framework wants the server and client to share responsibilities, unlike the old client-server model where the server sends both the HTML and the data to the client; instead, the client is smart enough to figure it out!

Bootstrap is a product new web developers can use to quickly build a website, like Drupal, Joomla and such, but it supposedly offers more, which helps give the website a better feel.
Although JavaScript has taken over the web-development landscape, I still feel it is not the best way to go. Recently, developers have ported real-time games to the browser using asm.js, which runs ported C++ code in the web browser, making it powerful enough to play games with minimal delay. I doubt these developers consider how much overhead a simple click in the browser costs the operating system (in terms of context switches and system calls), and whether this makes sense. The only thing JavaScript solves in this big picture is portability, but is it any good?

There might be another detail you get into: which web-service style to use, SOAP or REST? Depending on what your website is about, you might want one or the other. If you are just building a simple website for blogging, or simple pages, you might want REST, as it fits the modern view of the web well. If you are building something complex (like Amazon) and are perhaps committed to a Java-based solution (which is quicker in response time, being compiled code rather than dynamically typed JavaScript), then SOAP might be the better option. Both have strengths, and one cannot simply say one is better than the other without defining what you are trying to create.
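For the simple-website case, REST really is just HTTP verbs acting on URLs. The whole round trip can be sketched with Python's standard library; this throwaway local server and its /posts/1 resource are of course my own toy example, not production advice:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PostHandler(BaseHTTPRequestHandler):
    """A toy REST endpoint: GET /posts/1 returns a JSON resource."""
    def do_GET(self):
        if self.path == "/posts/1":
            body = json.dumps({"id": 1, "title": "hello"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch_post():
    # Port 0 asks the OS for any free port; the server runs in a
    # daemon thread just long enough to serve one request.
    server = HTTPServer(("127.0.0.1", 0), PostHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        with urlopen(f"http://127.0.0.1:{server.server_port}/posts/1") as resp:
            return json.load(resp)
    finally:
        server.shutdown()
```

A SOAP service wraps the same exchange in a WSDL contract and XML envelopes, which is the extra machinery you pay for (and sometimes want) in the complex case.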