Kun Xi <kunxi@kunxi.org>, https://www.kunxi.org/
Feed: https://www.kunxi.org/blog/feed/index.xml (updated 2016-12-08)
All rights reserved 2016, Kun Xi

Update LG G3 to Marshmallow
https://www.kunxi.org/blog/2016/09/update-lg-g3-to-marshmallow/index.html
2016-09-19

I like my LG G3 (D850, the AT&T variant) for its nice balance between features, build quality, and affordability. However, the AT&T integration makes little sense for me since I use Cricket Wireless:

Network tethering is disabled with "No AT&T SIM is found".

The address book flickers when opened, due to a failed AT&T address-book lookup.

No OTA software update, ever.

Last weekend I finally pulled the trigger: I rooted the LG G3, installed the latest TWRP custom recovery, and flashed the Fulmics 6.1 custom ROM; it felt like a new phone:

Root

All tinkering should start with a full system backup. It is essential to back up all the photos; you won’t get a second chance to take photos of your kids at two years old.

Then you need to enable Developer Mode for ADB debugging, so we can push packages to the internal storage. We also need to install the LG driver to communicate with the phone over the USB serial protocol used by Send_Command.exe later.

Download and unzip the LG Root package, and open a command prompt there. The first adb session requires you to confirm a consent prompt on the device, after which you should see the attached device listed.

I could not find the secret sauce of the magic Send_Command, but the Unix port sheds some light on it: it opens the serial port, COM3 in our case, then writes the command to the stream with crafted packing. I assume this exploits a vulnerability to gain root access.

Within the command prompt, we grant SuperSU root privileges and launch it at boot:

You may want to check out Magisk for a systemless root approach. At the time of writing, Fulmics 6.1 does not support a ramdisk compressed in gzip format, so it cannot load Magisk during boot; Fulmics 6.5 adds gzip support.

Learning React 2: Visual Components
https://www.kunxi.org/blog/2016/07/learning-react-2-visual-components/index.html
2016-07-06

In the last post, we went through the visual mockup and the state planning, and built the minimum scaffolding for the redux app. We will work on the visual components in this post.

Material Design

Google’s material design is not only the guideline for the Android platform; it has been adopted by more and more iOS apps, such as the SurveyMonkey app. Applying material design to our web app will strengthen the impression of a native look-and-feel.

Material UI is a nice react UI component library that implements material design, and it has a rich set of components to meet our needs. It depends on react-tap-event-plugin for the time being:

npm install --save material-ui react-tap-event-plugin

AwesomeBar

Recall from the visual mockup that the AwesomeBar contains the following UI elements:

It is worth noting the this scope change in event handling: we need to explicitly bind the component itself to the this pointer.

TimeTable

react-big-calendar is probably the most versatile calendar component off the shelf, though it is overqualified for our needs; thus I decided to build it from scratch. The TimeTable can be further decomposed as:

current date and the day of the week, called BigDate

the 24-hour timeline

the events

Clearly, the 24-hour timeline can be modeled as a two-column table with appropriate padding. It is a little tricky to align an arbitrary event div to the table cell. Thanks to the relative/absolute CSS trick, we can align the event from 8:30am to 10:00am as:

It looks boring at first sight, but many interesting things happen here; let’s discuss them in more detail:

The state is the source of truth, but it may not be convenient to consume. The props are a view of the state that facilitates visual rendering. Say we want to render BigDate with a different color if the date is today: the business logic to determine whether the state’s date is today SHOULD be consolidated in mapStateToProps; otherwise, our abstraction is leaking.

The function bindActionCreators binds store.dispatch to each action method, and mapDispatchToProps exposes all the actions in the App’s props. It is generally a good practice to pass only the minimum required actions down to the components, to avoid surprising state transitions. In our case, only the selectPlace action is passed to the AwesomeBar as props.

The connect function dynamically declares a new react component, Connect, with the original App wrapped inside. Connect implements various react callbacks, such as componentDidMount. In componentDidMount, it subscribes to the redux store to detect state changes, and triggers a redraw by invoking react’s setState method.

Update@Aug 25 My colleague Nate pointed out that for a deeply nested component, it is more convenient to inject the dispatch instead of binding actions. The component can then reference dispatch in its props, and trigger an action by dispatching the action payload.

This approach significantly reduces the boilerplate needed to pass actions down as props. On the other hand, it couples the react component with the redux store, which prevents further reuse, and there is no type check on the action.

Feed the Data

The webpack pipeline supports json-loader to import a json object just like a regular javascript module. We just need to configure webpack as:

Summary

Our web app is in good shape for an MVP release, but we haven’t touched any traditional front-end chores, such as stylesheets. Let’s talk about visual appeal in the next post.

Learning React 1: Getting Started
https://www.kunxi.org/blog/2016/07/learning-react-1-getting-started/index.html
2016-07-01

React has drawn lots of attention inside SurveyMonkey: it has been adopted by the mobile team and has become the de-facto UI framework for our next-generation user experience. As a backend engineer, react and its friends are just another layer of the frontend rabbit hole. In the “Learning React” series, I will build a small but full-fledged app with react to explore mobile web development.

Preface

The grand opening of the YMCA at Sammamish is probably the best thing that happened in 2016: since then I have exercised daily, dieted with calorie awareness, and lost 15 lb. In short, I have recharged myself thanks to the YMCA. The YMCA schedule, however, is merely a PDF file optimized for print, so I want to build a mobile web app, the YMCA Schedule, to present the schedule data in a mobile-friendly way:

Native app look-and-feel

Offline first

Personalization

Design Mockup

The react guide highly recommends a visual mockup to decompose the monolithic UI into many small components; more concretely:

The main UI has two components, the AwesomeBar and the TimeTable. The AwesomeBar is an instance of the AppBar with a drop-down menu to select the facility place. The TimeTable renders a daily view with events. The user may swipe left or right to pick a different date.

The internal state is also clearly defined:

the schedule data for all the places on all days of the week;

the facility place, called place hereafter;

the date

The Big Picture

In my humble opinion, react is a typical Facebook approach to the problem: reuse the existing programming model, but abstract away the complexity and performance bottlenecks with a new layer. With react, developers no longer need to manually manipulate the DOM to reflect state changes; instead, we just render the current state from scratch onto a virtual DOM, and delegate the diffing optimization to the react library.

The state change is typically managed by a flux implementation, such as redux. redux defines the following concepts:

action: a state change MUST be triggered by an action function, which returns a plain javascript object with the action type and metadata.

reducer: the reducer works as a state machine; it accepts the current state and the return value of an action, and infers the next state.

store: the store, configured with reducers, keeps the current state.

Furthermore, react-redux bridges redux’s state management and react’s visual representation: the Provider is instantiated with a store instance to observe state changes. Under the hood, it implements the react Component interface to trigger redraws.

Getting Started

Getting started with react and its friends is pretty overwhelming. First, we need react, react-dom, redux, and react-redux. Since we are very likely to opt in to JSX and ES6 syntax for concise code, we also need to configure webpack with babel to compile, bundle, and hot-module-reload (HMR) the web app during development. If you need to integrate an existing design, you might also need a CSS processing pipeline, such as css-loader, sass, and postcss.

To be honest, I never fully understood how webpack works, so I usually copy the webpack.config.js from a redux example, such as this one.

A typical redux app may look like this:
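A sketch of the layout (the exact tree is assumed; the file names follow the descriptions below):

```
src/
├── constants/ActionTypes.js   # action type constants
├── actions/index.js           # action creators
├── reducers/                  # reducer functions
├── components/                # dumb visual components
└── containers/                # smart components
```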

constants/ActionTypes.js defines all action type constants. In retrospect, I highly recommend using constant aliases instead of string literals for the action type: it is much easier to catch an undefined variable during startup than to wade through call stacks. If you have lots of actions to create, you may consider opting in to Flux Standard Action (FSA) and using redux-actions.

actions/index.js defines all actions which can modify the state. I am convinced that a central repository for all the actions is easier to maintain.

reducers/ defines all reducer functions. It is recommended to split independent state changes into isolated reducers for better maintainability.

components/ defines all the visual components. They SHOULD be dumb; all business logic SHOULD sit in a container.

containers/ defines all the smart components. Not only do they compose multiple components, they also hold the business logic to deduce the control signals for rendering.

Outage Post-Mortem: Poisonous Cookie
https://www.kunxi.org/blog/2016/06/outage-post-mortem-poisonous-cookie/index.html
2016-06-07

TL;DR: the standard Cookie library cannot handle a malformed cookie persisted by a 3rd-party javascript library due to this bug, which broke our authentication system.

Recently, SurveyMonkey Contribute (SMC hereafter) had an outage: a group of users could not log in or take surveys on our site. They were bounced back and forth between the home page and the login page until the browser got tired of the game and threw in the towel.

Before we jump to the technical details, let me explain how the authentication works in SMC:

We use pyramid’s session to track the authentication: the session data is persisted to redis, and the session cookie is set in the browser after the user successfully logs in.

As a SurveyMonkey product, we also honor the auth cookie issued for *.surveymonkey.com on the login page: we automatically log the user in and create the session if she can be authenticated via the auth cookie.

It looked like the authentication by auth cookie worked as expected, but the session cookie was somehow corrupted. When the user was redirected from the login page to the home page, the home page failed to authenticate the user with the session cookie, so it redirected the user to the login page again, which caused the redirect loop.

Our first instinct was that the cookie was messed up on the client side, as the issue only reproduced for specific users; it simply did not reproduce on my machine. But my colleague Nate managed to reproduce the issue in Chrome’s incognito mode and also in Firefox. No.

Our second guess was a redis operation issue. No: we verified that the session data was correctly persisted to redis.

We need to dig deeper.

Debug in a sandbox

Luckily (or unluckily?), Nate also encountered this issue in the testing environment, so we could debug this issue in a sandbox with the liberty to poke around.

First we replicated the virtualenv in the testing environment, then started a paster instance with the same app.ini but bound to an unused port:

Thanks to the Chrome Developer Tools, we can replay the request in curl: click the Network tab in Chrome Developer Tools, right-click the request to replay, and choose Copy as cURL from the context menu, as shown below:

Copy as cURL in Chrome Developer Tool

And in another terminal, run the curl command with the overridden endpoint and host header to hit the paster instance in our sandbox:

This piece of code is how beaker loads the session key by parsing the cookie header, and we found that Cookie.CookieError was always raised during the redirects! Since SignedCookie is a thin wrapper around Cookie.BaseCookie, we can reproduce the issue as:
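A sketch of the repro (tracking[id] is an assumed stand-in for the actual 3rd-party cookie name, and beaker.session.id for our session cookie; on Python 3 the module is http.cookies, and the parser silently stops at the bad pair instead of raising):

```python
try:
    from Cookie import BaseCookie, CookieError        # Python 2
except ImportError:
    from http.cookies import BaseCookie, CookieError  # Python 3

# A well-formed header parses fine.
jar = BaseCookie()
jar.load('beaker.session.id=abc123')
assert jar['beaker.session.id'].value == 'abc123'

# One malformed pair poisons the whole header: Python < 2.7.10 raises
# CookieError, while Python 3 stops parsing at the bad pair. Either way,
# the perfectly valid session cookie that follows it is lost, and
# authentication fails.
jar = BaseCookie()
try:
    jar.load('tracking[id]=42; beaker.session.id=abc123')
except CookieError:
    pass  # Python < 2.7.10
assert 'beaker.session.id' not in jar
```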

These constraints are not enforced by many browsers, according to Wikipedia. The issue is acknowledged by the core python developers, and fixed in python 2.7.10. Later, my colleague Junkang pointed out that the bug is also fixed in beaker 1.8.0.

Lessons learned

Progressive stack upgrade

We take a don’t-upgrade-until-it-breaks approach for our stack, which incurs technical debt over time. We could potentially have avoided this outage if we had upgraded the stack more aggressively. We plan to upgrade our stack routinely and progressively, to strike a balance between the cutting edge and stability.

Audit the 3rd party javascript libraries

We integrated a handful of 3rd-party javascript libraries into our web site with great trust. Just like our own code, 3rd-party software may have bugs or overlook corner cases. That not only impacts our customers’ experience, but also opens another attack surface from the security perspective. It is not trivial to audit a minified, uglified 3rd-party javascript library, but at least we can:

List all 3rd party javascript libraries we consume and their dependencies.

Identify how they are used, and include each library only on the pages that require its functionality.

Install Windows 10 on ThinkPad X201
https://www.kunxi.org/blog/2016/03/install-windows-10-on-thinkpad-x201/index.html
2016-03-06

My wife loved her ThinkPad X201: kind of outdated, but still functional and light enough to carry. Recently, she started to complain about the sluggishness of the system, so I decided to throw the following components at it to revive the machine:

With the Windows installation media creation tool, you can get a bootable USB drive for a clean install. About 20 minutes in, I was challenged for my Microsoft Account credentials. The password simply did not work, even though I was able to log in to my Microsoft Account successfully on another machine! So I rebooted the machine and wished for better luck on the second attempt.

The system booted into a black screen with a mouse cursor. There are basically two interpretations of this symptom.

The official answer suspects a display driver conflict, and the proposed fix is to boot into safe mode and resolve the driver issue in the device manager. In this particular situation, I needed to reboot the machine with the Windows 10 installation media:

Click “Repair your computer” in the welcome screen

Click “Troubleshoot”

Click “Advanced Options”

Click “Command Prompt”

In the command line, execute the following command:

bcdedit /set {default} safeboot network

This makes the next reboot enter safe mode with networking; see here for more details about bcdedit.
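Once the safe-mode session is no longer needed, the flag can be cleared from the same elevated command prompt so the machine boots normally again:

```
bcdedit /deletevalue {default} safeboot
```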

It did not work: on the next reboot, Windows complained that the system could not boot into safe mode before the installation completed.

That left no choice but to restart from scratch. This time, I successfully logged in to my Microsoft account, and saw the blue screen with a gradient:

It’s taking a bit longer than usual, but it should be ready soon.

On the internet, some argued that you should wait, maybe even overnight; others said it is OK to reboot the machine, and the installation will pick up where it got stuck last time. After the reboot, the machine booted into the black screen with a mouse cursor, again.

But this time, I was able to summon the Task Manager via Ctrl-Alt-Delete, and then run a new application from the File | Run new task menu. Running explorer.exe brought up the familiar Windows shell, but the start menu was not responsive, and neither was Windows search; Windows complained that ms-settings:display was not recognized; and the device manager, invoked as devmgmt.msc from the command console, showed the display adapter as “Microsoft Display Driver”. Overall, the system was in a very weird state.

The official answer is to run a full system file check, sfc /scannow, and run the following command if any errors are found:
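The repair command in question is DISM’s RestoreHealth, the standard pairing Microsoft documents for sfc failures:

```
DISM.exe /Online /Cleanup-Image /RestoreHealth
```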

Note: the above commands MUST run with elevated privileges. Since we don’t have a start menu, we can navigate in explorer to the Windows system32 directory and right-click cmd.exe, choosing Run as administrator.

Both commands failed; more concretely, dism.exe failed with Error 1726.

Then Windows Update kicked in and rebooted the machine. On the next reboot, the familiar start menu was back, the display adapter was recognized as Intel HD Graphics, and everything just worked.

In retrospect, I appreciate that Windows 10 tries hard under the hood to get the system back on its feet, but I do wish it provided more introspection for better troubleshooting.

Reinstall Netgear N900 Firmware with a Mac
https://www.kunxi.org/blog/2015/12/reinstall-netgear-n900-firmware-with-a-mac/index.html
2015-12-10

In the fierce wind storm last night, we had an intermittent blackout, and I found that the router, a Netgear N900 (aka WNDR4500), had stopped working. According to the manual, the power LED blinking slowly in amber indicates that the firmware is corrupted, which left me only one option: reinstall the firmware.

First, download the latest firmware from the product page, R4500_V1.0.0.4_1.0.3.chk at the time of writing.

The KB article only shows instructions for Windows, but it hints that the firmware reinstall is essentially done via tftp. Here are the steps with a Mac, assuming your router binds to 192.168.1.1:

Turn the router off for 10 seconds.

Connect your Mac and the router with an ethernet cable, and configure your Mac’s IP address manually as 192.168.1.55.

Turn the router on, run ping 192.168.1.1, and wait for a response.
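Once the router answers the ping, the firmware can be pushed with the stock tftp client in binary mode (a sketch; the .chk filename is the firmware downloaded earlier):

```
tftp 192.168.1.1
tftp> binary
tftp> put R4500_V1.0.0.4_1.0.3.chk
```

Then give the router a few minutes to flash the image and reboot on its own.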

Let’s Encrypt with SaltStack
https://www.kunxi.org/blog/2015/12/lets-encrypt-with-saltstack/index.html
2015-12-07

Let’s Encrypt aims to provide everyone a free, automated, and open SSL/TLS solution to secure the Internet. Here is how some other free or low-cost Certificate Authorities (CAs) compare to Let’s Encrypt:

CAcert is also free, but the root certificate is NOT trusted by most browsers.

StartSSL is free for non-commercial use only, and revocations carry a handling fee of US $24.90.

The commercial DV certificates usually cost from $1.99(WoSign) to $9.00(Comodo SSL) annually.

Let’s Encrypt also tries to simplify the SSL certificate enrollment process with a one-liner command, letsencrypt-auto. Based on your server setup, you may choose any of the following options (note: letsencrypt-auto MUST run on the production server):
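For instance, one such option is the manual plugin, which prints an ACME challenge for you to serve yourself (a sketch; the domain follows this site):

```
./letsencrypt-auto certonly --manual -d kunxi.org
```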

Under the hood, letsencrypt-auto instantiates a virtualenv at $HOME/.local/share/letsencrypt, followed by pip install letsencrypt; then it offloads the heavy lifting to bin/letsencrypt in the virtualenv.

The last command exposes a temporary URI as a one-time token, such as /.well-known/acme-challenge/_-so7o7rCJZAHjqwZTzVMk6guh83P0vyKkKQBb5mxGc. If the URL is publicly accessible, the domain is validated. You may then find the private key, privkey.pem, and the fully chained cert, fullchain.pem, in /etc/letsencrypt/live/kunxi.org/, and use them in the nginx server context as:
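A sketch of the server block (only the TLS directives are shown; other settings omitted):

```nginx
server {
    listen 443 ssl;
    server_name kunxi.org;

    # The Let's Encrypt key pair from /etc/letsencrypt/live/
    ssl_certificate     /etc/letsencrypt/live/kunxi.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kunxi.org/privkey.pem;
}
```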

Just put the regex that matches the .well-known directory before the catch-all hidden-file regex. According to the nginx docs, the rules are:

nginx first searches for the most specific prefix location given by literal strings regardless of the listed order.

Then nginx checks locations given by regular expression, in the order listed in the configuration file. The first matching expression stops the search, and nginx will use that location. In our case, the regex ^/.well-known/.*$, listed first, applies.

If no regular expression matches a request, then nginx uses the most specific prefix location found earlier.
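Concretely, the ordering plays out as below (a sketch; the webroot path is an assumption):

```nginx
# Listed first, so it wins for ACME challenges:
location ~ ^/.well-known/.*$ {
    root /var/www/kunxi.org;
}

# The catch-all that denies hidden files, listed after:
location ~ /\. {
    deny all;
}
```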

Let’s encrypt with SaltStack

letsencrypt-auto is pretty handy for a one-time SSL cert deployment, but automating the SSL cert management is more desirable from the DevOps perspective. Let’s get the following tasks automated via SaltStack:

Install Let’s Encrypt

Obtain a SSL cert on demand

Renew the cert monthly

Install Let’s Encrypt

Since the letsencrypt-auto installation is kind of leaky, I will take the pip and virtualenv approach:
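A sketch of the state file, using salt’s virtualenv.managed and pip.installed states (the sls file name is an assumption; the id and paths follow the description):

```yaml
# letsencrypt/init.sls
letsencrypt:
  virtualenv.managed:
    - name: /opt/letsencrypt

  pip.installed:
    - name: letsencrypt
    - bin_env: /opt/letsencrypt
    - require:
      - virtualenv: letsencrypt
```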

With the above snippet, we create a virtualenv at /opt/letsencrypt, then install the letsencrypt python package with pip install letsencrypt in that virtualenv, specified by bin_env. The id letsencrypt identifies both the virtualenv and the pip state; it is merely a personal preference to limit global names.