Stuff Yaron Finds Interesting
Technology, Politics, Food, Finance, etc.

How do we exchange identities in Thali without making our users hate us?
In Thali identities are public keys. But typing in a 4,096-bit RSA key or even a 512-bit EC key isn’t exactly easy. So how do users securely exchange their keys? Our original approach was to use QR codes. But lining up the phones, scanning the values, etc. is all a serious pain. So if ultimate security isn’t a requirement, our backup plan is to use a variant of Bluetooth’s Secure Simple Pairing with numeric comparison, which itself is just an implementation of a coin-flip or commitment protocol. The main downside of this approach is that it leaves a 1 in 1,000,000 chance of an attack succeeding.

Alice and Bob both have generated public keys on their cell phones. They now want to securely exchange those public keys in a manner that will prevent an unknown key-share attack (UKS). That is, so that Mallory can’t show up and man-in-the-middle Alice and Bob such that Alice thinks Bob’s key is PKM1 and Bob thinks Alice’s key is PKM2, where PKM1 and PKM2 are public keys from key pairs controlled by Mallory.

A constraint in this specific scenario is that there is no cellular or Wi-Fi infrastructure. The key exchange has to occur using local radios (e.g. BLE, Bluetooth, Wi-Fi Direct, etc.). Note, nothing prevents exchanging keys over the Internet, it’s just that for this specific scenario we need something that will work when there is no Internet connectivity.

In our threat model we make the following assumptions:

Mallory knows Alice and Bob’s public keys ahead of time. They are, after all, public.

Mallory knows exactly when and where Alice and Bob will exchange their public keys and has the correct equipment to modify any signals in any way she wants. Conceptually one can imagine a wire running from Alice’s phone to Mallory’s ‘hack box’ to Bob’s phone such that Mallory can alter and delay any transmission at any time she wants.

We assume Mallory can block all radio transmissions at any time for as long as she wants. Therefore we explicitly do not attempt to prevent denial of service attacks, only UKS attacks.

We assume Mallory cannot change the software on Alice or Bob’s phones.

Our goal then is for Alice and Bob to exchange their keys without Mallory being able to successfully launch her UKS attack.

Normally the way we deal with this scenario isn’t with a radio like BLE, Bluetooth or Wi-Fi. We use QRCodes. We would have Alice display her key on her phone and Bob would take a picture and then vice versa. Mallory does not have a good way to alter visual signals so this mechanism is generally considered secure. However it’s also a serious pain in the tuchus. Try it sometime (we have). It’s really annoying. And for some scenarios the annoyance level is high enough and the security required low enough that it’s not workable.

So we need a way to prevent the UKS attack from succeeding that doesn’t involve people taking pictures. Our working assumption however (possibly wrong, which is why we are bringing it up) is that a UKS can only be prevented if there is some secondary channel that cannot be easily faked by which the key exchange can be confirmed to have worked. Anything involving a radio is automatically assumed to be compromised by Mallory and pictures are already out for the reasons described. So what this leaves us with is humans.

Our assumption then is that to prevent the UKS attack, while still having an acceptable UX, we need the key exchange process to end by having the humans involved look at their phone screens and confirm… something. Ideally the something would be the same number showing on both of their screens. Furthermore to meet our UX requirements that number cannot be much more than 6 numeric digits. In other words the end of the exchange is “Hey Bob, does your phone show 123456? Yes Alice, it does” then both Alice and Bob hit confirm on their phones.

The approach we want to use is a coin-flip or commitment protocol. It’s as old as the hills but the nice folks in Bluetooth land did a good job of documenting it so we are going to work off their work. Specifically we are using the Secure Simple Pairing (SSP) algorithm with numeric comparison taken from pg 616, section 2.3.5.6.2, figure 2.3 of Bluetooth Core Spec 4.2 (defined here).

There are however a few minor modifications we will need.

In the Bluetooth protocol the two sides exchange public keys in step 1 of their algorithm in order to perform a Diffie-Hellman key exchange. That is not needed in our case. Rather we will just exchange public keys but not generate the Diffie-Hellman key.

In addition the Bluetooth protocol, due to characteristics of the Bluetooth transport, has an identified ‘initiating device’ and ‘non-initiating device’. In our case we can’t be sure (and don’t want to care) who initiated because both users could have switched their phones into pair mode at the same time. To deal with this we will define a canonical ordering on public keys which will allow both users’ devices to determine who the ‘leader’ (a.k.a. initiating device) is without any further communication. The selected leader will be responsible for calculating the confirmation value.

We intend to support RSA in preference to EC. Our reasoning is that there are still open issues regarding the security of the curves selected for EC and there has been substantially more work done on RSA attacks than there has been on EC. So we tend to think RSA (at key sizes of at least 2K) is more trustworthy than EC.

The Bluetooth algorithm however is defined only for EC.

For purposes of picking a leader when RSA is used we will compare the moduli of the public keys as if they were integers and the key with the smaller modulus will be picked as the leader. For purposes of including RSA public keys in the hash functions we will encode both the modulus and the exponent as ASCII strings encoding their integer values, separated by a ‘.’, as given in the HTTPKEY URL Scheme.
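To make the ordering and encoding concrete, here is a minimal Python sketch. The function names and the toy key values are mine, not Thali’s actual code; the encoding (decimal modulus, a ‘.’, then the decimal exponent) follows the description above.

```python
def encode_rsa_key(modulus, exponent):
    # Encode an RSA public key as the ASCII decimal value of its modulus,
    # a '.', then the ASCII decimal value of its exponent, as described
    # above (per the HTTPKEY URL scheme).
    return "%d.%d" % (modulus, exponent)

def pick_leader(pk_mine, pk_other):
    # Canonical ordering: compare the moduli as integers.
    # The key with the smaller modulus belongs to the leader.
    return pk_mine if pk_mine[0] < pk_other[0] else pk_other

# Toy example with absurdly small "keys", each a (modulus, exponent) pair:
alice = (0xC7, 65537)
bob = (0xB3, 65537)
print(encode_rsa_key(*alice))   # "199.65537"
print(pick_leader(alice, bob))  # bob's key: it has the smaller modulus
```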

For those that don’t feel like downloading the enormous PDF and looking up the table in question I have summarized the algorithm below.

1. Use a cryptographically secure random number generator to calculate a 128 bit nonce, RNmine.

2. Send public key PKmine and simultaneously wait to receive public key PKother.

3. If the modulus of PKmine, treated as an integer, is greater than that of PKother, go to step 11.

4. Calculate Cb = AES-CMAC_RNmine(PKmine || PKother). That is, generate an AES-CMAC where the key is RNmine and the message is a UTF-8 string encoding PKmine concatenated with PKother. Both public keys are presented as strings as given in the previous section.
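For what it’s worth, the commitment in step 4 can be reproduced with any off-the-shelf AES-CMAC implementation. A minimal sketch using the Python cryptography package, with made-up key strings:

```python
import os
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

# Step 1: a 128 bit random nonce, RNmine
rn_mine = os.urandom(16)

# The two public keys as "modulus.exponent" strings (toy values)
pk_mine = "199.65537"
pk_other = "179.65537"

# Step 4: Cb = AES-CMAC_RNmine(PKmine || PKother)
c = CMAC(algorithms.AES(rn_mine))
c.update((pk_mine + pk_other).encode("utf-8"))
cb = c.finalize()   # 16 byte commitment value to send
```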

The core of a commitment protocol is that the attacker has to commit to their keys before learning anything useful. So in practice this means that Mallory has to pick PKM1 and PKM2 to give to Alice and Bob ahead of time. So the attack boils down to: given that PKM1, hashed with Alice’s information, produced some 6 digit hash code, will PKM2, hashed with Bob’s information, produce the very same code?

Another way to think of this is that Alice uses PKM1 and gets some 6 digit hash code N. So now the question is: what is the probability that PKM2, when used by Bob, will produce the same N that Alice got? If we assume that hash code results are evenly distributed for any arbitrary input, then the hash code PKM2 produces is itself evenly distributed. In other words, this is the same as saying we have a 1,000,000 sided die (with a 6 digit hash code we have 1,000,000 possible values): what’s the chance that we will get N on that die?

That is, of course, 1 in 1,000,000.

So, yes, this means that if we reduced the hash code size to 5 digits then the chance of a successful attack is 1 in 100,000 and if we choose 7 digits it would be 1 in 10,000,000.
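Stated generally (still assuming hash outputs are uniformly distributed):

\[ P(\text{successful UKS attack}) = \frac{1}{10^{d}} \]

where d is the number of decimal digits in the displayed check value, giving 10^-5, 10^-6 and 10^-7 for 5, 6 and 7 digits respectively.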

Buying a sit-stand-walking desk
My job has increasingly become almost completely coding focused which means I’m sitting, a lot. I need to get up and move. Knowing my personality I decided the right way to do this is with a sit/stand desk using a treadmill. But I also need to be able to sit and I don’t have the room to move the treadmill around. So I’m buying the 72 inch iMovR Omega EVEREST desk which has enough room to put the treadmill and my chair next to each other. I’m picking the TR1200-DT3 treadmill. I’m adding an ERGOTRON LX HD Sit-Stand Desk Mount LCD Arm and its lighter non-HD sibling. I’ll also need to pick up two VESA mounts for my Apple hardware. Apparently being healthy requires breaking the bank. See below for the absurdly painful process by which I figured this all out.

I sit. A lot. For a solid 8 hours a day. I used to at least wander around when I was on calls but I do a lot fewer calls these days and I try to use video where I can. Which means I’m leashed to the desk all day long. I need to move!

The obvious first step was to get a standing desk. But I know me. There is no way in heck that I’m going to stand around using the desk for hours at a time. It’s just not my personality. Sitting for hours at a time I can handle. But standing for hours at a time? Forget it. My feet ache just thinking about it. I need to move!

So the obvious solution was to get a walking desk. That is, a standing desk with a treadmill underneath it. But this quickly brought up another problem - I’m not going to walk all day either! I can easily imagine myself walking a lot more than standing but neither is going to be a total substitute for sitting. So I need a setup where I can sit part of the day and walk part of the day.

I rejected a bunch of options (see A↓) before deciding on just getting a 72 inch sit/stand desk. The desk is wide enough to fit a treadmill next to my office chair while still being small enough to fit in my office.

Figuring out that the desk, treadmill and chair would fit in my weirdly shaped office (with three doors!) was fun, see B↓ for notes on the open source software I used, Sweet Home 3D.

You can see my complete list of sit/stand desk requirements at C↓ and see a bunch of desks I evaluated at D↓. But something funny happened that ended up changing which desk I wanted. There is a company called iMovR who owns a company called Work While Walking which has a show room in Bellevue, Washington near where I live. So I got to visit them and try out their desks. When I went there I was pretty sure I wanted to buy the ThermoDesk ELITE. But I ended up really wanting the Omega EVEREST. Or, perhaps it’s more accurate to say, I was sold it by, I believe, Ron Wiener, the CEO of iMovR who was in the office that day. But just because he is a good salesman doesn’t make him wrong. :)

What makes the Omega EVEREST different than all the other candidate desks I looked at is its keyboard tray. They actually cut out a part of the desk and put in an inclined tray. This turns out to be a big deal because it means that while you walk the keyboard is at a very steep angle. This lets you type with your arms naturally hanging in front of you. This reduces the pressure on the arms and also makes it easier to hold on to the desk while walking. I tried it out using a Microsoft Sculpt keyboard and it was pretty awesome. From a stability perspective it’s also a bonus because you are walking closer to the cross bar of the desk, which is the center of gravity.

That having been said, it is not perfect. During “normal typing” the desk did not move. But as soon as I introduced any shaking at all the desk shook pretty badly. The desk I was using was on a flatter carpet than I have at home (meaning my office could be even shakier) but the trial desk didn’t have any heavy items on it (which ideally would make it more stable). So really, I have little or no idea how the desk will behave in real world conditions. So much for a “try out”.

In addition the Omega EVEREST is missing things that other suppliers provide. The main one is a cable management system and built in power strip. Apparently they are evaluating some solutions in this area but don’t expect to have anything for at least another month or so.

But I decided to take the risk and put in an order for the 72 inch Omega EVEREST.

A treadmill for a walking desk is not your normal treadmill. First, you want it to be small. Most normal treadmills are fairly large beasts with a waist high stand in front to display status. A walking treadmill needs to fit neatly beneath a desk. Second, you want it to be slow. This is not an exercise treadmill. The goal is to work while walking. From what I’ve read most folks can’t type sanely much above 2 MPH. Some folks can do phone calls and such at up to 4 MPH. But the point is I need a treadmill specifically designed for walking desks. So the qualities I am looking for are:

Can hold at least 300 lbs (no, I don’t weigh even close to that but one wants room to grow =)

Is 20 inches wide (more space means an easier gait left to right)

Should be at least 50 inches or so long (enough space for a good gait front and back)

Is very quiet

Requires very little maintenance

I have a list of treadmills I evaluated in the Appendix, see E↓. But unless I’m missing something the only serious contenders are all from LifeSpan, specifically the TR1200-DT3 and the TR5000-DT3.

TR1200-DT3 My impression from various sites is that this is the workhorse of walking desk treadmills. Lots of people use it and various folks actually white label it. It has a 20 inch wide belt and when I tried it, it felt solid and sounded very quiet. About the only negative issue I have with it is that you have to lubricate it every 40 hours of use or so. This seems to be a fairly quick process but I do have to remember to do it.

TR5000-DT3 This costs $1000 more than the TR1200-DT3. It looks cooler with aluminum step strips and is rated for more weight and longer use times. It also has a built in lubrication mechanism which means it should only be lubricated about once every six months. I tried it in person and it does feel a tiny bit better in terms of tread than the TR1200-DT3 but not enough to be worth the price difference. But the real issue is the fan. It has a fan that has to keep running for 15 minutes after turning the unit off. In a normal office I doubt you would hear it, it’s really quiet. But I am in a home office and it would drive me nuts to be sitting at my desk listening to this silly fan for 15 minutes. It’s not a huge deal but I don’t want to pay $1000 for the privilege of listening to this fan!

I have two big monitors, one is a 27 inch Mid-2010 iMac and the other is a 27 inch Cinema Display. I like them to be directly in front of me in a slight V shape. And I’ll want that set up both when I’m standing and sitting. After a little math I figured out that I need two monitor arms. The gory details are explained in F↓. In the end I decided to go with one ERGOTRON LX - HD Sit-Stand Desk Mount LCD Arm and its lighter non-HD variant. The HD is for my iMac and the non-HD is for my Cinema Display. As a side note, in the future I hope to get a 40 inch 4K (or higher) monitor like the Philips BDM4065UC; I checked and it weighs about 19 lbs. So even my non-HD arm will be able to handle it no problem. So hopefully I can re-use at least one of the two arms when I upgrade.

I also have to buy VESA mounts from Apple so I can hang my machines off the arms since the iMac and Cinema Display don’t come with VESA-compatible mounts.

A. Options I considered and rejected for how to switch between sitting and walking

A monitor pole with keyboard tray One can now buy what are effectively floor mounted monitor poles (like this one) with attached keyboard trays. The idea is that the tray can be lifted to a standing position or lowered to a sitting position. In theory I could place the pole in front of the treadmill and then rotate the monitor/keyboard tray 90 degrees to sit. Getting a pole that could handle both of my monitors was a challenge since most can’t handle the weight. Most of the poles have leg arrangements that I don’t think would work well with a treadmill. Most didn’t rotate at all but focused on just moving up (standing) or down (sitting).

Moving the treadmill out of the way Another option is to have a single desk and just physically move the treadmill when I’m not using it. This turns out to not be workable because my office is really small (about 73 sq ft) and so there just isn’t room to drag the treadmill around. Also dragging the treadmill is a real pain since it’s reasonably heavy and it has a controller and power attached to it that I’d need to be careful of when moving it. In fact, my office is so small that even if the treadmill folded up against the wall there wouldn’t be room to push it from under my desk.

Use a ball chair I can buy a ball chair and sit the ball chair directly on the treadmill. This could work in theory but in practice I’d hate it because there is no back support. There are ball chairs with back support but they come with bases that won’t fit comfortably on the treadmill thus defeating the purpose of the exercise.

Use a stool There are stools, like TreadStool, that are specifically designed to be used on treadmills. But the back support sucks and I have a really awesome Herman Miller Aeron that I love and want to sit in if possible.

Raise the floor I’ve actually seen this done in the real world. You put down flooring around the treadmill and desk so that the floor is level with the treadmill. Then you can roll a chair on and off the treadmill. In my case I have my Aeron and its legs are about the same width as the railings on the treadmills. This means that if I move even a little, the chair will fall off the railings onto the treadmill. The TreadDesk Treadmill, btw, would be perfect for this setup since it doesn’t have any railings. But honestly, getting enough material to build up a six inch or so “fake” floor is more effort than I wish to expend.

My office is really small. How could I be sure that the desk, the treadmill and my chair would all fit in without blocking any doors? Historically I would solve this problem using some graph paper and cut outs. But hey, it’s 2015, let’s use some of that computer stuff! So I downloaded Sweet Home 3D. This is open source software that lets one easily model spaces. It has tons of super powerful features and can do 3d renderings and lots of other stuff I never figured out and didn’t want to use. What I did need though was a model of a treadmill. I had to go to their Import Models page and download a few furniture libraries to find what I wanted. I actually first tried the Free 3D Models page but never successfully managed to import any of the models individually.

The only other trick I needed was that when defining the shape of a room I needed to start with Plan/Create Walls and NOT Plan/Create Rooms. The other way was really painful.

Other than that the program worked as advertised and I was very quickly able to enter the measurements for the room, the doors, the windows and the furniture. And it showed that I could actually get a 72 inch desk into the office space along with the treadmill and the office chair and not block any doors and still leave myself space to walk around. Yeah!!!!

60 inches between the table legs. My Aeron is about 30 inches wide and the treadmills I’m looking at are also around 30 inches. So 60 inches gives enough room for everything to sit together. More is better. Typically table tops with legs this wide are 72 inches long.

30 inches of depth. I’ve seen 24 but that is a bit narrow for my taste.

Electric motors with memory. I am too lazy to crank and I like the idea of setting my preferred position and then being able to automatically return to it.

An excellent warranty.

Maximum height of at least 46 inches. This actually depends on the design of the table. For a normal table I need to have my hands horizontal and account for the 6 inches or so that the treadmill will raise me. So in my case that reaches around 46 inches. Note however that many tables top out at 45 inches. This is bad because unless the table has 4 legs, the higher the table is the less stable it will be, and the least stable position is at its maximum. I don’t want the table swaying while I’m typing and walking. So I actually want a height even higher than 46 inches, not because I require it to position the table top properly but because it means that when I’m at 46 inches I haven’t topped the table out and thus made it maximally unstable.

Minimum height of 24 inches or so. I really like the table to be low so I can type more naturally. The bottom of the table top on my current table is 25 inches by way of comparison.

There are nice to haves like a good power management set up or being really pretty but those just aren’t as important.

Ergo Depot Jarvis This uses the Jiecang base and it has an awesome price. The company is actually based in Portland so it’s not out of the question for us to drop by and try out a desk.

Ergoprise S2S Height Adjustable Standing Desk This is apparently another Jiecang base. So presumably if the base is o.k. then the desk should probably be o.k. It’s certainly a great price. I’d love to try it out. The great news is that they do have a showroom. The bad news is that it’s in Texas.

Ergoprise Uprise Standing Desk I don’t know which base this desk has but its measurements seem right. Again, the show room is in Texas.

Humanscale Float It goes up to 47 inches which just makes the bar. Its lowest setting is 27 inches but that’s o.k. because it has a keyboard tray which lets the keyboard be lower. The legs appear to be 60 inches wide. So this could work. But it costs around $2,300!!! Given how much time I’m going to spend with this desk price isn’t the end-all be-all but I’d like to see if I can do something cheaper.

Human Solution Uplift According to Work While Walking this desk uses the Jiecang base which they say isn’t very stable at high heights. But overall it looks like a reasonable option. My big issue is, I can’t try it out!

NewHeights Elegante XT I couldn’t find any measurements for the distance between the table legs but it would appear to be at least 65 inches or more, at a guess. So that is fine. Height range is from 24” to 51” so that is great. The only thing I really don’t like is that they have a cross brace between the legs. I’m worried about hitting that brace while walking.

D.2 Ones that didn’t look so reasonable but kept popping up so I evaluated them

Anthro Elevate II Their widest table only goes to 60 inches for the table top. I couldn’t find specs on the leg width but it’s safe to say it’s going to be less than 60 inches. Its maximum height is 47 inches which is just over the bar but not by a lot.

ConSet I’m not sure I completely understand their specs. I looked at their 72 inch Veneer (the one without a cross brace to avoid hitting anything with my legs) and it says that its height is 63-123cm for the base and that the top is 22mm thick. So this would argue that the height is 85cm to 145cm, which translates to 33.46 inches - 57.09 inches. Which seems off.

Evodesk This is the lower end brand for NextDesk (see below). Given the problems they have already with their high end brand I think I’ll avoid the low end one.

GeekDesk Their large frame is 55.1 inches wide which goes to the outside of the legs and is too narrow for my needs.

NextDesk Terra Pro Right out of the gate the desk that really got my attention was the NextDesk Terra Pro. 4 legs! It’s gorgeous!!!!! The only problem is that NextDesk doesn’t have standard 72 inch widths. I submitted a request for a custom desk but never heard back. I found that they got a D from the BBB. Seriously, you have to kneecap your customers to get a D from the BBB. Next.

StandDesk Its maximum height is 45 inches. Also while it offers a 70 inch wide table top it appears to use the same base which only has 50 inches between the legs.

Stir Kinetic Desk The M1 starts at $3000 or so and the F1 at $4000 or so. To be fair price isn’t an instant killer. I was willing to seriously consider the NextDesk Terra Pro and that starts at $2,700 or so. But here’s the thing. To me the Terra Pro is gorgeous. I really don’t like the Stir desk’s aesthetics. So I don’t want to pay that kind of money for something I don’t love.

TrekDesk Whatever its other issues this desk is simply not designed to allow one to use it as both a treadmill and a normal desk.

UpDesk Their site just doesn’t have enough details to be sure what I’m dealing with. They have a 72 inch desktop but I don’t know what width the legs go to. The height is 50.5” which is fine. But without more data I just don’t know if they will work.

VersaTables Zero Gravity Tables It goes 25 to 55 inches high which is great. Its largest desktop is 72 inches. But I couldn’t find any more technical details on things like leg width or weight tolerance.

LifeSpan TR800-DT3 The belt is only 18 inches wide which isn’t as wide as I would like. I also saw it in person and it even looks pretty flimsy.

WoodWay Deskmill It only has a 15.75 x 39 inch walking surface. That’s smaller than I want.

Rebel Desk Treadmill This one is a bit slow at a maximum of 2mph but honestly, I gotta figure in practice that is fine. The maximum weight is 250 lbs which is currently o.k. for me but um.... yeah... well... um.. yeah. But what makes me unhappy is that it’s only 18.1 inches wide. I’m a big guy and want more space.

TreadDesk Treadmill Its belt is only 18 inches wide which is a concern. Also it doesn’t seem to have side rails so when you want to stop walking for a second you can’t just step up to the rails. Instead you have to put your feet on small strips that are right next to the moving tread. This just seems to beg for problems.

Exerpeutic WorkFit Treadmill Desk This is actually an integrated treadmill and desk. The desk isn’t the shape I want so next.

NordicTrack Treadmill Desk NordicTrack actually sells the treadmill with a desk, which I certainly don’t want. The treadmill also goes way too fast and has an adjustable incline. The reason having “more” features is bad is that these are just more things to break. Given that the package costs $2000 I’m skipping this one.

Pro-Form Thinline Treadmill Desk Their base model has a great size but it looks like its desk is actually an integral part, needed to control the treadmill, and the desk is completely the wrong shape.

My main computer is a Mid-2010 27 inch iMac. According to Apple this version of the iMac weighs 30.5 lbs but unlike some models the stand on its back can be removed and it can be mounted using a VESA 100 adapter that Apple sells. But this review on Amazon says that once you remove the stand the computer weighs around 27.5 lbs. This turns out to matter because most monitor arms top out at 30 lbs.

My other monitor is a 27 inch LED Cinema Display which Apple says weighs 23.5 pounds. It seems like it has the same stand as my iMac so presumably if I remove the stand then it should weigh around 20.5 pounds.

So from a weight perspective I want arms that can handle 20 - 30 lbs.

The top of the monitors should be roughly at the height of my eyes and the desk will be roughly at the height of my waist. So let’s say it’s about 28 inches from my waist to my eyes. The iMac is about 17.5 inches high without the stand. So the ball of the arm will be 17.5/2 = 8.75 inches down from the top of the iMac. So this leaves a distance of 28 - 8.75 = 19.25 inches from the ball of the arm down to the top of the desk. Put another way, the distance from the middle of the monitor to the desk top needs to be roughly 19.25 inches. So I need an arm that can raise the monitor 19.25 inches (or so).
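As a quick sanity check, the arithmetic above in a few lines of Python (the numbers are the ones from this paragraph):

```python
waist_to_eyes = 28.0                 # inches, desk surface to eye level
imac_height = 17.5                   # inches, iMac panel without its stand
arm_ball_from_top = imac_height / 2  # 8.75: the mount sits at the panel's center
required_lift = waist_to_eyes - arm_ball_from_top
print(required_lift)                 # 19.25 inches of vertical travel needed
```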

Note that the height I want when standing isn’t identical to when I’m sitting so I really want an arm that is easily adjusted in the vertical axis.

Figuring out the monitor arm reach is more complex. For one thing, do I buy a single arm with two attachments or two separate arms? In general I have found that a single arm with two attachments tended not to be able to meet my weight requirements and they usually were fixed in the vertical dimension. So I’m looking for a solution using two poles, one for each monitor. I find that the best layout for the monitors is in a slight V shape (rather than just flat) and I want that V to center on me when I’m either on the treadmill or sitting at the chair. I want the base of the V to be 24 inches from the front of the desk. Each monitor is 25.5 inches wide and when configured as a V the distance at the base is 48 inches. Using some simple trigonometry I figured out that this requires arms that can extend just over 18 inches.

Ergotron LX HD Sit-Stand Desk Mount LCD Arm Meets all weight and size requirements. Costs $230. The reviews not just for this arm but for Ergotron’s products in general are consistently stellar.

Cotytech Expandable Apple Desk Mount Spring Arm With the 19.7” pole the base can go up to 16.95” and then the arm itself can raise up another 13.4” to a maximum height of 30.35 inches. Far above the 19.25” we need. Its maximum length at maximum height (which we don’t need) is 21.3 inches, which again is more than we need. It looks like it’s designed to be moved with a single hand. I only found ErgoDirect selling it with the 19.7” pole for a total cost with shipping of $180.94. I am concerned about the comment made here that the base may not be wide enough to fully extend the monitor. I’m also a bit concerned by how few reviews I can find in general.

DoubleSight Displays DS-30PHS Their documentation doesn’t give exact measurements of the parts of the arm so it’s difficult to be completely sure if it meets my needs but it would appear so. It’s pretty inexpensive at around $100 but this review in particular really worried me. The lack of reviews in general is worrisome.

ERGOMART Heavy Duty Monitor Arm SAA2718 It requires a six inch extension kit to give it the full 19.25 inches vertical height so its total price was above $300. This is much more expensive than well rated competitors so I skipped it.

Derived keys and per user encryption in the cloud
I use a program called ESPlanner to help with planning our insurance and retirement portfolio. ESPlanner wants to move to the cloud. Below I explore who I imagine would want to attack a site like ESPlanner and what sort of things cloud services like ESPlanner can do to frustrate their attackers. I especially look at using derived keys and per user encryption to potentially slow down attacks. But in the end, I'm uncomfortable with the legal protections afforded me as a service user in the US and so I really want a download version of ESPlanner.

Important Disclaimer - Please Read

Although this article is theoretically about ESPlanner in reality it’s about securing data in the cloud. ESPlanner had no input to this article. They just make an interesting example to use to explore this problem space.

ESPlanner does not have your banking passwords. In fact, it doesn’t even know what your specific bank accounts or investments are. It needs more rolled up data such as “how much cash do you have total across all accounts?” or “how much money in all accounts do you have in municipal bonds?”. That sort of thing. It also doesn’t want or need information such as social security numbers.

So at first glance it looks pretty harmless. Not much here to motivate an attack.

But I believe it’s a goldmine for smart attackers.

My guess is that the most obvious people to have an interest in ESPlanner’s data are 419 scammers. I have been informed that there are forums online where stolen data can be sold. A reliable seller (yes, apparently they have reputations) selling a list of email addresses, names, ages, retirement status, future home buying plans, future college plans, some personal information (such as children’s names) along with some financial data is a gift from heaven for 419 scams. And since the market for this data already exists an attacker can quickly calculate their likely profit.

Another group who I believe would be interested in ESPlanner’s data are thieves who specialize in stealing data from banks. The problem, so I’m told, is that there is a lot of effort in stealing from banks. The process goes something like:

Find a likely target’s email address

Phish them to get a Trojan on their machine

Use the Trojan to collect the user’s name and passwords to their bank accounts

Use the names and passwords to enter the bank accounts and transfer their money to an account the attacker controls

Get a shill to pull the money out of the attacker controlled account and move it as cash to another account (thus killing the trail)

Profit!

This process is not easy. But enterprising criminals have made it easier than one might think. I’m told that there are black markets where one can buy phishing services along with password trojans. There are also black market services for hiring shills to pull money out of accounts. In other words this whole attack can be arranged completely remotely using nothing but online markets. I know this sounds nuts but type in something like ’black market stolen data’ to your favorite search engine and you will get a sense for how well run these markets are. There are even standard prices for standard services like bot nets, phishing, trojans, front end attacks, etc.

But the real trick, I’m told, with bank attacks is actually step 4. Most banks make it hard to move money around. It’s the kind of thing that typically requires paperwork, signatures, etc. It’s expensive and easily detected. That’s why most attackers focus on things like credit cards rather than direct bank hacks.

But... what if... what if our enterprising hackers could find a list of email addresses (for phishing) associated with their net worths? In other words, what if the attackers could know ahead of time which email addresses were owned by people with enough money in their bank accounts to be interesting but not enough to really call too much attention to themselves?

That’s where ESPlanner comes into the picture. It’s a directory of valuable targets. It lets attackers focus on where the meat is.

What I have no idea about is how likely either of these attacks are to actually happen. I’m not aware of any good statistics on this, especially since there is no motivation for most companies to know that they were attacked (see 4↓). So who knows how common this all is?

There are obvious things they can do. For example, they should make sure that their cloud VMs are as stripped down as possible (e.g. reduce security surface). They can make sure they are running the latest patches. They can encrypt all data in transit. They can eat an apple every day. This is all pretty standard cloud security stuff.

Even trimmed down software still has a lot of security surface area to cover and holes are constantly being found in any and all software. So an attacker can go on the black market, buy access to the latest exploit engines and have a shot at breaking just about any service’s security.

Next up we can make sure there are third party monitors on data access. Remember, the key attack here is stealing the user data. Typically in a cloud solution the data is going to be stored in some kind of storage/table/etc. service. So the cloud provider should be able to provide tools that can monitor storage usage and provide alerts (and shut downs) if things go sideways. For example, if data usage suddenly spikes to levels much higher than normal that is a good sign of an attack. Or... you know... a successful day. The reality is that monitoring is useful but it’s really hard to tell the difference between success and an attack. So on its own it’s not enough.

So usually the next step is to sprinkle some encryption pixie dust. A classic approach is to encrypt all user data with some global key. This is a good thing to do for two reasons. First, it makes attacks just against the back end useless since they only get noise. Second, it provides defense in depth against bad handling of the disk drives. Ideally a cloud service should clean drives before getting rid of them but that doesn’t always happen. By now I would imagine that just about all cloud services exclusively run their customers’ data on top of encrypted partitions, but I don’t know that for a fact.

Unfortunately encrypting the data on the back end provides no protection against a front end attack. If the attacker can get into the front end then they can abscond with the global encryption key and that’s that. Ideally monitoring systems might detect the misuse of the global key but even a mildly intelligent hacker should be able to work around such monitoring.

We can make the attacker’s life slightly more annoying if instead of keeping the global decryption key on the front end we keep the key in an HSM. That way there is, as a practical matter, no key to steal. Instead the attacker will need to pass all the encrypted data through the HSM in order to get access to it. This provides two points of monitoring, the storage layer and the HSM layer.

But in practice nobody does it this way because passing data through the HSM would be ridiculously expensive, not to mention slow. What most folks are going to do is something slightly less secure but more economical.

Let’s say that we have the data for user U, we’ll call that Du. Let’s further say that we generate a unique encryption key for user U, let’s call that Ku. We then store in the storage layer Encryption(Ku, Du) = E_Ku(Du).

The trick then is that we store in the back end Encryption(Kg, Ku) = E_Kg(Ku), where Kg is a global key for the whole service. When we want to get to user U’s data we read in E_Kg(Ku) and pass it to the HSM, which uses Kg to decrypt it and return Ku. We then use Ku in the front end to decrypt E_Ku(Du) to get Du. We then discard Ku when we don’t need it anymore for that session.

What this does is allow us to encrypt all the data in the back end with a key we can’t leak (Kg, which is in the HSM) but still get access to user data without having to push it all through the HSM (which would be slow as all get out).
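Here is a minimal Python sketch of that wrap/unwrap flow. The hsm_wrap/hsm_unwrap functions stand in for real HSM calls, AES-GCM stands in for whatever cipher a real service would pick, and all the names are illustrative only:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KG = AESGCM.generate_key(bit_length=256)  # global key; in reality it lives only inside the HSM

def hsm_wrap(ku: bytes, nonce: bytes) -> bytes:
    # Stand-in for the HSM: encrypt the per-user key Ku under Kg.
    return AESGCM(KG).encrypt(nonce, ku, None)

def hsm_unwrap(wrapped: bytes, nonce: bytes) -> bytes:
    # Stand-in for the HSM: recover Ku. Kg itself never leaves the HSM.
    return AESGCM(KG).decrypt(nonce, wrapped, None)

# Provisioning user U: generate Ku, store E_Ku(Du) and E_Kg(Ku).
ku = AESGCM.generate_key(bit_length=256)
n1, n2 = os.urandom(12), os.urandom(12)
e_ku_du = AESGCM(ku).encrypt(n1, b"user U's rolled-up financial data", None)
e_kg_ku = hsm_wrap(ku, n2)

# On access: one small HSM round trip for Ku, bulk decryption on the front end.
ku = hsm_unwrap(e_kg_ku, n2)
du = AESGCM(ku).decrypt(n1, e_ku_du, None)
```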

It’s not a perfect solution since any attacker that has access to the front end can still make requests to the HSM to retrieve various users’ keys and then use those keys to steal from the back end. But at least we now have two points of monitoring, HSM accesses and data accesses, that we can try to use to look for odd behavior. One could imagine doing correlations between the Identity Provider (IdP), the front end, HSM and storage layer to look for things like user data being accessed for users who aren’t logged in.

But we have to remember that monitoring is never perfect. It’s a third string defense at best. If you are relying on monitoring to protect your system you are probably going to have a bad time.

The problem with just using the HSM is that anyone who hacks the front end can steal any data they ask for. That isn’t going to really slow the attacker down much. But what if the attacker can only get data for users who are actively logged in? For services, like ESPlanner, where users typically don’t log in very frequently (how often do you mess with your financial profile?) this could be a powerful protection.

Before we continue a very little bit of terminology:

Identity Provider (IdP) This is a service that logs users in. It holds the user’s credentials and validates their identity. Examples include Microsoft Active Directory or Google Plus.

Relying Party (RP) This is a service that uses an IdP to log in users. ESPlanner is an example of an RP.

So let’s go back to user U’s data, Du. We will generate a per user key Ku. So we store Encryption(Ku, Du) = E_Ku(Du). But then we encrypt Ku using a derived key. A derived key is a key created via an algorithm that takes some number of inputs and uses them to create a key. The magic of derived key algorithms is that it’s practically impossible to calculate what a key is unless you have all the inputs to the derived key algorithm. So even if some of the inputs leak, you are still secure.

The derived key in our scenario would be derived from two secrets. The first value is a global secret for ESPlanner we will call Se. The other secret is generated by the user’s IdP and is unique per user/per relying party; we’ll call that Sieu for the secret for IdP I, RP ESPlanner and user U. So now we can run DerivedKey(Se, Sieu) = DKu. So we store Encrypt(DKu, Ku) = E_DKu(Ku).

Now for a front end to successfully access a user’s data the front end has to know two pieces of data, Se and Sieu. Se is presumably recorded in the RP’s HSM and so always available (if not leakable). Sieu is generated and persistently stored by the IdP. The IdP will generate a unique secret on a per RP/per user basis. So users A, B and C of ESPlanner that are using the same IdP will have secrets Siea, Sieb and Siec, respectively, generated for them. The point being that for each combination of RP and user there is a unique secret held by the IdP. This means that even if the front end has Sieu for User U that doesn’t provide the ability to access user A, B or C’s data since they each have different secrets that can only be retrieved from the IdP during login. Hence an attacker can only access the data of users who happened to login while the attack was underway.

Just to hammer home the point. The only way ESPlanner’s front end can get to a user’s data is if the front end has both ESPlanner’s global secret and the IdP’s per RP/per user secret. ESPlanner will only get the IdP’s per RP/per user secret during login and will discard it when done. So an attack can only get access to user data for users who login during the attack and therefore provide the attacker with the IdP’s per RP/per user secret. No login? No attack.

Also note that the IdP can’t compromise user data either. Even if the IdP published all of its secrets those values on their own are not sufficient to recover the user’s data. One must also have access to Se as well, that is, the RP’s global secret.

The point of this exercise is to slow the attacker down and hopefully make the attack less worthwhile. That is, the attacker can only steal data at the rate that users log in. If most users don’t log in that frequently then this is a powerful way to slow down attacks.
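A minimal sketch of the derivation step, using HKDF from the Python cryptography package. Whether a real deployment would use HKDF specifically is my assumption; the point is just that DKu cannot be computed without both Se and Sieu:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_dku(se: bytes, sieu: bytes) -> bytes:
    # DKu = DerivedKey(Se, Sieu): both secrets are required inputs,
    # so leaking either one alone reveals nothing about DKu.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=se,              # RP's global secret (held in its HSM)
        info=b"per-user key wrap",
    ).derive(sieu)            # IdP's per-RP/per-user secret
```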

In most situations login/permission protocols like OAuth or Open ID Connect or SAML or what have you follow the same pattern. The relying party (RP) will redirect the user to the Identity Provider (IdP) with a login request. The IdP then confirms the user’s identity and responds with a login token. So clearly Sieu gets sent in the login token.

In the simplest case the IdP can encrypt Sieu using the RP’s global public key. That way nobody can see the secret. But in practice this isn’t ideal.

An attacker could launch a man-in-the-middle attack where they silently intercept large numbers of login tokens. Then, when they have enough of them, they could hack the RP’s front end and try to use those tokens along with the RP’s HSM to get access to users’ data. The idea is to launch the attack quickly and for a short enough period of time to potentially escape getting caught by any threshold monitors. Presumably one is using TLS for all communications between all parties (IdP, RP and user). This means that tokens can’t be collected via a straightforward man-in-the-middle attack. But the attacker can still launch an attack against the RP’s front end and just quietly collect tokens, storing them away somewhere but not trying to use them. Only once a critical mass of tokens had been collected would the attack be launched.

A way to frustrate this ’sit and wait’ strategy is to use nonce keys for the exchange. The RP’s front end would generate a new key pair inside the HSM before issuing a login request and then include the nonce public key in the signed/encrypted login request. The IdP will then encrypt Sieu with the nonce key. Once Sieu is decrypted the nonce key is discarded inside the HSM. This provides forward secrecy and renders any stockpiled tokens useless. The point is to force the attacker to issue requests against the HSM, storage, etc. in order to increase the probability of detection.
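A sketch of the nonce key idea using an ephemeral X25519 exchange. The choice of primitive is mine (the text above doesn’t name one) and a real implementation would keep the ephemeral private key inside the HSM:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# RP: fresh nonce key pair per login request; the public half goes in the request.
rp_eph = X25519PrivateKey.generate()

# IdP: encrypts Sieu to the nonce public key.
idp_eph = X25519PrivateKey.generate()
key = HKDF(hashes.SHA256(), 32, salt=None, info=b"sieu wrap").derive(
    idp_eph.exchange(rp_eph.public_key()))
nonce = os.urandom(12)
token = AESGCM(key).encrypt(nonce, b"Sieu for this user", None)

# RP: unwraps Sieu, then discards rp_eph, so a stockpiled token is useless later.
key = HKDF(hashes.SHA256(), 32, salt=None, info=b"sieu wrap").derive(
    rp_eph.exchange(idp_eph.public_key()))
sieu = AESGCM(key).decrypt(nonce, token, None)
```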

It is a truth of cryptography that keys need to be rolled over. They can be compromised. They can be used to the point where there is enough data around to make cryptanalysis easier. Etc. In the case of replacing Se the easiest approach is to just stop using Se for any new values. As users log in we check if their derived key used the old Se; if so we use the old Se to recover their old DKu (and thus Ku) and then use the new Se to generate a new DKu and re-encrypt Ku under it.
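Continuing the earlier sketches, this Se replacement is just an unwrap/re-wrap of the small wrapped-key record (dku_old and dku_new would come from derive_dku with the old and new Se):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def rotate_se(e_dku_ku: bytes, nonce: bytes, dku_old: bytes, dku_new: bytes):
    # Unwrap Ku under the retiring derived key...
    ku = AESGCM(dku_old).decrypt(nonce, e_dku_ku, None)
    # ...and re-wrap it under the new one. Ku itself is unchanged, so the
    # bulk data E_Ku(Du) never needs re-encrypting; only this record does.
    new_nonce = os.urandom(12)
    return AESGCM(dku_new).encrypt(new_nonce, ku, None), new_nonce
```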

Rotating Sieu requires a protocol extension where the IdP can send both the old Sieu and the new Sieu so that we can do a similar decryption (to retrieve Ku) and re-encryption with the new Sieu.

Rotating Ku is easy if we only do it when the user logs in. If we want to do it earlier then we need a protocol with the IdP to request Sieu in order to finally get to Ku and change it.

Key rotation, it should be pointed out, is one of the more dangerous moments in a service’s existence. This is a point in time where a large number of user secrets are being accessed in a short period of time. So a lot of precautions have to be taken. Otherwise a smart adversary will make it look like an attack has happened (even if it hasn’t), trigger a key roll over and then attack during the key roll over process.

Since I don’t actually have a magic wand I must assume that there will be a long period of time before the features I propose above are adopted. But that doesn’t remove the utility of the idea. As cloud services offer their own standalone RP services (e.g. a service that handles the login process on behalf of the relying party) it would be pretty easy to extend those services to support a variant of this protocol. In the normal course of business these services work by having the RP’s front end bounce the user to the RP login service, who then bounces the user to the IdP; the IdP bounces the user back to the RP login service, who then bounces the user back to the RP front end. This RP login service, call it an RP Proxy, handles different protocols, IdP types, etc. so the RP front end doesn’t need to worry itself with these details.

Thus the RP Proxy is a great place to plug in this functionality. When the RP Proxy deals with an IdP who doesn’t support generating Sieu then the RP Proxy can generate the value itself and forward it in the response login token to the RP front end. This still provides a strong level of security because the RP Proxy is a separate service run by separate folks using different software than the RP front end. So the RP front end would still keep its own Se and the RP Proxy the Sieus.

Security is just a form of insurance; don’t buy more than you need. If an RP’s data usage pattern is amenable to per user encryption and if an off the shelf library is available to provide this capability as part of a cloud infrastructure then this is a reasonable and fairly low pain way to frustrate attackers. The pain is in implementing the library; in actual usage it’s just ’Get me user X’s data’ and you are on your way.

But one suspects that nothing can stop a sufficiently dedicated and well resourced attacker. A dedicated attacker can sit quietly on the front end, intercepting actual user data, recording desired information in hidden places (temporary directories, log files, etc.) and then slowly exfiltrating it so alarms won’t go off. A dedicated attacker can hack the developer’s machines using phishing or zero days and spoil the software at the source. A dedicated attacker can break into the data center or bribe cloud hosting employees and so on.

The point is - how dedicated is your attacker? Take a look at the prices listed at the bottom here, where does your data come in? Now multiply that by your number of users. Now you have some idea of what your data is worth on the market and can decide what is and is not worth defending against.

Even if ESPlanner takes these precautions, I still wouldn’t feel comfortable using a cloud version. The reality is that I have no way of knowing what they actually implement or not and how well they have done it and how well they have maintained it. By running the software on my own machine in a restricted VM with locked down networking I can control what data is available and to whom. Of course it’s highly likely that ESPlanner’s VM will be more secure than my local VM. Remember, my VM is running a user level OS with UX and programs and such. It is nowhere near as locked down as what ESPlanner could run. But nevertheless, unless I’m directly being targeted (in which case the attacker need not bother with ESPlanner) I’m still likely to be more secure in practice.

But what makes things even worse is that companies (at least in the US) appear to have little incentive to worry about security. As near as I can tell companies have basically zero liability in the US for hacking attacks, even if such attacks succeed due to the failure of the victim to take reasonable precautions. That is, the T.J. Hooper standard is truly dead. Heck, last I checked, we were still trying to get some kind of federal law to even require companies to tell people when they have been hacked!

And for corporations there seem to be no reputational repercussions from being massively hacked. Go take a look at something like this website. How many of those companies went out of business or were even seriously affected by leaking their employees’ and customers’ data? Have people stopped buying from Sony? Home Depot? Have people abandoned JP Morgan Chase?

In America it would appear that a service can be hacked, leak user data all over the place, and face literally no meaningful consequences either legal or reputation based.

So personally, I want a download version.

node-gyp and node.js on mobile platforms
As I’ve previously discussed I want to get node.js running on Android, iOS and WinRT. But to make that happen we need to understand the node.js ecosystem and that includes native add-ons and node-gyp. So I created a node package, node-gyp-counter, to heuristically determine how frequent node-gyp usage is in the node.js world. If my numbers are right then less than 3% of downloads of packages in 12/2014 involved node-gyp in any way. Of that 3%, just 27 packages account for 80% of node-gyp root package downloads. Only 19 of those 27 packages seem relevant to smart phones.

node-gyp root package A package that itself directly uses node-gyp to build native code.

node-gyp package A package that is either a node-gyp root package or has one or more node-gyp root packages somewhere in its dependency tree.
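A toy sketch of that classification (this is an illustration, not node-gyp-counter’s actual code): treat a package as a root if its metadata indicates a native build, then propagate the property through the dependency graph.

```python
# Toy metadata: package -> (is_native_build, direct dependencies).
# In npm terms a root would be detected by e.g. a binding.gyp or
# "gypfile": true in package.json; these packages are made up.
PACKAGES = {
    "ws": (True, []),
    "bson": (True, []),
    "express": (False, []),
    "some-app": (False, ["ws", "express"]),
}

def is_node_gyp_package(name, seen=None):
    # True if `name` is a node-gyp root or transitively depends on one.
    seen = seen or set()
    if name in seen:
        return False          # break dependency cycles
    seen.add(name)
    native, deps = PACKAGES[name]
    return native or any(is_node_gyp_package(d, seen) for d in deps)

print(is_node_gyp_package("some-app"))  # True, via ws
```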

Count         Description
119,887       Total packages in node.js when I ran my query
1,917         Total number of node-gyp root packages
18,625        Total number of node-gyp packages
16%           Percentage of node.js packages that are node-gyp packages
666,199,774   Total downloads of node.js packages in 12/2014
21,118,417    Total downloads of node-gyp packages
3%            Percentage of node.js package downloads that are node-gyp packages

Assuming my data isn’t completely wacked this argues that node-gyp is a tiny part of the node.js ecosystem. Only 3% of downloads in 12/2014 used node-gyp in any way, shape or form.

Next up I calculated the set of node-gyp packages that accounted for 80% of node-gyp package downloads in 12/2014. This returned 81 packages. I then calculated the union of the node-gyp root packages used by those 81 packages. The result had 27 entries (see A↓). In other words, if we support 27 node-gyp root packages we can support 80% of all packages that use node-gyp in node.js land. I went through those packages and I’d argue that only 19 of those are relevant to smart phones. Heck, I bet if we just had support for ws, bson and sqlite3 we would make the vast majority of node.js smart phone developers very happy.
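The 80% calculation itself is just a cumulative sum over packages sorted by downloads; a sketch with made-up numbers:

```python
def packages_covering(downloads, fraction=0.80):
    # Smallest set of top packages whose download counts reach
    # `fraction` of all node-gyp package downloads.
    total = sum(downloads.values())
    covered, chosen = 0, []
    for name, count in sorted(downloads.items(), key=lambda kv: -kv[1]):
        if covered >= fraction * total:
            break
        chosen.append(name)
        covered += count
    return chosen

demo = {"ws": 5_000_000, "bson": 3_000_000, "phantomjs": 1_500_000,
        "obscure-addon": 50_000}
print(packages_covering(demo))  # ['ws', 'bson'] already cover 80%+
```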

My take away is that if node.js ran on smart phones with no ability to run native add-ons we would still add huge value to the world supporting literally 97% of what people actually use node.js for. If we just figured out how to directly support 19 packages (possibly by building them straight into our mobile node.js images rather than using node-gyp to build and link) we would literally cover 99% of all node.js uses.

A. Node-gyp root packages that account for 80% of node-gyp package use, ordered by popularity

Package Name (Relevant to mobile?) Description

gaze (Yes) Provides notifications when files change; this is primarily used by gulp related projects. I don’t think anyone wants to run gulp on their phone. But still, I’ve actually needed this kind of functionality on the phone (when I was working with the Tor proxy) so let’s include it.

ws (Yes) Websocket client/server/etc.

fsevents (No) Provides access to OS/X FSEvents. It’s used by Chokidar and Karma amongst others. It’s mostly used for build related stuff.

bson (Yes) Binary JSON parser.

kerberos (Yes) Kerberos library.

phantomjs (No) A headless version of Webkit’s Javascript API. Used in node.js land for testing apps that have to run on Webkit.

dtrace-provider (No) Provides deep tracing into node.js. Cool, but only runs on OS/X and Solaris systems.

node-sass (No) Node bindings to libsass, a stylesheet preprocessor. This is used at build time and so shouldn’t be needed on mobile.

contextify (Yes) A potentially cooler way to spin off separate v8 contexts. Seems primarily to be used by jsdom.

base64 (Yes) Base 64 encoding/decoding. Note that the module says itself that it’s probably not needed anymore because node.js includes its own base64 functionality. But I’ll include it anyway since it’s popular.

weak (Yes) Allows you to get notified before an object is garbage collected. This is primarily used by dnode, which is essentially DCOM for node.

hashring (No) Consistent hashing function. Used for things like memcached. I’m going to go out on a limb and say you shouldn’t need this on a smart phone.

ref (Yes) Mostly used by FFI. I’m going to argue it’s relevant to smart phones as it helps integrate with C code, which runs fine on smart phones.

hiredis (Yes) A client for redis.

fibers (Yes) Adds fibers to node.js.

websocket (Yes) A mostly (but not entirely; there’s some C++ code, which is why it’s in this list) JavaScript implementation of Websockets.

v8-profiler (Yes) Provides CPU profiling of v8.

v8-debug (Yes) Used by node-inspector.

bcrypt Used to store passwords in a way that is expensive to attack, so provides security if a password file is lost.

Thali and the Mesh Mess

Thali’s base communication mechanism is Tor hidden services. This enables Thali devices to reach each other regardless of what NATs or firewalls are in their way, in a manner that is resistant to traffic analysis. But what happens when one isn’t on the Internet at all? We still want Thali devices to be able to communicate, so a goal has been to support some kind of ad-hoc communication mechanism. That is, if two Thali devices are close enough to reach each other directly via a technology like Wi-Fi or Bluetooth they should be able to communicate securely and privately.

Ideally however we would go a step farther and use a technology that supports ad-hoc mesh networking. We list below some candidates but it is a bit early to jump on the mesh bandwagon. More on that in future articles.

The purpose of this article is to collect information on what appear to be the main players in the ad-hoc connectivity and mesh building contest.

[Note: This is a complete re-write of the existing Mesh Mess article.]

The following sections look at various radio technologies that are either already widespread or expected to become widespread in the near future. These are all technologies that Thali can leverage to provide connectivity.

Range This is just meant to provide a rough idea of distances, for both indoor and outdoor use with non-directional antennas.

Bandwidth The raw data rate, quoted in Kb/s, Mb/s or Gb/s as appropriate.

Wavelength This is the physical length of one cycle of the signal, in inches. It gives one a sense of the size of the associated antenna. Mobile antennas can be anywhere from 1/10th to 1/2 the size of the wavelength. Stationary antennas are often uneven multiples of the signal size unless it’s huge.

Open Source/Open Standard Thali is an open source/open standard project so we need an ad-hoc mesh technology that is as well.

Supports IP We really don’t want to re-invent the Internet so we want an ad-hoc mesh technology that lets us route IP packets. That means it needs to be able to assign local IP addresses, listen for communications, etc. Ideally it would show up as an IP enabled network adapter.

Supported Platforms Thali wants to run everywhere, what platforms typically support this technology?

Supports discovery Has a mechanism to enable devices within range of each other to discover each other.

Supports ad hoc meshes Is able to form a mesh for relay routing. Note that there are two distinct flavors of mesh. Low latency meshes are collections of devices that are connected in real time. In a low latency mesh, if user A wants to talk to user Z, who is out of radio range, then there has to be, at the moment of the desired communication, some collection of other users standing between A and Z who are willing to relay packets in real time. If no such collection of users exists then the message won’t be delivered.

High latency meshes, or opportunistic meshes, are used in situations where the density of devices is too low to support real time communication. In a high latency mesh, devices opportunistically copy messages to each other hoping that those messages will eventually be delivered. For example, Joe wants to send a status update on a work item to Jane in an area, like a factory, with no Internet infrastructure. Joe’s device doesn’t see Jane’s device but does see Joseph’s. So Joe’s device asks Joseph’s device to take a copy of the message and, if it ever sees Jane, pass it on. Joseph’s device might then give a copy to Jake’s device. If Joe, Joseph or Jake eventually sees Jane’s device then the message will be passed on. (A toy sketch of this store-and-forward style follows below.)
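To make the high latency flavor concrete, here is a toy store-and-forward sketch. The onPeerAppeared, onMessageReceived and sendTo names are hypothetical stand-ins for whatever the underlying radio layer provides, not a real Thali API:

var pending = {}; // message id -> message this device is carrying

function sendTo(peer, msg) { /* hand msg to the radio layer; stubbed here */ }

// When a peer comes into radio range, hand over copies of everything we
// carry; anything addressed to that peer has now been delivered.
function onPeerAppeared(peer) {
  Object.keys(pending).forEach(function (id) {
    var msg = pending[id];
    sendTo(peer, msg); // the peer keeps a copy and may relay it onward
    if (msg.to === peer.id) {
      delete pending[id];
    }
  });
}

// When a peer hands us a message, agree to carry it (dedup by message id).
function onMessageReceived(msg) {
  if (!pending[msg.id]) {
    pending[msg.id] = msg;
  }
}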

Range Typically 300 ft indoors. Outdoors a range of a mile is not out of the question if there is direct line of sight. Longer distances are possible under good circumstances and with highly directional antennas.

Bandwidth This is all over the map depending on which variant of Wi-Fi is used. Anywhere from a low of 11 Mbps to a high of 1.3 Gbps.

Wavelength 4.92 inches (2.4 GHz) & 2.36 inches (5 GHz)

Open Source/Open Standard Near as I can tell you can’t even get the spec without paying $99. There do appear to be open source Wi-Fi drivers though.

Supports IP Yes

Supported Platforms Everything

Supports direct ad hoc connections Yes. A device can put itself into mobile hotspot mode (or similar names) where essentially it acts as a Wi-Fi Infrastructure access point and then other devices can connect to it. See the mesh section below for limitations.

Supports discovery Yes, it’s possible to enumerate local networks so long as they are advertising themselves. This can be used for discovery. I’m not completely sure what the battery implications are if one just leaves Wi-Fi on purely for discovery purposes. Also see the limitations of being a mobile hotspot below.

Supports ad hoc meshes There is no native support for forming meshes. In theory one could use Wi-Fi Infrastructure mode as part of a mesh structure, but only if at least some devices have other connectivity options. The problem is that if a device (think iOS 7) only supports normal Wi-Fi Infrastructure mode (and not, say, Wi-Fi Direct) then when it is connected to a Wi-Fi Infrastructure access point that is the only Wi-Fi access point it can talk to. Similarly, when a device puts itself into Wi-Fi Infrastructure mode (e.g. mobile hotspot), unless it supports some other communication mechanism it can only communicate with nodes that have connected to its mobile hotspot. So it is not possible to form a low latency mesh out of just Wi-Fi Infrastructure devices, as one can’t form a chain, only a hub and spoke. One could potentially create a high latency mesh though. And often devices have multiple radios, so it’s possible in some circumstances to use Wi-Fi Infrastructure mode as part of an overall mesh building strategy.

Range Typically 300 ft indoors. Outdoors a range of a mile is not out of the question if there is direct line of sight.

Bandwidth This is all over the map depending on which variant of Wi-Fi is used. Anywhere from a low of 11 Mbps to a high of 1.3 Gbps.

Wavelength 4.92 inches (2.4 GHz) & 2.36 inches (5 GHz)

Open Source/Open Standard If you aren’t a member of the Wi-Fi Alliance then it costs $99 just to see the Wi-Fi Direct specification. There are open source implementations of Wi-Fi Direct, though I’m not sure how they pulled that off. It’s also not clear what the costs are to implement or if only hardware manufacturers have to pay. It’s confusing.

Supports IP Yes

Supported Platforms Recent releases of Android, Linux and Windows. OS X and iOS do not appear to support it. Note that the hardware in the device also needs to support it, so just having the right OS isn’t enough.

Supports direct ad hoc connections Yes, sorta. The whole point of Wi-Fi Direct is to enable ad-hoc connections. But the Wi-Fi Direct standard apparently requires that a user confirm accepting a Wi-Fi Direct connection, so automated ad-hoc connections aren’t possible without direct user intervention. We are going to run experiments to see if the user dialog is needed for repeat connections and if we can maintain group IDs. But the inability to automatically make connections is a real problem.

Supports discovery Yes, Wi-Fi Direct groups can advertise their SSIDs like any other Wi-Fi access point and that can be used for discovery.

Supports ad hoc meshes There is no built in support for meshes. But unlike Wi-Fi Infrastructure mode, a device can simultaneously be a member of multiple Wi-Fi Direct groups, so Wi-Fi Direct can be used to create even low latency meshes if one wants. Wi-Fi Direct is also backwards compatible with Wi-Fi Infrastructure mode, so in theory a normal Wi-Fi Infrastructure mode client could connect to a Wi-Fi Direct group, but the groups are required to have access passwords and the user would have to get that password before they could join. There are workarounds, of course, such as well known passwords or supplying passwords over a different transport.

Range The information I can find online is astoundingly useless. It is designed just for discovery (a la BLE) but I can’t find anything on range, frequency, etc. But an article from Broadcom implies it is based on 802.11ac, which would argue for the same range as Wi-Fi.

Bandwidth No clue, but it’s clearly just for discovery and meant to trigger the use of other technologies like Wi-Fi Direct.

Wavelength Don’t know for sure but if it’s 802.11ac then see above.

Open Source/Open Standard It costs $200 just to see the technical specification draft! So no, not open.

Supports IP Probably not

Supported Platforms I don’t know.

Supports direct ad hoc connections It’s intended as a discovery protocol but that’s about it, so I’m guessing only tiny bits of data can be sent.

Supports discovery That is its point in life.

Supports ad hoc meshes It may be amenable to some of the mesh hacks used on BLE but that isn’t its purpose.

Range The target outdoors is around one kilometer. For indoors we would expect improvements over high frequency Wi-Fi of between 3x and 10x (e.g. 1,000 ft to 3,000 ft) depending on bandwidth.

Bandwidth 26 channels, each channel has roughly 100 Kb/s

Wavelength 13.11 inches (900 MHz). Note that spectrum in and around 900 MHz is available throughout most of the world. The big exception that I’m aware of is China, where this standard will apparently run in various slices of the 40 inch (300 MHz), 30 inch (400 MHz) and 15 inch (780 MHz) ranges.

Open Source/Open Standard Like other Wi-Fi standards it is pay to play. It also hasn’t actually been standardized yet. The standard is expected to be finalized in 2016.

Supports IP Yes

Supported Platforms Unknown

Supports direct ad hoc connections Yes, endpoints can connect directly to each other with one playing access point and the other playing client.

Supports discovery Stations can advertise themselves, so yes.

Supports ad hoc meshes There is the concept of a relay built directly into the standard but it’s not clear if the resulting topology is hub and spoke or a mesh.

Range The general goal is to reach at least a kilometer, and given the low frequencies that would apply both indoors and outdoors. But there are various restrictions on its use in order to prevent collisions with existing channel users. In rural areas in the US the FCC allows powerful enough transmitters to cover 10 or 11 miles.

Bandwidth Each channel, depending on frequency, carries between 26.7 Mb/s and 35.6 Mb/s and up to four channels can be bonded to give an overall bandwidth of between 426.7 Mb/s and 568.9 Mb/s. Data rates can be lowered all the way down to 1.8 Mb/s in order to get longer range at low power.

Open Source/Open Standard I haven’t investigated this enough. It sorta looks like if you just don’t actually use the Bluetooth logo you could at least implement software compliant with the publicly available standards for free. I strongly suspect that there is a fee to implement the hardware due to some kind of group licensing consortium.

Supports IP Sorta. There is a native serial protocol (RFCOMM) that looks kind of like TCP but is not, so we can’t just drop a Bluetooth server socket into an HTTP server and expect anything to work. However it does support streams, so we could probably hack up something. There is a profile for Bluetooth called Personal Area Network (PAN) that does support IP, but it’s not supported on all platforms.

Supported Platforms Everything

Supports direct ad hoc connections In theory no because two Bluetooth devices are really only supposed to communicate when paired. But Android at least supports communication between unpaired Bluetooth devices.

Supports discovery Yes, but not in a way that is useful for us. Typically there is a high power dedicated discovery mode that the user has to put the device into, and most devices limit how long one can stay in this mode. There is a sort of workaround on Android where, if one knows the Bluetooth UUID of the other device, it’s possible to scan for its presence without going into high powered discovery mode.

Supports ad hoc meshes There is no protocol support for it, but it’s certainly possible to connect to multiple Bluetooth devices at once, so it’s in theory possible to build a mesh.

Bandwidth In practice 35 KB/s. The raw bandwidth is 1 Mb/s per channel and there are 40 channels, but in practice most BLE systems only support communicating over one channel. BLE 4.2 is supposed to support larger packet sizes (currently the data payload is 20 octets), which should increase effective throughput from an application perspective (e.g. get closer to the theoretical 1 Mb/s limit per channel).

Wavelength 4.92 inches (2.4 GHz)

Open Source/Open Standard I haven’t investigated this enough. It sorta looks like if you just don’t actually use the Bluetooth logo you could at least implement software compliant with the publicly available standards for free. I strongly suspect that there is a fee to implement the hardware due to some kind of group licensing consortium.

Supports IP No. There are proposals for an IP based PAN over BLE, and it is officially part of BLE 4.2, but I’m not aware of it being widely supported.

Bandwidth 915 MHz is available with 40 Kb/s; this falls to 20 Kb/s for the 868 MHz band in Europe. In the US there are apparently at least 5 bands available, so in theory if one had 5 antennas one could send 5 times the data rate.

Wavelength 13.11 inches (900 MHz)

Open Source/Open Standard It’s based on an IEEE standard but I noticed that you can’t download the spec without registering.

Supports IP Yes.

Supported Platforms Not many. One has to buy specialized hardware and then get drivers for it.

Supports direct ad hoc connections I’m honestly not completely sure. It’s not clear to me how long it takes for a new node to join a ZigBee network. ZigBee can run in a variety of configurations, including coordinator mode with pre-provisioned keys, coordinator mode with keys discovered in real time, and coordinator free mode. The mode will determine how long it takes a new device to join an existing network, but I have no idea what the timings look like. I suspect it’s at least theoretically possible to just jump on an existing unsecured network without even talking to the coordinator, so ad-hoc connectivity should work.

Supports discovery Yes, but see above. Discovery is handled by announcements which are flooded through the network but one has to first be on the network to send the announcements. ZigBee also supports devices that go to sleep and periodically ping to remind the network they are still there.

Supports ad hoc meshes ZigBee supports forming meshes as part of its protocol, but it seems like a coordinator has to be involved, although I’m honestly not completely sure. I’m also not clear what happens if the coordinator goes down. Is another coordinator allowed to step in? How do they deal with network splits?

Range In densely populated areas between 0.5 - 1 mile up to 6 miles in the best circumstances.

Bandwidth 35 Kb/s (it uses BLE to connect the goTenna device to the phone so bandwidth can’t be higher than BLE supports)

Wavelength 78 inches (151-154 MHz)

Open Source/Open Standard Completely proprietary

Supports IP Here’s a better question, can you use it with anything other than their software? In other words, is this an app or a platform? Given that they say they are going to release an SDK one presumes there will be some kind of API. But it would appear that this is a centrally managed and completely closed platform.

Supports ad hoc meshes No. Apparently there are FCC regulations that explicitly prevent them from storing and forwarding messages so right now they are looking at point to point only. Obviously a devious SDK writer could do something naughty and build a mesh on top.

Serval is a project to enable phones to use Wi-Fi (and ISM 915 MHz via dedicated extenders) to build meshes for voice, pictures, web, etc. It’s mostly focused on disaster recovery and providing services in underserved or outrageously expensive areas. Serval can run over Wi-Fi Infrastructure as well as Wi-Fi Ad-Hoc on Android devices. Note however that an Android phone can only act as a mesh relay (via Wi-Fi Ad-Hoc) if it has been rooted. Serval’s magic is that if you have a bunch of rooted devices or extenders then you can build a mesh automatically.

Because of their successful app FireChat they get a lot of attention, but their website is less than informative. The only thing I know for sure is that, at the time I wrote this, their SDK is not yet available and what little code they have released is GPL v3.

This is a project to build metro area scale ad-hoc mesh wireless networks. Right now the project looks to be in extremely early stages and is not ready for anything like prime time (I know the feeling). They do use Serval for their key distribution, but I’m not exactly clear on the relationship since they only talk about Serval in the context of crypto while Serval is its own standalone mesh system. So do they use a mesh to handle keys for their mesh? Or are they using something more limited? It’s hard to tell.

It took a while to figure out but it seems the real docs for AllJoyn are here. AllJoyn came up because it provides essentially the same features as UPnP. That is, a way for devices to discover, connect to and control each other. AllJoyn provides a set of standard APIs which can in theory be plugged into lots of transports. So it’s theoretically possible to perform device discovery over BLE or Bluetooth or Wi-Fi or whatever. AllJoyn doesn’t exactly support forming meshes (it assumes there is an existing transport that handles that) but it is intended to allow for discovery and ad-hoc connectivity. It has its own set of protocols and could in theory be plugged into other protocols as well.

This appears to be a software API for discovering nearby devices. There is an unverified belief that this is a replacement for Bump (the app Google purchased) and that it will somehow support a non-cruddy story for talking to iPhones. It might also be a unicorn. We’ll have to wait to find out.

Sorta. Right now our wireless devices, especially Wi-Fi devices, use fixed MAC addresses (although under an extremely limited set of circumstances iOS 8 will send out randomized MAC addresses). This provides a constant ID that can be used to follow us around. But unless there is extra data to associate that MAC address with an identity, it just says “I’ve seen this person before”, not “This is Joe”. If Thali were to do something obvious like advertise a user’s public key as part of mesh discovery then we would not only have a fixed ID like a MAC address, we would be taking things further by using an ID that presumably can be easily associated with an identity. It would be the moral equivalent of doing discovery via email address or SSN. Obviously, a bad idea.

So we will have to use a different approach.

There are a couple of approaches to deal with this situation. First, we can decide to do no worse but also no better. If we assume that MACs are going to stay static then we don’t need to worry about advertising yet another constant ID. We just have to worry about using an ID that isn’t immediately mappable to a user’s primary public key. We could do something as simple as creating a second public key that is advertised only in the mesh and communicating it to friends and such. This is easy and cheap and reduces the problem back to MACs.

But I am hoping we will start to see MAC addresses changed randomly. There are reasons why this can hurt, but it’s necessary if we are to pass the security laugh test (along with completely changing how cell phones work, but that’s another story). Once this happens the mesh ID for Thali would become a security hole, not a feature, because we would be re-introducing a constant ID.

So this leads to the second approach. In theory we could try some fun games. For example, imagine that someone has 500 people in their address book. They could advertise their public key 500 times, each time encrypted with the public key of one of their friends. Of course you don’t want to advertise who your friends are. So what you would actually do is encrypt your identity with each of your friends’ public keys but not include any of the headers that identify which public key was used to encrypt the value. The result? Everyone else in the mesh network would need to slurp up the 500 blobs you just published and try to decrypt each and every one to see if any of them use their key (a sketch of this follows below). Now multiply that by how many people are on the network. This isn’t impossible, btw. Say 500 addresses per user * 1000 users on the local mesh = 500,000 blobs to test. Today that’s a bit much for a phone, but in a year? Two years? Computing is growing vastly faster than human scale, and pretty much anything on human scale is going to be squashed by the computing available. So maybe this kind of brute force approach makes sense?
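A rough sketch of this game using node’s built-in crypto module (the function names are mine; also note RSA can only encrypt payloads smaller than its modulus, so one would advertise a short fingerprint of the identity key rather than the key itself):

var crypto = require('crypto');

// Encrypt my fingerprint once per friend, with no header saying which
// friend's key was used. friendPublicKeys is an array of PEM strings.
function buildAdvertisement(myFingerprint, friendPublicKeys) {
  return friendPublicKeys.map(function (friendKey) {
    return crypto.publicEncrypt(friendKey, Buffer.from(myFingerprint));
  });
}

// A listener must try every blob against their own private key; OAEP
// decryption throws when a blob wasn't encrypted to them.
function scanAdvertisement(blobs, myPrivateKeyPem) {
  var found = [];
  blobs.forEach(function (blob) {
    try {
      found.push(crypto.privateDecrypt(myPrivateKeyPem, blob).toString());
    } catch (e) {
      // not for us, keep scanning
    }
  });
  return found;
}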

This isn’t really a contender, but it’s going to cause enough confusion with Wi-Fi Direct that it’s worth calling out the differences. Wi-Fi Ad-Hoc mode is part of the original Wi-Fi standards, which contained two basic modes: infrastructure and ad-hoc. Most folks are used to infrastructure mode, where there is a single access point that everybody connects to and all communication goes through that single access point. In ad-hoc mode it’s possible to set up peer to peer connections, but a lot of key Wi-Fi features are missing:

There is a speed limitation of 11 Mbps.

There apparently is no signal strength monitoring.

There is no standard discovery story.

The security story is WEP, although Windows 7 supports WPA2.

A Wi-Fi adapter can be in infrastructure or ad-hoc mode, but not both. With Wi-Fi Direct it’s possible for an adapter to be in infrastructure mode and simultaneously accept Wi-Fi Direct connections. So one can be both connected to the Internet and using Wi-Fi Direct.

]]>http://www.goland.org/thalimesh/feed/2I think T-Mobile ripped me offhttp://www.goland.org/the-t-mobile-rip-off/
http://www.goland.org/the-t-mobile-rip-off/#commentsSat, 03 Jan 2015 00:02:07 +0000http://www.goland.org/?p=1232]]>http://www.goland.org/the-t-mobile-rip-off/feed/2ESPlanner – Figuring out life insurance, retirement and morehttp://www.goland.org/esplanner/
http://www.goland.org/esplanner/#commentsTue, 23 Dec 2014 16:22:15 +0000http://www.goland.org/?p=989
How much life insurance do we need? How much do we need to save for retirement? How much do we need to save for our daughter’s college education? These are basic financial questions and they are unanswerable because they require perfect (or at least reasonable) knowledge of the future and, as the song goes, “the future is not ours to see”. But, regardless, we still have to muddle through. So this is where the program ESPlanner can be helpful, if you understand what it’s doing and what its limitations are. Below I explain how our family uses ESPlanner, primarily to figure out how much life insurance to get but also as a spot check for our retirement and college savings plans. [Note: Updated on 12/23/2014 to account for switching to ESPlanner Plus]

ESPlanner is a software program that lets you enter (as explained in gory detail below) a ton of information about your past, present and future financial life. It also lets you enter data about what happens if you or your significant other (if you have one) dies. The point of entering all this data is to try and figure out things like how much life insurance to get for each person, how much to save for retirement, etc. It also makes it possible to model the effects of different future financial decisions. For example, what if you want to buy a vacation home? Or retire to another state? What about taking Social Security payments early? ESPlanner has programmed into it all the federal and state tax laws (with projections of how they change over time) so it can actually capture the consequences of these decisions and let you see what happens.

I use ESPlanner primarily to help us figure out how much life insurance to get. But it’s also very useful in forcing us to think through our financial lives and understand where we think the money is going to go. To a lesser extent I use ESPlanner to spot check our retirement and college plans as well as get a general sense of how reasonable our spending and savings rates are. Outside of helping to get an actual figure for life insurance I really use ESPlanner as a sort of financial check up.

ESPlanner can’t predict the future. For example, it has tons of information about a wide variety of tax laws. This is really useful in understanding the tax implications of various choices (want to model choosing a Roth 401(k) versus a traditional 401(k)? ESPlanner can easily handle that), but remember tax laws can and do change. Of course, ESPlanner can let you put in some of your predictions for those changes, but that isn’t useful to me since I have no idea where things are going to go. In fact, ESPlanner requires one to put in a non-trivial number of predictions about the future. Are they reasonable? Beats me.

The point is summarized by a very old software aphorism called GIGO - Garbage In, Garbage Out. If the data we put into ESPlanner is wrong then we shouldn’t be surprised when the recommendations are wrong. This isn’t ESPlanner’s fault. But still, as someone said, it’s hard to make predictions, especially about the future.

Which means one should take ESPlanner’s recommendations with more than a grain of salt.

ESPlanner has no logic regarding inheritance tax. For the vast majority of people that is probably o.k. because the Federal inheritance tax exemption is quite large and most states either don’t have inheritance taxes or have a reasonably large exemption. But if you are lucky enough to have enough money that any of those taxes apply then using ESPlanner can get you into real trouble. If your estate is big enough for this to be a problem then I suspect you have to hire an estate planning attorney to help you figure out how to structure your estate to minimize taxes. You will need to talk to that person to figure out what tax exposures you have and enter those manually into ESPlanner.

There is now an on-line version of ESPlanner but I’ve never used it mostly because I have more than a little bit of experience in securing cloud services and so I try hard not to put any useful data in “the cloud”. Thankfully there is a software version of ESPlanner one can download but I find its UX to be pretty painful.

It took me a while to figure out how the data entry dialogs work, although once I got the basic idea they were easy. The painful part is anytime I want to change anything. Couldn’t they just let me enter a change right into the table dialogs? Please?

Another problem is that the implications and functionality of certain entries are extremely difficult to figure out and I pretty much never find the help dialogs an actual help. In most cases I just have to play around with different values in the entry to figure out what the heck it’s actually doing and ask a lot of questions in the ESPlanner website.

That having been said, the introductory document is actually quite reasonable and the program does work, so I think it’s worth the pain. But there is plenty of pain to be had.

In a world of $5 programs for smart phones and free ad driven websites the cost of ESPlanner can seem a bit extreme. The basic version costs $149. But I now use ESPlanner Plus and it costs $200. After a year I have to spend another $50 (for either version) to renew the subscription so I can continue to get updates. However given the complexity of what ESPlanner does and given how small the audience of people who I suspect will ever use it I think the cost is reasonable. ESPlanner contains federal tax rates, state tax rates, social security rules, medicare rules, logic to predict how those rules will change thanks to things like inflation, etc. It’s a whole mess of very detailed work and the laws it is based on change every year so the code has to be updated every year. So yes, it’s expensive compared to most programs but if there’s another program in the same league that costs less I’d love to hear about it.

Until this year I used the basic version of ESPlanner. I have now switched to ESPlanner Plus. The main difference between the two is that ESPlanner Plus allows one to run Monte Carlo simulations of different portfolio allocation results. Since Monte Carlo simulations use a normal distribution to approximate stock market returns, and the stock market doesn’t follow a normal distribution, the feature isn’t very useful in my mind. But it brings along a subsidiary feature that I want very much: the ability to create different portfolios of various investments and then assign them to different years. This allows me to simulate things like expected returns as I switch our portfolio from stock heavy to bond heavy as we get nearer to retirement. Unfortunately making this all work is really quite difficult, as I explain in gory detail below. For a long time I didn’t care because I was sufficiently far away from retirement that the details seemed kind of spurious. But alas, that is no longer true.

After pressing the “Create Reports” button and then OK, I first check the Inputs and Assumptions section of the report. I’ve found on multiple occasions that I either entered something wrong or had values apparently disappear. I think the disappearing value problem is caused by forgetting to press ’apply’ and not getting a warning about an unapplied value when changing tabs. So it’s critical to check that what you expect is actually there. I then usually walk through the Details sections to see if the numbers seem reasonable, again, another check for errors.

When looking at the report the first place I usually go is Suggestions - Annual Suggestions to see the living standard per adult. I then go to my financial program, take our total expenses (minus taxes) for the last 365 days, and divide them by 2.16. The 2.16 comes from (2 + 1*0.7) * 1.6/2: we estimate that our daughter costs 70% of an adult (giving 2.7 “people”), the 1.6 represents the assumption that 2 people live as cheaply as 1.6, and dividing by 2 turns that economies-of-scale factor into a per-adult figure. I then compare the result to the living standard per adult number. Typically the number recommended by ESPlanner is larger than what we actually spend per adult according to this calculation. (A worked example follows below.)
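To make the arithmetic concrete, here is the normalization with a purely illustrative expense figure:

per-adult spending = E / ((2 + 1*0.7) * 1.6/2) = E / 2.16

So if E were, say, $108,000 of after-tax spending for the year, that would be 108,000 / 2.16 = $50,000 per adult to compare against ESPlanner’s living standard per adult number.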

What this says is that according to ESPlanner we are over saving/under spending. It also argues that ESPlanner’s life insurance recommendation (made in the section “Suggestions - X Suggestions” where X is the current year) are too high since we don’t live at the spending level ESPlanner says we could live. But keep in mind that ESPlanner assumes that our future predictions of income are correct. The reality is that we can’t know that. With the economy in bad shape and with me working in an industry that actively discriminates against older employees I have to take into account the very real possibility that I may end up unemployed or under employed for a very long time. Sure, right now everything is roses, but who knows what the future holds?

So it’s tempting to estimate the life insurance amount down, but I don’t. The reason is that if the future is better than I hope then I’ll need even more life insurance, and it gets expensive to buy more later. Since I can afford the insurance premiums now, and since it’s very easy to lower the amount of life insurance (and so reduce our premiums) later without negative consequences in case I can’t afford the premiums anymore, I tend to err on the side of accepting higher premiums.

Because we save more than ESPlanner recommends I tend to largely ignore its suggestions for saving for retirement, especially the ones that tell me to run down our savings now and save more later. The logic ESPlanner is using is called consumption smoothing and it makes perfect sense if and only if one knows the future with high certainty. Still, the fact that the numbers are lower than what we are actually saving is a good check that our savings program is reasonable. It’s also interesting to run scenarios where we retire early and compare those to our current savings.

I also look at the College Savings amounts (available under Details - 529 Savings) and use it to check our current college savings. Once I’ve adjusted for our projected GET savings I use this to help check how much we are saving in our 529.

In this section I walk through the settings I try to use in ESPlanner in gory detail. The sections below, except for Family Information which is filled out when creating a planner, match the tabs on the left side of the planner UX in version 2.28.0.

Children in household I accepted the default that our daughter will “leave the household” at 19. Keep in mind that my estimates of college costs include room and board so I account for her living expenses there even after she has “left” the house.

Economics-Based Planning This uses consumption smoothing, the idea that one should try to spend the same amount of money every year one is alive rather than say living badly when one is young in order to live better when one is older. When in this mode the program will try to calculate how much one should save/spend each year in order to achieve smooth consumption. I don’t really use this mode anymore.

Economics-Based Planning with Upside Investing The goal with this model is similar to the previous one, except that here the assumption is that one will start a process whereby one moves all of one’s “risky” (read: stock) assets to “safe” (read: cash, treasuries, TIPS, CDs, anything federally insured) assets over some period of time. The model lets you specify how much money to put into stocks each year until the start of the transition to safe assets. Once the transition starts no more additional money will be put into stocks and the stock portfolio will be sold in equal pieces each year until the target date when all stocks are supposed to be sold. The program will then calculate what would happen if all of one’s stock money went away and one only had the ability to survive on the safe assets. It will then use (bogus, see previous comments on the Monte Carlo method) numbers to show probabilities that the stock portfolio will survive and provide extra income. This model doesn’t actually represent how I intend to save for retirement. My goal is to have a stock/bond ratio where the bond part gets heavier each year. But I still intend to rebalance to that goal amount and put new money in as it’s earned, since we will make the transition before we retire. None of which upside investing handles, so it’s pretty useless to me.

Economics-Based Planning with Monte Carlo Simulations This is a more sophisticated beast. Its reason for existing is that it allows one to describe one’s portfolio and then run a Monte Carlo simulation to see how that portfolio might do. The Monte Carlo part unfortunately is just plain silly. It’s been known at least since the late 1960s that stocks follow a “fat tail” distribution that is not normally distributed. Or in English: good and bad things happen way more often than anything related to a Gaussian distribution (log normal or otherwise) would predict. It’s also unknown if the correlations between assets are stable; in fact, there is a lot of evidence that when things go seriously south correlations tend to increase. But what the Monte Carlo model does let me do is create a bunch of portfolios representing my target stock/bond distribution over different periods of time and then set up those portfolios to be used by the program.

The program supports up to 10 portfolios: Default and Portfolios 2 - 10 (or whatever custom names I assign). Each portfolio consists of a set of assets along with their mean returns and correlations. The goal then is to create portfolios that represent our portfolio over time. To make things more interesting, portfolios can be taxable or tax exempt.

What I want to model is how my asset allocation will change over time as I move from 70/30 stock/bonds to 0/100 stock/bonds. There are two challenges. First, how do I map the transition into the portfolios I have available? The second is, what assets do I put into the portfolios?

For how to map the transition I just use algebra. In an ideal happy world I would have enough bonds when we retire that we can meet our minimum needs indefinitely. Then if by some miracle there is excess money we can put that in stocks and goose our living standard at best or just live our minimum at worst.

Currently I simplify things by using a portfolio for each decade. So my current portfolio is the 70/30 portfolio. I then calculate how many decades until I want to hit 0/100, divide 70 by the number of decades and subtract that over and over again. Again, just simple algebra. So let’s say that I’m in my 40s and want to be 100% bonds by my 80s. That adds up to the 50s, 60s, 70s and 80s, or 4 decades, and 70/4 = 17.5. So portfolio 0 (for my 40s) would be 70/30, the 50s = 52.5/47.5, the 60s = 35/65, the 70s = 17.5/82.5 and the 80s = 0/100. This gives me 5 portfolios (the calculation is sketched in code below). For extra accuracy I could split each decade into a taxable and tax exempt portfolio since I tend to have different assets in my taxable than my tax exempt accounts. But at some point it’s just too much spurious detail so I’m not going to worry about it. Instead I use the same portfolio for both taxable and tax exempt accounts. Note that I could easily come to regret this as our taxable and tax exempt portfolios are not the same. We tend to go bond heavy in our tax exempt money and stock heavy with our taxable. But honestly I’m reaching the end of my patience with this game so I have decided to simplify before I give up. I must admit that the endless issues I’ve had with E$Planner have worn me down.
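The arithmetic is trivial, but here it is as a sketch since it’s easy to fumble when re-entering portfolios by hand (the function name is mine):

// Evenly step the stock share down to zero over the given number of decades.
function glidePath(startStockPct, decades) {
  var step = startStockPct / decades; // e.g. 70 / 4 = 17.5
  var portfolios = [];
  for (var i = 0; i <= decades; i++) {
    portfolios.push({ stock: startStockPct - step * i,
                      bonds: 100 - (startStockPct - step * i) });
  }
  return portfolios;
}

console.log(glidePath(70, 4));
// [ { stock: 70, bonds: 30 }, { stock: 52.5, bonds: 47.5 },
//   { stock: 35, bonds: 65 }, { stock: 17.5, bonds: 82.5 },
//   { stock: 0, bonds: 100 } ]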

This then brings up the problem of what to put in the portfolios. I tried to create my own assets but that turned into a complete disaster. There is something seriously screwy with E$Planner that makes it lose its mind when users enter their own assets. So I had to give up on that strategy and instead just try to use the assets that are already there.

In theory I just have two types of assets, safe and risky. Risky has a hoped for return of about 4% in real terms. Safe would have a real return somewhere between 0% (what we are currently seeing) and 2% (what we have historically seen).

For “Safe Zero”, that is, a safe asset returning 0% average real return, we have to use a tiny bit of algebra. The pre-programmed cash asset that comes with the program returns an average of -3.04% (call it -3%) and the Short Term Government Bonds return 0.6%. So we can create a synthetic 0% mean return asset as follows:

-0.03X + 0.006(1 - X) = 0
-0.036X + 0.006 = 0
X = 0.006 / 0.036 ≈ 0.1667

So 16.67% of safe zero money should go to cash and (1 - 0.1667) = 83.33% should go to Short Term Government Bonds. (A generic helper for this algebra follows.)
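The same algebra works for any pair of assets; here is a small helper (mine, not E$Planner’s) that solves X*r1 + (1-X)*r2 = target for X:

// Fraction of asset 1 needed so the blended mean return hits the target.
function mixWeight(target, r1, r2) {
  return (r2 - target) / (r2 - r1);
}

// Cash (rounded to -3%) plus Short Term Government Bonds at 0.6%:
console.log(mixWeight(0, -0.03, 0.006)); // 0.1666..., i.e. 16.67% cash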

For “Safe Two”, that is, a safe asset returning 2% average real return, I need to use another asset, because the math requires the second asset to have a return larger than twice my desired return or it won’t work out. In other words, I need an asset with a return higher than 4%. So I picked Dimensional Large Cap International with a return of 7.12%. Since I’ll only use the median return I can largely ignore the variance and beta.

So, putting this all together for safe zero plus risky, I would get an allocation like:

                                          40s      50s      60s      70s      80s
Stock Portion                             70%      52.5%    35%      17.5%    0%
Inflation Indexed Government Bonds Fund   70%      52.5%    35%      17.5%    0%
Cash                                      5%       7.92%    10.84%   13.75%   16.67%
Short Term Government Bonds               25%      39.58%   54.16%   68.75%   83.33%

Each column is one of the 5 portfolios covering each decade.

So now I have to go to the Implement Portfolios tabs, calculate the years for each decade and enter them all in, three times: once for taxable, once for tax exempt for me and once for tax exempt for my wife. Make sure that for the last decade you set the drop down for the last year to the last year the dialog supports. This is the year we have set as our death year. I try to eyeball the Portfolio Choice table to make sure I set everything the same. And yes, this interface really sucks and makes experimenting with different portfolios and allocations incredibly painful.

This lets me spend based on different assumptions regarding the actual returns one will get on one’s portfolios. This setting is not that useful to me since I’ve already put my hard guesses about portfolio returns in the “Build Portfolios” section. So I set this to Spend aggressively.

Set Retirement Date I picked 65 for myself based on nothing useful. After all, I believe the social security retirement age for my age cohort is 67 and I wouldn’t mind being in a position to retire by 55. I’ll probably play with those numbers but for life insurance purposes I am using 65.

Employee Wages I took my current compensation package (including stock grants and bonuses) and my expected maximum wages (based on a complete blind guesstimate, though I did pick a figure reasonably close to what I make now since at my age I’m likely getting close to my maximum earnings) and calculated the geometric yearly average increase. The formula is CS(1 + X)^Y = MS, where CS is current salary, X is the growth rate I’m trying to solve for, Y is the number of years until I think I will achieve maximum salary and MS is maximum salary. A little arithmetic gives X = (MS/CS)^(1/Y) - 1. I assume once I hit maximum my salary will only go up at the rate of inflation.
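Or as a couple of lines of code (the dollar figures are made up purely for illustration):

// Solve CS * (1 + X)^Y = MS for the geometric growth rate X.
function salaryGrowthRate(currentSalary, maxSalary, years) {
  return Math.pow(maxSalary / currentSalary, 1 / years) - 1;
}

console.log(salaryGrowthRate(100000, 130000, 10)); // ~0.0266, i.e. 2.66%/year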

Just a data entry exercise (although I do enter my ETFs as mutual funds, not individual stocks). The only big gotcha was remembering not to include any cash we are using for our reserve fund. That money is entered in the reserve fund section later.

I use a Roth 401(k) so I put my contribution into the Roth IRA column. I do get an employer match and I need to see what I can do there, since I suspect ESPlanner treats the employer match as going into a 401(k) and not a Roth 401(k). Note that I specify a 0% growth rate because I already contribute the maximum and expect the maximum contribution to only go up at the rate of inflation. If I’m reading the report correctly, I believe ESPlanner treats both the employer contribution and the Roth IRA contribution as increasing at the rate of inflation, so everything appears o.k.

I used the defaults. This means we aren’t planning on annuitizing any of our assets. As we get closer to retirement I may change my mind. But given that apparently there is no level of financial malfeasance on the part of the financial sector that will be punished by our government the idea of handing over a bunch of money with a vague promise of a guaranteed return for life is just too absurd to accept. Which is sad because there are actually very good reasons why an annuity is a great idea both in theory and in practice. But, alas, as I get older the risk reduces so perhaps at some point I will use annuities. But for now I’m just not going to worry about it.

Because I expect to get a higher Social Security payment and thus will benefit more from suspending I specify myself for file and suspend. This is something I’ll play around with more as I get closer to retirement. The rest of the values are set automatically and I leave them alone.

Once upon a time the IRS used to mail out statements (they are supposed to start again soon) with a list of past earnings. We had kept our old ones so I entered the data from there and used our tax records to get the missing years. It’s theoretically possible to get the earnings statement via ssa.gov but I’ve never gotten that to work for myself. I don’t remember ever blocking access on-line but it certainly sounds like the kind of thing I’d do.

Desired Percentage Change in survivor’s living standard 0% for both. Based on everything I’ve read spending in retirement doesn’t usually go down unless one has absolutely no choice. One, after all, wants to enjoy retirement. Not just sit in a room waiting to die.

Special bequest No plans for one.

Funeral expenses in today’s dollars It seems like funerals cost around $10k for all the bells and whistles.

In general I want E$Planner to make a life insurance estimate for me. That’s one of the main benefits of the program. But there is an interesting complication. My employer has a very nice policy whereby all unvested stock grants vest instantly if the employee dies. So I treat this as effectively an insurance policy with a face value equal to the amount of unvested grants I generally have outstanding. I use my employer’s long term stock price (not its current price) to estimate the face value. Since this is effectively term insurance it has no cash value.

I also have a mandatory life insurance policy through work so I throw that in too. Again, this is effectively term insurance, so I use its face value, not cash value.

Note however that, since it’s possible I might inconveniently die while unemployed, I tend to ignore these numbers when I actually purchase insurance. I include them here though so I can understand the cost of paranoia.

One of our big financial goals is to have a home owned free and clear by the time we retire. The idea being that this will stabilize our cash flows during retirement since we won’t have to come up with the rent every month. It will also provide some hedge against inflation. Our thinking is to try and save enough in cash to buy a residence rather than taking on a mortgage. I randomly picked 10 years from now for when we will buy the home. I set the purchase price based on housing in our area. Property taxes I got by looking at local houses on Redfin. For homeowner’s insurance I used the estimate from the New York Times rent vs buy calculator. I use 1% for maintenance costs. And I used the New York Times estimate for closing costs.

We do not own any nor plan on investing in actual real estate assets directly. If we were going to invest in real estate directly we would probably use REITs. But honestly we have enough exposure to housing through our stock index funds (all the public home builders) that I don’t stress this much.

It’s tempting to put college expenses here but that isn’t right in our case, instead college expenses are covered in the 529 section.

We buy new cars every 10 years. So I take the current cost of new cars (based on our recent purchases), remember to add the 9% sales tax, and set that up as an expense that recurs every 10 years until we die. Yes, I realize we probably won’t be driving cars at 90+, but that just means there will be some other expense to take its place. I do wonder if car2go or zipcar might eventually change this logic, but I’m not willing to bet on that quite yet.

The other cost I put in here is private school. In addition to the base cost we also have to include the “optional” mandatory donations. Based on looking around Seattle we can expect 1st - 5th grade to cost around $17,000 a year and 6th - 8th to cost around $17,500. It looks like high school costs somewhere in the range of $29,000 a year for 9th - 12th (no, I’m not kidding; the 1st - 8th grade costs are taken from Villa and the high school costs from University Prep in Seattle). We are actually planning on sending our daughter to public school, but I want to make sure that we are budgeted for her to go to private school if it turns out public school isn’t working.

The emergency fund is intended to handle unexpected expenses and/or unemployment. Once we retire we are completely dependent on our retirement savings, so we don’t need the reserve fund anymore. Put another way, it is a requirement that our retirement assets be structured so that a reserve fund isn’t necessary. So I set up the reserve fund to have a fixed amount of money from now until we retire. I set the growth rate to 1% nominal (meaning a negative real return, which is mostly what I’m seeing these days).

I put the nominal rate of return at 3% (equaling the inflation rate for a real rate of return of 0%). The size of the fund is tuned to how long we want cash to keep us going in case I lose my job. We use that value for step 2. We skip 3 and we don’t worry about 4 since the goal isn’t really to increase it and we would more or less like to stabilize our spending.

Note: The reserve fund money is treated separately from the money entered in “Assets & Savings”. This confused me because I just put all of our cash in “Assets & Savings” and was surprised when our net worth was higher than it should have been. So I have to make sure to subtract the current value of the reserve fund from our Assets & Savings.

I have an article I wrote explaining my projections for college costs. But I have to make an adjustment to those numbers to account for the fact that we have a Washington GET plan. The way I do this is as follows: I take the UCLA out of state tuition and fees (available here) which acts as my “reference school” for college costs and I subtract from it the current UW in-state tuition and fees (available here) plus an extra 25% and I use that as the base cost. The extra 25% accounts for the fact that we are saving for 5 years of costs in the GET program but spending it across 4 years (which is explicitly allowed in the GET plan).

So the base number is $13,194 + $22,878 - ($12,394 * 1.25) = $20,579.50. Then I add UCLA room and board for off campus apartments as well as books and supplies to get a final yearly number of $20,579.50 + $1,599 + $14,571 + $585 + $1,638 = $38,972.50. That covers tuition, fees, room and board and books. It does not include health insurance because I am assuming we will be able to offer our daughter health care coverage under our policy. I then grow this by my estimated rate (see article) of 2.85% real increase in yearly college costs. So for my daughter’s first year of college in 2024 this would be: 38,972.5 * (1 + 0.0285)^10 = 51,618, in 2014 dollars. Note that the 2.85% probably underestimates the benefit of the GET program since GET payouts rise at the rate of tuition/fees, which typically go up faster than room and board, but oh well.

So I enter $51,618 as “tuition” (I know, it covers multiple things) for year 2024 in today’s dollars as a qualified expense to be funded by 2024. Then I multiply by 1.0285 to get $53,089 for 2025, $54,602 for 2026 and $56,158 for 2027.
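The whole schedule in a few lines, matching the numbers above:

// Grow the 2014 base cost at 2.85% real per year for each college year.
var base = 38972.5;
for (var year = 2024; year <= 2027; year++) {
  var cost = base * Math.pow(1.0285, year - 2014);
  console.log(year, Math.round(cost)); // 2024 51618, 2025 53089, 2026 54602, 2027 56158
}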

Finally I put the nominal rate of return of the 529 plan as being 3%. Which means, since I’m modeling inflation as 3%, I expect zero real growth in our 529 plan’s value. Given the nightmarishly bad 529 plan options available this is actually better than I expect to get due to fees. The only reason I even use the 529 plan is that the effective return after accounting for the tax benefit is high enough to make it worth my while.

Federal Taxes/FICA/State Income Tax Left blank (you think I know? Although this is a great screen to torture ourselves with in order to play ’what if’?).

Municipal Bonds I just calculated what percentage of our taxable assets were in municipal bonds. Note that I have no target amount. I try to place bonds as much as possible into tax exempt accounts but sometimes I need to put some in taxable for a variety of reasons.

Dividends and Capital Gains 100%. All “income” received from these assets is either capital gains or dividends.

Unrealized long-term capital gains/losses on regular financial assets I just calculated the difference between our taxable asset’s current value and their original cost.

Given America’s snarling hatred for its own people has risen to such an extent that it literally allows its citizens to go without water, I can only assume that the funding issues with Social Security won’t be addressed. According to the Social Security trustees’ report this means that in 2033 Social Security will only be able to pay three quarters of its scheduled benefits. So I set a change to Social Security in 2033 and reduce benefits by 33%, i.e. I entered -33.

The same trustees’ report also covers Medicare. Unfortunately it doesn’t really (nor should it) try to explain how the deficits will affect premiums. Those deficits could lead to premium rises, payroll tax rises or, more likely, some combination of both. I have more faith in Medicare being taken care of than Social Security, for the simple reason that a whole industry of parasites has arisen around Medicare (ranging from drug companies to doctors to hospitals) who try to squeeze every penny they can out of the government and in the process make Americans pay literally twice as much per capita for medical care as anywhere else in the world. The parasites will want to keep that gravy train going, so they’ll figure out something to keep Medicare funded. This doesn’t mean premiums won’t go up; I just don’t know by how much. So my current guess is that premiums will rise at least at the rate of inflation (though that has not been the case so far), so I’ll leave the real rate at 0.

Inflation is set at 3%. This has been the long term average of inflation in the U.S. although with our government printing money with abandon one is forced to wonder what the future holds. The “other” settings are at defaults.

I removed the dead spouse’s car and added $30k a year of extra expenses until our daughter is 18 to deal with being a single parent household. I picked that number out of the air, btw. I also marked it as not tax related.

The decision to switch from Java to JavaScript continues to be interesting. One of the consequences is that it has made it much easier to have conversations with the IoT community, who it turns out like Node.js a lot and have problems that Thali is perfect for solving. So we are talking to potential customers who we can then leverage to get resources to build Thali. I wrote an article explaining what it is we want to build in that context. Please give it a read and let me know what you think!

]]>http://www.goland.org/thaliandiot/feed/0An update on charitable givinghttp://www.goland.org/updatedgiving/
http://www.goland.org/updatedgiving/#commentsSun, 26 Oct 2014 22:46:23 +0000http://www.goland.org/?p=1221
My assumption is that my dear reader has reviewed my previous article on giving. It explains my philosophy and approach. The purpose of this article is to update things a bit.

Education

I increased the budget for Education from 50% to 57% of my total giving by reducing the legal reforms budget. My giving goes to the Center for Economic Policy Research, Democracy Now!, the Electronic Privacy Information Center (EPIC), the Free Software Foundation, the Greg Palast Investigative Fund, the Freedom of the Press Foundation, ProPublica, Wikimedia and YES! Magazine.

Democracy Now and EPIC get the bulk of the money.

The changes since my last article are:

YES! Magazine I used to give to Truthout, but they are so rabidly biased and sensationalistic that I’ve never been comfortable with them. So I’m switching the money I used to give them to YES! Magazine. I’m not super happy with YES! Magazine’s review on Charity Navigator, but I love their magazine and reporting, and honestly my donation isn’t so large that I’m overly concerned. And yes, I did make a donation to Charity Navigator to help support their awesome service.

Freedom of the Press Foundation I added them this year. They are focused on protecting journalists. They provide platforms for securely submitting information to journalists. They paid to have a stenographer at the Chelsea Manning trial when the government refused to provide records. They help provide a mechanism to fund WikiLeaks, Tor and other organizations relevant to their mission. Their board of directors is a who’s who of heroes, including Daniel Ellsberg, Edward Snowden, Glenn Greenwald and Laura Poitras.

There are also two other organizations I give to that I didn’t mention in the previous blog post. I’m not quite sure why. Perhaps because they don’t really fit well into any of these categories. But I believe both are important.

Electronic Privacy Information Center They receive the 3rd highest amount of money from me (after Democracy Now! and the ACLU) in my yearly charitable giving. Their “super power” is that they know how to work the government. Probably their most successful approach is Freedom of Information Act requests, which they inevitably end up having to sue the government to get fulfilled. They also love to sue the government anytime it does something in contravention of its own laws or the constitution. They also provide frequent testimony before various Congressional committees, provide supporting information for other people’s lawsuits against the government, know how to work the comment periods of various agencies, file complaints, etc. If it involves electronic privacy, from filing FCC complaints about WhatsApp and Snapchat to suing the NSA over bulk data collection, EPIC is there.

Free Software Foundation The original fount of open source, they are one of the key providers of the open source software infrastructure that makes the world run. All of us in the software industry, commercial, open source or both, owe them a debt of gratitude (even if we don’t use the GPL).

Palliative Care

No changes here. I still give to the ACLU, the Electronic Frontier Foundation, Verified Voting and the Tor Project. The ACLU and the EFF get the bulk of the money. This is still 40% of my giving budget.

Legal Reforms

This is where the biggest changes occurred. I lowered my giving to this category from 10% to 3% and, as I explain below, there is only one organization left in it: Move to Amend. But this is a bit misleading. These articles focus on my charitable giving; I also have taxable giving, this year mainly to mayday.us and 15now.

Rootstrikers They aren’t tax deductible and anyway the money I would have given them now goes to Professor Lessig’s other organization, mayday.us.

Represent.us Are they still even alive? I’ve looked at their website and tried to figure out if they really are up to much anymore but I just don’t see much. So I’m pulling them out.

Move to Amend They are the little engine that can’t. But they are trying! They have affiliates all over the U.S. and are constantly organizing. I recognize this is a long term effort and am willing to be patient. My main annoyance is that they are still a ‘project’ of Democracy Unlimited of Humboldt County so I can’t see their finances broken out.

So basically this entire category has collapsed to just Move to Amend, which is why I lowered the allocation from 10% to 3%. The balance went to Education.

There are quite a few meaty issues on our ballot this year. There is State Initiative 1351 which would force the state to fund our schools at something like a reasonable level. An easy yes. There is Initiative Measure No. 591 which would prevent the state from tightening the rules on transferring killing machines (known as guns) without any form of safety check. Bad idea. No. And of course Initiative Measure 594 which would require that nobody can just hand out a killing machine without a background check. An easy Yes! We can return Jim McDermott to Washington, always a good idea. There is a seemingly infinite number of judicial races, most of which I’m not going to vote in because I believe the candidates have tainted themselves by raising money that puts them in hock to the people they are supposed to oversee, and in many cases candidates couldn’t even be bothered to put up websites to fully inform voters. If a judicial candidate can’t spend the time to talk to the voters then don’t expect the voters to vote for them. For those in Seattle there is a metro measure we really need to support. For those who haven’t read one of my ballot cheat sheets before, you should probably know that with the exception of Jim McDermott I generally don’t vote for Democrats or Republicans.

Initiative 1351 Education is the forge of democracy. It is the bedrock upon which a fair society is built. An ignorant populace cannot rule itself. So we must always pay careful attention to our schools. This bill would increase funding to our schools by mandating smaller class sizes and more support staff. It still leaves school districts with flexibility on how to use this money but it sets up models to define how much money they get. Looking at the classroom numbers I saw only good ideas. Most schools would have classrooms with just over 20 kids and poor schools slightly fewer. Based on what I’ve seen this makes a lot of sense. Our society puts a crushing burden on its poorest members and they absolutely need every advantage we can possibly provide.

This bill would add somewhere around $1 billion a year in costs. Of this, roughly $70 million will come from higher property taxes and the rest from general state funding. Now $1 billion/year sounds like a lot until you realize that the state’s entire revenue for 2015, using the baseline estimate, is $33.332 billion. Yes, $1 billion is a lot but given the size of our revenue we can afford to actually invest in our fellow citizens.
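To put that in perspective, a quick back-of-the-envelope calculation (my own arithmetic, using the two figures above):

\[
\frac{\$1\ \text{billion}}{\$33.332\ \text{billion}} \approx 3.0\%
\]

In other words, fully funding this bill would take roughly 3% of projected state revenue.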

Also note that currently Washington State is 47th (yes, 47th) in the nation in class size. Note also that the State Supreme Court has ruled that Washington State’s funding for schools is so derisory that it ordered it to be increased. Courts hate doing that sort of thing so you can be sure it was really, really bad before the court was willing to step in.

Those against the bill whine that the money doesn’t go mostly to teachers. Well duh, do you have any concept of how many people it takes to run a school? What I like about it is that the support staff it does call out are ones that schools really need, like librarians, nurses, social workers, psychologists, guidance counselors and custodians. This is good stuff. Will our taxes go up? Yup, they would. And that’s a good thing. Hopefully we can target those tax rises mostly at the rich, something we get automatically with property tax increases.

Initiative 591 The first part of this bill is just bull baiting, telling straight-out lies about the government coming for guns. Our state constitution already prevents taking any property, including firearms, without due process. So right away you can be sure that the intentions of those writing this bill are dishonest. They are trying to scare people by making them somehow think the law does not already protect their property. It tells you a lot about the bad intentions and low opinion that the people who wrote this bill have of the voting public. Therefore the only legally meaningful part of this bill is preventing Washington State from enacting any background check legislation until a national standard is passed. That is a violation of states’ rights. The whole point of the multi-state model (something our Supreme Court often seems to forget) is that it is a ‘laboratory of democracy’. So long as the states don’t violate the federal constitution they are free to experiment as they see fit. This law would prevent that in the area of background checks.

My own belief is that our current background check system is filled with endless loopholes that need to be closed. Those behind this bill want to keep the current gun free-for-all we have. I don’t. I’m voting No.

Initiative 594 Right now in Washington State we only require background checks when a licensed gun dealer sells a pistol. All other weapons are exempt, private sales are exempt, gun shows are exempt. Just about everything is exempt. This bill would completely change that. It requires all transfers of guns involving anyone in Washington State to pass through a background check. To me this is screamingly obvious common sense. Guns are tools designed for exactly one purpose: to kill. We absolutely shouldn’t allow killing machines to be handed around without stringent checks to make sure they don’t end up in the wrong hands. The bill isn’t perfect. I think it still has too many loopholes. But it’s vastly better than what we have now.

Bring some measure of sanity to our state: require background checks before allowing the transfer of machines whose exclusive purpose is to kill. Vote Yes!

Jim McDermott He voted against the Patriot Act, he voted for health care reform (and pushes for single payer) and in general he represents a lot of the things I feel and believe. An easy yes vote.

Craig Keller I don’t vote for Republicans so this one was easy. But I read his candidate statement anyway. The fear mongering of his voter guide entry was nauseating. Just straight up xenophobia. “Immigrants are coming to take my job!!!!”

Mary Yu She is running unopposed but she had still raised $46,255.59 at the time I looked it up. Seriously? Judges have no business raising money when it puts them directly in conflict with the folks they are supposed to be making decisions about. She has received donations from law firms, from PACs, etc. This isn’t somebody who should be sitting in the top chair in the state judiciary.

Mary E. Fairhurst Clearly Mary Yu is an underachiever because Mary E. Fairhurst, running unopposed, had raised $91,747.50 when I looked. And yes, from PACs, tribes and an endless stream of attorneys. Again, someone this tainted has no business on the highest court of the state.

Eddie Yoon Looking him up at votingforjudges.org I came away with the impression of someone who is simply not qualified to handle the variety of cases that a State Supreme Court justice must handle. You want to see someone with at least appellate experience, if not judicial experience; he has neither.

Charles W. Johnson So I have to admit, I’m kind of impressed. Even though he is running against someone he has only raised $22,938.14. Unfortunately it’s from a mess of PACs and attorneys. But given his generally good record and his low level of fundraising I got curious, so I went to his website. His supporters are overwhelmingly Democrats although a few Republican organizations do get a look in. What’s scary is how many of the people giving money are judges. I realize it could be because they support Justice Johnson (or just don’t support Mr. Yoon). But it also feels like they need to pay homage so that when their decisions go before the Supreme Court, Justice Johnson will perhaps be less likely to embarrass them by rejecting those decisions. The same logic goes for attorneys, special interests, etc. Having to declare fealty to a judge inherently removes that judge’s impartiality. Having judges raise money and recruit supporters seems like a recipe for corruption. Politicians are explicitly not expected to be impartial, but judges are. How can that be when they need to run around and get people to swear loyalty to them? But on balance Justice Johnson seems reasonable so I’ll vote for him.

Debra L. Stephens It seems there is a line somewhere on fundraising after which I get uncomfortable. I could sorta live with Justice Johnson raising $22k but Justice Yu’s $46,255 was too much. So that clearly puts Justice Stephens’ $68,208.39 over the bar as well.

John (Zamboni) Scannell A quick look at votingforjudges.org shows that literally nobody thinks Mr. Scannell is qualified to sit on the Supreme Court. It is also probably relevant that he was disbarred by the State Supreme Court for misconduct in a disciplinary hearing. I did try to go to Mr. Scannell’s website but unfortunately it wasn’t working.

Michael J. Trickey On the bright side, he didn’t raise any money. On the not-so-bright side I can’t find anything about the guy! He is running for election and he can’t even be bothered to put up a website! Talk to us!

Johanna Bender On the negative side she did raise money but only $2,615.90. Apparently in amounts small enough not to need disclosure. She does have a website! But it doesn’t work. She also has a long list of Democratic endorsements. Because, you know, judges are supposed to be partial. Not. Sigh.

Phillip Tavel He has raised only $6,688, so I immediately like him. He has a website (although not much content there). He is rated as well as Justice Chow. He has a background that seems well fit to the position. So he gets my vote. I also listened to the district candidate forum linked at the bottom of http://www.givethegaveltotavel.com/news.HTML and I was more impressed by Mr. Tavel than by Justice Chow.

Mark C. Chow He has apparently been up before the disciplinary board, but for relatively minor things, mostly not taking crap from a defendant who was abusive and being a bit too familiar with fellow Asians. Not great but I’m not going to throw someone in the drink for it. However he did manage to raise $74,033.85 for his campaign from lots of attorneys (whom I’m sure he is supposed to treat without bias when they are up before him). And his endorsements are the usual infinite number of Democrats. That having been said, I have deep respect for Justice Chow’s most awesome accomplishment, the creation of a dedicated mental health court. That is really something to be proud of. But on balance I just don’t want someone who raises this much money on our bench.

Eileen A. Kato She lists a website but it’s just a parked domain with nothing on it. But at least she didn’t raise any money. Nevertheless I think it’s fundamentally disrespectful to the electorate not to take advantage of the Web to educate us about your opinions and approaches. Next.

Anne C. Harper Her candidate statement talks about the court, but not about her. Thankfully she does have a website! And her website does talk about the things she has done, like helping with Justice Bonner’s work on a community court for homeless people in Seattle. She did put in $2,539.50 but it looks like her own debt. So yes, you get my vote (I’m sure you care =).

Jon M. Zimmerman He attacks his opponent based on the results of the King County Bar Association judicial evaluation survey, one that has been recognized as not being statistically accurate. And um... so what? The people she is paid to keep in line don’t like her; that could be a bug or a feature (as we say in software land). He then lists endorsements from the 46th District Democrats, Teamsters and others. So much for being impartial. I looked at his website but it really said nothing about his judicial approach or what he wanted to accomplish on the bench. And he raised $36,249.37, although $12,181 of that is debt so I assume it came out of his own pocket. The rest of the money came largely from attorneys, perhaps the ones who will appear before him? And the final kicker: his opponent has been rated Exceptionally Well Qualified or Well Qualified by lots of folks. Jon Zimmerman was rated by one organization, the King County Bar Association, and they rated him Not Qualified.

C. Kimi Kondo She is in charge of the mental health court, which is good. She is also endorsed by a whole bunch of Democrats, which is bad. She has a really pretty website (I suppose the pretty part isn’t relevant, but nice design is always appreciated) which explains, amongst other things, that her fellow judges elected her to a second term as Presiding Judge of the Seattle Municipal Court. She then goes to town listing her numerous (genuinely impressive) accomplishments. On the downside she did raise $25,658. I’m not happy that she raised so much money (and no, I don’t care that she has apparently only spent $7,978 of it; it’s the raising that removes impartiality, not the spending) but on balance I can live with it. She gets my vote.

Steve Rosen Good news: he has a website! Bad news: it doesn’t work. What’s worse is that he actually raised $24,265 but only spent $2,500 (which probably explains his website). So he can’t be bothered to talk to the citizens he wants to elect him, yet he raises money? No.

Willie Gregory He has a website, which is awesome. It’s almost completely content-free but at least he is trying to communicate with the people who are supposed to elect him! Unfortunately he did raise money, but only $2,619.33, so I can live with that.

Karen Donohue No website, and in her candidate statement she openly shows her bias by listing endorsements from various Democrats. She also raised $9,680.92, of which only $300 has been disclosed so far, and that disclosure is from a lobbyist. Um... no.

Damon Shadid Wow, he is on the attack against Justice Bonner. And I really don’t like that he lists a bunch of political endorsements. But he does have a website! I’m not sure why his website had to mention his brother, a world-renowned Middle East correspondent who died in Syria. But, it’s his brother. I’ll give that one a pass. He raised $66,327, though. That is just outrageous! On the other hand everyone with a breath has rated Mr. Shadid as either Well Qualified or Exceptionally Well Qualified. No one else bothered to rate his opponent.

Fred Bonner Justice Bonner probably has the best candidate statement of any of the judges. He really shows what he has done, helping to innovate in multiple ways to keep people out of jail. It’s actually quite impressive. It is true that Bonner hasn’t been going to judges’ meetings but given the vitriol involved I can’t say I blame him. And as for the Bar Association survey, it isn’t statistically accurate and is questionable anyway. A hard nosed judge isn’t going to be liked. Justice Bonner has raised $14,016, of which a good $5,000 is personal loans. I still don’t like all the money but it’s a low enough number that I can just about live with it. I think Justice Bonner’s record on the bench looks solid so I’m giving him my vote.

Proposition 1a I’m not completely sure exactly what this initiative actually does. Parts are clear. It accelerates the $15/hr minimum wage for childcare teachers. O.k., fine. But then it creates a “Provider Organization” to “facilitate communication” with the city, with a bunch of requirements for that organization that make me think it’s some kind of fix-up for some existing body. Read the ballot: there’s a list of requirements that I bet only a tiny number of organizations can match. It then creates a Professional Development Institute that is funded by the city but run by the previously fixed-up organization. The initiative states as its goal that it would reduce childcare costs to less than 10% of a family’s income but it doesn’t specify how or where the money is going to come from. Honestly this looks more like a job guarantee for a bunch of administrators than a way to get kids into early education. Arguments against 1a are that it potentially creates an unfunded mandate to provide universal preschool for all of Seattle’s children and that could cost a lot of money. I’m fine with that. As a society we must take on the protection and enrichment of our children through universal preschool education. It’s our job as adults to figure out the money, not our kids’. Nevertheless I just can’t escape the feeling that 1a is a fix-up for a small group of people to make money through the various organizations and training institutes mandated by the initiative. Fundamentally I just don’t trust this bill.

Proposition 1b This initiative raises property taxes to provide a voluntary preschool program for 3-4 year olds, with financial help for families that make less than 300% of the federal poverty level and subsidies of some sort for everyone else. I would have liked to see the care be free but I can live with a sliding scale. What I don’t like is that there is nothing permanent about this program. The city can make it go away at will. Even worse, this is not universal pre-school. The program is limited to 2,000 children. To put that in perspective, there are tens of thousands of children age 5 and under in Seattle. I don’t like this bill but it is better than nothing and, given how much I don’t trust 1a, I’m willing to vote for it.

The monorail measure This creates a committee, with a multi-million dollar budget raised from car tab fees, to study the monorail. No, not build it. Study it. The damn thing has been studied to death. Enough. Let’s focus on buses and light rail. If the monorail has a role, awesome, let’s put it into the overall plan. But more millions dropped down the study bucket is a waste of money.

The transit proposition This would put more money into transit. But it does it in a deeply awful way. First, it raises the sales tax by 0.1%. Sales tax is pretty much the most regressive form of taxation we have. It’s a bad idea and hurts the people who need transit the most. Second, it raises car tab fees by $60. Yes, it has a $20 rebate for low-income individuals but it’s still a net $40 increase and that’s the last thing we need to do to our city’s poor. Furthermore the problem this proposition was created to solve has largely gone away. Originally King County was going to cut a bunch of routes that would hit Seattle hard. This proposition would have provided the funding to restore those routes. But in the end King County blinked and decided not to cut the routes after all.

So why am I voting for it? Because we desperately need more mass transit in Seattle. Have you ever actually looked inside a bus on any reasonably busy line? It’s full! Furthermore, because our network isn’t large enough, most people (myself included) can’t really use the buses all that well: getting anywhere requires jumping on a bus that might only run once or twice an hour and then making a transfer. By massively increasing the number of buses and lines we can increase how often buses run and produce a system that could really do something about our traffic.

Seriously, have you actually driven in Seattle recently? It’s a nightmare! Traffic is off the scale and the city is growing like crazy (Seattle is literally the fastest growing big city in America). We need a lot more transit if we aren’t going to become LA writ small. So I’m voting yes.