Set it up so he can't fall more than a few inches and he could rollerblade / skateboard around a room with very low risk. This would be a one time thing, but a few hundred dollars could set this up for an afternoon.

Instead of the hassle of putting together a high-end PC with a Vive/Rift, one simpler option could be a Google Daydream [1] (or Samsung Gear VR [2]) and a latest-gen Android phone. Daydream is still in pre-order, but will be shipping in a few weeks.

If you live in NYC, they are available for testing at the Google NYC pop-up store [3].

Sounds like your grandpa is a pretty cool dude; sorry to hear he's getting towards the end. I was watching a skateboarding show called King of the Road, and the young skaters went to Tony Hawk's house and successfully skated the full loop. When they were done, Tony Hawk said, "This was a life's work for me. These guys come here and do it in an afternoon and are just like, see ya." Anyway, that doesn't help you with an application, but it might be fun to find a skateboarding documentary (Bones Brigade) and some other stuff and go through the history of skating with him. The improvement and the skill are unbelievable. Good luck, and enjoy the time while you've got it.

I know you asked for VR, but are there any options via a wire and harness? Something that prevents falling but lets him really try out skateboarding in a safe way? They have this for learning gymnastics, for movie stunts, so why not for seniors as well?

Several others have mentioned some of the issues with VR (need a gaming PC, no real dedicated VR skateboarding game, etc.) - maybe you can try something like the Wii or PS3 Tony Hawk: Ride game that had the dedicated board?

It's not quite the same thing, but there are many self-propelled wheeled contraptions for when balance is a concern. Standard and hand-propelled trikes are safe. Some even provide bucket seats with seatbelts. There is a bit of risk of tipping in sledges, but most people are fine going straight and slow. (Users are also strapped in to reduce the chance of injury.)

It's basically a trade-off: exchanging the skateboard or inline-skating feel for something that is more real and independent.

There's a problem with your use case and the current tech--artificial locomotion in VR tends to cause people to experience motion sickness. It's similar to sea sickness in that some people don't feel affected at all and others can be full on vomiting within a relatively short time of exposure. It's a natural reaction to the dissonance between your eyes visually saying that your body is in motion but your inner ear saying your body is at rest. Skateboarding/rollerblading specifically are both pretty extreme sports in terms of movement, so any sims with current tech are liable to make him feel nauseous after any kind of serious exposure. Most current VR experiences have 1:1 movement in the real and virtual world to avoid this problem, and the ones that don't tend to limit artificial locomotion to slow forward movements to try and cut down on the effects. Actual skateboarding/rollerblading are going to be pretty risky for him to try and enjoy in VR. Odds are they're going to just make him feel sick to his stomach. (Also, as a former skateboarder--most of the skill in the sport is balance and footwork. None of the headsets are tracking your feet, so it would be pretty hard to get a realistic sim built for it.)

That being said, the best tech out there currently is the Vive IMO, with the Rift likely tied once its Touch controllers ship this December. Both track you with 6 degrees of freedom, but right now the Vive is the only system that officially supports tracking your movement within a few square meters, and it ships with supported motion controllers. That means that within a room, you can walk around in both real life and the game world simultaneously, and reach out and interact with the virtual world. The presence you get from that kind of experience is impossible to describe. Once the Rift's Touch controllers ship, the two systems will likely be on par with each other.

The mobile headsets all have 3DoF tracking. That means that the rotation of your head is tracked, but not its position in 3D space--taking a step forward in real life won't also move you a step forward in the virtual world, but the direction in which you look will be 1:1. You don't have as immersive experiences on them because of that, but for experiences where you're a passive/seated observer you can still get a VR experience for a tiny fraction of the price of a Vive/Rift + VR capable PC. Their performance depends on the quality of your phone.
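The 3DoF-vs-6DoF distinction can be sketched in a few lines. This is a toy illustration only, not any real VR SDK; every name here is made up:

```python
def apply_tracking(pose, rotation_delta, position_delta, dof):
    """Update a virtual-camera pose from tracked head movement.

    Toy model: pose is {"rotation": (x, y, z), "position": (x, y, z)}.
    """
    rotation = tuple(r + dr for r, dr in zip(pose["rotation"], rotation_delta))
    if dof == 6:
        # 6DoF (Vive/Rift style): a real-world step also moves the camera.
        position = tuple(p + dp for p, dp in zip(pose["position"], position_delta))
    else:
        # 3DoF (mobile headsets): only head rotation is mirrored;
        # stepping forward in real life changes nothing virtually.
        position = pose["position"]
    return {"rotation": rotation, "position": position}

start = {"rotation": (0.0, 0.0, 0.0), "position": (0.0, 0.0, 0.0)}
move = ((0.0, 90.0, 0.0), (0.0, 0.0, 1.0))  # turn 90 degrees, step forward 1 m

print(apply_tracking(start, *move, dof=3)["position"])  # (0.0, 0.0, 0.0)
print(apply_tracking(start, *move, dof=6)["position"])  # (0.0, 0.0, 1.0)
```

The 3DoF branch is exactly why a seated, passive experience works fine on a phone headset while room-scale walking does not.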

For your grandfather, I'd actually recommend he try to get a demo of the Vive or Rift on the floor of a PC store. Microsoft and Micro Center stores were both giving demos of them when the Vive debuted. That way you could gauge how much he enjoyed the experience and see if it's something you want to invest in for him in general. Maybe pick up a Cardboard and find 360° skateboarding videos on YouTube just for him to experience it, if you were going to buy anything blind - that would be a ~$20 investment, and for those sports specifically you probably aren't going to find anything better on the high-end systems.

On the contrary, it is the result of a concerted effort to reduce friction.

With SIM cards, users can switch to a new phone by just moving the SIM, or switch to a new provider while keeping their phone (assuming it's unlocked) by just replacing the SIM.

Prior to SIM cards, phones were frequently programmed to be tied to a specific provider.

A pure software solution could work, but requires the network operators to be able to trust the phone manufacturers to secure it well enough to not let end users change things in ways they're not supposed to (e.g. consider a hacker harvesting authentication details from phones). The SIM card is the simple solution.

The actual reason it's still a thing is because changing how thousands of network operators work in over 200 countries is quite difficult to coordinate. Even Apple tried to push a soft-SIM and couldn't get it going.

But I'm glad for it, because the foresight of the designers of GSM to put your private key in a smartcard has absolutely improved consumer choice worldwide. I can buy an unlocked phone, travel to any country, buy a SIM card at the airport and pop it in my phone and the GSM(/UMTS/LTE) standards say it must work.

A software-based system would quickly devolve into "oh, we haven't approved this phone on our network, sorry, we won't activate it" and other anti-consumer behavior you saw on the ESN-registration-based US CDMA networks.

Hopefully when the GSMA adds eSIM to the standard, they add protections for consumer choice, but in the current corporate climate I fear they won't.

SIM: Subscriber Identity Module almost says it all, on top of that a SIM can store your contacts (up to a certain number).

The SIM is what separates your identity from the hardware of the phone (which has its own identity called 'IMEI').

A 'software solution' would still need a carrier for your identity; that carrier IS the SIM.

Another nice benefit of having the SIM device is that it makes it much harder to 'clone' a subscriber ID, something that would regularly happen in the days before the SIM card, note that the SIM was a development that came along with GSM, and that GSM was the first mobile phone standard resistant against cloning. It's one part of the 2FA (something that you have) that gives you access to the phone network (the other being the PIN code (something that you know) required to unlock the SIM).
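The cloning resistance comes from GSM's challenge-response authentication: the network sends a random challenge, and the SIM answers with a response computed from a secret key Ki that never leaves the card. A rough sketch, as a toy model only - HMAC-SHA-256 stands in here for the real A3/A8 algorithms (COMP128, Milenage), and all class and variable names are invented for illustration:

```python
import hashlib
import hmac
import os

class SimCard:
    """Toy SIM: the secret key Ki is sealed inside and never exposed."""

    def __init__(self, imsi, ki):
        self.imsi = imsi   # public subscriber identity
        self._ki = ki      # secret key; real cards never reveal this

    def run_gsm_algorithm(self, rand):
        # Stand-in for A3: derive a short signed response (SRES)
        # from the challenge. Real SIMs use COMP128 or Milenage.
        return hmac.new(self._ki, rand, hashlib.sha256).digest()[:4]

# Network side: the operator keeps its own copy of Ki, keyed by IMSI.
ki = os.urandom(16)
card = SimCard("001010123456789", ki)

rand = os.urandom(16)                          # network's random challenge
sres_from_phone = card.run_gsm_algorithm(rand)  # computed on the card
sres_expected = hmac.new(ki, rand, hashlib.sha256).digest()[:4]
assert sres_from_phone == sres_expected        # subscriber authenticated
```

Since an eavesdropper only ever sees (rand, sres) pairs and never Ki, recording the radio traffic is not enough to clone the subscriber, unlike the pre-GSM analog systems where the identity was broadcast in the clear.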

There are many poor design decisions in the cellphone infrastructure, but the SIM card is probably one of its best pieces.

Broken phone? Pop the SIM card into another phone, and you can immediately make and receive calls & texts on the new phone using your phone number.

If you had no SIM card, how would you authenticate yourself to the cell network (that's what the SIM card does)? Going online and then providing a username/password? This would be horrible security-wise, as we all know people are terrible at picking secure, unique passwords. Hackers could try to guess your password, then use your account, receive your calls & texts, and steal your cell data, causing you to receive large cellphone bills, etc. A total nightmare.

A form of this has existed for a while but never caught on for fairly understandable reasons.

Quite a few years ago (2005?) a family member purchased a Samsung-branded dumbphone on a contract. (Monochrome LCD (something like 128x64?), polyphonic ringtones, 3 fixed games, a (really slow, GSM data) WAP browser; that was it. Model SGH-something, I vaguely recall.)

It had no SIM card slot. It was locked to the network (Orange - in Australia, FWIW) via software. In order to unlock it, we had to call up the telco and go through some process, which we decided not to do in the end (whatever it was, I don't recall). The phone had fewer capabilities than the Nokias that flood India and similar places, so by the time we dug it out one day and tried to figure out what to do with it, we concluded there was no point selling it. (It's still buried in a box somewhere, IIRC.)

I think this is why SIM-less phones are reasonably rare - it's really, really hard to de-contract them, unlock them and put them into sellable (or whatever) condition. Then once you've done that the recipient has to go through some equally arcane process to get the thing linked to a plan/contract too. And considering the ability to pass a phone on is a fairly major selling point - phones aren't solely purchased [preconfigured] on plans, then disposed - I think this was explored somewhat by the industry but ultimately left alone.

Some of the other things I've found in this thread are really interesting, although I wonder how difficult it is to "unconfigure" such a device to sell or pass it on.

You know, if that happens, then flip-phone users will have a hard time, because networks will promote only select high-end phones. A SIM card gives you the freedom of putting it in a $25 or a $640 phone, and it works just fine. People with security, budget and privacy concerns go for flip phones. Just like net neutrality, phone neutrality is a good thing. One should never be forced to purchase a smartphone if he does not want one. A dumb phone works just fine for calling and text messaging. I have never used the internet on my phone and I will never be excited about it (3G, 4G, 5G or anything). I carry my laptop everywhere I go and it serves my needs well.

I must add that you can find flip phones cheaper than the cost of a Lightning cable.

'eSIM' is on the way to replace SIM cards. The biggest challenge of 'downloading a SIM card' to a secure enclave on a phone is, of course, security.

The GSMA and members (i.e. telcos) have been working on secure remote provisioning. I think it'll take a while for the technology to make it in to consumer devices, though it's likely to be used in IoT relatively soon.

It takes a long time to spec these things up collaboratively, and then even longer for telcos to act on it!

Because they hold private keys that are soldered into the chip and can't be retrieved at all. Before SIM cards there was something in the phone that could be easily reprogrammed, and you always had to walk to your carrier's office to "program" your phone. Swapping SIM cards is much easier.

> Feels like this is probably the result of telco networks wanting as much friction as possible to change providers, but is there something more to it?

In 3rd-world countries, people regularly switch their SIMs as they travel across borders, because no one has cross-country access. Taking a SIM out only uses up a minute of your time, and standardizing on a hardware dongle like that is great, because if company A goes out of business, you just grab a new SIM and stick it in.

It's a bit harder in the US, where phones are locked to their providers and you need ID to buy SIMs, but that's really all just a regulation issue, not a technical one.

> Feels like this is probably the result of telco networks wanting as much friction as possible to change providers

No, it is the opposite.

It is done exactly like this so that you only need to get the SIM card, without having the operator decide for you (of course, people shoot themselves in the foot by signing a long-term contract while getting a locked mobile phone).

Personally I really appreciate the fact that providers have SIMs. Verizon (major network in the USA) used to NOT have SIMs, and it was a huge pain to change phones out. Now it's as simple as swapping out the SIM.

I hear you that it should be doable in software, although I'd argue that if anything you should still need the SIM as a sort of second factor. (Otherwise you run the risk of people stealing your phone account remotely).

The SIM card is what securely identifies the owner of a phone number, and makes sure there are not two phones with the same number. With a software SIM, if it is done wrong, you risk getting malware that steals your phone number.

Personally, I think we will eventually see SIM-free, data-only connections without a phone number. You really should be able to buy an LTE tablet, get online and just pay for some data. Apple has been trying a bit with the Apple SIM, but it is US-only and only works with a few operators.

As others have pointed out, SIM cards are basically smart cards. There's PKI, private keys, the ability to perform mutual authentication (although that's not usually done, at least in .us), and much more.

Honestly, I wish their use would expand into other areas of our lives -- replacing username and password combinations for various devices (working for an ISP, home routers are one good example).

As much as I'm against the idea of a mandatory "national ID", I'm convinced that it will happen someday (in .us, where I live). When it does, I believe it'll be something similar to US DoD's CAC [1]: a physical identification card that doubles as a smart card. The private keys stored on the card will allow you to prove your identity to your banks/financial institutions, e-mail account (100% encryption of all e-mails? Yes, please!), and so on.

My 5-year-old phone eventually died at the beginning of October. I put the SIM in my tablet and kept going until I received the new one two days later. A pure software solution would have worked as well, but the SIM is an authentication token. 2FA is all the rage nowadays, and if we went pure software, I bet we'd have to use a separate token anyway.

>Should there not be a software solution that lets you select which network/s the phone should connect to?

If I recall correctly, German ISPs are trying to find a solution there by embedding the SIM into the device and then re-branding it when changing provider.

The problem SIM cards are (trying to) solve is largely to "secure" the phone network. This mostly boils down to whom to send the large bill when shit hits the fan. (The mobile network is pretty much non-secure, which is why SMS 2FA is not a good solution at all.)

(They're also technically a backdoor for your ISP to do whatever they want)

Anyway, the reason SIM cards haven't died yet is probably that there is not much reason to replace them. They're tiny (so Apple won't kill them for half a millimeter of thickness) and pretty useful for the ISP to set up certificates and connection details.

The software equivalent would be a TEE (Trusted Execution Environment), but it relies on hardware support. Only a few ARM processors and a few Android phones support this option. Apple has its Secure Enclave, but you cannot download trusted applications into it; only Apple can do that.

A 100% purely software solution can be built based on white-box encryption. It's slower and may be more easily attacked than hardware protection (you never know if/when some genius mathematician or physicist (quantum cryptographic attacks) will break your encryption), but it has the advantage that it can run on all devices. See e.g. https://www.trustonic.com/solutions/trustonic-hybrid-protect...

Then, of course, there's the problem of key management and distribution through software. Using a physical token has several good security properties. Replicating them in software (encryption) is difficult and error-prone. For end users and service providers, it's much easier to swap a SIM card than to securely install cryptographic keys and authentication tokens into a trusted execution environment, even with the help of well-written software.

SIM cards make it easy to change phones, by moving the SIM card to a new phone. CDMA phones make this hard, and sometimes impossible. They also make it a little easier to change carriers, since you can just switch the SIM card. It'd be even easier to switch if phones had that functionality built-in, so you could sign up for a new carrier and switch entirely via the phone, but in that case I think you'd find that carriers frequently broke that functionality.

Can someone explain the appeal of so-called "slim SIMs"? As I understand it, this allows you to load two accounts on a single device? And carriers don't like this aspect, or is it a security concern on their part?

It amuses me that these slim-SIMs, and SIM cards in general, are one of the few pieces of technology that are utterly opaque to the user and yet are so widespread.

Edit: For example, I recently upgraded to an iPhone 7, at the Apple store. This required a new SIM card, but the salesperson was very careful to return the old SIM card to me. Why? What am I supposed to do with this old SIM card?

>Feels like this is probably the result of telco networks wanting as much friction as possible to change providers

I don't understand how you came to this conclusion.

I move between networks very regularly due to frequent travel to different countries. Pulling out your old sim card and putting in a new sim takes maybe 2 minutes. You are then immediately off your old network and on the new network. Once you have the sim in your possession you don't need to talk to anyone, fill in any details, log into anything or even remember anything.

Short of some process that is 100% automatic I can't imagine a more low friction process.

There is actually an eSIM (embedded SIM) specification (http://youtu.be/mLouo2mYjAU) that was released quite a while ago by the GSMA, and it's mostly up to the device manufacturers and carriers to implement it now.

It lets you virtually subscribe to a network, so for example if you're traveling, you don't need a local card; just pop open some software and choose a new network.

At least you can change the SIM, and unlocked phones can be used all around the world; I can easily swap the SIM card. Why change it? It works great as intended, and any software-service solution would mean a middleman is in the game - that would suck, right? (Unless you want to be the middleman.)

The concept of SIM cards will slowly fade over time as M2M/IoT devices start to emerge as consumer-oriented products; devices will become more oriented around "SoftSIMs" and other embedded or virtual SIM products. The ability for IoT products to move across multiple networks will become a big aspect of the IoT - you need full redundancy and reliability when your product can never be offline.

Why would I want a SIM card with one IMSI on it when I can have a SIM card with up to 20 IMSIs from various networks all around the world, or even better, the ability to constantly swap and trade IMSIs from various networks, with a new connectivity set every day? A global community calls for global connectivity.

Because of a power struggle between OS vendors, hardware makers and telcos. The SIM provides a neutral way for them to coexist. Also, this decouples a lot of certification: a SIM and a phone are easier to work with than a phone+SIM combo.

So clearly the only thing stopping the industry is the telcos, who would very much like to make it as difficult as humanly possible for you to switch carriers. Especially in the US, where there is a lot of competition and hence high churn.

After Apple "broke the back" of the telco monopoly with its 2007 5-year deal with AT&T [0], it's been a slow progression in North America to European-style subscriber-owned phones that are compatible across most networks.

I, and many others, were surprised at that deal because, up to that point, people had essentially carrier-owned phones and long contracts that locked subs (subscribers) to their network. This deal would allow people to install any software from the App Store without telco approval.

Telcos see the SIM card as their last beachhead. They are looking for at least two revenue streams from this NFC SE (Secure Element) [1] real estate:

Carriers and issuers (the bank that issues your credit card) are now fighting over that potential revenue stream (spoiler: it's tiny), while Apple has gone and deployed it with the Apple Watch et al. and is taking a cut of the transaction fee. The transaction fee, in contrast, is a huge stream; however, one can imagine the fun of negotiating a contract between all the parties involved (likely all multibillion-dollar companies with teams of lawyers).

Apple had tried to push a software SIM (containing a SE) but the carriers, from their POV, rightly and vigorously fought and will continue to fight against that[2]. Google is also trying with Android Wallet/Pay/...

I suspect Apple will eventually use the same "wedge" approach with one of the US carriers and the others will fall in line.

1) SIMs are a bit harder to tamper with than the OS of a phone, which I assume would be the alternative to a SIM card, i.e. storing the same information on NAND flash accessible to the OS. SIMs have some threshold (it used to be 3) of unsuccessful PIN attempts; after that, a lock is activated and the card can only be unlocked by entering the unlock code.

2) Carriers can talk directly to the SIM. A "SIM" is basically a Java applet that runs on a UICC (Universal Integrated Circuit Card - the smart card itself). I think a lot of people don't know that SIMs run Java - well, Java Card. This means carriers can remotely lock a SIM card to prevent it from further accessing their network. If someone stole my phone, or even just my SIM card, I could call my carrier and they could lock the SIM remotely, and later unlock it. They can also use this channel to push new PRLs (preferred roaming lists) to the SIM. This is generally called OTA, or over-the-air provisioning.

3) Convenience: if I use a pre-paid service with an MVNO, or travel to another country and buy a pre-paid SIM while on holiday, I don't need to do anything else except insert the new SIM and power on the phone. What would the non-SIM-card alternative look like? It's hard to imagine it being easier.

4) Carrier-locked phones, such as what you get when you are under contract to a carrier. The way phones are locked is by having the phone only accept SIMs from the carrier's network. An unlocked phone will accept a SIM from any carrier's network.
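The PIN lockout from point 1 can be sketched as a toy state machine. This is illustrative only (real cards enforce this inside the smart-card OS, and the PUK has its own, larger retry counter); all names are invented:

```python
class SimPinLock:
    """Toy PIN lock: after MAX_TRIES wrong PINs, the card blocks itself
    and only the PUK (unlock code) can reset it."""

    MAX_TRIES = 3

    def __init__(self, pin, puk):
        self._pin, self._puk = pin, puk
        self.tries_left = self.MAX_TRIES
        self.blocked = False

    def verify_pin(self, pin):
        if self.blocked:
            return False           # card refuses all PINs once blocked
        if pin == self._pin:
            self.tries_left = self.MAX_TRIES
            return True
        self.tries_left -= 1
        if self.tries_left == 0:
            self.blocked = True    # threshold reached: lock activates
        return False

    def unblock(self, puk, new_pin):
        if puk == self._puk:
            self._pin = new_pin
            self.tries_left = self.MAX_TRIES
            self.blocked = False
            return True
        return False

sim = SimPinLock(pin="1234", puk="87654321")
for guess in ("0000", "1111", "2222"):  # three wrong attempts
    sim.verify_pin(guess)
assert sim.blocked                       # card is now locked
assert sim.unblock("87654321", "4321")   # PUK resets it with a new PIN
assert sim.verify_pin("4321")
```

The key property is that the counter lives on the card itself, so an attacker can't brute-force the PIN by resetting the phone.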

If anyone is interested this DEFCON presentation - "The Secret Life of SIM Cards", is pretty interesting:

Soft-SIM makes it trivial to sign up for new mobile plans. This doesn't matter much domestically (maybe it does for multi-SIM or cart abandonment), but it does internationally because of high roaming fees, which are a revenue stream carriers don't want to give up.

1) Security: telco laws these days often require registration of accounts to your personal ID (i.e. no anonymous usage any more). How would a pure soft-SIM be able to fetch the data from the network?

2) Flexibility: SIM is pretty much standardized. This means a newcomer MVNO just has to issue SIM cards and the customer can use any kind of phone (or other interface, like a modem, a 2G/3G shield, ...) to use the network. And if a device breaks, then the SIM card usually stays intact and can be placed in a new device. Not sure how to securely do this with a soft-SIM.

I have witnessed several enterprises move from 100% email to 90% Slack and alternatives while using email primarily for scheduling purposes. I have a feeling corporate email will slowly die off over time.

Perhaps using a community messaging tool with built-in end-to-end Signal encryption will be the way to secure lines of communication in the near future.

Because we better understand the threat vectors imposed on the company by sloppy IT practices, and as such are more willing to take security measures to prevent these things from happening.

We are also, at the same time, too stupid to realize that not everyone wants 5 applications just to encrypt their mail with a PGP key. When we make it so that, by logging into a service with a password, your browser can derive a private key and public key and use them to sign and send email, we will see larger adoption. This will only be the case if it's automatic.

As a side question, can I ask how much experience you have other than your work at said startup? Within a year I'm going to have to make the choice between going the tradition route in tech or joining a high-frequency prop shop. The starting offer seems to be around 130-150k base with 20-80k bonus at the end of the first year (which is implied to scale heavily after the first year, unless things go really poorly).

To be candid, I have no idea what it feels like to be blind and have never paid much attention to accessibility other than reading a tutorial or two and making sure I use alt tags on my images. The main reason for that is that I'm lazy and based on my experience, most developers are in the same boat.

Now, if there was a service which would spin up a remote VM session inside my browser (a bit like BrowserStack or SauceLabs do) with all screen reader software setup and no screen (only audio), it'd make it a lot easier for me to experience my software as a blind user. There should probably also be a walkthrough to help new users use the screen reader and help them get started. If you're lucky, you could turn this into a business and it could indirectly help you achieve your goal of making better software for the blind by exposing more of us to your issues.

Anyways, I know you probably have more pressing issues to solve and I hope I didn't come across as arrogant, just throwing the idea out there.

I'm also a blind software developer. I scrape by building apps[0] and services[1] for other blind people, and running the occasional crowdfunding[2] campaign.

First off, you're 100% correct when you talk about how devtools are inaccessible. This problem is a historic one, stretching back as far as early versions of Visual Studio and other early IDEs on Windows. Basically, the people who build the tools make the tools for themselves, and not being blind, they make tooling that is inaccessible by default.

I do most of my work on Windows, using the NVDA screen reader, and consequently I have the ability to write or use add-ons for my screen reader to help with a variety of tasks[3]. This being said, this always means more work for equal access, if access is even possible.

I'm interested in any sort of collaborative effort you might propose. Targeting accessibility issues in common devtools does seem to me like a reasonable place to start attacking this problem. I had read a few months ago that Marco Zehe, a blind developer at Mozilla, was pushing some work forward for the Firefox devtools [4], but I haven't heard much about that recently, and I think they might be distracted trying to get a11y and e10s working together.

Basically, I'm interested in helping in any way you might suggest, and from the thread here it looks like there are some enthusiastic people at least willing to listen. My email is in my profile; let's make something awesome.

I am sighted myself but I work with a company called Bristol Braille Technology and we are trying to make an affordable multi-line Braille e-book reader.

If you have an interest in Braille and have software development skills there might be something to do there. The UI program that drives our prototypes is open source and available on GitHub. https://github.com/Bristol-Braille/canute-ui

We have plans to open source the hardware as well.

If you want to add support at a lower level, our current USB protocol is outlined in this repository. It is a dev-kit I knocked together to allow some Google engineers to write drivers for BRLTTY (and thus for ChromeOS): https://github.com/Bristol-Braille/canute-dev-kit

I am an adjunct professor in a CS department. I usually end up with introductory level courses, often for non-majors. This semester I have a visually impaired student in an introductory Java course who is unable to see the screen. He uses JAWS as his primary screen reader. To my great surprise, most of the tools we typically use were completely inaccessible to screen readers. I spent the first several weeks of the semester scrambling to find a reasonable set of tools that would work for him. We settled on Notepad++ and the terminal. Also, I provide him with special versions of the slide decks, readings, assignments, quizzes and exams.

I would be very interested to learn how visually impaired developers such as yourself and others got started, and for any suggestions for how I can make my student's experience more positive.

Hi, I'm also a blind dev - successfully been developing back-end systems and libraries at Microsoft for over a decade. There are certainly accessibility problems, but the awesome thing about being a dev is that you can also make your own solutions. Look at T V Raman at Google, and Emacspeak - which whilst not everyone's cup of tea, certainly serves him well.

For any developer, it's important to practice your craft, and when looking for a job, it's valuable to have a portfolio of work you've contributed to. So you can get multiple benefits by helping create a tool which will make you more productive and also show your skill.

Clearly, this project should be something that you're passionate about, but one project I've had on my when-I-have-time list is below - I would be happy to work with others who are interested (@blinddev @ctoth @jareds).

After your text editor / IDE, one of the next most important tools is a tool for tracking bugs/tasks. Unfortunately, many of the common ones, like VSTS, Jira, and Trello, are either not accessible, or at least not productively usable with a screen reader.

Over my career I've developed my own scripts for working with such systems, but it would be good to have something that others can also benefit from. I should probably put my initial bits on Github, but time is currently consumed by other projects. Email me if this interests you. Also happy to mentor on general career challenges around being a blind software engineer.

Low-vision programmer here. I've made a few tools that make my own programming easier, like a lightweight version of Emacspeak (https://github.com/smythp/eloud - now in MELPA), and I just gave a talk on blind hackers and our tradition of bootstrapping: https://www.youtube.com/watch?v=W8_O3joo4aU Would be happy to help out with a project; email at my name + 01 + @ + gmail.com.

I'm a totally blind developer and have some of the same issues you do. As far as Chrome dev tools go, I've given up on doing any kind of UI work, partially because of accessibility and partially because it does not interest me. My current job is working on a large Java web app. Luckily my company is understanding when it comes to UI, so I don't do much of that work, but I do a lot of API and database work. APIs can be tested using curl, and database stuff can be done from a command line. The advantage to working on the app is that if accessibility gets completely broken, it's discovered early and made to work well enough.

We use Eclipse as an IDE, and it works pretty well with JAWS. I've used IntelliJ a bit and it's what I'd call barely usable. I am hoping it will continue to improve; the impetus for adding accessibility appears to have been Google switching from Eclipse to IntelliJ when coming out with Android Studio. Hopefully Google will continue to ensure accessibility improves.

As far as JIRA goes, I agree with you. I'd really like to hear from Atlassian why they can't display dashboards and issue lists using tables to provide any kind of semantic information. I've found your best bet with JIRA is to have someone sighted help set up filters that display what you need; export the results of the filter to Excel and you can browse a lot of issues quite quickly. I haven't used GitLab, but I find GitHub fairly easy to use in the limited experience I have with it.

I'm not particularly interested in building tools from scratch, since I don't have a lot of free time, but I would be interested in trying anything that comes out of this.

I am working on a project that parses an image and synthesizes an audio representation retaining all the information of the source image ... next step is to parse live video to enable people to hear what others see ... shoot me an email as well ... it's not specific to dev tools, but could parlay into a general enabler ... I am using a Hilbert curve ... nice intro video at https://www.youtube.com/watch?v=DuiryHHTrjU

I am not blind nor a programmer. But I do have serious eyesight problems and other handicaps. I also have moderating experience. Given the number of people saying "Shoot me an email" I have gone ahead and set up an email list via Google Groups called Blind Dev Works. If anyone wants to use that as a collaboration space, shoot me an email (talithamichele at gmail dot com) and I will send you an invitation.

I think you are smart to consider your developer skills as a separate thing to improve. One way to objectively measure this might be to explain a technical concept to someone.

For example, could you read this article and then give an overview of the main issues of web site performance? Could you then come up with one recommendation for a performance improvement in a code base you're familiar with? Could you justify in practical terms why your recommendation was the best bang for the buck vs. other possible improvements? https://medium.baqend.com/building-a-shop-with-sub-second-pa...

Now, how do you judge yourself?

1) Have the conversation with a dev whose skills and opinion you trust.

2) Record your answers on audio, and ask someone on HN to give you fair and constructive feedback. Many here would be glad to do this (feel free to ping me as well).

Great initiative! Are you considering setting up a blog / github page / anything to keep track and coordinate the effort?

I'm asking because though I'd love to help I know I won't be able to commit to it full-time. So it would be great to be able to follow up and get an idea of where the project is going, what areas it is tackling, etc.

Also, maybe a "Show HN" could help spread whatever you set up to a wider audience.

Out of curiosity, what operating system and screen reader(s) are you using?

As a partially sighted developer, I generally use a screen reader for web browsing and email, but read the screen visually for my actual programming work. So I don't have significant first-hand experience with the accessibility (or lack thereof) of development tools. But some of my totally blind programmer friends have expressed some frustration with the accessibility of some tools, especially web applications. They generally use Windows with NVDA (http://www.nvaccess.org/). At least with NVDA, you can write add-ons (in Python) to help with specific applications and tasks.

Any chance you could start with an education component? I think most of us don't really know the nuances of a blind developer's workflow, especially which tooling breaks down where and if there is anything that is infeasible.

I am a completely blind developer and have been working on and off with code for about 20 years. When I started I was able to see well enough to work without the assistance of a magnifier or screen-reader, now I rely completely on VoiceOver and JAWS.

I too find frustration with some of the tools with which I work. Although they may slow me down, they seldom create complete barriers. Most of my work at this point in time is with PHP and Javascript, so this may help the situation, I am less familiar with the current state of affairs of the accessibility of developing with other languages.

All of the complaining I do about JIRA aside, I do find it to be a reasonably usable tool for what I need (page load times annoy me far more than accessibility issues). There are some tasks that I cannot complete (reordering backlog items), but I collaborate with team members, which can help us all to have better context about the rationale for changes.

GitLab I find quite poorly accessible, but thankfully it is just a UI on top of an otherwise excellent tool (git). I find that the same trick that works for evaluating GitHub PRs works with GitLab MRs: if you put .diff after the URL of a PR or MR, you get the raw output of the diff between the branches being compared.
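The trick above amounts to a one-line URL rewrite. A small sketch (the repository URLs are hypothetical, just to show the shape of the result):

```python
# Appending ".diff" to a pull/merge request URL yields the raw unified
# diff as plain text, which is much friendlier to a screen reader than
# the web UI. The URLs below are hypothetical examples.
def raw_diff_url(url):
    """Return the plain-text diff URL for a GitHub PR or GitLab MR URL."""
    return url.rstrip("/") + ".diff"

print(raw_diff_url("https://github.com/example/project/pull/42"))
print(raw_diff_url("https://gitlab.com/example/project/-/merge_requests/7"))
```

You can then fetch the resulting URL with something like `curl -sL` and read the diff in your editor of choice.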

Debuggers are definitely my biggest current pain point. I tend to use MacGDBp for PHP. This is quite reasonably accessible. It allows me to step through code, to see the value of all variables, and to understand the file / line number being executed. It isn't possible to see the exact line of code, so I need to have my code editor open and to track along.

I haven't found a very accessible Javascript debugger. For Javascript and DOM debugging I still find myself using Firebug. I use lots of console.log() statements, and would rather be able to set breakpoints and step through code execution. That being said, other than "does this look right?", I find there is little that being blind prevents me from doing with Javascript. As recently as last night I was squashing bugs in a React app that I am helping to build for one of my company's customers.

I'd be happy to learn more about any projects you take on to improve web application development tools and practices for persons with disabilities. Feel free to reach out on LinkedIn if you would like to talk.

Currently, I'm launching an app that reads Slack messages out loud - Team Parrot, http://teamparrot.artpi.net/. Once launched, it will be open source (built in React Native). If you think it's useful, I will welcome contributions.

I am not blind, but I designed it to operate without looking at the screen. If the app takes off, I'm considering forking/pivoting into an RSS reader that also doesn't use the screen. The app is already accepted in the App Store; I'm sorting out launch details.

Please accept my deepest apologies for the shitty job we (the developers) are doing at providing interfaces for the vision impaired.

Probably when we're all old, we'll have vision problems of our own :).

Would it be helpful for a news site or blog to call out software that won't take easy steps to improve accessibility?

My sight issues are not comparable to being blind, but as an example, I've asked Pandora for simple accessibility improvements for years and they never take action. Have even offered to write (less than a page) the code for them.

Would they (and software tool vendors) feel the same way if this were highlighted on a high traffic web page?

Legally blind developer here. I still have some vision in one eye and I make extensive use of it, primarily coding with screen magnification and some spoken text for select code (all using OS X). I've had good success in my career, but I will say I've at times had to work a lot harder to get the same results as fully sighted coworkers.

I'm mostly responding to encourage you to keep at it, and if you haven't tried Mac OS, maybe give it a whirl. Apple is pretty good about accessibility and their accessibility team is very good at accepting and acting upon feedback.

I'm not blind but I have very poor eyesight in my left eye which makes reading tiring so I started this experimental Morse-based system https://github.com/Hendekagon/MorseCoder for the Apple trackpad. It's not very successful. What I really want is a fully haptic dynamically-textured surface.

Just in case you don't know this already: there is a LaTeX package for braille. You can write a .tex file in English, put \usepackage{braille} in the preamble, and your output will be translated to braille automatically. The PDF can then be raise-printed with the appropriate hardware. You could find it useful for documenting your software (tutorials, FAQs, manuals, etc.) in both languages, even if your collaborators don't know a single word of braille.

You will need to have the package texlive-fonts-extra installed.
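For anyone who wants to try it, a minimal document might look like the sketch below (the \braille macro name is my assumption based on the CTAN package; check the documentation of the version you have installed):

```latex
\documentclass{article}
\usepackage{braille}  % requires texlive-fonts-extra

\begin{document}
% English source text, typeset as braille cells:
\braille{Hello, world!}
\end{document}
```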

You might also want to contact the maintainers of brltty, cl-brlapi, ebook-speaker, or brailleutils.

I have a tool I'm working on, built as an extension on top of Atom, that is specifically geared towards assisting the transcription of printed books to braille for The Clovernook Center for the Blind [1].

A super-rudimentary basic version is something I'll finish when I have the time in the coming months. I was hoping to get some interest from the blind community and gather ideas for further OSS work in that general space (editors).

Not a blind dev, but I would love to help the community as much as possible, though I don't even know where to start :( It would be really helpful to have a centralized place which directs developers' effort to valuable open source projects.

Another interesting idea: try using a braille screen ourselves, so we as devs will be able to work in complete darkness without any light :)

I am not blind but might have the chance to offer you a dev job (related to blindness). Here's the product we are working on: http://horus.tech. Just send me an email at saverio at horus dot tech if interested.

I'm a sighted student with an upcoming six-week block of time for an out-of-school project. I have previous experience developing accessible software and would love to work with you. If you're interested, shoot me an email at eliaslit17@gmail.com

I know you're asking for collaborators and not recommendations for tools, but since you were mentioning Chrome Devtools I wanted to make you aware of kite.el [1] which I believe TV Raman had working with Emacspeak [2] at one point. kite.el is unmaintained, but it might make for a good starting point?

I'd be interested in discussing this with you further offline. I'm not blind, but definitely interested in exploring ways to help. If you add some information to your profile about how to reach you I'll reach out.

I'm part of the Blockly team (https://developers.google.com/blockly/), an open source project for visual drag-and-drop programming, usually targeting kids. Despite being a "visual programming editor" first, we are exploring blind accessible (i.e., screen reader ready) variants of our library.

Right now, it is effectively a different renderer for the same abstract syntax tree. We'd love to see people evaluate the direction we are currently going, and possibly apply the same accessible navigation to our existing renderer.

Granted, Blockly programming is far from being as powerful as other languages. It is aimed at novice programmers, whether for casual use or to teach the fundamentals of computational thinking. You can write an app in Blockly (http://appinventor.mit.edu/).

There's a guy called Octavian Rasnita, a blind Perl developer I've met before. We basically never agreed on anything, but he always disagreed with me intelligently. You might be able to contact him by emailing user teddy on domain cpan.org. No idea if he's a good person to talk to or not, tbh, but I've always found him worth arguing with.

If you do contact him please blame me so he can shout at me, not you, if I made the wrong guess here.

I'm making web apps and mobile apps and I'd like to learn how to make my apps more accessible. Is there a community where I can ask for assistance with accessibility testing? I'd be willing to pay for testing. Thank you kindly. Nanch.

I'm a programming language developer, and I'm interested in developing languages and programming environments for the blind and visually impaired, or at least in making existing languages more usable. Feel free to get in touch.

If you develop frontend stuff, please get in touch at pavelkaroukin@gmail.com. The company I work for develops a single-page app for higher ed, and we constantly struggle to find proper practices for making the UI accessible to people with poor sight. Who knows, maybe it will be a perfect opportunity for both you and my company.

Sighted person here. I'm very interested in this question. Most developers I know are not considering accessibility as part of the intrinsic design of an app. Blind people are more keenly aware of this problem, unfortunately, because it affects them a little more directly. Accessibility to screen reading clients is considered a "good to have," nonessential, an optimization. And yet when you ask the same people if search engine optimization is considered a "good to have," many will laugh and say no, that it is a necessity, if for no other reason than their clients demand it.

Clients want sites that implement current SEO best practices. What sort of best practices are those? A Yoast SEO plugin, maybe. Developers often mention the URL structure of the site itself, saying it's "clean." This might be appreciated by future admins of the site, but it's unrelated to the goal of making pages that can be scraped.

It surprises me that developers and SEOs overlook the difficulty of scraping the web. Keyword density does very little to help a page that cannot easily be serialized to a database. It's true that machines have come a long way. Google sees text loaded into the DOM dynamically, for example. But its algorithms remain deeply skeptical of (or maybe just confused by) pages I've made that make a lot of hot changes to the DOM.

And why wouldn't it be? I ask myself how I would cope with a succession of before and after states, identify conflicts, and merge all those objects into a cached image. Badly, sure. At this point, summarizing what the page "says" is no longer a deterministic function of the static page. Perhaps machine learning algorithms of the future will more and more resemble riveting court dramas where various mappings are proposed, compared to various procedural precedents, and rejected until a verdict is reached.

I wasn't very good at SEO. I found web scrapers completely fascinating, and I spent way more time reading white papers on Google Research and trying to build a homemade facsimile of Google. Come to think of it, I did very little actual work. But I took away a lot of useful lessons that have served me well as a developer.

I realized, for example, how many great websites there are that are utterly inaccessible to the visually impaired. With very few exceptions, these sites inhabit this sort of "gray web," unobservable to the vast majority of the world's eyeballs. The difficulty of crawling the web isn't simply related to the difficulty of summarizing a rich, interactive, content experience. They are instances of the same problem. If I really wanted to know how my site's SEO stacks up against the competition, I would not hire an SEO to tell me, I would hire a blind person.

FYI, I sent courtesy invitations to nine people who said in this discussion "shoot me an email." One email address provided here was invalid. One or two other people who said "email in profile" did not have an email in their profile. If you want an invitation, contact me (talithamichele at gmail etc).

I'm a blind programmer. I'm currently working on the Rust compiler [0] and a large library for 3D audio that is essentially desktop WebAudio [1]. I'm the kind of person who people often ask for help with their college classes because I went through everything without trouble and came out of college with a 3.9 GPA, and the only reason I'm not making significant amounts of money at the moment is that I have other health problems that I won't go into here (but I would trade with someone who is only blind in a heartbeat). I think I am qualified to say that this is a bad idea.

Firstly, just offhand, the following stacks should be fully accessible with current tools: Node.js, Rust, Python, truly cross-platform C++, Java, Scala, Ruby, PHP, Haskell, and Erlang. If you use any of these, you can work completely from a terminal, access servers via SSH through Cygwin or Bash for Windows, and do editing via an SFTP client (WinSCP works reasonably, at least with NVDA). Notepad++ also makes a perfectly adequate editor, again with NVDA; I'm not sure about JAWS if you're using that.

GitHub has a command line tool called hub that can be used to do some things, and is otherwise pretty accessible. Not perfect, but certainly usable enough that NVDA (one of the most popular screen readers) uses it now. Many other project management systems have command line tools as well. If you write alternatives to project management tools, you will have to convince your employer to use them. Replacing these makes you less employable. You need to work to make them accessible, perhaps by getting a job on an accessibility team.

The stacks you are locked out of--primarily anything Microsoft and anything iOS--can only be fixed with collaboration from the companies backing them. Writing a wrapper or alternative to msbuild that can let you do a UWP app without using the GUI is not feasible. I have looked into this. Doing this for Xcode is even worse, because Xcode is a complicated monster that doesn't bother to document anything--Microsoft doesn't document much, but at least gives you some.

I imagine this is not what you want to hear, but separating all the blind people into a corner and requiring custom tools for everything will just put us all out of work. If you're successful, none of the mainstream stuff that cares even a little right now will continue to do so, and you'll end up working on blind-person-only teams at blind-person-only companies.

For whoever doesn't know what Enigmail is... it's a security add-on for Mozilla Thunderbird. It allows you to use OpenPGP to encrypt and digitally sign your emails and to decrypt and verify the messages received.

I didn't realize this was just a discussion thread here, and I opened two tabs like I usually do - one for the main link and one for the HN thread. In this case I got two tabs with the same HN thread. :)

The problem with asking this question here is two-fold. First, you are potentially admitting that you think you may be violating that patent. Just because a site is not working does not mean they are not a patent troll.

More importantly, you will get a mix of answers, if any. Patent law is complicated and, in most cases, not a simple yes you are violating or no you are not violating answer. The responses you get by asking that question in a public forum will fall along that line. Some people may think you are and others will not. In the end, you still don't have the answer to your question and are most likely more confused than when you started :)

Finally, just looking at your site: not sure if you are aware, but I know of several patents being utilized for similar technology. Check out http://zugara.com/virtual-dressing-room-technology; on their site they list the patents they are using.

I bought a Rosewill mechanical keyboard[1] (Cherry MX) about a year ago. People claim it's the best bang for your buck.

A few weeks ago, I spilled tea on it. While it was drying, I switched back to my Logitech K310 washable keyboard[2]. My enjoyment and productivity on the K310 are exactly the same, and I won't be switching back to the mechanical keyboard because it's not waterproof.

I've had the original Das Keyboard for 7 or so years and it works just as well as when it was new. I use it heavily, too. I tend to beat on a keyboard (literally) a few times a week, along with the usual 40+ hours of normal use per week.

When I take notes (usually in class, as I'm still sadly a uni student), I mainly take them on my computer, using a markdown-like syntax in a tool I created that stores everything in a built-in PostgreSQL database (which syncs itself with my server every time I connect the laptop to the internet). Every note is identified by a unique identifier like "<day><month><year><document number><document type><category>"; for example, my latest economics course was labeled "0710161CECO", making it kinda weird to others but crystal clear to me. Since I take notes on paper too (due to professors forbidding the use of a laptop in class, which is completely stupid as these classes are about Linux and C development), I write the code on the top right corner of the paper, usually with a small cat drawn next to it (as I love cats).
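An identifier scheme like that is easy to parse programmatically. A small sketch, with the field widths being my assumption inferred from the single example "0710161CECO" (two digits each for day/month/year, one digit for the document number, one letter for the type, and the rest as the category):

```python
import re

def parse_note_id(note_id):
    """Parse a <day><month><year><number><type><category> note identifier.

    Field widths are assumptions based on the example "0710161CECO":
    07 Oct '16, document 1, type "C", category "ECO".
    """
    m = re.fullmatch(r"(\d{2})(\d{2})(\d{2})(\d)([A-Z])([A-Z]+)", note_id)
    if not m:
        raise ValueError(f"unrecognized note id: {note_id}")
    day, month, year, number, doc_type, category = m.groups()
    return {
        "day": int(day),
        "month": int(month),
        "year": 2000 + int(year),  # assumes 21st-century dates
        "number": int(number),
        "type": doc_type,
        "category": category,
    }

print(parse_note_id("0710161CECO"))
```

With a parser like this, the same identifiers written on paper can be typed back in and matched against the database.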

I take a lot of notes during lectures - just very simply on A4 lined paper that I file at the end of the class. For personal notes I use a small notebook, but I don't have any special system for it. When it's full it's full...

I'm not a contributor but I've worked with core contributors to major projects. This is usually how I see things go down:

1) You work on an open source project and an altruistic company hires you to keep working on it. This is ideal, and I've only ever seen it once (Sendmail hired a couple of core contributors to keep making Sendmail awesome back in the 90s).

2) You work on an open source project, people see the work because they use the project, and then offer you a job to keep working on the project, but slowly over time you are working less on things that are great for the community and more on things that are great for your company. I've seen this a lot.

3) You get hired by a company that uses a big project, and they ask you to start making modifications that are useful for the company. It turns out what you did was useful for everyone so you contribute it back. Sometimes it turns out to be a huge win and so you keep working on it. I saw this with Cassandra and some of the folks at Netflix.

4) You create a cool project and your company lets you open source it. It becomes well known, and then other companies want to hire you for either 1, 2 or 3. I saw this a couple of times where people left Netflix to go to Facebook or Google to continue work on an OSS project.

If you work on Chromium or Firefox, you'll pretty much be limited to Google or the Mozilla foundation (with some exceptions). So if you want to do it to learn some great code but don't have a particular project that you love, I'd suggest one of the more infrastructure projects that are widely deployed if you want to increase potential job prospects.

In summary: There are lots of ways to get paid to write OSS, but you may not like them all.

Once my OSS project became popular, I started a business and switched to an open core model. Businesses buy additional features, I get recurring revenue so that I am paid to maintain the OSS and commercial parts.

And to answer the inevitable question: many of my paid features are also available as 3rd party OSS plugins. Many companies prefer to pay for the commercial version so they know the features will all work well together and be supported years from now.

One person who has spent a lot of time looking at and thinking about this problem is Nadia Eghbal. She has a repository called "Lemonade Stand"[1], which is a resource that lists a number of ways to fund open source development, and she wrote a paper on the topic of "digital infrastructure" being built on top of open source projects[2]. She also co-hosts a podcast called Request for Commits[3].

Another person worth looking at would be Eric Holscher, whose Twitter feed frequently has interesting insights into running an open source project as your full-time job[4].

The best bet if you want to do open source full-time would be to work at a company like GitLab[5] or Sentry[6], but that does restrict the exact kinds of open source work you can do (at least during working hours).

Some projects choose to use Gratipay. If you take a look at Gratipay's website, their goal is to provide voluntary payments (and eventually a payroll system) to contributors for open work. Any team/project can apply to join Gratipay, but the main stipulation is to have a "public issue tracker with documentation for self-onboarding, and be willing to use our payroll feature."

Previously Gratipay was Gittip, which worked much like Patreon: essentially a donation or tip system.

There's still some work to be done, but I've been following this project for a while. I'm working full time on other stuff now, but I keep up with their updates, and Chad (the founder) is a great dude.

(Nearly) all our work (github.com/hammerlab) is OSS. We're hiring experts in ML if you're interested in working on big genomic data in the field of cancer immunotherapy. We're a lab (academic not-for-profit, part of Mount Sinai's medical school, run off foundation, grant, and gift money) of software engineers from industry and academia in NYC trying to make research better, and cure some cancer while we're at it (running some clinical trials).

Yep, I get paid to work on QEMU. I would suggest that your chances of getting paid to work on something, and what kind of work that turns out to be, depend quite a bit on the project. You can have a look at the git commit history or the mailing lists: if the project is really mostly worked on by a single company (as I suspect may be the case with Mozilla and Chromium), then the only paid employment prospects are likely to be with that company. A niche project might not have any opportunities for paid work at all. At the other extreme, the Linux kernel has a huge range of corporate contributions of various kinds (as well as a lot of work that's purely downstream), and you have better chances of finding one that does the kind of work you might want to do.

Incidentally, previous experience with the specific codebase isn't necessarily a requirement to get a job working on a project: if you have general experience in the field and can work with an open source community then these both transfer over (this is how I got into working on QEMU). Learning a new codebase is something that you typically have to do when you start a new job in the closed source world, after all...

This is not the answer you're looking for, but technically it is a valid answer to your question.

I'm a software engineer at Google, where I've contributed to Google Servlet Engine, Omaha (https://github.com/google/omaha), Firefox, Chromium, and Android, among other open-source projects.

Some of these are closed-source projects that were later open-sourced, some are developed in the open, and some are run as a hybrid between the two. I also develop random crap on the weekends, and Google gives us wide latitude to open-source that work if we want.

I recognize you're asking whether one can start with open-source contributions and eventually receive compensation for it. I'm answering that in my case I am compensated for a job that happens to involve lots of open-source contributions, which is the same end result but starting from a different place.

You gave the example of Firefox. In fact, the Mozilla organization, which manages Firefox, has plenty of paid employees. See https://careers.mozilla.org/. The same is true for some other major open-source projects, such as the Linux Foundation or Let's Encrypt. Being an employee at one of these organizations means you are literally being paid to contribute to open source projects.

I've gotten hired at my last 3 jobs primarily due to open source contributions I've done in my free time for fun. Instead of focusing on doing the code for money, focus on doing it for the challenge, the fun, and most importantly the networking.

I've been hired by a friend to co-develop the EQCSS JS library ( http://elementqueries.com ), and everything turned out very well :)

So I'm here to tell you that friends or family aren't always the worst possible clients. If you both know what you're talking about, are well organized, and precisely define the price for each task, it can be a great experience.

Today the project has 900 stars on GitHub and a lengthy Smashing Magazine page :)

If you're interested in academia or scientific research, there is scope to work on open source projects there. My full time job is working on a particle physics data analysis program, which is entirely open sourced. You won't necessarily have to do a scientific project either - other people who work at the same research facility work on configuration management systems, or databases.

Note that this doesn't have to involve doing a PhD or actually being an academic - it's more a providing the tools that enable academics to do successful research kind of thing.

- I grew up in a family without a lot of money. By using open-source software growing up, I got to learn a lot of different aspects of digital production and software development, and to try software I never would have had the opportunity to try if it hadn't been open source. This was incredibly formative in shaping my skill set, so I have a lot of past open-source contributors to thank for where I find myself today.

- I believe businesses have a responsibility to the community in which they operate and where their employees live. This is corporate stewardship: for a big business, maybe they invest in a local school or sponsor kids' sports teams or a summer camp. I'm a freelancer, so I won't be sponsoring any sports teams, but I feel it's important for my 1-man business to give back in a 1-man-sized way!

So with those two things in mind (a desire to give back to open source, and a desire to help the community I come from), I have tried to find challenging new work that pushes the limits of current technology. It stretches me as a learner and worker, it provides a solution that meets the client's needs, and if I can find a way to give the solution I came up with back to the community, then others can save time and money by using my work as a springboard for their own solutions.

I write and release lots of use cases and examples demonstrating techniques and solutions, and pour a lot of time, and even some of my own money into getting them out there!

If your aim was to help a project like Firefox and you wanted to be paid for your time, I would try to find a client who has a use case not currently supported by Firefox, have them pay you to solve their problem, and also arrange for your solution to be sent back to Firefox and included in their codebase. It's a win/win/win for you, the client, and Firefox, plus a bonus win for all Firefox users at the same time!

I'm not getting paid for open source; most of my GitHub stuff is for fun rather than profit: https://github.com/wkoszek <- none of these generated any $$$. So if you contribute something, your learning goal could work, but there's no money to be made.

A learn/money idea close to what you want is to get a job where open source is welcome, cherished, and used. Internships are a good way to try it out. HIIT in Finland was such a place; I interned there, and the result is here: https://github.com/wkoszek/freebsd_netfpga So I got $$$ for stay + food + cinema for a hacking project which I knew we'd publish, and I learned a bunch.

If you're not from a rich country, Google Code-in and Google Summer of Code may be an option. You get a $5000 stipend for spending your summer at home hacking code, you get a decent mentor from a project you're interested in, and you get experience.

Another model is to reach out to projects which are backed by a legal body. For example, the FreeBSD Foundation helps and supports the FreeBSD Project, and they have sponsored projects. If you're good at FreeBSD and have an idea, I feel there'd be some $$$ if you can deliver something useful. FreeBSD has feature idea pages, and if you see a fit, you could just ping people and start collaborating.

Last, and I think hardest, is to start contributing good code to a product you see is (1) open source and (2) backed by a company. I don't know how many hiring managers are techies merging pull requests, but even through individual engineers you can get a reference. After the 10th pull request accepted by the person who reviewed your stuff and with whom you've worked, I feel it's easier to send an "Are you hiring? I NEED THE MONEYZ!" email.

This, I believe, is the core idea behind BountySource [1], and similar sites [2]. People place bounties on problems in open source software that they'd really like to see solved, and "bounty-hunters" (i.e. potentially you) solve these problems and get paid the amount pledged.

And of course, if you're starting a new project, there's the Kickstarter model, followed by, e.g., Neovim, Chocolatey, etc.

Look at issues, engage with the developers, and make smaller commits to Chromium or Firefox, then apply for a job at Mozilla, Google, or Apple once you have a bit of a portfolio with the project. Firefox has the benefit that Mozilla hires remotely, so you don't need to be near a campus of one of the other big companies.

I work in open source, and there are plenty of companies that build their business around a product and hire at market rates; normally they have a SaaS model of operation, but you'll have to set your sights a little lower than Chrome or Firefox. These companies include Ghost, Mongo, Elastic, Basho, Cockroach Labs, Automattic (WordPress), SilverStripe, and countless more.

Another way to do it is to do postgraduate work at a university and get a grant; I know people who work on the Rust compiler in this capacity.

I know one common technique for contributors is to put a Bitcoin address or donation ID in their profile. This allows people who see your contributions to go to your profile and give you something back. I'm not sure how effective it is, but it's one method I have seen.

For the past 2 summers, I've been paid by Google through GSoC [1] to do open-source work, all on Homebrew. It's been a great experience in terms of skills learned, and the pay is just a nice incentive for work that I'd be doing anyway.

I'd highly recommend it, although I believe enrollment in a university is required for eligibility.

If I need something fixed or added to a FOSS project for $WORK, that's work related and it's perfectly reasonable to do so.

Now getting someone else to pay you is a much bigger stretch. Outside of a couple people who work for really big companies that market commercial versions (or support packages) for FOSS projects, I don't know of anyone that gets paid to work on FOSS.

You might consider talking to Tom Christie of Django Rest Framework. I know he's working on it full time now. IIRC the pay he gets is less than a normal salary, but there are plenty of upsides to the freedom and flexibility he has.

The main benefit is giving back to the community, and possibly learning some new skills, and maybe building a github profile. I wouldn't depend on it for more than a little bonus every once in a while. Experiences may differ.

I've made my living based on OSS projects for almost my entire professional career (about 17 years, specifically working on Squid, SciPy, Webmin, Virtualmin, etc.). Not a great living, mind you, but it's kept me in food and houses and given me a lot of freedom. Also, most of the stuff that made money hasn't been the stuff you want to be doing; the code doesn't make money, in the vast majority of cases. It's the stuff you do other than code, but that requires a deep understanding of the code, that makes money. Deployments, support, customization, packaging, documentation, bundling and "productizing", and occasionally getting jobs on the strength of your prior contributions. I've written a lot less code than I would have liked in all those years, and much of the code I wrote has been ephemeral things like shell scripts and packaging scripts.

I don't know a lot about the frontend OSS world, but I know that if you pick a project that has a lot of money being thrown around (in large deployments, for example) then you'll find that it's easier to get some of it to land in your pocket. Niche projects are difficult to get paid for, but can be good places to learn; smaller projects may be happy to have some help and will lend more guidance when you ask questions. But, then again, some big projects have people specifically tasked with bringing new developers up to speed and "community management", so that may even out that difference.

It will never hurt you to have OSS contributions on your resume. It's gotten me jobs, and has allowed me to round up good paying contract work when I've needed it (even in unrelated fields; I've recently done some Ruby work, even though I've never had a real project in Ruby). And, as someone who has hired people, I can say I've only ever hired people who had OSS work I could see. Sometimes unrelated to what I was hiring for, sometimes they were already working on what I was hiring for and I just wanted them to be able to spend more time on it and get them on board with the company road map.

All that said, it's not the easiest way to make a living in software. Getting a real job is probably the easiest way, and if you're lucky you'll get to work on OSS stuff to one degree or another. I've worked on tons of stuff that I never got paid for, and don't expect to ever get paid for. And, if you aren't really directing your efforts toward making something pay, you're unlikely to find that it'll pay.

OSS contribution does not, in the general case, lead to getting paid. But, it can lead there if you want it to.

I mean, not for nothing, but you do know you can still buy the old style MBP with 2x usb3 and an hdmi port, right? You don't have to get the new style ones that they announced.

That said, I hear you. I wanted a 32gb model with crazy good battery life also, but to be honest, Windows laptops are kind of all shit right now. I'm in the exact same boat as you. I hate the new MBPs and need a new laptop soon, BUT I'm still landing on the old style MBP as the way forward every time I look through the available options.

Probably not much help, but that's my 2 cents.

e: that said I am going to keep refreshing this thread and hope someone mentions something I haven't looked at yet.

As an add-on question: how can one figure out if a laptop supports Precision Touchpad rather than trying to emulate a mouse with the trackpad? Does Precision Touchpad actually make Windows trackpad usable and enjoyable like the Mac ones? Is Precision Touchpad supported by Linux distros?

I have been using a Windows laptop for over 5 years (yeah, a Lenovo!) and have gone through the whole upgrade from Windows 7 to the Windows 10 Anniversary Update (for free), and to my shame, my b!@#ch of a laptop still doesn't cry, with a boot-up time of 6 seconds!

Yet I needed a dedicated Unix environment, and although Bash is available natively now on Windows, it's not going to be stable soon enough for me (6 months from now, maybe; the Creators Update is coming in Jan 2017). So, a week ago I bought the 13" Retina MBP (Early 2015), and trust me, I am not disappointed after last night's #AppleEvent.

I might be biased, but coming down to your query:

> Are Lenovos worth considering post-Superfish-gate?

If you want to spend less but want around the same performance as an MBP, maybe get the Asus UX330. It's basically a toned-down UX390, but still awesome.

If you want cheaper still, consider a Clevo reseller like PCSpecialist (UK), Scan (everywhere?), Sager (US), or XMG (Europe). They are the ultimate in performance per cost; it's just that they tend not to be the most aesthetically pleasing.

I would get an MB Air or the last-gen MB Pro that is still for sale; I can't use any other trackpads, for starters, and for development I love OS X.

I plan to get another MB Air or maybe the new MB Pro without the Touch Bar. I might go for the Touch Bar, but right now it just doesn't interest me. I'll go in and try one out, read reviews, and see how it's integrated with the software I use.

For those wishing to run Linux: which of these laptops require no (or minimal, I suppose) closed-source firmware? I tend to prefer Debian sans non-free, but I'm practical enough to just weigh that as one factor among many.

Any opinions on why developers on HN/elsewhere seem to prefer laptops over desktop machines?

Do you really need to be coding on the bus/train/plane or in hotels? When you go to a meeting, do you really need to bring your entire development setup with you?

When I go into a client's office for a meeting I usually only need to take notes and do presentations. So I get a cheap $500 laptop for that. If I need anything at all from my home workstation, I just remote into it. Actually, I did this even before I was working for myself - I'd just remote back into my workstation at my desk.

Is it just that you don't want to deal with 2 machines? Are you just doing it because that's what everybody else is doing?

I am going for two computers: a server and a budget Asus ultrabook with touch support. This should be ideal for me. I will not invest in a high-end laptop or all-in-one because upgrading them isn't so smooth, whereas desktops (especially servers) are built for long-term use with the option to upgrade any time.

I have relied heavily on Microsoft Remote Desktop in the past, and my experience has been smooth for tasks like running computations in MATLAB or Mathematica.

I have also been thinking of getting a Razer Blade Pro laptop if this setup does not work out. They have a 2 TB PCIe SSD in RAID 0. That's insane in a laptop less than one inch thick.

I tried the Zenbook for a year, but the GPU didn't listen to sleep/lid-close events even after trying several fresh kernels, the WiFi wakeup on Linux sucks compared to macOS (even with the mDNSResponder disaster), and the trackpad was much worse. In the end it burnt out because of the overheating GPU. The casing was also too edgy; I constantly hurt my right hand on it.

A fully upgraded MacBook Air is still the best (i7, 500GB HD, 16GB RAM). I've got several of them.

Lenovo is too heavy to carry around. Even an MBP is too heavy and bulky for me. The EliteBook has no 500GB HD available and also only 16GB RAM.

Most of my colleagues on our SRE/devops team have Lenovos and swear by them. I have a System76 and I wish I could recommend it. Great system software packaging, decent mainboard and display, horrible keyboard, touchpad and case quality. I'd go with a Dell or Lenovo for my next one.

As for me, the only decent laptops on the market are the HP Spectre x360 (second generation, recently updated, comes with Kaby Lake) and the Acer TravelMate P648. Unfortunately not the Dell XPS laptops, due to the widely known coil-whine problem and quality-control issues in general.

I would recommend a Surface Book. Even if you don't plan to use it as a tablet at all it still beats every other laptop on the market. Initially it had a lot of issues but now they have been fixed and it really is amazing.

You don't mention if Linux support is vital to your search. It would be good to know, since there are plenty of great Windows ultrabooks that are not-so-great if you're planning to load Linux onto them.

The build quality definitely isn't as good as Apple's. I had to return my first T460s due to severe light-bleed issues. The new one is fine, though the TrackPoint isn't as good as it was in previous generations. The trackpad also is not as good as Apple's, though I try to keep a keyboard-only workflow, so it's not that much of an issue (till it is).

T460. Install Ubuntu. Install powertop and tlp. Run powertop to make sure it's happy with the settings on your machine. With the extended battery this setup will get > 10 hours of battery life (for programming workloads) and is near silent, with an excellent keyboard.
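For anyone who hasn't done this before, here's a minimal sketch of that setup on Ubuntu (package names are the standard ones in the Ubuntu repos; exact powertop flags may vary by version):

```shell
# Install the power-management tools (TLP starts automatically after install).
sudo apt-get update
sudo apt-get install -y powertop tlp

# Apply TLP's power-saving defaults now instead of waiting for a reboot.
sudo tlp start

# Flip every tunable powertop knows about to its "Good" setting,
# then open the interactive UI to eyeball the Tunables tab yourself.
sudo powertop --auto-tune
sudo powertop
```

Note that `powertop --auto-tune` resets on reboot; TLP is what makes the settings persistent.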

I am a dev who needs to travel light. I was looking for an alternative to the MBP in the MS world. I found it: the Lenovo X1 Carbon. Pros: light, well finished, classic design, good technical specs, and the keyboard is a dream. Cons: some crapware to remove.

HN has a great and I mean absolutely great search feature via Algolia https://hn.algolia.com and this particular question keeps springing up every now and then, no one seems to use the feature despite the search bar being at the bottom of every page.

It depends on your focus, of course. Andrew Ng's Coursera course is famous, and it's ideal for someone who wants to get into the mathematics behind various ML algorithms. However, this class will take you into implementing algorithms, but is less about applying them.

If you want to just try them out, I'd honestly recommend just going through the scikit-learn documentation. Almost all of the algorithms provide an example, and the API is pretty consistent across different ML algorithms, to the extent that it can be.

People learn differently, some people prefer to get into the math right away, others will never be interested in it. I'm interested, but I tend to be more motivated when I've used the algorithms, start to learn about how and why they perform well or poorly under various circumstances, and then dig into the mathematics specifically to find out why.

Also, I'm not going to be creating new ML algorithms. So, you know, that also influences my level of interest. I do care about the mathematics involved, because I do want to genuinely understand why some outputs are available for random forests but not naive Bayes or logistic regression, and why performance and/or accuracy is great in some circumstances and not others, and I don't want to have to rely on too much hand waving. But if you want to actually develop and research novel ML algorithms, you'd need to get considerably deeper into the math.

For big data, 'Big Data' by Nathan Marz was an excellent read. The conceptual chapters are top notch, and the implementation chapters give you a good look into the tools used for the field at the time of publishing.

Shameless plug: LearnDataScience (http://learned.com) is a git repo with Jupyter notebooks, data, and instructions. It's meant for programmers, assumes no math background, and addresses data-cleaning issues which most classes ignore. Having said that, Andrew Ng's class on Coursera is gold.

For an ML intro, Coursera's machine learning course (https://www.coursera.org/learn/machine-learning) is great. I have not been through the entire course, but for someone who has no background in it, it's a good intro, as the videos themselves are solid.

I've been using a Retina MacBook Pro 13" (early 15) and 15" (mid 15) and just picked up the xps 13 9350 with iris pro.

There are definitely some quality-control issues, but once you get a working model with no faults, it's great. (I had one that wouldn't reboot and had terrible coil whine, and one that had a loose trackpad and a yellow tint on the screen, but that could also be because of Amazon's shitty packaging, where the laptop was in a box with only some crumpled brown paper.) At least they took them back, no questions asked. I'm amazed how far Windows laptops have come along.

The only real downsides are that it power throttles (and thermal throttles too, but I applied my own aftermarket thermal paste and it doesn't cross 66 C at full load now) due to the Iris GPU itself consuming 18W at its rated turbo boost, while the SoC's TDP is 15W (long turbo) and 25W (short turbo). Perhaps go with the i5 model that has the HD 520, or the new 9360 that has Kaby Lake with better thermals and power consumption (the HD 620 is roughly similar to the HD 540 but won't throttle). You can also use Intel's XTU to undervolt for better battery life and less throttling if you're going to use Windows.

Linux runs flawlessly; in fact, so does OS X if you can replace the WiFi card. AMA

I went with a ThinkPad P50. It's not nearly as bulky as some think. I specced mine with the Xeon processor and the 1080p screen (but 4K is an option). I also specced everything else as low as possible (HDD/SSD, RAM) and upgraded those items myself afterwards. I now have a server-grade processor, two 512GB SSDs (one M.2 NVMe), and 64GB of RAM. It's a beast. I can also swap out the LCD panel for an aftermarket 4K panel later if I so choose. The initial base price was only around $1400 (it was on sale), plus about $500 for the SSDs/RAM on my own dime (it would have been well above $1000 on Lenovo's site).

So for ~$1900 I have something that blows the MacBook Pro out of the water.

It's sturdily made, I take it everywhere. The only thing I miss from my mac is the trackpad. You can't beat mac trackpads. However, the trackpad on the Inspiron is great, much better than many of the others I've tried. When you take into account it has better graphics acceleration than the $2800 macbook pro, you find that dollar for dollar, it's one of the best value laptops out there. (Seriously, compare it to even Dell's XPS 15, you'd have to pay ~$1650 for an XPS 15 to get comparable specs to the $1300 Inspiron 7559. The Inspiron even has double the graphics card RAM of the $2550 XPS 15!)

Bad fan control means it was sometimes noisy in near-idle conditions (though at idle it was very silent)

there were some flicker issues with the GPU (might have been resolved though)

one key was bouncy, meaning it sometimes triggered twice

it woke up from sleep randomly, sometimes while in my bag, often completely emptying the battery

In the beginning it also crashed very often, however this was resolved with an update.

So all in all the quality wasn't on the level of a Mac.

And I wouldn't even start speaking about the OS. If you're used to macOS, it's still such a night-and-day difference.

Connecting a normal low-DPI display to the 9550 alongside its HiDPI display led to so many annoyances with Windows and all the programs that won't support this for years to come. I'd barely consider it usable. The display itself was quite nice, though.

I carried a 2013 Dell XPS Developer Edition and a MacBook Air in parallel for a while this year. Note that not all of this necessarily applies to more recent versions of the XPS.

* My XPS has a really awful touchpad. When I first got it, it was definitely my main reservation. I tried a 2014 model and noted that it wasn't much improved.

* The battery life is much, much worse on the XPS, which is probably the main reason why I find myself reaching for the Mac. I've kept Ubuntu 12.04 on it, so Linux power management has likely gotten better, but there's still no comparison.

* Other than that, I've loved my XPS. It's super light, has a brilliant keyboard, excellent specs, and still works well after three years.

I have had a few models of the XPS 13 now, and each one seems to get better and better. They're light enough to carry around and still quite strong; I have seen a lot of them dropped without any damage.

Initially I thought I would never use the touchscreen, but it is actually quite useful when reading things (scrolling) or quickly clicking basic things when not really sitting behind the keyboard at a desk. Same for the keyboard backlight: very useful when working at night and on airplanes, etc. The screen in general is really, really good. Some colleagues have the 1920x1080 screen, but I would pick the 3200x1800 screen again next time, since it's much nicer to read from and allows you to use smaller fonts (= more code on one screen).

Linux support is generally much better than on other relatively new notebooks I've had, but still sometimes things break. The Developer Edition is released a bit later than the Windows models, probably to stabilize Linux support. I've only used it with Ubuntu, but I see others use several other distros, which seems to work without many issues.

14" Razer Blade 1060, grab a dbrand matte skin to cover the hideous Razer logo. No problems with Ubuntu 16.10 currently installed. Also have a 13" Dell XPS dev edition used for specific work. It's a nice system as well. The Razer def has more power if you want a full replacement.

My girlfriend recently bought one for her daily driver. We decided to go with Arch Linux to gain access to packages as they release, rather than wait for the next iteration of Ubuntu or Fedora to get updates.

Here are my pros and cons:

Pros:

1. The hardware is great; the developer edition favors more Linux-compatible hardware (obviously), and for us, it didn't require very much setup. Usually the default configuration will be enough. The touchpad, like the MacBook, has a glass surface and feels excellent.

2. Like the MacBook, it's very light. The screen looks great, and honestly on Linux I prefer 1080p.

3. Dell has a very reasonable warranty and is very quick to respond. They're also flexible: for example, you can install whatever Linux distribution you like and replace the SSD (so long as you don't ruin anything while you're in there, of course).

Cons:

1. It's fragile. Unlike the MacBook, you have to be at least somewhat careful with this thing. We ended up breaking the screen without much effort; I wager it was because it was in a backpack that got dropped somewhat aggressively.

That being said, we also bought the $60 accident protection, and Dell sent out a technician from a local repair shop to fix it for us within that week. If the technician can't fix it, they will over-night you a shipping box and a FedEx label to send your laptop back in.

Just be careful with it; treat it like the $1000+ machine that it is.

2. No replacing the RAM. It's soldered onto the board. That's not a problem for me because I barely push ~4GB.

Conclusion: I use a MacBook now; my own XPS 13 is actually arriving tomorrow and I'm very excited. I think it's a great machine and a great MacBook replacement, and it has excellent Linux compatibility. Dell's customer support is great. Just be careful with it; it doesn't have an aluminum body or several layers of glass in front of the screen. Make sure to buy the one with the right amount of RAM so you don't regret it later. If you're worried about storage, there's a $150 500GB M.2 SSD on Amazon; buy the lowest-storage version and upgrade it. Get the protection plan. It's cheap compared to the cost of buying a new device.

I have a new XPS 13. It's impeccably built, you can use it comfortably on an airplane, and it runs Linux with no issues. But I wouldn't use it for daily development. The 13 inch screen just doesn't have enough real estate, and it often feels like it's struggling to drive the 4k screen whenever I try to switch applications.

If you don't mind something heavy, check out the new Thinkpad P50 or P70. They have actual desktop-level performance, terrific screens (matte, color corrected 4k IPS!) and the new NVMe SSDs. I do most of my daily development on a P70, and increasingly just lug it along when I travel even though travel was the reason I bought the XPS 13.

The XPS 13 was recently upgraded to Kaby Lake, so if you're fine with the smaller display, I'd say go for it, I've heard great things.

Also, do realize that the UltraSharp model will have a significant impact on battery life. The comments I've looked at for the XPS 15 9550 (4K display) say that the battery life is basically halved, but it's supposedly still around 4.5 hours of battery life.

If you prefer the 15-inch, you might want to wait for a while - they still only feature Skylake CPUs and I think an upgrade is imminent (given the recent XPS 13 upgrade and all.)

I don't have any personal experience with the machines, but I'm planning to buy the XPS 15 once it gets an upgrade.

Question for the iOS developers in the thread thinking about switching (or who have already switched) away from Macs as your dev machine: how do you plan on continuing to do iPhone/iWatch/iPad dev given Apple's requirement to use their hardware?

I'd love to see some real competition for MacBooks, but I haven't seen anything close yet... Alternatives do exist, but they are still very expensive... I mean, really, they are bloody expensive. When I'm thinking about putting this kind of money on the table, I just go to the Apple store, no?

Why on earth is there no startup which just puts together Linux laptops? I'm sure you could grab a Chinese/Taiwanese/Korean white-labelish product customised with Linux-friendly peripherals, or just put the box together yourself with engraved penguins here and there. Half of devs would love it, the other half would hate it, but that should be enough to survive, no?

I have one for work. It is absolutely wonderful. I am currently using Ubuntu 16.04.1, and I find it lightweight and performant. The battery lasts about 9 and a half hours (doing web browsing and light programming).

I suppose that the thing will only improve with future Ubuntu Hardware Enablement Stacks that include new kernels and so...

I got a Dell XPS 15, the 9550 edition. Before purchasing I was scared off by the bad stability reviews it had when it was released. However, Dell treats it to updated drivers regularly, and with the latest drivers it works great. It can even handle a 3D shooter without thermal throttling, etc. I use it as a developer machine with Windows 10.

Just be careful with the Dell Thunderbolt 3 TB15 dock (not sold any more, I think). I got one, and with the latest drivers it works, but it has some quirks. Also be careful to sort out complaints about the XPS on the net: many concern problems using the dock, not the laptop itself.

Be warned that the current XPS 13 is Kaby Lake and has a rather slow (non-Iris) GPU. 3200x1800 is quite a few pixels, and the built-in GPU is pretty weak.

You might want to consider the Skylake version; sure, it's the previous generation, but the CPU perf is pretty similar, and the Iris 540 is a significant GPU upgrade. Not an Nvidia/ATI killer by any means, but much better than the normal Intel integrated graphics.

Either that, or wait for something similar to ship in its Kaby Lake incarnation.

Also keep in mind that the "upgraded" 3200x1800 screen roughly halves the battery life and is reflective. Not really worth it for me (at least on a 13" screen).

Sadly you can't get more RAM or an i7 with the 1080p (they call it FHD) screen.

I have the XPS 9550 FHD with a 512GB SSD and 16GB RAM. I installed Ubuntu GNOME 16.10. Everything worked out of the box. I even played Steam games on Linux (like Firewatch). I get around 6 hours of Rails and Ember development.

Same boat here, although I'm a bit worried about the downsize from 15" to 13". Perhaps with a docking station it might not be so bad. But hopefully Dell jumps on this opportunity to make a 15" Developer Edition.

Looks like they come with Ubuntu... Is that bog-standard Ubuntu, or does it have custom drivers for things like the touchscreen? (Basically, I'm curious whether it's easily replaced by another distribution.)

Granted, I owned the older 9343 model, but despite the many BIOS updates (and several Linux distros) my laptop kept up its phantom right clicks and cursor jumps - very annoying! No issues with Windows 10, though.

I sold it on and am happy with the real estate a 15" provides me once more.

I don't know about the Developer Edition specifically, but I just had an XPS 13 (with a Skylake processor, so not what's available in the store right now) with stock Windows 10 Pro on my desk, and Debian stable installed just fine except for the wireless card (which is the only network interface on the machine). It wasn't mine to play with, so I didn't try to figure that out, but whatever version of KDE gets installed with Debian 8.6(?) seems to be 99% of the way there.

I'm on my fourth motherboard for a year-old Dell XPS 13. They've released a new model since then, but the experience was pretty awful. Each failure is hours down the drain dealing with tech support and a hard drive wipe. The overall experience of using it as your primary computer is made painful by the inability to depend on it working.

The warranty just ended, so unless they finally fixed it for good, the machine may have been a waste of a thousand dollars. I'm crossing my fingers and hoping it keeps working.

I recently got a Dell Latitude 12 7000 Series for $work and I was surprised by how good it is:

- Plays 4k video under Windows 10

- Runs Arch linux without any hardware compatibility problems

- Silent, portable, fast (pick all three)

It sounds like an advert but this was a machine I didn't pick myself and it is the best computer I have ever used. The next time I spend money on my own laptop I will move from Thinkpad to Dell. This is after using Linux on Thinkpads for the last 13 years or so.

I thought this was worth mentioning as the Latitude is probably a bit cheaper than the XPS.

I had the first-gen XPS 13, and while it was a beautiful machine, it ran extremely hot at times. So much so that I couldn't use it on my lap. I'm not sure if the design has changed much since then, but it's worth taking into consideration.

I ultimately went with a retina MBP (early 2015). My next laptop is likely to be either a Lenovo T460 or a Dell P50 (or their successors).

I have been using an XPS 13 9350 for around 6 months now, coming from a 13" 2015 MacBook Pro. I picked up a basic FHD Core i5 model with 8GB of RAM on sale, replaced the WiFi chip with an Intel 8260NGW, and replaced the 128GB SSD with a 256GB Samsung 950 Pro NVMe. All said and done, I spent less than $1,000.

Ubuntu 16.04: pretty much works flawlessly as long as you have Intel WiFi. I had some issues with a flashing screen at first, but they all seem to have been resolved by `apt-get upgrade`. Suspend/resume, audio controls, and brightness controls all work fine. I run Docker images for pretty much everything, and it's great to have native Docker without a VM involved.

Physical Characteristics: It is very light and easy to use on the lap, on the couch, or in bed. It feels more like a MacBook Air than a MacBook Pro. Fans are on the bottom but they don't really spin up that much, even when I don't have anything under it.

Keyboard and Touchpad: Keyboard is fine. Touchpad is a lot smaller and more "clicky" than a MacBook Pro. The force touch on the MacBook Pro is way better (it's pretty much the gold standard of touchpads).

Screen: I have the FHD screen because I don't care about touch, and it is matte (the QHD+ touchscreen is glossy). DPI scaling in Ubuntu 16.04 is hit or miss. In my experience, some apps, like Chrome, only respect DPI scaling if it's in multiples of 0.5; other apps, like Firefox, only respect DPI scaling if it's an even number. JetBrains products do a good job of respecting DPI scaling, though. I keep it at 1x DPI scaling, so everything looks pretty small at 1920x1080. If you go with the QHD+ touchscreen, native resolution is 3200x1800, so 2x DPI scaling gives an effective resolution of 1600x900, and it will look great. I think most apps should work fine at 2x DPI scaling.
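If you're on GNOME/Unity, a sketch of how you might set that scaling by hand (these are the standard gsettings keys; whether your apps honor them is exactly the hit-or-miss part described above):

```shell
# Integer UI scaling: 2x turns the 3200x1800 panel into an effective 1600x900.
gsettings set org.gnome.desktop.interface scaling-factor 2

# The UI scale is integer-only in 16.04; for in-between sizes you can scale
# fonts independently (this key takes a fractional value).
gsettings set org.gnome.desktop.interface text-scaling-factor 1.25
```

Setting `scaling-factor` back to `1` (or `0` for auto-detect) undoes it.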

Webcam: The webcam location really is stupid. I dislike video chatting on this computer so much I'd rather use my phone. I use Android and kind of miss iMessage and FaceTime from the mac (it's how I would talk to some Apple friends), but whatever.

Other Thoughts: Linux FTW. IMO, the last good release of OS X was 10.6.8. Everything after that either changed the scrolling direction or added some sort of bloat to the OS. I'd run 10.6.8 still if I could. Ubuntu 16.04 feels like getting your life back. It's super quick, you can use apt-get to install dev tools instead of hacking around with homebrew, you get the real version of `sed`, and you don't feel like Apple controls your life anymore. Gotta say it twice- native Docker support and no messing around with VMs anymore!

Take the leap of faith and get the XPS 13. Or a Lenovo with good linux support. Part of me wants to try out the big ass trackpad on the new MacBook Pro but none of me wants to go back to paying $2k every time I want to upgrade my laptop.

Leave your daily routine for some time and give your brain room to work undisturbed. Do something that keeps you physically active all day. Do something creative.

Explore your feelings and try to find out where exactly they come from. Don't give up just because thinking about it hurts more. It will help your brain to cope. Actively think about the countless positive memories you have. Bad emotions are much more powerful and tend to overshadow everything else.

Thanks everyone. My mom suffered a massive asthma attack and is in the hospital on life support. She was without oxygen for too long and lost all brain function. I'm 22 and this is just so unexpected and hard.

Read A Guide to the Good Life: The Ancient Art of Stoic Joy by William B. Irvine! It's a modern summary of the Stoic techniques for overcoming negative emotions, and it also covers grief. Or read Seneca's letters to Lucilius (no. 63).

Give a listen to some of Alan Watts's lectures. They are on YouTube, and there are shorter (<10 min) clips on specific topics. He's not going to help you get over grief, but rather to learn to accept it as part of life.

It might take a dozen listens to internalize half the content, but you'd be adopting a different perspective on life. It won't come easy.

Once, in university, I worked really hard on a homework assignment for an OS class (spent around 2 hours or so). It was modular, well engineered, had all the inode stuff taken into account; it was efficient, and I was feeling very good about it. Then I lost it all.

Dulling psychic pain with physical pain seems to help me. (I mean working out, getting a tattoo, etc, not self-harm.) Treating the body brutally backgrounds your mental processes for a while, and gives you time to process things subconsciously.

As a long time macOS developer and user I too went down this road and ended up pretty frustrated. Even when you get things working perfectly (tackling iCloud services, driver issues, custom boot options, etc...) you're left with an installation that feels static.

Updates are slower to roll-out to the hackintoshes, major OS upgrades can be quite a bit slower to come. This can include security fixes too.

I ended up installing Linux and never looked back. It turns out most of what I used on macOS was just the unix-like subsystem. Having Linux was just as good, if not better than being on macOS.

Of course this doesn't help if you're doing iOS development, or need to use Xcode. I've moved away, myself, but have talked to others who have used a Mac Mini as a build machine. You could also install OS X in a Virtual Machine under Linux and use it for development, which requires its own set of hacks but fewer.

Linux distributions I would recommend:

* Solus, I really like where this project is going and it's my daily driver now.

* Arch, allows/forces a truly custom setup, you end up learning a lot about your system, but might be too distracted with your system to get work done ;)

* Antergos, an Arch alternative w/ batteries included.

* elementary OS, it's the Linux distro made by the folks who loved macOS. It's beautiful and you might like it more than macOS itself.

The closest you can get is to install Darwin (open source) and GNUstep (open source).

Then you can develop OpenStep/Cocoa applications on your non-Apple laptop legally and in total freedom.

If you have customers who want a macOS executable, you give them your sources, they download Xcode, and they compile them. This is why the GPL was invented (along with other licenses such as BSD, MIT, etc.).

Now of course, GNUstep doesn't track the evolution of Apple's Cocoa very closely. Your application will be compilable for macOS if you take some care to write it portably, and you won't be able to take advantage of Cocoa-specific features, only the most vanilla and plain OpenStep features. Depending on the kind of application, this may be more than enough.

Save your sanity and money: you can get some nicely specced refurbished/used MacBooks that run the latest Apple stuff for a good price, and save yourself a lot of frustration with foreign-hardware issues now and down the road.

Doing the whole hackintosh thing has improved over previous years. Before, it was an absolute nightmare. It's still a lot of pain, just less. So you have that option as well. Just remember it doesn't play nice with all hardware, and you will have to fiddle a lot. Every update you will have to do your research and pray to the god of moving bits that it's a smooth transition.

I previously used the 11" MBA as my dedicated "running around to meetings" machine, and basically just used Evernote and MS Office on it. When the new MacBook came out, I switched to it.

The keyboard on the MB takes some serious getting used to (actually, in the ~8 months that I used that computer, I never became fully used to the keyboard), but the screen and physical size of the MB was better than the MBA. (Of course, the processor was noticeably slower... so YMMV.)

So IMO, they did the right thing, there was too much overlap between models. The MB still needs to get faster and come down in price, but I do think it will be just as ubiquitous as the MBA within a few years.

I agree with others that the current lineup is sorely lacking an affordable laptop, but that will likely change (slowly, hopefully) as manufacturing and component costs come down.

I would recommend seeing a physical therapist to get started instead of trying to put together a plan yourself, especially since you're already experiencing pain.

Based on my experience I'd also recommend seeing one in a private practice, not attached to a large hospital, but ymmv. Hospital centers I went to would only treat one thing at a time for insurance reasons even though I had two problems.

Anyway, I think good professional guidance will help you recover faster and more completely because they'll be able to identify problems, design solutions, and give you feedback & knowledge more effectively than if you DIY.

*Code review and care.* Always leave code in a better state than you found it.

This project is big; it's definitely not your average Express/Node.js webapp you see these days.

From my experience in projects this big with a big team, there's absolutely no better solution than reviewing code carefully and caring about quality.

I was in the same situation as a consultant a few years back, and the rule to leave code better than you found it really resonates with people who respect what they do.

I'd say that the VP of Engineering should be involved in the process and set some rules for what is acceptable quality and what isn't.

One more thing:

Everyone knows a smell; every single member of your team has seen some piece of code that doesn't make sense. Keep a document with all of these and make sure you scratch items off EVERY single day.

Stuff like:

* User creation is using LOCK on table_x and it shouldn't

* Form submit code is too complex, needs to be simpler

* Extract component X into a microservice

etc...

If you go through a list like this and fix things one by one, you'll be better off in a short amount of time.

Don't try to take it on all at once; create manageable, consumable pieces that your team can relate to, understand, and get behind.

The main business goals of code quality improvements are to 1) reduce the number of problems customers encounter, thereby reducing customer support costs, 2) reduce the amount of unplanned work/firefighting engineers do, and 3) increase the pace of innovation. Therefore, along with code metrics, you should be tying these quality changes to business metrics such as the number of customer-filed bugs per month, the number of customer support calls per month, etc. This should provide validation for code quality improvements and give you a way to sell further improvements to management. Also keep in mind that there are non-technical ways to improve IT efficiency (better project management, better release management, better testing, etc.).

So how do you go about making code quality improvements? 1) See if you can remove unused code/dependencies/features. Less code means less code to support, and faster compile and testing times. Look at metrics like lines of code removed.

2) Focus on the most problematic areas of the code and eliminate errors and bugs. If you can eliminate a significant source of unplanned work/firefighting, you'll have more time to spend on planned development instead of just reacting to work. These problematic areas are where tests will be most useful.

In our company, we use the Community Edition of SonarQube to help improve code quality. SonarQube can help you set up different metrics and fail builds if they are not met via the maven-sonar-plugin. The newer SonarLint project from SonarSource has plugins for all the modern IDEs and does a quick analysis of the current file.

If it's an application suite then, from my understanding, you'll be building a main set of libraries and then a set of tools that all use those libraries. Have you considered a hierarchical plugin design? Have a main application that starts up and sets up all of your main rendering and CAD/CAM magic. Then work out the simplest of APIs for what everything actually needs access to.

Your main application basically just manages UIs/drawing to an OpenGL viewport. From there you can load modules to do other things. If you abstract what is needed, then each module should only need to define how a functionality is executed, not where or what that functionality should look like in the UI. For instance, refactor your code to follow a structure like this:

Master UI does not need to know anything about Design Plugin and CAM Plugin.

Drafting Plugin needs to know about Master UI but nothing about CAM Plugin.

CAM Plugin needs to know about Master UI and Drafting Plugin.
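That layering can be sketched with a minimal plugin registry. This is purely illustrative (the class names mirror the layers above, and the registry design is my own invention, not the poster's codebase); the point is that dependencies only ever point downward, toward earlier-registered layers:

```python
class MasterUI:
    """Core app: owns rendering/UI, knows nothing about concrete plugins."""
    def __init__(self):
        self.plugins = {}

    def register(self, plugin):
        # A plugin may depend on previously registered plugins, never the reverse:
        # it only sees a snapshot of what was loaded before it.
        plugin.attach(self, dict(self.plugins))
        self.plugins[plugin.name] = plugin

class DraftingPlugin:
    name = "drafting"
    def attach(self, ui, available):
        self.ui = ui  # depends on MasterUI only

class CAMPlugin:
    name = "cam"
    def attach(self, ui, available):
        self.ui = ui
        self.drafting = available["drafting"]  # depends on MasterUI + Drafting

ui = MasterUI()
ui.register(DraftingPlugin())
ui.register(CAMPlugin())  # registration order encodes the dependency direction
```

Loading CAM before Drafting would fail with a KeyError, which is exactly the kind of dependency violation you want to surface early.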

That's what I would try to do if this was a new project, but this isn't one, and uprooting your entire code base (or even any recognizable percentage of it) is unreasonable.

> We have a couple tens of million LOCs, with ~50 projects and 1000s of packages

If you've got that many packages then you might want to find out what sorts of abstractions are being used and which aren't working correctly, and remove or replace them with simpler solutions. How many of these packages are filled with interfaces/abstract classes/implementations of interfaces?

> After ~10 years of neglection we need a strategy to increase the code quality (lots of dependencies, feature envying inheritance hierarchies, spaghetti code, similar problem are solved in myriad ways, all that jazz).

One at a time:

> lots of dependencies

Slowly replace dependencies by either abstracting features further, replacing with new standard library features, or by implementing other solutions to the same problems. Every dependency is an added layer of complexity in my book so it's best to avoid this as much as possible.

> feature envying inheritance hierarchies

This comes as a side effect of not knowing what a level of abstraction is actually meant to be doing. Have a team meeting and ask each team what they think the actual problems to be solved are. The people knee-deep in the crap will have a better idea of what the correct or natural abstraction for these cases is, if the ones currently in use are unnatural. It may just be that the code base has had too many large-scale changes, or has simply had too many features pushed in at once (which for a CAM/CAD tool is definitely not unheard of; this is a very specialized and hard task).

> spaghetti code

Get some sort of static analyzer. I remember one group I worked with used Sonar. Also remember that the best code quality tool is a good, agreed-upon set of standards. Some things that have worked for me on group projects: avoid complicated constructors, default variables to final, avoid complicated logic statements, always exit early rather than filtering inside a for loop, and use the up-to-date constructs that aid code clarity (try-with-resources, for (var : set), and more).
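As a concrete instance of the "exit early" rule, guard clauses keep the happy path flat instead of burying it under nested filters. A small sketch (Python for brevity; the function and data are made up, but the same idea applies directly in Java):

```python
def first_overdue(tasks, today):
    """Return the first task that is not done and past due, else None."""
    for task in tasks:
        if task.get("done"):
            continue              # guard clause: skip finished tasks early
        if task["due"] >= today:
            continue              # guard clause: skip tasks not yet due
        return task               # happy path stays un-nested
    return None

tasks = [
    {"id": 1, "done": True,  "due": 5},
    {"id": 2, "done": False, "due": 9},
    {"id": 3, "done": False, "due": 3},
]
print(first_overdue(tasks, today=7))  # {'id': 3, 'done': False, 'due': 3}
```

The equivalent nested `if not done: if due < today: ...` version says the same thing but pushes the interesting code two indentation levels deep.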

> similar problem are solved in myriad ways

If the same problem exists in two places, this is an opportunity for you to pull that part out, abstract it, and use it as a library. This is a double-edged sword, since those two parts need to actually be solving the same problem, which sometimes is not the case.

Now to the nitty gritty:

> How do you measure code quality? How do you interpret the metrics?

(How many times does the code result in an error) * (The time in hours that it takes to debug the code).

Larger is worse. Keep a notebook/log of these times, graph them, and use that as a map to decide what is worth refactoring. If a piece of code "just works" but looks ugly, it can wait to be refactored if there is another piece of code that looks "visually appealing" while still causing daily side effects in the active development of the project.
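Tracked over time, that metric gives you a simple refactoring priority list. A sketch of the bookkeeping (the module names and numbers here are entirely made up):

```python
def pain_score(error_count, debug_hours):
    """(times the code errored) * (hours spent debugging it)."""
    return error_count * debug_hours

# Hypothetical log entries: (module, errors this quarter, debug hours).
log = [
    ("form_submit",   12, 1.5),
    ("user_creation",  3, 8.0),
    ("report_export",  1, 0.5),
]

# Highest pain first: refactor from the top of this list down.
ranked = sorted(log, key=lambda row: pain_score(row[1], row[2]), reverse=True)
for module, errs, hours in ranked:
    print(f"{module:14s} pain={pain_score(errs, hours):5.1f}")
```

Note how the ranking differs from raw error counts: `user_creation` errors rarely but eats debugging time, so it outranks the noisier but cheap-to-fix `form_submit`.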

> What are good tools for a windows/java/eclipse dev environment?

I've always managed Ant scripts for my group projects since they are very cross-platform. Maven works great, but I'm not a fan of the complexity of installing it for non-Linux users. Also check out IntelliJ IDEA for built-in Maven support.

> How do you act on the metrics and actually improve code quality?

Change your code by coming at it from a different perspective. If that perspective yielded a more promising piece of code (that is easier to understand, causes less side effects, and uses less external/non-standard functionality) then you keep it. A lot of my code I write is code I throw away. This is much harder to justify to business people but it's an important part of the process to sketch up what you think might work even if the attempts aren't always fruitful.

> Can you recommend any resources of success stories on how companies managed to increase code quality of a big, tangled system?

Check out the U.S. Digital Service for the only recent success story that comes to mind [1].

If anyone knew the secret sauce they wouldn't give it out for free. The ability to "fix" all the "broken" projects isn't an issue on the scale that we think it is. A large portion of all technology-related projects fail [0]. If anyone could prove they were able to reliably fix these issues, they'd be billionaires overnight.

I curse every time I accidentally click iTunes or Photos, as it means I have to wait minutes for my computer to stop lagging before I can close it. Worse, if you exceed physical memory, it seems to have a permanent slowdown until you restart. That's a feature back from the Win XP days.

The latest updates to iOS seem to make things laggier as well, to the point where the keyboard might pause after I hit a key.

Not saying that these are valid reasons, but two that spring to mind are thickness (USB-C is significantly thicker) and elegance (one could argue that Lightning is a more beautiful and slightly more pleasant connector to use).

In addition, Apple controls Lightning completely so when you buy an accessory that uses Lightning, as long as it isn't counterfeit, it has been tested by Apple and is guaranteed to work. They can't guarantee that with USB-C.

That said, it's an awkward situation. Overall I'd like USB-C on my phone however if they were going to do it, surely it would have been with the iPhone 7. It wouldn't be great if they announced it with the 7S after people had gone out and bought Lightning headphones.

An app to remember any long-term bet: say I bet dinner at a French restaurant that in 2020 the Russian team will win more gold medals than the USA. Then in August 2020 an email reminder is sent to the participants.

It is not at all uncommon for the people who want it to be free to be high maintenance users who bleed your company for product support. Having more customers does not always translate to more income.

The message you need to worry about is almost certainly not "Why you should spend money on my product instead of getting theirs for free." The message you need to focus on is why people should want your product at all. In fact, before you get to that stage, your message may need to be as simple as "We exist."

This was the marketing goal of Aflac's initial duck commercials. They were a little known insurance company with low name recognition. They tested two commercials. One was a more respectable, conservative commercial. It achieved around 40% name recognition, which was the industry standard and would have been a big improvement. The second achieved 90% name recognition, but was a silly duck making fun of the sound of the name. It was considered hugely risky, but the CEO latched onto the 90% name recognition metric and went with that campaign.

After becoming a household word, they changed their marketing campaign's goal to educating people about their product, which was poorly understood in terms of how it differed from other insurance products. Yet they were already a Fortune 200 company, because everybody at least knew their name and that they existed.

> Start from scratch and, in 4 months, be employable as a junior developer.

No, not possible. I mean, you can fake your way in, but you won't be employable on pure skills alone. If you have never touched an instrument or cared for music, you can't play in an orchestra after only four months of learning when you are 37. Maybe some prodigy can, but not 99.999% of the population.

I'm less familiar with the world you're coming from, but I'll give a variation of the advice I've given before:

1) Maybe more than 4 months to be an employable junior developer, but certainly under a year if you're willing to put the time into it.

2) "Front-end development" is very broad. You could work for anyone making anything doing that. And that could easily be a feasible first step towards a career in development, especially as it'll give you paid opportunities to practice. But

3) You have 12 years of professional experience. Capitalize on it. Are there applications that would help in those niches? Are there solutions that could be provided? Maybe not an application, but the synthesis of several to create better workflows and environments for workers in non-profit project management/fundraising.

What were your pain points in your prior career(s)? What did you see organizations struggling with? Create solutions or find solutions to fill those needs, and then market them (you have a marketing background, should be helpful, and industry connections, even more important). If you aren't interested in doing a startup or consulting yourself, maybe look for existing companies that are trying to fill these needs.

EDIT: Also, for anyone else reading this, particularly from technical backgrounds in other engineering/science disciplines, I highly recommend considering that 3rd statement. You have a great breadth of technical knowledge, unless you just hate the field or have a true passion for something else, no reason to abandon it.

Let's assume I am an engineer who has decided to transition my career and devote it to helping non-profits because I will be professionally happier. What could I do to demonstrate to you my commitment to and interest in this new career path?

Maybe I could show how much I care about the cause, by volunteering at an event. Or helping out running a fundraiser.

I'd ask yourself the same question for engineering.

I think a good answer would be something like: "I thought building cool web frontends was a really interesting problem, so I taught myself the basics (books, online courses, classes) and built (INSERT THING HERE) to build the kind of thing I would love to see in the world."

Stick to your specialty and scratch the programming itch as a hobby. If you get good at that hobby, see if you really want to be a professional dev and look for a job. If you find nothing you like, start your own thing (your experience with startups might be handy here). I would wait at least a year to see if this is just an infatuation or a deeper love. And while you are learning, pay special attention to algorithms and design, a bit more than to knowing nifty things with popular frameworks in the language of your choice.

How do you know that you want to be a developer if you have 0 knowledge of software development? A lot of jobs require logical thinking and learning on the fly. That doesn't mean you'll enjoy all of them.

I'd say get your feet wet by learning Python and some basic algorithm knowledge first to see if you're actually interested in programming. Also, learn a lot before trying to Stack-Overflow your way to an app. That approach is exponentially harder the less you know about software development.
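For a sense of what "basic algorithm knowledge" looks like in practice, here's a typical warm-up drill in Python (a made-up exercise in the style of online practice sites): count the occurrences of "hi" in a string, recursively.

```python
def count_hi(s):
    """Count non-overlapping occurrences of 'hi' in s, recursively."""
    if len(s) < 2:
        return 0                    # base case: too short to contain 'hi'
    if s[:2] == "hi":
        return 1 + count_hi(s[2:])  # consume the match, recurse on the rest
    return count_hi(s[1:])          # advance one character and keep looking

print(count_hi("abc hi ho"))  # 1
print(count_hi("hihi"))       # 2
print(count_hi("xxhixx"))     # 1
```

If working through a handful of problems like this feels engaging rather than tedious, that's a decent signal programming might be for you.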

I've taught three people to code from scratch (my brother and two friends from college). All were able to get junior-level jobs in the SF Bay Area within 6-12 months. All have since made significant progress in their careers.

To answer your questions:

1. Is it possible to do in 4 months? Certainly, but you will have to work your ass off, and also work very efficiently.

2. How is it possible? Work 60-70 hour weeks (easy if you love programming and can finance yourself without a job, hard otherwise), and have a super efficient curriculum.

3. What path should you take? I'll give you my advice below. To do so I'll have to make lots of assumptions about your situation, but here goes anyway:

---

- Avoid bootcamps. Assuming you're more motivated than the average person in your class, they will only slow you down. (One of my friends started a bootcamp against my advice, regretted it, and quit halfway through. The pace was too slow. Very few of the graduates got jobs afterwards.)

- Give up on being full-stack. Four months is not long enough. You will need to strategically cut corners, and this is a big one.

- Buy a Mac, ideally a MacBook Pro. Get one used if you have to. Don't try learning on a PC.

- Right off the bat, start using the Terminal for everything: downloading files, installing programs, opening programs, navigating the directories on your computer, copying files, deleting files, etc. When you don't know how to do something, Google it. It will be painful at first, but you will get good eventually, and it will save you pain later.

- In general, remember that learning new things (everything below) will often involve lots of pain and frustration, but push through it. Once you start to develop mastery in an area, it tends to get much more fun.

- When learning, you want to "see saw" between reading and doing. Too many people try to read and memorize everything, but that's impossible. Reading is just to orient yourself so you can figure out where to start. Doing is how you learn and remove confusion. Then you read some more to answer specific questions. Repeat.

- Start with HTML/CSS. Find cool website screenshots on Dribbble.com and try to build rough versions of them from scratch. Don't neglect to learn flexbox. Do this regularly for a couple weeks and you'll get good.

- Meanwhile, learn basic programming. Use JavaScript. Remember to see saw: read just enough to get the gist, then dive in and practice. There are lots of algorithmic practice problems online, e.g. http://codingbat.com/java. Do hundreds of them until recursion, looping, writing functions, and solving basic algorithmic questions is easy.

- When you are good with HTML/CSS and familiar with JS, it's time to combine the two. Learn about the DOM and learn about jQuery. You'll see how JS can make your pages interactive. Work on small projects, the first of which should be a portfolio that you can showcase your subsequent projects on. Use Git and GitHub for these projects.

- Continue reading about JavaScript. Read books on it. Learn the intermediate and advanced parts of the language. People will tell you to learn frameworks like Angular, Ember, React, etc. Ignore them. Even jQuery (which is much simpler) will be a bit much for you to handle and will seem like magic at first. You don't have time to dive too deep. This is just the price you pay for learning in 4 months. But that's okay, you can still get a job.

- When you do get a job, don't stop learning. Everyone I've taught stopped (or significantly slowed) their learning after landing their first job. They regretted it later and eventually resumed learning.