The list of skills is there to catch automated keyword searches. A simple bullet list is fine.

Right above the list of skills I have a 'Summary of Qualifications' which explains my overall caliber.

> I guess the main point I want to bring across is that I'm an all-round developer who cares about getting things done and uses whatever means are best for the job.

This is the kind of thing you put in the summary, phrased for a resume of course, e.g. "Veteran developer with X years of experience using a pragmatic and goal-oriented approach to development. Focused on solving problems and shipping software," etc.

As somebody who has hired a few IT people, it's frustrating to receive a CV whose main focus is a list of skills. For example, I can receive 100 CVs that all say "I can do JavaScript", but the skill level ranges from those who can just about activate a jQuery plugin through to people who could probably build jQuery from scratch. I want to see examples of how you've used those skills, because then I can form my own opinion of your capabilities.

So talk about your achievements, and mention the skills you used as part of that. Be specific, and focus on the most important bits instead of listing every single item. Remember to include human skills like planning and leading.

Unless you don't have any experience (fresh grad or something), I would minimize or eliminate the skills section and focus more on the experience - that way, you are mentioning skills in the context of specific projects and organizations.

Consider that your resume is a landing page for you, the product. A list of "Skills" is, essentially, early aughts SEO... though I suppose it can also double as a feature list.

How are you going to use this resume? Sending in applications? Posting it online? That would affect my advice.

In general, two types of people will read your resume: hiring decision makers, and their agents/gatekeepers. Ideally the resume speaks to both groups. Gatekeepers use pretty simple filtering, though it won't all be disclosed. For example, if you've got 5 years of experience, and the rest of the applicant pool has 2, and they all went to Harvard and you went to University of Phoenix, you're getting filtered out unless there's something really amazing about you. The "top school" filter may not be disclosed in the job posting, or even known prior to seeing the applicant pool. In some cases these institutional biases are more or less public knowledge, in others not. Worry about passing the obvious, stated filters. It should be clear, in under 3 seconds, that you pass or exceed them. Don't be afraid to ELI5.

For the reviewers giving more than a passing glance, tell a short story. This is like pitching your startup idea, or selling anything, really. Quick, punchy, hook them and let them call you for more.

The resume gets you the call. The call gets you the meet. The meet gets you the job.

I like to have a clear title and opening purpose statement at the top, to set the direction for the rest of the resume.

In fact you already have something to work with in this quote: "I'm an all-round developer who cares about getting things done and uses whatever means are best for the job. I'm able to learn/understand tech quickly but this is just a means to an end. I like to focus on the team and there interaction / openness." (but fix the spelling of "there interaction")

If the audience sees something like "Senior Software Engineer" followed by the above paragraph it helps them understand how you see yourself fitting into the organization.

Next I would follow with a simple tabular format of skills (languages/frameworks/platforms for example) that is quickly scannable and has been pruned to remove outdated or out of favor technologies.

There is no such standard on the Web. Everybody is going to tell you something different. One person suggested a full JS stack; I would suggest Rails. The next may suggest the .NET stack. There is simply no standard, so just use what you think fits your style.

Things to look at:

* Rails

* Node

* Django (maybe?)

* Spring (if you like Java)

There is a completely different tooling ecosystem behind each of those.

What is a fair, market-rate deal for a first-time technical author writing about a popular subject?

You're going to be shocked and dismayed by the offer they give you. Let's get that out of the way now. This is the model: they'll tell you to not do it for the money, in exactly those words, prior to explaining to you terms which imply that they're not doing it for ~92.5% of the money.

You will likely be offered something akin to a $5k advance and an 8% royalty rate on paperback sales with, perhaps, a modestly higher royalty rate on e-book sales. The advance is guaranteed, contingent on milestones. The royalties first "earn out" the advance and then start actually getting paid to you. (i.e. You have to sell $5k / 0.08 = $62,500 of books on those terms before receiving any additional money.)
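
The earn-out arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model only; the $40 cover price is a made-up number, and real contracts compute royalties on varying bases (cover price vs. net receipts), so treat it as illustrative:

```python
# Back-of-the-envelope earn-out math for the offer sketched above
# ($5k advance, 8% royalty). The $40 cover price is hypothetical.
ADVANCE = 5_000.00
ROYALTY_RATE = 0.08
COVER_PRICE = 40.00

# Sales revenue needed before royalties "earn out" the advance:
revenue_to_earn_out = ADVANCE / ROYALTY_RATE            # $62,500
copies_to_earn_out = revenue_to_earn_out / COVER_PRICE  # ~1,563 copies

def author_income(copies_sold: int) -> float:
    """Total author take: the guaranteed advance, or royalties once
    they exceed it (royalties are credited against the advance first)."""
    royalties = copies_sold * COVER_PRICE * ROYALTY_RATE
    return max(ADVANCE, royalties)

print(author_income(1_000))  # 5000.0  -- royalties never catch the advance
print(author_income(5_000))  # 16000.0 -- earned out, extra money now flows
```

In other words, at a typical per-copy royalty of a few dollars, it takes sales in the thousands of copies just to get past the advance.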

Most tech authors do not earn out advances. You should expect to receive just the advance and then occasional minor payments ($1k to $2k) for foreign rights if the book turns out to be so popular that it gets translated into, say, Japanese.

If you want to make money in writing books, it is very possible. You'll want to start collecting email addresses, start publishing more things which get more people interested in trading you their email address, publish the book yourself, and sell via your own site/email list. For bonus points, sell packages (book, book + videos, book + videos + code samples) and price much, much higher than you're comfortable with ($49/$99/$249 is popular and works well).

There exist numerous people on HN who have successful businesses doing the second model. Understand that the second model is far more akin to running a business than it is to writing books. This is true of writing books generally, but it is more obviously true when you don't have a publisher to blame for your marketing/sales outcomes.

I am in the final stages after over a year and a half. I was in your position not too long ago and researched as much as I could before accepting, so I will skip the things you will find in a quick Google search.

Negotiating:

1. Everything is negotiable; remember that.

2. Under-promise on page count. They'll ask you for an estimate; lowball it, because that's how they price the book, and then they'll ask you to fill it.

3. Get in writing what they will be contributing.

Choosing the topic:

1. Choose a topic you are passionate about

2. Choose a topic you know a lot about

3. Choose a topic you can write a lot about

4. Don't be afraid to tweak the topic halfway in.

5. Choose a topic that will still sell in a year and a half.

Writing the book:

1. Write an outline, write the first chapter, throw it away, and rewrite the outline again.

2. It's better to lose work than to keep going in a direction that isn't working.

3. You'll be busy, but read other books as much as you write.

4. Wake up early or stay up late; zero distractions is best for writing.

5. Talk out loud, like you're presenting to an audience, to get unstuck from writer's block.

6. Get feedback as soon as possible.

Marketing the book:

1. If you can, get in writing what marketing they will do for the book.

I've written 3 technical books. I highly recommend talking to an agent. I used studiob.com. Not only will they negotiate money on your behalf, but they'll get crappy clauses taken out. Stuff you've never heard of like "cross accounting clauses". My last contract had a right of first refusal for my next book in their opening offer. The acquisitions editor even laughed when we asked about it because she didn't know it was there.

The agent will take 15% and believe me it's worth it.

Whatever deal you cut, pretend it's not going to earn out its advance, because it probably won't. As everyone says, you're not doing it for the money. You're doing it because you love the topic, love to write, and want to be seen as an expert in the field. Writing the books themselves earned me very little money. Follow-on work, like articles, and using the books in salary negotiations, more than paid for it.

I'd suggest you figure out the book you want to write then shop it around the publishers (again, agent helps). And be picky about the publisher, talk to other authors that used them. They're not all the same. When one publisher hires editors and pays their technical editors and another expects a lot of self-editing and offers a free book to the tech reviewers, the outcomes will be vastly different.

As someone who's currently 1/3 of the way into doing exactly what you've been asked... I'd advise you to really, strongly consider the time it takes. I was wrong by a factor of 4: what I thought would take an extra few hours a week actually takes hours a night.

Expect no support; their staff are only there as a conduit to move things around. The other thing that shocked me is how crude it all is. The publisher I'm engaged with only uses email and Word docs. No form of document management or version control beyond manual naming of the documents.

Even if my book sells well, it won't cover the cost of the time if I'd simply consulted that many hours. If you are considering it for the money, don't accept. If you want to study a subject in detail and get partially paid for it, then it might work out.

As an author of several books with an "established publisher" I guess my main advice is that you know 1000x more about your domain than your publisher and you have a better idea than them what the book should be like so that it will sell a year and a half from now.

The best way to write a financially-successful book is not to negotiate the best royalty rates possible, but instead to make sure you write a book people are actually going to want to buy once the book comes to market... make sure you get a reasonable contract, but after that use any remaining leverage you have to make sure you get to write the right book for your market.

I was approached by Packt a few years ago, to write a book in a fairly niche topic (http://amzn.to/29LR9Ly). Their reputation is not the most stellar, but I put a lot of work into making a high-quality book with source code on GitHub and live examples running on Heroku. So I "beat the odds", I guess... earning several times the (small) advance, and getting some positive reviews on Amazon and Goodreads. The money doesn't justify the time that I put in, but it's tremendous resume fodder and played a large role in taking my career to the next level with my next job change.

However, my biggest regret is that I signed a clause giving Packt the right of first refusal on my next two books. Realistically speaking, I doubt I'll ever get around to writing another technical book again. If I do, I'll probably pitch it to Packt and then self-publish if they don't want it. But it sucks being locked in like that.

Truth be told, if I had known then what I know now then I probably would have just self-published in the first place. There's no real money in book authoring no matter which route you take. Publishers don't really promote books, and you're really on your own anyway for author support during the writing process. I doubt that most employers looking at my resume would care whether the publisher was Packt or Leanpub, so I probably would have gone the latter route just for the freedom.

Of course, if one of your publisher options is O'Reilly, then maybe that's another discussion.

I've published two books through publishers and the actual writing experience was very similar to the book that I self-published.

For the self-published book I followed the basic path set forth in Nathan Barry's Authority[0]. Financially speaking it was definitely more lucrative to self-publish, but then, I've got access to a large audience to sell to.

There is another approach that I've seen that I find interesting and that is of the "open source" variety where the book is given freely and later published by a major publishing house. You Don't Know JS is a recent example.

My personal experience with O'Reilly was a good one. They sent me a framed cover, and it was a decent experience. We got no advance and the pay was peanuts, but it served to get me recognized at the time in a specific space.

Sure, money isn't the only motivator when setting out to write a book, but it's definitely a motivator!

I wrote a kids' programming book. $1.5k advance and 8% royalties, if I remember correctly.

My biggest takeaway: forget your toolchain. I wrote it in org mode and exported to LibreOffice and then Word. As soon as the first draft was returned, I ended up battling with Word the whole time, through edits and redrafts.

Also, designers don't read code like programmers. Indentation was often messed up, incorrect quotes used and line breaks added to suit formatting. One particularly badly formatted piece of code had me ripping my hair out. Everything beneath the first line was messed up. The designer came back to me a bit confused, saying the only problem he could see was the first line of code was 4 spaces too far to the left. Being accustomed to an absolute left limit on code, defined by the first line, it hadn't occurred to me to view it like this.

I did this with Pearson (SAMS imprint) and wrote HTML5 Unleashed. My first time = $10K advance and a 12.5% (I think; I forget) royalty rate. I'd gladly answer any questions.

> what else should I be thinking about?

Time. I spent the better part of a year of my free time researching and writing the book. It's almost like I skipped a year-ish of life. Of course, the next time I write a book it will be a lot quicker.

In terms of hours worked, it's not a lot of money. If you like writing (and I do), it may still be worth it. Like most projects, the second one you do will probably be much quicker and feel a lot better than the first!

By the way everyone I worked with at Pearson was an A+ friendly, helpful person. I have a much less favorable impression of Packt, but never wrote for them.

- Writing a book is a huge undertaking - make sure that you have the time to do it.

- Find a good environment to write the book in - if you have kids/pets/others that will interrupt you, try to find somewhere else to do the writing - Marcus Hammerberg (the co-author of Kanban in Action) wrote his in coffee shops, with the side effect that he gained 10lbs, so factor that in.

- Try to pause on blockers if you can - time spent on coding/writing does not always result in the equivalent amount of book being produced.

- Check what the tech landscape for your book's subject is like - if it's very fluid then plan around it - since I started my book, Node.js was forked to create IO.js, Node Webkit was renamed to NW.js, IO.js merged back into Node.js, and then Electron emerged on the scene and now overshadows NW.js (which is ironic given that they have something of a shared history).

- Keep your editor informed - I've had quite an eventful 18 months and it helps to let them know what's going on in your life.

- Writing a book is a great feeling when you see your name as a published author.

I think the one takeaway is that it is like doing a dissertation whilst also doing a job - ask yourself if you can commit to that. If you can, great. And find the right motivation too, because you are putting at least a year of your life into this project.

I was a reviewer for a Packt book (http://droppdf.com/v/UTl8X). The review process was terrible; they ended up not taking my feedback into account, and the book was published with lots of technical mistakes.

The final book is essentially a random tutorial from pre-existing web content. Avoid them.

I went into it for the "feather in my cap" and "fun experience" aspect as well. It's grueling, stressful, and the piddly amount of money you'll make for it doesn't do much for motivation. I stopped about 3/4ths of the way through writing it and finally admitted to myself that I couldn't go on.

Some other folks have put out information about the advances and royalties, and it's about what I saw as well. If you're doing it for the money, I think you'll be disappointed, but at least you can gauge that before accepting. If you go the self-publishing route, I'm sure the amount of money you can make can increase if you have the right network, but you may lose that sense of "making it" by being published by one of the big names.

In my opinion you will make very little money from writing the book (unless it is a rare O'Reilly blockbuster) and it will take a horrible amount of time.

I think if you do the calculation based on expected income versus the time required to write the book, it is likely to be close to minimum wage. Or it sure seemed close when I did the calculation for myself.

I think that those books make money for the publishers, but not much for the authors, especially if you can make market wages in a major market, or possibly more if you can get stock or some other form of potentially lucrative compensation.

I haven't followed through, but just as an FYI - many people who have written blog posts have been approached by publishers. They don't put all of their eggs in one basket - for a given topic, they go through the "outlining" stage with multiple people.

I am late to this discussion but I have one important thing to add: make sure in your contract that the rights to the book revert to you if the publisher does not publish the book in a timely manner.

This happened to me just one time, and I learned my lesson: I signed a contract with good terms, and halfway through writing the book I discovered the publisher was also producing a competing book. They decided not to publish my book, but did not revert the rights. I got to keep the $5000 in advance money, but I was unhappy. I offered to give back the $5000 in return for the book rights, but the publisher said no.

Edit: I have published 12 books with mainstream publishers like McGraw-Hill, Springer Verlag, etc. and many more self published books via Lulu and most recently Leanpub. I totally enjoy writing.

I wrote a technical book for Wiley. 12.5% royalties for paper sales, 25% for the ebook. Negotiate the ebook rate.

The main thing that surprised me (though it shouldn't have) was the complete lack of marketing Wiley did. You write it, you find the tech editors, you edit it. They assign someone to you who basically bugs you to turn in chapters. I had the cover designed myself so it wouldn't suck. Then it comes out, and you are the one who has to market it. But for me it was fun just to do it.

I have authored a book for Manning (Java Testing with Spock) and was approached like you were (they noticed some of my technical articles).

My advice

1) Unless you are going to write a big hit, you are not going to earn much. You write a technical book for prestige, not money. Writing a technical book for money is the wrong reason to write it; stop now.

2) You should really know your topic well. I mean REALLY know it.

3) The amount of effort it will actually take will be 6x the effort you think it will take.

4) Make sure that you have enough free time and that you have discussed the matter with your significant other (and that she/he approves).

I've written 10+ books for O'Reilly, and one for Wiley/For Dummies, back in the day. It's a great experience, and I'm still writing for them now. We really enjoy it, and gain credibility, clients, and speaking opportunities from it. The money is fine, but the opportunities are great. Once you've done one or two, it gets easier and quicker.

I have written some tech books, one of them best-selling, tech edited others, and written a preface for one. I am also the author of published novels.

The amount of money and contract terms vary widely with the publisher. This is where a contract with one of the big publishers is in your favor. You can also consider self-publishing, but that is a different business model, which I will not cover here.

Don't be afraid to ask for different terms.

In particular:

- You can negotiate for an escalator clause on the royalties. This means that the more books you sell, the higher your rate.

- You can ask for a higher advance or a lower advance and higher royalty rate. You can ask for the advance to be split differently (on contract, 25%, 50%, full MS, final acceptance, etc).

- You should negotiate the option clause. This is the clause that says they get the first option on your next book. Specific terms to negotiate include limiting the scope - not "next book" but "next book on the topic of game development with Python". Also make sure they have a limited time to consider your proposal before deciding to buy it or not. 60 days seems to be a common number, but you can probably negotiate that down.

- Strike any non-compete clauses (that you will not write a book on this topic for anyone else or self publish one).

- Strike any cross-accounting clause. This is where you must earn out the advance on every book you have for a publisher before you can receive royalties on any book (and royalties for book 2 can be counted against the advance for book 1, and so on).
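
To make the escalator mechanics above concrete, here is a small sketch. The tier boundaries, rates, and cover price are invented for illustration; they are not from any real contract:

```python
# Hypothetical escalator schedule: the royalty rate rises as cumulative
# copies sold grows. All numbers below are invented for illustration.
TIERS = [
    (5_000, 0.10),          # first 5,000 copies at 10%
    (10_000, 0.125),        # copies 5,001-10,000 at 12.5%
    (float("inf"), 0.15),   # everything beyond 10,000 at 15%
]

def escalator_royalties(copies_sold: int, cover_price: float) -> float:
    """Royalties earned under a tiered (escalator) schedule."""
    total = 0.0
    prev_cap = 0
    for cap, rate in TIERS:
        # Copies that fall inside this tier's band:
        in_tier = max(0, min(copies_sold, cap) - prev_cap)
        total += in_tier * cover_price * rate
        prev_cap = cap
    return total

# 12,000 copies at a $40 cover price:
#   5,000 * 40 * 0.10  = 20,000
#   5,000 * 40 * 0.125 = 25,000
#   2,000 * 40 * 0.15  = 12,000
print(escalator_royalties(12_000, 40.0))  # 57000.0
```

The point of negotiating an escalator is visible in the breakdown: a flat 10% on those 12,000 copies would pay $48,000, so the rising tiers are worth $9,000 in this made-up scenario, and only kick in if the book actually sells.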

Some publishers pay monthly, some pay quarterly, some bi-annually. Take this into account.

Ask your publisher what their marketing and promotion plan will be for the book. How much support are they putting into it?

Earnings vary a great deal, but I have been very happy over the years, and book earnings have paid large chunks of my mortgage. How much you make depends a lot on the size of the audience, what other books are in the market, the timeliness of the book, and the publisher's approach to distribution and marketing. It tends to be on the small side and I realize my experiences are not typical.

It has opened doors for me and my co-author. We've been offered jobs, contracts, writing opportunities, and speaking opportunities.

I would say this was the best career decision I ever made. It's also draining and time-consuming, so be aware it's a large project.

I wrote a book (https://www.manning.com/books/relevant-search). Considerations: are you willing to write the book assuming that you will effectively make nothing in direct proceeds from it? (Close to true.) The upside from a book is in monetizing it indirectly. For instance, as a "thought leader" you'll be able to garner a higher salary at your next job, or ask for higher rates for consulting. But the catch here is that you've got to be willing to put yourself out there; otherwise it's just a nice bullet point on your resume.

Oh yeah - and don't underestimate the time and effort involved. (I know you mentioned it in your post, but still.) A due date, even if fairly far off, looms ominously. For a year you'll have trouble relaxing and having fun, because you know you should be getting work done on the book. You won't have your weekends back for quite some time.

Would I do it again? Yep. Will I do it again? Nope. (At least not likely :D )

- I wouldn't bother with the advance. It was a pitiful amount of money and was paid in instalments that I constantly had to chase. Instead, see if you can negotiate a better royalty rate (as it's your first book, I don't know if this will happen). Supposedly I was lucky to get an advance at all; I'm not sure how true that is.

- Another commenter mentioned the free books they give you; I think this is an area you can likely negotiate up a bit.

- As I wrote on a popular topic, I did make some money, but nowhere near what I would have made consulting. However, it has opened a lot of doors for me. I have given a lot of talks in a lot of countries and people come to me for advice, which is a nice place to be. In terms of my career, it's certainly been a big help.

- I wouldn't write for one of the publishers that push out lots of low quality titles. It demeans your work and you'd probably be far better off self-publishing.

- O'Reilly have a reasonably good infra set-up; I wrote the book in asciidoc using Vim and pushed to a git repo. There was an on-line app that would then build PDF/ebook versions of the book on demand.

I wrote a book for one of the pubs you mentioned and have talked to several other authors. Know what you're getting yourself into - you're going to be responsible for all editing so it is A LOT OF WORK. Never ending work. And it will never be good enough but dates move forward.

You don't make any money publishing with a publisher - not directly from the book, anyway. Maybe a dollar or two an hour. Do not do it for the money. Once you're done, however, you can demand a higher salary as an 'expert' - say an extra 10% or 15%. Or double your income if you move into contract or consulting. :)

One STRONG recommendation is to not promise the publisher first stab at any future books. Some publishers have this in their default author contract - that you can't publish with another publisher without first offering the publisher the opportunity to publish first (maybe for your next 3 books). That clause is one I would demand be removed from the contract.

Consider whether you're writing a book you want to be a lasting resource, or whether it's a book that needs to get published quickly to keep up with software trends. This will help you prioritize and set deadlines appropriately.

I have been asked to write books too, but I always say no. My reasoning is that I learned a lot of what I know from free books (as a middle-class Indian student, I was not really rich enough to buy all the trendy books), so my book, whenever it is written, will be open and free access. That will be my way of paying it back. If I have to make money, I would rather do it through my work, not by letting publishers have control over the knowledge I write.

I was approached by PacktPub, and I did a bit of Googling before I started to check it wasn't a scam, but didn't find a great deal of advice. I really wish I'd had the good sense to ask on HN, like you, before I signed the contract. I've been meaning to write it up on a blog, when I put mine together (ironic, considering the subject matter of the book).

For me, I knew it was never about the money. I was intending at least not to take the advance (which is actually a kind of loan that comes out of your royalties); it was about having my name on a book. But in the end it was so much work I figured I deserved it at the very least.

The main thing that shocked me was how much time it would take: in the end 18 months, and that was from the halfway point my co-author had got it to, to the point where I gave up on it.

It should be noted I'm not a freelance developer; I have a full-time job, so all this work had to be done in my spare time. This was not well understood by the publisher, even though I explained it often, and deadlines came and went without any feedback. I was asked to make Skype calls during my working hours, never at times that inconvenienced them, only ones that inconvenienced me. I believe they were based in India, so calling in my evenings would be 2am or so for them, but I wouldn't be able to write their book at all if I got fired.

I went through 3 (or 4) different 'project leads' who always told me the end was just around the corner. There was no outline of how much work was involved; the contract stipulated chapters and some redrafts, but there is WAY more effort in it than that. No one seemed to have a coherent view of what was going on.

Between starting the book and its publication, I got engaged and married. The thing that really annoyed me, and the point where I just stopped responding to them, is that I was still being contacted while on my honeymoon, after explicitly asking them not to. That, and two days before my wedding I wasted my time writing the preamble, after being told it was just one last thing (again), when it turned out my co-author had already written it.

As for the quality of the book? I just don't know. There are errata which get sent to me, but I am done writing books for a very long time.

I co-authored a technical book about 15 years ago. The advice I got from another author was that you "can't get rich writing books", and it's not worth it for the money.

Since then, book sales have declined... particularly technical books. I wonder if things are now also worse than the experiences of other commenters? For example, David Flanagan, O'Reilly's star author on Java and Javascript, switched back to consulting because even he couldn't make enough money.

All that said, it's great to be a published author! I couldn't help but smile when I re-read some of my work recently.

I've delved into this some. I wrote a chapter in a multi-author tech book, and got a decent way through writing an entire book, until the project fell apart because the co-author they paired me with turned out to be completely useless. I would definitely not recommend it, but I'm not sure if my bad experience was worse than usual, or if I just have a lower tolerance for bullshit.

Don't even think about the money. Assuming you're working as a programmer or similar job now, the money you'll get from the book will be an absolute pittance. Your income divided by your labor will come out to well under minimum wage.

Fun? Maybe. But you'll be dealing with people at the publisher who aren't out to have fun. Expect poor communication, horrendously unrealistic deadlines that only exist to make you work faster, ridiculous feedback on your writing, and a general attitude that you need to answer them ASAP, but your requests to them can be safely ignored.

Find out what format they need you to use. In my case they required MS Word. They said it would be fine. It was not fine.

Feather in your cap? Yes, for sure. But there are other ways to do that too.

My advice would be to think about what you really want out of this, and how much of that you could get from, say, starting a blog, or writing more for your blog if you already have one. The money won't be as good, but the money sucks anyway. You won't have professionals helping you, but you also won't have professionals screwing with you. You can easily get more exposure, since tech people tend to read stuff online more than in books these days.

If you really really really want your name in print, consider self-publishing. There's a lot less stigma in it these days. You'll probably sell many fewer copies, but you'll get much more of the proceeds and you'll have all of the control. It's easy to get your book into online stores like Amazon and iBooks.

I've written two books (O'Reilly and No Starch). Do it for the pleasure of having a physical token you can give to people as a gift. The money is poor, it takes forever, but when it's over you've written something and have learnt from it.

I've written a book, on publisher's request. It helped that I had about 7 years of experience as a freelance writer for one of their well established IT magazines.

While others in the thread have already stated the obvious (don't think of doing it for money; negotiate everything; under promise, over deliver etc...) - there are a few things you should be aware of as a first-time book author.

1: It will take a LOT longer than you ever thought possible. For every page of final product, you will have written 3-4 pages of text.

3: The workflow between you and your editor is crucial. Treat the book the same way you would treat a complex software project - your editor will want regular progress reports, and you want to provide work-in-progress manuscript revisions or chapters. A good editor is going to send back lots of editorial comments, questions, requests for expanding (or sometimes contracting) on varying topics and so on. Approach this the same way you would approach a very thorough code review.

3b: Ask to meet in person with your editor before you sign the contract. The two of you have to be able to work together over written media.

3: You will learn a lot about how to use written and spoken language. You may want to consider the entire project an opportunity for an extremely intensive course on written communications.

4: Before you embark on the project, find out what activity helps you relax. Then find a way to record audio. When you feel blocked, take up the activity but keep the audio recording device at hand. When your mind comes unhinged and ideas pop up, record them immediately.

5: Don't even try to write your book "in order". Focus on one or two chapters at a time, and think of them as 90% independent results. The time to tie the chapters together comes towards the end of the project, as you and your editor realise that a previously "logical" chapter order may not work after all. Leave yourself some wiggle room to make the chapter shuffle easier to handle.

And because we in the engineering disciplines are even more likely to suffer from existentialism...

6: Do your best to maintain an emotional distance. You are not your book, and your value as a human being is not dictated by the progress of writing or the (lack of) success of the published book.

Basically, writing a book is easy. You sit down, open your veins, and pour the blood out.

P.S. Having an ISBN to your name is one hell of a CV item. But don't ever imagine that you're doing this to buff up your Publications section.

I won't go into exact numbers, but you can expect an advance of around 1000 GBP (obviously, if you live outside of the UK then this is worth less now). This is paid in instalments, as you progress through writing the book.

Royalties will be around 15%, but you will need to pay back the advance before you see any of this. There will also be a minimum threshold before anything is paid out, otherwise it will be rolled over. You won't get any visibility of how well your book is selling apart from the quarterly accounts.
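To make the recoupment mechanics concrete, here's a rough sketch in Python. All figures are made up for illustration (a £1000 advance, 15% royalty on £25 per-copy receipts, a £50 minimum payout threshold); real contracts vary, and royalties are often computed on publisher receipts rather than cover price.

```python
# Rough model of how an advance is recouped before any royalty cheque arrives.
# All numbers are illustrative, not from any real contract.

ADVANCE = 1000.00        # paid in instalments while writing
ROYALTY_RATE = 0.15      # ~15% royalty
PRICE = 25.00            # assumed per-copy receipts
MIN_PAYOUT = 50.00       # below this, the quarter rolls over

def quarterly_payout(copies_sold, unrecouped, rolled_over=0.0):
    """Return (cheque, remaining_unrecouped, new_rollover) for one quarter."""
    earned = copies_sold * PRICE * ROYALTY_RATE + rolled_over
    # Royalties first pay down the outstanding advance...
    recouped = min(earned, unrecouped)
    earned -= recouped
    unrecouped -= recouped
    # ...then anything under the payout threshold rolls over to next quarter.
    if earned < MIN_PAYOUT:
        return 0.0, unrecouped, earned
    return earned, unrecouped, 0.0

# First quarter: 200 copies earns £750, all of which goes against the advance.
cheque, owed, roll = quarterly_payout(200, ADVANCE)
print(cheque, owed)   # no cheque yet; £250 of the advance still unrecouped
```

The point of the sketch: you can sell a respectable number of copies and still see nothing for the first few quarters, which is why the direct financial return looks so poor.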

I don't think you can value writing a technical book solely on the direct financial return. You could probably make more in a week of contracting than you will see for your six months of effort in writing a book.

Look at it as more of an investment in marketing your personal brand. It can open doors to higher paid jobs and better contracting work. If you enjoy education (both learning and teaching) then you can look at it as a virtuous endeavour even if it doesn't pay well. As long as you go in with your eyes open to this then you won't be disappointed.

It's not normally a problem, but be careful about any lock-in on options for future works. You may wish to switch publishers after the book.

Be prepared for a very low tech process. Depending on where you work now, writing a book may be much less sophisticated than what you do to write code. The quality over time curve will not always go up and it is quite a complex oscillating wave function. :)

I may write more about this on my blog if anyone is interested. I'll also be at the London .NET User Group tonight if anyone wants to pick my brains or see a copy of the book.

- I have been working in a particular speciality long enough that I felt I had something worth sharing with others, and was also lucky enough to be in a company that allowed me to write about the work we had done.

- Just having done it, being able to see a book with your name on it on the shelf, give it away to friends and family etc.

Some observations:

- I did the book with co-authors. I have other colleagues who are attempting to write books by themselves, and are struggling with motivation under the sheer enormity of the task. If you have coauthors you not only cut the per-person workload down, but also get the benefit of peer pressure to keep things moving along. Being the lead author, I also learned a lot about managing and organizing a distributed group with strong opinions and different approaches, which was an education in itself.

- You mentioned you have existing technical articles. Having a starting point is a great leg up versus a blank page. Our book came out of a conference presentation for which we had to write up detailed notes. So we effectively had about 30% of the raw material ready before we started on the actual book.

- As others have said, it is a lot of work. Hofstadter's Law definitely applies.

- It forces you to really know your stuff, and to double and triple check things before committing them to immortality.

- The first royalty check is surprisingly decent, I believe due to initial sales to libraries etc, but it quickly drops off. As others have said, you are almost certainly not going to be doing this for the money.

- Publisher quality varies of course, but don't necessarily expect too much from them. In terms of editing, they will pick up on some typos, but not much else. We did everything in LaTeX so there wasn't really much extra to do. But if you calculate how much money they make from the book, it is not surprising that they do not allocate a lot of resources.

- You will get completely sick of reading and rereading the text, while still finding more errors.

If I was to do it again I think I would go the self-publishing route, just to try it, since I honestly don't think the publisher did much for us in terms of marketing or editing.

It's generally agreed that you are unlikely to make money from the book alone, but there are other reasons for doing this too.

Is the topic something that you really want to promote? Do you have the spare time to do this for the experience and learning alone? Will this lead to career advancement or consulting opportunities? Often things like this are the reasons why people write books.

It is going to take a lot of time though, so you need to make sure you can spare this time for what is not going to generate a large income on its own.

Before I decided to write the book, I talked to a number of friends who were authors and they all basically gave me the same advice:

* It's a massive amount of work, especially if it's your first book. If you're working a full-time job at the same time, depending on the length of the book you have in mind, expect it to take on the order of 2 years.

* It's a different type of work than programming or even blogging. In programming, you get feedback on a near constant basis at all levels: your IDE (sub-second), your compiler (seconds), your test suite (minutes), your co-workers (hours), and customers (days or weeks). This helps keep motivation high and gives you the info you need to improve your work. With a book, unless you make a massive effort to seek it out, you get more or less no feedback whatsoever for months or even years. For a project that takes such a long time, this can really sap your motivation. You have to come back and write a bit every single day, day after day, and yet on any individual day, it feels like you've hardly made any progress at all. You have to be very good at driving yourself and you need to put in an effort to give talks, to send out chapters to friends and family for feedback, to join a writing group, and anything else you can to get the feeling of tangible, incremental progress.

* You won't make much money from it. If your book hits the New York Times best seller list, sure, you can make money. But most tech books don't sell anywhere near that much, and even if you have "decent" sales, when you factor in the massive amount of work (see point #1), it's a comically small return, especially compared to a programmer salary.

* Despite that, every single author I talked to was writing their second, third, or even fourth book, and they all recommended doing it, subject to the caveats above.

As a result, I took the plunge, and I am very happy that I did. The real reasons to write a tech book are:

* It's an unbelievable learning experience. I used to think that experts became authors, but the reality is that authors become experts. I did a ton of research for my book, met a lot of interesting people along the way, read a huge number of books I had been meaning to for years (http://www.hello-startup.net/resources/recommended-reading/), improved my writing skills, got better at marketing, and so on.

* It's a great way to develop ideas. I started a company not long after writing my book (http://www.gruntwork.io/) and many of the ideas for that company came directly from what I learned during the writing process. As a bonus, the book is also a nice sales and marketing tool.

* It opens doors. People treat you just a little differently when they find out you are a "published author." They are more willing to listen. You get more opportunities for jobs, talks, meeting people, and so on.

* It feels good. I love teaching and sharing knowledge. I love seeing something that I created have a positive impact, even a tiny one, on someone's life. I love creating things. It feels wonderful to see positive reviews; to get emails from readers telling you how much the book meant to them; to hold your book in your hands for the first time; to give your parents a copy; to find it on shelves at bookstores and famous libraries (my book is at Harvard, Oxford, etc!); and so on.

I wrote a book most of a decade back... http://nostarch.com/xen.htm - No Starch approached me, based on some blog posts that I thought were absolutely terrible. Now, I don't know if they just wanted to know if I knew someone, and included the "or you" bit to be polite, but I'm all about grabbing opportunities to do things I'm not qualified to do... as far as I can tell, that's how you get qualified to do things.

I think I said something like "My English is Presidential, but I know a guy." I called up the roommate of a friend, and we got all excited. "We'll be done by Christmas!" We got a two bedroom apartment by the Lawrence Caltrain station and filled it with computers. Writing that book took forever. I still call it "the hardest thing I ever finished," but it was super rewarding, and I still brag about it. I am very glad I did it.

Now, don't get me wrong, I personally am super proud of the tiny checks I get quarterly; and hell, they seem to still be coming in, like 6+ years later, and I am the sort of person who refuses to pretend like salary doesn't matter, but on an objective level? It's just so little money compared to what I get as a bay-area contractor that sometimes I think I'd be happier framing the checks than cashing them. I made literally thousands of dollars!

When I negotiated, it was explained that I got a better percentage for giving up the advance, so I did, because the advance was like a week's pay from the dayjob, but if I could go back and tell myself what to change in that negotiation? I'd tell me to take the percentage payout as if I had taken the advance, in exchange for the publisher promising to spend the advance money on publicity for the book.

Honestly, I have no idea if that's a standard thing, but I don't see why they wouldn't do it if I asked.

Also note, in my case? The e-book royalties were comparatively quite substantial, even though my book isn't available in the Kindle store; you can only buy the e-book, as far as I can tell, direct from No Starch, and it costs almost as much as the print book. They don't sell very many e-books, but my percentage on the e-book was way higher. I imagine that percentage drops a lot if you sell the e-book through Amazon; I believe the vig on a Kindle book is more than what it costs to print a book (though I know printing a book is not the only cost of distribution, so the Kindle book is likely still cheaper, as the 30% covers not only printing, but distribution and retail profit).

The people I know who make good money off books are already famous, and they sell without a publisher. But... that's really hard to do if you aren't already famous.

I went with a publisher because going with a publisher gives you a lot of credibility if you are not yet famous, and because seeing my physical book in physical stores tickles me to no end. And because it was their idea, and to be completely honest, I never would have finished without the publisher working with me.

I guess my main point is that you should think carefully about the non-monetary things you want when deciding which publisher you want, way more than you should think about the monetary things. Sure, get the best deal you can, because why not, but looking back? I totally would have traded away some money for more fame.

Like a focused blog, it's really a marketing effort. It may or (more likely) may not generate direct income, so don't do it for the riches, do it for the experience (you will probably have to delve into certain topics to clarify them for the book) and the publicity.

I would say choose the publisher wisely though. I've had a couple of experiences where I was asked to review books and I had to give up after a couple of chapters because they were just unreadable; don't be that author...

I wrote an introductory Python book for No Starch, Python Crash Course. Bill Pollock, the owner of no starch, invited me to consider writing a book after I gave a lightning talk at PyCon a few years ago. Writing for no starch was a really good experience, and I'd do it again.

I feel fortunate that my first writing experience was with no starch. They take each book seriously, and work hard to craft a high-quality book. They have their own editors on staff, and they asked me to recommend a technical editor. They trust their authors to know their field well enough to identify an appropriate technical editor. I am deeply grateful to my technical editor, Kenneth Love. Kenneth has a deep knowledge of Python and a strong background in teaching. He caught many technical issues, and we had numerous conversations about how best to present certain concepts to new programmers.

The writing process was clearly defined. I drafted a chapter, got feedback from a no starch editor, and then sent the chapter off for technical review. After that it went to a copy editor, and then the chapter went through a final layout process. It was my responsibility to respond to feedback at every stage. Every so often Bill would read through the chapters and offer feedback as well. At first this process felt like a bit much; in the end I really appreciated the attention to detail, and I can't imagine writing for a publisher that doesn't have a rigorous approach like this.

I committed to this work for several reasons. Writing at the introductory level is a little different financially than writing about a niche technical topic. The market for an introductory Python book is much larger than the market for just about anything else. I think of the audience for technical books as a pyramid; introductory books target the base of that pyramid. Any topic that requires background knowledge is higher up the pyramid, and the opportunity to make a meaningful number of sales is lower.

I teach high school math and science. Writing an introductory technical book has opened many doors, and I don't feel stuck in teaching at all now. I can write more, and I can easily shift to teaching CS full time if I want to. Just the process of completing a quality book has taught me a lot about following through on the less enjoyable but necessary aspects of a long-term business project.

Short lessons:

- Strike out the contract part about rights for books after the 1st one
- Beware of how fast the projects that you cover move
- Put the source code on GitHub
- Have a blog where you write about your discoveries that don't fit into the book
- Ask for discount codes and throw them around; that may be the only marketing _anybody_ will actively do for the book...
- If you are doing the first book after all, make sure you are treating it as something you will leverage multiple times after that ("published author", increased chance of presenting, proof you can write a different book for a better publisher, etc.)

Background and happy/sad story: I wrote my first book for Packt (on Apache Solr). It was a small beginner-oriented book (about 64 pages of real content). I had wanted to write a book for a long time, so when they approached me to do one about a popular open source project I was working with and blogging about, I jumped on it. I figured that a small book would be a perfect way to see the book process end-to-end.

It did not take _too_ much time, but longer than I expected (of course). However, support from Packt for the process was terrible, in terms of initial information, explanations of process stages, reviews, formatting support, and marketing.

I believe I did a good job _despite_ that, as I had written tutorials before and had worked in a senior tech-support position, which gave me visibility into research and explanation techniques.

Still, they managed to nearly destroy my book by publishing a free sample chapter from another book on the same topic, one whose title overlapped with my single-focus work. The technical content was actually complementary and used a very different explanation approach, but the general titles looked similar. Beginners (the target audience) would certainly be confused. Packt did not realize they were publishing both books at the same time. They did not realize they had created a conflict. And they did not see a problem until I escalated the issue 3 levels up, all the way out of India into the UK level of management. They replaced the free chapter in the end.

Then they screwed up on the pricing and, just after release, accidentally moved the decimal point, pricing my book at 10x what it should have been. It stayed that way for several weeks while I notified, begged, and, relatively politely, escalated.

In the end, I pushed really hard repeatedly and am happy with what I got. I just feel it happened despite the publisher, not because of them. The book is now obsolete, but a couple of people keep buying it, despite Packt increasing the price on it for some reason. Overall, I got a couple of thousand dollars out of it. A good chunk of that was actually not from individual sales but from some sort of global subscription (perhaps via the O'Reilly Safari library).

Later, I also reviewed a couple of other Packt books, supposedly on the same subject. Or I tried to review. They were so bad I could not even start providing viable feedback. So, I pulled out. Yet, I think they all got published. (Yes, I understand what this may REALLY mean about my own book. If anybody wants to privately review and provide honest feedback on an outdated Solr book, let me know.)

I also had a go at Leanpub and O'Reilly. The topic for the Leanpub book was too big for me and I cancelled it, refunding all the money (Leanpub were awesome at that slightly complicated logistics).

The current O'Reilly book is probably too big as well. Solr is moving way too fast to do anything but small books on with a classic publisher's schedule and update capabilities. I am not the first one to hit that problem, and I know of another book about Solr that got cancelled when the page count went into the second thousand.... I tried to make my scope smaller, but things are still changing faster than I can process them, never mind explain them.

My current thinking is that it may make sense to go back to Leanpub and do a super-focused micro-book that is absolutely up-to-date. Something like a Solr mega-tutorial with all the bits working and using the latest features and command lines. Sell it for $4.95 with a discount for my mailing list subscribers (yes, I built one). Update it as Solr updates, do other micro-guides, etc.

Lots of great advice in these comments. Adding a few bits based on my limited experience writing for O'Reilly:

E-books are important. They're more than half the unit sales in my case. Look for good e-book royalty rates. Someone mentioned 50% and I have no idea if that's realistic for tech books, but it should certainly be much higher than print. Some tech publishers participate in all-access online libraries, and you get royalties from these too when anyone accesses your book.

Toolchain is important. MS Word intake may mean you'll have less control over quality in post production. O'Reilly can do DocBook/AsciiDoc end to end, and can even push author-submitted ebook updates after launch. Of course if you prefer MS Word and staying hands off in post then great. But I'm always grateful for text markup in a git repo, enough that I'd consider it a big plus when picking from multiple publishers.

When you pick a publisher, make sure that you like their books and would be proud to have a year or two of your own work sitting next to them. Based on stories I've heard from author friends, there seems to be a correlation between production values and author happiness. Typesetting, paper quality, error rates, etc. are all things I care about anyway, and they're also a proxy for other parts of the experience like editorial and technical support. There are big publishers I wouldn't even consider because their catalog is so poor.

Once you start writing, don't stop. Find a steady pace and stick to it. Treat each chapter like a magazine article that's due at the end of the month. The biggest pain for my first book was writing for five months, pausing for two (weekends went to the day job for a bit), then feeling guilty about pausing, procrastinating, and nearly burning out from the stress. The work won't burn you out, the guilt will, so manage the guilt.

My editors were all good people willing to chat, answer questions, and connect me with resources. None of my editors gave me writing feedback. I don't know what's typical in this regard, but I felt quite on my own when it came to drafting and editing. I had mixed feelings about this at first: I was hoping to learn more about writing from an opinionated editor, as with magazine writing or fiction (I imagine). My editors were all good about pestering me for new material on a regular basis, which is a valuable contribution, and we had some good project planning discussions at the beginning.

Keep expectations very low for marketing help from the publisher, especially for niche titles, beyond the publisher brand itself and the occasional full-catalog ebook sale. Ask about marketing channels run by the publisher, such as companion videos, live streaming events, and publisher booths at conferences. Plan to self-promote online, and don't be shy about it. You're writing this book so people will read it, and they can't read it if they don't know about it.

Not sure if this is controversial, but personally I would trade some or all of the advance for a higher royalty. The advance doesn't come close to paying for my time, which means it doesn't shift the risk or up-front production costs to the publisher in a meaningful way. The case where I deliver a completed manuscript but the advance doesn't pay out is one I want to avoid: I want as many people as possible to read my book! If it's a failed investment for the publisher, it's a failed investment for me, even with the advance. One not insignificant advantage of an advance is that it usually pays based on drafting milestones, so it's a nice motivator, but that's merely psychological.

Manning once offered me co-authorship of Hello Python for 5% royalties and an unspecified (possibly zero) advance. To collaborate with an Australian co-author in an Australian timezone. I declined. I figured if I wanted to write a Python book I could do it with sole authorship, at a time and pace most convenient to me, for 100% ownership, 80-95% royalty, the most format freedom, etc., with the biggest loss being less marketing power. To this day, I'm not sure if I made the right choice or the wrong one. But declining appeared right at the time.

I worked at Kodak as a summer intern in '85. Was the era of the disk camera. Was also my first programming job. Lotus 1-2-3.

Most people today can't comprehend the scale of American manufacturing as it still was at that time. The Elmgrove plant where I worked (one of a dozen facilities in the Rochester area) had over 14,000 employees. Our start and end times were staggered in 7 minute increments to manage traffic flow.

That none of that would exist 20 years later was inconceivable at the time. The word "disruption" wasn't in business vocabulary. Nor was the phrase "made in China". Some senior technical managers saw the "digital" writing on the wall. But what could they do? What could anyone do? There was no way to turn that aircraft carrier on a dime.

At the end of my summer internship, I attended a presentation that our small team gave to more senior managers at the top of Kodak Tower in the conference room adjacent to President Chandler's office. One of the managers took me to the window and pointed out to me different plants and facilities of the vast Kodak empire spread out across the Rochester region. I assumed like many that Kodak had a bright future ahead because they had a world-renowned brand and excellent scientists and engineers. What many at the time didn't yet recognize was that there was no business model in digital cameras that would employ 100 thousand engineers, managers, factory workers, technicians, and staff. There were certainly no senior managers willing or able to sacrifice the golden goose of film to pursue something entirely different.

My dad devised the "Bayer filter" used in digital cameras, in the 1970's in the Kodak Park Research Labs. It is hard to convey now exactly how remote and speculative the idea of a digital camera was then. The HP-35 calculator was the cutting edge, very expensive consumer electronics of the day; the idea of an iPhone was science fiction. Simply put, my dad was playing.

This was the decade that the Hunt brothers were cornering the silver market. Kodak's practical interest in digital methods was to use less silver while keeping customers happy. The idea was for Kodak to insert a digital step before printing enlargements, to reduce the inevitable grain that came with using less silver. Black and white digital prints were scattered about our home, often involving the challenging textural details of bathing beauties on rugs.

I worked for Kodak from 2002-2009 in their Windsor, CO plant, in the Thermal Media Manufacturing division and also wrote the occasional post for the corporate blog (thermal media == Dye sublimation printer media used in Kodak Picture Kiosks mainly). AMA.

It was the same thing every year: digital is cannibalizing film faster than we thought, we need to close down X or lay off N people. A year or two of that, sure, but it kept happening over and over, and it became clear the executives were just not getting it. Looking back, I wonder if they just decided to slowly ride the ship down, extracting nice salaries along the way. I still can't understand how someone (activist shareholders, a board member with half a brain, an executive willing to speak out) didn't make a bigger stink and try to get fresh leadership.

I remember one year at an all division meeting they showed the latest corporate "motivational" video - "Winds of Change" (http://m.youtube.com/watch?v=JYW49bsiP4k). We thought finally, they get it and are admitting we've been stagnating and now we're gonna turn it all around. Everyone was super pumped up for weeks.

Then we realized the only thing that had changed was our ad agency who had produced the video.

"What about Kodak?" asked Bill Ruane. He looked back at Gates to see what he would say.

"Kodak is toast," said Gates.

Nobody else in the Buffett Group knew that digital technology would make film cameras toast. In 1991, even Kodak didn't know that it was toast.

"Bill probably thinks all the television networks are going to get killed," said Larry Tisch, whose company, Loews Corp., owned a stake in the CBS network.

"No, it's not that simple," said Gates. "The way networks create and expose shows is different than camera film, and nothing is going to come in and fundamentally change that. You'll see some falloff as people move toward variety, but the networks own the content and they can repurpose it. The networks face an interesting challenge as we move the transport of TV onto the Internet. But it's not like photography, where you get rid of film so knowing how to make film becomes absolutely irrelevant."

I lived in Rochester, NY and got a 20 hr/week job interning at Kodak in the slide film research department during HS in 1999-2000.

I remember on one occasion my boss, a mid-level executive (head of new slide film research? something like that) asked me what I thought about digital cameras, I think both because I was young and seen as "the computer guy". I didn't own one but I'd read about them a decent amount. I told him I understood they were expensive/low-quality at the moment but the advantage of ditching film and using ever-improving digital tech still seemed huge. I don't remember his exact words, but he couldn't really see the appeal or promise.

The 7-story building I worked in is just a mound of grass last I looked. I have thought from time to time just about the institutional inertia that fights against seeing what's going on and adapting. There were just tens of thousands of people with very highly specialized skills around film and chemicals and processing and dark rooms and paper and so many things most of which simply aren't relevant to a digital photography world.

Now, granted, other film/pre-digital camera companies did a far better job making the jump. I'd argue that's maybe in part because Kodak saw itself as emotionally wed to film, while other companies saw themselves more as camera/photo companies. That Fuji has been able to survive is more surprising to me than Canon/Nikon/Olympus/etc.

My high school physics teacher worked at Kodak's research labs on CCDs, including some that were for use in "specialized applications," which I assume to mean spy satellites. He said that he had quit when Kodak decided to stop shooting themselves in the foot and shot themselves in the head instead. That would have been the early 90s, I think. I don't have any specific mismanagement stories, but I'll send you his email address (the most recent that I could find.) I always wished that he would talk about it in more detail and now that it is some years later, maybe he will.

He told us one fun story. One of his research papers had been cleared by the censors for publication and accepted by a scientific journal. However, just before the journal went to print, the censors changed their minds. It was too late to fix the layout, so the edition was released with a sheaf of blank pages in the middle. He said that it was the proudest anyone had ever been of some blank paper.

Edit: Apparently, there is no way to private message an HN user. If you have a Reddit account or similar, I can pm you there.

Ten years ago, "the mobile web" might have been a reference to WAP. Nine years ago, when the iPhone arrived, the mobile web was EDGE: where you could get it and when you could afford it.

Companies like RIM and Nokia had smartphones. The people running them were smart. Their engineering was good. It had to be, because wireless access to the internet wasn't ubiquitous. Suddenly those companies faced the first-mover disadvantage.

It's difficult to imagine how revolutionary the technology of Kodachrome was. It utterly disrupted consumer photography, photojournalism, and professional photography. The fact that Kodak was experimenting with digital photography in the 1970's shows how out front they were, and they were right to treat it as a technology that wouldn't be viable for more than two decades... and one that nobody has figured out how to monetize except via the sale of hardware.

There's probably no plausible alternate universe where Kodak managed to produce sustainable profit from processing and storing digital images or selling media or anything related to their core business. Digital photography moved image production out of retail channels. I could text an image to ten of my relatives without a trip to Walgreens for one hour processing, and $0.08 3x5 prints in an hour available in the mid 1990's was a pretty amazing innovation versus the four or five days and significantly higher prices that were typical in the 70's and 80's.

Kodak wasn't a company standing still. It just didn't have a good way to make money from digital imagery: HP had them beat in the printer ink as liquid gold market and the camera manufacturers weren't going anywhere: optics are still bounded by the physics of optics.

Circling back, I think it wasn't so much people sitting around thinking "we're Kodak" as the fact that Kodak wasn't Nokia, and hence didn't have a history of selling off its mainline business and moving into a new industry. Not that, as a publicly traded company in the US, that would ever really have been a viable option. Quarter by quarter, Kodak was obligated to maximize stockholder value for the short rather than the long term.

Another reason why Kodak was slow to change was that its executives suffered from a mentality of perfect products, rather than the high-tech mindset of make it, launch it, fix it... Bad luck played a role, too. Kodak thought that the thousands of chemicals its researchers had created for use in film might instead be turned into drugs. But its pharmaceutical operations fizzled, and were sold in the 1990s.

On a related note, I just purchased a 10 pack of Kodak Portra 400 4x5" sheet film last night. I normally shoot Kodak's Tri-X and 5222 cine film, or Fuji's Acros 100, but thought it'd be fun to get into large format color photography on occasion.

I worked for Creo when it was acquired by Kodak in 2005 for $1B and then later for Kodak for quite a few years. At the time Kodak still had cash but film was obviously in decline. This acquisition was part of a $3B acquisition spree.

Kodak's mangement proceeded to run Creo into the ground through a series of layoffs, remote micromanagement, shuffling things around etc. At the time of the acquisition Creo was profitable (though definitely with some challenges) and had a few growth initiatives that looked promising (all cancelled). Very capable management, good people, and well run. There were a lot of opportunities to create some long term value in different areas but the only Kodak strategy was to keep cost cutting and milk all the businesses to their death.

What was amazing to me is that the CEO kept his job even after Kodak's market cap went below $1B. I forget what that market cap was at the time of the acquisition but probably in the $10-$20B range. Gotta be one of the top ten value destroying CEOs of all times.

For many, many years it didn't matter what management did; film kept printing money for the company. Only when things changed could you tell that management was actually incompetent. Before that you didn't need to be competent to keep making money. Kind of like Warren Buffett says: only when the tide goes out do you find out who is swimming without their bathing suit...

I just saw something on Bloomberg the other day about Kodak finally releasing some anti-counterfeiting technology that AFAIK is the same one developed at Creo over 10 years ago (Traceless).

EDIT: Another personal anecdote is that the first "real" digital camera I ever used, I'm guessing around 1996, was a Kodak. It was pretty decent. I think the price tag was quite high. At that point in time Kodak had a good reputation in digital cameras. The problem is digital cameras would never replace film as a business even if Kodak went 100% into digital. They needed to diversify.

I (and I've noticed this in others my age) still have a lingering thought that photography is expensive, and to conserve 'film'. Back in the day, even with slides, the finished cost per slide was about a buck each. It's just hard to dispel the habitual thought "is this shot worth taking?"

I'm still getting used to the ability to snap a pic and casually text it to a friend instead of trying to describe it.

I just downloaded a phone app that turns it into a scanner, complete with OCR. It's not as good as my flatbed, but it's incredibly convenient.

To grab a large slice of some niche, you need to be invested in exploiting it. In Kodak's case, that meant having all these huge facilities for film technology. There's no other way to be a big player in this niche. The upside is nobody will ever avoid thinking of your brand when they are shopping for film.

The downside is you can't just change direction. If film becomes obsolete, the market in the replacement is not going to be big enough at the stage when you can see it. And its size is in itself what you will be using to judge whether it will happen, so it's quite hard to decide to dump all the old tech in favour of a tiny industry that might run into trouble. I'm sure there were many naysayers pointing out various flaws and potential roadblocks with digital.

If you do change direction, you inevitably step on the toes of someone else in the value chain. Make your own camera and you annoy the camera manufacturers. Want to be a chemicals business? It's full there, too. Remember you're a huge operation, and there are only so many things to do.

We should not mourn the death of large corporations though. Just like in nature, it frees up resources for new players doing new things.

Steven Sasson, the man who invented the digital camera, gave a talk to my program (CE at Rochester Institute of Technology) about the process and downfall of his invention at Kodak. It all started as a backroom research project with no funding, to see if they could do something with the new technology of CCD cells. The camera ended up at 0.01 megapixels and stored images on tape (with about a 20+ second write time). What most people don't realize is that he also made a device for displaying the image on a TV (what good is a digital image if there is no way to display it?). IIRC the first "public" demo of this technology was during the Tiananmen Square Massacre, to smuggle/transmit the video out of the country. If you watch one of the original broadcasts, the only sign of it is a little Kodak logo at the bottom of the screen.

Now there is something you need to know about film: it was probably one of the most profitable products ever. Also, Kodak was not really a pure technology company; it was more of a consumer tech/chemical company. While they did have a lot of engineers (many of my professors worked there), some were more focused on the manufacturing process than on an end product. A lot of their internal structure was focused on making and processing chemical film.

When he showed this product to the upper levels (though never the CEO), the demo went something like: walk into the room with the camera, take a picture of the attendees, show the picture on the TV. While this was totally unheard of at the time, mostly they all just laughed at the quality of the image and brushed it off. They also saw no appeal in viewing pictures on a TV screen versus an actual print. A patent was filed, and that was mostly the end of it.

A lot of people laugh at Kodak for that mistake, but really it's just another example of the innovator's dilemma. At the time it made perfect sense for Kodak to make that decision, because they were just making so much money. One could say the technology was ahead of its time.

Well, then digital really started to take off. Kodak ended up turning a lot of their focus to making personal photo printers. In 2004, Kodak realized that almost every company that made a digital camera infringed on their patents and started to sue them all. They won a lot of money, but they could never get their foot back into the market.

TLDR: The camera was ahead of its time and threatened to undo a whole company whose main focus was producing and processing chemical film.

It's worth noting that Eastman Chemical, which was spun off from Kodak in 1994, is doing just fine as a Fortune 500 company with about 15,000 employees and a market cap of $10B USD.

Its stock is up some 200%+ since the spin-off from Kodak in '94 (compared to 400%+ for the S&P 500).

If you had kept your Eastman Chemical stock after the spin-off from Kodak in '94, sold your Kodak stock, and reinvested the proceeds into Eastman Chemical, you would have more than doubled your capital, not including dividends.

Grew up in Rochester. Both my father and Grandfather did about 30 years at Kodak at a fairly high level.

How can you transition from the highest-margin CPG product of all time to a new regime where there are almost no marginal costs per use? Kodak had well over 100k employees at its peak and was a global brand, just like Coke. They even saw their demise decades before it occurred.

But what is management supposed to do in a quarterly environment where CEOs are getting ousted other than double down on their success and cut costs? There was never any real money to be made in digital photography and if anyone was going to figure it out, they certainly weren't living in Rochester.

I worked for Kodak UK for a few months in 2002. I was a contractor working on their ERP systems, specifically reporting and BI tools. I wasn't privy to anything strategic but I could see some odd things.

The oddest thing was that they were still trying hard to push the APS format. It was a pretty poor format when it first came out, and it looked even worse compared to early mass-market digital cameras.

There was a certain arrogance amongst higher up managers. A sort of "We're Kodak, we're the best, we dictate the market!" There was a noticeable staff churn problem as a result. I was on a team of 20 of which 14 were contractors.

I can see some interesting parallels between Kodak and Nokia. Two giants who dominated their sectors and just didn't anticipate changes in consumer demand and then couldn't/wouldn't adapt.

Recent design grad from RIT. I learned from / worked with many former Kodak employees from the late 90's into early 00's. The majority of them worked on the consumer product side of the business.

It is my understanding that Kodak's efforts were too little, too late. The organization was driven by the business people, was hemorrhaging money, and only stayed afloat through licensing, selling off their patent portfolio, and selling the medical imaging division. They watched their consumer film profits evaporate and failed to transition into the digital age.

These designers and engineers that I have learned from and worked with are certainly brilliant individuals. It seems that the organizational culture did not provide them with the creative freedom needed to envision and develop products competitive with those coming from smaller, nimbler companies.

Here is the irony... they were not too late, they were too early. [0] Their CEO pushed hard into digital from the early 90s, but they couldn't handle the losses. This is an example of a company's executives seeing the right thing, but not being able to survive the losses until the market caught up.

I worked for a subsidiary of Kodak around '91 and can offer a small view. My impression was that they were very aware of what was coming. They came out with the PhotoCD during this time. If you look at it, it was really a way to lock people into film for longer, since you still needed to take your negatives in to be transferred.

I also remember taking a photography class in 95 and having the teacher say right at the beginning that in 10 years everything is going to be digital. My impression of the time is that everyone knew what was coming.

Rochester NY resident since 1982, Kodak's peak employment year. Wrote embedded software for Xerox for a couple years around 2000. My belief is that companies like Kodak, Xerox, HP, IBM, etc. have business structures that cannot survive in a low margin business. These companies drove and thrived with technological change for decades. The leadership could not accept the change of their key lines of business into low margin commodities.

Interesting and sad too. I worked there from 1987 to 1996 mostly in the film business. However, the lab I worked in used digital imaging to analyze and measure silver halide grains used to make film emulsion! So we were well aware of progress being made in digital imaging.

When I got to PhotoCD, management's focus was on replicating the quality and detail of film in the digital space, and they did not notice/understand that the lower quality of digital would still be useful. Their marketing was driven too much by the voices of professional photography and the ad industry's need for enlargement-quality images.

My colleagues and I actually started work on a business plan to use lower resolution digital imaging for applications in real estate and other industries that could use snapshots of lower resolution. We had really just gotten started on this, when management found out via a personal dispute of one of my colleagues and hauled us in for discipline. We stated our case and said we thought Kodak should get involved, but that fell on deaf ears. Our group fell apart when Kodak did not renew one of our contracts, and he went to work for Sony writing their digital image storage software, which became one of the top tools at that time.

The impression I got was that upper management was well aware of the digital advance but completely oblivious to the speed of its progress. They thought in terms of the huge length of time it took to develop new film products and their associated factories. In 1992, they actually thought that digital would not be a serious threat until about 2005. Oops!

Spent three weeks integrating the Kodak robotic microfilm system with a VMS-based VAX in a joint bid with Kodak and Digital in this time frame. This system was supposed to save Kodak. The Kodak VP of Government Systems flew in from Washington to "supervise" and royally screwed everything up. The guy was the biggest nut bag in the world.

-- They developed a massive patent portfolio, including many digital photography patents, that had an estimated value of $1.8 to $4.5 billion at the time of their Chapter 11 filing (read the article if you're interested in why it got sold for far less; it's fascinating). Source: http://spectrum.ieee.org/at-work/innovation/the-lowballing-o...

I have no inside information whatsoever, but even today I still use a digital camera with a Kodak sensor almost daily (an M9). I am still baffled that a company of this size and revenue could fall apart like this, probably paralleled only by the demise of Nokia. Until the early 2000s, Kodak was a leader not only in film but also in digital photography. Their CCD sensors are still made today; their sensor division is now owned by OnSemi.

All the first Nikon and Canon DSLRs had Kodak technology in them. Not sure how things happened in detail, but eventually Nikon and Canon had their own digital technology, mostly based on CMOS sensors. Kodak had one more CMOS camera of their own, the 14n, based on a Nikon body; while it was the highest-resolution body at the time, it got mixed reviews. Still, at that point Kodak probably would have had enough cash to buy Nikon.

Yet they withdrew from professional digital cameras, the last life sign was providing sensors for the Leica M9 and S. And with no other leg to stand on, the company mostly vanished with the collapse of the film market.

Ironically, the price you pay per roll of film is at an all-time high, so in theory the business of making film should be more profitable than it was, but the scale just is not there any more.

I hate these topics. After the fact it is always easy to make better choices and every big company has made more than a few mistakes, some of which really stand out in hindsight...

Kodak was a darling of the entire business world, for like a century. They printed money, they provided great jobs to hundreds of thousands of people. They were disruptive and put photography into everyone's hands. I think companies have a fixed life and theirs came and went.

Trust me, if they had a big pivot that was blown off, it was into industrial chemicals or energy chemical manufacturing or something; it wasn't simply a matter of repositioning as an imaging company instead of a film company.

Tell me, how should the Pony Express have pivoted when the telegraph came online? Kodak, the brand and most of the company, was in a similar position.

This isn't exactly insider, but offhand experience. I worked at a pharmacy from 2005-2013 that had Kodak film processing and digital printing machines. Early on, there was still a great deal of actual film; digital cameras were still expensive or didn't rival the quality of film. Many machines didn't offer negative-to-CD, but offered a floppy disk. This slow updating of their equipment was a theme in the company. As film sales slowed, Kodak grew worse. They outsourced tech support for the machines, with support staff graded on whether or not they had to send a technician out to the store. That was switched to another system, where a tech called back; most times this was recorded as done, but wasn't. This sort of thing was very common, and didn't improve as they sold off and outsourced bits of the company. They did finally start updating machines, reducing the number of locations that processed film, and quickly phasing out send-away processing, truly focusing on digital products. By then, their advantage had slipped. While they had relied on quality film before, their prints seemed no better than competitors'.

Part of the problem was obviously that they didn't take digital seriously soon enough. This, in combination with what seemed like a poorly run company, meant software always lagged behind, sometimes by years. For example, it took some time before folks could put video from their cameras onto a DVD, and the CDs themselves could only hold 120 pictures while SD cards were holding thousands. It was a software issue. In addition, some of the retail outfits they contracted through seemed overpriced.

I think the official story was filled with lament over not taking digital technology seriously early on. While I read some stuff about them, it has been some years and I don't know how much was reserved for the workplace environment.

I was a summer intern working in OLED research at Kodak in the early 90's. While not that many people associate Kodak with OLED, Kodak was considered one of the world leaders in OLED research at that time. There is a short paragraph about the folks I was working for on the OLED page on Wikipedia ( https://en.wikipedia.org/wiki/OLED ). While Kodak was able to license the associated technology/patents, it didn't seem like they came out with any consumer OLED products of their own in the years that followed my short stay there.

Big ships are slow to turn. One systemic cause, based on my experience in old, big companies, is that when profit margins on some products are high, mid-level and some high-level managers optimize for that product (e.g. roll-to-roll manufactured films at Kodak). With such optimization, you end up at a local maximum. This continues until you have an "iceberg right ahead" moment when profits start to spiral downward. Depending on the pace of growth of your competitors' technology and sales, this descent is rapid and irreversible.

We lived in upstate NY during the 1960's and I visited the Kodak campus in Rochester, I think as a cub scout. It was an amazing place. I remember feeling awe as we were walked through this long dark place where they proofed new rolls of film. The memory of the feeling, at least, is still quite clear after all these years. Great to read all the anecdotes about the company here.

The story of staggering institutional inertia and failure is amazingly similar to Xerox's, and to those of hundreds of other US corporate manufacturing giants declining into bankruptcy over the last 40 years by failing to be concerned with, or collectively act on (board, executive, staff), future risks on a several-decade time scale. The list of inertially failing organizations includes General Motors, IBM, HP, AT&T, and others.

[Can any public company plan on a multi-decade time scale? This is an anthropological, sociological, cultural and business question of interest in the academic literature.]

Others, along with Harvard Business School professor Clayton M. Christensen, have written extensively on the general cultural difficulty of spending money now, as a multi-decade investment in an uncertain future, while making money in the present.

Xerox was originally under the thumb of Kodak in Rochester, New York, as a maker of photographic paper, chemicals, and related equipment. Founded in 1906, it was known as "The Haloid Photographic Company" (halides: bromine and related chemical elements, along with silver, being the key chemicals of photography).

The owner-leadership of the Haloid company in the 1930s and 1940s knew the company was doomed in the long-run with their then-present capability. Later on, it turns out that neither the Xerox of 1995, nor the Kodak of 1995 understood the necessity of having a keen paranoia of their technology's changing future and future unreliable income.

Xerox's, Kodak's, and other corporate stories are studied in graduate-level business and engineering programs looking at the risks of failing to plan for change, the end of patent monopoly and market domination, and the cultural and financial necessity of understanding, committing to, and planning for the demise of the present technology currently sustaining an organization.

The Haloid company was not crushed by Kodak, in part because of antitrust laws, but it was always in danger of failing in a competitive market dominated by Kodak, and from the 1930s onward it committed money to finding and owning a technology sustainable and patentable enough to survive on: the Haloid company invented the term "xerography" (dry image transfer).

After the growing success of xerography in the late 1950s and 1960s, following near-death experiences bringing the technology to market, the culture of developing, and ultimately surviving on, new-but-different technology was lost amid the profits of xerography. The then-present executive leadership, addicted to the amazing profits of the xerography patents, failed to support bringing the results of new research and technology to market.

Yet the Xerox company's creations, developments, and discoveries are the backbone of the present era of computing: Ethernet, laser printers, personal computers, graphical user interfaces, Smalltalk (see also object-oriented programming, virtual machines, and the like). Look up the history of Xerox's Palo Alto Research Center (PARC). The Smalltalk VM was the foundation of Self, and later the Java VM, with the addition of a couple of decades of further work.

See "Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer" by Douglas K. Smith and Robert C. Alexander for that story, published in 1988, nearly three decades ago. The cultural failures and non-response parallels are remarkable.

Doubtless there are dozens of similar reports on the Kodak cultural inertia and failure to understand change, in academic articles, and books. Google is your friend.

Kodak leadership thought that they could not profit from digital cameras, and that the early, expensive technology could only be used by large institutions (NASA). As costs dropped, Kodak thought that their market was commodity consumables, so they licensed the technology they created. The consumables market in chemistry and photographic films and papers ended with the rise of digital photography (no negatives) and of digital, color ink-jet, and xerographic reproduction of images.

In some sort of alternate history, setting an Apple acquisition aside: Kodak would have acquired Kinko's and branched out into online, on-demand photo proofing for digital images, then gone totally Android and come out with something more like an iPad than a phone, perhaps leaning in the direction of geotagging and environmental sensing. Phone tech might have taken a back seat to messaging and something more akin to video calls on demand.

Yeah, shoulda coulda woulda; it just seems there were fewer barriers to surviving and thriving than simply egos and business-as-usual attitudes.

What I don't get is that my very first digital camera was a Kodak (it must have been in 2000). It wasn't very good (but no cameras were then), and it was one of the cheapest. So it's not like they were absent from the digital market. But still, somehow they missed the boat.

I used to work in aerospace as a hardware/software engineer and while my team did use unit tests that wasn't how software was tested and qualified.

The process in aerospace is vastly slower, as signified by the fact that on average a programmer in aerospace produces just 1,000 lines of code a year. There are clear reasons why aerospace engineers produce a lot less code:

- Documentation - You get a pile as tall as your desk for a few thousand lines of code. The software is designed to the degree of knowing precisely what the maximum N going into a function can be, and its exact runtime; every function has fixed maximum and minimum values and an associated runtime, preallocated before anyone writes code. Then the code is checked against the documentation.

- Bench testing - We had a complete dev/test environment in which all in dev hardware/software could do missions and had all the different teams and their avionics device within a network interacting. You then flew countless "missions" (real cockpit parts but swivel chair + simulator) testing all the different scenarios and what you just did.

- Code walkthrough - The wider team of engineers working on the avionics software would print out the entire code and all branches and "walk" the entire system from beginning to end with documentation in hand. Every line is validated by someone from a different team and every decision scrutinised.

- System test - multiple phases of it. Completely separate teams would rigorously test every new release to death: against the client's specs, against the documentation produced, and also with their own independent simulator and cockpit setup. Not only that, but there was a second team doing the same thing, trying to catch the first missing something.

- Then it spends years being tested in prototype vehicles before finally being signed off as ready.

End to end, it took about 10 years of full-time work on one avionics device to get 60k lines of code released. We had unit tests, but only for the purpose of testing the hardware, not really for our own software beyond startup tests.

All of that rigour comes out of one principle: every time you find a problem in the code, at any point, it's not the individual's fault, it's the team's fault, and you work out the genuine cause and ensure it can't slip through again. Everything must be testable.

There are quite a lot of parallels with unit testing, and indeed it could be a better way to capture tests for aerospace, potentially a slightly less man-intensive way to run all the tests every time a new release is produced. But it wouldn't look anything like what your typical commercial company does, because for them it's not worth reducing the 10 bugs per 1,000 lines of code they currently have down to more or less zero. Unit testing in aerospace would be about efficient repetition of known tests, much as it is in the commercial space, but it would not drive any form of design process, though I could see test definitions being produced from the function lists. We did a lot of specification testing, and limited the language, to allow that to occur and make the code more straightforward.
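The "documented bounds" discipline described above can be sketched in miniature. This is a hypothetical illustration in Python (real avionics code would be written in Ada or C under far stricter rules, and the names here are invented): the maximum input size and value range are fixed up front as part of the documented contract, the loop is bounded by that limit, and violations trip assertions.

```python
# Hypothetical sketch of the "fixed maximum N, bounded runtime" style:
# the contract exists before the code does, and the code is checked
# against it with assertions.

MAX_SAMPLES = 64  # documented, preallocated upper bound on N

def average_altitude(samples):
    """Return the mean of up to MAX_SAMPLES altitude readings.

    Documented contract: 1 <= len(samples) <= MAX_SAMPLES, each
    reading in [0, 50000] feet. Runtime is bounded by MAX_SAMPLES.
    """
    n = len(samples)
    assert 1 <= n <= MAX_SAMPLES, "input size outside documented bounds"
    total = 0
    for i in range(n):  # loop bound fixed by the contract above
        assert 0 <= samples[i] <= 50000, "reading outside documented range"
        total += samples[i]
    return total / n

print(average_altitude([1000, 2000, 3000]))  # 2000.0
```

The point is that the bounds and runtime are decided in the documentation first; the assertions merely check the implementation against that paper contract.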

Unit tests are not the one and only metric for software quality. Believe it or not, before the term "unit test" was coined, we managed to build reliable software too. A lot of the internet was built that way.

But your assumption is very interesting, because it states a very common misconception about quality (whatever that means: reliability, maintainability, resilience, etc.):

* "Unit tests imply reliable software". Well, they do not: you'd need reliable unit tests for that, and from what I've seen in the industry, not every unit test out in the wild increases reliability.

* "Without unit tests, you can't have quality software". That is just not true either. First, not all tests are unit tests; end-to-end (integration/functional/whatever) tests are extremely useful and raise quality too. Even without any automated tests, the quality of documentation, contract programming, manual testing, etc. also help a lot.

Also, note that apart from testing, a lot of different practices contribute to reliability: separation of concerns, the language's level of abstraction, the overall experience and involvement of the developers, and many others.

Unit testing is not a method of achieving reliability, but of accelerating development. Unit tests do not prove that a system functions correctly; that is more the domain of systems and integration testing.

Also, testing is only one small part of building reliable systems. Instead, reliability requires actual engineering practice, which is non-existent in the software world except in a few life-critical domains.

NASA and their contractor's software engineering practices have been covered extensively elsewhere. Here are some links.

In addition to everything else mentioned here, they spent a TON of money[1]. For example, they spent what would be 3.6 billion in 2016 dollars on guidance and navigation alone. If your current project could drop a few hundred million on QC, I bet quality would go way up, automated testing or not.

Avionics software is subjected to multiple rounds of peer review (often by different companies), integration testing, and in some cases multiple different implementations are developed and black box tested against each other for different outputs given the same inputs.

Unit tests aren't really about testing (in the validating sense) software; they are in fact quite deficient in that role. No integration tests, no end-to-end tests, no fuzz testing, etc.

Unit tests are foremost a software development tool. They force (in order for code to be unit testable) separation of concerns, definition of interfaces (i.e. the "contract" with a function's users), etc. Mechanically, they also enable validating that internal refactoring has not violated that contract.
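That contract role can be shown with a tiny sketch (the function and its behavior here are hypothetical, chosen only for illustration): the test pins down the observable behavior at the interface, so the internals can be refactored freely as long as the test still passes.

```python
# The test below pins the *contract* of slugify (its interface and
# observable behavior), not its implementation. The body can be
# rewritten however we like; as long as the test passes, the contract
# with callers is intact.

def slugify(title):
    """Contract: lowercase, words joined by single hyphens,
    no leading/trailing hyphens, whitespace collapsed."""
    words = [w for w in title.lower().split() if w]
    return "-".join(words)

def test_slugify_contract():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Everywhere ") == "spaces-everywhere"
    assert slugify("single") == "single"

test_slugify_contract()
print("contract holds")  # prints "contract holds"
```

Notice that the test says nothing about *how* the slug is built; swapping the list comprehension for a regex would not require touching the test.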

A while back someone posted some programming guidelines issued by NASA. They detailed a requirement for assert statements at least every 10 lines of code. They spoke of many other things as well, but that is the one that stuck in my head.
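For illustration, that assertion density might look roughly like this. This is a hypothetical Python sketch, not the NASA guideline code (which targets C): each step's assumptions are checked inline, so a violated assumption fails loudly at the point where it occurs rather than propagating.

```python
# Hypothetical sketch of the "assert roughly every 10 lines" rule:
# preconditions and a postcondition bracket a short computation.

def checksum(frame):
    """XOR checksum over a fixed-size 8-byte frame."""
    assert isinstance(frame, (bytes, bytearray)), "frame must be bytes"
    assert len(frame) == 8, "frame must be exactly 8 bytes"
    result = 0
    for b in frame:
        result ^= b  # XOR-fold each byte into the running checksum
    assert 0 <= result <= 255, "checksum must fit in one byte"
    return result

print(checksum(bytes([1, 2, 3, 4, 5, 6, 7, 8])))  # 8
```

One caveat worth knowing if you try this in Python: `assert` statements are stripped when the interpreter runs with `-O`, so production safety checks usually need explicit raises instead.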

From http://link.springer.com/article/10.1007/BF01845743#page-11: 1) Integration testing, 2) Systems testing, 3) Load testing, and 4) User acceptance testing. This would have been done with simulation tools alongside manual testing. Think about the way hardware testing is done and extrapolate those techniques to software.

This is an article about the team writing code for the space shuttle, it is not so much about how they tested but more about the (insane) amount of time and energy spent on making sure everything worked.

The funny thing is, it's not just "NASA way back when". Even now, the most critical code (e.g. implantable heart defibrillators) likely has fewer unit tests than the advert code that posts junk to your Facebook wall.

Cacti had a nice summary. Some of their methods are also detailed in NASA's Software Safety Guidebook that was recently posted. My comment below tells you which pages have good stuff plus, as usual, has a link to the PDF itself.

Here's a recent comment with so-called correct-by-construction methods for software development that knock out tons of defects; they usually don't have unit testing. The cost ranged from lower (e.g. some Cleanroom projects, due to less debugging) to around 50% higher (e.g. the typical Altran/Praxis number). Time-to-market isn't significantly affected with some, but is sacrificed for others. So you don't need several hundred million in QA, as some suggested.

CNN loads something like 10 megabytes of resources when you open the front page. That's just code and images; all that code has to be parsed, JITed, built into the DOM, etc. and all of that requires exponential-ish (not scientific, but it feels right) more memory than the code itself. All those images have to be unzipped into pixel buffers and painted on screen.

Even assuming your browser could do all of these things right every time, look at how many goddamn standards the browser has to support; many of those features have to be loaded hot and ready to go to improve apparent performance, and those features require a nonzero memory overhead as well. So, in the end, in order for your browser to do anything useful it has to snort memory like a coke addict.

On a Mac, as far as I know, Safari uses the smallest amount of memory. (It's the only browser I've used on OS X for a while.)

I'm surprised to see Firefox taking up 3.5GB of RAM. On my Windows 7 PC, it takes about 1GB after a day's browsing (without Flash). I usually have 80-100 tabs but only 15-20 are actually open. It's fine unless Flash goes ape and eats 2GB of RAM for no reason.

I also use Vivaldi and Opera on Windows 7. Both are based on Chromium and both consume more RAM than Firefox. Vivaldi does "lazy loading" so it only loads tabs when you click on them, whereas Opera still tries to load everything.

http://tvtropes.org/ - As a fiction-writer wannabe, I find this site both inspirational and disheartening. There are no new ideas under the sun, but the variety and potential for new combinations is dizzying!

It's really quite surprising the differences you'll see as far as what is covered, how it's covered, and the placement & size of the articles. Not to mention of course the content & bias of the articles.

By combining the different views, I like to think I'm able to get a more complete picture of the "truth", although as you can see I read relatively left-leaning publications (apart from TASS, obviously) so that is not to say my worldview is completely fair and unbiased.

Other than that, the usual techie stuff: Ars Technica, Phoronix, Nautilus, etc.

Well, people have already mentioned TV Tropes and Reddit, but they're definitely two places I can spend hours on end browsing for content.

Other than that:

http://www.hardcoregaming101.net/ - If you like obscure games, this will keep you reading for hours. The number of Japan-only arcade games you never knew you wanted is insane.

http://www.spriters-resource.com/ - And its various associates for 3D models, textures and sounds. I can easily get sidetracked just looking for examples of strange games, tilesets or enemy sprites. If you're making a fan game or game mod... you'll probably spend more time here than actually making the game.

Various Wikis in general. Super Mario Wiki, Zelda Wiki and Bulbapedia can keep you reading for weeks if you're a fan of those franchises.

https://tcrf.net/The_Cutting_Room_Floor - A very large wiki for unused content in games. There's a lot of fascinating stuff there, and just looking for ridiculous developer rants can take up a fair amount of my time.

For example, this rather amusing rant about warez teams and the demoscene:

Forums in general. If I find a new forum, I'm probably not doing anything else for the next weekend or so while I try (and sometimes fail) at catching up on what happened over the last ten years.

http://www.suppermariobroth.com/ - Yeah, it's pretty obvious what my focus is when it comes to the internet. But it's basically one very long feed of Mario tech gimmickry and randomness and it makes for one hell of a trivia source.

A few times each year I come back to Randall Munroe's "What If" [1]. By then I've forgotten where I left off last time and just kinda binge-read them over the course of a few days. If anyone here hasn't seen them yet (or, like me, hasn't checked in for some time), do yourself a favour and do it.

Twitter is a big one. Sometimes you find that someone retweeted somebody else or that someone replied to them, so you peek into their page, scroll down, and occasionally find something interesting or funny; repeat until you're tired of it.

For the past 8 years, I have used 2 Lenovo ThinkPad models (the current one is an E series; I don't remember the exact model of the previous one). I used both of them with a Linux OS. I have had no issues with either over those 8 years, and their performance has been excellent. I use them just for development activities, which involve working in IDEs, running multiple servers (for coding and testing against them) and the usual browsing and other things. I am not a video/gaming user.

The only issue I sometimes have with them is that they heat up really quickly, so using them on your lap is a problem. It's not a problem for me because, most of the time, I use them connected to an external monitor, keyboard and mouse - the laptop sits on my table with its lid closed.

What I find painful in modern laptops are the crap chiclet keyboard layouts. Why can't we just have a classic, compressed keyboard like on the ThinkPad T420 or X220? I get cramps from the keyboard of the Dell E5570 I have from work; I started using an external keyboard two days after starting on it.

Here's hoping for a retro thinkpad next year. Until then, I'll hold onto my money or just invest in an ergodox keyboard.

If you want something with good hardware support for Linux Lenovo is always my first choice. I have a Thinkpad x220 that runs Linux flawlessly. I've also got an ASUS UX305CA which isn't too bad either. The brightness controls didn't work out of the box, but I've got it all working now. It's thin and light, comparable to a Macbook Air. I use it as my personal development laptop. Easy to take anywhere since it's only 13 inches, but it has a QHD touchscreen so there is plenty of space to have multiple code windows open at the same time. Battery life isn't too bad either and it's cheap (about $700 USD new).

I'd like to hear about experiences using a newer-model ThinkPad T460s or T460p with discrete graphics under any distribution of Linux. Do sleep, suspend and hibernate work well? Dual displays? Does HiDPI pixel scaling work well? What doesn't work?

I personally love it. I've only written a couple of small, really simple apps, but I've got a couple more under way. To me, the idea of writing native desktop apps using the web development technologies I already know is badass. The sizes are on the high side for some things, sure, but that's not really a huge deal-breaker for me.

In my experience, Electron apps tend to be bloated, not very performant, and/or energy-inefficient. This means that developers are making a choice on users' behalf regarding disk space, user experience and battery life. The extent to which users notice any of this is of course dependent on a combination of their machine (e.g. old/new hardware, laptop/desktop) and the application being run.

Also, since you don't use the platform's native widgets, there's poor (or nonexistent?) accessibility features for users with special needs - not to mention the implications (HCI-wise) of redefining/replacing standard interface elements.

In my opinion, all this reeks of poor design, and is arguably (in some cases) downright user-hostile and unethical.

To portray it as "native" is, frankly, ridiculous.

Why:

The main thing Electron does well is lower the barrier to entry and maybe increase development speed.

I can definitely see use cases, such as quick prototypes and internal applications. And it is cross-platform. There are probably other good reasons to use it as well.

The bigger drawback is that it's bloated. I have a mix of Chrome apps and Electron apps; the Electron ones are big (40+ MB) and take much longer to start because they have their own copy of Chrome to open, while the Chrome apps just use the global Chrome instance that's already running.

I wonder if we'll someday get something akin to the JVM for Chromium-based stuff. We'd finally have a single install that could be updated and every app being kilobytes instead of megabytes.

Pros:

- Productivity: The browser renderer and Node (as well as the open source ecosystems that accompany those environments) offer you a lot. HTML, CSS and JS are relatively easy, and because there are a lot of web programmers out there, your access to developers, developer resources and libraries is vast.

- Multiple language support: Electron uses JavaScript for application logic but you could alternatively use CoffeeScript, TypeScript, ClojureScript, Elm or any language that can compile to JS.

- Performance when you need it: Use something like node-ffi [1] or SWIG [2] if you need C or C++.

Cons:

- Memory usage: An Electron app is essentially a fully-featured Chromium browser and a Node process that communicate via IPC. This might be a bit bloated for some applications, particularly if you're targeting systems where memory is limited.

- Large builds: Packaged Electron apps contain everything they need to run so they're typically quite large. Don't be surprised if your "hello world" app ends up being over 30MB.

- Not truly native: Sure, you can make your app look great but the only way you can make it look 100% "native" is to use the widget toolkits of whatever operating systems you're targeting. Electron does not give you access to this.

- The DOM can be your worst enemy: This is where the performance issues of many Electron apps (Atom comes to mind) arise. Depending on the complexity of your app, you'll want to either limit how many elements you render to the page at any given moment or look into using something like HTML5 canvas to render.

- Single threaded: Node is single threaded. If true concurrency is important to you, you might consider another platform.

Overall, I love Electron. It's not the right choice for all applications but if you have experience developing for the web and want a desktop application platform that is easy to get into, I'd give it a shot.

Being totally honest, I have very little experience when it comes to designing desktop apps. Most of my experience in UI development has been in either Swing (ew) or Qt/C++ (slightly less ew, but still ew).

I kind of hate the idea of Electron, and this is for a variety of reasons:

* I think JavaScript is a terrible language.
* I think HTML/CSS is a terrible language.
* I think the web is too bloated as it is, and extending that paradigm to the desktop only encourages more lazy, bloated development.

That being said, I get the appeal Electron has for a lot of people. If you already know web development, then writing a desktop app becomes a no-brainer. If you're already a desktop UI guy, you can learn Electron and be a web front-end guy too. All good things.

But the idea of using a fucking gigantic framework (I mean, think about it: Electron by itself is basically all of Chromium, or at least the big complicated exciting bits) is a massive code smell to me. It's like using an ORM or a heavyweight web framework: whenever you're leaning on that much code just for your baseline of functionality, you're making a lot of assumptions that should, as a developer, at least give you pause. One of the things I love about Swing is that I can spin up a basic prototype in a couple of hours with just a couple hundred lines of <insert non-Java JVM language here>. And beyond the built-in UI framework, I own every piece of that code and can tell you what it all does. I feel like web development (and by extension, Electron) doesn't have any of those desirable attributes.

Web development often involves hundreds of lines of boilerplate, the import of a several-thousand line Javascript API, a heavy CSS framework like bootstrap (for prototyping, at least), and the assumption that the magic will all just work. I know that fundamentally there isn't that much difference between writing an app in Swing versus Electron if we're talking about complexity, nor is there that much difference in performance for simple cases. But it just feels better for me if all I'm doing is a few API calls (even if the underlying code is similarly complex).

Now, there is one exception to all of this: one of the apps someone on my team supports at work is our monitoring dashboard/customer service portal. Because of some bizarro rule from up top, we aren't allowed to install anything except absolutely essential apps on the support machines. Electron proved to be a lifesaver here because we could wrap our web-based portals in "desktop-y" apps and get rid of the web browser entirely. For cases like that (or like a friend of mine, who is currently working on medical instrument displays and uses JS/Electron in a couple of experimental projects because it's easier than Qt and the machines run Linux or Windows 7 anyway), where what you really need is a website wrapped in a clean sandboxed container, I can kind of understand its niche.

My personal opinion: they failed because they launched big and didn't let people use it. I remember first signing up for the beta, waiting, waiting, and waiting, and never getting an invite. I even tweeted at them but never got a response. They seemed to be focused only on creating buzz in Silicon Valley without actually letting people use their platform; maybe they thought that was marketing. But what good is marketing when no one can use the product? Later on they eventually opened up, but it was too late. No developer likes to build on a platform that's going down.

Didn't they also change scope entirely? When I signed in (and never got invited) it was more like a visualization library or something to do cool 3d stuff.. and then moved to UIs and mobile apps and whatnot when it came out.

Funnily enough, the exact same thing happened with Outracks' "fuse": it started as a tool (called Realtime Studio) for realtime GPU-driven, optionally interactive content like music visualization (it had a slick demo for that) and demo production (Outracks is also a demogroup), and then switched to a mobile UI tool all of a sudden (at which point I started completely ignoring it, because it's not what I wanted).

I tried to get into it a couple of times and I could never understand it. I don't think I'm particularly slow at picking up new stuff in general, for me the learning curve was just massive and I wasn't ever quite sure why it was worth the investment. Maybe if I was trying to do browser game dev or something but I've never needed a magic periodic table[1] in my everyday development life.

My primary complaint about the Unix CLI is that commands try to do two disparate things simultaneously: outputting text for humans and generating data for consumption by other commands. In a properly designed command-line ecosystem (it's far, far too late for that now), there would be good human-readable output in one mode, and tools would exchange structured, already-escaped data amongst themselves in another mode. Alternatively, the shell would render the output for humans, and tools would just communicate in machine format.

Consider for example executing a compression command on all jpg files in a folder and below. Currently, the syntax goes something like this:

find . -iname "*jpg" -exec gfxcompress {} \;

Naive Unix users would have expected this to be achievable by piping, but no! In reality there is a special syntax of the find command that does this, and it's not completely clear how you would deal with file names further down the line, or how you would achieve further chaining. This syntax is also completely unique to the find command. You can't just plug in another data source and perform the same operation. In a CLI where tools are designed to be chained, it would look something like this:

find | match :name="*jpg" |> gfxcompress :name

In this example I used an imaginary |> operator to indicate I want something performed on every item, and I can refer to the "name" field of each directory entry directly. If I wanted to filter by time, for example, I might use the :time field instead, without any need for explicit support. If I wanted to mutate the name somewhere in the chain, I could do it. If I wanted to chain additional commands, it would be obvious how. And it would just work.
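
To make the idea concrete, here's a toy Python sketch of such a record-passing pipeline (find, match and foreach are made-up stand-ins mirroring the imaginary syntax above; gfxcompress remains hypothetical):

```python
import fnmatch
import os

def find(root="."):
    # a 'find' that yields structured records, not text lines
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            yield {"name": name, "path": path, "size": os.path.getsize(path)}

def match(records, field, pattern):
    # filter on any field of the record -- no find-specific flags needed
    return (r for r in records if fnmatch.fnmatch(str(r[field]), pattern))

def foreach(records, field, action):
    # the imaginary |> operator: apply an action to one field of each record
    for r in records:
        action(r[field])

# foreach(match(find("."), "name", "*.jpg"), "path", gfxcompress)
# (gfxcompress is hypothetical; print works as a stand-in)
```

Because every stage consumes and produces the same record shape, swapping the data source or chaining further stages is trivial, which is exactly what the find -exec special case doesn't give you.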

The same goes for error handling. Imagine a compiler spitting out structured error information which you could use directly in other tools without having to parse and interpret it first.
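
This is starting to exist in places - GCC, for instance, can emit JSON diagnostics with -fdiagnostics-format=json. Here's a toy Python sketch of consuming such output (the diagnostic records below are invented, not real compiler output):

```python
import json

# Hypothetical structured diagnostics from a compiler, as a JSON list
raw = """[
  {"file": "main.c", "line": 42, "severity": "error",
   "message": "implicit declaration of function 'frobnicate'"},
  {"file": "util.c", "line": 7, "severity": "warning",
   "message": "unused variable 'tmp'"}
]"""

diagnostics = json.loads(raw)
# Tools can now filter, count or jump-to-error without regex scraping
errors = [d for d in diagnostics if d["severity"] == "error"]
for d in errors:
    print(f"{d['file']}:{d['line']}: {d['message']}")
```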

I'd basically like to have PowerShell, potentially with better introspection and attributes so that you can build a more interactive CLI - for example, output-size estimation, tagging local read-only commands so an output preview can be shown as you combine commands, and on-the-side display of help for arguments.

My pet hate is that commands cannot tell you things about themselves in a structured way. There's no reason why completion and --help are somehow separate from the execution itself.

Love: context aware shell like zsh/oh-my-zsh provide. If I'm in a git project, show me a good git summary for example. If I'm in python venv, tell me about it. If I'm in some python project, extract its name from setup.py.

People say Windows PowerShell's use of objects rather than a text stream is a good thing.

For me I want better control over how my shell records and stores history. I want it to record more and probably not in a flat text file. I don't want it to write commands that fail to disk but I do want them in my history so I can correct my typo and run again.

I'd like lines to be rewrapped if I change terminal dimensions. I'd like more automatic pagination (rather than forgetting to use less and nuking your scrollback). I think this is an issue with mintty/PuTTY, but I'd also like command history to be printed correctly after changing the terminal width.

Pet hates: terminal bells that I didn't explicitly ask for; middle click paste; the obscure terminfo file that sometimes controls the features a program uses.

History should be recorded in non-volatile storage immediately, not just on clean exits (can do this in bash, but it needs to be the default).

You should be able to step though history for just arguments (can be done in bash, but needs to be easier).

Take a look at TOPS-20: it had "question-mark help"- any time you hit ?, it told you what the next argument is for.

The terminal emulator should have inline graphical output- it should not be text only. Someone should define a protocol so that you can have full graphical programs running in the terminal emulator. When you exit the program, the last view remains in the scroll-back history.

Some plan9 things are good: network access should be in the filesystem so that you don't need special programs like wget: cp /net/www.google.com/http/index.html foo

Same with local I/O: parameters for opening a serial port should be part of the filename when you open, something like: cp foo /dev/ttyS0:9600n81

The filesystem should be extended to also be a database. SQL is ugly, but some kind of relational algebra query language should be built in:

cat /mydatabase/name="joe" is like 'select * from mydatabase where name="joe"'
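
A toy Python sketch of that idea, backing the "path query" with an in-memory SQLite table (the path syntax and the fs_query name are invented, and the SQL construction is deliberately naive - it's only an illustration, not injection-safe):

```python
import sqlite3

# A toy "filesystem as database": one table, queried via a path
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mydatabase (name TEXT, age INT)")
con.executemany("INSERT INTO mydatabase VALUES (?, ?)",
                [("joe", 31), ("ann", 28)])

def fs_query(path):
    # parse a path like /mydatabase/name="joe" into a SELECT
    _, table, cond = path.split("/", 2)
    field, value = cond.split("=", 1)
    sql = f"SELECT * FROM {table} WHERE {field} = ?"  # toy only
    return con.execute(sql, (value.strip('"'),)).fetchall()

# fs_query('/mydatabase/name="joe"') plays the role of
# cat /mydatabase/name="joe"
```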

Don't google errors. Don't network anything that isn't explicitly requested to be networked. Don't give any built-in affordances for HTTP or other poorly-designed, politically-imposed protocols. I don't even know what you're talking about with microservices, as that notion only makes sense in the context of implementation details of RPC services ("Cloud APIs" in 2016 parlance).

A modern shell would:

- Never use text as the interchange format, but rather keep binary representations in memory and link all commands in a given interaction into a single computation, doing inlining and optimization across the boundaries of what are currently separate processes communicating via pipes/IPC.

- Know about the structure of valid invocations of commands (essentially, their type signature and the algebraic type structure of their parameters), preventing any "invalid invocation" error messages a priori. Only valid options would be permissible for tab-completion, and un-tab-completable options would give error feedback directly without executing the program.

- Have a principled, data-race-free alternative to the filesystem for persistent storage of data, which maintains type information without preventing useful generic operations like bytewise encryption.

- Support Unicode as the only character set for text.

- Encourage proportional fonts, so as to make box-drawing characters not work properly. An associated UI system would permit applications to refer neither to character cells nor pixels, but only to a graph of modifiable or immutable values and a set of presentation hints. Platform-specific modules would interpret these based on the capabilities of the local terminal. Full graphical applications over SSH are trivial under this model - no more "I use a shitty 1980s-era UI because that's the only thing I can export across the network consistently".

- Allow addressing both local and remote system resources (RAM, CPUs, I/O devices) via a uniform interface, and allow dynamically attaching and detaching these from running processes, both local and remote.

- From a more traditional shell perspective, it seems important to support "by" operations with good ergonomics. This is stuff like "sort all these files based on the output of applying this other file-accepting command to each", or "filter out all paragraphs in a text file containing the string 'foo', but print only the last seven characters of each one".
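
One reading of those two "by" operations, sketched in Python (sort_by and grep_paragraphs are made-up names, and the second example assumes "filter" means keeping the matching paragraphs):

```python
def sort_by(items, key_fn):
    # "sort all these files by the output of applying key_fn to each"
    return sorted(items, key=key_fn)

def grep_paragraphs(text, needle, project=lambda p: p):
    # keep paragraphs containing needle, printing only a projection of each
    return [project(p) for p in text.split("\n\n") if needle in p]

text = "alpha foo bar\n\nno match here\n\nanother foo paragraph"
# paragraphs containing 'foo', last seven characters of each
tails = grep_paragraphs(text, "foo", lambda p: p[-7:])
```

The point is that the key function or projection could just as easily be an external command; the shell's job would be to make that composition ergonomic.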

I think it's an interesting idea that a terminal emulator can have text-only input but the ability to draw real high-res graphical output - e.g. a canvas to draw your matplotlib diagram in the terminal output, or a more graphical version of htop (with time-series graphs etc.).

My biggest pet hate is the single-tasking, which is more down to the nature of scripting languages. It would be great to have an event/command queue.

<rant>Each time I run an npm, apt-get or brew command, it irks me that all the downloads happen, then everything unpacks, then everything processes, when it really should be put on an event/command queue so downloads can continue while processing is happening.

I have the same bugbear about the GUI: if I want to transcode a set of videos and move them to a NAS, it's a linear process. I keep promising myself I'll write a small app to get it done properly; I just wish the operating system were a little better at it. The main pain point is that if I send 10 files to my NAS, it will transfer them all at the same time and take forever because of the seek time. What OSes should be doing is queuing these things up to make the transfer quicker.</rant>

In TCC/LE (a CMD replacement for Windows by JPSoft) there is one little feature that I miss in Powershell because I got so used to it: if you just enter the name of a folder (as if it's a command), you change dir to that folder (so no need to type the 'cd').

In Powershell you just get an error that this thing is not executable. A waste of semantic space, IMO.

I have my urxvt configured to open http(s) URLs in my browser, but it would be nice if I could open URLs of arbitrary protocols via handlers. Similarly, being able to distinguish valid pathnames and open them in $EDITOR/$VISUAL (or perhaps via special file-type handlers) would be nice.

Overall, I'm not particularly dissatisfied with the state of the command line in 2016. My experience has been that I make no more (or fewer) mistakes with command-line interfaces than I do with graphical ones, and that spending a few seconds thinking about what I actually want to do prevents me from using the wrong approach (i.e., a graphical or console program) to a problem.

I'd like to leave this cautionary tale against getting too clever from the Jargon File:

Warren Teitelman originally wrote DWIM to fix his typos and spelling errors, so it was somewhat idiosyncratic to his style, and would often make hash of anyone else's typos if they were stylistically different. Some victims of DWIM thus claimed that the acronym stood for "Damn Warren's Infernal Machine!".

In one notorious incident, Warren added a DWIM feature to the command interpreter used at Xerox PARC. One day another hacker there typed "delete *$" to free up some disk space. (The editor there named backup files by appending $ to the original file name, so he was trying to delete any backup files left over from old editing sessions.) It happened that there weren't any editor backup files, so DWIM helpfully reported "*$ not found, assuming you meant 'delete *'". It then started to delete all the files on the disk! The hacker managed to stop it with a Vulcan nerve pinch after only a half dozen or so files were lost.

The disgruntled victim later said he had been sorely tempted to go to Warren's office, tie Warren down in his chair in front of his workstation, and then type "delete *$" twice.

I'd like to see a good CLI design involving sandboxing-by-default, and handling object-capabilities (e.g. file descriptors) as first-class objects.

Right now, "convert a.jpg b.png" passes the strings "a.jpg" and "b.png" to ImageMagick's convert command, but that command is free to open whatever files it wants, since there's no distinction between strings vs resources such as filenames, network sockets, etc.

You can do this with pipes in simple cases, and bash has some support for this, but it's very primitive and the syntax is cumbersome.
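
A toy Python illustration of the object-capability idea: the callee receives already-open handles and has no ability to name or open anything else (the convert function here is a stand-in, not ImageMagick):

```python
import io

def convert(src, dst):
    # Toy "converter": it can only read/write the handles it was handed.
    # It never sees a filename, so it has no authority to open other files.
    dst.write(src.read().upper())

# The caller holds the capabilities and decides exactly what to pass in:
a = io.StringIO("hello")   # stands in for an opened a.jpg
b = io.StringIO()          # stands in for an opened b.png
convert(a, b)
```

A capability-aware shell would do this wiring for you: "convert a.jpg b.png" would open the two files itself and hand the process descriptors, not strings.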

Although, through a long career, I've become accustomed to a wide variety of CLIs, I tend to agree: We've moved beyond 1999 a bit. The fact that a lot of these systems have legacy and tradition isn't sufficient. Languages like COBOL also have a long history. But now we have lots of other languages to choose from.

First off, as much as possible, I'd eliminate non-words. I tutor a lot and people think perhaps I have a speech impediment the first time I say "grep". (Yes, I know, it's not technically part of the shell. But okay how about "fi" and "esac"? Really? Why not "enod" then?)

I'd take command completion further: IDEs often have drop-downs and function signature descriptions as you move along coding. In a hypertext world with tooltips, etc., I think there'd be a lot of room for those types of ideas.

I do think the philosophy of "do one thing and one thing only, but do it well" is a good one. Piping stuff together is wonderful. I wonder if it would make sense and be shorter to explain in plain English, if the "water flowed the other way" instead. For example:

"sort the output of the find result" = sort | find

as opposed to:

"take the output of find and sort it" = find | sort

(That last bit was just a random thought of the moment, just trying to think a wee bit outside of the box.)

I've tried using Elixir's iex as a login shell. It was an interesting experience. The pipelining in Elixir actually makes it feel a lot like bash. In Elixir, remote shells [1] and autocomplete are built in. Strings are binaries. Structs help structure output for human vs machine use (e.g. see HTTPoison.get("http://example.org") vs. HTTPoison.get!("http://example.org")). Processes are cheap. Functions are composable. Rock on!

* Embeds a full programming language - but you don't need that language to run straightforward commands

* Completely embraces parallelism without pain

-- Including being able to spin off and manage daemon processes

-- And having a painless way of managing (plumbing, in other words) inputs and outputs of each stage - {Uni,Linu}x shells haven't really improved the way they do this in over thirty years, and they didn't get it right the first time!

* As much as possible of the operating system's capability should be exposed, to as deep a level as possible, through the CLI

* Not tied to one operating system - inherently capable of being applied to any (or at least any common one)

My preference would be that the CLI presents a genuinely object-oriented representation of data and the programs that know how to interpret the data, plus the operating system services as well.

Microsoft's PowerShell satisfies a lot of these requirements...however, it falls way short of the "Low bar for entry" criterion, as anyone who's tried to learn it casually will have discovered the hard way.

A Pythonic shell could also be made to work (yes, I've tried using Python as a shell. No, it didn't go well), but would require a bit of nurdling to bring it into compliance with most of the requirements above, particularly the human interface ones.

A lot of questions here - a lot fewer answers, but I thought I'd throw out these thoughts for the assembled masses to contemplate, be inspired by, or tear to pieces.

Please build me an app that lets me search "prebuilt commands" from the command line itself. I am always opening Chrome to google "how to grep recursively only files of a certain type to find phone numbers"... why can't I just search that from the command line itself and then select from the outputted suggestions?

A standard method for programs to provide help, examples, and tab completion suggestions to the shell. Right now we have man, info, --help, -help, -?, etc, and tab completion is part of the shell rather than the apps. If all of that was standardized the tools could be much better.

There should be a separate output stream for humans, instead of a flag for it. Programs that are last in a chained series should default to it, but you could also redirect things to it in the middle of a pipe.
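
One convention that approximates this today is checking whether stdout is a terminal. A Python sketch (the emit function and record shape are invented):

```python
import json
import sys

def emit(records):
    if sys.stdout.isatty():
        # end of the chain: aligned columns for humans
        for r in records:
            print(f"{r['name']:<12} {r['size']:>8}")
    else:
        # mid-pipeline or redirected: one machine-readable record per line
        for r in records:
            print(json.dumps(r))
```

This is roughly how ls decides whether to columnize output; the complaint above is that a proper design would make the two streams explicit rather than inferred from a tty check.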

I have used IRC in some open source projects for a good number of years. Before that I hadn't heard anyone (within the set of developers I worked with) even talk about it. To me, it seems like one of those things that is too nerdy for most developers to have even tried in their day-to-day communication. To date, I haven't seen any of the companies I have worked for or known about use IRC as their internal or support communication tool.

Just to be clear, I don't hate IRC - to me it's just another "chat" tool, and it serves its purpose most of the time. One thing I would have liked is built-in support for some kind of formatting for lengthy texts like logs. We used to use (and still do) external text-hosting sites like pastebin, Gist etc. to share such logs/code snippets, which I never liked.

And, of course, Hacker News is a really bad name for a community of software entrepreneurs, for a number of reasons.

First, it encourages people who understand the correct meaning of the term to associate this place with the art of computer programming, leading them to submit articles about Haskell as though they belonged here. There are a lot more people who care about that sort of thing than people starting software businesses (startup or otherwise), so we get a bit lost in the noise.

The big one, though, is that "Hacker" doesn't mean what computer folk know it to mean. To anybody but us (meaning roughly 100% of people), it means "Criminal Who Breaks In To Your Computer". That's what we associate ourselves with by naming this site the way we do.

I'd much rather go back to being "Startup News". At least then our only concern would be convincing people to stop posting general tech news here.

>Is Niantic allowed to spawn virtual creatures in the coordinates of the land I own?

<pedant>Those creatures aren't actually in those coordinates. They appear onscreen when your device is at a certain set of coordinates, but they never actually 'leave' the app in any meaningful way. Since there's really no such thing as a "virtual space of a physical space" for them to reside in, the question is moot. </pedant>

I think so. It's just an RNG on a server somewhere. Physically, they aren't putting anything in your house. Heck, I'm not sure if the RNG is centralized or not. Pokemon locations may be generated locally. Given the server load, I'd certainly do it that way.

On my statement, the charge from, e.g., my local grocery store doesn't appear as coming from a particular store; it seems to come from some central corporate office. You'll also see this problem with things bought online, at certain big box stores or kiosks, and with airlines/travel.

My naive intuition is that sharding on two or more axes with some denormalization makes sense: e.g. sharding on both geospatial location and information layers. Infrequently modified elements that overlap several geospatial regions could be stored alongside each. This implies eventual consistency and high availability. On the other hand, some elements might need higher consistency and therefore have lower availability.

Which is to say that the proper architecture is one that allows accurate metrics and high levels of tuning based on actual use and application requirements.

Having to query against 2 or 4 nodes is not bad, because you can and should run them concurrently, so you've still got the latency of one query. I wouldn't want to overlap data because that opens a new door for inconsistencies to occur.
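The fan-out idea above can be sketched in a few lines. Everything here is illustrative - the geohash prefixes, node names, and the `query_node` stand-in are hypothetical, not any real system's API:

```python
import concurrent.futures

# Hypothetical shard map: a coarse geohash prefix routes each element to a node.
SHARDS = {"9q8": "node-a", "9q9": "node-b", "9qb": "node-c", "9qc": "node-d"}

def shard_for(geohash: str) -> str:
    """Route by the first three geohash characters (one coarse cell per shard)."""
    return SHARDS.get(geohash[:3], "node-a")

def query_node(node: str, bbox) -> list:
    """Stand-in for a network call to one shard; returns fake results."""
    return ["%s:%s" % (node, bbox)]

def fan_out(bboxes_by_geohash: dict) -> list:
    """Query every relevant shard concurrently, so end-to-end latency is
    roughly that of the slowest node, not the sum of all nodes."""
    targets = {shard_for(gh): bbox for gh, bbox in bboxes_by_geohash.items()}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_node, n, b) for n, b in targets.items()]
        results = []
        for fut in concurrent.futures.as_completed(futures):
            results.extend(fut.result())
    return results
```

Because the per-shard queries run concurrently, hitting 2 or 4 nodes costs about one query's latency, which is why the fan-out isn't the scary part; cross-shard consistency is.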

I'm currently reading through 'The Machinery of Life' by David Goodsell, and I'm really enjoying the intuitive explanations it has of the basic workings of cells. It's not a textbook by any means, but it provides a gentle intro to thinking about biological systems and seems like a great starting point.

I would leak stories to the press about how we have a secret internal fund that generates mind blowing returns year after year thanks to our army of math phd geniuses but is closed to the public. (Yet conveniently publishes its returns publicly.) Yea, that's what I would do.

Only accredited investors can invest in hedge funds. The requirements for becoming an accredited investor, as described by Rule 501 [1], state that an individual's net worth, excluding their primary residence, must be over $1,000,000, or their income must be over $200,000.

I think one of the biggest obstacles to "growth hacking" a hedge fund is the regulatory requirements. [2]

Are you really a hedge fund in that you actively hedge your positions or are you just an actively traded long-mostly fund that calls themselves a hedge fund because it's a trendy label? What benchmark do you compare yourselves to? Are you sure you haven't outperformed the market simply because REITs have been such a strong asset class since 2010? It's extremely unusual for any actively managed funds to consistently beat their benchmarks net of fees, and those that do, typically have no problem attracting huge amounts of capital (to the point where they lose their ability to outperform). A Random Walk Down Wall Street by Burton Malkiel [http://fave.co/29ZnkN6] is a fantastic book for anyone who wants to learn more about why this is.

An RRSP in and of itself is not a fund type. Similar to a 401(k), it is an account which holds investment positions (e.g. mutual funds, ETFs, single-stock positions).

As for 'growth hacking', that's really not a mindset I would recommend for any investment strategy. First, think of the asset class (real-estate), the investment strategy and the securities/investment products involved: can you scale them up and be as profitable? If you can demonstrate that you can, then growing the AuM shouldn't be too hard. 'Growth hacking' sounds like smoke and mirrors - I wouldn't recommend that terminology with investors.

Caruana - as a more constructive comment, I would say your question is more one of marketing than of investment strategy. In that sense, one piece of advice I would give is to focus on customer education. The majority of funds, whether institutional or retail, do a poor job of educating clients. Especially with technology involved - a client who understands, is comfortable with, and is truly engaged by your technology will probably stick around longer in the not-so-good times. From a sales point of view, education is a differentiating factor. My 2 cents (Canadian).

I would reach out to the MrMoneyMustache/Ramit Sethis of the world if you can offer a compelling pitch to their target audience (24-40 years old, tech-oriented) about why they should start funneling some of their investment dollars into your fund (vs. a Vanguard target date fund).

Other options: ad targeting people that are customers/users of Vanguard, Betterment, Wealthfront, Robinhood, Simple, etc

Could you go into detail about how you are technically advanced relative to your competition?

Would be curious to hear what that entails / differentiates you from other investment firms in the same space. The alternative data space taking off among hedge funds is pretty interesting, and I'd be interested to hear how a real estate fund approaches it.

This happened to me and you can see the community response in my submission history. First, this totally fucking sucks and I am sorry this happened. The short term technical fix is pretty trivial, but there are some important gotchas:

* depending on what data was there, it may be possible for them to file an early tax return and capture your father's tax credit, if applicable. File on the earliest appropriate date.

* monitor his credit. This can be done rather cheaply, or you can pull the reports manually.

* often, they will capture the contacts of all of your father's correspondents. This is a common vector for follow-on scams: they typically use it to spread the infection or to ask for money in the typical "relative in danger abroad" type scam.

* they may reinfect the computer periodically and ask for follow-on "help".

As mentioned, it is probably a good idea to wipe the entire hard drive and reinstall the operating system after backing up only necessary files/data. I ended up buying a Mac for my grandmother, but others suggested Chromebook/ChromeOS-type machines, as they limit the user's ability to compromise the machine, with a limited tradeoff versus her actual requirements.

As someone who had to deal with this about 2 months ago: if you have any follow-on issues, feel free to shoot me an email: klevvver@icloud.com

The degree of compromise depends on the sophistication of the attackers. It might just be unnecessary credit card charges. It might be that plus full system control.

My take is that the only way to assess the level of compromise is to see if Intel's Active Management Technology [1] has been enabled. That means looking at the BIOS. It might be easier to ship the computer back and forth than to walk through diagnosis and repair.

My 2p's worth: Think about a screen that can be mounted vertically, as most websites are now designed for phone orientation.

Also use a browser where the user-agent string can be switched to mimic Android or iPhone - this will trick websites into showing the mobile-friendly versions of pages, which typically have larger text sizes by default.
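For scripted checks (as opposed to flipping a setting in the browser), the same trick is just an HTTP header. A minimal Python sketch; the UA string shown is one example iPhone string, and any current mobile UA would do:

```python
import urllib.request

# Example iPhone user-agent string (illustrative; any current mobile UA works).
MOBILE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 9_3 like Mac OS X) "
             "AppleWebKit/601.1.46 (KHTML, like Gecko) "
             "Version/9.0 Mobile/13E233 Safari/601.1")

def fetch_as_mobile(url: str) -> bytes:
    """Request a page with a mobile UA header; UA-sniffing sites will
    usually respond with their mobile layout."""
    req = urllib.request.Request(url, headers={"User-Agent": MOBILE_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```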

If he's a Facebook user, then https://touch.facebook.com has larger text sizes and less on-screen clutter, can be easily zoomed, and works on desktop browsers.

You are going to have to define what you plan to do with the information. E.g., let's say you are harvesting emails, addresses, and contact information for construction contractors, and then you create a database from it and sell access to that information to homeowners looking for a contractor in their area. Your company is not sending unsolicited emails to the contractors, and in a sense you could be seen as a lead aggregator that is providing them with valuable business leads.

A different scenario: you do the same harvesting, but you contact each contractor to determine if they want to join your lead generation business (or whatever you are planning). In this case, yes, the email could be considered unsolicited, but it is specifically business in nature and not a scam or spam, and the company was advertising the email address/phone so it could be contacted for specific business purposes. You will need to understand the specifics of Canada's anti-spam laws, but almost all include exclusions for genuine business purposes. If you are contacting the business to sell Viagra, then you are in obvious violation, as a general contractor's business has nothing to do with Viagra.

All that said, to stay 100% above board, still talk to an attorney and get a reasonable legal opinion. Just remember this: if you talk to an attorney and do it anyway after you have been advised you are breaking the law, you lose the one-time ignorance ("oops, sorry") defense. While ignorance of the law is not an excuse overall, most of these types of "violations" are excused with an oops-sorry defense and maybe a small fine; it's not the same as committing an armed burglary and saying oops, sorry, I didn't know that was against the law.

As for your reputation, that is more important than what a VC thinks of you at this point. Guard it more carefully than you care what one or even ten VCs think initially. In the end, if your methods work, you gain traction, sales, etc., and you have not shown a reckless reputation, the VCs will look at and judge you on the results. There is a fine line between reckless and forward-thinking or aggressive; just find it and walk it the best you can.

Startup founders who "end up like Parker Conrad" are doing relatively well...in the context where "startup" doesn't mean any old new business but instead means a company structured so as to allow exponential growth. On the other hand, a local business probably depends more on reputation.

In any case, where reputation counts is among customers and reputation among customers closely trails product and service and less closely trails things that the customer doesn't care about. Of course if this weapon thingy isn't actually the product then spending time on it is a bit of a distraction. If it is part of the product but not part of making users love it, it's also a bit of a distraction because it puts a technology that scales in the middle of trying to find product market fit.

Among the reasons not to do Ycombinator would be a desire for a business that isn't built on the startup model, where "startup" is used in the specific sense it's used among Venture Capitalists and not the more generic sense of "new business".

Some people want a business that lets them focus on performing the service rather than running the business, e.g. a consultancy, a workshop, etc. A chef might open a restaurant so that they can create a menu and stand in front of a stove and talk to diners enjoying the work of their hands. That's different from opening a restaurant in hopes of franchising it so as to become a restaurant industry executive.

Well incubators are really cool for a younger person with no family and kind of not good for older people with houses and families. I would be interested in hearing about anyone 35+ with a family and house who made YC work.

So that would be my guess on a large segment of the population that don't pursue incubators. And if you don't apply, obviously you won't have an offer to accept or reject.

Well, the right question is why one would pick a different incubator over YC. There's a multitude of great reasons for not going with an incubator at all, or for going with an angel that you really love. I'm sure some might pick NYU over Columbia, but it's rather rare. I doubt there are many that turn down YC for another incubator. When they do, the reasons are probably pretty specific to the company's/founders' situation.

It's good for young people same as college. The value is in credentialing (and notch in belt confidence building). You learn most professional lessons while doing your job which has little to nothing to do with YC per se.

Any incubator or accelerator being the right fit is really dependent on the company.

None are right for every company, and as you may know, incubators and early stage accelerators have exploded in growth over the past couple years (there are so many now, and not all are worth your time).

Vetting that any one of them has worthwhile resources for you is important. Mainly you're thinking of long-term benefits like the network they provide, and not shorter-term ones like how much cash they exchange for 3-10% of your company.

The best way to get advice about any accelerator is to talk to founders who have already gone through it about their experience. You can easily find YC companies and early- to mid-stage founders are more likely to reply to your emails. Though a referral from a mutual connection would be best, you can reach any YC co with the founders@ email.

By the way -- this advice is about improving your odds in applying to YC, once you're at the offer stage it's a different question. I don't know many founders that would turn down the YC offer after being accepted.

Yes, if you look at bigger companies like Google, teams and individuals have Key Performance Indicators (KPIs) that are tracked and used to evaluate performance. The trick is to define your KPIs to align with the organization's values. Some useful metrics to look at in terms of software development are: Mean time to delivery, mean time to recovery, bugs per line of code, number of deploys per time period. Really it comes down to delivering business value, usually defined as return on investment (ROI).

I run this website: http://step2scheduling.com - it solves the problem of scheduling the Step 2 CS exam for medical students. It uses Selenium to refresh and check for schedule openings and automatically book tests for students. I coded it up for my girlfriend some time ago when she needed to schedule a test and couldn't find an opening. Now I'm making $500/month for doing next to nothing.
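The poll-and-book pattern behind a service like this is easy to sketch. This is a generic loop, not the actual site's code; the site-specific steps (driving a browser with Selenium, parsing open slots, submitting a booking) are passed in as callables:

```python
import time

def poll_for_opening(check_slots, book, interval_s=60, max_tries=1440):
    """Generic poll-and-book loop. check_slots() returns a list of open
    slots (empty if none); book(slot) submits the booking. The scraping
    itself (e.g. Selenium refreshing the scheduling page) lives inside
    those callables."""
    for _ in range(max_tries):
        slots = check_slots()
        if slots:
            return book(slots[0])
        time.sleep(interval_s)
    return None  # gave up without finding an opening
```

With a 60-second interval and 1440 tries, the loop gives up after roughly a day; in practice you'd run it under a supervisor and add jitter so you aren't hammering the target site on a fixed cadence.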

I built a drop-ship site, it started as a way to play with some ideas I had at work. As a platform, I think it can scale pretty well... it has a super flexible EDI system allowing me to automate suppliers, inventory system, custom website, order management system, advertising feeds etc. It's built to handle thousands of suppliers, products, and orders, and has all the bells and whistles.

But I only really make enough to afford enough beer to wash away my regrets of spending so much time building this thing.

When I am not doing software development, I like spending time hiking and backpacking in the outdoors. In 2014 my family and I successfully thru-hiked the Appalachian Trail. To fund the project we produced a video series about the hike, which we sold on a subscription basis. We launched a Kickstarter project to pre-sell a bunch of subscriptions (https://www.kickstarter.com/projects/dtougas/beyond-our-boun...). We now have the video series for sale on Vimeo (https://vimeo.com/ondemand/boundaries) which sells a little bit of something every month (not a lot).

My next side project has been to create a social/micro-blogging platform for outdoor enthusiasts (https://outsideways.com/). It currently isn't generating any revenue (it just launched in May), but I have big plans for it if/when it gets a core group of users contributing regularly.

Generating about 16k/mo after about 6 months building an Amazon FBA business selling private label goods from Alibaba. Spend about 1 hour per week managing inventory, the rest (shipping, returns, support) is automated through Amazon and other various services.

I built http://www.whatsmyua.com/ for myself back in college; it reports and breaks down browsers' User-Agent strings. It now brings in $20-30/month via AdSense.

I also built https://github.com/nfriedly/node-unblocker around the same time, it's a web proxy for evading government/corporate/school/etc. filters. When I had a copy of it online, it got a lot of traffic from around the world, and used to earn anywhere from $10 to $100/month from adsense, depending on the month. However, it wasn't worth the effort of keeping it online due to abuse + clueless sysadmins.

Unblocker also earned me a consulting fee a couple of times from folks who wanted help integrating it into their project, and I ended up converting it from a standalone site to an Express middleware to make that easier.

https://hackerlists.com/ - The goal is to create awesome lists of hacking & programming related resources for people to reference. I've had a couple of lists do really well on social media so far [1][2], but I'm quickly learning that my knowledge limitations are preventing me from creating the quality of lists that I want the site to be known for. Therefore, I'm starting to put my energy into finding a freelance writer with a technical background that can write at the quality level I want for the site. But so far I've broken even in the first month.

I don't have a project but I have always been wondering if there is some open-source style software (with a license that allows this of course) that would let me customize the software and sell it to other businesses and get small recurring income as a SaaS product (like $70 a month). Think small restaurant inventory management, pricing, CRM, software for real estate brokers to manage clients, dentist online appointment tracker etc. etc.

I'm running a website called Narrow (http://narrow.io) on the side. It's a Twitter marketing automation company that was started not for the love of Twitter but the desire to get the benefits of Twitter without having to do a lot of work. Lazy, I know, but I'm a programmer after all. Let the computer do the work.

I built my first SaaS app http://sharpplm.com, Quote Management for manufacturing companies. It has been pretty much autopilot and is still making $2k per year.

I am currently working on http://gemssports.com. It is activity, payments, member management for sports companies. Think sports trainers, tournaments, softball teams, etc. We are just getting rolling but having some good success.

I have a 25% stake in the company, but I don't get much money myself - less than $1000/mo on average. This is because of our thin margins, and because my other business partners own the other 75%: 50% is owned by the founder who created the sites back in 2008 using cheap outsourced labour that I was brought in to rewrite, and the other 25% is owned by another person for historical reasons. Furthermore, because none of us spend much time on it anymore - or get much money from it - we're all very hesitant to spend money on marketing or advertising (we got marketing quotes starting at $5000/mo, and my co-owners are unconvinced that advertising would help at all), hence the impasse.

We do plan to create some new additional sites and services that have higher margins, including launching a "Twilio-lite"-like API which would offer much less functionality (as our backend is not as sophisticated as theirs) but would also cost less per-call. But that, too, would require marketing to get the word out.

If anyone here is a VC or is otherwise interested in investing for marketing, or is a marketing expert yourself, or interested in using our system - we'd be happy to talk a deal.

We did sell one of our older sites to a web publishing company a few years ago that went for almost a six-figure value - but unfortunately for me, that was after I wrote the entire stack, but before I was granted part-ownership (so I was just paid for my time: 3 hours to set a clone of the system up on the buyer's infrastructure).

One of my "rivals" - the creator of WakeUp.io posted his site to last month's "Show HN" thread, now it's my turn :)

Written in F# using the Suave.IO functional web libraries. It will be a site coaches/athletes can use to view/analyze data from their workouts, races etc. Lots of tables, graphs. Runs on Linux cloud vms.

I run a small platform for a couple of web designing friends. They can deploy webshops at the touch of a button while having full control over the design. It saves them the effort of having to ask more techy folks for assistance, and it lets me focus on more interesting things.

Started off at 3k EUR revenue a year, currently at about 2k. It doesn't cost me any effort.

None anymore, my new contract explicitly disallows side ventures. But about a year ago a friend and I built a screenshot-sharing service when CloudApp raised their prices. It's basically a dock app that watches for new screenshots and auto-uploads to the cloud, the same thing Dropbox and tons of others do. We never made money off of it, but learning stuff like how to incorporate and some new tech was worthwhile.

In all honesty, I find that when people working full-time jobs look for side projects, either they're not happy with the job they have or their job underpays them. (That's not to say that working on fun side projects that happen to lead to revenue is a bad thing, but seeking side project revenue is not a great indicator.)

I don't care if you go PC or Mac, but this is not typical MacBook Pro behavior. I've owned 4 since 2008 and have gotten 3 separate ones in that time frame for work, so I've had damn near every single MacBook Pro that has been available in the last 8 years. I currently have a mid-2014 MacBook Pro with 16GB RAM, 2.5GHz, 512GB. It can get hot on occasion, all of which is totally explainable:

1) I have Windows 10 installed running on Fusion - 'nuff said. Actually I'm sort of kidding; Win 10 is a lot better than previous versions when it comes to heating up the Mac. I do have SQL Server Developer installed and do a lot of data-heavy development, which is what really cranks up the heater.

2) I use Logic Pro and have a lot of DSP running in the form of plugins.

Either way, it gets hot, but it doesn't slow down or cause me issues. However, I would say the first thing you need to look at is: do you have that aluminum case wrapped in a heat blanket/"protective case"? If so, you are not letting the aluminum do its job and dissipate the heat. If not, then something is faulty on that MacBook and I would have it looked at. Again, the behavior you describe is not normal.

Activity Monitor may show that a process named kernel_task is using a large percentage of your CPU, and during this time you may notice a lot of fan activity. This process helps manage temperature by making the CPU less available to processes that are using the CPU intensely. In other words, kernel_task responds to conditions that cause your CPU to become too hot. When the temperature decreases, kernel_task automatically reduces its activity.

There's a high chance it's reaching these temperatures so easily due to dust buildup in the MacBook. If it's possible to open it up, I'd recommend doing so and dusting it out. If it's not possible to open it up, you may have to take it to an Apple Retail Store and have them do it for you.

Hey, I faced exactly the same problem with my MB Pro (late 2013) a couple of months ago. I tried all the solutions I found on StackExchange/Apple forums, but with no luck. Then I installed Ubuntu and have had few problems since. Check out any online tutorial for installing Ubuntu on your MacBook.

Serious question - what's the ambient air temperature of your working environment? I have a mid-2012 MBP Retina, 1st generation. I've experienced similar problems with it, but only in the summer. In my case it's a personal machine being used at home. I'm the kind of person who doesn't turn on the AC until the ambient air temperature in the house gets into the upper 80s. My MBP will have the fans whirring and even get hot to the touch in those conditions. When I finally turn on the AC, its behavior goes back to normal. My conclusion is they don't take heat very well. Meanwhile, the Macs I use at work never experience these issues, since it's a climate-controlled office.

All laptops are tradeoffs, especially performance versus battery life. I have been very happy with my Dell M3300, which is a MBP clone (15-inch "retina" screen and large buttonless trackpad). Great performance, excellent build quality, everything just works. I believe the current model is the Precision 15 5000, which I haven't tried. Battery life leaves a lot to be desired; that's the design tradeoff for the Precision line - perf over battery life. Works great for me because I always work plugged in. Not viable for work on a long flight.

The EXACT same thing happened to mine; it took 3 visits to the Apple Store before someone finally opened it up completely and dusted the insides. The computer is running like new again. Dusting at home without opening it up didn't work.

MAKE SURE THEY DUST THE INSIDES. They do it for free, even if you don't have AppleCare anymore.

I'd agree with the Thinkpad T series (namely the T--p range, like T460p) and what used to be the Thinkpad W series which is now the Thinkpad P-- series (like the P50) which are "workstation replacements." Heavier but also much more powerful.

There's also Microsoft's Surface Book, which is wonderful, but a little too expensive in my OPINION.

I've also had personal success with Asus's ROG range of super-heavy laptops (e.g. G752VT). They literally weigh 10 pounds(!), but the cooling is incredible. They're designed for "gaming", but due to the raw power and ample cooling, they're wonderful development machines no matter the workload. They're definitely only for "around the house" levels of mobility.

All Mac products come with a year of Apple Care, even if they're refurb units. If you purchased it within the past year I highly recommend getting in touch with Apple support about the problem - what you're describing is far outside the normal behavior of a MBP.

I have the Lenovo W541 w/ 3K screen, and am VERY happy with it. Battery life is low, at 1-2 hours, but the 32GB of RAM is essential when running VMs (though, admittedly, now that we've gone to Docker, my memory requirements have drastically reduced).

The one issue I have is that the keyboard is off-center. (with a number pad on the right). Installing Ubuntu was super easy, no tweaks necessary for the nvidia/intel combo (unlike the W530). Power Management is weak, at 1-2 hours... but I haven't run up against that enough to look up what I can do to make it better.

So, a long-time user of Lenovo laptops (10 years)... still great laptops. I look forward to the P series (which replaces the W series) with the Xeon processors.

At the dayjob a little earlier, we were talking about office move and I was asked what kind of laptop I have. I couldn't even remember the brand. It's just a featureless, bland chunk of black plastic that'll die of some mechanical failure within three years. And I got it in February (as a replacement for the previous featureless chunk of black plastic that died).

I thought it was HP. Turns out it was a Dell. C'est la vie.

For the OP, what you're experiencing is not typical MBP behavior. Something toxic in your environment, probably in a browser. If you use Chrome, try switching to Firefox, or vice versa (I find Safari is actually the most robust browser I've used). If you want to be radical, try setting up a partition and booting into Linux or Windows instead of OSX.

Nobody makes better hardware than Apple, period. The only thing I've seen that even comes close is a Microsoft Surface Pro. Most PCs are junk.

Clean it out? Blow out any dust from the fans, either by simply blowing into the air vents (output vents, of course - you want to blow the dust out through the input vents), using a compressor to do the same (at a reasonable pressure: you want to clean it, not air-cut it nor use the fans as generators), or using a can of compressed 'air'. This has restored many a sluggish laptop to working order here.

If OSX is not your thing you might want to consider installing something else on the thing, some Linux of choice or Windows if you so prefer.

Otherwise just get an old Thinkpad, they work and keep on working - typing this on a T42p, 12 years and counting...

I've never had such a flawless Linux desktop experience. Almost everything just worked out of the box, it's super-zippy, and the laptop is so light and portable. Never heard the fan yet.

But if you want to use an external GPU, this is not the system for you. Also, if you get the hi-res screen, you're forced to play the LCD lottery (LG vs. Panasonic; one has PWM, and you don't get to pick).

Desktop Linux has really come a long way. Just in time too, now that MS is probably gonna force us to 'subscribe' to Windows, send us ads, etc...

I have had heating issues on laptops in the past that were caused by a poor thermal interface between the CPU cooler and the CPU. Either the thermal interface material was insufficient, or just not there at all. I was able to solve it myself by disassembling the laptop, removing the CPU cooler and cleaning the cooler and CPU heat spreader with alcohol before applying an appropriate amount of quality thermal paste (I used Arctic Silver of some sort... IV? V?)

Can confirm that I've had the same problem with my El Cap / 16GB MBP system too. I still suspect it was a faulty USB network connection that caused it to throttle so much, but in the end they diagnosed a faulty heat sink on the board and replaced it. I've not had any other problems since then.

As to your question, I'd recommend the Surface Book. I got one last December and couldn't be more pleased with it. Hi-res screen, good RAM + HD space, fast processor, keys feel nice to type on...the list goes on and on :)

Surface Pro / Surface Book. I presume Linux support on them is good enough now, but at least if you're stuck with Windows you can get a proper bash environment now without cygwin that I hear works well.

Something weird is going on, my friend. I have been using a 2012 rMBP 15" w/only 8GB of RAM (ordered the day they were announced before we found out the RAM was not upgradeable, whoops) every day, 10 hours+ a day and never have any problems. The only time my fans kick on is when you'd expect; when I'm doing heavy load tasks like playing a game or doing anything using Adobe Flash. I can stream HTML5 video full screen for hours and the fans never become audible. I can run Xcode for hours and as long as my app isn't being a CPU hog, everything's kosher.

So, something is wrong with your machine. The OS X reinstall failing to resolve the issue points away from malware, and towards an environmental or hardware issue. If your machine is only a few months old Apple will take a look at it for free, I'd consider that option to rule out any hardware issues if you can't think of any obvious environmental causes (Do you smoke? High temperature in your working area(s)? etc.)

My daughter has a laptop for school. She's tried Toshiba and HP, and on both she's had the hard drive die after a few months. (In fact, I think she's never not had the hard drive die after a few months - she's never gotten a year out of one.)

She doesn't seem to drop it or step on it or pour liquids in it or any other form of obvious abuse, but the hard drives always die.

I've got a Dell at work, but I use it almost as a second desktop - I (almost) never move it. It's been quite reliable. (If I lugged it around, though, who knows?)

That said, the ones that sucked the least for my lifestyle of web development, writing and email are (in order):

1. X1 with Ubuntu 14.04

2. iPad Air 2 paired with a linux VPS w/remote desktop

3. Chromebook

The X1 takes the cake because of the weight and the ability to have a full OS on it. The iPad A2 comes in as a close second, only because it requires a bluetooth keyboard and internet to be useful. The chromebook takes 3rd because I cheaped out and didn't get one with a 1080p panel in it. I'm convinced that a 6+ hour ultraportable laptop would displace the X1, and can't wait for my budget to let me pick up a refurbished Pixel.