Hello! I'm part of the Convox core team and happy to answer any questions.

Convox is an app deployment platform that you can install into your own AWS account. It uses ECS, ELB, Kinesis, and many other great AWS services under the hood but automates it all away to give you a deployment experience even easier than Heroku.

Convox uses Docker under the hood so if you want to customize anything (distro, dependencies, etc) you can simply add a Dockerfile to your project.

Holy hell is this a puff piece. Don't get me wrong, I'm a huge fan of SolarCity, but this story is written as if they invented the solar boom that's happening now.

Three major factors contributed to making solar as viable as it is now, none of which SolarCity invented (but it is executing on them better than most).

1. Germany basically subsidizing China (and others) to build the massive solar panel production pipeline it has now, which is producing ridiculously cheap panels[1].

2. Financial innovation to allow consumers and installers to pay for panels over time (PPA, leases, loans, etc.). These were originally pioneered by SunEdison[2], and the industry has been innovating on them since (including SolarCity with its MyPower Loan).

3. The California Solar Initiative[3] and Department of Energy's Sunshot Initiative[4]. Both were incredibly well designed to gradually wean solar off of subsidies and foster innovation through strategic grants, and they have worked incredibly well at teaching the solar industry to stand on its own two feet. We are well on our way to $1/watt[5].

SolarCity has executed incredibly well in this industry, so they definitely deserve tons of credit for installing so much solar. But it's a disservice to the rest of the companies, governments, and financiers to say they invented the recent growth. Everyone, including SolarCity, has come together to build a mature and exponentially growing industry that accounted for 1 out of every 78 jobs last year.

> An enclave is any portion of a state that is entirely surrounded by the territory of one other state. An exclave is a portion of a state geographically separated from the main part by surrounding alien territory. Many enclaves are also exclaves.

If I understand this correctly, the WaPo article really means to refer specifically to exclaves, not just to enclaves, throughout.

I wonder what life was like for people living in one of those enclaves. Were there actually passport controls at the frontiers? My guess is that the Indians living there were actually living just as if they were citizens of Bangladesh, but I could be wrong.

Thanks, you just gave me a powerful way to think about a type system. And that final diabolical implementation was proof. Even before you told us how to interpret it I managed to use your set theory approach to break it down.

For optimised ray tracing you don't beam the light from the camera, as that has the same chance of bouncing to the sun through indirect illumination as a ray from the sun has of bouncing to the camera.

What they are saying here is wrong, or rather extremely simplified for a younger audience.

Disney's global illumination tech is spectacular. The quality level since Monsters University is just leaps and bounds better than everything before it. Well, technically their short Partysaurus Rex was its debut, but Monsters University was the first feature-length use.

This epiphany he writes about sounds almost religious in nature. Did he really not consider writing before that? Does his memory fail him? Is this a memory construct that makes for a good story but doesn't hint at what really happened? I've never experienced anything like this myself.

I often tell people, when asked about why I do what I do, that I knew I wanted to be a software developer at the age of 10. But it wasn't a sudden realisation. It came to me after getting an Amiga 500, playing games, and then realising I could make my own games. I vividly remember a summer trip to my aunt's place in Norway where I brought a book on BASIC and devoured a tutorial on how to make a hotel booking system. I was utterly enthralled and thought maybe one day I could make hotel booking systems for a living. From then on there was little doubt I would do this.

Except of course when I turned 13 and got into playing rock music on my guitar. I spent the next 7 years feeling lost, because I really wanted to be a musician but the career opportunities seemed unfeasible. It wasn't until after a brief stint as a data entry clerk in the UK that I came to the conclusion that I needed to get my shit together and go to uni. So I studied Computer Science, and here I am at the ripe age of 35 with 10 years of software development experience, thinking I always knew I wanted to do this. But I didn't, and writing this made me realise that.

Among my group of friends, I've found that everyone's favorite Murakami book is always the first or second they read. After you read a couple, they all start sounding the same. Another angsty young man searches for a girl with serious emotional issues, and there are some metaphor-heavy dream sequences and talking animals along the way. I haven't read his recent books, but I've read five of his, and it was really tiresome, repetitive reading after book 2-3.

Small anecdote: I was living in Taiwan in 1999 and found an English translation of Pinball, 1973 in a bookshop there. It was a Kodansha imprint, for English learners I think. Anyway, I'd already read a number of Murakami books by that time (Wild Sheep Chase, Dance, Dance, Dance and Wind-up Bird Chronicle), and enjoyed them so I snapped it up. It was an ok read but far from his best (which for me, still remains Wind-up Bird Chronicle).

Later, I researched that Kodansha edition on the net and realised that it had never really been made available in English speaking countries and was fairly rare. So I put it up for sale on eBay, probably in 2000 or 2001. Thought it might attract a bit of interest but was pleasantly surprised when it attracted a lot of interest and eventually sold for something like $300. Can't remember the exact final bid but it was around that. Was happy to sell it for that and still happy now. I doubt it's appreciated in value since then but I could be wrong.

Anyway, it appears from this article that Pinball, 1973 and Hear the Wind Sing are now finally being published in a generally distributed edition. I'm a little surprised, because it was my understanding that Murakami thought they were early, weaker works and wasn't particularly keen for them to get new attention.

I've read this story before, and often think about how if people don't have examples in life these 'epiphanies' are hard to come by. Non-fictional heroes may get more attention with the access to knowledge we have nowadays, but there's still the problem of focusing in and having the possibility made real in our own minds.

My story: the pretty big theater I worked for had a crackerjack Unix head running a Windows/Exchange environment like it was no big thing, all from an iBook G3. I was in the shower at some point in the late fall, and I thought to myself, 'I'd like to have a job/get paid to "fix computers"'. I sent away for Apple's tech training, someone's husband was starting a consulting company, and 10 years later... well, I like what I do and am lucky to have had as many advances as I've had in my short career. Workstation-level sysadmin may not be highly regarded, but at least the book I wrote wasn't by hand.

> A simple way to begin to control decline is to cordon off the blighted areas, and put an attractive façade around them. We call this strategy SWEEPING IT UNDER THE RUG. In more advanced cases, there may be no alternative but to tear everything down and start over.

I've found the most successful way of dealing with these code bases is definitely not this. It is to surround them with a body of end-to-end functional tests and then slowly refactor what is underneath, which mostly just involves doing this:

* De-duplicate wherever there is code duplication.

* Pull the different modules apart so that they are minimally dependent upon one another instead of tightly coupled.

* Replace re-invented wheels with higher-quality modules once they are de-coupled.

An unappreciated facet of dealing with big balls of mud is also that unit tests usually range from unhelpful to downright harmful. If you don't have self-contained, loosely coupled algorithmic modules to test, unit tests aren't helpful. They will almost inevitably mean tightly coupling your tests to architecture you know is bad, a mess of unnecessary mock objects and an inability to write tests for real bug scenarios that users report.

Disclaimer: I'm part Panamanian. That being said, I find this highly suspicious, and a lot of my Nicaraguan friends are also highly doubtful this will ever take off. The big problems for Nicaragua with this are:

- Horrible ecological impact (fisheries in the Caribbean are already [1])

- Many think this is an excuse for the politicians to get cushy land/resort deals along the proposed route and near the entrances to the canal [2]

- Some think it is really never going to materialize, but Chinese owned resorts will pop up on each end.

Looking at the proposed map of the canal how does this work when Lake Nicaragua [1] is freshwater? Even with locks I'm guessing salt water from the sea will slowly contaminate (seep) into the lake causing big changes for the ecosystem and local population.

I'll admit to being skeptical, too, but I seem to recall reading that people thought that the US was not going to be able to finish the original canal and that the whole project was foolhardy. I think the US was just more tenacious than France before it (also, people had learned about the role of mosquitoes in disease and sprayed for the mosquitoes).

China has a very long history of massive building projects, so if Nicaragua is willing, and given how similar the sentiment is to what surrounded the first canal, I'm hesitant to dismiss it.

My father is in his mid-70s and after a short stint in retirement went back to work about 5 years ago. He claims it's because of the money, but there's not a lot preventing my parents from selling their home and moving to a cheaper part of the country and living out their retirement years.

On the plus side, he was remarkably sharp and agile for a 70-year-old. But his legs are starting to fail, and that's made him age very quickly.

Now I can see it being about money (and health care): he's going to need knee replacement surgery and months of rehab, something my parents will struggle to afford.

He's lived a fascinating life, and is full of stories, but to me he's also a warning about the need to cultivate non-work interests and stash away enough money to enjoy a long retirement enjoying those interests.

As long as you like what you're doing and someone will employ you, right? Those folks probably have significant experience. Experience can't always trump creativity, but this is pretty cool. I'd not mind learning from a coworker in their eighties, you know?

<blanket statement warning> Silicon Valley's emphasis on young programmers and engineers in general is silly.

Anecdotal alert: all the people I know who went 'on pension' either started working again or died (at least 5 cases of high-profile managers having heart attacks or committing suicide within a year of their pension). Most people, including my father and grandfather, really did not like pressureless sitting at home. You do the vacations and the reading and the hobbies, but then, unless you are very dedicated (as with a job), there is a hole. I believe this will become a society-wide thing quite soon. Actually dedicating yourself to something (be it painting, coding, building, whatever) as a discipline is very hard and very underestimated by people who never needed it.

My father will be 70 next year and just went into semi-retirement (where semi-retirement = writing two books). I suspect if there hadn't been a coup in Mali, which resulted in a return to India, he would probably still be working full time. It's not about the money at this point. He's in good health and is sharp as ever, so doing something seems to make sense.

Kudos to those who go about it cheerfully, but I also fear it will be increasingly mandatory, as the top-heavy demographic crisis comes home to roost in many developed countries. By the time the median HN user is a "senior citizen", Social Security eligibility will start at what, 87?

The issue with not working is that if you have habitual activities during retirement (other than watching tv), those activities become indistinguishable from work. So you might as well just work. I'm only 28 but that's how I see it.

As I learned from Haskell, the correct answer really ought to depend on the domain. I ought to be able to use an unsigned integer to represent things like length, for instance.

However, we've apparently collectively decided that we're always operating in the ring of integers mod 256/65,536/etc., instead of in the real world, where we only operate there in rare exceptional circumstances [1]. So instead of an exception being generated when we underflow or overflow, which is almost always what we actually want to have happen, the numbers just happily march along, underflowing or overflowing their way to complete gibberish with nary a care in the world.

Consequently the answer is "signed unless you have a good reason to need unsigned", because at least then you are more likely to be able to detect an error condition. I'd like to be able to say "always" detect an error condition, but, alas, we threw that away decades ago. "Hooray" for efficiency over correctness!

[1]: Since this seems to come up whenever I fail to mention this, bear in mind that being able to name the exceptional circumstances does not make those circumstances any less exceptional. Grep for all arithmetic in your choice of program and the vast, vast majority are not deliberately using overflow behavior for some effect. Even in those programs that do use it for hashing or encryption or something you'll find the vast, vast bulk of arithmetic is not. The exceptions leap to mind precisely because, as exceptions, they are memorable.

Its behavior is not undefined, but it is not well-defined either. For instance, subtracting one from 0U produces the value UINT_MAX. This value is implementation-defined. It's safe in that the machine won't catch on fire, but what good is that if the program isn't prepared to deal with the sudden jump to a large value?

Suppose that x and y are small values, in a small range confined reasonably close to zero. (Say, their decimal representation is at most three or four digits.) And suppose you know that x < y.

If x and y are signed, then you know that, for instance, x - 1 < y. If you have an expression like x < y + b in the program, you can happily change it algebraically to x - b < y if you know that overflow isn't taking place, which you often do if you have assurance that these are smallish values.

If they are unsigned, you cannot do this.

In the absence of overflow, which happens away from zero, signed integers behave like ordinary mathematical integers. Unsigned integers do not.

Check this out: downward counting loop:

for (unsigned i = n - 1; i >= 0; i--) { /* Oops! Infinite! */ }

Change to signed, fixed! Hopefully as a result of a compiler warning that the loop guard expression is always true due to the type.

Even worse are mixtures of signed and unsigned operands in expressions; luckily, C compilers tend to have reasonably decent warnings about that.

Unsigned integers are a tool. They handle specific jobs. They are not suitable as the reach-for all purpose integer.

A key difference is whether the type is meant to be an index (counting) or for arithmetic. Indexing with unsigned integers certainly has its pitfalls:

while (idx >= 0) { foo(arr[idx]); idx--; }

But this is outweighed by the enormous and under-appreciated dangers of signed arithmetic!

Try writing C functions add, subtract, multiply, and divide that do anything on overflow except UB. It's trivial with unsigned types, but wretched and horrible with signed types. And real software like PostgreSQL gets it wrong, with crashy consequences: http://kqueue.org/blog/2012/12/31/idiv-dos/#sql

Unsigned overflow is defined but that doesn't necessarily make it any more expected when it happens. It can still ruin your day, and in fact it happens much more often than signed overflow, because it can happen with seemingly innocent subtraction of small values. After personally fixing several unsigned overflow bugs in the past few months I'm going to have to side with the Google style guide on this one.

Despite the fact that signed overflow/underflow is undefined behavior I'm pretty sure that many more bugs have resulted from unsigned underflow than signed overflow or underflow. When working with integers you're usually working with numbers relatively close to zero so it's very easy to unintentionally cross that 0 barrier. With signed integers it is much more difficult to reach those overflow/underflow limits.

This tells me that C's native integer type is ptrdiff_t, not size_t. (And I agree this is crazy: unsigned modular arithmetic would be fine, but they chose a signed result for pointer subtraction).

Why care about this? You should try to get a clean compile with -Wconversion, but also you should avoid adding casts all over the place (they hide potential problems). It's cleaner to wrap all instances of size_t with ptrdiff_t- you can even check for signed overflow in these cases if you are worried about it.

There is another reason: loop index code written to work properly for unsigned will work for signed, but the reverse is not true. This means you have to think about every loop if you intend to do some kind of global conversion to unsigned.

> g << h: well defined because 2147483648 can be represented in a 32-bit unsigned int

Even under those assumptions, it is implementation-defined whether unsigned int can hold the value 2^31. It is perfectly valid to have a UINT_MAX value of 2^31-1. In that case the code will cause undefined behavior.

The only guarantee for unsigned int is that its UINT_MAX value is at least 2^16-1, regardless of its bit size, and that it has at least as many value bits as a signed int.

There are a lot of corner cases involved here. The corner cases for signed integers involve undefined behavior, the corner cases for unsigned integers involve overflow. Any time you mix them you get unsigned integers, which can give you big surprises.

For example, if z is unsigned, then both x and y will get converted to unsigned as well. This can cause surprises when you expected the LHS to be negative, but it's not, because the right side is unsigned, and that contaminates the left side.

if (x < y * z) { ... }

This particular case gets caught by -Wall, but there are plenty of cases where unintended unsigned contagion isn't caught by the compiler. Of course, if you make x long, then y * z will be computed as unsigned, then widened to long, which gives you different results depending on whether you are on 32-bit or on Windows. Using signed integers everywhere reduces the cognitive load here, although if you are paranoid you need to do overflow checking, which is going to be a bear, and you might want to switch to a language with bigints or checked arithmetic.

As another point, in the following statement, the compiler is allowed to assume that the loop terminates:

for (i = 0; i <= N; i++) { ... }

Yes, even if N is INT_MAX. The way I think of it, your use of "signed" means that you are communicating (better yet, promising) to the compiler that you believe overflow will not occur. In these cases, the compiler will usually "do the right thing" when optimizing loops like this, where it sometimes can't do that optimization for unsigned loop variables.

So I'm going to disagree as a point of style. Signed arithmetic avoids unintended consequences for comparisons and arithmetic, and enables better loop optimizations by the compiler. In my experience, this is usually correct, and it is rare that I actually want numbers to overflow and do something with them afterwards.

All said, if you don't quite get the subtleties of arithmetic in C (yes, it is subtle) then your C code is fairly likely to have errors, and no style guide is going to be a panacea.

While the signed vs unsigned int question doesn't concern me much[1], I really appreciate this post because I discovered the One Page CPU. The author nerd-sniped me three paragraphs in. Now I want to try to implement this on an FPGA. I think my next few weekends will be taken up with getting a real-world implementation working and writing some examples. Thanks.

[1] If I was designing a system that could only have one type of int (but why?) I'd use unsigned ints and if I needed to represent negative numbers, I'd use two's complement which is fairly well behaved, much as the author points out. This is fairly common in the embedded world.

For what it's worth, all the bugs I've seen related to integer representation (not just over/under, e.g. I've seen code casting a 32bit pointer to a 64bit signed integer) could have been fixed by changing int to unsigned and never the opposite. Of course, this could be just the effect of the vast majority of programmers choosing int as the default integer type.

No matter whether you use signed or unsigned types, be sure you handle both underflow and overflow. From my experience, signed integers' main usages are integer arithmetic (e.g. accounting for things that can be negative or positive), pointer arithmetic, error codes, or multiplexing two different kinds of elements on the same variable (so a negative value has one meaning and a positive value a different one). Unsigned is for handling countable elements, with 0 as the minimum. The good point of using unsigned integers is that you have twice the available codes, because of the extra bit.

Example for counting backwards using unsigned types (underflow check; -1 is 111...111 in binary in two's complement representation, so -1 is equivalent to the biggest unsigned number):

This is something that I had been thinking would greatly benefit the arXiv for a long time.

Presently, when you do a scientific work, the article goes to a referee who then sends you back their comments which you account for before resubmitting. Sometimes you get an excellent referee who really knows his stuff and gives reasonable comments for improvements. Sometimes you get a guy who really just can't be bothered who gives minimal comments leading you to wonder if they've even read it. Sometimes you get an opposing group, which frequently leads to untenable comments and prompts submission to a different journal.

This, plus the citation count, is the only feedback you will ever get outside of your coauthors. In my opinion it would be amazing to have some big-name authors who have read your paper drop off advice: what they liked, what they didn't like, etc. At most universities, in most groups, you do "journal club" once a week, where you discuss others' papers and produce this exact feedback, but there's no forum to post it in, so it just stays in the journal club.

However, just as abuse on arXiv led to the transformation to an invitation-only site (I forget if you need an invite or just a university-sponsored email; see also vixra.com), the community on a site like this _should require your real identity_.

It could be devastating to a young researcher to have their work publicly shamed by an anonymous commenter who has it out for their research group. But if the comments are linked to real identities, I think the community will police itself... Although there are frequently unofficial "response to... " articles on the arXiv, they are publicly attached to other research groups, and you will sometimes see "response to response to ..." letters.

It's interesting to see these sorts of 2010s things popping up amongst our 1990s bastion websites like arXiv and ADS (see ResearchGate, the Facebook of scientists). But frequently they seem to fall victim to the same downfalls as their non-scientific counterparts ("cite" is the equivalent of "like" on ResearchGate, used to improve your "profile impact" metric, so you frequently get people acting needy about "citation requests" even though we have our own metrics, like the Hirsch index, to measure scientific productivity in an objective way).

TL;DR: I worry that a comment-based website could host troll-like behavior, which could be especially harmful when the whole premise is people's professional work. This is one of the few places on the internet where I think real names must be required and institutional affiliation should be provided (as is the case with arXiv). As it is, I signed up with a BS name and email in 5 seconds and can immediately start trashing this paper on quantum physics that I know nothing about.

Old usenet-head here (on it regularly from 1991, first met it 1986) ...

First problem: there's no identity authentication mechanism in NNTP. So spam is a problem, forged moderation headers are a problem, general abuse is a problem. (A modern syndicated forum system with OAuth or some successor model would be a lot easier to ride herd on.)

Second problem: storage demands expand faster than the user base. Because it's a flood-fill store-and-forward system, each server node tries to replicate the entire feed. Consequently news admins tended to put a short expiry on posts in binary groups so they'd be deleted fairly promptly ... but if you do that, the lusers can't find what they're looking for so they ask their friends to repost the bloody things, ad nauseam.

Third problem: etiquette. Yeah, yeah, I am coming over all elitist here, but the original usenet mindset was exactly that. These days we're used to being overrun by everyone who can use a point-and-drool interface on their phone to look at Facebook, but back in September 1992 it was a real shock to the system when usenet was suddenly gatewayed onto AOL, I can tell you. Previously usenet more or less got along because the users were university staff and students (who could be held accountable to some extent) and computer industry folks. Thereafter, well, a lot of the worse aspects of 4chan and Reddit were pioneered on usenet. (Want to know why folks hero-worshipped Larry Wall before he wrote Perl? Because he wrote this thing called rn(1). Which had killfiles.) Anyway, a side-effect of this was that when web browsers began to show up, the response was to double-down on the high-powered CURSES-based or pure command-line clients rather than to try and figure out how to put an easy-to-use interface on top of a news spool. Upshot: usenet clients remained rooted in the early 1990s at best.

These days much of the functionality of usenet (minus the binaries) is provided by Reddit. Usenet itself turned into a half-assed space-hogging brain dead file sharing network. And we know what ISPs think of space-hogging half-assed stuff that doesn't make them money and risks getting them sued.

USENET has always been used for porn and piracy, since at least the early 90s. Of course, most of the great newsgroups were discussions-based, but probably most of the bandwidth was porn and piracy.

When I was in college, I remember someone on my floor had written a program in Pascal that automatically downloaded porn off USENET. He would leave his computer running all the time, connected to the college's internet connection via modem, and we would occasionally see a flash of a porn pic on his screen and ask "What was that?". This was before the days of integrated TCP/IP stacks in the OS, so if I remember correctly he had to dial in via modem and then use something called Slurp, or something like that; I can't remember exactly now.

This continued all through the 90s. A bunch of my friends had Airnews accounts and downloaded mp3s and porn 24x7, during what we called the "Golden Age" of piracy, when Napster was starting up in 97 up until the early 2000s, when the bust hit.

At some point, the medium for discussions moved off of USENET and went to more user friendly places like email mailing lists, google groups, yahoo groups, reddit, etc. This left only piracy and porn on USENET, and I'm actually surprised that some ISPs still support USENET at this point.

Binary groups were huge, and users expected them for free. And users would download huge amounts of stuff. So it's pretty much a cost sink, and ISPs who tried to start charging (for this service that had dramatically increased costs) were faced with vigorous campaigns. At some point it's easier to just cancel and tell dissatisfied customers to get a new ISP if they're unhappy.

The number of groups distributing images of child sexual abuse created some risk (not every ISP is in the US), and things like stealth binary groups distributing porn put a bunch of people in oppressive regimes in tricky situations.

ISPs could have dropped binaries and only carried text groups. But this means putting up with groups of people with strongly held but conflicting opinions:

1) be a dumb pipe and provide everything

2) be a dumb pipe but filter spam with a Breidbart index of something or other.

3) make the news server operate to rules laid out in the ISP's ToS. (Young people may not realise it, but a lot of effort on the early Internet was spent on "what do we do if our users go on the Internet and start swearing?" Many ISPs had rules forbidding swearing. (At least, they did in Europe))

Then www forums sprang up, and they had some advantages: avatars, mods, etc.

(Background: I'm a Usenet user from the late '80s to the early noughties; I did outsourcing at netaxs as newsread.com, then ran readnews.com from 2004-2014.)

Usenet is still around but mostly for binaries. The market is of pretty stable size, dominated by a few large wholesale players.

My take on what happened with text groups is that the S/N ratio just went to hell. In the 90s the problem was spam, but in the 2000s the problem was too many loudmouths who wanted to hear themselves talk drowning out the useful experts.

Like some of the other folks commenting, I've been pissed as hell at the phpBB/vBulletin monstrosities. My original plan with readnews was to try to build a great web UI for discussion, but we got distracted by wholesale customers wanting service, and front-end is not my area of expertise.

For folks looking for something modern with promise, the news is good, with Discourse and a few others coming up. I would love to see something distributed, but if it were really distributed I suspect we'd see binaries and/or commercial spam and/or people with nothing interesting to say dominate, just like Usenet...

What Usenet did well was that it was completely decentralised, had zero cost of engagement (despite 'hundreds, if not thousands of dollars'), and was everywhere.

What Usenet did badly was that there was a complete absence of identity management or access controls, which meant no accountability, which meant widespread abuse; and no intelligence about transmitting messages, which meant that every server had to have a copy of the entire distributed database, which meant it wouldn't scale.

It's a tough problem. You need some way to propagate good messages while penalising bad messages in an environment where you cannot algorithmically determine what good or bad is, or have a single unified view of all messages, all users, or even all servers. And how do you deal with bad-actor servers? You know that somewhere, there's a Canter and Siegel trying to game the system in order to spam everyone with the next Green Card Lottery...

I found USENET and associated newsgroups to be better than the WWW, especially for discussions of software. I once even promoted the use of internal newsgroups within a corporate environment, where a history of topics (discussions, problems, and decisions) would IMO have proven extremely useful.

But the idea never got traction: people were unwilling to participate because newsreaders were too different from the browser and they'd had enough trouble learning to navigate the WWW. Once blogs and browser-based "newsgroups" and forums began showing up, the handwriting was on the wall. In the end, the WWW browser's low bar to entry ate USENET.

I still value the treasure trove of information stored in the archives. And some people still actively participate in USENET and other newsgroups, just as some still participate in IRC (Internet Relay Chat, which also is fading). I think these are valuable tools with a lot of greybeard expertise held in reserve.

There's a sort of Gresham's law of the Internet: "The browser drives out every other interface."

For ISPs there simply wasn't enough customer usage of NNTP servers to justify their continued existence. Five years ago, when I was working at a mid-sized ISP, only about 2% of our customers used our NNTP servers. We carried binary groups and offered pretty good retention/completion, but by then even the pirates had mostly ditched NNTP for torrents. At the time we estimated that we had maybe about a dozen customers accessing the server for non-binary, non-piracy use.

Going back further, to why NNTP became irrelevant for discussion: I'd say it was a combination of difficult setup for the average user and the lack of good free NNTP clients. Early web forums could offer discussion for free without the difficulty/expense of an NNTP client. As NNTP groups became more insular, the miserable trolls were able to take over and ruin it for everyone. Almost every group I was active in during the late 90s deteriorated in this way. Just one mentally ill and/or very lonely person posting 50+ times per day could very effectively destroy a group.

I skimmed the comments here, and never saw the real answer (to what I understand the question to be). Even though it was public knowledge, I had some extra insight from working for a large Usenet provider.

The New York Attorney General started a campaign against child porn groups on Usenet. In the end, his office identified a small number of groups they said were used for child porn -- I think it was fewer than 100 groups. Many ISPs jumped on the opportunity to stop paying for Usenet service.

In the '90s it was just assumed that ISP service would include Usenet. With the growth of binaries groups, the quality of service declined. I remember retention would be a day or two, with about 50% completion. So, for most ISPs, the service was unusable, and only a small number of subscribers knew or cared about it. The others paid quite a bit for service from a third party, like my employer. I don't know why ISPs didn't shut down service earlier, but once the NYAG campaign started, they could cancel Usenet, saving themselves money and getting good press for fighting child porn.

> It seems that the past 6 years or so saw most big ISP's dropping USENET support claiming mostly piracy concerns. Was it piracy or the fact that it's tough for the government to control what people say on USENET?

No conspiracy theories needed here.

Copyright infringement is one angle; the other is that it costs ISPs a huge amount of resources for something few people use.

Once upon a time, a single server could easily mirror all of USENET for all users of an ISP, and almost every user expected it, so they'd treat it as an essential part of the service. Now, it would take far more storage to do so, and almost nobody expects it, so why should an ISP provide it? It's easier to let people get USENET from a third-party service, and it'd be a better experience for the people who actually want it, too.

If an ISP has resources to burn and wants to make their technical users happy, they'd get far better results for more users if they provided things like local Linux distribution mirrors instead. Far more users would make use of that than USENET.

And if they want to make the vast majority of users happy, and save resources on their end in the process, they can provide local CDN nodes for YouTube, Netflix, and similar.

There is a lot of history and useful knowledge archived in Usenet. A lot of that content (e.g., the early UNIX newsgroups) puts today's forums and blogs to shame.

Google acquired Deja News (if Usenet is worthless, why?) and now all the archived Usenet messages are web-access only and fronted by Java and Javascript nonsense.

If the Usenet archives are no longer important or if everyone thinks Usenet is "dead", then why put these messages behind Javascript and try to prevent bulk downloads (which is how NNTP was designed to work)?

Usenet isn't dead. I still use several Usenet groups via Thunderbird. Google Groups is a Usenet host/client, and many groups belong to both the Google and Usenet spaces. The Usenet interface is easier to use, has no ads, and doesn't require a Google account.

It felt like Usenet died as a meaningful place for discussion in the mid-to-late '90s, for all the same reasons that most (or all?) electronic communities eventually die. Bad posters drive away good posters and encourage even worse posters, which eventually results in something akin to YouTube. Forum entropy for lack of a better term.

By the time most ISPs started dropping it, a vanishingly small percentage of most ISPs' users even knew what it was, and the binaries groups had turned it into a source of both cost and legal risk. The heavy users were people who incurred that cost and risk to the ISPs because they were using it for pirating software and porn. The icing on the cake would've been the fact that it's a terribly inefficient way to distribute those things and the ISPs have to store all that stuff locally on servers they own.

From an ISP's perspective, maintaining Usenet feeds became all downside and no upside.

Regarding government control, I would think that Usenet would've been far easier to monitor and censor than the web.

The single biggest issue was spam. Being largely unmoderated, it became flooded with garbage as the reach of the Internet expanded. Conversation moved to web-based forums, which IMO had a worse UI in the early days, because there was more ability to moderate.

I thought it was just limited interest. First web hosting prices came down so much that anyone could run a forum, then WordPress made free blogging with ancillary comments accessible to anyone with a browser. So the masses went to forums and blogs (and then Twitter and Instagram and YouTube and whatever chat app is popular this week) and only geeks who cared enough to find and install a newsreader were left.

It started when DejaNews made USENET searchable. You could actually nuke messages from DejaNews, but then Google bought DejaNews and suddenly every nuked message was made available again forever. Google killed USENET.

I wonder just how big a non-binaries feed is these days. A tiny engineering company I worked for in '98-99 had its own Usenet server with a no-bin feed going into a SPARCstation 2 (think 386-486-class x86, equivalent) and it kept up just fine.

A couple years earlier I'd been one of the senior admins at Texas.Net (now DataFoundry) and helped build out what eventually turned into GigaNews, which used multiple dual-proc Sun E450s. I think they're still one of the "biggest" Usenet providers these days.

I still use usenet every day. There are, admittedly, only a few good groups left. But where there's a high barrier to entry there's a high reward. The discussion is of high quality. Higher than most mailing lists and reddit/HN, at least.

Usenet is a way for the ordinary person to be able to talk unfettered to other ordinary people - without a need for a central authority, without the approval and shilling of advertisers etc. So, after decades of the taxpayer funding R&D to create the Internet, when the Internet was handed over to corporations in the early 1990s, the question is not if such a resource was going to go away, but when.

It's a confluence of forces. The old Bell monopolies get a stranglehold on the last mile, and then wireless transmissions as well. They become so bold as to lobby to end net neutrality so they can pump more money from content providers with their monopoly. A vast infrastructure is being built to monitor what people say on the network (like the NSA's Utah Data Center) which makes the Stasi look like Inspector Clouseau, in a country quite different from the one whose Secretary of State said in the 1920s, "Gentlemen do not read each other's mail". The RIAA/MPAA oligopolies are no doubt busy trying to extend their 95-year copyright terms, which start expiring again in 2019, so they can try to shut Usenet down as well. After all, it's one of the rare mediums of content distribution they don't control. I'm surprised the powers that be haven't cracked down on Internet Relay Chat yet; it's one of the last remnants of the old, distributed, decentralized, noncommercial Internet.

The Morris worm is on that list; Robert Morris is a partner at Y Combinator. That incident is the source of the pg quote that would not stop kicking around in my mind through the latter half of grad school:

"The danger with grad school is that you don't see the scary part upfront. PhD programs start out as college part 2, with several years of classes. So by the time you face the horror of writing a dissertation, you're already several years in. If you quit now, you'll be a grad-school dropout, and you probably won't like that idea. When Robert got kicked out of grad school for writing the Internet worm of 1988, I envied him enormously for finding a way out without the stigma of failure."

Seems to me this list needs to incorporate how easily these bugs could have been avoided/detected/fixed, rather than just how dire the consequences were. It doesn't say much about what people did to test their code. For instance, the first one in the list is something unit testing would have caught. Take the trajectory function, plug numbers in, see if it's correct.
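As a sketch of what that would look like (the function name and physics here are hypothetical, just a stand-in for whatever trajectory code the system actually used):

```python
import math

def range_of_projectile(v, angle_deg, g=9.81):
    """Horizontal range (m) of a projectile launched at speed v (m/s)."""
    theta = math.radians(angle_deg)
    return v**2 * math.sin(2 * theta) / g

# A unit test is just: plug in numbers with a known answer.
def test_range():
    # A 45-degree launch maximizes range at v^2/g.
    assert math.isclose(range_of_projectile(10, 45), 100 / 9.81, rel_tol=1e-9)
    # A zero-degree launch travels nowhere.
    assert math.isclose(range_of_projectile(10, 0), 0, abs_tol=1e-9)

test_range()
print("ok")
```

Cheap to write, and it would have flagged a sign error or unit mix-up the first time it ran.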

Some of these things were a lot more obvious than others.

Race conditions, for example, can be really hard to find, but as long as you know it might happen (these days it's just about every system) you can take precautions for testing. If it's important, maybe hire someone with experience.

The AT&T network crash thing looks pretty unobvious to me. A network graph can have a huge number of topologies, so you can't really test them all. Machines might also be using different versions of software that don't interact nicely. Sounds like they took sensible precautions and were thus able to roll back. That's why "rollback" is a word.

There's a whole class of bugs where things work and then need to be upgraded. You think it will work, because there aren't many changes and stuff is qualitatively the same. Like the number overflow bug in the Ariane 5, or the buffer overflow in the finger daemon.

Another thing which should be in this list (relating to floating point rounding error):

"On 25 February 1991, a loss of significance in a MIM-104 Patriot missile battery prevented it intercepting an incoming Scud missile in Dhahran, Saudi Arabia, contributing to the death of 28 soldiers from the U.S. Army's 14th Quartermaster Detachment."
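The arithmetic behind that failure fits in a few lines. This follows the commonly cited analysis of the bug (24-bit fixed-point storage of 1/10, truncated after 23 fractional bits); the Scud speed is approximate:

```python
# The Patriot clock counted tenths of a second, but 1/10 has no finite
# binary expansion, so the truncated fixed-point value loses a sliver
# of time on every tick.
stored = int(0.1 * 2**23) / 2**23   # truncated fixed-point value of 0.1
error_per_tick = 0.1 - stored       # ~9.5e-8 s lost per 0.1 s tick

ticks = 100 * 3600 * 10             # 100 hours of uptime, in tenths
drift = ticks * error_per_tick
print(f"clock drift after 100 h: {drift:.2f} s")      # ~0.34 s

scud_speed = 1676                   # m/s, approximate
print(f"tracking error: {drift * scud_speed:.0f} m")  # ~575 m
```

A third of a second sounds negligible until you multiply it by the speed of the thing you're tracking.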

> Programmers respond by attempting to stamp out the gets() function in working code, but they refuse to remove it from the C programming language's standard input/output library, where it remains to this day.

I used RPython on my own interpreter project a few years ago (I stopped working on it around 2013). It's a very interesting approach to writing interpreters/JIT compilers, and produces very fast code, but developing RPython code was very painful for a few reasons (at least back then; maybe they have been fixed in the meantime):

1) Huge compilation times, and compilation is non-incremental. Making even a small change to the source code causes it to be fully re-compiled, which on our project took 15-20 minutes (I can only imagine how painful this is on PyPy, which took me around 2 hours to build the one time I tried it). I think the root cause of this is the static analysis and type inference, which need to run again on the entire source code and proved to be really slow on a huge code base. This was painful for development; so much time was wasted waiting on the compiler.

2) My experience with error messages was not as positive as the OP's. Sometimes, I'd make a type error in the code and get a cryptic error message, and have to guess by myself what caused it. Perhaps things have improved since then (I see some new details in the errors in the article that weren't there when I used RPython).

RPython and PyPy are (IMO) the coolest Python projects out there. I made a little Brainfuck interpreter with it and it was super simple, but making small changes seemed to slow it down a lot. There is very little visibility into how the code is compiled - i.e. should I use a list or a tuple for some things? Can PyPy work out that a list is fixed-size? How should I structure classes to make them as efficient as possible (e.g. are classes with two or so fields passed as structs)?

Note that, as far as I know (the rpython/pypy team will confirm or deny this), RPython is not intended to be a general-purpose python-like language. For that you want cython or nim. RPython is a toolkit for building language VMs. I'm guessing that's why relatively little work has gone into error reporting.

I wasn't aware that Microsoft Windows supported a stderr filehandle separate from stdout. When I worked a little on it about 17 years ago, I thought it didn't have that (e.g. warnings from Perl were intermixed with redirected stdout or so). Did I misinterpret something or has the system been changed?

(Edit: that was longer ago than I first remembered; it was on Windows NT.)

See also http://www.cs.princeton.edu/~bwk/202/ for further information about early work (reverse engineering!) at Bell Labs on typesetting machines, and http://haagens.com/oldtype.tpl.html for general phototypesetting history, featuring gems like: "There was a romantic tradition, in [the US] at least, of the drifter Typesetters, who were good enough at the craft to find work wherever they traveled. They'd work in one town until they wanted a change and then drift on. They had a reputation for being well read, occasionally hard drinking, strong union men who enjoyed an independence particularly rare in the 19th century."[0]

It's amazing how interlinked typesetting and computing are. Here we have a troff link, then there's the PDF (from postscript) and TeX world, keyboard layouts, telegrams, rotating drums and early mechanical cryptography, etc.

If anyone's interested in good collections on the history of printing, I can recommend both the Museum of Printing and Graphic Communication (Musée de l'imprimerie et de la communication graphique) in Lyon, France[1] and the National Technical Museum (Národní technické muzeum) in Prague, Czech Republic,[2] which also sports the best permanent exhibition on the history of photography I have ever seen (by a long shot). For those of you in California, there's also the International Printing Museum[3] in Carson (open 10 AM to 4 PM on Saturdays).

The article kind of misses the point. The reason for having a separate queue for connections in a SYN-RECEIVED state is to provide a defense against SYN flooding attacks.[1] An incoming SYN has a source IP address, but that may be faked. In a SYN flooding attack, large numbers of SYN packets with fake source addresses are sent. The connection will never reach ESTABLISHED, because the reply ACK goes to the fake source address, which didn't send the SYN and won't complete the handshake.

Early TCP implementations allocated all the resources for a connection, including the big buffers, when a SYN came in. SYN flooding attacks could tie up all of a server's connection resources until the connection attempt timed out after a minute or two. So now, TCP implementations have to have a separate pool of connection data for connections in SYN-RECEIVED state. There's no data at that stage, so buffers are not yet needed, and a minimum amount of state has to be kept until the 3-way handshake completes. Once the handshake completes, full connection resources are allocated and the connection goes to ESTABLISHED state.

This has nothing to do with behavior of established connections, or connection dropping.
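For what it's worth, the accept-queue side of this is exposed to applications as the backlog argument to listen(). A minimal Python sketch (the port and backlog values here are arbitrary):

```python
import socket

# listen(backlog) sizes the queue of fully established connections waiting
# for accept(). Half-open SYN-RECEIVED connections are tracked separately
# with minimal state, which is the SYN-flood defense described above.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(128)              # accept-queue depth; the kernel may clamp it
print("listening on port", srv.getsockname()[1])
srv.close()
```

On Linux the effective value is capped by net.core.somaxconn, so a large number passed here is only a hint.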

I ran into the overflow behaviour with our source repository provider, as they'd get hammered on the top of every minute by all the continuous integration servers and silently drop connections. The specific version of SSH we were running didn't send the client banner until it received the server banner, so the connection just hung for 2 hours on the client.

After much debugging and reading of kernel source this was all figured out, and the provider adjusted things on their end so this wouldn't happen.

Moral of the story: You probably should set tcp_abort_on_overflow to 1.
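On Linux that's a sysctl. Changing it requires root and applies system-wide; the conf filename below is just illustrative:

```shell
# Check the current behavior when the accept queue overflows
# (0 = silently drop the handshake; the client retries and may hang):
sysctl net.ipv4.tcp_abort_on_overflow

# Send a RST instead, so clients fail fast with "connection reset":
sudo sysctl -w net.ipv4.tcp_abort_on_overflow=1

# Persist across reboots:
echo "net.ipv4.tcp_abort_on_overflow = 1" | sudo tee -a /etc/sysctl.d/99-tcp.conf
```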

There was a recent video or article posted here discussing the poor interaction between Nagle's Algorithm/Delayed ACK/TCP slow-start and how it results in increased latency, especially for the first few packets.

From a first read it sounds like the decisions made in both BSD and Linux could also be adding to the latency problem for the first initial packets.

Have OSes checked how their TCP backlog implementation affects the various congestion control algorithms being used?

Manjul makes it so simple. The probability that two numbers are relatively prime is 6/pi^2, so where's the circle? You can kind of see rotational symmetry if you draw the relatively prime pairs of numbers in the coordinate plane. However, that symmetry is far from perfect.
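The 6/pi^2 density is easy to check empirically with a brute-force count:

```python
from math import gcd, pi

# Fraction of pairs (a, b) with 1 <= a, b <= N that are relatively prime.
N = 1000
coprime = sum(1 for a in range(1, N + 1)
                for b in range(1, N + 1) if gcd(a, b) == 1)
print(coprime / N**2)  # ~0.6084
print(6 / pi**2)       # ~0.6079
```

The finite count converges to 6/pi^2 as N grows; the small gap at N = 1000 is the boundary effect.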

This is a very syncretic fusion between computing, dialectical materialism, entrepreneurial laissez-faire idealism and a bombastic techno-optimism.

Unsurprisingly, it harbors plenty of confusion.

"Towards a Mass Flourishing" makes the outrageous claim that the hacker ethos is best embodied in Silicon Valley. In reality, SV is one of the most detached from the MIT hacker ethos, instead having its own entrepreneurial hacker culture that is markedly distinct.

The "Purists versus Pragmatists" essay romanticizes the release of Mosaic and gives little credit at all to Ted Nelson's ideas; Nelson is shoved aside as a purist crank. It's a false dichotomy through and through.

"Agility and Illegibility" again romanticizes widespread access to personal computers as some entrepreneurial Randian vision, that of Bill Gates specifically.

The "Rough Consensus and Maximal Interestingness" essay misquotes Knuth and incorrectly attaches philosophical meanings to technical terms like dynamic binding and lazy evaluation. It further espouses the "direction of maximal interestingness" and grand visions in the post-dot com bust era, when in fact systems software research is becoming increasingly conservative compared to as recent as the 90s.

"Running Code and Perpetual Beta" presents the dogmas of "release early, release often" and constant chaotic flux in software as a natural result of great ideas, as opposed to being the result of a cascade of attention-deficit teenagers. Note that fault tolerance, stability and security are not mentioned once.

"Software as Subversion" treats "forking" as Git terminology that somehow shed its negative stigma, when that usage is purely a GitHub redefinition. The author makes no distinction between a clone and a fork. There's also a misrepresentation of OS/2's mismanagement to argue in favor of "worse is better" (ignoring all the other great systems besides OS/2) and babble about how blockchains are pixie dust.

"The Principle of Generative Pluralism" sets up the false dichotomies of hardware-centric/software-centric and car-centric/smartphone-centric. I suppose it somewhat reflects the end user application programmer's understanding of hardware.

"A Tale of Two Computers" prematurely sets up mainframes as obsolete compared to distributed networked computers (they are not exclusive) and makes the error of ascribing a low-level property to an ephemeral, unimportant abstraction - it marvels at the hashtag when the core idea of networking has enabled the same thing for much longer, and will continue to.

"The Immortality of Bits" is one of the worst, and makes this claim: "Surprisingly, as a consequence of software eating the technology industry itself, the specifics of the hardware are not important in this evolution. Outside of the most demanding applications, data, code, and networking are all largely hardware-agnostic today." This reeks of an ignorant programmer, oblivious to just how much hardware design decisions control them and shape their view. In fact, this is a very dangerous view to propagate. Our hardware is in desperate need of being upgraded to handle things like capability-based addressing, high-level assembly and thread-level over instruction-level parallelism. This stupid "hardware doesn't matter" thinking will delay it. The essay also wrongly thinks containerization is a form of hardware virtualization. It further says the "sharing economy" will usurp everything, which is ridiculous.

"Tinkering versus Goals" again sets up tinkering for the sake of it as leading to disruption and innovation, and not churn and CADT.

The "Free as in Beer, and as in Speech" essay clumsily and classically gets the chronology and values of open source and free software wrong. Moreover, the footnote demonstrates a profound bias for the "open source" ideal of pragmatism. This is in spite of the fact that many of the consequentialist technical arguments for OSS like the "many eyes make all bugs shallow" argument have proven to be flawed, whereas free software making no claims of technical superiority and using ethical arguments has a much stronger, if less popular case.

Agree Ribbonfarm peaked with the Gervais Principle essay. Agree with some of the criticisms here. Will add my own: the first three essays are somewhat accessible, but after that the author is talking to the echo chamber that is his regular blog audience.

I wrote up a very rough summary/set of notes. Please excuse all the errors in punctuation, spelling, and formatting. Thought it might be helpful for people who want to skim, as the whole thing is more-or-less a book.

The first essay on that list starts talking about "soft technologies" without defining what they are.

They don't match other definitions of "soft technologies" and I'm having difficulty figuring out what the definition is here that only includes writing, money and software (frankly, I suspect if anyone other than an American had written this, money wouldn't be on the list).

So this is a blog that will write 1 article every 5 weeks and batch release them in 2017? I am all for thoughtful content, but binging isn't a concept that can be applied to blogs. This makes no sense.

Edit: I get what turned me off about this. It was the positioning as a radical new media concept, and the convoluted and confusing explanation.

What do you call the development and research of a text based narrative which is catalogued for direct and total consumption online?

You'll need a 24/7 box somewhere, but it could even be behind a residential NAT with only outbound connections (at reduced performance compared to having a public port). You can easily & transparently move it from one physical place to another; your hash identifies it, not the physical routing location.

I built something that starts instances with Bitcoin, if that helps any: https://www.stackmonkey.com/. You add ssh keys with a callback, so the site never sees who you are. The virtual appliances run on my HP Cloud account, so for now your instance would be there.

I have a gaggle of children, and I have noticed that my 5-year-old son thinks many devices have voice control, and he views this as normal.

My 11-year-old son and 8-year-old always think every screen has a touch interface, but they don't use voice.

To me, it seems that there is a break between those generations. The 5-year-old voice-activates his tablet and Amazon Fire TV, and watches his parents voice-command their phones while driving - this is all totally normal to him, and you can see he will often try a voice command on a device unfamiliar to him.

The older boys think using a PC is novel, and look at me strangely when I tell them about DOS and Windows 3.1 //// oh well...

Just got back from a family reunion and my relations' kids were glued to their tablets playing mindless games and watching YouTube unattended for 4 hour spells and more day after day. It was so depressing, all these boring consumer drones in the making.

My kids get 1 hour of screen time a day, max (until they decide to become coders, and then the cap is lifted. :)

> Xiaoice, whose name translates roughly to Little Bing, after the Microsoft search engine, is a striking example of the advancements in artificial-intelligence software that mimics the human brain.

"Striking example"? "Artificial-intelligence"? "Mimics the human brain"? From the example chats, it seems to be just a chatbot, maybe a little more convincing than ELIZA. If there is a piece of news here, it's not about Xiaoice but about people feeling so lonely they can pretend a chatbot is their "friend". I realize this is standard journalist exaggeration, so I'll ask: is there any technical info on Xiaoice that explains how this chatbot stands out from the rest? Is this the current state of the art?

As someone who uses FFmpeg heavily in a professional capacity, I really wish the various community leaders and contributors could come together and build better tools for everyone -- as opposed to the current status quo, where both sides seem to spend a majority of their time attacking or defending each other.

It's like a "great filter" in the growth of open source projects, so many get ripped apart by their own internal power struggles, but those few that can make it past these hurdles really do shine.

I'm always eager to read these progress reports. It is amazing how interesting it is to read about the kind of bugs they face, and how they were diagnosed and fixed. Can anyone recommend other open source projects that do this sort of writeup?

Always a risk to self-promote, but last month our team launched our Compass Platform - Behavioral Indicators and associated support to help business founders and leadership teams in private enterprise [1].

I've found these are more balanced, so work better than tools like DISC and Myers-Briggs, which tend to 'put people into a box' and therefore work against creating an inclusive environment.

I've obviously done some training on these, and nowadays deliver (mostly internal company) training on them as well. The website and online tests are a great starting point - having an understanding of a potential client or recruit's Risk profile, for example, makes it so much easier to connect with them and explain a value proposition.

You might try Toastmasters. Once you get past the first few "nervousness" lessons they focus on how to convey meaning with speech. Every chapter is different, so YMMV. Several of the speeches you have to prepare cover persuasion, motivation, and how to structure a speech so that everyone remembers exactly what you want them to remember. They teach through having you give a series of 5-7 minute speeches, but I've found the practice helpful beyond giving a formal talk.

The book Search Inside Yourself by Chade-Meng Tan is the best book I know. It describes scientifically studied exercises of the mind that you can do by yourself. It will boost your empathy much higher than anything I've experienced (or read about on ScienceDaily). It explains the science too, and you can look it up. One thing though: reading the book is part 1; part 2 is performing the exercises. If you won't perform the exercises, then reading the book doesn't serve much purpose.

In my experience, I got to amazing levels of empathy by doing these exercises. I felt like I had godlike skills. My intuition could immediately signal me if a woman liked me (first time ever in my life). Two months later I was in a relationship. I could spot feelings that my friends had that they were not aware of.

There are some caveats though, which go for any book that will be presented here. The moment I stopped practicing, my level dropped back to a bit above where it was before I started. So most structural gains are hard to keep, which goes for any trained skill. Another downside is that everything you see has a bigger impact on you. So when you go to an action movie, you feel like you're right in it. When you look at a rose you feel like you're a rose, that sort of thing. When you drink one sip of alcohol you feel its effect on your perception in very subtle ways (that might be a good thing though).

Some final thoughts: I believe books train deliberation, aka the slow system. Exercises, mental or physical, train the fast system. Check out Thinking, Fast and Slow by Daniel Kahneman. I believe it's a useful approximation of how thinking works. Also, while empathy is a big component in becoming better, it's not the only component.

Do some volunteer work. It looks good on your resume, it does some good in the world, and it improves your soft skills because it takes you out of your groove and sticks you in a new, non-threatening situation (hopefully). Plus, most volunteer events have mentors who will teach you how to deal with people. Do something simple, don't go overboard, and listen to how the professionals there deal with people.

The best way is to actually put yourself in a situation where you have to live the skills you're trying to get better at. I.e., I'd suggest you work a short (part-time) side stint in Sales or Customer Support or Recruiting, especially under a seasoned manager, and sincerely carry a quota. The pressure of actually closing that deal will improve you at a rate nothing else will.

Books and everything else will definitely help, but I'd treat them as supplemental resources. You don't get good at soccer by reading about it. You got to play it. You don't get good at coding by reading about coding; you have to actually write code.

I'm not saying you meant that you only want to read in order to get better and nothing else; I'm just trying to draw attention to the fact that getting better at soft skills is also about actual practice, like anything else.

I urge technical people to explore non-technical subjects and general "well roundedness". I've gotten immense relaxation and satisfaction from community art classes, martial arts, yoga, etc. There's a powerful argument that technical work is inherently creative, but creative work without the technical is something else entirely.

I also urge technical people to study history, language, speech, public performance and public speech giving. All of those things give a sense of perspective, and abilities to confer with partners and customers on a level that most technical folks don't understand.

Public performance in particular enables one to overcome lots of fears and be able to talk in front of both crowds and executives. This capability is often rewarded in important ways that build one's career...and the only way to get good at it is to do it.

'Any meeting, discussion, or human contact, is basically a negotiation.' is their stance, with a real emphasis on role play. Position vs. Interest, Group vs. 1-1, etc.

The role play exercises can be downloaded from http://www.pon.harvard.edu/store/
* Free for checking / testing, with detailed notes for the trainer / post-exercise
* Low price for use (around $3/copy for licensed use - super low for what they bring)

What do they bring? Really accelerated understanding of behavior (yours and theirs) in any interaction you have. This is done via role-play and reflection, not a read-it-and-know-it resource, so download a few and play them with colleagues.

Made to Stick, by Chip and Dan Heath [1]. They ask, "How is it that certain ideas seem to stick in our minds better than others?", and give concrete advice on how to improve the stickiness of your own ideas. I've found it useful for avoiding forgettable business waffle that fails to change people's minds or behaviour. One of the many examples they give is Nordstrom (a fashion retailer). They could have said, "We want to delight our customers." Instead, they use stories of employees who embodied those principles: ironing a shirt for a customer who needed it that afternoon, or refunding tyre chains even though Nordstrom doesn't sell tyre chains.

"This guidance probably should have been Chapter 1 of our Politics 101 series. It's foundational. It's a HUGE problem for many professionals, particularly young and, dare we say it, naïve professionals. So many young people say, 'I don't play politics.' The more savvy folks around them think, 'That's good, because this isn't a game you can play.'"

"One of the characteristics of successful scientists is having courage. Once you get your courage up and believe that you can do important problems, then you can. If you think you can't, almost surely you are not going to. Courage is one of the things that Shannon had supremely. You have only to think of his major theorem. He wants to create a method of coding, but he doesn't know what to do so he makes a random code. Then he is stuck. And then he asks the impossible question, `What would the average random code do?' He then proves that the average code is arbitrarily good, and that therefore there must be at least one good code. Who but a man of infinite courage could have dared to think those thoughts? That is the characteristic of great scientists; they have courage. They will go forward under incredible circumstances;"

1. Career Tools and Manager Tools podcasts. Career advice, interviewing help, resume-building, team interactions, navigating office life/culture, salary negotiation, having your voice heard, and many other topics discussed in a friendly and approachable way. I listen every week. https://www.manager-tools.com/

From my point of view, it boils down to communication and self-awareness. Nonviolent Communication, which was mentioned before, is a great book.

Also, I found that the Pathwise Leadership Program (http://pathwisemanagement.com/) has helped me a great deal in knowing myself and figuring out how to frame my communication in the best way possible.

Not a resource so much as a technique, but I've just been baffled to find myself moving slightly into management, and my new mantra, whenever I'm not absolutely positive what I should be saying, is: listen.

Dale Carnegie's book is the obvious choice. However, Lifetime Conversation Guide by Van Fleet has a ton of specific advice tailored to different situations. I stumbled upon it in a thrift store and bought it for its thoroughness, and it gave me a few good ideas.