Reference implementations of medium-sized applications are incredibly useful for leveling up as a programmer. While there are many large, successful open source applications, most are overwhelming to read and learn from.

Something that outlines the key features and components, while ignoring the important but complicated edge cases, helps keep the attention focused.

Now if there are annotations within the source code, that would be truly incredible.

SciTE is a bare-bones open source text editor created by Neil Hodgson to exercise his "Scintilla" text-editor C++ library, which is also used in other editors like Notepad++.

In hindsight, I would approach programming text editors with a bit more caution. I started tweaking and modifying SciTE years ago; it was very interesting, but it was no small undertaking, and I came to understand why Neil advised in the support forum to customise it using the inbuilt Lua scripting. I'm still using this 6-year-old customised version of SciTE that I never managed to sync with the latest version, and it has ten thousand lines of custom Lua facilities (file encryption, navigation panels, multi-edit mode, etc.) which I wrote and stabilised a few years ago. I rarely venture to alter it now that I am at last comfortable with it, but it's going to need serious attention sooner or later...

I think Kilo is a great little project, well-structured and very educational.

Another useful resource I've relied upon in the past dates back to the 1990s: Freyja, Craig Finseth's Emacs-like editor written in C.

Here is a list of features:

* deletions are automatically saved into a "kill buffer"
* ability to edit up to 11 files at once
* ability to view two independent windows at once
* integrated help facility
* integrated menu facility, with help on all commands
* can record and play back keyboard macros
* supports file completion and limited directory operations
* includes a fully-integrated RPN-type calculator
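The kill-buffer idea is easy to sketch. Here is a minimal illustration in Python (my own sketch, not Freyja's actual code, which is C):

```python
class KillBuffer:
    """Deleted text is pushed here instead of being discarded."""
    def __init__(self):
        self._kills = []

    def kill(self, text):
        # Save deleted text so it can be yanked back later.
        self._kills.append(text)

    def yank(self):
        # Return the most recent kill without removing it,
        # so repeated yanks paste the same text.
        return self._kills[-1] if self._kills else ""

buf = "hello brave world"
kb = KillBuffer()
kb.kill(buf[6:12])        # delete "brave "
buf = buf[:6] + buf[12:]  # buffer is now "hello world"
restored = buf[:6] + kb.yank() + buf[6:]  # yank it back
```

The point is that a deletion is never destructive: it is just a move from the buffer into the kill stack.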

It was designed for MS-DOS with the Cygwin terminal library.

I found the architecture to be very clean, and it is well explained in Finseth's classic book ("The Craft of Text Editing"). The book is worth reading even if you never touch the code: http://www.finseth.com/craft/

It uses a multi-buffer architecture roughly similar to Walter Bright's text editor (see sibling posting). (I knew about Finseth's editor years ago, but was not aware of Bright's work until now, thanks Walter!)

I got fed up with the standard offerings back in '07 or so and hammered out my idea of a code editor that's perfect for me over a long weekend. It mmaps files initially for instant response, it has a small sensible command set that I can remember completely, it depends on nothing more than the standard C library and a terminal emulator, and it's not many more lines of code than some Emacs configuration files that I've seen. I still use it for everything.
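For those curious, the mmap trick is straightforward to reproduce. Here is a minimal sketch in Python (the parent comment's editor is C; the file name here is made up for illustration):

```python
import mmap, os

# Write a sample file to map (stands in for a large source file).
path = "demo.txt"
with open(path, "wb") as f:
    f.write(b"line one\nline two\n" * 1000)

# mmap gives the editor a view of the file without reading it all up front;
# pages are faulted in lazily by the OS, so opening is near-instant even
# for very large files.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_line = mm[:mm.find(b"\n")]
    mm.close()

os.remove(path)
```

Writes would then go into an overlay or copy-on-write structure rather than back into the mapping, but the instant-open behavior comes for free.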

I'm curious if anyone here regularly codes in a text editor they wrote themselves?

I've often thought of coding one for fun, with no intention to share it, just for the purpose of having a long-term project that evolves along with my skills. I've never made time for it, but I still consider it once in a while.

I'm working on an editor in JavaScript. You would be surprised how fast string operations, like concatenation, are in JavaScript! You can hold the entire buffer in a single String! While browsers render text very well, the DOM is relatively slow to interact with, but there are other ways to render from JavaScript: a Canvas, a terminal, or even streaming a video or talking directly to a display.

For better portability, terminfo should be used instead of hard-coded terminal escape sequences. Otherwise, this is a really great intro. I liked the beginning, which shows how to configure your terminal.
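To illustrate the suggestion, here is a minimal sketch in Python of looking up capabilities through terminfo (via the stdlib curses binding) instead of hard-coding escape sequences; the "xterm" terminal name is just an example:

```python
import curses

# setupterm() loads the terminfo entry; here we name xterm explicitly
# rather than relying on $TERM being set.
curses.setupterm("xterm")

# Look up capabilities by name instead of hard-coding escape sequences.
clear = curses.tigetstr("clear")  # clear-screen sequence
cup = curses.tigetstr("cup")      # parameterised cursor-positioning

# tparm() fills in the parameters (row, column) for this terminal.
move_5_10 = curses.tparm(cup, 5, 10)
```

The payoff is that the same code works on terminals whose sequences differ from the VT100 family, because the database (not the program) knows each terminal's dialect.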

This is truly terrifying. The fact that the US government will pursue this kind of action, potentially exposing and punishing critics of the government, seems like how dictatorship, autocracy, and totalitarianism start.

"I disapprove of what you say, but I will defend to the death your right to say it." If we believe in a free America, this is what we should all fight for, if we want to keep the reason America became great in the first place.

As a historical note, there used to be quite a few very popular solutions for supporting early social networks over intermittent protocols.

UUCP [https://en.wikipedia.org/wiki/UUCP] used the computers' modems to dial out to other computers, establishing temporary, point-to-point links between them. Each system in a UUCP network has a list of neighbor systems, with phone numbers, login names and passwords, etc.

FidoNet [https://en.wikipedia.org/wiki/FidoNet] was a very popular alternative to the internet in Russia as late as the 1990s. It used temporary modem connections to exchange private (email) and public (forum) messages between the BBSes in the network.

In Russia, there was a somewhat eccentric, very outspoken enthusiast of upgrading FidoNet to use web protocols and capabilities. Apparently, he's still active in developing "Fido 2.0": https://github.com/Mithgol

This sounds like what I wanted from GNU Social when I first joined over a year ago. GNU Social/Mastodon is a fun idea, but it falls apart when you realise that you still don't own your content and it's functionally impossible to switch nodes as advertised, along with federation being a giant mess.

I tried to switch which server my account was on halfway through my GNU Social life, and you just can't: all your followers are on the old server, all your tweets too, and there is no way to say "I'm still the same person". I didn't realise I wanted cryptographic identity and accounts until I tried to actually use the alternative.

That's also part of the interest I have in something like Urbit, which has an identity system centered on public keys forming a web of trust, which also lets you have a reputation system and ban spammers which you can't do easily with a pure DHT.

> However, to get access to the DHT in the first place, you need to connect to a bootstrapping server, such as router.bittorrent.com:6881 or router.utorrent.com:6881

This is a common misunderstanding. You do not need to use those nodes to bootstrap. Most clients simply choose to because it is the most convenient way to do so on the given substrate (the internet). DHTs are in no way limited to specific bootstrap nodes; any node that can be contacted can be used to join the network. The protocol itself is truly distributed.

If the underlying network provides some hop-limited multicast or anycast a DHT could easily bootstrap via such queries. In fact, bittorrent clients already implement multicast neighbor discovery which under some circumstances can result in joining the DHT without any hardcoded bootstrap node.

My friends and I have thought this through in detail a while ago, and have a few suggestions to make. I hope you make the best of it!

Distributed identity

Allow me to designate trusted friends / custodians. Store fractions of my private key with them, so that they can rebuild the key if I lose mine. They should also be able to issue a "revocation as of a certain date" if my key is compromised, and vouch for my new key being a valid replacement for the old key. So my identity becomes "Bob Smith from Seattle, friend of Jane Doe from Portland and Sally X from Redmond". My social circle is my identity! Non-technical users will not even need to know what a private key / public key is.
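One simple way to sketch the "fractions of my private key" idea is n-of-n secret splitting by XOR; this is my own minimal Python illustration (a real system would likely use Shamir's secret sharing so that any k of n custodians suffice):

```python
import os, functools

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list:
    """Split secret into n shares; all n are needed to rebuild it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    # The last share is the secret XORed with all the random shares,
    # so each individual share reveals nothing about the secret.
    shares.append(functools.reduce(xor, shares, secret))
    return shares

def rebuild_secret(shares: list) -> bytes:
    return functools.reduce(xor, shares)

key = os.urandom(32)           # stands in for a private key
shares = split_secret(key, 3)  # one share per trusted custodian
```

Each custodian holds one share; only by combining them all can the key be rebuilt, which is exactly the "my social circle is my identity" recovery story.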

Relays

Introduce the notion of a "relay" server: a server where I register my current IP address for direct p2p connection, or pick up my "voicemail" if I can't be reached right away. I can have multiple relays. So my list of friends is a list of their public keys and their relays, as best I know them. Whenever I publish new content, the software will aggressively push the data to each of my friends / subscribers. Each time my relay list is updated, it also gets pushed to everyone. If I can't find my friend's relay, I will query our mutual friends to see if they know where to find my lost friend.

Objects

There should be a way to create handles for real-life objects and locations. Since many people will end up creating different entries for the same object, there should be a way for me to record in my log that guid-a and guid-b refer, in my opinion, to the same restaurant. I could also access similar opinion records made by my friends, or their friends.

Comments

Each post has an identity, as does each location. My friends can comment on those things in their own log, but I will only see these comments if I get to access those posts / locations myself (or I go out of my way to look for them). This way I know what my friends think of this article or this restaurant. Bye-bye Yelp, bye-bye fake Amazon reviews.

Content Curation

I will subscribe to certain bots / people who will tell me that some pieces of news floating around will be a waste of my time or be offensive. Bye-bye clickbait, bye-bye goatse.

Storage

Allow me to designate space to store my friends' encrypted blobs for them. They can back up their files to me, and I can back up to them.

All I got back was "An error occured (sic) while attempting to redeem invite. could not connect to sbot"

It worked with http://pub.locksmithdon.net/ though I feel a bit odd trusting a "locksmith" I've never heard of to stream lots of data to my hard drive...

It's cool that basically anyone can host a pub (an instance of FB/Twitter/Gmail, it seems), but 1) things will get expensive for them, and it's unclear how they'll fund that, and 2) now I have to trust random people on the internet not only to be nice, but also to be secure.

As a "random technically aware netizen", I honestly trust fooplesoft more, since they have a multi-billion-dollar reputation to protect. (Not that I trust fooplesoft).

Not everything needs a global singleton like a blockchain or DHT or a DNS system. Bitcoin needs this because of the double-spend problem. But private chats and other such activities don't.

I have been working on this problem since 2011. I can tell you that peer-to-peer is fine for asynchronous feeds that form tree-based activities, which is quite a lot of things.

But everyday group activities usually require some central authority for that group, at least for the ordering of messages. A "group" can be as small as a chess game or one chat message and its replies. But we haven't solved mental poker well for N people yet. (Correct me if I am wrong.)
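A minimal sketch of what "central authority for ordering" means in practice, in Python (my own illustration, not any particular system's protocol):

```python
import itertools

class Sequencer:
    """A minimal central orderer: stamps each message with a total order."""
    def __init__(self):
        self._seq = itertools.count(1)
        self.log = []

    def submit(self, sender, text):
        # The authority's only job: assign the next sequence number,
        # resolving races between concurrent senders.
        n = next(self._seq)
        self.log.append((n, sender, text))
        return n

s = Sequencer()
s.submit("alice", "e4")   # chess moves, chat replies, etc.
s.submit("bob", "e5")
# Every participant replaying the log sees the same order.
order = [n for n, _, _ in s.log]
```

The authority can be tiny and per-group (even one of the participants), but without some such sequencer, two peers who both "moved first" have no agreed way to resolve the race.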

The goal isn't to not trust anyone for anything. After all, you still trust the user agent app on your device. The goal is to control where your data lives, and not have to rely on any particular connections to eg the global internet, to communicate.

Btw, it's ironic that the article ends with "If you liked this article, consider sharing (tweeting) it to your followers". In the feudal digital world we live in today, most people must speak a mere 140 characters to "their" followers via a centralized social network with huge datacenters whose engineers post on highscalability.com.

Why do all "social networks" have to be a feed of news? Couldn't anyone think of anything better than a system in which people are only encouraged to talk about themselves and try to get other people's approval? In which having more "friends" is always better, because you have more potential for self-aggrandizement in your narcissistic posts?

When I used it, which admittedly was a long time ago now, the biggest setback was the lack of cross-device identities. So I ended up having two accounts with two feeds, `wesAtWork` and `wes`. Maybe they have solved this by now.

ps. Does patchwork still have the little gif maker? Because that was a super fun feature.

This excites me. I'm probably naive, but I always imagine that one day I'll retire and spend my days trying to work on an open source mesh network (or something similar). I want future generations to live in a world where 'the internet' isn't a thing that authorities can grant/deny. A headless social network is a promising omen of a headless internet.

Forgive the rambling, this is the first time I've written any of this down...

My idea is to use email as a transport for 'social attachments' that would be read using a custom mail client (it remains to be seen whether it should be your regular email client or a dedicated 'social mail' client; if you use your regular client, users would have to ignore or filter out social mails). It could also be done as a mimetype handler/viewer for social attachments.

Advantages of using email:

* Decentralized (can move providers)
* Email address as rendezvous point (simple for users to grasp)
* Works behind firewalls
* Can work with local (i.e. Maildir) or remote (IMAP) mailstores. If using IMAP, this helps address the multiple-devices issue. Could also use replication to handle it (Syncthing, Dropbox, etc.)

Scuttlebutt looks like a nice alternative though. Will be following closely.

1) be on the same wifi (presumably great for dissidents in countries with heavy-handed internet control, and inconvenient for everyone else)

2) use "pubs", which can be run on any server, and connected to through the internet?

So most users would use pubs, which are described as "totally dispensable" (a nice property). But how can users exchange information about which pub to subscribe to? Is there a public listing of them?

It seems like the "bootstrapping server" problem (e.g. reliance on router.bittorrent.com:6881) will still exist in practice. For that matter, is there currently an equivalent to router.bittorrent.com that would serve this purpose?

This seems like a potentially significant project, and I'm excited by the possibility that it might actually take off, hence the inquiry.

I am not much of a social networking type of person, but I have wondered how nice it would be to network with a community like HN. For example, I see a nice comment chain going on in some news article, but as the article dies, so does all the conversation within it.

Maybe it's just me but if I see an article is x+ hours old (15+ for example), I don't bother commenting.

What type of social networking would HN use for non-personal (not for family and immediate friends) communication? (I've tried hnchat.com; it's mostly inactive imho)

Can I choose whose content I pass along? I am OK distributing my own feed; that's presumably why I am joining the network. I am not OK passing along someone else's hate speech, porn, warez, malware, spam, etc. I'd like to be able to review the feeds available and say "Yeah, sure, I'll pass that around." If everything in a feed is encrypted, then I'd need to decide. Also, my brother, whose feed I follow and pass along, may upload a really nasty bit of content, and I may relay it.

I'm not totally sure how the traffic management works, but what I would like to know is how services like this will be able to scale. What happens when there is a Pub with millions of users? Does it grind to a halt? Is there a need for dedicated Pub machines? If so, who funds/maintains them? Does this lead to subscriptions?

Decentralized social networks seem like an inevitable progression as internet users become more aware of their privacy and of ways they can improve online relationships and... "social networking".

I think I missed something. If information is exchanged when machines are on the same network, how does the guy in New Zealand get updates from the guy in Hawaii? Is there a server involved, or does the New Zealand guy have to wait until he is on a network with someone who has already connected with the Hawaii guy?

The post starts by introducing two people (one in a boat in the ocean and another in the mountains in Hawaii) and states that they are communicating with each other. I thought this post was about some new long-range wireless protocol that synced via satellites or some such. I was disappointed to see this:

> Every time two Scuttlebutt friends connect to the same WiFi, their computers will synchronize the latest messages in their diaries.

Ultimately this technology seems to be a decentralized, signed messaging system. What problem are they solving? That Facebook and Twitter can delete and alter your messages?

Meanwhile I'm in search of a long-range, wireless communication system that can function like a network without the need of an ISP. Anyone know anything about this?

> For instance, unique usernames are impossible without a centralized username registry.

This is Zooko's triangle and was squared by blockchains. Namecoin (2011), BNS (the Blockstack Name System, 2014), and now a bunch of other fully-decentralized naming systems can give you unique usernames. Recently, Ethereum tried launching ENS and ran into some security issues and will likely re-launch soon.

Does it normally take this long to index the database? It has been a long while since I started the app. I thought this could be a nice tool to use in places like Cuba, but I've realized now that once connected to a Pub it downloads more than 1 GB, which would also be a problem in a place with limited internet bandwidth.

Basic question: since the entries form a chain and reference the previous, is there no way to edit or delete your old entries? (I see it "prevents tampering" and there's something of a philosophical question here about whether you're "tampering" with your own history when you editorialize -- I agree with the crypto interpretation, but in the context of offline interaction, social communication isn't burdened with such expectations of accuracy or time-invariance.)
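For readers unfamiliar with the structure: the chain is a hash-linked log, which is exactly what makes silent edits detectable. A minimal Python sketch (my own illustration, not Scuttlebutt's actual message format):

```python
import hashlib, json

def entry_id(prev, content):
    # The id commits to both the content and the predecessor's id,
    # so changing any old entry changes every id after it.
    payload = json.dumps({"prev": prev, "content": content}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, content):
    prev = log[-1]["id"] if log else None
    log.append({"prev": prev, "content": content,
                "id": entry_id(prev, content)})

def verify(log):
    prev = None
    for e in log:
        if e["id"] != entry_id(e["prev"], e["content"]) or e["prev"] != prev:
            return False
        prev = e["id"]
    return True

log = []
append_entry(log, "hello")
append_entry(log, "world")
ok_before = verify(log)
log[0]["content"] = "hullo"  # "edit" an old entry...
ok_after = verify(log)       # ...and the chain no longer verifies
```

So an edit isn't impossible per se; it just invalidates everything downstream, which is why replicas that already hold your old entries will notice.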

If so, I see that as a fairly large limitation for the common user. Even though truly removing something from the internet is effectively an impossibility, I think most non-technical folks aren't actively aware of this, and I'd at least like the option to make it harder for folks to uncover.

I never answer questions about my past or expected salary, not to employers and not to recruiters.

Most employers don't ask, and the few that have (perhaps by having a part of an employment form ask for previous salary) have never made my leaving that information out an issue.

Most recruiters, if they even ask, respect my decision not to talk about it, but I've been pressed hard on this by a handful of recruiters, and have had this be a deal breaker for a couple of them. One recruiting firm admitted that they were paid by the employers to get this information. I wasn't getting paid to give this information out, however, and it's worth more to me to keep it private as I'm placed at a disadvantage in negotiations if I name a number first.

It's still a seller's market for IT talent, and there are plenty of other fish in the sea, so if some recruiters can't accept that I won't name a number, it's their loss.

It's great that NYC is taking the lead on this, and I really hope the rest of the US follows suit.

Lots of people here talking from their own experience as highly skilled, in-demand professionals.

However, helping friends apply to jobs in other industries - specifically medical - I saw that most of the applications involved filling out an automated form that required prior salary information to complete.

There's no advantage to an employee from being forced to disclose this information and it perpetuates compensation discrepancies by gender/race/guts to ask. Very glad to see this made illegal.

Now, if they were really serious about fixing pay discrepancies, they'd make it mandatory to post salary ranges with job listings.

Once upon a time I interviewed for a role in NYC. An employee that I spoke to said they paid pretty well, and I could expect about 120. The HR person wanted my previous salary, and I refused. Eventually they said their range was 130-150. I said it wasn't gonna work cause I was looking for something more like 220. They said okay we can do that no problem. My previous salary was 110.

Have a google for "can i lie to a employer about past salary" - it really, really messes with people. People feel super uncertain about how to approach this situation, throwing any confidence they have during the negotiation out the window.

Even now I hesitate to write this, as a million people will come out and say never lie - what if they found out?

More than banning: there needs to be acceptance that, if someone asks you, you are totally free to make up any damn number you like. Seriously. It's a sales situation. It should not be like you're under oath on the stand, which is how most people view it.

My past salary is irrelevant information for my potential future employer. If they ask about it, my response would be: "Why would you like to know?" Any answer to this question is bad. If they do not bail out and stop asking at this point, then I bail out.

The point is that, if I want, I can completely change my way of life by switching to a job which pays 50% of my current salary. Or 400% of my current salary. It does not matter. What matters is that it is solely my decision and none of my potential future employer's business.

If they want to know my current salary, it is a red flag. I do not care about them knowing it, but there is a high risk that they will use that information to try to make an offer which they think that I ought to consider good. They can offer e.g. my current salary + their negotiating margin and think "hey, we have offered you more than you have now, so you ought to be happy". While in reality, the only person who can responsibly decide whether I am happy about it or not is me.

Note that I am not criticizing companies which want to hire for cheap. This is all right. But they need to do it transparently, from the beginning. They should say it clearly and upfront: for this position, our budget is somewhere in this range ... are you interested or not? This is a fair way to go.

It will be interesting to see how this affects the hiring markets. Out in SF etc. it came up in just about every discussion I had last time I was looking for work, usually as part of the first phase. No point interviewing candidates that wouldn't accept the job; it's pretty much a risk-management exercise from the hiring side. Similarly, I always asked what compensation range they're targeting, as I don't want to waste my time either.

I wonder if this ban addresses background checks covering the same information, because some companies do ask for this data from previous employers although not all provide it. Without protection there this ban seems fairly limited.

Anyhow, I don't agree with all advice to never disclose current/previous salary. In some scenarios certainly it makes sense, but in others it is the opposite. You want to justify a higher market value and set the expectation that you're unlikely to be interested unless they're willing to compensate at $X or higher. Of course it's different in terms of leverage if you're employed currently or not. Recruiters and interviewers will waste tons of your time if you don't get on the same page quickly. Lack of transparency around your compensation expectations will exacerbate this issue. Whether that means you tell them what you're making or what you'd like to make doesn't really matter, but you better do at least one of the two.

I have a strong dislike for systemd, so while I'm really sorry that upstart "lost" the fight, Ubuntu gained a lot of respect in my eyes with the decision to go with the rest and avoid unnecessary fragmentation. This could have easily ended up as another community rift, slowing down everybody along the way.

Now they do it again with Wayland/Mir! It actually takes a significant amount of both balls and goodwill to give up on the product that you invested so much into for the sake of aligning better with your open source community. Bravo!

FWIW, I too would like to keep the DE experience of Unity, and especially the Dash panel and shortcuts. If that expose-text-search could scan non-focused browser tabs that would be a killer feature, but that's for the other thread.

The idea of "Let's simply ask HN users what they think" is a gem, that I suspect will now make it into many PMs' playbooks ;)

I believe that the major pitfall here is that the feedback you've received is mostly about the changes that people want to see. However if we consider the number of people who want Gnome vs the number of those who want Unity 8 vs the number of conservative users who like Unity 7 as it is now - the results might be different.

I personally am very happy with the current Unity. I find it intuitive and more aesthetically pleasant/polished than Gnome Shell (I've only used that as it comes with Ubuntu Gnome).

So please, don't drop current Unity. Or if you have to switch to Gnome Shell - please keep the user experience as close as possible to the current Unity to help users migrate.

Awesome to see our response followed up with such attention, not to mention answered with concrete promises about what (and what not) to expect.

Good job, Canonical! Happy to be a user!

(And good job finally ditching Mir. You could have kept Unity for all I care. Linux can handle a few dozen DEs. But having more than one display server, now that was just nuts.)

Edit: while the feedback post here may have been the most discussed post on HN ever, the announcement of dropping Mir clearly made rumbles too, with a record 10,000+ upvotes in a "niche" subreddit like /r/linux. When a player like Ubuntu does the right thing, people clearly care.

A sincere thank you to Ubuntu from someone who didn't study anything remotely close to IT but is now a software engineer anyway.

I've always been interested in software and computers in general, and, besides the Raspberry Pi, I think Ubuntu has been the biggest influence on my interest in software and my decision to learn programming.

Asking why my PC wouldn't boot after 14-year-old me stuck some components, including the HDD, in another PC. Turned out you needed something called 'drivers' to run a motherboard.

Shortly after this I ordered my first red Ubuntu live CD that you guys shipped and that was my first experience with Linux.

Anyway, open source projects that allow you to tinker with software and even break it played an important role in my life, and Ubuntu was my doorway to a decade of learning, playing, and wondering about software and technology.

Running 16.04 LTS now. Sad that you guys are dropping Unity for GNOME, but still happy with Ubuntu. I'm sure it'll work out.

> Add night mode, redshift, f.lux (42 weight)

This request is one of the real gems of this whole exercise! This seems like a nice, little, bite-sized feature that we may be able to include with minimal additional effort. Great find.

I also really like the idea of 'Official hardware that just-works'. Not just for users without much technical knowledge but also for the rest of us.

I mean nowadays we somehow manage to get most hardware working 'somehow'. Sometimes it takes a few years before your sound/bluetooth/wifi chip actually does what it is supposed to do, but most of the time we find ways to make use of our hardware.

But when you are going to buy new hardware you are a bit lost. You can try to find out if there are any major problems with a given piece of hardware, or search the Ubuntu hardware database, but especially for new, rare, or expensive hardware there is often not much to be found. For example, a few years ago I bought a 22" touchscreen for my desktop, and for almost a year it somehow worked but didn't do the things it was supposed to do.

Officially supported hardware by vendors would be a great step in the right direction.

I feel guilty: I didn't respond on the initial post because I doubted the really outrageous/far-out ideas like "dump everything you've been working on for the past five years" would actually be fruitful, but man, Shuttleworth shut me right up, haha. Congrats to the Ubuntu team for the courage to make bold changes when necessary and to actually (finally?) listen to pointed community feedback and constructive criticism.

Congrats to Dustin not just for some amazing crowdsourcing with the original post, but then for an extremely concise, thoughtful and humble followup for those of us who couldn't wade through all of those suggestions ourselves.

No one mentioned better Steam support/collaboration with Valve? CPUs and operating systems are made for gaming; it's gaming that attracts users and investment from big companies and developers. Ubuntu would only benefit if they invested in Mesa and faster AMD GPU integration, and cooperated with Valve on Steam clients and gamedev research for Linux.

I currently use Ubuntu on one of my laptops precisely because it's not GNOME. I can't stand GNOME3's hamburger menu UIs, giant title bars, lack of real menus, inability to make changes without digging into their version of dreaded "registry" and that pointless menu bar at the top. If I'm being forced to move to GNOME, I'll be switching to Fedora instead since they have a solid track record of GNOME and Wayland support.

I'm impressed with your thoroughness in processing the responses. But I'm just curious: Why did you lump usability for children and accessibility for users with disabilities into one suggestion? Anyway, returning to GNOME should help with the latter.

Here's a perspective on FOSS and Ubuntu. One point of free open source software was that we didn't need to be stuck with features we didn't like, decided by some majority or leadership, because we could fork our own and customize as we like. So the enthusiasm for a central body's responsiveness to feature requests is enthusiasm for something not normally associated with FOSS, which demonstrates one of three things: the Ubuntu ecosystem says it is FOSS but actually operates like something else; the fork-your-own theory of FOSS doesn't apply in practice on large projects with lots of people; or the perspective just related here is missing something.

It is sad to see the modern and reliable Unity 7 put out to pasture. GTK was not used for Unity 7; Canonical chose Qt for Unity 8. KDE can easily replicate the Unity 7 look and feel, and Qt no longer has an objectionable license. It is not clear why GNOME is the choice over basing on KDE.

Just a note about Reproducible Builds: "We've been working with Debian upstream on this over the last few years, and will continue to do so" - well, apart from the regular sync/merge flow from Debian to Ubuntu, AFAIK Canonical never reached out to us Reproducible Builds folks from Debian. That said, we/I plan to reach out to Ubuntu/Canonical soon :)

Just want to chime in with others and say thank you for making this followup post. Regardless of the outcome, distilling the results the way you did means a lot to me and I'm sure others as well. Thanks!

It would be nice to get this list sorted by most surprising as well. A lot of these feedback points seem to fall into the category of "faster horses" which usually tend to dominate when you solicit ideas from the general public.

I would have really liked to see a gnome2-esque style DE: bare bones, with a focus on a great Windows-like taskbar and window management. I really hated the Unity dock thing, and I especially hated the color scheme.

Will this mean that the integrated Amazon search in the desktop is also gone forever? I'd like to see Canonical making money with support instead of that Amazon thing or selling "apps" to the user. That is the number one reason why I do not recommend Ubuntu. Other than that, I'm amazed how much the community was heard! Also thanks for the in-depth analysis blog post :)

I think the best news out of this is hopefully that it seems you (as Canonical) have shifted away from "This is not a democracy. [...] we are not voting on design decisions." and the (paraphrased) "This is good because we [Canonical] made it!"

> This is actually a regular request of Canonical's corporate Ubuntu Desktop customers. We're generally able to meet the needs of our enterprise customers around LDAP and ActiveDirectory authentication. We'll look at what else we can do natively in the distro to improve this.

OK, I get the need that some may have to integrate a UI, but please don't ship a full-blown Samba/winbindd plus config generator as the default.

Here's the why:

Everyone has different LDAP setups. Some use a homegrown LDAP, some use MS AD in varying versions, some use Samba as AD in varying versions - and then everyone uses a different LDAP/AD scheme (e.g. is the username attribute lowercase-able, which attribute is it mapped to, are all PCs/users in a single OU, do you want to restrict logins to specific groups, does the organization need a "full" AD setup or will a plain ldap_bind be sufficient ...) and you almost always need to hand-tune the configuration for your specific setup. A GUI configurator will most likely only work OOTB for people sticking with a standard MS AD, and cause problems with non-standard setups, multi-domain memberships or similar.

And: non-enterprise users will most likely not need AD/LDAP support. Those who do should have competent admins anyhow, but what I can certainly say is that the documentation could be updated (e.g. https://wiki.ubuntuusers.de/Samba_Winbind/ only works for 12.04/14.04). I'd rather the documentation were improved than get yet another shoddy Samba config generator that falls out of sync with Samba sooner rather than later...
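To illustrate why a one-size-fits-all GUI is hard: even a minimal SSSD-based LDAP login setup (one common non-winbind approach) already exposes several of the site-specific knobs mentioned above - the server, the search base, the username attribute, group-based login restrictions. Every value below is a hypothetical placeholder, not a recommended configuration:

```ini
# /etc/sssd/sssd.conf -- hypothetical sketch; every value is site-specific
[sssd]
services = nss, pam
domains = example

[domain/example]
id_provider = ldap
auth_provider = ldap

# Which server, and where users live -- differs per organization
ldap_uri = ldaps://ldap.example.org
ldap_search_base = ou=People,dc=example,dc=org

# Which attribute holds the login name (uid? sAMAccountName? mail?)
ldap_user_name = uid

# Restrict logins to one group -- many sites need this, many don't
access_provider = ldap
ldap_access_filter = memberOf=cn=linux-users,ou=Groups,dc=example,dc=org
```

A GUI generator would have to guess each of these correctly for MS AD, Samba-as-AD, and homegrown directories alike, which is the crux of the objection above.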

I was really happy when that Ask HN thread was first posted and even happier now knowing that all of the comments were read and thoroughly considered. That said, this blog post and the general feeling I get from Ubuntu as a whole suggest that:

1. There is too much of a focus on how far Ubuntu has come and not enough focus on how far things still need to progress. I too remember the days when sleep/hibernate were a crapshoot, but that was when I viewed Ubuntu as an open source alternative without much expectation. Nowadays, I see Ubuntu as a mature desktop and as such, I judge it much more harshly. Anything that doesn't work or isn't 99.99% stable is a red flag for me. So I do hope that the Ubuntu team puts more focus on making things up to date, rock solid, and super stable rather than chasing after new features. Just like a building, you need a stable foundation before building upwards.

2. There isn't a clear target audience for Ubuntu Desktop. What does Ubuntu Desktop want to be? Before it was convergence and while a very neat idea, I was never clear on who was supposed to use it. The requirements screamed high income, tech savvy end users. However, development focused on Unity, Mir, etc. with work going into features that didn't fit the target audience. For example, like the post says, HiDPI & 4K were a surprise. Why was this surprising? The group that would most likely be your early adopters and trend setters are the same exact group that would have this type of hardware. Same with trackpad, gestures, customizability, flux, root on ZFS, security, etc. All of those are used heavily by the demographic most likely to follow the news on Unity and convergence. It baffles my mind that Ubuntu's Product Management couldn't make this connection and understand what core features to build out first in Unity/Mir. Yes, these are on the sidelines now, but I really hope the Product Management team takes the time to figure out some direction.

3. At the moment, Ubuntu is at a major crossroads. Even after Mark Shuttleworth's post, this post, and all of my usual Linux news following, I don't really know where Ubuntu Desktop is going to go. Tell us what we can expect as users. Tell us when we can expect it to come. Tell us how you intend on getting there. And most importantly, tell us how we can help! Either through regular posts to various communities like the Ask HN one, or ways we can contribute actual work. Not everyone is a dev, but as an example, I used to do professional QA and yet I found it extremely difficult to find out how I could help QA things and submit useful bug reports (this isn't just Ubuntu but most open source projects). The usual "check the docs/wiki" or "submit something on the issue tracker" are not helpful. In all of my years using Linux, that Ask HN thread + this blog post was the first time I ever felt like I was heard and managed to contribute to Ubuntu. Even something as simple as periodically getting feedback from the community and telling us what you heard makes me feel more optimistic about Ubuntu's future.

I apologize for the rant-like nature of my comment, but hopefully this gets read and something positive comes out of it. Thanks for reading.

Instead of adopting Gnome3, they should adopt KDE. It's far more customizable (so "Easily customize, relocate the Unity launcher (53 weight)" would be already done) and much better architected than Gnome. And for people lamenting the loss of Unity, it wouldn't be that hard to make a custom theme for KDE which largely replicates the look-n-feel of Unity. Gnome simply is not set up to allow any kind of customization, and the devs actively discourage it. The opposite is true for KDE, and a distro that wants to stand out with its UI would be better served with a DE that allows them the freedom of customization.

I may be a minority, but I am very saddened by this. Not because I have any particular love for Unity, but rather I share Mark's conviction that convergence is the future.

Love or hate it but Unity was IMO the best shot we had at getting an open source unified phone, tablet and desktop experience...and now this is effectively Canonical not only shutting down Unity, but refocusing efforts away from convergence and towards more traditional market segments. I mourn the death of this innovative path.

That said, hopefully this move to GNOME will eventually lead back to convergence... but for now that dream is dead, it would seem.

I'm one of the people who asked for less NIH in Ubuntu in the recent thread https://news.ycombinator.com/item?id=14002821 but I didn't think they would take it this far. Jokes aside, it's sad that Unity won't be developed further.

I'm one of the ones who loves Unity 7, it's always been faster and less memory hungry than GNOME or KDE for me. I will just have to cling onto the LTS for as long as possible.

In the long-term I think this is good for Ubuntu and Linux users in general, less diversity can sometimes help an ecosystem form. I think many users just want a DE to stay out of the way and make life easier, so I hope some of the Ubuntu ease of use focus and community will get injected back into GNOME. I really hope a huge flood of users coming back forces them to look at their memory usage and get it under control.

I understand that choice in the Linux world is very important, but I also think that choice (taken to extremes) can be crippling. My opinion is that we have too many desktop environments, and too many distros.

If we imagine a hypothetical scenario where in June 2010 Ubuntu committed to Gnome as the DE, imagine how much progress would have been made with Gnome in the last 7 years, not just from a coding perspective, but from a community and social perspective.

I consider it supremely important that we educate as many computer users as possible about the negative side of proprietary software (lock in subscriptions, proprietary file formats, closed source privacy concerns etc).

What Ubuntu did back in 2010 (I think) did major damage to that vision.

I applaud Mark Shuttleworth for making the decision, even if he only got there because of commercial reasons. I really hope that Canonical and Red Hat can work together to make Gnome not just a technological success, but a social one too.

I get that they're moving away from convergence, but what does this ultimately mean for Ubuntu as a mobile OS? In the grand scheme of things, what does this mean for users who want a completely FOSS stack for their phone (let's ignore the baseband for now)?

As far as I can tell, this just means that your only options are Android or iOS. It's not easy to get a Jolla/SailfishOS phone that will work on most Canadian or USA networks, and with this announcement it seems that Ubuntu phones won't be around for much longer. This coupled with the death of Firefox OS means that there's really not much of a choice. Certainly you can run AOSP with no Google Apps, but not having Google Play Services tends to cause more and more problems, or at the very least means your phone is less and less capable as time goes on.

I guess in general we can all celebrate that Ubuntu is moving to GNOME / Wayland and is ditching convergence, but I think the fact that there's no healthy alternative to iOS / Android is quite sad. If Canonical is exiting the mobile space to work on other things, what other alternatives do users have?

I might be a small minority, but I _like_ Unity 7. I have never used Unity 8, and I thought the Ubuntu phone was misspent effort, but wow. Now I have to figure out if there is a way to style Gnome to look like Unity 7.

This is good news for Linux on the desktop. Not that diversity isn't good, but Unity has been stale for years while Gnome has been progressing, though suffering from the fragmentation that Canonical caused.

Hopefully this will result in more contributions upstream, which will benefit all linux distributions. This was always the main complaint with Canonical.

> We will shift our default Ubuntu desktop back to GNOME for Ubuntu 18.04 LTS

This is huge and was my #1 request in the previous post for Ubuntu 17.10. Gnome on Fedora is amazing, and I have had people walk up to me and ask what OS I'm running.

It is so much better for Ubuntu and Redhat to have joint stewardship of Gnome going forward rather than split energy on wasted competition.

My next biggest request is flatpak vs snappy - I can't believe that the package management wars are beginning all over again in 2017. Just pick one and be done with it. RPM and DEB will never converge, but we have a narrow window of opportunity with flatpak and snappy.

This is awesome. When that "What do you want to see in Ubuntu 17.10?" post https://news.ycombinator.com/item?id=14002821 was up recently, I wanted to say "Get rid of the abomination that is Unity" but figured it'd just be flamebait. Little did I know how close my dreams were to coming true!

It takes courage to reflect on previous decisions and reconsider your product strategy. I am quite impressed by Mark Shuttleworth's decision to move away from Unity and the desktop/phone convergence that has been slowing down innovation for Ubuntu and allowing other distributions to catch up.

Every Linux user should benefit from this decision; I am excited to see the improvements they will make to the Gnome environment.

I'm entirely convinced that Shuttleworth's vision of convergence will happen. It looks like an inevitability, as mobile computing power continues to grow faster than typical consumer workloads (the same forces already made it possible for $400 laptops to be good enough for most mainstream users).

Canonical just didn't have the resources to push a 3rd mobile platform. Hell, even Microsoft gave up (who did have the resources, and IMO made a mistake in giving up).

GNOME has been doing some really cool stuff as of late (http://www.omgubuntu.co.uk/2017/03/top-features-in-gnome-3-2...). I'm still using Cinnamon, because I still like the look a bit more, but it's getting harder to ignore all of the excellent features GNOME provides, including:

* Drive support for the file manager
* Gmail/Outlook support for GNOME accounts and built-in calendars
* A working Wayland implementation

I'm holding out at the moment, as it's missing one feature from Cinnamon that I really like (the ability to launch and control any audio player from the sound icon in the tray), but when Fedora 26 launches I may finally have to switch over.

I hope that Canonical shifting back to GNOME will further its development under Wayland and not spend a crapload of time doing more work for Mir.

Unity has often been criticized because it was a Canonical thing and didn't leverage Wayland, but it's another world compared to Gnome shell. I couldn't like gnome shell no matter what.

I was using Gnome shell on an old eeepc. The interface is dumbed down and you have to install a bunch of extensions to make it as functional as Unity.

What was worse is that the launcher automatically triggered the search function, and that slowed the PC down to a crawl. I'm using KDE on it now and, even though it's less stable, it has a decent interface and is surprisingly snappy.

I just got the Ubuntu phone. It's the first smart-phone that I don't hate. I do agree that Ubuntu needs to stop with their NIH syndrome when it comes to desktop, but the phone market is just flat out terrible.

I would be willing to spend a lot of money for a decent FOSS phone, but there just isn't anything out there. Ubuntu was my only hope |:(

Wow, this is impressive coming just days after the Ask HN they did. Users have been complaining about and opposing Mir for a few years now, so it seems it just took that last cry of feedback. It's great that they listen to their users, and also that some love is going back to the desktop+server (+IoT).

"If your guy is involved in criminal activity and has to have criminal lawyers of the caliber of these two gentlemen, who are the best, well, okay, they got the best. But it's a problem I can't solve for you. And if you think I'm going to cut you some slack because you're looking at -- your guy is looking at jail time, no. They [Waymo] are going to get the benefit of their record. And if you don't deny it -- if all you do is come in and say, 'We looked for the documents and can't find them,' then the conclusion is they got a record that shows Mr. Levandowski took it, and maybe still has it. And he -- he's still working for your company. And maybe that means preliminary injunction time. Maybe. I don't know. I'm not there yet. But I'm telling you, you're looking at a serious problem."

...

"Well, why did he take [them] then?". "He downloaded 14,000 files, he wiped clean the computer, and he took [them] with him. That's the record. He's not denying it. You're not denying it. No one on your side is denying he has the 14,000 files. Maybe you will. But if it's going to be denied, how can he take the 5th Amendment? This is an extraordinary case. In 42 years, I've never seen a record this strong. You are up against it. And you are looking at a preliminary injunction, even if what you tell me is true."

Uber is having a very bad day when a Federal judge starts talking like that. A preliminary injunction looks likely. If Uber can't find anything, this goes against them. Nobody has denied that Levandowski copied the files. Uber paid $600 million for Otto's technology and people. Even if the files didn't make it to Uber's computers, Waymo can probably get a preliminary injunction shutting down much of Uber's self-driving effort. Then Uber gets to argue that their technology is different from Waymo's. It's going to be hard to argue independent invention when all the people are from Google's project.

This judge is mighty impressive, and since it's so much in fashion these days to be suspicious of institutions, I want to highlight this passage:

THE COURT: If you all keep insisting on redacting so much information, like -- and you're the guilty one on that, Mr. Verhoeven -- then arbitration looks better and better. Because I'm not going to put up with it. If we're going to be in a public proceeding, 99 percent of what -- 90 percent, anyway, has got to be public.[..]

THE COURT: The best thing -- if we were -- one of the factors that you ought to be considering is maybe you should -- if you want all this stuff to be so secret, you should be in arbitration. You shouldn't be trying to do this in court and constantly telling them not to, or you putting in -- the public has a right to see what we do. [..] And I feel that so strongly. I am not -- the U.S. District Court is not a wholly owned subsidiary of Quinn Emanuel or Morrison & Foerster or these two big companies. We belong to the public. And if this continues, then several things are going to happen. One, we're going to call a halt to the whole -- we're going to stop everything. And we're going to have document-by-document hearings in this room,

It is surprising that Google did not push the court to appoint a third party discovery firm to handle the device imaging process and to provide a report to the court.

Maybe both parties' intense desire for privacy in this matter has driven Google to this strategy.

The seeming ludicrousness of the result - Alsup's "go try again, harder this time" - is not caused by this case's parties playing badly. It is caused by poorly defined and understood laws surrounding what constitutes a defensible search. Data handling in this stage of legal proceedings is imperfect, and can be manipulated by both parties to drive up the cost of litigation, or to strategically avoid disclosing the key breadcrumb documentation that would otherwise have led to the smoking gun(s).

'To the extent Uber tries to excuse its noncompliance on the grounds that Mr. Levandowski has invoked the Fifth Amendment and refused to provide Uber with documents or assistance, Waymo notes that Mr. Levandowski remains to this day an Uber executive and in charge of its self-driving car program. "Uber has ratified Mr. Levandowski's behavior and is liable for it," Waymo attorney Charles K. Verhoeven wrote in a letter to the court (emphasis his).'

Interesting statement here from the judge and Uber's attorney (Gonzalez). Gonzalez worked for Alsup at some point in their careers.

Judge Alsup: Look. I want you to know I respect both sides here. And everyone knows I know Mr. Gonzalez from the days when he was a young associate and I was a partner, and he was working for me on cases. And he has gone on to be a much better lawyer than I ever was. But you shouldn't have asked for in camera on this. This could have all been done in the open. I'm sorry that Mr. Levandowski has got his -- got himself in a fix. That's what happens, I guess, when you download 14,000 documents and take them, if he did. But I don't hear anybody denying that.

I just realized how stark the prisoner's dilemma between Uber and Levandowski is. Based on what Alsup was saying today (if Uber can't produce counter-evidence by May 3rd, they are staring down the barrel of a preliminary injunction), they're damned if they fire him and damned if they don't.

2. Uber fires Levandowski. Now, he has no reason to protect Uber, the incentives for him are to avoid criminal prosecution. He could even do a deal with Google or a prosecutor to cooperate in the civil case in exchange for avoiding criminal prosecution. Uber is then likely to lose the actual case, sad trombone, no self driving cars for Uber.

As others have pointed out, the stakes for Uber are incredibly high: they missed the China train, and if they can't catch the self-driving-car train, then their $50+ billion valuation is up in smoke.

This story is fascinating for tech people everywhere and we should all pay attention.

We all have big dreams of starting our own company some day (I know I do), and many of us work for big corporations that would rather we never go anywhere and work for as little as possible (admittedly the markets are forcing them to pay us a lot, but they aren't doing it out of goodwill).

The outcome of this will teach us all very valuable lessons. I can't be the only one who is a little paranoid that if I start my own shit I'll be sued or that I may even be sued for some of the side projects I'm working on even though I've never taken any code or resources from my company.

A lot of people seem confused by the idea that a party can request personal documents someone else has.

Just like in criminal land, civil land has subpoenas. Parties can issue subpoenas for most things to other parties. In federal court, civil subpoenas are covered by Federal Rules of Civil Procedure rule 45.

"[The Judge] told Uber to search using 15 terms provided by Waymo, first on the employees' computers that had already been searched, then on 10 employees' computers selected by Waymo, and then on all other servers and devices connected to employees who work on Uber's LiDAR system."

Seems interesting that there's not a more comprehensive system or way to search for these since Google is clearly in possession of the specific documents they claim are stolen.

The way they're carrying out the Judge's order to look for "15 terms" almost makes it seem like the extent of the original search was tied to file names or document titles or something?

It seems to me that even if Uber proves that Levandowski never downloaded the files to Otto, much less Uber, they still are in deep trouble unless they can prove that he never laundered the information in them through his brain to help Otto or Uber develop their technology.

I don't really get how the orders to search for documents on employee-owned devices are possibly enforceable. What stops employees with incriminating data from just throwing their devices in a river before they can be searched?

It seems like Uber has to prove a negative here - because Google has evidence Levandowski took the files, they need to show they don't have them? Or that the files weren't involved in their self-driving IP? Not sure how they're supposed to do that.

Based on personal experience this isn't surprising. When I was in college in a small town in the late '00s, many friends and acquaintances would drive across town after heavy drinking: 10-15 minutes, straight roads, little traffic. Other than not going out, or not getting home, you didn't have many options. It was an hour-long walk or a $20-40 cab ride, if you could get one.

I was recently back in town for a wedding and our uber to the hotel from the venue (about the same distance) was $9.

Once I came to town (now on the East Coast), I instantly noticed that many friends who used to drive would take an Uber. It's cheap, it's easy to call one from a crowded/loud place, you know how much it will cost, they don't use cash, they know where you are, and you don't have to give directions. For someone intoxicated (or anyone, really), these are game changers.

It's the difference between "who's going to drive?" and "who's calling an uber?"

Company politics aside, the accessibility of ride-sharing services introduces numerous real safety benefits on top of the obvious convenience.

Another study, from July 2016, said that Uber doesn't prevent many drunk driving accidents, although the study referenced in the Economist only focused on NYC, whereas the 2016 study covered multiple metropolitan areas.

Hmm, the study [0] credits the overall reduction of drunk driving accidents to Uber. If ride sharing is the source of the reduction, shouldn't ride sharing in general be credited? Maybe Uber was the only ride-share available during the study, though, since it's data from 1989-2013.

"A recent increase in the ease and availability of alternative rides for intoxicated passengers partially explains the steep decrease in alcohol-related collisions in New York City since 2011. I examine the specific case of Uber's car service launch in New York City in May 2011, a unique example of a sudden increase in cab availability for intoxicated passengers. This study draws on a dataset of all New York State alcohol-related collisions maintained by the New York State Department of Motor Vehicles from 1989 through 2013. My inference is based on the variation in Uber access across New York State counties over time and the careful choice of New York State counties that provide an appropriate control group for New York City's drunk-driving behavior"

Edit:

Fair enough, it looks like Lyft only came to NYC around 2014 [1]. But does anyone know if ride-share prices in NYC have significantly changed from 2011 [2] to now? I vaguely remember a lot of people using it initially because of dirt-cheap prices during the first few months after introduction, but I don't trust my memory over facts if someone has some.

Another bright light on the horizon: self-driving cars. I'm hugely optimistic that within the next 100 years, we could very possibly see deaths from drunk driving shrink to a small fraction of its current value. It's a great thing when profit-driven businesses also have positive side effects for everyone.

Why are people being so critical about this work? Sure, the blog post provides a simplified picture about what the system is actually capable of, but it's still helpful for a non-ML audience to get a better understanding of the high-level motivation behind the work. The OpenAI folks are trying to educate the broader public as well, not just ML/AI researchers.

Imagine if this discovery were made by some undergraduate student who had little experience in the traditions of how ML benchmark experiments are done, or was just starting out her ML career. Would we be just as critical?

As a researcher, I like seeing shorter communications like these, as it illuminates the thinking process of the researcher. Read ML papers for the ideas, not the results :)

I personally don't mind blog posts that have a bit of hyped-up publicity. It's thanks to groups like DeepMind and OpenAI that have captured public imagination on the subject and accelerated such interest in prospective students in studying ML + AI + robotics. If the hype is indeed unjustified, then it'll become irrelevant in the long-term. One caveat is that researchers should be very careful to not mislead reporters who are looking for the next "killer robots" story. But that doesn't really apply here.

If you are interested in looking at the model in more detail, we (@harvardnlp) have uploaded the model features to LSTMVis [1]. We ran their code on amazon reviews and are showing a subset of the learned features. Haven't had a chance to look further yet, but it is interesting to play with.

> We first trained a multiplicative LSTM with 4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text. Training took one month across four NVIDIA Pascal GPUs

Wait, what? How did "232 examples" transform into "82 million"??

OK, I get it: they pretrained the network on the 82M reviews, and then trained the last layer to do the sentiment analysis. But you can't honestly claim that you did great with just 232 examples!
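The two-stage recipe described here (large-scale unsupervised pre-training, then a tiny supervised linear read-out) can be sketched in plain Python. The character n-gram features and perceptron below are crude stand-ins for the mLSTM's learned representation and the paper's logistic regression; every sentence and name here is made up for illustration, and the point is only the shape of the pipeline, where the sentiment classifier itself sees just a handful of labeled examples:

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character trigram counts -- a crude stand-in for learned features."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

# Stage 1 (stand-in for unsupervised pre-training): fix the feature
# vocabulary using a large *unlabeled* corpus of reviews.
unlabeled = [
    "this product was great, works exactly as described",
    "terrible quality, broke after two days, want a refund",
    "decent value for the price, shipping was fast",
    "absolutely love it, would buy again",
    "worst purchase i have ever made",
]
vocab = sorted(set().union(*(char_ngrams(t) for t in unlabeled)))

def featurize(text):
    counts = char_ngrams(text)
    return [counts.get(g, 0) for g in vocab]

# Stage 2: train only a tiny linear read-out (a perceptron here) on a
# handful of labeled examples, analogous to fitting a classifier on the
# pretrained model's representation. +1 = positive, -1 = negative.
labeled = [
    ("great product, love it", 1),
    ("works exactly as described", 1),
    ("would buy again", 1),
    ("terrible quality, broke fast", -1),
    ("want a refund, worst purchase", -1),
]
weights = [0.0] * len(vocab)
for _ in range(20):                      # a few perceptron epochs
    for text, label in labeled:
        x = featurize(text)
        score = sum(w * xi for w, xi in zip(weights, x))
        if score * label <= 0:           # misclassified: nudge weights
            weights = [w + label * xi for w, xi in zip(weights, x)]

def predict(text):
    s = sum(w * xi for w, xi in zip(weights, featurize(text)))
    return 1 if s > 0 else -1

print([predict(t) for t, _ in labeled])
```

The asymmetry is the whole trick: all the heavy lifting happens in stage 1 on unlabeled text, so stage 2 can get away with very few labels - which is why "232 examples" is honest about the classifier but not about the total data the system consumed.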

I've been noticing a lot of work that digs into ML model internals (as they've done here to find the sentiment neuron) to understand why they work or use them to do something. Let me recall interesting instances of this:

1. Sander Dieleman's blog post about using CNNs at Spotify to do content-based recommendations for music. He didn't write about the system performance but collected playlists that maximally activated each of the CNN filters (early layer filters picked up on primitive audio features, later ones picked up on more abstract features). The filters were essentially learning the musical elements specific to various subgenres.

2. The ELI5 - Explain Like I'm Five - Python library. It explains the outputs of many linear classifiers. I've used it to explain why a text classifier gave a certain prediction: it highlights features to show how much or how little they contribute to the prediction (dark red for negative contribution, dark green for positive contribution).

3. FairML: Auditing black-box models. Inspecting the model to find which features are important. With privacy and security concerns too!

Since deep learning/machine learning is very empirical at this stage, I think improvements in instrumentation can lead to ML/DL being adopted for more kinds of problems. For example: chemical/biological data. I'd be highly curious what new ways of inspecting such kinds of data would be insightful (we can play audio input that maximally activates filters for a music-related network, we can visualize what filters are learning in an object detection network, etc.)
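For linear models, the per-feature highlighting described in item 2 boils down to a simple computation: each feature's contribution to a prediction is its weight times its value, and you sort by magnitude to find what drove the decision. A toy version, with hand-picked hypothetical weights standing in for coefficients a real classifier would learn:

```python
# Toy linear sentiment model: one weight per token. These values are
# hypothetical, standing in for learned classifier coefficients.
weights = {"great": 2.0, "love": 1.5, "slow": -1.2,
           "broken": -2.5, "refund": -1.8}

def explain(sentence):
    """Return (total score, [(token, contribution)] ranked by magnitude) --
    the same idea ELI5 visualizes with red/green highlighting."""
    tokens = sentence.lower().split()
    contribs = [(tok, weights.get(tok, 0.0)) for tok in tokens]
    score = sum(c for _, c in contribs)
    ranked = sorted(contribs, key=lambda tc: abs(tc[1]), reverse=True)
    return score, ranked

score, ranked = explain("great camera but broken screen, want a refund")
print(f"score={score:+.1f}")
for tok, c in ranked:
    if c:
        print(f"{tok:>8}: {c:+.1f}")
```

The same decomposition underlies tools like ELI5 and FairML for linear models; for deep networks the contributions have to be approximated instead (e.g. by perturbing inputs), which is exactly what makes the instrumentation work above harder and more interesting.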

> The model struggles the more the input text diverges from review data

This is where I fear the results will fail to scale. The ability to represent 'sentiment' as one neuron, and its ground truth as uni-dimensional seems most true to corpuses of online reviews where the entire point is to communicate whether you're happy with the thing that came out of the box. Most other forms of writing communicate sentiment in a more multi-dimensional way, and the subject of sentiment is more varied than a single item shipped in a box.

In other words, the unreasonable simplicity of modelling a complex feature like sentiment with this method is something of an artifact of this dataset.

I would imagine stuff like sarcasm is still out of reach, though. It seems hard even for humans to understand it in text-based communication. Also, anything outside the standard sentiment model might throw it off: "This product is as good as <product x>" (where product x has been known to perform badly). I am just trying to think of scenarios where a sentiment model would fail.

The sentiment neuron sounds fascinating too. I didn't realize individual neurons could be talked about or understood outside the context of the NN as a whole. I am thinking in terms of the "black box" it's often referred to as in some articles.

Since one of the research goals for OpenAI is to train a language model on jokes [0], I wonder how this neuron would perform on a joke corpus.

I think one of the most amazing parts of this is how accessible the hardware is right now. You can get world-class AI results with the cost of less than most used cars. In addition, with so many resources freely available through open-source, the ability to get started is very accessible.

"The sentiment neuron within our model can classify reviews as negative or positive, even though the model is trained only to predict the next character in the text."

If you look closely at the colorized paragraph in their paper/website, you can see that the major sentiment jumps (e.g. from green to light-green and from light-orangish to red) occur with period characters. Perhaps the insight is that periods delineate the boundary of sentiment. For example:

* I like this movie.
* I liked this movie, but not that much.
* I initially hated the movie, but ended up loving it.

The period tells the model that the thought has ended.

My question for the team: How well does the model perform if you remove periods?

Can someone explain what is "unsupervised" about this? I'm guessing this is what confuses me most.

I think this work is interesting, although when you think about it, it's kind of normal that the model converges to a point where there is a neuron that indicates whether the review is positive or negative. There are probably a lot of other traits that can be found in the "features" layer as well.

There are probably neurons that can predict the geographical location of the author, based on the words they use.

There are probably neurons that can predict that the author favors short sentences over long explanations.

I think it's fair to criticize this blog post for being unclear on what exactly is novel here; pre-training is a straightforward and old idea, but the blog post does not even mention this. Having accessible write-ups for AI work is great, but surely it should not be confusing to domain experts or be written in such a way as to exacerbate the rampant oversimplification or misreporting in the popular press about AI. Still, it is a cool, mostly experimental/empirical result, and it's good that these blog posts exist these days.

For what it's worth, the paper predictably does a better job of covering the previous work and stating what their motivation was: "The experimental and evaluation protocols may be underestimating the quality of unsupervised representation learning for sentences and documents due to certain seemingly insignificant design decisions. Hill et al. (2016) also raises concern about current evaluation tasks in their recent work which provides a thorough survey of architectures and objectives for learning unsupervised sentence representations - including the above mentioned skip-thoughts. In this work, we test whether this is the case. We focus in on the task of sentiment analysis and attempt to learn an unsupervised representation that accurately contains this concept. Mikolov et al. (2013) showed that word-level recurrent language modelling supports the learning of useful word vectors and we are interested in pushing this line of work. As an approach, we consider the popular research benchmark of byte (character) level language modelling due to its further simplicity and generality. We are also interested in evaluating this approach as it is not immediately clear whether such a low-level training objective supports the learning of high-level representations." So, they question some built-in assumptions from the past by training on lower-level data (characters), with a bigger dataset and more varied evaluation.

The interesting result they highlight is that a single model unit is able to perform so well with their representation: "It is an open question why our model recovers the concept of sentiment in such a precise, disentangled, interpretable, and manipulable way. It is possible that sentiment as a conditioning feature has strong predictive capability for language modelling. This is likely since sentiment is such an important component of a review", which I tend to agree with... train on a whole lot of reviews, and it's only natural to learn a regressor for review sentiment.

What they have done is semi-supervised learning (Char-RNN) + supervised training of sentiment. Another way to do it is semi-supervised learning (Word2Vec) + supervised training of sentiment. If the first approach works better, does it imply that character-level learning is more performant than word-level learning?
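Both pipelines described above share the same shape: learn a representation without labels, then fit a small supervised model on the frozen features. A minimal sketch of the word-level variant, where the toy vectors and reviews are made up for illustration and merely stand in for real Word2Vec output:

```python
import numpy as np

# Toy stand-ins for pretrained word vectors (in practice: Word2Vec output).
word_vecs = {
    "great": np.array([1.0, 0.2]),
    "awful": np.array([-1.0, 0.1]),
    "movie": np.array([0.0, 0.5]),
}

def featurize(review):
    """Unsupervised step: represent a review as the mean of its word vectors."""
    return np.mean([word_vecs[word] for word in review.split()], axis=0)

# Supervised step: logistic regression on the frozen features.
reviews = ["great movie", "awful movie", "great great movie", "awful awful movie"]
labels = np.array([1, 0, 1, 0])
X = np.array([featurize(r) for r in reviews])

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))        # predicted sentiment probability
    w -= 0.5 * X.T @ (p - labels) / len(labels)  # gradient step on the weights
    b -= 0.5 * (p - labels).mean()               # gradient step on the bias

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds)  # matches labels on this toy data
```

The character-level pipeline is identical in structure; only the featurizer changes (a Char-RNN's hidden state instead of averaged word vectors).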

This is a great name for a band :-). That said, I found the paper really interesting. I tend to think about LSTM systems as series expansions and using that as an analogy don't find it unusual that you can figure out the dominant (or first) coefficient of the expansion and that it has a really strong impact on the output.

It's impressive what abstraction NNs can achieve from just character prediction. Do the other systems they compare to also use the 81M Amazon reviews for training? It seems disingenuous to claim "state-of-the-art" and "less data" if they haven't.

Walter, thank you so much for finally doing this! I am so happy that Symantec finally listened. It must have been really frustrating to have to wait so long for this to happen. I have really been enjoying D and I love all the innovation in it. I'm really looking forward to seeing the reference compiler packaged for free operating systems.

Switched to D 4 years ago, and have never looked back. I wager that you can sit a C++/Java/C# veteran down and say: write some D code, here's the manual, have fun. Within a few hours they will be comfortable with the language and be a fairly competent D programmer. There's very little FUD surrounding switching to yet another language with D.

D's only issue is that it lacks general adoption, which I'm willing to assert is only because it's not on the forefront of the cool kids' language of the week. Which is a good thing. New does not always mean improved. D has a historical nod to languages of the past: it tries to improve on the strengths of C/C++, smooth out the rough edges, and adopt more modern programming concepts. Especially in trying to be ABI compatible, it's a passing of the torch from the old guard to the new.

Regardless of your thoughts on D, my opinion is I'm sold on it; it's here to stay. In 10 years D will still be in use, whereas the fad languages will just be footnotes in Computer Science history as nice experiments that brought in new ideas but were too far out on the fringes, limiting themselves to the "thing/fad" of that language.

Since I see some comments in this thread asking what D can be used for, or why people should use D, I'm putting below an Ask HN thread that I started some months ago. It got some interesting replies:

Please don't get me wrong, as I don't want to start a flame here, but why do they call D a "systems programming language" when it uses a GC? Or is it optional? I'm just reading through the docs. They do have a command line option to disable the GC but anyway...this GC thing is, imho, a no-go when it comes to systems programming. It reminds me of Go that started as a "systems programming language" too but later switched to a more realistic "networking stack".

Interesting change! Before, people had a choice between the proprietary Digital Mars D (dmd) compiler, or the GCC-based GDC compiler. And apparently, since the last time I looked, also the "LDC" compiler that used the already-open dmd frontend but replaced the proprietary backend with LLVM.

I wonder how releasing the dmd backend as Open Source will change the balance between the various compilers, and what people will favor going forward?

Something I always thought was cool about dlang was that you can talk to the creator of the programming language on the forums. I don't write much D code as of now, but I still visit the forums every day for the focused technical discussions. Anyway, congrats on the big news!

It's really surprising that to this day there are languages in use whose reference implementation is closed source. All the optimization and collaboration that become possible when it's open are invaluable.

This was something that always rubbed me the wrong way about the language, and it was an impediment for adoption for me (for D, but also Shen and a few others). In this era, there is no excuse for a closed source reference compiler (I could care less if it's not a reference compiler, I just won't use it). I'm surprised it took this long to do this, it seems like D has lost most of its relevance by now...relevance it could have kept with a little more adoption. I wonder if it can recover.

I wanted to play around with D using the DMD compiler, but unfortunately I have to install VS2013 and the Windows SDK to get 64-bit support on Windows. I've installed VS in the past and found it to be a bloated piece of software; it's something I'm not willing to do again.

This is one of those announcements that seems unremarkable on read-through but could be industry-changing in a decade. The driving force behind consolidation & monopoly in the tech industry is that bigger firms with more data have an advantage over smaller firms because they can deliver features (often using machine learning) that users want and small startups or individuals simply cannot implement. This, in theory, provides a way for users to maintain control of their data while granting permission for machine-learning algorithms to inspect it and "phone home" with an improved model, without revealing the individual data. Couple it with a P2P protocol and a good on-device UI platform and you could in theory construct something similar to the WWW, with data stored locally, but with all the convenience features of centralized cloud-based servers.

At that time I was working at a healthcare startup, and the ramifications of consensus algorithms blew my mind, especially given the constraints of HIPAA. This could be massive within the medical space, being able to train an algorithm with data from everyone, while still preserving privacy.

The key algorithmic detail: it seems they have each device perform multiple batch updates to the model, and then average all the multi-batch updates. "That is, each client locally takes one step of gradient descent on the current model using its local data, and the server then takes a weighted average of the resulting models. Once the algorithm is written this way, we can add more computation to each client by iterating the local update."

They do some sensible things with model initialization to make sure weight-update averaging works, and show that in practice this way of doing things requires less communication and gets to the goal faster than a more naive approach. It seems like a fairly straightforward idea from the baseline SGD, so the contribution is mostly in actually doing it.
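As a rough illustration of the averaging scheme the quote describes, here is a minimal NumPy sketch of federated averaging on a toy linear-regression problem. The client data, learning rate, local epochs, and round count are all made up for the example, not taken from the paper:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Client side: several local gradient steps on this client's own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_averaging(global_w, client_data):
    """Server side: one round. Clients train locally; server averages the
    resulting models, weighted by each client's sample count."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy demo: two clients whose data comes from the same true model y = 3x.
rng = np.random.default_rng(0)
clients = []
for n in (50, 150):
    X = rng.normal(size=(n, 1))
    clients.append((X, 3.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(20):  # 20 communication rounds
    w = federated_averaging(w, clients)
print(w)  # approaches [3.]
```

Adding more local epochs per round trades communication for on-device computation, which is exactly the lever the quoted passage points at.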

"Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud."

So I assume this would help with privacy in a sense that you can train model on user data without transmitting it to the server. Is this in any way similar to something Apple calls 'Differential Privacy' [0] ?

"The key idea is to use the powerful processors in modern mobile devices to compute higher quality updates than simple gradient steps."

"Careful scheduling ensures training happens only when the device is idle, plugged in, and on a free wireless connection, so there is no impact on the phone's performance."

This is quite amazing, beyond the homomorphic privacy implications being executed at scale in production -- they're also finding a way to harness billions of phones to do training on all kinds of data. They don't need to pay for huge data centers when they can get users to do it for them. They also can get data that might otherwise have never left the phone in light of encryption trends.

This is speculative, but it seems like the privacy aspect is oversold, as it may be possible to reverse engineer the input data from the model updates. The point is that the model updates themselves are specific to each user.

Why did they not build something like this? I'm kind of concerned that my private keyboard data is being distributed without security. The secure aggregation protocol doesn't seem to be doing anything like this.

Even if this only allowed device based training and not privacy advantages it's exciting as a way of compression. Rather than sucking up device upload bandwidth you keep the data local and send the tiny model weight delta!

To be honest, I have thought about this for a long time for distributed computing. If we have a problem that takes a lot of time to compute but can be computed in small pieces and then combined, why can't we pay users to subscribe for the computation? This is a major step toward the big goal.

Where is the difference between that and distributed computing? Apart from the specific usage for ML, I don't see many differences; seti@home was an actual revolution made of actual volunteers (I don't know how many Google users will be aware of this).

I think the implications go even beyond privacy and efficiency. One could estimate each user's contribution to the fidelity gains of the model, at least as an average within a batch. I imagine such an attribution being rewarded in money or credibility in the future.

I would argue there is no such thing. After the update, the model will now incorporate your training data as a seen example; clever use of optimization would enable you to partly reconstruct the example.

Is somebody able to explain why certain e-ink displays are so slow to refresh while others are much faster? For instance, my Garmin Vivoactive HR display, which is even capable of displaying 64 colors, is apparently like an LCD in terms of refresh rate; you can't easily see the difference. Meanwhile, the one used to build this project takes a lot of time to show even a single frame (see the YouTube video where the display is presented, following the link Jann provided in the blog post). My best guess is that they use completely different technologies.

EDIT: The Vivoactive HR actually uses a transflective LCD. This web site explains very well how it works:

It looks like there's a business opportunity here for someone to make a really slick browser based LEGO editor that does cost estimates and orders all the correct components for you when you're finished. I'm curious how large the market for such a thing would be.

Would anyone else choose a different software solution rather than Docker with resin.io? I love working on projects like this, but I've stayed away from Docker so far. Docker plus a third-party service to manage it seems like it could be overkill, but it obviously got the job done.

There is much discussion here regarding quantum efficiency (QE). Keep in mind that figures for sensors are generally _peak_ QE for a given colour filter array element. These can be quite high like 60-70%.

But - this is an 'area under the graph' issue. While it may peak at 60%, it can also fall off quickly and be much less efficient as the wavelength moves away from the peak for say red/green/blue.

From what I can tell from the tacky promo videos, the sensor is very sensitive for each colour over a wide range of wavelengths, probably from ultraviolet right up to 1200nm. That's a lot more photons being measured in any case, but especially at night.

Their use of the word 'broadband' sums it up. It's more sensitive over a much larger range of frequencies.

I also wouldn't be surprised if they are using a colour filter array with not only R/G/B but perhaps R/G/B/none or even R/IR/G/B/none. The no filter bit bringing in the high broadband sensitivity with the other pixels providing colour - don't need nearly as many of those.

Edit - one remarkable thing for me: based on the rough size of the sensor and the depth of field in the videos, this isn't using a lens much faster than about f/2.4. You'd think it would be f/1.4 or thereabouts to get way more light, but there is far too much DoF for that.

It would be interesting to see how this compares to theoretical limits. At a given brightness and collecting area, you get (with lossless optics) a certain number of photons per pixel per unit time. Unless your sensor does extraordinarily unlikely quantum stuff, at best it counts photons with some noise. The unavoidable limit is "shot noise": the number of photons in a given time is Poisson distributed, giving you noise according to the Poisson distribution.
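That shot-noise limit is easy to check numerically: for Poisson-distributed photon counts with mean N, the noise is sqrt(N), so the signal-to-noise ratio is N / sqrt(N) = sqrt(N). A small Python sketch (purely illustrative, not tied to this particular sensor):

```python
import numpy as np

rng = np.random.default_rng(42)

# Photon arrivals are Poisson: for a mean of N photons per pixel per exposure,
# the standard deviation is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
for mean_photons in (10, 100, 10_000):
    counts = rng.poisson(mean_photons, size=1_000_000)  # simulated pixel reads
    snr = counts.mean() / counts.std()
    print(f"N={mean_photons:>6}  measured SNR={snr:7.2f}  sqrt(N)={np.sqrt(mean_photons):7.2f}")
```

The practical upshot is that a broadband sensor helps precisely because it raises N: collecting photons over a wider range of wavelengths pushes sqrt(N) up and the relative noise down.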

At nonzero temperature, you have the further problem that your sensor has thermally excited electrons, which aren't necessarily a problem AFAIK. More importantly, the sensor glows. If the sensor registers many of its own emitted photons, you get lots of thermal noise.

Good low noise amplifiers for RF that are well matched to their antennas can avoid amplifying their own thermal emissions. I don't know how well CCDs can do at this.

Given that this is a military device, I'd assume the sensor is chilled.

One would think with all the money the military throws into imaging technology that they would already have this.

For Special Operations use, it'd be nifty to have this technology digitally composited in real-time with MWIR imaging on the same wearable device. Base layer could be image intensification with this tech, then overlay any pixels from the MWIR layer above n temperature, and blend it at ~33% opacity. Enough to give an enemy a nice warm glow while still being able to see the expression on their face. Could even have specially made flashbangs that transmit an expected detonation timestamp to the goggles so they know to drop frames or otherwise aggressively filter the image.

Add some active hearing protection with sensitivity that far exceeds human hearing (obviously with tons of filtering/processing), and you're talking a soldier with truly superhuman senses.

That's not to mention active acoustic or EM mapping techniques so the user can see through walls. I mean, USSOCOM is already fast-tracking an "Iron Man" suit, so I don't see why they wouldn't want to replicate Batman's vision while they're at it.

Can someone wake me up in the future? When we have digital eyes, and we can walk around at night as if it were day except the stars would be glittering. Sometimes, I'm so sad to know I'll not live to know these things and I'm incredibly envious of future generations.

That really is incredible. I wonder how they keep the noise level down and if the imaging hardware has to be chilled and if so how far down. Pity there is no image of the camera (and its support system), I'm really curious how large the whole package is. It could be anything from hand-held to 'umbilical to a truck' sized.

My brother in law experimented with this camera a few years back on family portraits. The camera picks up a lot of "dark" details. Skin displays pale and veins are very defined. My nieces called it "the vampire camera".

Is that Red Rocks (just outside of Las Vegas)? There are a lot of man-made light sources there that scatter light pretty far and in a lot of directions (the Luxor spotlight comes to mind). I wonder if that could have an effect on this camera's performance.

They list a lot of potentially useful applications on the product's own web site. I wonder how long it will take for this sort of technology to be commercially viable for things like night vision driving aids. High-end executive cars have started to include night vision cameras now, but they're typically monochrome, small-screen affairs. I would think that projecting an image of this sort of clarity onto some sort of large windscreen HUD would be a huge benefit to road safety at night. Of course, if actually useful self-driving cars have taken over long before it's cost-effective to include a camera like this in regular vehicles, it's less interesting from that particular point of view.

If you know a little bit about legal procedure dealing with something like this shouldn't be too hard.

You'd file against your local bank based on the address of the branch you go to for an order to show cause hearing where they are tasked to show up and show cause as to why the transfer of money should go forward. You'd state the basic grounds of mistaken identity, propose a temporary restraining order barring any further action until the case is heard, and go to the court for a judge to sign the order and give instructions for service.

Once it gets on everyone's radar as a conflicting court proceeding (rather than a customer service complaint) they'd likely quickly get to the bottom of it.

It sounds really hard but it isn't, most courts in bigger cities at least will have an office where you can make an appointment to get free volunteer legal help.

Yes, this will burn a couple of slightly frustrating afternoons getting it together, but it's eminently possible to do, and an interesting exercise for the average person who enjoys learning how things work.

He should be calling up the plaintiff's lawyers, not BOA. Lawyers don't want to collect only to have to give it back. Find out who they are and call them. Any other attorneys involved as well. Once they have proof the money should not be given out, any ethical attorney would be very wary of handing out money.

It's also likely the money will be sitting in an attorney's trust account before it actually gets disbursed, so he has more time that way. Contacting the bank for help in the legal system is totally the wrong way to solve this. They've given the money away and can't actually get it back even if they tried. They're a useless avenue at the moment.

What I don't get: why would he pursue some court order he's not a party to? The only logical counterparty to this dispute is BoA. They gave away his money without proper title; that shouldn't hold up in court. That they gave some money to the LA Sheriff's Department is something for BoA to recoup.

A few minutes later, she came back with the bad news. "Why didn't you call earlier? It's too late for us to withdraw the request. The money was already sent to the court! You need to go down to the courthouse and ask them to show you the court documents."

It is not his problem what they did with the money, they took it from him plain and simple. The error was on their part and how they correct it on the other end is their problem.

Back in the day when people used MCI for some of their phone services, I got an adder to my phone bill from ATT (my carrier) for a charge attributed to a John Fox. I called ATT a couple of times and pointed out that I'm not John Fox, and they tried to tell me to call MCI. MCI used their LEC billing agreement to have ATT collect the fee. At some point I explained that I was not an MCI customer, had no relationship with them, and that ATT had actually charged this fee to me, which was incorrect; how they wanted to deal with MCI or John Fox was their problem. They credited my account and I never heard about John Fox again.

EDIT: Actually the comment below by CLPX is better - the bank did this, they took his money and it is THEIR problem not the LASD. My point remains somewhat valid, it's not your problem to chase the money after someone wrongly took it - it's their problem. The bank didn't do a decent job of verification - they went on a partial name match alone apparently (no SSN, no account number, WTF).

Not sure if you have small claims court in America. In Canada, for losses under $10,000 the process is fairly straightforward. You should file and follow the process. If they don't give the money back, go to the court and execute a writ of seizure. One guy did this.

Don't mess around with the court; the whole thing is Bank of America's problem, don't let them make it your fault or the court's. They have debited the wrong account, and you can prove this. In the UK, if this happened and they didn't return my money, I would take them to the small claims court, which can be filed online. Surely California must have this?

I had the same problem with one of the biggest French banks. I saw a large withdrawal on my account in a city where I wasn't. I called my bank and explained the situation. They told me I'd get the money back in one day. I asked if I should change my credit card in case it was hacked; they told me calmly it was just an error on their part because another client had the same last name. I just couldn't believe it.

Wells Fargo did this to me many years ago from a business account. The State of California issued a seizure for unpaid taxes for another business. In the end I think Wells Fargo told us that our tax ID numbers were similar. We did end up getting the money back from the State of California after about 3 years. We of course never got back the penalties Wells Fargo charged, or interest.

I wrote the blog post. It turns out writing a blog post works better than talking to the bank beyond simple catharsis. Just got a call from someone from Bank of America's social media team who credited the money back to my account. Thanks for all the suggestions here!

On two separate occasions they "lost" transfers to the IRS (totaling $40k) that I only discovered when the IRS came after me for failure to pay taxes. I had full receipts from BofA for the transfers that apparently never happened. At least as disconcerting, the reaction of BofA to the situation both times suggested that sending money to /dev/null was a routine occurrence in their organization.

BOA is probably the worst institution I've ever dealt with, and I never even opened an account with them.

I did get to close one I had with them though and that was a very good day for me because I was among the very 1st of many here who did the same and I got to watch what happened to them here from start to end.

They came to Branson Missouri and bought out a well loved local bank where I had my business account. I closed it as soon as I found out. As the new management came and started enforcing BOA policy people learned fast that I and a few others familiar with them weren't bullshitting about them.

Soon, the long-time employees of the old bank started leaving because they weren't willing to piss off their friends and neighbors for BOA and, of course, they told everyone why they left. Most went to work for one of the several other local banks here, and all their close friends and family moved their money with them.

When the last of those employees were gone, everyone here started moving their accounts over to one of the still locally owned banks, and in just a few years BOA had hardly anyone left here to screw, so they closed their doors and left town.

I will always admire my neighbors here for that. I knew people who let that bank abuse them for decades out in Los Angeles and I could never, not for the life of me, understand why.

Hm ... the bank clearly acted carelessly, but I am surprised there is so little blame on the court/sheriff, as they messed up in the first place and then, in the end, were not really willing to undo their mistake.

While BofA has its share of blame to absorb here, the specific issue is the lack of support for unusual names such as names containing whitespace. The issue may affect other banks, big or small. I for instance had once to deal with a banking application not supporting non-US phone numbers for 2FA, and yet positioned to serve customers 24x7 globally.

A comment on the article advises filing a theft report. This has worked for me twice. The one most relevant to HN:

Crappy payroll company ADP double-debited the quarterly IRS tax payment for payroll. (Luckily I always have payroll checks written out of a payroll-only account, so when the account went negative the bank called me so I could transfer additional money (about a half a million bucks) in and nobody's paycheck bounced.) To make a long story short, ADP refused to do anything, and when I finally got to talk to the district manager he insisted his hands were tied. "I don't know why you keep saying we improperly took your money -- we didn't! The money is at the IRS and in three months you can just declare it as a credit on your next payment. We don't have your money at all." Finally I agreed with him: "you're right! I will stop saying you took my money and will instead use the correct vocabulary, 'grand theft', 'fraud', and 'abuse of power of attorney'. Since you prefer the precise terminology, if the money is not in my account by 2PM (note: that's when the fed wire closes) I'll drive over to the Santa Clara District Attorney's office and swear a complaint."

Oddly enough the money that they supposedly didn't have was in our account by 1pm.

You need to hire a lawyer. It's not fair but that's how this is. This is going to haunt you forever -- not just this account! You have to do so ASAP and get this straightened out so you have the paperwork to prove it next time this happens. You also should write Consumer Reports Web site -- they'll love this story.

In this case I think the author is railing on BofA too much, unless the court said "pony up any money from any account" I'm guessing they specifically asked BofA for the money from this particular account. The person responsible here is the Sheriff's office and they are the ones that should be pursued.

Wow. Lots of parties liable here. Reeks of those cases where banks foreclosed on the wrong houses.

Sad that one has to go public like this to get anything to happen, but I suspect it will help show the world how incompetent the involved parties were.

I remember there being some brilliant story in the reverse direction where someone sued a bank (I believe it was BoA) for something like this and won. When the bank didn't respond they got the court to allow property seizure to recover damages. Guy rocked up at a local bank branch and started taking computers and such with court order in hand and sheriff watching. He was legally robbing the bank. Bank reacted quite quickly then!

My personal nightmare with BofA started when they merged with Fleet. I had a credit card with Fleet and it had been set to auto-pay the balance every month. After the merger, the auto-pay got disabled without telling me and by the time I realized it two months later, around $20 of interest had accrued. After customer support calls to try to get it resolved, they refused to refund the interest. So I asked to cancel my account and asked for the full-payoff amount and the address to mail the check. Since I had paperless billing, I didn't have an official slip to send in with my payment, but I wrote my Fleet account number on the check and the check was cashed a couple of weeks later. I assumed the unpleasant situation was finished.

Little did I know, my nightmare was just beginning. Several years later, I was applying for an apartment and I failed the credit check. I got a hold of my credit report and saw that BofA claimed I had a large unpaid balance that was several years behind. After calling them many times to try to resolve it, we finally figured out that they had credited my payment to a non-existent account, and that my account had a BofA account number that was different from my Fleet account number. I thought that would clear everything up, but after they reconciled the two accounts, they still claimed I owed them over $4k in interest that had accrued on the unpaid account. No amount of common sense could make them realize that they had cashed my full-payout amount check, so my account should have been fully paid off. It took getting a lawyer involved for them to finally zero out my account and, by that time, I'd missed out on the apartment. The lawyer also helped me dispute the credit mark on my account with the agencies because BofA refused to remove it. Hundreds of hours of my life calling their inept customer service and over $1k in legal fees, all because of their screw-ups.

BofA is a truly despicable company that doesn't give customer service reps the ability to make even obvious, common-sense adjustments, which leads to situations like mine. Whenever I have the chance, I try to warn people away from doing business with them. They literally took a happy customer (I loved Fleet) and, through incompetence and unwillingness to act reasonably, turned him into an enemy for life. And all for $20.

> my credit union was infinitely more competent than the buffoons at Bank of America.

This. It's not even a pro-credit union thing: small banks handle exceptional situations much better than big banks in my experience. You'd think it would be the opposite, but it's not. When the tellers know every customer, even when I almost never walk in, exceptions seem to generate a helpful phone call.

> ...So what we do here is match you with one of our members for a $30 fee which guarantees you a 30-minute conversation...

That's a good service to have. I needed it and was referred, but ten days later there was still no callback, and the deadline I had to honour had long passed. There was little time between when I needed the lawyer (which I can't afford) and the deadline. Scumbags know the law so well that you're screwed for time trying to fight anyone.

This amount of money seems like it would be a good candidate for small claims court. Can anyone tell me if this guy can use small claims court (which is easy to initiate) to recover the money? I feel like even if he lodged a motion against the branch manager for negligence it might be enough to get things cleared up.

I don't understand why he's taking this on himself to be honest. If I was incorrectly named in some sort of lawsuit, my first reaction would be to call my lawyer. Who knows if there's also an incorrect arrest warrant out for example?

Was the bank really at fault here? From what I see, they received a legal letter from the Sheriff's Department and complied with it. The article doesn't say what was included in the letter to the bank; I assume the author doesn't know either. It could have had the wrong SSN, or the Sheriff's Department just looked up the defendant's name, found the author's account number and SSN, and sent those to the bank.

BoA gave everybody's money-substitute-government-credit away, but few have realised it (even though BoA says so themselves in their financial statements). So the guy is lucky in a way; he now doesn't trust the thieves. Even fewer realise that the banknotes they are so proudly holding have long ago been defaulted on. It's all running on illusions now. Sell your paper.

Some of the multi-author articles on Distill have a very important (IMO) innovation. They quantify precisely the contribution each author has made to the article. I would like to see this become the norm in scientific papers, so that on the one hand it's clear whom to ask if questions arise, and on the other, the various dignitaries don't get their honorary spot on the author lists of papers they were barely involved with scientifically.

I'm curious about the method chosen to give short term memory to the gradient. The most common way I've seen when people have a time sequence of values X[i] and they want to make a short term memory version Y[i] is to do something of this form:

Y[i+1] = B * Y[i] + (1-B) * X[i+1]

where 0 <= B <= 1.

Note that if the sequence X becomes a constant after some point, the sequence Y will converge to that constant (as long as B != 1).

For giving the gradient short term memory, the article's approach is of the form:

Y[i+1] = B * Y[i] + X[i+1]

Note that if X becomes constant, Y converges to X/(1-B), as long as B is in [0, 1).

"Short term memory" doesn't really seem to describe what this is doing. There is a memory effect, but there is also a multiplier effect in regions where the input is not changing. So I'm curious how much of the improvement comes from the memory effect, and how much from the multiplier effect. Does the more usual approach (the B and 1-B weighting, as opposed to B and 1) also help with gradient descent?
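The two recurrences behave quite differently on a constant input, which is easy to check numerically. A minimal Python sketch (my own illustration, not the article's code), with B = 0.9:

```python
def smooth(xs, b):
    """Conventional exponential moving average: y = b*y + (1-b)*x."""
    y, out = 0.0, []
    for x in xs:
        y = b * y + (1 - b) * x
        out.append(y)
    return out

def accumulate(xs, b):
    """The article's momentum-style form: y = b*y + x."""
    y, out = 0.0, []
    for x in xs:
        y = b * y + x
        out.append(y)
    return out

xs = [1.0] * 200              # input held constant "after some point"
b = 0.9
print(smooth(xs, b)[-1])      # -> ~1.0, converges to the constant itself
print(accumulate(xs, b)[-1])  # -> ~10.0, converges to X / (1 - B)
```

With B = 0.9 the momentum-style form settles at 10x the constant input, which is exactly the multiplier effect described above, while the conventional EMA settles at the input itself.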

Hm. So that helps with high-frequency noise. Any progress on what to do when the dimensions are of vastly different scales? I have an old physics engine which had to solve systems of about 20 nonlinear differential equations. During a collision, the equations go stiff, and some dimensions may be 10 orders of magnitude steeper than others. Gradient descent then faces very steep knife edges. Numerically, this is called a "stiff system".

If you are curious to see the code to produce that post you can check it out here: https://github.com/distillpub/post--momentum I was surprised to see that each post has its own html page and javascript library. I was expecting to see some form of rendering engine and a common javascript library.

It would be really, really great if you could somehow hook this up to Discourse so people could comment on and ask questions about the article. Allowing people to ask questions and having others answer like MathOverflow would I think bring a lot more clarity. Many different kinds of people want to understand material like this but may need the math unpacked in different ways.

This made me uncomfortable though (about CSS grid, not about the game):

grid-area: row start / column start / row end / column end;

So you have to put the rows (Y axis coordinates) first and the columns (X axis coordinates) second, i.e. the opposite of how it's done in almost every other situation, e.g. draw_rect(start_x, start_y, end_x, end_y).

(1, 1, 3, 4) in every other language would draw a box 2 wide and 3 high, but in css grid it selects an area 3 wide and 2 high.

Also, the fact that it uses 'row' and 'column' to describe the grid lines rather than the actual rows and columns irked me.
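For what it's worth, the swapped ordering is easy to hide behind a tiny helper. A hypothetical sketch (the helper name is my own) that converts draw_rect-style x-first coordinates into the grid-area value order:

```python
def grid_area(start_x, start_y, end_x, end_y):
    """Build a CSS grid-area value from draw_rect-style coordinates.

    CSS grid-area order is: row-start / column-start / row-end / column-end,
    i.e. the y coordinates come first.
    """
    return f"{start_y} / {start_x} / {end_y} / {end_x}"

# The (1, 1, 3, 4) example from above, expressed x-first as intended:
print(grid_area(1, 1, 3, 4))  # -> "1 / 1 / 4 / 3"
```

Of course, a bazillion wrappers like this are precisely the shims the comments below are predicting.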

I'm going to jump on the bandwagon here with others wondering just what the person or committee who thought up this API was smoking when they came up with it.

At first, it made kinda sense. Nothing too troubling.

But the deeper it went, the less it made sense. I don't have a problem with 1 vs 0 indexing (because I started coding in old-school BASIC back in the dinosaur days of microcomputing - so that doesn't bother me much).

It's just that the rest of the API seems arbitrary, or random, or maybe ad-hoc. Like there were 10 developers working on the task of implementing this, but with no overall design document to guide them on how the thing worked.

I'm really not sure why there are two (or three? or four?) different ways to express the same idea of a "span" of row or column cells, based on left or right indexing, or a span argument, or...???

Seriously - the whole thing feels so arbitrary, so inconsistent. This API has to be among the worst we have seen in the CSS world (not sure - I am not a CSS expert by any means). I can easily see this API leading to mistakes in the future by developers and designers.

We'll also probably see a bazillion different shims, libraries, pre-compilers, template systems, whatever - all working on the same goal of trying to fix or normalize it in some manner to make it consistent. Unfortunately, all of these will be at odds with one another.

I'm sure jQuery will have something to fix it (if not already). Bootstrap too.

The dumb thing is that had this been designed in a more sane fashion, such hacks wouldn't be needed.

I think these CSS learning games would work better if the game you were trying to complete was something you would actually use the technology for. For example, a game that involves building a website (e.g. instruct me to make the page more UX-friendly by moving an element from one column to another, adjusting columns, etc.).

I would never use CSS grid to do what this game is asking me to, so even though it helps me learn the syntax and properties, it's not helping me learn how it's going to be applied to an actual website.

`grid-row: -2` targets the bottom-most row, whereas I would have expected `grid-row: -1` to do so. I've never seen `-2` used to refer to the last element in a sequence. Python [1,2,3][-1] yields 3, for example.
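One way to make sense of it: CSS grid's negative integers index grid *lines* counted from the end of the explicit grid (so -1 is the last line), not elements as in Python. A grid with N rows has N+1 horizontal lines, so the bottom row is the track between lines -2 and -1. A rough sketch of that resolution rule as I understand it (my own illustration):

```python
def resolve_line(idx, n_tracks):
    """Map a CSS grid line index to a 1-based line number.

    A grid with n_tracks rows has n_tracks + 1 lines. Positive indices
    count from the first line; negative indices count back from the last.
    """
    n_lines = n_tracks + 1
    return idx if idx > 0 else n_lines + 1 + idx

# In a 5-row grid: line -1 is the bottom edge (line 6), so the last row
# is the track between lines -2 and -1, i.e. between lines 5 and 6.
print(resolve_line(-1, 5))  # -> 6
print(resolve_line(-2, 5))  # -> 5
```

So `grid-row: -2` as a start line does target the bottom-most row; the surprise comes from lines vs. tracks, not from a shifted count.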

Is this like a built-in equivalent of Skeleton and similar frameworks? It's kinda cool the way it brings CSS closer to frameworks like iOS', which has built-in UI components like collection views and such that can be extended to build interfaces easily.

"Oh no, Grid Garden doesn't work on this browser. It requires a browser that supports CSS grid, such as the latest version of Firefox, Chrome, or Safari. Use one of those to get gardening!" I am running Chrome 56 on macOS.

This is pretty cool. One thing I noticed is that when you submit an answer, it shakes the editor box. This usually has a "you did something wrong" connotation (i.e. if you type a bad password when signing into a Mac).

For those who are interested in some of the details of the work that's going on, Lin Clark's recent talk on "A Cartoon Intro to Fiber" at ReactConf 2017 is excellent [0]. There's a number of other existing writeups and resources on how Fiber works [1] as well. The roadmap for 15.5 and 16.0 migration is at [2], and the follow-up issue discussing the plan for the "addons" packages is at [3].

I'll also toss out my usual reminder that I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Specifically intended to be a great starting point for anyone trying to learn the ecosystem, as well as a solid source of good info on more advanced topics. Finally, the Reactiflux chat channels on Discord are a great place to hang out, ask questions, and learn. The invite link is at https://www.reactiflux.com .

React team is doing an amazing job. I remember when it was first announced, I thought Facebook was crazy. "JSX? That sounds like a bad joke!" I don't think I've ever been so wrong. After hearing so much about React, I eventually tried it out and I realized that JSX wasn't a big deal at all, and in fact it was actually pretty awesome.

Their migration strategy is great for larger, actively developed applications. Since Facebook actually uses React, they must have a migration strategy in place for breaking changes. Since breaking anything has such a big impact on the parent company, it makes me feel like I can trust them.

Heck, most of the items in this list of changes won't surprise anyone that's been following the project. Now there's less magic (e.g. React.createClass with its autobinding and mixins), and less React-specific code in your app (e.g. react-addons-update has no reason to live as a React addon when it can clearly live as a small standalone lib).

PropTypes' deprecation is not difficult to handle, but the removal of createClass means one of two things for library maintainers:

(1) They'll depend on the `create-react-class` shim package, or,

(2) They must now depend on an entire Babel toolchain to ensure that their classes can run in ES5 environments, which is the de facto environment that npm modules target.

I'm concerned about (2). While we are probably due for another major shift in what npm modules export and what our new minimum browser compatibility is, the simple truth is that most authors expect to be able to skip Babel transpilation on their node_modules. So either all React component authors get on the Babel train, or they start shipping ES6 `main` entries. Either way is a little bit painful.

It's progress, no doubt, but there will be some stumbles along the way.

The breakup of the React package into a bunch of smaller modules really puts packages that treat React as a peer dependency in a pickle. I have a component module using createClass that works fine and exports a transpiled bundle in package.json. I guess now we'll have to switch to create-react-class, or maintain some kind of "backports" release series for people that are still using older React versions but want bugfixes.

What is it with the JavaScript landscape that keeps forcing developers to do things differently, with the penalty that your app stops working if you don't comply?

I mean, creating a new type of brush for painters is fine, but I don't see the need to force them to redo their old paintings with the new type of brush in order to keep them visible.

IMHO CoffeeScript and some of the other compile-to-JavaScript languages are still much better than the entire Babel ES5/ES6/ES7 thing. But for some reason my free choice here is in jeopardy. The community has apparently chosen Babel and is now happily killing off things that are not compatible with it.

In my opinion this is not only irresponsible, but very arrogant as well.

Although I understand and can write higher-order components, I still write and use small mixins in projects because it works for me. I also use createClass because I enjoy the autobinding and don't like the possibility of forgetting to call super.

Now I need to explain to my superiors why this warning is shown in the console, making me look stupid for using deprecated stuff. And I need to convince them why I should spend weeks rewriting large parts of the codebase because the community thinks the way I write is stupid. Or I can of course stick to the current React version and wait until one of the dependencies breaks.

It would be really great if library upgrades very, very rarely broke things. Imagine if all the authors of the 60+ npm libs I use in my apps started breaking things this way; for me there is no intellectual excuse that justifies that.

This is a good move. Modernization with sensible deprecation and scope re-evaluation with downsizing when more powerful alternatives exist. Too often codebases get bigger when they should really get smaller.

Awesome changelog with great migration instructions. Bravo to the React team!

Going to set aside some hours on Saturday to upgrade our React version.

I recently started to go in with functional components where I don't need life-cycle events such as componentDidMount. Does anyone know if React is planning to make optimizations for code structured in this way?

Fiber is what I'm really waiting for. Not much official chatter about it, but looks like a 16 release?

They just removed some addons in master that many third party packages rely on, including material-ui. Hopefully these other popular packages can be ready to go with the changes when the fiber release hits.

Why do people always end up with J2EE-like bloatware? There must be some pattern, something social. Perhaps it has something to do with the elitism of being a framework ninja, a local guru who has memorized all the meaningless nuances and can recite the mantras, so one can call oneself an expert.

The next step would be certification, of course. Certified expert in this particular mess of hundreds of dependencies and half a dozen tools like Babel.

Let's say there is a law that any over-hyped project eventually ends up somewhere in the middle between OO PHP and J2EE. Otherwise, how would one get to be an expert front-end developer?

* They actually started deploying them in 2015, they're probably already hard at work on a new version!

* The TPU only operates on 8-bit integers (and 16-bit at half speed), whereas CPU/GPUs are 32-bit floating point. They point out in the discussion section that they did have an 8-bit CPU version of one of the benchmarks, and the TPU was ~3.5x faster.

* Used via TensorFlow.

* They don't really break out hardware vs. hardware for each model. It seems like the TPU suffers a lot whenever there's a really large number of weights and layers to handle, but since per-model performance isn't reported individually, it's hard to see whether the TPU offers an advantage over the GPU for arbitrary networks.

It's interesting that they focus on inference. I suppose training needs more computational power, but inference is what the end-user sees so it has harder requirements.

Most of us are probably better off building a few workstations at home with high-end cards. The hardware will be more efficient for the money. But if you're considering hiring someone to manage all your machines, power-efficiency and stability become more important than the performance/upfront $ ratio.

There's also FPGAs, but they tend to be much lower quality than the chips Intel or Nvidia put out so unless you know why you'd want them you don't need them.

Looking at the article's analysis, one of the big gains is a busy power usage of 384W, which is lower than the other servers while delivering performance competitive with the other methods (albeit restricted to inference).

While this is interesting for TensorFlow, I think it will amount to no more than an evolutionary step forward in AI. The reason is that the single greatest performance boost for computing in recent memory was the data-locality metaphor used by MapReduce. It lets us get around CPU manufacturers sitting on their hands and the fact that memory just isn't going to get substantially faster.

I'd much rather see a general purpose CPU that uses something like an array of many hundreds or thousands of fixed-point ALUs with local high speed RAM for each core on-chip. Then program it in a parallel/matrix language like Octave, or as a hybrid with the actor model from Erlang/Go. Basically give the developer full control over instructions and let the compiler and hardware perform those operations on many pieces of data at once. Like SIMD or VLIW without the pedantry and limitations of those instruction sets. If the developer wants to have a thousand realtime Linuxes running Python, then the hardware will only stand in the way if it can't do that, and we'll be left relying on academics to advance the state of the art. We shouldn't exclude the many millions of developers who are interested in this stuff by forcing them to use notation that doesn't build on their existing contextual experience.

I think an environment where the developer doesn't have to worry about counting cores or optimizing interconnect/state transfer, and can run arbitrary programs, is the only way that we'll move forward. Nothing should stop us from devoting half the chip to gradient descent and the other half to genetic algorithms, or from simply experimenting with agents running as adversarial networks or cooperating in ant colony optimization. We should be able to spin up and tear down algorithms borrowed from others to solve any problem at hand.

But not having that freedom, in effect being stuck with the DSP approach taken by GPUs, is going to send us down yet another road to specialization and proprietary solutions that result in vendor lock-in. I've said this many times before and I'll continue to say it as long as we aren't seeing real general-purpose computing improve.

Are people really using models so big and complex that the parameter space couldn't fit into an on-die cache? A fairly simple 8MB cache can give you 1,000,000 doubles for your parameter space, and it would allow you to get rid of an entire DRAM interface. It's a serious question, as I've never done any real deep learning...but coming from a world where I once scoffed at a random forest model with 80 parameters, it just seems absurd.

Interesting stuff; it really points to the complexity of measuring technical progress against Moore's law. It's really a more fundamental question of how institutions can leverage information technologies and organize work and computation towards goals that are valued in society.

This is going to sound like crazy advice, but having worked on many side projects in my life, the last thing that's going to let you down is your skills. What you really need is time. Let's say it takes you 400 hours to build your project -- in those 400 hours, you will build up enough skills to get you started (not nearly enough to be good at it, but good enough).

So you need to work consistently 1-2 hours a day on your side project. It really doesn't matter what you do. If you manage to get those 1-2 hours in, you will muddle through and accomplish something. If your goal is to make a side project and bring in a non-zero amount of money, this is achievable. Learn whatever you learn on that project and then do it again.

Personally, I would spend exactly $0 on your task because, like I said, the thing that will kill you in the end is likely to be time commitment. If you spend money, you will be out the money and your time. So start with time and see where it takes you.

As others have said, no need to get fancy. Just build the simplest thing that will get you started, using the simplest tools you can find.

Not sure if I should share this, as it is a trivial and obvious thing to do. Recently I created a ramen-profitable app on Google Play, currently with a couple of thousand users.

The idea is to look for apps that have low ratings, high downloads and lots of recent comments, then make them better. You can use synonyms and the same niche category to increase visibility on Google Play. This is where the money is.

Here's what I did: at work I needed something (a git commit graph). But the one I found was (1) buggy, and (2) too expensive. It wasn't my money, but I just couldn't allow my company to pay that much.

And then I realized I needed a rebase button on the pull-request screen... and so it continues to evolve.

Here's the thing: I've always known I'm a good maintenance programmer. I've always preferred working on existing software instead of making new software from scratch. And writing add-ons for Bitbucket is basically just another form of maintenance programming: reading Bitbucket's code, noticing its flaws and shortcomings, and fixing them.

I think you should start something very very small and forget about the money part for now. For me, my most successful project ideas came from problems that I faced during my own site launches.

Since you don't have much knowledge of FE development, I would suggest you keep things Simple Stupid and try to do as much as possible with HTML and jQuery. I have created really complex websites using just PHP and jQuery (sites that have made me 6 figures over time), plus you will learn the real nitty-gritty like DOM manipulation, CSS tricks, etc., which you will need at least a few times regardless of the shiny JS framework.

I would highly recommend that at this stage you don't get sucked into React, Node, Vue, etc. You will only end up wasting months with nothing to show for it (but maybe I'm just too old school).

Whatever time you have left after that, use it to learn online marketing. Learn about list building, SEO, Copywriting, outreach and affiliate marketing. Because that's how you turn your technology into actual money.

Here's my suggestion. Walk down the road to shops in your area (small, family run businesses) and ask if they have a business problem they think IT can solve.

You'd be surprised how little some of these businesses know. I have previously:

* Built a travel database in MS Access for a travel agent (a long time ago)
* Ordered and set up ADSL connections and email for a water tank manufacturer and a furniture store
* Captured requirements, researched, ordered and installed an office's worth (6 people) of IT kit for a not-for-profit (didn't charge them for this work)
* Designed and implemented a roster management system for a university IT helpdesk

I built Rocket League Replays[1], a website which analyses the replay files generated from matches played in Rocket League. I took my inspiration from GGTracker[2] which is essentially the same thing, but for StarCraft 2. I was looking through the replay files and noticed that there was some human readable content in them, so I wrote a parser[3] and built the site around it. Eventually I started a Patreon which allowed users to support the site in return for more advanced analysis. I get around $200/mo from that which covers the server costs etc, so I'm more than happy with that.

My side projects always come from personal needs; in each case I built it to solve my own problem before turning it into a full product (summaries below). If you are going in with the sole intention of making money, then make sure you know that there is a need for what you want to make and that you can get your project in front of people.

You don't need to use the latest and greatest tech; in fact, I would urge you not to. For the front end you can stick to simple jQuery interactions and a Bootstrap theme and you'll be fine. Depending on the market sector you go for, they may not even care about the design, so long as it's functional.

Summary of my project and where they came from:

http://www.oneqstn.com, before launching our company's product I put the question "Where would you expect to buy the Radfan?" on the shop page, along with 5 options. I expanded this onto its own dedicated domain and 5 years later it's still ticking along. Very popular in the Middle East for some reason.

http://www.stockcontrollerapp.com, I manage in-house production of my company's hardware product and after moving from an Excel spreadsheet to a Python script I decided to make a stock management app for small factories. Has made my work life much easier and is more appropriate for me than Unleashed.

http://www.taptimerapp.com, I didn't like any of the timer apps I had tried, so I made my own, mainly for use in the gym. All the others had touch targets that were too small, were hard to see at a distance or without glasses, or stopped music playing when the timer finished, so I made an app that addressed these.

If your goal is to make money, don't allow yourself to write a single line of code until you have talked to people not related to you (and not close friends with you), heard at least two of them independently describe facing the same business problem, and heard both of them say "yes, that would help!" (or better "yes, I would buy that!") in response to your proposed solution.

Finding a real business problem and a real solution is what matters. The tech is just an implementation detail you work out later.

List 25 products/services you consume regularly. For each, ask whether a better version could be built. Yes? Do it.

Here's one case: the local/popular site for searching used cars sucks. It is slow, hard to see/compare all options, pointlessly reloads the page on each added filter, is filled with outdated listings, flooded with ads, and the picture slideshows take forever (all of this on my slow phone over a slow 3G connection, which is how most visitors must be using it). Furthermore, car dealers (who post most listings) complain about the service and its price. So I built the proverbial MVP and put it in the hands of my marketing partner (you won't sell a line of code if you don't partner with a person/company dedicated to pushing your stuff), who's already working on a deal with the used car dealers association, pitching a novel business plan, hopefully making some passive income for both of us.

I think the easiest thing is to build what you need, so that if it fails to make money, you still have the tool you want.

I built https://noty.im that way, a monitoring tool that calls me when my site is down.

Then I realized more things were needed, so I started to add more features, and I plan to launch soon.

I will say don't worry about scale and technical details at first. I learned it the hard way: just get it out. No one will really care if something is broken or doesn't work when you don't have lots of customers.

So to answer your questions:

1. Can you provide some ideas on where to start? Pick a technology stack you're familiar with. Apply to Microsoft BizSpark to take advantage of the $150/month credit. Learn FE; it isn't that hard.

2. What are some simple things I can build by myself? Any idea?

I built `https://kolor.ml` in a Sunday. It's very simple, but I needed it. So you can try to build some simple/small utility that helps people with their daily lives, such as a tool to call people up in the morning.

Or a tool that checks whether a site exposes particular headers, such as the `nginx` or PHP version, and alerts you if it finds an old or vulnerable one.
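A sketch of that header-checking idea, assuming nothing beyond the standard library (the header list and function names are my own, and a real tool would also want version parsing against a vulnerability list):

```python
from urllib.request import urlopen

# Headers that commonly leak server software and versions.
WATCHED = ("Server", "X-Powered-By")

def leaky_headers(headers):
    """Return the watched headers present in a {name: value} mapping."""
    found = {}
    for name in WATCHED:
        for key, value in headers.items():
            if key.lower() == name.lower():
                found[name] = value
    return found

def check_site(url):
    """Fetch a URL and report its leaky headers (needs a live site)."""
    with urlopen(url) as resp:
        return leaky_headers(dict(resp.headers))

print(leaky_headers({"Server": "nginx/1.4.6", "Content-Type": "text/html"}))
# -> {'Server': 'nginx/1.4.6'}
```

From there it's a small step to a cron job that emails you whenever the reported versions change or look outdated.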

Of course, lots of people have already built those, but the point is to just get started; along the way you will realize what you really want to build.

This is what I believe to be the formula:

1. Find a group of people who are interested in a subject.
2. Find out what, related to that subject, they want to buy.
3. Sell them that thing.

This is the approach I took with my book/videos at AngularOnRails.com, a "side project that makes money".

Another important thing is to surround yourself with people who have successfully done the thing you're trying to do.

I don't have much time right now but if you (or anybody) wants to talk about building side projects that make money, feel free to email me at jason@angularonrails.com. I'm not an expert but I know a hell of a lot more than I did 9 years ago when I started.

I used my food side project as a way to learn new skills. I am still in the process of working on version 2.

The simplest way to start is to take a framework or system that has most of the basic parts ready for you to use.

Since you already know Python, try to learn something like Django and use Bootstrap with a CDN for your front-end stuff.

I would recommend reading some of the posts on indiehackers.com to get an idea of how those people got started with an idea and how they got their first customers. Some do not even have any tech skills and just used WordPress or found someone to help them with the site. There is also a podcast for this that just got started and is excellent. The founder of Indie Hackers is a YC alum named Courtland; he is a cool guy.

I chose to solve a problem that I personally encountered. If you cannot think of something, try picking something that you know requires lots of manual effort for some people. Then use some scripts from the book Running Lean to try to work out exactly what the problem is for those people.

Another great resource is OppsDaily, which I love reading first thing in the morning. Cory sends out a problem someone has in a particular industry that needs to be solved. The criterion is that they must be willing to pay if someone responds. In many cases they will say how much they are willing to pay.

Start to teach. Create a course on Udemy about what you know (data processing or management). If creating a video course overwhelms you, create a text-based course. I'm using Softcover to do that. You can check here: https://www.jjude.com/softcover-in-docker/

Creating a course can get you the momentum. You can start there and branch out to other things.

Oh man, I know this feeling. I have been programming for startups for years and I have always had lots of ideas but never the mental commitment to finish anything.

I'm proud to say that very recently I did manage to complete a side project that I intend to launch in a week or so. It is a social site based on an idea I got from watching Japanese dramas.

What helped in my case was that my idea was really simple to build. I too have zero frontend / web design ability, so I just paid a guy from Craigslist to fix it up. Being able to bootstrap a finished product with a relatively small amount of time / money helps you get in that "closer" mentality instead of just playing around and never finishing.

I'd also suggest not worrying about making money at first. Just try to make a cool product or service. Money is a stressful and distracting motivator I find. Once you have something of value to offer and get some feedback from potential users, then you think more about pricing and marketing.

So in short, start small, don't be afraid to outsource and trade money for time, and don't worry about making a profit right away. That worked for me at least.

Side projects that also do some public good might be a good avenue for you to consider. I built Walkstarter https://walkstarter.org a free walkathon fundraising platform for public schools as a side project. The experience is fantastic. I continue to develop my skills, e-meet new people, and the platform is on track to raise a very satisfying $1 million for schools.

You're already successful at selling enterprise services to at least one company in management and backend data processing. Have you considered selling management and backend data processing advice, perhaps delivered in the form of a PDF or series of videos? This is stupendously valuable to tech companies if you meaningfully improve on what they already have, and would allow you to sell to people who have expense accounts tied to, to steal a friend's phrase, the economic engine of the planet.

You don't need a commandingly high bar of programming sophistication to sell books. There exist services that can do all the heavy lifting for you. If you prefer knocking together a site to sell your own books, it is essentially an hour to get the minimum thing to charge money and ~2 days to get something which could plausibly be the kernel of an ongoing business.

> Listen to your friends, coworkers, and clients. Find something painful they mentioned that you also have first-hand experience with, or that you've needed at your job. Package it up so it's easy to use. Build an MVP, get feedback, iterate. Charge more than you think you should. Listen to your customers. Launch on Product Hunt. Market the hell out of it! Use content marketing; reach out to communities, forums, friends, and businesses with cold calls/emails. If you've built something great, word of mouth will do its magic. You can do this in your spare time, and probably should.

I've made a Windows GUI for a powerful command-line open source application that was for Linux. It makes a few thousand dollars a month. It did take many years of part-time programming, along with user feedback, to get to that stage though.

Desktop and Windows might seem like a dying market but that's what people use at work and those are the people who can pay for things.

I started a side project last year and it's been a fantastic experience. It's an opportunity to "scratch an itch" that the day job can't provide (which for me is doing whatever I want).

I had some frontend dev skills but didn't have the backend chops, so I hired someone on Upwork. I'm pretty busy at work so getting someone else involved is key (If I was by myself I'm not sure I would have stuck with it).

As someone with many failed side projects, I can tell you that having a goal of making money is usually a bad thing. Like many other people have said, you can learn the skills. The hardest part of building a side project is finding the time and staying motivated for more than a few months.

So, pick something _you_ will use and, if you enjoy building/using it, so will others. Obviously think about ways to monetize it, but money should be more of a side-effect than a motivation.

I'm currently working on https://insomnia.rest, which makes around $800/mo right now. I started it as a side-project a couple years ago with no intention of making money. However, traffic grew organically and I eventually left my job to pursue it full-time.

In summary, find something you love to work on and let it consume you. If you do this, making $100/mo should come in no time. Have fun hacking!

Look at oppsdaily.com. The developer posted here a while back. While there's no archive, there are daily postings concerning software needs. Some of the needs are unrelated to software, but most are. Could give you some ideas. (I have no relationship with the site or maker).

Building an audience + market validation before building a sideproject are the top starting priority for "a side project that makes money".

Start by building an audience (this can be as simple as interacting with professionals on Twitter and/or their own blogs, or even contributing here on HN!). I won't be able to determine what people want without asking them, and I will save a lot of wasted time by building something that I can guarantee people already want to pay for! For example: I intend to walk around my neighborhood with a survey to gauge interest in localized "technology disaster prevention" (aka initial setup of PC & phone backups with verification and increasingly annoying reminders) as a service. The first side project is the hardest because the audience is initially smallest, but that audience can then be re-used.

I hesitated to post this because most developers have an "if you build it, they will come" mentality (and a tendency to focus on technology/implementation details that they enjoy) that I personally have a hard time overcoming myself. However, if the criterion is making money, building an audience is the right first step. Once the bare-minimum MVP functions, marketing makes all the difference on the "that makes money" part (see my list of random books to buy elsewhere in this thread)... and there is no point over-engineering something I can't convince anyone to use!

I realize I'm going out on a limb a bit to say that market validation before each sideproject comes second... I know of one example of someone who has built an audience while publicly initiating sideprojects without thorough market validation (focusing on technology instead -- note that this determination is 100% my own armchair quarterbacking with the benefit of hindsight); this person's projects appear to be faltering because of poor market fit. However, it hasn't stopped many from buying into this person's brand / other projects, and that audience is now following the next project even though initially it appears to be trending toward the same mistake!

PS. As mentioned elsewhere I could shortcut market validation by tracking down commercial products (already being paid for) that are getting a lot of visibility and addressing issues raised in bad reviews; however even when going this route I will still benefit greatly by having an audience to market the replacement.

Edit: switched to first person to preach to myself to get off my butt and start doing something!

You can start with what you already know and then build on it, or diversify, with time. The easiest way to monetize your knowledge is to create a blog and link it up with social media (Twitter, FB, YouTube, etc.).

For example, you can write about management and backend data processing (what you do at work). This way, you don't have to learn something new to start your side project (except maybe how to manage a blog). The blog can be monetized via Ad networks like Adsense and Amazon affiliate program etc. As you grow, you may take in direct advertisers, sponsored content etc.

Spend a few days identifying problems people pay for, particularly easy problems that you can build a solution for yourself. The key is that the solution has to be fairly easy to build since you're not comfortable full-stack. Once you've chosen a solution that already makes money solving a problem, build the same solution but position it for a niche market, or make a better product than the competitors.

I would at minimum leverage Bootstrap or Semantic UI for your UI. Otherwise, hire someone to do the web interface for you.

Before you do anything, I would recommend that you clearly state a goal. Do you want to learn a new language? Do you want $100 a month in revenue? Do you want to learn a little SEO?

Recently, I started a project for the sole purpose of learning more about Node.js and then, in the second phase, Angular 2.

So a few months later I have a process that extracts data from the Amazon API and looks for price decreases in products. I learned quite a bit about Node.js and even about MySQL database structures. So it was a good learning exercise.
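The price-decrease check described above can be sketched in a few lines. This is a hedged illustration, not the commenter's actual Node.js code: the product IDs and prices are invented, and price history is assumed to be kept as a simple list per product.

```python
def find_price_drops(history):
    """history: dict mapping product_id -> list of observed prices, oldest first.

    Returns a dict of product_id -> (previous_price, current_price) for every
    product whose most recent observation is lower than the one before it."""
    drops = {}
    for product_id, prices in history.items():
        if len(prices) >= 2 and prices[-1] < prices[-2]:
            drops[product_id] = (prices[-2], prices[-1])
    return drops

# Invented sample data for illustration.
history = {
    "B000EXAMPLE1": [19.99, 19.99, 14.99],  # price dropped
    "B000EXAMPLE2": [8.50, 9.00],           # price rose
}
print(find_price_drops(history))  # {'B000EXAMPLE1': (19.99, 14.99)}
```

In a real pipeline the history would come from the product database the comment mentions (MySQL), with the API poller appending a new observation per run.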

Although I have accomplished my objective, I want to make money. This is the problem... Now that I have learned what I needed, unfortunately I have to learn more about SEO, the Twitter API, and the Facebook API to get users to visit my 200k webpages and make some money. So the side work winds up becoming a challenge, and sometimes a burden, as you continue to figure out how to reach your goal.

But when you reach your goal of $100 a month, then you will want more... So basically it never ends.

My advice: don't focus so much on how to build it, focus on how to grow it... REALLY!

I've done so many complex projects that at the end I couldn't sell, that's frustrating... please hear me: figure out first how to sell it (or at least get good traffic to a crappy wordpress site), then build a very crappy version and then improve it over time.

Surprised this has not been mentioned yet: Make sure your current employer is cool with side projects and moonlighting. Very few companies I've worked for are OK with it, even if done completely with personal equipment/time. You don't want to build the next Facebook in your free time and get fired over it or have your current employer claim IP ownership of it.

Firstly, as others have said, build something you would use and that interests you. It doesn't have to make money if it adds value to you personally or professionally (technical knowledge and hard life lessons learnt). Secondly, just ship it; personal projects can easily become obsessions, always needing one more thing. I did this, and even though I hate parts of my app's design, it is getting good feedback.

I recently had a quiet period in my freelance work so spent the time learning React Native amongst other things. I applied this to an idea I have had rattling around in my head for a point tracking app for people on Slimming World. I spent 4 weeks developing this and then shipped it to iOS. In under 2 weeks it has grown to almost 10,000 registered users, is number 4 in the UK Lifestyle Free Apps chart (ahead of Slimming World's own app) and has made enough ad revenue to cover the only costs I have had (App Store membership costs). It is never going to bring in big money but the lessons I have learnt are priceless.

If you want a stable tech stack that'll stay the same for the next few years, then check out Clojure + ClojureScript. I'm still doing the same as when I started a few years ago.

Regarding FE dev: I also have a technical background (EE) and I hated any CSS (HTML isn't so bad). Though flexbox is a life changer. It's actually enjoyable, and I can get stuff done without spending hours on simple layout issues.

Scratch your own itch, so that even if no one else uses it or buys it, at least you can.

Look at the non-financial upsides, so that if you make zero or little money you can still feel proud: For example - learning new skills that might help you get a raise, learning marketing so that your next project is more likely to be successful, etc.

A word of warning: once you have spent some time on a side project, the shine will wear off and it will feel like a job, and you have to find ways to keep yourself motivated when you could literally just go and watch TV instead in your time off.

I started holding talks for independent agency owners (www.agencyhackers.co.uk). It brings in about 200 a month, from an 'agency roundtable' that I run. That's not much, obviously; it only just covers the ConvertKit subscription and other SaaS software I need, like Reply.io. But I think this is an audience I will be able to monetise with webinars, conferences, etc.

As someone who has done countless side projects (check them out here: http://www.reza.codes) I suggest taking out "making money" as a variable and focus on things that interest you or something that stems from a personal need.

The satisfaction you get from these side projects will come from being able to finish them, as opposed to trying to make money from them. When you try to take on a side project with the goal of making money, you'll end up sinking way too much time into marketing and reaching out to possible customers as opposed to building something (which I find to be more fun and rewarding).

And the time you spend on trying to get people to sign up or even try your product doesn't have the same returns in satisfaction as building it. (my opinion of course)

If thousands of engineers are looking for something to build that others will pay for (but can't find it), then that tells me there is something fundamentally wrong with the way the economy works. Shouldn't it be the other way around? People who have a certain need express it, and engineers just pick one and work on it.

I guess you could focus on building a side project that doesn't need front end skills. Your aim is to get a bit of satisfaction and money and prove you can do something like this, right? Build something simple you can start on right now, starting from where you are right now.

Two ideas:

You could build some kind of email integration, or something delivered over, or using, email. The email processing itself would be mostly backend stuff; you could template the emails with Django or even Python triple-quoted strings.
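A minimal sketch of the triple-quoted-string approach, using only the standard library. The sender address, recipient, and URL are placeholders, not real services:

```python
from email.message import EmailMessage

# Plain-text email template as a Python triple-quoted string.
TEMPLATE = """\
Hi {name},

Your weekly report is ready: {report_url}

-- The Example App
"""

def build_email(name, report_url, to_addr):
    """Fill the template and wrap it in a stdlib EmailMessage."""
    msg = EmailMessage()
    msg["Subject"] = "Your weekly report"
    msg["From"] = "noreply@example.com"  # placeholder sender
    msg["To"] = to_addr
    msg.set_content(TEMPLATE.format(name=name, report_url=report_url))
    return msg

msg = build_email("Ada", "https://example.com/r/123", "ada@example.com")
print(msg["To"])  # ada@example.com
```

Sending would then be one `smtplib.SMTP(...).send_message(msg)` call against whatever mail provider you pick; the backend logic stays pure Python.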

Or you could build an API of some sort. The only front end really needed for an API company (such as Stripe or whatever) is documentation. You can write your docs on one of those doc hosting platforms (readthedocs might be one? I don't know much about it now).

For your side project, it's probably important to pick some things you like and start doing them, rather than trying to make certain up front which things are going to make money.

Instead of building an MVP and then trying to sell it, there are people who suggest:

1. Think of an idea.
2. Make a simple landing page with a mockup or something.
3. Try to sell the product. And by selling the product I mean actually selling it (so people actually transferring you money). If you can achieve a given number of sales in a given amount of time (important: set concrete goals with concrete timing), then you have validated your product.
4. Build a first version of your product.
5. Iterate.

Very simple to say, very difficult to do (I've never tried, but I would probably fail). But I think there would be ways to systematically apply steps 1-3 until you find the right product to work on.

In game development you can indulge and improve as many different skills as you want; graphics, sound, music, mathematics, networking, AI, UX design, character design, writing, storytelling, difficulty balancing, teaching..

An indie game project will give the freedom to be as creative as you want, and you get to enjoy your own product, but of course you don't have to arrive at a finished, marketable product to have fun building it.

Before you get deeply into what you are going to sell, consider how you are going to market it. Marketing is a * * *! Successful marketing is harder than programming a product, much harder and more problematic. Just ask the folks trying to sell iOS apps when there are about 2 million apps on the market. Lining up a buyer for a custom product before you begin (as some here have suggested) sounds very attractive to me.

My advice to all startups is that you should spend as much time thinking about how you are going to find & acquire customers as you do on your product. The same goes for cash-flow businesses & side-projects.

One approach is to decide which customers you can find the easiest and then ask them about their problems. Start there.

Build something that helps you in your everyday job. This can increase your motivation to improve it and keep working on it regularly. When it starts being useful, try to find another user to get feedback and add features. Before considering putting it in the wild, try to find a couple of hackers to anticipate potential security issues ;-)

I recently ran into the following problem: I did the ugly POC (proof of concept), but I don't know how to properly advertise it to get responses (except on HN, Reddit, and a couple of similar resources).

Definitely build something you personally need. Try to see what you can do without building a product. For example, site for a service that could eventually become a product in the future, but allows you to validate early on and figure out a business model prior to touching any code.

Try to look at what you do at your day-job from an outsider's perspective. What are some little things that to you seem trivial and obvious, but to an outsider would seem complex and foreign. There's opportunities there to package your knowledge into a tool or resource.

If you are weaker on the FE side, then from a project point of view keep your 'stack' really small; jQuery and Vue.js would get you a long way without needing much front end knowledge, and you can then gradually add in the other tools as you go (things like Sass/Less, etc.).

Anytime you start focusing on monetization of a side project, it stops being a side project and more of a "startup". Your mindset has to change around it entirely. You now have to consider marketing, your audience, and legality.

What about a side project that isn't heavily related to tech? What are your passions or hobbies outside work? Maybe one of them can lead you to a niche market. You can also share your professional knowledge. Do you like teaching or writing?

Great topic. We sound a lot alike. I work in a different sector, but I'm also now squarely in management and have been for a while, and my previous expertise was data engineering and infrastructure. Started with zero front end ability, and also not much Python either since I'm in the size of company and industry that still accomplishes most data work in SAS, and what it can't, Informatica and DataStage. Here's where I'm at, and so as to not bury the lede, I'm not making money yet but I found satisfaction in just spending some time each week on my projects -

1. If you know python, you can probably make a pretty natural jump to Flask. I didn't know Python, but I could program in a handful of other languages, so I figured I'd pick it up as a useful tool anyway. You may like this tutorial:

I'd say I really started learning when I got to the stage on authentication. The reason is that this tutorial implements OpenID, which isn't very common anymore, so I went off and implemented OAuth instead - heavily googling and scavenging, but ultimately having to piece together something that worked myself. I learned a lot that afternoon.

You could do this with any framework and there are tons of them. I chose Python and Flask over a Javascript-based framework because learning Python in parallel would be useful to me in data engineering, even though I don't write code for a living anymore.
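For a sense of scale, a minimal Flask app is only a few lines. This is a generic sketch, not taken from any particular tutorial; the route and strings are invented:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A real side project would render a template here instead.
    return "Hello from a side project!"

# Run locally with: FLASK_APP=app.py flask run
# (or call app.run(debug=True) for the debug server -- development only)
```

From this starting point you bolt on templates, a database, and (as described above) an auth extension, each as an incremental step.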

2. As others have said, time commitment is the biggest issue. Figure out what you can give this, and scope appropriately.

3. I haven't done this because I'm too much of a completionist to pull the trigger, but get your MVP out there and build off of it. For me, I've decided I'm pretty happy just spending time on the project, even if no one else has seen it.

4. Bootstrap is my friend, it can be yours too. I have never been a strong visual person. I like words on a page. I have no eye for what makes a good visual and what doesn't, which has been my biggest developmental item when I moved into an executive role last year. All that said, Bootstrap is awesome and makes it a lot easier to build good looking websites. I started off here and built out a static website for an idea I'd had, and am now circling back to build the things I want to be dynamic in Flask.

5. There are a lot of choices out there. Unless you're developing bleeding edge, and I may get flamed for this, most of the choices really don't matter that much. I chose Python+Flask+Bootstrap because I liked each individually, it seemed like something I could work with, and NOT because I decided they were objectively better than Node, Angular, Express, React, or anything else that I haven't touched. I also sort of like that there isn't a new Python web framework each day, so diving into Flask seems like a more stable investment of my time. I'm sure there are drawbacks.

6. When it all starts to come together, the real, revenue generating idea might be to address pain points in your day job. My sector is insurance. I know a lot about certain operational functions. Eventually, I could solve some of those and build a business around it, I tell myself. You probably have some specialized domain knowledge as well. Consider that.

Good luck, and have fun. Like I said, I'm happier just for having taken on the challenge. If I ever make a dollar, that'd be good too, but less important than I initially thought.

How ironic. Yesterday on HN someone said "Google Security Team, here's your call to stop pontificating on the Project Zero blog and throwing cheap muck at Microsoft. You've got an even bigger and more complicated mess to clean up, you dug the hole yourself, it's going to take you longer, and you should have started on it years ago" [1]

And today we have this very impressive counter-example of Google putting some engineers to work for months doing vuln research that, in the end, makes EVERYONE safer: Apple users, Samsung users, and hundreds of other mobile device vendors who use this popular Broadcom WiFi chipset in products shipped to 1B+ users.

But no, somehow, tomorrow it is again going to be all Google's fault that Android-derived commercial works are insecure and poorly maintained by their respective vendors.

This is one of the most serious and instructive pieces of technical security work we're likely to see this year. In case it hasn't sunk in:

- This vulnerability affects tons of smart phones (iPhone, Nexus, Samsung S*).
- The attack proceeds silently over WiFi -- you wouldn't see any indication you've been nailed.
- Mitigations and protections on WiFi embedded chips are weak.
- The second blog post will show how to fully commandeer the main phone processor by _hopping from the WiFi chip to the host_.

Imagine the havoc you could wreak by walking around a large city downtown, spewing out exploits to anyone who comes into WiFi range :-)

That's going to hurt a lot of folks. Especially those whose manufacturers are not doing their bit with respect to updates. It's absolutely incredible to me how sloppy manufacturers are when it comes to keeping phones updated, they seem to see phones that are older than two years as effectively end-of-life.

Personally, I think that a phone is only end-of-life when it stops working, and that manufacturers should either offer to buy phones back if they don't want to support them any longer, or be forced to provide security updates.

This research is effectively a free audit of Broadcom's firmware by Google. At what point does Broadcom approach Google, have the appropriate NDAs signed, and give them access to the source code? If someone is providing a (very valuable) free service to you, wouldn't you want to make their lives easier?

I assume there are some important reasons why this wouldn't occur, but at first glance, it seems to me that the pros outweigh the cons.

> Broadcom have informed me that newer versions of the SoC utilise the MPU, along with several additional hardware security mechanisms. This is an interesting development and a step in the right direction. They are also considering implementing exploit mitigations in future firmware versions.

...considering implementing exploit mitigations in future firmware versions. I'm somewhat doubtful that they give much shit unless it hurts their bottom line. This sounds like lip service. What else are they gonna say? "We're not considering implementing exploit mitigations"?

Whoa! This is really impressive stuff, and it will cause headaches in my day job, where we develop a product using this WiFi SoC.

Can this vulnerability cause content-owners and DRM vendors to no longer allow such devices to decode 4K content? I'm thinking of for example PlayReady certification that may be withdrawn/downgraded because of this issue, but I'm fuzzy on the details how this would work.

For reference, here is the bug[0] that affected Apple that was discussed yesterday[1]. One commenter on that HN topic noticed that there was 1 other public bug about Broadcom wifi chips, though it was not the specific one that affected Apple.

This blog post points to 4 Project Zero bugs for different Broadcom issues.

There's a bug in Apple's EFI driver for BCM4331 cards present on a lot of older Macs which keeps the card enabled even after handing over control to the OS. A patch went into Linux 4.7 to reset the card in an early quirk, but I suspect other OSes can be taken over via WiFi on the affected machines:

> For several years, 23andMe has worked on demonstrating that its reports are easy to understand and analytically valid...

I guess these are different reports, but I know a genetic counsellor who describes 23andMe's carrier screening tests as "the bane of their existence". Those reports seem not-so-easy to understand based on the patients she sees.

One problem is that they warn that your offspring are at high risk for some condition, when really "high risk" means 0.5% higher risk than the general population. The other is that they may say you are not a carrier for a certain condition, when they only test for one variant of it, where proper tests will test for multiple variants. They can both scare and soothe irresponsibly.

We leave genetic material behind everywhere we go. 23andme analyzes only a small subset of one's DNA.

The most important thing to realize about genetics is that very few health conditions (and even traits) are highly correlated with a specific genotype.

Some are, but the reason something like 23andme hasn't revolutionized health is because the correlations for most things are weak. 23andme does a good job of showing just how weak in the results. I'm 52% likely to have the eye color I have even though both parents have that color. I'm the tallest in my generation (in my family) yet my genes are mostly for below average height.

Over time, with a lot more data and a lot more correlation analysis with health and behavioral data, there will be more actionable information for the average customer.

As it stands, 23andme is useful for the following reasons:

- the data is entertaining. It's fun to find out how much neanderthal DNA one has, etc.

- the ancestry results are interesting.

- the health results make it clear just how little impact genetics has in most aspects of health. Yes there are some big exceptions, but those are a minuscule percentage.

By joining 23andme you get a chance to watch the studies unfold and plug in your own data. For a curious, patient person, this offers a great way to make an interesting area of science a bit more salient.

Are they still saying that by submitting a sample to them, they then own your genome and can sell it to whomever they want? I'd love to get mine sequenced and check it out a bit, but not if they are going to sell it off to a million shady companies when they go bankrupt (maybe in 50+ years, but still).

Is there any way to just have your entire genome sequenced and get all the data in a software-friendly format? At that point there could/should be some open source software for analyzing it and finding common or well understood things like this. That way the software could be updated and people could re-run their analysis to look for newly discovered stuff.

I think this would be an awesome amount of fun. I for one would be interested in looking for certain gene variants that are not mentioned at all over at 23andMe.
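As a rough illustration of what such open-source analysis could look like: consumer raw-genotype exports are, as far as I know, tab-separated lines of rsid / chromosome / position / genotype with `#` comment lines, so a parser for looking up specific variants is short. The rsids and genotypes below are invented for illustration, not real variants:

```python
def parse_raw_genotypes(text):
    """Parse a raw genotype export into a dict mapping rsid -> genotype.

    Assumes the common consumer format: '#'-prefixed comment lines, then
    tab-separated fields: rsid, chromosome, position, genotype."""
    genotypes = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split("\t")
        if len(fields) == 4:
            rsid, chromosome, position, genotype = fields
            genotypes[rsid] = genotype
    return genotypes

# Invented sample export.
sample = """\
# rsid\tchromosome\tposition\tgenotype
rs0000000\t1\t12345\tAG
rs0000001\t1\t67890\tCC
"""
print(parse_raw_genotypes(sample))
```

With the data in a plain dict, "re-run your analysis for newly discovered stuff" is just re-running lookups against an updated list of variants of interest.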

I think people are forgetting to ask the key question - Cui bono? Who benefits?

23andme definitely benefits - all the data they have collected is very valuable, and they intend to sell it to pharmaceutical companies etc.

On the other hand, working in genomics, in my opinion the benefit to any one person having their genome tested in this manner is minimal. The simple reason is that most genetic alterations have low penetrance for phenotypes or involve complex interactions.

My opinion is that this test is useless at best and dangerous at worst. It provides information that, in almost all cases, no one can correctly interpret and transform into actionable health advice. Not scientists, not doctors, much less consumers.

But it is sold as a cutting-edge scientific resource that will improve your life. It won't. Not even by increasing the chance that you might avoid something somehow; that's the fallacy.

For our level of knowledge regarding causality in biology and genetics, I believe this test is as good as buying your astrological map.

Love the ZeroNet project! Been following them for a year and they've made great progress. One thing that's concerning is the use of Namecoin for registering domains.

Little known fact: A single miner has close to 65% or more mining power on Namecoin. Reported in this USENIX ATC'16 paper: https://www.usenix.org/node/196209. Due to this reason some other projects have stopped using Namecoin.

I'm curious what the ZeroNet developers think about this issue and how has their experience been so far with Namecoin.

Has the code quality improved since I was told to screw off for bringing up security?

* 2 years out of date gevent-websocket

* Year old Python-RSA, which included some worrying security bugs in that time. [0](Vulnerable to side-channel attacks on decryption and signing.)

* PyElliptic is both out of date, and actually an unmaintained library. But it's okay, it's just the OpenSSL library!

* 2 years out of date Pybitcointools, missing a few bug fixes around confirming things are actually signed correctly.

* A year out of date pyasn1, which is the type library. Not as big a deal, but covers some constraint verification bugs. [1]

* opensslVerify is actually up to date! That's new! And exciting!

* CoffeeScript is a few versions out of date. 1.10 vs the current 1.12, which includes moving away from methods deprecated in NodeJS, problems with managing paths under Windows and compiler enhancements. Not as big a deal, but something that shouldn't be happening.

Then of course, we have the open issues that should be high on the security scope, but don't get a lot of attention.

We need more projects like these. Whether this project actually solves the problem of a truly distributed Internet* is beside the point. What we need is a movement, a big cognitive investment towards solving the Big Brother problem.

*I am referring to concentrated power of the big players here, country-wide firewalls, and bureaucracy towards how/what we use.

How does this track with the Tor Project's advice to avoid using BitTorrent over Tor [1]? I can imagine that a savvy project is developed with awareness of what the problems are and works around them, but I don't see it addressed.

A little known fact: the Namecoin blockchain's cost-adjusted hashrate [1] is the third highest in the world, after Bitcoin and Ethereum, making it unusually secure given its relative obscurity (e.g. its market capitalisation is only $10 million).

[1] hashrates can't be compared directly due to different hashing algorithms having different costs for producing a hash.

>So if technological change were going to cause elimination of jobs, one presumes we would have seen it by now.

...considering this statement was delivered while US workforce participation is at 30+ year lows, even as productivity and technological change have made significant inroads during that time (e.g. Macintosh 512k vs. iPhone 7), I think he's missing a large chunk of the, uh, big picture.

Then, contrast one of his well reasoned and very telling thoughts about the future:

>All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room? What if you get to spend that hour playing with your kid or reading the news or watching TV or actually working because you don't have to worry about driving?

Because in the United States, we should be working even while we are getting to work, because we don't work enough? SMDH. To me, the Working Class has plenty of reason to be cynical about this vision of the future..."playing with your kids in the car" time or not.

"Take the ego out of ideas" is sound advice for investors, not entrepreneurs. Ego is a loaded word, but if you define it, in this context, as an irrational belief that you are right and the world will catch up, then it's essential for every entrepreneur. "New ideas" get no support. You're the only support. You have to strongly believe that the world will get there, do whatever it takes to convince them to get there, and survive long enough to bank on that moment. Without that ego in your idea, you probably won't survive long enough.

A 1 hour commute is fine? No. There were all these visions about how, with the advent of the Industrial Revolution, people would only have to work half a day because that's how long it would take them to finish their norm. Instead, they were asked to produce twice as much.

Now we have our 'great' thought leader try to convince us about the virtues of hard work and 1 hour commute again.

How about "Put the type of Ego in your ideas that will remove the need for you to have a job in a few years"? Because jobs will be going away, and we don't need an even more hard core rat race in the US.

In tech, we meet so many people who are emotionally attached to their work, who treat their production as 'their baby'. This is a terribly common counterproductive bias. It prevents you from:

- taking criticism productively: people "put their soul" in their work, and then someone tells them it's perhaps not the best way. Do hear them.

- assessing one's position objectively: people who are attached to their work often misconstrue their vision with the reality of the work. They tend to minimize weak points and emphasize strong points.

- delegating your job away: people infatuated with their work have a hard time giving it away. Necessarily, the delegate will screw it up.

That should be rule number 0 of all jobs: Be invested in the mission, not in the solution

> All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room?

This is ridiculous. That's what our supposedly most innovative thinkers can come up with? Turning your car into a living room so we can have even more commuting (with all the wonderful side effects that come with it ...)?

Re: tech creates jobs, Tyler Cowen's Average is Over has an interesting passage about automation:

"Keeping an unmanned Predator drone in the air for twenty-four hours requires about 168 workers laboring in the background. A larger drone, such as the Global Hawk surveillance drone, needs about 300 people...an F-16 fighter aircraft requires fewer than 100 people for a single mission."

It's well known that the industrial revolution created countless new jobs that were unimaginable at the time, a sentiment echoed in The Second Machine Age by Brynjolfsson. But how do you pick the winners that will bring the most jobs? Some say disruptive innovation, but it still seems like an open question.

> "Self-driving cars, for example, could potentially put 5 million people involved in transportation jobs out of work....."

On a work day, NYC subway provides 6 million trips. Think of all of the car drivers it is displacing. And then there are the buses! And that is in NYC alone. Just think of all of the drivers mass transit has already displaced throughout the nation!

Then there is intercity transit: think of all of the drivers displaced by planes, trains, and buses!

Cars, even electric ones, create air pollution which impacts health as well as greenhouse gas. Electric cars are charged from electric power plants -- most of the US electricity is generated by carbon-based fuels -- coal and gas.

Using a service that transports multiple passengers, like Via [parts of Manhattan and Brooklyn, Chicago, Washington DC] or Uber Pool, at least helps to reduce air pollution and greenhouse gas emissions compared with single-passenger vehicles.

I guess with the growing remote-work movement this becomes harder and harder to do, since you spend less time with the peers you can "argue with" mentally; you lack time around them to get a better sense of how they think.

I do like the idea of almost a rolling office. I've always wanted a sort of vagabond life fueled by tech. There's so much out in the world and so many people. It's a shame that we're often stuck in the same places for such long periods of time.

If I become a remote worker/work-from-home/SMB owner, I'd love to just be in a self-driving car doing stuff on the go, changing where I am all the time.

All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room? What if you get to spend that hour playing with your kid or reading the news or watching TV or actually working because you don't have to worry about driving?

I suppose one can find these answers from people who commute by company shuttles, trains or subways.

I'm getting tired of all of the hot air coming from these tech oligarchs. They're so enriched by a tech boom and a decade of easy money that we worship at their feet. Their vision and goal for the future is simply more money for themselves at the expense of others.

"Guys look! An hour long commute is actually a good thing because you can spend it with your kids!" Why is it so hard to spend time with our kids now?!

I know that sounds harsh, but we seriously need to stop the hero worship in SV culture and begin building a society that benefits everyone, not a society that works itself to the bone just to eat the cake of a larger corporation and enrich the early investors. They will just as quickly dilute your quality of life as they will dilute the shares in your company.

His ideas seem to be (A) some large changes in the economy and society from (B) some exploitations of largely existing computer technology to meet some want/need previously unnoticed or infeasible to meet.

But, even for just (A) and (B), there is MUCH more potential in ideas that Andreessen seems to ignore.

An example was Xerox: Copying paper documents was important. The main means was carbon paper. Xerox did quite a lot of engineering research based on some early research, IIRC, at Battelle. The result was one of the biggest business success stories of all time.

he's a media vc. facebook is basically his crown jewel and that's it. facebook/media is cool i guess, but i don't see how he knows much about anything else, such as robotics or ai.

just look at andreessen horowitz's investments. many are largely media companies (buzzfeed, stack exchange). they've tried doing finance, which is a much bigger market, but clinkle clearly imploded and coinbase is probably next (literally, transfers went down the other day, eek). so fb is still all he's got.

he hasn't invested in any big winners yet beyond fb/media. so why should i listen to this guy's advice (unless of course i'm building a media company)?

1) Uber's upfront estimate is based on a naive calculation of getting from A -> B. From a software perspective, that makes sense. The consumer hasn't even committed to riding, so let's just toss out a ballpark figure.

2) If the consumer looks at the figure and says, "Yes, that's reasonable for transportation from A -> B", which they indicate by clicking "Request Ride", then they are agreeing to pay that price for the service.

3) The rider can verbally request a different route once in the Uber.

4) The driver is paid based on minutes and miles, via some formula that they've agreed to. The rider is charged based on an up-front calculation, which they can decide if it is worth it or not.
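The two-sided arrangement in points 1-4 can be sketched with a toy calculation. Every rate, the buffer, and the commission below are invented for illustration; Uber's real formulas are not public.

```python
# Hypothetical sketch of upfront rider pricing vs. metered driver pay.
# All constants are made up; the point is only the structural asymmetry.

def upfront_rider_price(est_miles, est_minutes,
                        per_mile=1.75, per_minute=0.30,
                        base=2.00, buffer=1.15):
    """Rider sees a fixed quote based on a *predicted* route,
    padded with a buffer so the company rarely loses on the trip."""
    return round(buffer * (base + per_mile * est_miles
                           + per_minute * est_minutes), 2)

def driver_payout(actual_miles, actual_minutes,
                  per_mile=1.75, per_minute=0.30,
                  base=2.00, commission=0.25):
    """Driver is paid on the miles/minutes *actually* driven,
    minus the platform's commission."""
    gross = base + per_mile * actual_miles + per_minute * actual_minutes
    return round(gross * (1 - commission), 2)

# Predicted route: 10 mi / 25 min. Actual route: 8 mi / 20 min.
rider = upfront_rider_price(10, 25)    # 31.05
driver = driver_payout(8, 20)          # 16.50
print(rider, driver, round(rider - driver, 2))
```

With these made-up numbers the rider's fixed quote and the driver's metered payout are computed from different inputs, so a shorter-than-predicted route widens the middleman's margin, which is exactly the gap the lawsuit is about.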

It sounds like the lawsuit is alleging that the rider is being defrauded by being taken on a different route than the one displayed at time of purchase.

I think this is silly because, to my knowledge, everyone taking an Uber is paying for the transportation and not any particular route. I.e. being taken on a specific route isn't what the rider is agreeing to pay for. Also, as noted in (3), the rider is always free to change the route.

Additionally silly because the rider seems to be alleging that they were defrauded by being taken by a more efficient route. There just doesn't seem to be any "harm" in what's happening here. I can understand the case if the user agreed to go from San Francisco down to San Jose, based on a route straight down the 101 highway, then, once they got in, was driven to San Jose through Los Angeles.

To me it seems the disconnect here is between the fixed fee on one side (the passenger) and the flexible fee on the other side (the driver).

Uber is, sort of, acting as an insurer and underwriting the cost of the journey. The passenger pays a fixed fee for a projection of how much the route will cost and the driver gets paid by how much it actually costs in driving time and distance. If there is some sort of unexpected delay and the journey takes longer then, presumably, the driver will be paid more than the passenger paid so Uber will lose out.

As with all insurers Uber charges a higher initial charge to act as a buffer and minimise the chances of losing money on the journey.

I can't really see any way of getting round this as long as the passenger pays a fixed price and the driver is paid a flexible fee.

Both driver and passenger think they know the full truth of the matter for the financial transaction they're agreeing to, but they don't. There's implicit dishonesty in that, and when you combine dishonesty with money we call it 'fraud' usually.

But let's set aside the question of whether it was legal. Was it moral?

Software like this doesn't fall from the sky- management approved it, software teams wrote it, maintain it and system tests probably exist to validate it works... how do those developers feel okay about this? How do they not feel like they're cheating people out of money? When your Mom hears about it, and asks if you were part of it will you spend 20 minutes giving a long-winded answer about how it was actually not a bad thing? That's a bad sign, man.

I think people misunderstand upfront fares. It's like buying an airplane ticket: the airline charges passengers the appropriate price to fill the plane, and it pays pilots a salary. Pilots who fly more profitable routes don't get paid bonuses because their passengers pay more. Same thing with UPS drivers, who get paid a fixed amount to drive packages around. The concept of "upfront fares" seems to be widely practiced in logistics companies, and is probably part of the transition as ride-sharing companies become less like taxis and more like UPS/airlines.

The fare discrepancy can extend beyond longer/shorter route calculation -- there is also the issue of surge price disparity between driver and passenger. For example, the user would see 3x surge pricing while the driver would see 2x surge pricing, where the user is charged for 3x but the driver is paid for 2x. This is pure speculation and I have not witnessed this behavior but it's another way things can go wrong in Uber's favor.

Uber might be able to defend itself by saying that the data provided to the driver and passenger differ because of misconfigured caching and stale data being served to either party, but it's a moot point if Ars Technica has concrete and verifiable claims of methodical and programmatic fraud. Personally, I have witnessed being billed $0 in-app after taking a round trip (effectively zero distance traveled) while the email notification showed the proper billing value, and there may be more instances of this "confusion."

As an Uber user, I'm unaware of this "upfront" pricing model. I thought the price charged was based on the actual time/distance (which, incidentally, they email me on the receipt). I know I can estimate the trip cost, but I thought that was just an estimate.

Am I wrong? What is this "upfront" pricing?

And is the reverse true? E.g., can I commit to some quoted price and then have the driver take some crazy route?

A bit off topic, but I was just in New Orleans for a fun trip. I normally use Lyft, but apparently Lyft pickups were not allowed at the airport so I used Uber (my last Uber ride was months ago).

After waiting at least 15 minutes in the pickup spot, my driver cancelled. Annoyed, I requested another Uber ride (which went fine). However, I was shocked to learn that Uber had still charged me a cancellation fee for the first ride and continued to argue it was appropriate when I protested.

I finally resolved it when I continued to press the issue, but I found the whole scenario incredibly customer-hostile. Along with the litany of gross Uber stories, I will continue to prefer Lyft!

UberGo is the most prevalent option in India. In UberGo you are shown a fixed final price at the time of starting the trip. This is supposedly calculated based on the best route you will take from point A to B, where "best" means cheapest cost, trading off short/long routes against the traffic congestion on those routes that costs time.

But once the rider gets into the car and the driver starts Google Maps, it can show a different route due to changing traffic conditions. Or the driver can refuse to follow Google Maps and use his own judgement on which route is better (for him). In either case, Uber should be transparent and show both rider and driver the difference between what was initially calculated and what the actual trip cost. But Uber does not do this. Instead, they also add another arbitrary/opaque surge multiplier. If they have to commit any "fraud" at all, they are better off doing it by showing a different multiplier to the rider than to the driver. Consumer protection agencies should insist that Uber at least be transparent and predictable in how they determine the surge multiplier, and that their distance/time metering is accurate.

This just sounds like Uber quickly charges you for the worst case since if they charge you for the best case and things go wrong, Uber loses. No one can know what route will even be possible given how chaotic traffic and closures can be. Then the driver gets paid by whatever route is actually taken. I don't really see this as an issue at all.

"27. In the overwhelming majority of transportations, the upfront price is the amount that a User is ultimately charged for the transportation services by the driver.28. When a driver accepts a Users request for transportation, the Users final destination is populated into the drivers application and the driver is providedwith navigation instructions directing him or her to the best route to the Usersdestination"

---------

It seems like the User sees a price X for a ride and accepts it. The driver might see a price X-y if conditions have changed. Doesn't that imply the User agrees to price X and the driver to price X-y? Uber might be able to adjust the price at the end, but can they be sued if both parties agree to it beforehand?

--------------

"36. Had Plaintiff and the Class known the truth about the Uber Defendants deception, they would never have engaged in the transportation or would havedemanded that their compensation be based on the higher fare."

------------

I am curious as to how they reached the conclusion that Uber was intentionally doing this. Did a bunch of drivers coordinate experiments with riders to see if there were price differences? Did they just log out and log back in to different accounts to see the price differences?

I am neutral on Uber, so I feel it's natural to question whether Uber is seen as an easy target to go after, since they are already in a legal swamp. IANAL, so I would love to read what people familiar with the law have to say.

In the thread about Uber retreating from Denmark, people asked why taximeters are sensible regulation. This is the reason. We need a trustable third party that ensures fair transactions. Uber cannot be this because they have their own interests. Regularly checked taximeters can ensure this at least partly.

If there is a sudden traffic jam and it takes twice as long, then presumably uber must pay the extra money to the driver. So are the drivers asking for some kind of "flat rate or variable, whichever is greater" contract?

This is a non-story and just good product management. This feature solves the problem of presenting a surge multiplier to the customer. With a surge multiplier the customer has to guess how much it's going to cost. With this they just see the price and figure out if it is worth it or not. Reducing purchase friction and uncertainty increases demand and is good for drivers.

Plus, both Uber and Lyft are assuming risk with upfront pricing. They are guaranteeing a price. Sometimes it will be higher and sometimes it will be lower. The driver is accepting a different payment arrangement based on distance and time.

Both companies are classic middlemen and taking advantage of consumer surplus.

If the allegations in this suit are true, fuck Uber forever. They should go out of business, their assets should be stripped from the investors and redistributed to the users, and Kalanick and a bunch of other people should go to jail for fraud. There is no way to overlook the persistent structural problems displayed by this company. Some things could be matters of opinion (like the values of their corporate culture and so on), but there are multiple instances by now of Uber actively choosing to circumvent laws or deceive people on a systematic rather than an occasional or ad-hoc basis. I've rarely seen such a clear chase for revocation of a business license.

This is all very interesting, but Uber has pretty robust arbitration and class action waiver clauses in their contracts, both with the users and the drivers. Sadly, this will go to arbitration on an individual basis pretty quickly. I haven't seen Uber lose a motion to compel (except once in S.D.N.Y., but it was quickly reversed on appeal).

The issue is with the agreement with Uber's drivers: if Uber is changing the terms of the relationship without letting them know and agree to it, then that's a major violation of the contract and Uber could see a massive labor lawsuit.

Are we worried that shady behavior that hurts consumers and riders might become the new equilibrium? I don't see how it could be. Whatever the machinations of Uber to artificially alter prices, no matter how sneaky, at the end of the day they'll lose drivers and riders to competitors if their margins drift too far from the economic cost of being the middleman. A driver doesn't need to know in what way he or she is being lied to or manipulated to know that they make less per hour driving for Uber than for [Uber's next best competitor]. Thus, I don't see how there could be an equilibrium where Uber is overcharging and still has a significant portion of the market.

No this doesn't happen. Some people may think that Uber charges the rider a different rate than the driver receives but it isn't the case.

A driver I had was sure this happened and asked me that on a fairly expensive trip (I think it was around $80).

I told him this didn't happen and I gave him my personal phone number and the amount of fare I was charged. I told him to check his daily numbers and if he didn't see this charge then to call me immediately. He never did.

I'm no lawyer but I understood "fraud" to be misrepresentation. Who is being defrauded? The passenger pays one price, the company pays the driver and takes a bit of that fee. The company is defrauding the driver by not telling him the full fee the passenger is paying? If they word it such as: "A % of the fee and other fees", I don't see fraud. Not a lawyer.. feel free to correct me.

One good way to make sure the fare matches the distance would be to install some sort of device that measures mileage. The driver could start the device when the ride starts and turn it off when it's over. It could even calculate and display the fare for both parties!

Of course that kind of transparency wouldn't be possible unless all the vehicles had the device. So you'd probably need a licensing system for them. Which in turn could be overseen by a commission made up of industry reps and local government officials to ensure fairness and local control.

But an argument that uses objectively true and verifiable facts may nevertheless be invalid (i.e. it's possible that the premises might be true but the conclusion false). Similarly, a news story might be entirely factual but still biased. And in software terms, your unit tests might be fine, but your integration tests still fail.

So here's what I tell people:

Fact checking is like spell check. You know what's great about spell check? It can tell me that I've misspeled two words in this sentance. But it will knot alert me too homophones. And even if my spell checker also checks grammar, I might construct a sentence that is entirely grammatical but lets the bathtub build my dark tonsils rapidly, and it will appear error-free.

Similarly, you can write an article in which all of the factual assertions are true but irrelevant to the point at hand. Or you can write an article in which the facts are true, but they're cherry-picked to support a particular bias. And some assertions are particularly hard to fact-check because even the means of verifying them is disputed.

So while fact checking can be useful, it can also be misused, and we need to keep in mind its limitations.

In the end, what will serve you best is not some fact checking website, but the ability to read critically, think critically, factor in potential bias, and scrutinize the tickled wombat's postage.

The problems aren't facts. The problems are what completely distorted pictures of reality you can implicitly paint with completely solid and true facts.

If 45 states that "the National Debt in my first month went down by $12 billion vs a $200 billion increase in Obama first mo." that's absolutely and objectively true - except that Obama inherited the financial meltdown of the Bush era and Trump years of hard financial consolidation (while any legislation has a lag of at least a year to trickle down into any kind of reporting at government scale).

Fact-checking won't change a thing about spin-doctoring. At least not in the positive sense.

I think this has huge potential for abuse. Let's say politifact or snopes or both happen to be biased. Let's say they both lean left or both lean right. Now an entire side of the aisle will always be presented by Google as false. I know that's how most people perceive it anyway, but how's it going to look for Google when they're taking a side? Also, I have to wonder whether this will flag things as false until one of those other sites confirms it, or does it default to neutral?

It's a great start and I hope it leads to improvement, but this has the same psychological effect as reading a click-bait headline (fake news in itself) unless readers dive deeper. And just as with Wikipedia, the "fact check" sites could be gamed or contain inaccurate information themselves. Users never ask about the primary sources, and instead just read the headline at face value.

My pessimistic expectation is that this inevitably will result in something like:

>Snopes main political fact-checker is a writer named Kim Lacapria. Before writing for Snopes, Lacapria wrote for Inquisitr, a blog that oddly enough is known for publishing fake quotes and even downright hoaxes as much as anything else.

>While at Inquisitr, the future fact-checker consistently displayed clear partisanship. She described herself as openly left-leaning and a liberal. She trashed the Tea Party as "teahadists." She called Bill Clinton "one of our greatest presidents."

I think that politifact, snopes, and most fact-checking websites I'm aware of are great and everyone should use them as sources of reason and skepticism in a larger sea of information and misinformation.

But they are not authorities on the truth.

Google is not qualified to decide who is an authoritative decider of truth. But as the de facto gateway to the internet, it really looks like they are now doing exactly that. I am deeply uncomfortable with this.

Fact checking is irrelevant. What's necessary is education. Just like spellcheck will not allow you to magically compose elegant prose, fact check is not going to prevent people from being misled. Notice how both of these "problems" have the same solution. In fact, fact check can be counterproductive, as people now sprinkle their articles with irrelevant facts.

What I wish they would do is use their fancy AI to put in a link to the original source. Tracking down original sources is extremely tedious, but it generally gives you the clearest idea of what's actually going on.

This is incomplete: they need to also include the political affiliations of owners of "fact check" sites, and perhaps also FEC disclosure for donations above threshold, and sources of financial support. I.e. this site comes from PolitiFact, but its owner is a liberal and he took a bunch of money from Pierre Omidyar who also donated heavily to the Clinton Global Initiative. Puts the fact checks in a more "factual" light, IMO. Fact check on the fact check: http://www.politifact.com/truth-o-meter/article/2016/jan/14/...

Things have gotten hyper-partisan to the extreme in the past year or so, so you sometimes see things that are factually true rated as "mostly false" if they do not align with the narrative of the (typically liberal) owners.

Reading the article, it looks like what is going on is that news publishers can now claim that their articles were fact checked, or that a certain article is a fact check of another one, using special markup. They also say the fact checks should adhere to certain guidelines, but I don't see how it would be possible for them to enforce any of these guidelines. It looks like just a self-labelling feature, with all the abuse potential inherent in this.
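For the curious, the "special markup" is JSON-LD embedded in the fact-check page using schema.org's ClaimReview type, roughly along these lines. All field values below are invented placeholders:

```python
import json

# Rough sketch of ClaimReview markup a publisher embeds in a
# fact-check page so crawlers can label the result. Everything
# here (URLs, names, ratings) is a made-up example.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2017-04-07",
    "url": "https://example.org/fact-checks/some-claim",
    "claimReviewed": "Example claim being checked",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly false",
    },
}

# This JSON would sit inside a <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```

Since it is self-declared structured data, nothing in the markup itself proves the check was done honestly, which is exactly the enforcement gap the comment above points at.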

What if I told you (cue the Morpheus meme), that people consuming the "fake news" don't care that it's fake? It's called confirmation bias and winning. Education isn't going to solve this issue, you can't forcibly educate people nor can you change their core "values" and their determination to be "right".

The only "education" that I can envision working is quantifying the real-world-impact of their votes on the personal level. Ex: Your health insurance was cancelled? The representative you voted for caused that. This unfortunately is normally executed with a partisan goal, however should be applied as a public service to all Americans.

There is obviously a lot of debate on whether or not fact checking is accurate and useful. I think simply presenting a fact check will help people think more critically about headlines they see every day. Like "Mythbusters Science". It's not perfect, but it helps people to think.

Besides political debates, anyone else thinks this 'ClaimReview' schema put to use by Google is one step towards the application of Semantic Web? There might be something more than just a 'new app by Google' here.

I'm amazed at how much cynicism I'm seeing here about this. People just keep repeating what can be boiled down to the same premise: complete objective truth basically doesn't exist. Truth is messy, tricky, subjective business. This is not new; this is just how the world is. Truth and understanding are best-effort and always have been, so why is a tool that attempts to combat some of the most egregious falsehoods even remotely a bad thing? Nobody should claim that it's bulletproof, but I'm not seeing anyone really do this. The problem is some of us never deal in absolutes; we see nuance in everything (climate science, economics, political science), but there are others who do deal in absolutes and make a killing doing so. Sitting around having the same debate over and over about facts and truth doesn't do anything to tackle the problem.

My rule of thumb is that generally there is safety in numbers. Don't trust any single source and don't trust something that doesn't have a chain of reasoning behind it. I trust all kinds of scientific statements that I don't have the qualifications or time to vet myself - but we have to do our best and that often means doing a meta-analysis of how a conclusion was reached and how many other people/groups (who themselves have qualifications and links to other entities with similar qualifications) that the statements are linked to.

Fake news isn't 100 levels deep; it's usually 1 level with no real supporting information. When people (like Trump) categorically denounce someone else's statement they often provide no real information of their own. Similarly, when refuting a fact-check, most people don't dig into it and refute something in their chain of reasoning; they just say "well that is just not true!" and leave it at that.

We don't need to fundamentally fix the nature of truth, but we need to be able to combat the worst cases of misinformation, and any tool that helps do that is great. Continuing to have the same philosophical debate about truth is fine from an academic standpoint, but from a practical standpoint it is sometimes not helpful. I feel similarly about climate change: it's great to acknowledge nuance, but what good is that if we're trending towards pogroms and a totalitarian dictatorship (to be hyperbolic, maybe)?

It's a witch hunt. Science (rather, life sciences) has a similar problem. There are just enough (statistically significant) facts to push many agendas. Peer review weeds out some stuff, but that doesn't stop a lot of wrong conclusions being pushed to the public.

Maybe a better solution is adversarial opinionated journalism, rather than this proposed fact-ism.

The blog title is "Fact Check now available in Google Search and News around the world". I think that the extra bit at the end is worthy of inclusion, as I expect this to become a point of contention over the years.

I would not be surprised if different governments take issue with Google adding any sort of editorial commentary, even if it's algorithmically determined etc.

I don't like this, at all. People need to rely on their own reasoning skills and critical judgement and not let centralized authorities have a large effect on what people can read. I like systems to be decentralized and this seems to be the opposite.

I really wish that major legitimate institutions of journalism (i.e., the ones that require multiple independent sources, publish corrections and retractions, &c.) would just stop pussyfooting around and use nice simple accurate words like "lies" when they're reporting on somebody who's blatantly lying. False equivalency and cowardice are going to get us all killed.

Original title is "Fact Check now in Google Search and News"; the different capitalization vs the current HN headline ("Fact Check Now...") is significant, the new feature "Fact Check" is now available in Google Search and News, rather than a feature "Fact Check Now" being discussed in those services.

I know a person who eats those "alternative facts" like candy. When I tried to prove one of them wrong, I pulled out a website to do a fact check and his response was: "you trust Snopes?" so I have doubts this will help much, but I would like to be wrong.

I tried a slew of recent statements that are objectively false but that a certain politician in the United States has tried to say are true. Google returned fact checks for exactly 0 of the queries I tried.

this is just gonna create a pavlovian response akin to "ah okay this is fact-checked i'll read" which'll just compound the problem. it presumes that google's fact-checking algorithms and methodology are sound.

'Fact checking' should be limited to blatantly false news items fabricated and posted for online ad clicks, e.g. 'Obama to move to Canada to help Trudeau run country' or 'Trump applies for UK citizenship to free UK citizens from Brussels despots'. These should be relatively easy to identify and classify.

There is a wide gulf between the fabrications above and news and journalism as we know it, full of opinion, bias, agendas, propaganda, and maybe some facts twisted to suit a narrative.

The latter takes human-level AI to sift through, and even then detecting bias, leanings, or manipulation depends on one's background, world view, specialization, knowledge level, understanding of how the media works, and a well-informed big-picture view of the state of the world.

This is impossible to classify for bias, falsehood, or manipulation and will need readers to use their judgment. Trying to 'control' this is like trying to control the news: favouring media aligned to your world view and discrediting those whose views you disagree with. It is for all practical purposes propaganda as we understand the term. Calling it fact checking is sophistry.

So what takes place when the inevitable happens, and an employee decides that an existing "fact check" (conducted by a third party, Google hastens to add) is philosophically inconvenient and thus removes it?

Also, FTA: "Only publishers that are algorithmically determined to be an authoritative source of information will qualify for inclusion."

Snopes and Politifact, they can't be serious. Not that I expected them to pick a neutral source, nor am I surprised that Silicon Valley's Google picked 2 leftist "fact" sources. This is a stupid idea, everyone has a bias. This isn't to help people, this is to influence how people see things.

"The security properties of a collision resistant hash function, ensure that a modification results in a very different hash."

I really appreciate the clarity of this post. The author is building up the groundwork without skipping steps that may be obvious to many readers. I of course knew the purpose of a hash before reading the article, but some people don't - and that sentence clearly let those users know why the hash matters without making it less readable for knowledgeable readers.

If you use webpack, just drop in webpack-subresource-integrity [0] for basically "free" SRI tags on all scripts.

It's not really as useful if you are serving your static assets from the same place as the HTML (and you always use HTTPS) but if you load your js/css on another server SRI can still provide some protection.
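If you are not using webpack, the integrity value can also be computed by hand: it is the base64 of the digest of the exact bytes served, prefixed with the algorithm name. A sketch (the script bytes and filename are stand-ins):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Build an SRI integrity value: '<algo>-' + base64(digest)."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Stand-in for the real file contents you serve (e.g. jquery.min.js).
js = b"console.log('hello');"
print(f'<script src="app.js" integrity="{sri_hash(js)}" '
      'crossorigin="anonymous"></script>')
```

The hash must be recomputed whenever the asset changes, which is why build-time plugins like the webpack one mentioned above are the practical way to maintain these tags.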

It'd be cool if the browser used this to allow cross-origin caching as well.

Say I previously loaded a page that included jQuery from CDNJS, and now I'm in China and another site tries to load jQuery from Google's CDN.

Currently that request would get blocked by the great firewall. But since the browser should know that this file matches one it has seen (and cached) before it should be able to just serve the cached file.

This could also save a network request even if I'm linking to a self-hosted file on my own servers if I include the hash.
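The proposed behavior amounts to keying the cache by the declared hash instead of the URL. A toy sketch (URLs, bytes, and the hash format are invented; real browsers deliberately don't do this, in part because a hash-keyed cache leaks which resources you've already fetched):

```python
import hashlib

# Toy content-addressed cache: any URL declaring the same integrity
# hash is a cache hit, so no network request is needed.
cache: dict = {}

def fetch(url: str) -> bytes:
    """Stand-in for a network fetch."""
    return b"/* jquery contents */"

def load(url: str, integrity: str) -> bytes:
    if integrity in cache:               # same hash, any origin: hit
        return cache[integrity]
    body = fetch(url)
    digest = "sha256-" + hashlib.sha256(body).hexdigest()
    if digest != integrity:              # never cache unverified bytes
        raise ValueError("integrity check failed")
    cache[digest] = body
    return body

h = "sha256-" + hashlib.sha256(b"/* jquery contents */").hexdigest()
load("https://cdnjs.example/jquery.js", h)        # fetched, verified
load("https://ajax.google.example/jquery.js", h)  # served from cache
```

Because the hash pins the exact bytes, serving the cached copy is safe even when the second origin is unreachable, which is the firewall scenario described above.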

This might have been mentioned somewhere else, but will browsers remove, or make an exception to, the blocking of mixed content[0] when a subresource integrity check is present? I mean, there really is no reason to be paying the TLS overhead for commonly used libraries.

I appreciate the tech very much, but the visions of "Millions of people working in space"? I have no desire to be a part of that but I do wonder what they'll be working on?

And "Living on Mars". No thanks, I'll pass on that too. I can certainly see the thrill of the ride described, though, and who wouldn't want to be weightless for 4-5 minutes? But the market for that carnival ride is about as big as the number of cars Ferrari sells each year, so I don't get that.

One can call this "stepping-stone" tech for now if they choose, but it's more likely to be a cliff unless there's something of real value here. Even if we go out on a limb and say all this is really a way for the wealthy to escape the planet, they'll find there's no place close enough to go, so even that doesn't make sense.

No, none of the above makes sense to me yet so there must be something more fueling this race than what's being said.

And hey, isn't there also a downside to poking holes in the upper atmosphere and/or ozone depletion? How long do we let someone profit off the effects of that?

I honestly don't know the answers to those questions but I do wonder about them.

$78.4 billion in personal wealth. 94% held in Amazon, which is currently sporting a $433b market cap, up 50% in the last year and trading at a generous 200 times earnings.

I hope he sells more than he needs, faster than he needs, before this latest bubble (slight or extreme is open for debate) gives out. I wouldn't have guessed that Blue Origin could cost him ~$1 billion per year to subsidize. I'm glad he's doing it, very few people on the planet could afford to; of those that could, fewer still would care to.

Blue Origin seems pretty likely to tap into a much lower-cost market for panoramic views of Earth at high altitudes, similar to Virgin Galactic (had they not completely messed things up). Developing something that will get them into LEO is massively more difficult; with significant funding at $1B a year, however (about 5% of NASA's total yearly budget), hopefully they can make up their lag. I would love to see more and more ventures enter this market. The more the better!

I'm impressed by a burn rate of $1B for a company that isn't actively launching on a regular basis... Does anyone have any idea how much capital SpaceX sank before it got its Falcon 1 up and running?

I'm not really sure about the amount of money needed to have an actual impact on a stock like Amazon, but maybe somebody more knowledgeable has an idea: does this measurably lower the market valuation of Amazon? If someone is constantly selling shares in those volumes, there should be some effect, even if Bezos is obviously not selling $1B in one day but over the course of the whole year.

Imagine a reusable rocket, like the SpaceX one, but bigger and able to refly in, say, one week.

If every flight carried a satellite plus space tourists, going to space would be much cheaper, and it's feasible in the near term. I see both SpaceX and Blue Origin offering "cheap" travel to space in less than a decade.

I wonder why Bezos is spending so much of his own money (OK, less than 2% a year). Musk has perfected the art of spending Other People's Money: government loans, green subsidies, IPOs. Plus he has a good customer revenue stream in two of his companies.

I always figured Amazon and SpaceX would just grow to be sister companies. Let SpaceX handle hardware and negotiations and let Amazon handle logistics and sales. Instant 45-minute Anywhere-On-Earth delivery service. I think there would be enough cash for them to share.

I have no doubt that Amazon wants to dig deep into space mining and being the backbone of the early solar economy. Who doesn't. It's gonna take a series of extremely smart investments for whoever does manage to pull that off. The barrier to a sustainable space economy is quite high.

The story is too good. The girl, the monkeys defending her, the policeman ... all Disney-level stuff, but where are the non-Disney facts? A real story always has dark sides. This one is too perfect. I'm not saying that it's all fake, rather that I don't think we are getting the entire story. I wouldn't be surprised if we eventually learn that this girl was only living with the monkeys for a very short while, and that her issues are more long-standing. Perhaps the truth is that she was a disabled girl found among monkeys and the story has been elaborated from those simple facts.

>>> "She behaves like an ape and screams loudly if doctors try to reach out to her."

Like an ape or like a monkey? She was raised by monkeys but acts like an ape? A lay person perhaps wouldn't know the difference but by now someone with knowledge would be on site. I have been around several disabled children. The screaming and fear of being looked at or touched is not uncommon. No mention of how she reacts to being clothed? I'm no expert on feral children but I would expect that after eight years of being naked one would not be happy about clothing and that would deserve some mention ... unless of course clothing is nothing new to her.

I want to see her feet, specifically her toes. If she really hasn't ever worn shoes then her toes will show it.

This reminds me of the story of Kaspar Hauser[1] (which was made into a movie by Werner Herzog[2]) and of the fascinating book Seeing Voices by Oliver Sacks.[3]

In his book, Sacks investigates various cases of children growing up without language, how they cope (or don't cope) with it, how they finally acquire language (if they do), and how differently they see the world in both the pre-linguistic and post-linguistic states. Hauser was one of the most famous cases of this sort, Helen Keller[4] was another.

Reading this book inspired me to learn sign language, which I expected to be radically different from spoken and written language, and more powerful in many ways, as you can physically describe things in ways that have little parallel in spoken and written languages.

The monkeys seem to have been doing a better job at parenting than the people here. Note how the text below one of the pictures says she's frightened of people, while the picture right above it shows a whole bunch of (all-male) busybodies crowding into a little room with her.

Would have been interesting to have Jane Goodall involved. She could have left the child integrated but used the circumstance to bridge the communication divide between us and other primates because this girl surely knows things we never will.

Now the battle begins to shape her story such that it can be used to reconfirm one of a number of different competing narratives about man's relationship with nature, nature vs nurture, theories about language acquisition, the "critical period" and early childhood development. Did I miss any?

These types of junk-news stories seem to make their rounds on the Internet for several weeks before finally evaporating into the ether.

What's interesting is that in the past they seemed to manage to stay off the HN front page.

Now it seems like I see these stories start circulating on Outbrain or the other click bait networks and I think, well, that'll be on HN in a week or so!

These stories are usually in large part fake news, or reality tweaked or skewed with some angle that makes them almost irresistible to read about. I personally have no use for these types of stories on HN, but I certainly understand they are crafted with a very compelling hook that makes people want to share them.

If the girl managed to survive for so many years, she should have been left with the troop of primates and observed. This sudden change will probably be harder on her than a less brutal change of environment would have been.

My first thought was: what right do we humans have to take her back from her family (monkeys, in this case) and her home (the forest)? Just because she is our kind, should we impose our culture, our values, our ways (and our governments) on her?

But then - this feels more like a creative story. From the videos it looks like she might have been in the forest only for some time and needs rehab, but I am no expert here.

The story, if true, is discomforting; the mind ponders, and it does not completely add up.

We know the 'facts' but we also don't. This is exactly the kind of story that needs fact checking, but to get that you need experienced people on the ground, and confirmation will take time that the attention span of the news cycle will not allow.

The worst would be turning it into some kind of circus. I hope that now, with the global attention, the Indian authorities will immediately move her out of the current facilities, where the people are clearly not trained for this, and get her the kind of specialized care and sensitivity she needs.

It is common in India for family members to put their autistic or disabled kids in a cage and display them in a circus. I remember a family showing three of their kids in a circus as "animals" just because the babies were autistic and had tail-like features.

"We defined and employed SDN principles to build Jupiter, a datacenter interconnect capable of supporting more than 100,000 servers and 1 Pb/s of total bandwidth to host our services."

This type of scale boggles my mind. Though I've found I can no longer keep up with all the terminology popping up every day. Posts like these are my only connection to learning about the massive scaling that makes modern networks work.

"We leverage our large-scale computing infrastructure and signals from the application itself to learn how individual flows are performing, as determined by the end user's perception of quality."

Is this implying they are using machine learning to improve their own version of a content delivery network?

Espresso delivers two key pieces of innovation. First, it allows us to dynamically choose from where to serve individual users based on measurements of how end-to-end network connections are performing in real time.

Second, we separate the logic and control of traffic management from the confines of individual router boxes.

I think with platforms like this it is now safe to say that the systems and services Google is deploying are no longer in the same category as classical networked systems. This is as foreign a concept to traditional networking and the seven-layer OSI model as non-von Neumann computing is to von Neumann computing.

These presentations from Google are pretty irritating at these conferences. If you're familiar with the SDN field (as most ONS attendees would be), this presentation is essentially nothing but bragging about the scale at which they operate.

There is no useful information in here to advance the state of the art, no new ideas, no publicly available implementations (closed or open source). It's just a very high-level architectural view of a large network given by people who are incentivized to present it in the most favorable light. And due to the lack of any concrete details, it's free from critical analysis.

>Espresso delivers two key pieces of innovation. First, it allows us to dynamically choose from where to serve individual users based on measurements of how end-to-end network connections are performing in real time. Second, we separate the logic and control of traffic management from the confines of individual router boxes.

I assume their framework gives them much nicer primitives to work with than the above, which would be an advancement in the field if we could actually see an API or something.

The second is very far from "innovation". This is the essence of SDN and this has been the hottest thing since sliced bread in the networking world since 2008 at a minimum [1] and even earlier if you look at things like the Aruba wireless controller.

My biggest takeaway from this is that they can have multiple machines for the same IP address. That is just awesome, and it also explains how they have probably managed to scale up services like 8.8.8.8 without needing load balancers.

I really hope they aren't patenting any of this. I'm working on p2p tech that features (among many other features) similar real time performance measurements and smart file distribution based on load and proximity.

Roughly half the 'power of Prolog' comes from the 'power of logic programming', and Prolog is far from the only logic programming language, e.g.,

- You can do logic programming using miniKanren in Scheme (you can also extend the miniKanren system if you find a feature missing).

- miniKanren was implemented in Clojure and called core.logic.

- It was also ported to Python, by Matthew Rocklin I think, and called logpy.

- There is also Datalog, with pyDatalog as its Python equivalent.

- Also Pyke. And so on.

Plus logic programming has very important (arguably necessary) extensions in the form of constraint-logic-programming (CLP), inductive-logic-programming (ILP), and so on.

It's a huge area.

EDIT: ILP at an advanced level starts making connections with PGMs (probabilistic graphical models) and hence machine learning, but it's a long way to go for me (in terms of learning) before I can make sense of all these puzzle pieces.

EDIT 2: You can have a taste of logic programming without leaving your favorite programming language. Just try to solve the zebra puzzle [1] (without help if you can, esp. through any of Norvig's posts or videos; they're addictive).
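For a sense of what the zebra puzzle involves without spoiling it, here is a made-up three-house miniature in plain Python, brute-forcing permutations the way a first attempt usually does (a logic language instead lets you state only the clues):

```python
from itertools import permutations

# Toy zebra-style puzzle over three houses (positions 0..2); clues invented.
solutions = []
for colors in permutations(["red", "green", "blue"]):
    # clue 1: the red house is immediately left of the green house
    if colors.index("red") + 1 != colors.index("green"):
        continue
    for pets in permutations(["dog", "cat", "fish"]):
        # clue 2: the dog lives in the blue house
        if pets[colors.index("blue")] != "dog":
            continue
        # clue 3: the cat lives in the first house
        if pets[0] != "cat":
            continue
        solutions.append((colors, pets))

assert len(solutions) == 1
colors, pets = solutions[0]
print(colors[pets.index("fish")])  # -> green
```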

EDIT 3: An "expert system" (a term from the 1980s) is largely a logic programming system paired up with a huge database of facts (and probably some way of growing the database by providing an entry method to non-programmer domain experts).

EDIT 4: In other words, logic programming (along with other things like graph search, etc.) is at the core of GOFAI (good old-fashioned AI), the pre-machine-learning form of AI, chess engines being a preeminent example.

I've always had trouble understanding Prolog; a lot of guides online seem to just tackle the syntax or assume you already know a lot about logic programming. Take, for example, the first example in the link (https://www.metalevel.at/prolog/facets):

I don't understand it. I read the segment describing it multiple times, but I still don't get it, and it's not the syntax; I don't understand how it should work. I tried reading the next few chapters, but I feel I'm missing something! Is there a "Prolog for dummies" out there?

There is a class of problems which you can solve using Prolog with pure pleasure.

There is one thing, however: Prolog can magically hide the complexity of many things, which is a double-edged sword. On many occasions you hide away the computational complexity and wonder why the execution is so slow. This rarely happens in imperative languages (where you are more aware of all the loops and recursion). I guess this is why many people hate Prolog...

I implemented the Wumpus World from Peter Norvig's AIMA using different techniques. I found that Bayesian logic was much more powerful than logic programming. Perhaps that explains why Prolog hasn't flourished.

Logic programming is limited to truth values of true or false, 0 or 1. Bayesian logic can deal with uncertainty values like 0.2 or 0.3. It almost seems like a superset. It is also more intuitive, IMHO.

I used to teach Prolog in the GOFAI days of the early '80s. It certainly was fun, and probably the quickest way to start solving interesting search-based problems without having to write pages and pages of code. It was very good for motivation, and for encouraging "top-down" design.

One of my professors in college was an original creator of the Prolog language. He made us learn Prolog so that he could teach us something we could have just as easily done in C or Java. I strongly disliked him. For that reason, I am filled with negative vibes when I think about Prolog.

Can anyone give examples of the kinds of problems prolog is ideally suited to? I took a course on it at university. It looked interesting but I didn't really "get it". It might be worth another look now I have a bit more experience under my belt. I've got a lingering feeling it would solve a certain kind of problem very easily.

I bookmarked this. I have been revisiting my own programming history. I used Prolog a lot in the 1980s, partially because I was influenced by the ill-fated Japanese 5th generation project. A few weeks ago I pulled my old Prolog code experiments from an old repo. Prolog is a great language for some applications, and Swi-Prolog has some great libraries and sample programs.

I also have used Common Lisp a lot since the 1980s and I am in the process of working through a few of the classic text books, and I have it on my schedule to update my own CL book with a second edition.

I once wanted to solve a problem that is perfect for Prolog, but I wanted it in Clojure. Turns out there's a great library for that! I don't think it has the full power of Prolog (I only know of Prolog and what it does but I've never used it), but for integer constraint programming it was a joy to work with.

I wrote some Prolog for a PL class recently and had to debug some cases of nontermination caused by the depth-first search strategy of SLD resolution. I was wondering why Prolog (or some other logic programming language) couldn't use breadth-first search instead, to avoid those cases, but I couldn't find answers online. Could someone here who knows Prolog better have an answer?
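One common answer: breadth-first (or iteratively deepened) search is complete where clause-order depth-first search is not, but its frontier can grow exponentially, which is part of why Prolog kept DFS and systems like miniKanren use interleaving instead. A toy Python sketch of the difference (node names and the `expand` relation are made up):

```python
from collections import deque

def expand(node):
    # A tiny proof search: "root" has two branches, in Prolog clause order.
    if node == "root":
        return ["loop0", "solution"]      # infinite branch listed first
    if node.startswith("loop"):
        return ["loop" + str(int(node[4:]) + 1)]  # never bottoms out
    return []

def dfs(root, limit=1000):
    # Depth-first: commits to the infinite left branch and never backtracks.
    stack, steps = [root], 0
    while stack and steps < limit:
        steps += 1
        node = stack.pop()
        if node == "solution":
            return node
        stack.extend(reversed(expand(node)))  # preserve clause order
    return None

def bfs(root, limit=1000):
    # Breadth-first: explores level by level, so any finite-depth answer is found.
    queue, steps = deque([root]), 0
    while queue and steps < limit:
        steps += 1
        node = queue.popleft()
        if node == "solution":
            return node
        queue.extend(expand(node))
    return None

print(dfs("root"))  # None: step budget exhausted down the infinite branch
print(bfs("root"))  # solution
```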

Prolog's mathematical foundation is sound, but the devil is in the details, and very soon you encounter two of Prolog's most glaring flaws, which lead to spaghetti code worse than even BASIC ever produced: