This couldn't have come at a better time - I was actually looking for a durable message queue written in Go. Is there any way to read more about the architecture of this system? I find systems like these quite fascinating, but taking the time to go through the code can sometimes be very time-consuming. It would be awesome if more projects had a writeup as detailed as cockroachdb's[0]!

Aside: There used to be a site a while back that distributed compiled binaries of Go code for all platforms. Is it still up, by any chance?

I can confirm that generally there is a different viewpoint. In Germany it is somewhat of a last resort, mostly to be avoided, and even then the aspects of rehabilitation (and deterrence) are the most important.

In the US I find there's often a notion of revenge, as in "This person must suffer for what (s)he did!", and more severe sentences are usually considered "more justice".

There are no absolutes when it comes to criminal punishment. There is always an exception to everything.

That said, I've long thought that if you stop treating people like animals, they'll stop acting like it. I've never studied prisons, or psychology, or anything remotely related. But I genuinely believe this philosophy is worth investigating.

Could it work? Of course, the data is abundantly clear here. If you treat prisoners like shit, then their chances of recidivism are much higher. If you want to reduce crime, then you have to treat prisoners well. There's really very little controversy here from an academic perspective.

The barrier is cultural. The public in the U.S., by and large, expects prisoners to be punished harshly. Retribution and deterrence rank way higher than rehabilitation in terms of the desired outcome of incarceration. As far as the average American is concerned, anything that happens inside of the prison walls, including but not limited to rape, murder, torture, is the price you pay for breaking the law.

If we want to have better prisons, then we'll need people to develop some degree of empathy for prisoners, and that's a tough battle in the U.S.

Note that in the US, prison is a multibillion-dollar industry. It costs around $30,000 (on the low end) to incarcerate someone for a year. Even if they spend only a day or two inside, someone is making money, to say nothing of the calling minutes. All in all this may be a $50-$100B industry in the US.

One thing about the RPV/Quadcopter debate that is rarely mentioned is the reason why they don't use ADS-B transponders.

The FAA requires ADS-B transponders to have high-accuracy GPS, and that pushes the cost to over $2,000 per device. It would be logical for the FAA to relax the GPS requirement slightly, so a cheap GPS module is sufficient to alert nearby aircraft of RPV activity over a certain altitude (e.g. 200 ft AGL). These RPV-grade ADS-B transponders could use a limited signal output, to avoid nuisance pop-ups from longer distances. The transponder Mode-S ID uniquely identifies the RPV.

It would be possible for a transponder to use an alternate channel frequency, similar to how many General Aviation aircraft use 978 MHz ADS-B. Even with an alternate RPV channel, the RPV operators would still be alerted to regular aircraft operations.

If used recreationally, they're R/C flying models, no matter how much the DJI marketing dept likes the term. So will the hobby that has been fine for decades be hit too?

Will you have to register a Cheerson CX-Stars that weighs 8g?

What about the congressional Special Rule for Model Aircraft? "Technically it's not the FAA so we can regulate everything!" or what?

They're also not that hard to build yourself and the components are shared with other things (e.g. motors are shared with R/C cars and the first popular flight controller was an Arduino with the sensors from a Wii Motion Plus or Nunchuk) so you can't ban the part sales.

I love the progress drones have given our society. From surveying property to beautiful aerial views in a video to cool view of your kids soccer game to just plain fun. Drones are amazing.

You can buy a great drone and get amazing shots over and over for less than it would cost to hire a helicopter once.

However, they also pose a risk. I'd favor a simple registration process and clear cut rules that are not infringing on the use of drones. I'd like to see the registration used mostly to enforce the rules and track a drone back to its owner.

Ideally, drones should also broadcast their IDs, so it's easy for other pilots to know they're in the area, and also to allow LEOs to ticket/fine for offences. Without a broadcast ID, most of these rules will be hard to enforce.

All the big consumer drone makers are in China. I love how all sorts of unregulated surprisingly powerful tech comes out of China. The last mischief they engaged in was way too powerful lasers. They just don't have the fear of science in that country like we have in America.

This has to be one of the stupidest ideas I've ever heard in my life. Not because it doesn't possibly have some noble intent behind it, but - if for no other reason - because it's going to be bloody impossible to enforce. When any hobbyist can easily build a drone/UAV without much in the way of special skills or equipment, how in the hell do they ever expect they'll get everybody to register them?

Sure, new-tech drones should respond to a targeted ping w/ serial number or similar. Simple challenge/response crypto could avoid cloning/spoofing. Enforcement of registration: no response = police will follow the drone to its owner and ticket (else impound). Physical serial number for legacy drones.
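The challenge/response idea is easy to sketch with standard primitives. Here's a minimal, hypothetical version: the drone holds a secret key registered with the regulator, the ground interrogator sends a fresh random challenge, and the drone answers with an HMAC over the challenge and its serial number. Replayed responses fail because each challenge is new. (All names and the key-distribution model here are assumptions, not any real protocol.)

```python
import hashlib
import hmac
import os

def drone_respond(serial: str, secret: bytes, challenge: bytes) -> bytes:
    """Drone side: prove knowledge of the registered key for this serial."""
    return hmac.new(secret, challenge + serial.encode(), hashlib.sha256).digest()

def verify(serial: str, secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Interrogator side: recompute the MAC and compare in constant time."""
    expected = hmac.new(secret, challenge + serial.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)          # interrogator picks a fresh nonce per ping
key = b"registered-device-key"      # assumed shared at registration time
resp = drone_respond("DRN-0001", key, challenge)
assert verify("DRN-0001", key, challenge, resp)
```

A cloned drone that doesn't hold the key can't answer a fresh challenge, which is what makes the "no response = ticket" enforcement plausible.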

They just need to make these drones kit only and harder to fly. I've been flying RC with my dad since the early 90s. You would spend a month building these gas powered planes or helis. Only way to learn to fly was through other people or crashing a lot. It def kept irresponsible people out of the hobby. Plus you have to have skill to fly a 6 channel collective pitch heli. These things are not toys.

In case anyone is reading from NBC, would you mind using the term "UAV" or "RPA" over "drone"? When we throw around the word "drone" here on reddit and hacker news I feel like it's okay because many of us know the key differences between what we're flying and the hellfire raining completely terrifying semi-automated flying death machines deployed in war-zones around the world.

However, many people who read your articles might not be so well informed. Many people have an incorrect and confused understanding of them, and to them the word "drone" carries that jumbled mess of associations. It would be helpful to everyone involved to use more precise modern language. You may argue - as the ACLU has - that the word "drone" is the most direct way to talk about unmanned aerial vehicles to the broad public. It may be easy to use and direct, but it's far from accurate. As Edward Murrow famously said, "We cannot make good news out of bad practice."

> Q: Hiring?
> A: One of our rules was, we don't want to hire your friends. Another rule was not to hire people from lesser universities. Another rule was to only hire people with good GPAs. It was frustrating, but it meant that we ended up with a lot of really smart people from great universities, and that served us well.

I wonder what led to Google changing their minds and not considering the GPA as a part of hiring decisions.

The key observation is that a study published in Nature "found a correlation between child brain structure and family income. Simply put, family income is correlated with children's brain surface area, especially among poor children. More money, bigger-brained kids."

This is supported by the Cherokee study: when the families became a bit better off ($4,000 a year) the kids did better when they grew up.

So it's missing the point to argue about how grown-ups might (or might not) "waste" money and whether cash is better than other forms of welfare. The point is that bringing up children in poverty creates a worse outcome for society as a whole.

The fact is that $4,000 a year for 20 years is a very small amount compared with the cost of US police, courts, and prisons. If a poor kid grows up, gets a job and pays taxes, that's a massive win for society compared with the same kid growing up in the sort of deprivation that leads to a life of crime and jail.

What we've seen in study after study is that if you take low-income people who are struggling and give them more money, they use it in ways that improve their lives, they become more productive, and this has profound long-term impact.

At the same time, I don't think anybody really disputes that giving moderate amounts of extra cash to a heroin addict is unlikely to improve their lives. They need medical treatment before anything else.

I've never seen a study which looked at extra cash injections for long-term unemployed. That would be an interesting one. I would be unsurprised to find that cash alone was insufficient to solve their problems; the most obvious thing they need is education.

My point here is that cash clearly helps in some - probably most - circumstances. But "just give cash" is insufficient; we still need to work on all the other things as well.

Many of these studies appear to be variations on the form "We selected some people, gave them some money and told them we'd be back to check up on how they were doing." I wonder if there's a significant effect from the expectation that being part of the study will improve one's life. Perhaps receiving attention from authority figures (PhD or MD), and potentially their judgement, might alter behaviour in similar ways to the "honesty box" experiment [1].

It seems like there might also be some bias in selection - presumably one has to consent (in writing) to be part of a long-term study. Perhaps this encourages participants to think of the future, and might influence decision-making away from short-term goals and towards "investment" uses of the money rather than ephemeral ones.

I'm looking forward to seeing the results of their differential experiment, and whether there's as much difference between the $333 and $20 groups as there is between the $20 group and the "$0 group" of the general population.

There is unfortunately a large political danger with giving cash. If you give cash, then it becomes possible for pundits to agitate for more assistance on the theory that "you can't live on $X". When poor people are explicitly given a room, 3 square meals/day, and government issue poor-people sweats, it's pretty hard to argue that they are somehow lacking anything necessary to live.

Based on this article, there is also no reason to believe that cash assistance rather than in-kind assistance is necessary. The proposed mechanism is "Parents are happier because they have more money, leading to less fighting within the family. This lowers stress on kids..." But in-kind assistance would also lower stress since parents wouldn't need money.

Money is a mechanism that, in a way, insulates us from a variety of problems and, when those are reduced, allows us to focus on other things. For instance, when you're not worried about where your next paycheck is coming from, you can invest that energy elsewhere: reading, continued education, practicing your craft, etc. When one is no longer burdened by the hunt, he's available to venture into other activities.

It seems that we can give money to those in need, and see a "return" on that money in terms of increased employment and generation of value. Sounds like an investment to me.

The government is in a unique position; it is able to realize a gain on this via income tax revenue. Maybe this can be done via basic income? Could we pass a basic income bill as an "investment in America", and is there a model where the government actually makes a return on this investment?

"We examine how a positive change in unearned household income affects children's emotional and behavioral health and personality traits. Our results indicate that there are large beneficial effects of improved household financial wellbeing on children's emotional and behavioral health and positive personality trait development...Parenting and relationships within the family appear to be an important mechanism. We also find evidence that a sub-sample of the population moves to census tracts with better income levels and educational attainment."

> giving poor families money, on top of the benefits they already receive, improves their children's behavior

I don't think anyone questions that extra money does good. The big question in the fight against poverty is: given X available welfare dollars per family, what is the optimum allocation between giving them as benefits or giving them as hard cash?

What's funny is that we have devised this completely contrived way of divvying up the world's resources, including this notion of private ownership over key natural resources. But, in truth, no one needs to go hungry, without shelter, water, etc. There is enough.

But, then, we step back and say, "what if we give these people, who currently cannot subsist under this scheme, some marginal share of the resources we've convinced them by fiat are someone else's to give them in the first place?"

Then, of course, we measure their outcomes within the context of the same scheme, and ponder other ways to help them.

Yet, the scheme itself is much more seldom questioned. That one person can earn billions from what's pulled from the earth we all inhabit, while others die from lack of access to the same, should be expected to create irreparable distortions in outcomes. But it's somehow accepted as an unchangeable, almost natural premise, even as we search for solutions.

I've managed my own mail server since 1993, and my email address has been the same that entire time. Here are some tips for maintaining sanity:

Greylisting still works amazingly well. With a long, long whitelist and greylisting plus DNSBL, I don't even bother running a spam filter, since the little bit of spam and the emails from new senders end up in their own directory, having come from non-whitelisted senders.
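The greylisting policy itself is tiny. This is a hypothetical in-memory sketch of the idea (real implementations like postgrey persist state and key on network blocks, not exact IPs): temporarily defer the first delivery attempt from an unknown (IP, sender, recipient) triplet, and accept once the sender retries after the delay, which real MTAs do and most spamware doesn't.

```python
import time

GREYLIST_DELAY = 300   # seconds a new sender must wait before a retry counts
seen = {}              # (ip, from, to) triplet -> time first seen
whitelist = set()      # IPs that have proven they retry like a real MTA

def check(sender_ip, mail_from, rcpt_to, now=None):
    """Return 'accept' or 'defer' for an incoming delivery attempt."""
    now = time.time() if now is None else now
    if sender_ip in whitelist:
        return "accept"
    triplet = (sender_ip, mail_from, rcpt_to)
    first_seen = seen.setdefault(triplet, now)
    if now - first_seen >= GREYLIST_DELAY:
        # The sender queued and retried after the delay: behaves like a real MTA.
        whitelist.add(sender_ip)
        return "accept"
    # Answered at the SMTP level with a 450 "try again later" in practice.
    return "defer"
```

The "defer" branch maps to an SMTP 4xx temporary failure, so legitimate mail is delayed once per new correspondent but never lost.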

Comcast finally started blocking residential mail server ports inbound a few years ago, so I had to migrate to a smarthost environment using a VPS as email server for $15/yr.[1]

Last year for a few months, Gmail was dropping everything I sent into the spam folder, even after recipients were marking it not spam. I eventually discovered the "Authentication-Results:" header that Gmail adds to every inbound message. It is under the "Show Original" dropdown menu. That showed that I "hadn't changed anything"(!) on my mail server, but suddenly Gmail was connecting to my mail server over an IPv6 interface, and I had never bothered to put the IPv6 block into the SPF record. Gmail was nice enough to explain exactly what it didn't like about those emails.
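For anyone hitting the same IPv6 gap: the fix is adding an `ip6:` mechanism alongside the `ip4:` one in the SPF TXT record. A hypothetical zone-file entry (the addresses are documentation ranges, substitute your own blocks):

```
; SPF covering both address families; "-all" hard-fails everything else.
example.com.  IN  TXT  "v=spf1 ip4:203.0.113.25 ip6:2001:db8:1234::/48 -all"
```

Gmail will prefer IPv6 whenever your server publishes an AAAA record, so an SPF record that only lists IPv4 silently starts failing the day IPv6 connectivity appears.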

I've run into similar issues with a similar setup. It's frustrating. You can convince gmail user A to whitelist your messages, and so they'll get through to user A, but gmail user B probably still won't see messages from you unless you tell him to dig them out of the spam trap. And your messages to A might still be classified as spam if they have attachments or hyperlinks in them. (Even if you've been corresponding with A for several years!)

Once upon a time, to send email, you needed to use SMTP. Now, you must use SMTP from an IP block that isn't categorized as residential and has never before had any association with outgoing spam, and you also must implement several ad-hoc identification protocols like SPF and reverse DNS. You should also use a domain name that you've owned for some time and which is not expiring soon. Every system to which you want to send mail will give different weights to all these signals. If they don't like you, their behavior is to report successful delivery and then silently hide messages from their intended recipients.

The problem is not so much the attitude of the big guys. It is that SMTP is fundamentally broken. We need a better mail protocol that ensures:

1. Traffic is always encrypted and content is always signed.

2. A guarantee that the sender is who they claim to be.

3. Decoupling of the email address from the domain; a lot of users are prisoners of their current provider just because the address they gave everyone ends with the provider's domain name, very much like it is very hard to switch bank accounts.

4. The ability to provide disposable addresses which can be deleted when spammed.

Many of the issues we're running into with online systems, particularly those relating to quality and reputation (spam, collaborative filtering or content rating as on HN or Reddit, etc.) have strong analogs in real-world social spaces. And there were real-world mechanisms for dealing with these.

For a new businessman or professional setting out in the world, pre-Internet, "establishing your name" was a requirement. These days the concept's often referred to as "creating a personal brand", but the reality was pretty straightforward: how does an unknown quantity become a known quantity?

A common method was the professional or social introduction. This is still practiced, where a third-party _matchmaker_ will introduce two parties. The matchmaker usually knows both, and can vouch for the newcomer and smooth the path for introductions with the established party. Essentially, the matchmaker stakes _their_ reputation by speaking for another.

Lawrence Lessig describes a similar concept in his book Code and Other Laws of Cyberspace, in a passage describing a physical messaging system, the Yale Wall. This was a board onto which messages could be posted, with the proviso that they be signed. Unsigned messages would be posted from time to time, effectively presenting an anonymous viewpoint. Removal wasn't instantaneous or automatic; rather, at some point prior to garbage collection, another individual could review the piece and, if they felt it warranted posting, sign for it. They weren't registering themselves as author, but as vouching for the merits of the viewpoint -- not necessarily agreement.

A messaging system in which a new peer might be able to indicate that, hey, peers X, Y, and Z, with established reputations can vouch for me, and for which those peers could confirm their endorsement, might address the "how to build reputation" issues of new mailservers.

I've also found that even large mail systems will frequently have some procedure for getting at least provisionally vetted, effectively a workfactor cost to coming online. Though the process absolutely could be improved.

I feel like this is a solvable problem without making any changes to email whatsoever. The problem is that email recipient hosts are suspicious of the sender (as opposed to the message itself being suspicious). So the solution is to have a standardized way for senders to acquire an instantaneous reputation by tying their real-world identity to it (which lets them be held accountable if they do spam), and perhaps by throwing some money at it too.

If there were some company that did identity checks, similar to how EV certificates are given out, that would tie your real-world identity to it (this could in fact be done by literally requiring an EV certificate for the hostname of the sender). This company could also take a decent-sized deposit (so you're staking money on not being a spammer) and hold it in trust for a set amount of time. Once the time has passed, and you've sent enough emails for recipients to draw meaningful conclusions, if you have in fact not spammed, then you get your deposit back (minus a service fee).

Then all the big email hosts would pay this company to query it about senders the host doesn't already trust, and similarly they'd report any spam from these senders back to the service.

Heck, this doesn't even have to be a new company. A big host like Google could just start offering this service anyway, as a way to simplify their own handling of unknown senders, although I'd feel more comfortable if this was done by someone else.

Here is an almost related story: I am in Italy and was waiting for an email from the local Apple store to tell me my MacBook's repair was complete. After 6 days waiting, I called the store. They said the computer had been ready for a few days.

I checked the spam folder. Gmail had plopped the apple email into the spam folder. Gmail's reason was that the email was in a foreign language, Italian. Didn't matter that the previous 2 weeks all my google.com searches were done from Italy.

The email architecture was started back when it was a smaller network of researchers at universities, governments, etc. Everybody basically trusted each other.

Once the "internet" became available to the general public and commercial interests, it became vulnerable to the "bad actors" problem (e.g. spam abuse). That's why we have the inevitable situation today of a few entities (e.g. Gmail, Hotmail) being "trusted", and random residential SMTP servers run by homeowners being "untrusted".

I haven't seen a realistic de-centralized trust proposal for email. Even if a proposal is theoretically sound, what incentive is there for other big players to adopt it?

I sometimes see similar tales of woe, and I can only say that this does not match my experience. I've done this many times: you set up the mail server, configure DNS correctly (including reverse lookup), and that's it. I've never had problems with being blacklisted or with mail getting classified as spam.

I suspect that the people having trouble are sending a lot of mail, like newsletters, etc. But I can't prove this hypothesis.

I have a bit of experience with running email servers. I can't really say that I had similar encounters.

In my experience if you get blocked by big mail providers it's almost always due to some reason. What's tricky is that it may be hard to tell what exactly is wrong, because they won't necessarily tell you (or not in an easy way).

Some advice on what I'd do to try to find out what's going on:

1. Take a sent example mail that is like the blocked one (but obviously one that reached its target destination), with all headers, and run it through SpamAssassin. Don't just check whether it crossed the spam threshold (if it did, you did something terribly wrong); look at each individual rule that SpamAssassin hit. They might give you a clue. A proper mail usually shouldn't hit any, or only very few, positive SpamAssassin rules.

2. Check your IP at a service like valli where you can query multiple DNS black lists. If it is on any blacklist try to find out how you can be delisted. There are some rogue blacklists that make it impossible to be delisted at all, you may ignore them (google for them, their behavior is well documented), but these shouldn't be more than 1 or 2. As already said by other commenters, don't forget IPv6.

3. Read whatever error message you can get your hands on. If you're blocked on the SMTP level read the error message. If your message got sorted into a spam folder look at all the headers. If the provider blocking you has some online docs about their spam filtering read that. If they have some sort of service for mail ISPs where you can sign up to get warnings sign up there.

Of course also the obvious stuff. If you do anything that is mass mailing you are in extra danger. Make sure that you allow people to unsubscribe easily, don't ignore manual attempts by them to unsubscribe ("I want to get off this mailing list") and delete invalid mail addresses.

Charge the sender one cent per email through a combination of legislation and technology. The problem will instantly go away. The transfer of mailing lists to online forums would be a very small price to pay.

The trouble with it is that it requires computation for each sent message, which is bad for senders with low resource devices or legitimate mailing lists. I want to propose a variant.

Instead of creating an expensive hash against the message, create an even more expensive hash against the (sender TLS certificate, receiver domain name) pair. This implies using TLS but it works just as well even if the certificate is self-signed. Then each mail server only has to generate a hash once per recipient domain, ever. Every message that mail server sends to that domain is tagged with that hash. A legitimate mail server will have already computed hashes for all the domains its users regularly correspond with and rarely if ever need to do any more expensive computations.
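The per-(certificate, domain) cost can be sketched as an ordinary hashcash-style proof of work. This is an illustrative Python sketch, not a proposed wire format; the function names, the use of SHA-256, and the difficulty-as-leading-zero-bits convention are all assumptions:

```python
import hashlib
from itertools import count

def make_tag(cert_fingerprint: bytes, rcpt_domain: str, bits: int) -> int:
    """Sender side: find a nonce whose SHA-256 over (cert, domain, nonce)
    has `bits` leading zero bits. Expected cost: ~2**bits hashes, paid once
    per recipient domain."""
    target = 1 << (256 - bits)
    for nonce in count():
        h = hashlib.sha256(
            cert_fingerprint + rcpt_domain.encode() + nonce.to_bytes(8, "big")
        )
        if int.from_bytes(h.digest(), "big") < target:
            return nonce

def check_tag(cert_fingerprint: bytes, rcpt_domain: str, bits: int, nonce: int) -> bool:
    """Receiver side: verification is a single hash, regardless of `bits`."""
    h = hashlib.sha256(
        cert_fingerprint + rcpt_domain.encode() + nonce.to_bytes(8, "big")
    )
    return int.from_bytes(h.digest(), "big") < (1 << (256 - bits))

# The sender pays ~2**16 hashes once; every later message reuses the tag.
nonce = make_tag(b"cert-fp", "example.com", 16)
assert check_tag(b"cert-fp", "example.com", 16, nonce)
```

The asymmetry is the point: the receiving domain can dial `bits` up arbitrarily, legitimate senders amortize the cost over every future message to that domain, and a spammer whose tag gets burned has to pay the full cost again.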

If spammers do the same thing then the receiving server can mark all messages sent with that hash as spam. So there is a highly disproportionate cost to spammers (even if they have more computing power) because to avoid that they have to continuously generate expensive new hashes. Which can be made arbitrarily expensive because legitimate servers only need to do it once. And a new hash is much less valuable to a spammer than a domain name or IP address is today because each hash can only be used against one recipient domain. The required amount of computation can be set by the receiving server so domains with more users can require more computation.

If a legitimate mail server is compromised by a spammer then it will have to generate all new hashes (because the spammer will presumably immediately ruin the reputation of the compromised ones), but the reputation of the legitimate sender's email domains is unharmed because the reputation is tied to the hash computation, not the sender's domain name(s).

And adding support to mail servers would require no configuration whatsoever. You install server version N+1 and it starts tagging outgoing messages with hashes that receiving servers can verify.

This is largely true and hugely disappointing. Our startup (https://portal.cloud) is making it possible for non-hackers to self-host their own email servers. It works very well for the most part, but we have had to explain to a number of them why their email sometimes gets bounced by the big proprietary cloud services.

All of our users have their own domain names, IP addresses, SPF records, and correctly configured (and up to date) Postfix SMTP servers. There is absolutely no excuse for not always delivering their email, and yet this is what the big companies do.

I see lots of threads shitting on the guy for doing it wrong vis-a-vis his configuration whilst ignoring his actual problem: An IP address without a reputation score. I've had the same problem and reached the same conclusion. The address can't just be clean, as in not on a blacklist, but has to essentially already be whitelisted via a "known good" reputation score or mail automatically gets blackholed. How do I get my VPS provider of choice to give me an IP address with a good reputation score?

> This isn't how the internet is supposed to work. As we continue to consolidate on a few big mail services, it's only going to become more difficult to start new servers.

And this is exactly the reason I setup my own mail server. I'm only 1 man, but I hope more people will do so with time, thus requiring the "big ones" to work on better algorithms for filtering and not base it on reputation.

Email has been my last holdout from switching away from Google Apps completely. I don't want to have to deal with any of this, especially as I do business communication with clients through it. Email wasn't supposed to be like this, and there has to be a better way to enable non-giants to successfully deliver email.

It may be infeasible to run a new SMTP-based mail service from "residential IP's" that can interact with the existing email empire, dominated by store and forward middlemen who expect to make money from the "free" email service they provide.

That empire amounts to a junk email delivery service and, later, a way to gather information about email users. The latter purpose is probably why you want to run a new email service?

However it is certainly feasible to run a new SMTP-based email service from residential IP's that does NOT interact with the existing email empire. One with no middlemen. The sender's SMTP server talks directly to the recipient's SMTP server. You decide what port you want to use. There are thousands to choose from.

There are multiple ways to do this, but I rarely if ever see this option discussed. I suspect it's because, as with DNS, most users are not comfortable configuring mail servers, nor with NAT traversal.

If indeed the motivation for running your own mail service is that you do not want your mail stored on third-party servers (whether in the sender's mail folders or the recipient's), then the ability to interact with the existing store-and-forward email providers seems a counterproductive requirement.

This is indeed worrying. I've been running my own E-mail servers for the last 20 years or so and even though my problems weren't as severe, I did run into a few cases where the "big ones" were at least delaying my E-mail.

But this kind of problem invariably arises when we go from a fragmented Internet with lots of small hosts/providers to an Internet of several walled gardens, run by the big guys.

The real answer is to offer a reasonable self-hosting competitor to GMail.

I have self-hosted my mailserver for a long time and started to get problems a couple of years ago. The main issue is corporate networks running McAfee's "MxLogic" product that claim to bounce my mail and tell me so, but then go on to deliver it almost all the time.

It is simple: the more fear, uncertainty, and doubt the "big email providers" can cast on not using one of the big email providers, the more they chase everyone into their business (where they can read it). You can go on and on about how it is "technically hard to fix email", but that is a second-order effect of not even trying.

OP, I don't know if you are reading the comments here but in case you do: Don't get discouraged so quickly.

The reason this is happening is as the blurb from the MS postmaster help page says: your IP doesn't have a reputation yet.

The reason these rules are in place isn't about an email monopoly; it's about spam. If anybody could set up an SMTP server and start firing off large amounts of mail, spam would be even more endemic than it is today.

You can configure your server perfectly, but that doesn't mean much, since it's your IP that's the problem.

If you have legit objectives, it's a pain in the ass for sure. But you are not the only one having this problem, and there's a solution for it.

All the big email service providers (ESPs) like Neolane, ExactTarget, Mailchimp, Campaign Monitor etc. share this problem when they onboard a new client who requires their own IP.

Deliverability is a surprisingly deep, technical topic, and all major ESPs have entire teams of specialists working on it.

If you want to make such a service as Fastmail, you need to get really into deliverability. It's not a walk in the park, but it's not impossible either.

I'm not a specialist in this particular area myself, so I can't give you that much specific advice. I've just worked elbow to elbow with a lot of these guys, so I know what kind of challenges they work with.

One thing I know for sure is really important is the "warming up" of IPs. Basically, the IP you are sending from needs to accumulate some reputation over a period of time, typically a month or two.

If you send out reasonably small amounts of mail to email addresses that exist, and the recipients do not explicitly report you for junk mail, your IP gets whitelisted and you will get a much higher delivery rate.

There's no quick fix unfortunately, and email reputation is hard to gain and fast to lose.

But it certainly can be done. You sound very competent on the server side of things, so to get your fastmail-like service up, I think it's just a matter of a bit more persistence and studying deliverability as a technical subject.

My experience has been that sending from my server works fine if it's in DNS and RDNS. Sending in that server's name works fine if the SPF records are present. But I don't bulk send; everything I send is something I typed, other than some messages the server sends to me periodically.

You can talk about HTTPS and forcing encryption all over the web however much you want, but as long as you're stuck with SMTP and the few large corporations as the only viable mail solutions, you can forget individual citizen data privacy... at least that's my 2 cents.

I guess big providers have to deal with "newbies" all the time who don't know how to configure their servers and run open relays, and don't know how to add extra headers, etc.

That said, silently dropping messages without a notification is probably illegal! And pretty serious! So if you know what you are doing (and you are not a spammer) you should send a cease and desist!

Just make sure you use double opt-in and that providing an e-mail address in sign-ups/etc is optional!

Some will however put your mail in a spam folder, and there's not much you can do about it; just hope your readers complain to their provider.

So basically: set up your SMTP relay correctly! Make sure you are not on any blacklists. Add extra headers like DKIM / Precedence, add SPF (don't forget IPv6) and PTR. Add your relays to whitelists. Publicly publish a privacy and e-mail policy (important! with opt-in and optional clauses); link to them and fill out some "email provider" forms at Gmail/Microsoft. Send out a bunch of test mails. This will take a whole day, but if you do this, you will have no problems unless your IP or domain is perma-banned.
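For reference, a minimal sketch of what the DNS side of that checklist could look like, using the illustrative domain example.com and made-up addresses; the DKIM selector and public key come from whatever signing software you run:

```
; Hypothetical zone entries for example.com (all values illustrative)
example.com.                  IN TXT "v=spf1 ip4:203.0.113.10 ip6:2001:db8::10 -all"
_dmarc.example.com.           IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"
```

The PTR record lives in the reverse zone controlled by whoever owns the IP, so you usually request it from your hosting provider rather than adding it to your own zone.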

The anti-competitive consequences of this are really interesting. Does anyone know if there are historical statistics on email provider market shares? It would be interesting to see how things have changed over time.

Nobody else has mentioned it, so I will: this is a great situation for the NSA. Get all 400M accounts in one fell swoop by using a gag-ordered warrant on Google (or Microsoft). Easy peasy. Much easier than contacting a sysadmin about their one account that isn't being fed into the behemoth.

I just realized that I have been running my private SMTPd (+ IMAPS) for nearly 8 years. I never had issues. My score on test-mail is 9/10 because of the lack of DKIM. I will implement DKIM and see if I can get 10/10.

I've looked into this a bunch as we've been building Nylas. Our backend is essentially a cloud mail user agent (MUA), but can have similar issues to a MTA or mailbox provider.

It turns out that creating a successful product in the email space requires you to build relationships and partnerships with the existing vendors/providers. It takes a lot of work, and is part of what you pay for when using Mailgun/Mandrill/etc.

These "greylists" and systems that drop mail from unknown IPs have been pretty carefully designed and tuned to combat the insane amount of spam out there. They work remarkably well, and for most of today's email users, spam is no longer an issue. (The current state of email abuse innovation is promotions/marketing/etc. which is a more subtle challenge.)

At the end of this article, the author writes, "This isn't how the internet is supposed to work." Ironically this system of obscure reputation-based email is a direct result of how the email system was actually designed to work, with a total lack of permissions or feedback loop. Many SMTP servers used to not even require a password.

Stuff like DKIM/SPF and DMARC is a step in the right direction. But the RFCs upon which our email system is based were written decades ago, and in many cases have fundamental flaws, like SMTP leaking metadata no matter what. I could go on and on about issues with email, and why I care about getting them fixed, but let's just say it was designed in a different era of Internet with different constraints and opportunities.

So how do you build a new email service and not get blocked? Well, you spend a few weeks or months emailing, calling, Skyping, and meeting with folks in the current space. You work your way up through marketing support and random protocol discussion lists until you are talking with the folks who can influence which IPs are blocked/unblocked. Then you convince them you are (1) building a legit venture, (2) are worthy of their trust, and (3) don't directly compete with them. Then you'll get a small number of clean IPs and you must not screw up!

There are a few hacks, like sometimes a single partner/vendor will sell you a block of clean IPs and help manage the spam reputation. But usually it's just a lot of sweat and annoying phone calls. It takes way, way longer than setting up SPF/DKIM. The challenge is more relationships than technical.

Once you have a new email sending provider, the burden shifts to you for doing abuse/spam prevention. And you find yourself implementing many of the strategies and systems you cursed when getting started. But that's the circle of rfc2822 life I guess.

Oh, and never use EC2 IPs for sending mail. Most of them have been burned by spammers.

This guy's story is a sad one, but I'm sure we're missing some important information here. New mail servers get set up all the time; it's not impossible for such servers to be accepted by the big players.

If none of these major services ever learned that his server was OK, it's likely that users weren't unmarking the mail he sent as spam. And that leads to the question of why not.

I rent some servers from the Rackspace cloud for personal use. I have my own sites on these machines, and my own email servers.

Meanwhile, I have a day job, and lately it has been consuming 12 hours a day. We missed a deadline and we have all been working like crazy to catch up. I have fallen behind reading my personal email.

Roughly a month ago, my friends who use Gmail stopped getting my email. Or rather, they did not know I was sending them email, because all of my email to them was going to spam.

After a few weeks, I finally had a free weekend to catch up on my personal life, so I did some investigating. Turns out Rackspace had switched over to IPv6 in a way that impacted my email. I did not have a Sender Policy Framework record for IPv6, only IPv4.

It's likely that Rackspace sent me an email about this, though I never read it because I was busy.

This was easy to fix: I added an SPF record for IPv6.

However, these kinds of issues do make it harder to maintain a personal email server. It's tough for us to keep up with the changes.

Just goes to show reputation isn't the end-all, be-all solution to everything. It's so often championed by the Linux kernel team as a reason why the distributed model works, and it's evident that in that case it certainly does. Here, though, there's much outcry about how you can't set up a box and instantly be as respected as established boxes that have earned rep over time. This post just comes off way too butt-hurt for my taste.

It's almost as if the metric system was designed to be useful to ordinary people rather than to fit a nice-looking model of units.

Most people use a ruler or their hands to measure things, not dial calipers. And most people don't want to have to say "230 millimeters" as opposed to "23 centimeters" (or more realistically, 20 centimeters).

I've always wondered if there was a platform where we could have some standardized set of arguments regarding a proposition. Then, one would not need to tolerate repetitive pointless discussion: something that usually arises regarding propositions that require little expertise to discuss.

For instance, nothing would make me happier than to be able to reply to the nth discussion regarding the idea that "the GPL is freer than the BSD licence" with a universal fully-qualified link to every argument for and against the idea. Or "For software engineers, open-plan offices lead to greater productivity than individual offices".

While it may appear that this would lead to some sort of _Futurological Congress_-esque situation where we respond to people in paragraph numbers, it has many advantages:

* No longer will people be misled by a correct statement poorly argued for.

* No longer will message boards be polluted by the nth iteration of the same argument.

* Undiscovered lines of argument will be universally available.

Of course there's the disadvantage that you'll get less participation, and there's value in just having some number of comments even if they're repetitive: at the least, the desire to respond to that may bring people who later on make novel arguments.

This seems like a fine UI to do that. Deep link to the relevant sub-graph, and let the collective intelligence of thousands do your arguing for you. I like it.

I built something like this back in the day. It looks like it's lacking a weighting mechanism. This (https://en.wikipedia.org/wiki/Subjective_logic) is a good formalized framework to use for that, if the authors are reading along.

I have a question: when using because/but/however, do they apply to the hypothesis or to the premise? It would seem logical that they apply to the premise; however, the count on the homepage is slightly misleading. I thought some people were "becausing" a lot on a subject, when in fact it counted the becauses on the "buts" too.

I love this!!! I wish it gets pushed forward! I wish a lot of people would use this! I think it is a great platform!

I worked on something very similar as one of my very first projects which got me into programming. I wanted there to be a debate website where anything could be debated using arguments. I've found that the debates I would see on TV or in everyday discussions would not be good enough, because:

- There was space for people to diverge off of the discussion

- When the discussion would fork, the participants might forget some previous arguments that were made

- It would be difficult to come back to a previous point.

- People would have a bias towards the arguments made by the most prestigious of the sides discussing a certain matter.

- It was possible to make some claims without backing them up with proofs/sources.

- Emotions could become a factor. The discussion can heat up.

I thus wrote a small website where one could post an idea as a node, and others could reply in favor of the idea, against it, or from a neutral position. The users could also vote for some nodes. The website would then become a collection of trees. As I see it, it could be used to discuss any matter! However, I've never really pushed the idea forward.

I've always thought about picking the project back up, as I was passionate about the idea, but I've never really gotten around to doing so (I would love to discuss how to get projects pushed forward). Through the years, I kept thinking about this website, and I've found some problems that could arise:

- There would have to be a good user base. My perception was that people would have less incentive to discuss where no one would listen.

- How do you simplify ideas as much as possible? Some texts can be summarized or shortened (and some connections like relationships to other nodes could be added) and still have the same idea. I'm guessing this would be done using moderation. I think this is somewhat relevant because if you're browsing a tree of ideas, you want to do so seamlessly such that you do not lose interest in providing your input.

- For some, it is tiring to undergo a proper debate where the claims made need to be backed up. A lot of people like to discuss freely, in a comfortable setting. The usual reply system works for that.

- I have found that many people like to stick with their beliefs more than with research. (This point applies to debates which need evidence. Many philosophical debates would be fine without the need for evidence.)

- If a node would get too big, it would contain more than one idea. There has to be a system to split nodes apart.

- How do you deal with merging nodes?

- How do you manage spam and moderate node creation? (I did not have a good understanding of how to achieve these)

- How do you deal with nodes that have been edited? I've found a way to deal with this, but it's not as pretty as I would have liked it.

- Watching websites like Reddit and Facebook, I realized the reply system was enough as it allowed people as much room as they needed to make their point, using text. The only issue is organizing the ideas properly in this case. Hacker News had the reply system and people were using it to lead great discussions.

I've also thought about extending the relationships beyond just logical ones. The reason I was looking to do this was that I wanted to find the simplest and most elegant solution that could apply to many use cases (not ALL use cases, though). It fit (and still somewhat fits) how I think about writing good software (please, someone correct me if I am wrong). The relationships would be akin to: grows from, follows, is of type, contains, etc.

I thought that this would essentially grow into a database of everything, a little like Wikipedia, although Wikipedia does not allow much discussion (as far as I know).

I honestly think the theory of the NIC messing up one of the offloaded tasks is a likely one, if I remember other strange errors that have happened in this context. Too bad he doesn't have access to the machine to do more thorough tests.

Shouldn't this be "Windows corrupting UDP datagrams in some (possibly lower-chance) cases"? The list of circumstances for the bug to occur (from the article):

1. UDP protocol. (Duh!)

2. Multicast sends. (Does not happen with unicast UDP.)

3. A process on the same machine must be joined to the same multicast group being sent to.

4. Windows' IP MTU size set to smaller than the default 1500 (I tested 1300).

5. Sending datagrams large enough to require fragmentation by the reduced MTU, but still small enough not to require fragmentation with a 1500-byte MTU.
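As a sanity check on conditions 4 and 5, here is a quick sketch (mine, not from the article) of the payload-size window that fragments at an MTU of 1300 but not at 1500, assuming plain IPv4 with no options (20-byte IP header, 8-byte UDP header):

```python
# Which UDP payload sizes require IP fragmentation at a given MTU?
IP_HEADER = 20   # IPv4 header, no options assumed
UDP_HEADER = 8   # UDP header

def fragments(payload_len: int, mtu: int) -> bool:
    """True if a UDP datagram with this payload must be fragmented."""
    return IP_HEADER + UDP_HEADER + payload_len > mtu

# The bug window: payloads that fragment at MTU 1300 but fit in a
# single packet at the default MTU of 1500.
window = [n for n in range(1200, 1500)
          if fragments(n, 1300) and not fragments(n, 1500)]
print(window[0], window[-1])  # 1273 1472
```

So any multicast payload of roughly 1273-1472 bytes would hit the reduced-MTU fragmentation path described above.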

When seen as a [1 3 3 1] (the 4th row of Pascal's Triangle) kernel, it is more easily revealed as a discrete Gaussian kernel, which is for example used in scale-space representations in image processing.
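To illustrate (a sketch of my own, not from the parent comment): normalizing row n of Pascal's triangle gives the binomial kernel, which approaches a Gaussian as n grows, and [1 3 3 1]/8 is exactly the n = 3 case:

```python
from math import comb

def binomial_kernel(n: int) -> list[float]:
    """Row n of Pascal's triangle, normalized to sum to 1: a discrete
    approximation of a Gaussian kernel."""
    total = 2 ** n
    return [comb(n, k) / total for k in range(n + 1)]

print(binomial_kernel(3))  # [0.125, 0.375, 0.375, 0.125], i.e. [1 3 3 1]/8

def downscale2x(signal: list[float], kernel: list[float]) -> list[float]:
    """Convolve a 1-D signal with the kernel, keeping every other sample:
    a cheap Gaussian-ish 2x downscale (edges clamped)."""
    half = len(kernel) // 2
    out = []
    for i in range(0, len(signal), 2):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out
```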

At first, I optimized each channel, then upsampled the chroma channels using replication. This works terribly as you can see in the article.

So then I changed to linear interpolation. Briefly:

+---+---+
a x b y c
+---+---+

We know the values at a and c, but not the values in between. The distance from x to a is half a pixel and the distance from x to c is one and a half pixels. Linear interpolation then gives x = a + (c - a) * (1/2 - 0) / (2 - 0) = 3/4 a + 1/4 c.
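The weight arithmetic can be restated in a tiny hypothetical helper: linear interpolation gives each known sample a weight inversely proportional to its distance.

```python
def lerp_weights(d_near: float, d_far: float) -> tuple[float, float]:
    """Weights for the near and far samples under linear interpolation:
    each weight is the *other* distance over the total."""
    total = d_near + d_far
    return d_far / total, d_near / total

# a is 0.5 pixels from x, c is 1.5 pixels away:
w_a, w_c = lerp_weights(0.5, 1.5)
print(w_a, w_c)  # 0.75 0.25, i.e. x = 3/4 a + 1/4 c
```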

This worked decently, but it still showed fringes around sharper edges. I considered using more complicated upsampling methods like Lanczos or Mitchell, but instead went with optimizing a full size image with constraints on the downsampled image. By avoiding upsampling I got my optimized high resolution image for each channel.

But there were still fringes! As it turns out, just because each channel was optimized separately doesn't mean the image as a whole is optimized. So I switched to optimizing the three YCbCr channels together, looking not at the per-channel differences abs(x_{i+1} - x_i) but at the joint differences sqrt((Y_{i+1} - Y_i)^2 + (Cb_{i+1} - Cb_i)^2 + (Cr_{i+1} - Cr_i)^2). This actually eliminated the fringes.
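The difference between the two objectives can be sketched like this (toy pixel values of my own, not the article's):

```python
from math import sqrt

def per_channel_diffs(p, q):
    """Separate |difference| per channel: what per-channel optimization sees."""
    return [abs(a - b) for a, b in zip(p, q)]

def joint_diff(p, q):
    """Euclidean distance across Y, Cb, Cr: what the joint optimization sees."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

p, q = (120, 44, 52), (118, 47, 48)  # two neighboring YCbCr pixels (made up)
print(per_channel_diffs(p, q))  # [2, 3, 4]
print(joint_diff(p, q))         # sqrt(29), about 5.39
```

Minimizing the three per-channel terms independently is not the same as minimizing the joint term, which is why separately optimized channels could still disagree and produce fringes.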

I'll need to try this as an OpenGL (ES) shader in an app I work on regularly. I've been meaning to replace the default bilinear filtering with something a bit nicer-looking for ages, and this seems to fit the bill nicely, as it avoids the usual ringing and moiré issues. (I don't need to worry about perspective correction, as the app is 2D only.)

This brings back high school memories of performing Gaussian blurring of 2-bit grayscale hand drawn sprites drawn on graphing paper. Yes, I got bored in class! I did use a calculator for assistance. :) I never knew there was such a remarkably simple kernel that provides good scaling!

If you prefer blurry images such as the author seems to, you can use parameters to your cubic filter to add more blur than the (probably mitchell or catrom) parameters that gimp is using. I'd bet most people would find the result superior to this filter.

This is just one piece of the puzzle in a long-term project of mine to build a typed and distributed intermediate language that we can use to share code across language boundaries. I want to give people the freedom to program in the language of their choice while still interoperating freely with other languages.

Blender may be one of the most successful and best run Artistic/Media projects in Open Source.

It manages to pack a lot of power into a very small package. And the Open Movie / Open Game projects help to drive the direction of the application with concrete goals. Personally, I think it's one of the best media production suites out there for the hobbyist. The power plus the price (free) can't be beat for the non-professional.

I used to work in Hollywood visual effects. Over the span of many years I have tried almost all the software out there, and I find Blender very powerful, but my personal favorite is a package named Houdini (not free, but you do get an Apprentice version). The core methodology Houdini is built on is creating procedural systems for everything, which I think is of more relevance to this community. Do check it out; I am sure members of this community can put it to uses not even its creators would have imagined.

With all the complaints about Blender's UI, I've never understood why more people haven't developed plugins that offer a UI alternative. I wanted a simpler UI for Maya, so I wrote my own: https://github.com/shawnfratis/Scrimshaw-MEL-Mini-GUI-for-Ma... It's not perfect, but it works for my uses. I'd think a program as open as Blender would lend itself to something like that.

The takeaway I'm getting from this is, as with other websites, the attempt to fund streaming music indirectly via targeted advertising is hopelessly unable to keep up with ever-more-clever click fraud. At best, we end up with an arms race of more and more powerful "criminal" botnets and more and more heavyweight advertising tech crowding out the original content. I'm becoming very sympathetic to the viewpoint of backing out towards either completely untargeted advertising (which, paradoxically, can be far more effective) or -- and, admittedly, I'm going crazy out on a limb here -- paying for content.

"For example, Spotify says that its average payout for a stream to labels and publishers is between $0.006 and $0.0084 but Information Is Beautiful suggests that the average payment to an artist from the label portion of that is $0.001128 this being what a signed artist receives after the label's share."

This would make it much more expensive to run a botnet through AWS than any potential profits it could generate.

- The opening sentence isn't all that truthful. It's implying that an average user is just going to open Spotify, mute it, and go to sleep. That means they won't be there to skip every 30 seconds. So, we fall back to the 3 minute average. Assuming you sleep for 8 hours that means you're only going to get 160 plays or ~12 cents not 72.
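A quick check of that arithmetic, assuming roughly $0.00075 paid out per play (my assumption about where the article's 72-cent figure comes from):

```python
# Plays per 8-hour night at two track lengths, and the resulting payout
# under an assumed per-play rate of $0.00075.
PAYOUT_PER_PLAY = 0.00075  # assumed figure, not confirmed by Spotify
night_minutes = 8 * 60

plays_30s = night_minutes * 60 // 30  # skipping every 30 seconds
plays_3min = night_minutes // 3       # the ~3-minute average track

print(plays_30s, round(plays_30s * PAYOUT_PER_PLAY, 2))    # 960 0.72
print(plays_3min, round(plays_3min * PAYOUT_PER_PLAY, 2))  # 160 0.12
```

So the 72-cent figure only works if someone is awake to skip every 30 seconds; a muted, sleeping listener gets the ~12-cent case.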

Isn't there potential here for a much more nefarious plan than merely earning revenue from fake listens?

If you could do the same thing across a few services, spreading the number listens out on a viral pattern, based on a bit of investment in highly marketable songs, it sounds like you could create a bedroom-singer rags-to-riches superstar story and potentially make millions upon millions.

As long as Spotify doesn't make a loss on the payout per streamed listen event and the pay-in from advertising, I don't see any problem.

Spotify has a pretty much working monetization model, they could just tell advertising to fuck off. Their free model is like classic radio, where advertisers pay without knowing if there is one listener tuned in or millions (literally).

In the end I feel we need better captcha options: images for most people, with options for the impaired. Basically, stuff that's relatively easy for a person (click the picture of a cat) but harder for a computer to do...

Another option might be a regular challenge-response that makes interaction harder and more costly for a fake listener: having to compute a PBKDF2, scrypt, or other result on a given input at regular intervals... (The service could have a pre-computed pool to randomly serve out, so it wouldn't bear the same costs.)
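A minimal sketch of that idea using Python's standard-library scrypt binding (the cost parameters are illustrative, not tuned):

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Server: hand the client a random salt to work on."""
    return os.urandom(16)

def solve(challenge: bytes, client_id: bytes) -> bytes:
    """Client: burn CPU and memory computing scrypt over the challenge.
    n=2**14, r=8, p=1 are illustrative; raise n to make bots pay more."""
    return hashlib.scrypt(client_id, salt=challenge, n=2 ** 14, r=8, p=1)

def verify(challenge: bytes, client_id: bytes, answer: bytes) -> bool:
    """Server: compare against its own (possibly pre-computed) result."""
    return hmac.compare_digest(solve(challenge, client_id), answer)
```

The pre-computed pool mentioned above would just be a server-side cache of (challenge, answer) pairs, so verification stays cheap while each fake listener still pays the full scrypt cost per interval.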

They could also flag accounts that get more than N hours of play in a day, or a number of days that's much higher than a typical listener's, or that play more artists/songs outside the previous month's top 10k songs, asking them to log in to their account or validate their email address at that point... Anything that makes the process much more complicated to automate but would affect a very low number of real people.

Yes, it's an arms race, but there are a lot of things that could be done that could keep the barbarians out of the gates... Not to mention other suggestions that split per-user royalties to artists, instead of the pool as a whole... That combined with other models could go a long way here.

I feel like the writer of this article has a fundamental misunderstanding of Spotify's business model. The number of plays influences how much money Spotify brings in from advertisements. As far as they are concerned fake and real plays are not much different beyond maintaining credibility with their advertisers.

The 2018 projection of $2.8 billion revenue with 329M MAU for Pinterest is roughly what Twitter will achieve in 2016.

Twitter has a market cap of 20B vs 11B for Pinterest, so there is plenty of upside if Pinterest hits 2018 numbers and goes public. That is assuming Twitter's current valuation is reasonable which is debatable.

> TechCrunch has obtained documents that show Pinterest has been forecasting $169 million in revenue this year and $2.8 billion in annual revenue by 2018.

So in three years, they'll have to grow revenues by 16.5X. The sort of outlandish growth assumptions necessary to substantiate their valuation are exhibit A as to why the a16z valuation is extremely suspect.

Of course, 16.5X growth is doable when you are starting from a low number, but $169MM ain't spit.

And all that being said, $169MM for 2015 is just a forecast, so if they miss on Q4 numbers, the 16.5X assumption can easily become 20-25X.

On the plus side, their target of $9.34 in revenue per active user is relatively modest, and I think doable, as FB does $4.18 per user per quarter in revenue.
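Back-of-the-envelope versions of those numbers (inputs taken from the thread; the rounding is mine):

```python
# Required growth multiple from the 2015 forecast to the 2018 target.
rev_2015 = 169e6
rev_2018 = 2.8e9
multiple = rev_2018 / rev_2015
print(round(multiple, 1))  # roughly 16.6x

# FB's $4.18 per user per quarter, annualized for comparison.
fb_annual_per_user = 4.18 * 4
print(round(fb_annual_per_user, 2))  # 16.72
```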

It's so difficult when you're on a trajectory that you know will kill you yet you cannot change it because it feels like your entire being is so tainted and the sadness is so ingrained in your soul that your mind just tells you to isolate your miserable existence from those around you. I have struggled with this my entire adult life and I'm writing this with tears in my eyes because reading this beautiful piece is like unraveling my own future. Rest in peace, George. We all die alone, but no one deserves to die lonely.

At the end, I couldn't help but try to derive some overall takeaway. The best I could come up with was to cherish your friendships and always try to keep in touch. One concern I would have is whether today's internet-based culture could hinder this.

Contrary to the implications of the beginning of the piece, Mr. Bell had connections, he had the potential for relationships, he even had friends who tried.

I'm an introvert, who recently moved to another city for graduate school. I don't get out very often unless it's to a solitary place (my own spot in a coffee shop, for instance), so I can relate to Bell's desire to be solitary most of the time. However, if anyone reading this thinks his death was a wrong, know that he had ample opportunity to at least have one or two people to "be around his death bed" so to speak. The end of the piece talks about "the Dude", not to mention the possible wife who it seems loved him even towards the end.

If there is any take away, it is to cherish our relationships with others. There are extreme cases in which people really are alone, but for those of us who can at least count on our hands the ones we love, or at least like, (even if it takes us a few minutes to enumerate them), we should continue to develop those friendships and not force people out who could otherwise enrich us. As the "investigators" put it, we don't live forever. We need to use the time we have to enrich others, because only the rest of society will out-survive us as individuals.

> IN 1996, GEORGE BELL hurt his left shoulder and spine lifting a desk on a moving job, and his life took a different shape. He received approval for workers compensation and Social Security disability payments and began collecting a pension from the Teamsters. Though he never worked again, he had all the income he needed.

We need a society where every person is needed and wanted. We need it more than any dumb technical advance that lessens the need for people, like robots delivering pizza or whatever. That will take some heavy thinking, though, and I feel that we haven't even started.

I never understood this stigma about hermits/social isolation in western societies. In the east, it's much more accepted.

Reading this piece, I got the impression that the writer and the persons interviewed were HORRIFIED by what they discovered. It seemed to me as if "having friends" was pretty fucking high on their list of important things in life.

I was very amused by this, almost started laughing in fact. Human relationships are not for everyone, I'd say; in fact, there are many healthy people who view them as a pointless waste of time at best.

Technology will solve this little problem once and for all, in this century. I won't go as far as AGI but when house keeping robots become ubiquitous which is surely less than 20 years down the road, society at large will have to evolve and primitive points of view as described in this nytimes piece will basically disappear.

To me, the saddest thing about George was that he seems to have merely existed while he was alive. From the article, he didn't seem to have any passions or ambitions. He didn't seem to want to go and see things, experience things, do things. He died in the same apartment he was born in. He didn't really live; he just was, completely passive, until he wasn't.

This was a somewhat depressing story, mainly because emphasis was put on the whole after-death scenario, that is, what happened to his belongings, et cetera... But overall, I feel like I understand Bell, as well as the circumstances that led to him dying lonely... RIP Bell. Nothing more, nothing less.

I wonder how well this would work on carpeted stairs? Since the treads are gripping only the 'nose' of the stair is there the possibility of slippage on the pile? What if the padding gave way or the treads pulled the carpet away from the stair?

How well would this chair recover from scenarios like this? Or in any failure for that matter.

Was anyone paying attention when the Dean Kamen wheelchair stopped being made? Was it cost/demand, or did it not work? The Wikipedia article says something about the FDA reclassifying it, but that doesn't seem to explain why it stopped being made.

That video is really melodramatic. It's a guy slowly ascending a staircase with these crazy dynamic shots and music that would be more appropriate for a compilation of airshow maneuvers. That said, it doesn't look technologically like it's all that interesting unless they've managed to make it super-cheap.

= "documentation" that's enforced by the compiler

= specify what is NOT ALLOWED, e.g. "you can't do that"

= "substitution", "compatibility"

= "A set of rules for how to semantically treat a region of memory."

Because the list is of synonyms, many of the concepts overlap. In my mind, the highest-level unification of "types" is the concept of "constraints". Related topics such as polymorphism are practical syntax patterns that arise from data types' constraints.

Personal anecdote: I initially learned an incomplete and misleading idea of "types" from K&R's "The C Programming Language". From that book, I thought of "types" as different data widths (or magnitudes). An "int" held 16 bits and a "long" held 32 bits, etc.

It was through later exposure to more sophisticated programming languages that I saw a much richer expressiveness of "types" is possible, one that the C language's "typedef" cannot accomplish. For instance, if I want a "type" called "month", I could encode a "rule" or "constraint" that the valid values are 0 through 11 (or 1 to 12). A plain "int" is -32768..+32767, and "typedef int month;" provides a synonym but not a new policy enforceable by the compiler.
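For instance, a runtime-checked sketch of the "month" idea in Python (Python can only enforce the rule at construction time; languages with range or refinement types can reject invalid values at compile time):

```python
class Month:
    """An int constrained to 1..12: not just a synonym for int."""

    def __init__(self, value: int):
        if not 1 <= value <= 12:
            raise ValueError(f"month out of range: {value}")
        self.value = value

Month(3)           # fine
try:
    Month(13)      # rejected: the constraint is part of the type
except ValueError as e:
    print(e)       # month out of range: 13
```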

I have realized for quite some time now that there are at least three different uses of types in programming languages (with different goals), namely:

1. Specification of constraints on data - the function arguments and return values. This corresponds to usage of types in logic. The goal here is the correctness of the program (and ease of reasoning about it).

2. A way to define polymorphic functions, i.e. functions that do the same or similar operations with different kinds of data. Here we classify data as of some type so that we can define two functions with the same name, but different types, where the correct one is selected either during compilation or run time based on the parameter type. The goal is conciseness, to avoid explicit conditional statements everywhere or other ways of code duplication.

3. Finally, to specify how the computer is to store data, or generally how to model abstract concepts (such as integers) within the computer. This becomes less relevant with time (as languages increase in abstraction), but it is an important use historically. For example, integers can be modeled with many different types in C. The goal here is to have the type as a reference to a concrete and efficient representation of the abstract concept.
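The three uses can be sketched in one toy snippet (my own illustration, not from the paper): annotations as constraint documentation, singledispatch as type-directed polymorphism, and struct as a choice of machine representation.

```python
from functools import singledispatch
import struct

# 1. Constraint/specification: the annotations state what goes in and out.
def area(width: float, height: float) -> float:
    return width * height

# 2. Polymorphism: one name, the implementation selected by argument type.
@singledispatch
def describe(x):
    return "something"

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: str):
    return "a string"

print(describe(5), describe("hi"))  # an integer a string

# 3. Representation: the same abstract integer stored as 2 or 8 bytes.
print(len(struct.pack("<h", 300)),  # 2
      len(struct.pack("<q", 300)))  # 8
```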

I think this is pretty much what the paper is suggesting, although IMHO not so clearly.

"The concept of "type" has been used without a consistent, precise definition in discussions about programming languages for 60 years. In this essay I explore various concepts lurking behind distinct uses of this word, highlighting two traditions in which the word came into use largely independently: engineering traditions on the one hand, and those of symbolic logic on the other. These traditions are founded on differing attitudes to the nature and purpose of abstraction, but their distinct uses of "type" have never been explicitly unified. One result is that discourse across these traditions often finds itself at cross purposes, such as overapplying one sense of "type" where another is appropriate, and occasionally proceeding to draw wrong conclusions. I illustrate this with examples from well-known and justly well-regarded literature, and argue that ongoing developments in both the theory and practice of programming make now a good time to resolve these problems."

For most of my life, I equated types with sets of values, but after learning Haskell and working with higher-kinded types, type classes, and existential types, I realized I don't know anymore what a type _is_. I know that type systems provide proof that certain classes of operations are impossible (like comparing a number to a string, or dereferencing an invalid reference).

It's pretty mindbending to use existentials or GADTs and pull two values of a record and not know anything about those values except that, for example, they can be compared for equality.

The example is contrived, but it illustrates the point that the types of x and y are not known, _except_ that they can be compared.

That's not the kind of thing you can express with, say, Java or Go interfaces, but it makes perfect sense once you start to break down the mental walls you've built in your head over the years.

I'm thrilled to see a growing body of accessible* PL and type theory literature, because these things are important to helping us develop software at increasingly large scales, and it's clear that very few people -- including myself! -- know enough about this topic.

Nice animation. I'd add the option to show the speed of each gear (a curved arrow around the axis, with a label "1 turn per 15 seconds", and perhaps the number of cogs in each gear).

I'd like to see one additional simple version with only one gear, so it's easy to understand how the balance / anchor / escape work.

Another nice version would be a linearized version, where all the gears are in a row, so they are easy to see. Bonus points for a smooth transformation from the linearized version to the actual version.

The introductory quote comes from Hunter S. Thompson's eulogy of Nixon in the Atlantic Monthly.[0] It is a very enjoyable read (though not balanced in any sense of the word).

> Nixon had the unique ability to make his enemies seem honorable, and we developed a keen sense of fraternity. Some of my best friends have hated Nixon all their lives. My mother hates Nixon, my son hates Nixon, I hate Nixon, and this hatred has brought us together.

> Nixon laughed when I told him this. "Don't worry," he said, "I, too, am a family man, and we feel the same way about you."

> President Clinton, young, smart, dynamic, the first president whom I understood politically (one of us, I thought), demanded that Nixon be judged on nothing "less than his entire life and career."

Notice how this is neither a commendation nor an exoneration, attempted or otherwise. Love him or hate him, you have to admit that Bill Clinton knew exactly what he was saying at any given moment, and what those words meant.

Also, this is golden:

> Remember ... the far-right kooks are just like the nuts on the left ... but they turn out to vote.

Thus we have the Southern Strategy, which leads directly into what the GOP is now.

Serious question, however stupid, ignorant or offensive it might sound: why is the following anti-semitic? Because it was false in Nixon's times? I'm not that familiar with the history of the United States in the 70s.

> You know, it's a funny thing, every one of the bastards that are out for legalizing marijuana are Jewish. What the Christ is the matter with the Jews, Bob? What is the matter with them? I suppose it is because most of them are psychiatrists.

Xena is right to note that this simple state-transformation function, f(event old-state) -> [actions new-state], is the obvious and eternal way to build a server. It's also worth noting that it's basically Haskell's State monad.
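The correspondence can be sketched in a few lines of Haskell; the server and event names below are toy examples, not anything from Urbit. The shape `event -> state -> ([action], state)` is exactly `State s a` (which is just `s -> (a, s)`) specialized to return a list of actions:

```haskell
-- The server step: given an event and the current state,
-- produce the actions to emit and the next state.
type Step event action st = event -> st -> ([action], st)

-- A toy server: counts events and echoes them back.
step :: Step String String Int
step ev n = (["echo: " ++ ev], n + 1)

-- Fold a stream of events through the server, collecting all actions.
run :: Step e a s -> s -> [e] -> ([a], s)
run f s0 = foldl (\(as, s) e -> let (as', s') = f e s in (as ++ as', s')) ([], s0)

main :: IO ()
main = print (run step 0 ["hi", "bye"])
-- (["echo: hi","echo: bye"],2)
```

`run` is the whole event loop: the only thing a real system adds around this pure core is I/O, persistence, and delivery guarantees.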

Whatever you call it, it's a design pattern, not a system service. Urbit on the outside, as an "operating function," is defined by the "universal design." But since it's a pattern rather than a service, it will tend to reappear at each layer of a layered system. Urbit on the outside uses this pattern and so do Urbit applications, but these are very different layers of Urbit.

At the application layer, a command to an IRC daemon is a great example. There still seems to be a good deal of boilerplate in the Lua code presented here. The attraction of the "universal design" is the goal of eliminating all, or almost all, boilerplate code in a network server daemon.

How would one go about that? First, be a single-level store, so the daemon is automatically a database. Second, make application messages transactions with end-to-end acknowledgment, no return value, and exactly-once delivery. The message is automatically deserialized and validated, and passed to the application as a simple typed argument. If there's a problem with the operation, just crash; the result is delivered as a transaction failure with an annotated stack trace. Also, messages should be sent over an encrypted P2P network and authenticated by scarce, memorable identities...

>This design will also scale to running across multiple servers, and in general to any kind of computer, business or industry problem.

If I understand what you're getting at, you're saying that locking is the solution to all concurrency problems? This section is the most interesting to me, as I've been researching concurrency at a high level lately. I'm a little confused by your conclusion. It seems naïve to claim that multiple hosts can agree on what action to take "just" by using locks. What if a peer is holding a lock and becomes unreachable? What if the peer isn't dead and thinks it still has the lock? What if the core that is issuing the locks becomes unreachable?

I cofounded a daily fantasy sports site in 2004, so it's funny to see FanDuel's 2009 launch date described as "early to market". But the environment for fantasy sports in 2004-2005 was incredibly different, with the major leagues often showing overt hostility (including the NFL Players Association suing a company for using the players' names without permission). It's yet another example of how big a factor timing can be in the startup game.

Interesting that the revenue share of winnings is projected to remain 10%. This suggests players don't value playing on a site that takes a smaller percentage of the game - or that the sites collude. Do any of the companies that occupy the 5% of the market not held by FanDuel/DraftKings try to differentiate based on cost?

It would be interesting to see a "When structural-typing is better than prismatic/schema" section in the readme. I read the readme, and I don't see why you can't use something like s/validate instead of built-like.

I've had good luck with prismatic schema https://github.com/Prismatic/schema, which seems to be headed in a similar direction. It's fairly low commitment and can lead to big gains fairly quickly. I assume this would be similar.

This seems to me like a very practical way to approach Clojure typing. Similar to the author, I have often needed to make a series of transformations on complex objects. Those transformations mostly depended on the presence of certain keys, so having strict types was unnecessarily rigid. Derived typing would also be useful, of course.

One point to appreciate about this approach is the flexibility in only worrying about the relevant pieces. In my case, I may be worrying about whether a continuous variable has been tagged "datetime", requiring additional processing steps. Merely checking for such tags allows the input data to implicitly direct the flow of the program, reducing the coupling between data and specific processing implementations.
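The tag-driven dispatch described above can be sketched generically; here is a toy version in Haskell (keeping to one language for this thread's examples; the `"tags"`/`"datetime"` keys and the `process` function are made up for illustration):

```haskell
import qualified Data.Map.Strict as M

-- An open record: any keys may be present; we only care about a few.
type Record = M.Map String String

-- Only the presence of the "datetime" tag matters; everything else
-- about the record is left unconstrained, as in structural typing.
process :: Record -> String
process r = case M.lookup "tags" r of
  Just "datetime" -> "parse as timestamp, then continue"
  _               -> "continue as-is"

main :: IO ()
main = do
  putStrLn (process (M.fromList [("value", "2015-01-01"), ("tags", "datetime")]))
  putStrLn (process (M.fromList [("value", "42")]))
```

The point is that `process` never enumerates the record's full shape, so the data itself steers the pipeline.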

I've also been thinking about types, validation, and structure. I see room for various approaches, including static typing, validation, and more. For example, here's an in-progress library that I plan to build out soon: https://github.com/bluemont/shape

Our software system currently uses Redis as a central bus, and around that we have a dozen apps that send a hashmap back and forth among themselves. We use Carmine/Nippy to serialize and deserialize the hashmap, so we never have to think about anything other than a hashmap. All the bugs we face are because of missing or misused fields in the hashmap. For us, a combination of structural typing and Nippy could potentially protect us from 90% of the bugs we have seen so far.

It looks like the sonograms are full of harmonics. The conventional musical notation for a note with rich harmonic content, such as a single pluck of a guitar string, is not a vertical line on the staff with notes at every harmonic; instead, you just indicate the pitch of the fundamental. (Even if the fundamental itself is mostly missing, like in the low notes on an upright piano, that's where you put the note.) Then, notes with different harmonic content (because they are played on different instruments) are plotted on different staffs, although this might be counterproductive for visualizing whale songs. Colors are probably better for that.

It would be interesting to see if a second-order Markov model of the whale song unit sequence finds information that is not captured in a first-order model. More interesting still would be if a stochastic context-free or pushdown model were able to predict whale songs better than a similarly-complex Markov model, as it would indicate that the whale song has a recursive structure, like human language.
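The order-1 vs order-2 comparison can be sketched by counting n-gram transitions; the toy "song" below is fabricated to show a case where a one-symbol context is ambiguous but a two-symbol context is deterministic:

```haskell
import qualified Data.Map.Strict as M
import Data.List (tails)

-- Contexts of length k paired with the symbol that follows them.
ngrams :: Int -> [a] -> [([a], a)]
ngrams k xs = [ (take k t, t !! k) | t <- tails xs, length t > k ]

-- Transition counts for a k-th order Markov model.
counts :: Ord a => Int -> [a] -> M.Map ([a], a) Int
counts k xs = M.fromListWith (+) [ (g, 1) | g <- ngrams k xs ]

-- Toy song: what follows 'a' depends on what preceded it.
song :: String
song = concat (replicate 20 "xay") ++ concat (replicate 20 "zab")

main :: IO ()
main = do
  let c1 = counts 1 song
      c2 = counts 2 song
  -- Order 1: 'a' is ambiguous, followed by 'y' and 'b' equally often.
  print (M.lookup ("a", 'y') c1, M.lookup ("a", 'b') c1)
  -- Order 2: the longer context disambiguates completely.
  print (M.lookup ("xa", 'y') c2, M.lookup ("za", 'b') c2)
```

If real whale-song sequences showed the same pattern (order-2 counts much more peaked than order-1 predicts), that would be evidence for structure beyond a first-order model.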

It makes some sense that you would use a long, highly-redundant transmission of a sequence of discrete symbols, which then you would repeat after hearing, to distribute information of general interest around the ocean, where travel is slow and the latency-bandwidth product is high. The researchers speculate (largely on the basis of sexual dimorphism) that the information communicated is merely fashion, but surely there is some generally-useful, temporally-changing information of interest to humpback whale survival and fecundity.

What if the songs actually contain the whale equivalent of GPS coordinates? How would we detect it?

I'm sure a few people have spent many hours trying to do so, but I wonder if machine learning could help. It would be a challenge: we'd need factors to correlate to, like the whale's position or information about their environment (location of boats, pollution, or prey).

Perhaps a start would be triangulating the whale's position during each song, and looking for elements that somehow vary with location. I imagine someone has looked for this. Location might not actually be a good thing to look for - whales can presumably determine each other's location from the sound source and distance alone, like a human could hear the direction and distance of a shouting human. What else might they be communicating?

I wouldn't mind seeing sources for some of the theories posited here. For example, I searched briefly for other sites that mention William Howard Taft's use of blackmail and wasn't able to find any. It wasn't an exhaustive search, to be sure, but the lack of information reduces credibility in my eyes.