As someone who always disliked Objective C, I think Swift looks very promising. I'll check it out right away :)

Software-wise, I feel these current WWDC announcements are the most exciting in years.

Looking at the Swift docs right now, I can see many interesting inspirations at work: there's some Lua/Go in there (multiple return values), some Ruby (closure passed as the last argument to a function can appear immediately after the parentheses), closure expressions, strong Unicode character support, a very very neat alternative to nullable types with "Optionals". Operators are functions, too.
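A minimal sketch of two of those features together, tuple returns and Optionals, in current Swift spelling; the function and names are invented for illustration:

```swift
// Multiple return values via a tuple, plus an Optional for the empty case.
func minMax(of numbers: [Int]) -> (min: Int, max: Int)? {
    guard let first = numbers.first else { return nil }
    var lo = first, hi = first
    for n in numbers.dropFirst() {
        lo = min(lo, n)
        hi = max(hi, n)
    }
    return (lo, hi)
}

if let bounds = minMax(of: [8, 3, 12, 5]) {
    print(bounds.min, bounds.max)  // 3 12
}
```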

It has the concept of explicitly capturing variables from the surrounding context inside closures, like PHP does, instead of keeping the entire context alive forever like Ruby or JS.

Hell, there is even some shell-scripting thinking in there, with shorthand arguments that can be used as anonymous parameters in closures, like "sort(names, { $0 > $1 } )".
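In shipping Swift the free function sort from the launch docs became a method, but the $0/$1 shorthand survived unchanged; a sketch:

```swift
let names = ["Chris", "Alex", "Ewa", "Barry", "Daniella"]

// The launch docs spell this sort(names, { $0 > $1 }); current Swift
// uses the sorted(by:) method, but the shorthand arguments are the same.
let descending = names.sorted(by: { $0 > $1 })

// Trailing-closure form, the Ruby-flavored syntax mentioned earlier:
let ascending = names.sorted { $0 < $1 }

print(descending)  // ["Ewa", "Daniella", "Chris", "Barry", "Alex"]
```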

Inside objects, properties can be initialized lazily the first time they're accessed, or even updated entirely dynamically. Objects can swap themselves out for new versions of themselves under the caller's nose by using the mutating keyword.
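A small sketch of both ideas with invented types: `lazy` defers the initializer until first access, and `mutating` lets a struct method modify, or even wholesale replace, `self`:

```swift
struct DataImporter {
    // Not computed until someone first reads `lines`.
    lazy var lines: [String] = {
        // Imagine an expensive file load here.
        return ["line 1", "line 2"]
    }()
}

struct Counter {
    var count = 0
    mutating func bump() { count += 1 }
    // Swap self out for a brand-new value, as described above.
    mutating func reset() { self = Counter() }
}

var importer = DataImporter()
print(importer.lines.count)  // 2 (this access triggers the lazy initializer)
```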

There is the expected heavy-weight class/inheritance scheme which accommodates a lot of delegation, init options, bindings, and indirection (as is expected for a language that must among other things support Apple's convoluted UI API). But at least it's syntactically easier on the eyes now.

Automated Reference Counting is still alive, too - however, it's mostly under the hood now. Accordingly, there is a lot of stuff that deals with the finer points of weak and strong binding/counting.

Swift has a notion of protocols which as far as I can tell are interfaces or contracts that classes can promise to implement.
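That matches the docs: a protocol is a contract that classes, and also structs and enums, can adopt. A minimal invented example:

```swift
protocol Greetable {
    var greeting: String { get }
}

struct Dog: Greetable {
    var greeting: String { return "woof" }
}

final class Greeter: Greetable {
    var greeting: String { return "hello" }
}

// A protocol type can hold any conforming value, class or struct alike.
let voices: [Greetable] = [Dog(), Greeter()]
let all = voices.map { $0.greeting }.joined(separator: " ")
print(all)  // "woof hello"
```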

I think generally there are a few great patterns for method and object chaining, function and object composition in here.

I'm not even an iOS developer but this is by far the most exciting thing I heard in the keynote.

As an amateur/hobbyist programmer who's self-taught with Ruby, JavaScript, etc., the one thing that was keeping me from experimenting with iOS apps was Objective-C. I know I could tackle it, but it's been hard to take the plunge.

I don't know much about Swift yet, but from what I've seen it looks very exciting. So if Apple's goal was to get new devs into the iOS world, at least from 10k feet, it's working.

Just glanced thru the Swift book in about 3 hours. Conclusion: all your programming language are belong to Swift, mostly stolen good ideas, some innovations, a few gripes.

I can say Swift takes inspiration and improves on at least these languages:

C:

- typealias
- struct
- control structures
- labeled statements AKA gotos
- varargs

C++:

- default arguments
- class instance construction syntax
- // comments
- superclass and implemented-protocol declaration syntax
- semi-virtual class init, deinit

Go:

- no parentheses around the condition part of control statements
- Unicode identifiers
- shorthand for signed and unsigned integer types U?Int(8|16|32|64)

C#:

- in-out params
- properties
- subscript access of class member values

Objective-C:

- ARC
- protocols
- extensions
- param names as method names
- willSet/didSet
- nil?

Java:

- enum
- @final
- super keyword
- override method keyword

Scala:

- local type inference; a blend of ML-flavored FP with OOP without the noise and, believe it or not, even more powerful generic type constraints. No stupid JVM type erasure either, so you can actually create an instance of a generic type, just like C++ templates.

Self:

- self

Python:

- for i in enumerate(seq)
- for key, value in dictionary
- Type(value) explicit type conversion syntax
- no public/private/protected class member access modifier bullshit
- array literals; dictionaries are also like Python but use [] instead of {}

Ruby:

- 0..100, 100_000

Lisp:

- closures

Scheme, CoffeeScript:

- ? optional type modifier

Bash:

- $0, $1... inside short callback closures

Innovations

---------------

A break-less switch with optional fallthrough, comma-separated multiple cases, cases that can match any value of any type, conditions or type constraints for pattern matching, and support for method-call shorthand.
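A sketch of several of those switch features at once: no implicit fallthrough, comma-separated cases, interval matching, and value binding with a where clause. The function is invented:

```swift
func classify(_ point: (Int, Int)) -> String {
    switch point {
    case (0, 0):
        return "origin"
    case (_, 0), (0, _):              // comma joins multiple patterns
        return "on an axis"
    case let (x, y) where x == y:     // value binding plus a condition
        return "on the diagonal"
    case (-2...2, -2...2):            // interval matching
        return "near the origin"
    default:
        return "elsewhere"
    }
}

print(classify((3, 3)))   // "on the diagonal"
print(classify((1, -1)))  // "near the origin"
```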

I just skimmed the tour, and my impression is: Swift is a compiled, Objective-C compatible Javascript-alike with an ObjC-like object model, generics, and string interpolation. No exceptions. Based on LLVM and appears to inherit the same data structures as Cocoa apps (Dictionaries, Arrays, &c).

It feels very lightweight, sort of like an analog to what Javascript is in a browser.

I find it a bit sad that with all of the languages that already exist, Apple found it necessary to invent a completely new one -- and then make it proprietary. Why not use Ruby, or Python, or JavaScript -- or even Go, Rust, Clojure, or Scala? (Yes, I realize that the latter two run on the JVM, which would have been problematic in other ways.)

Heck, they could have bought RubyMotion and made Ruby the high-level language of choice for development.

I realize that Apple has a long tradition of NIH ("not invented here"), and in many cases, it suits them, and their users, quite well. But there are so many languages out there already that it seems like a waste for Apple to create a new one. Just the overhead of developing the language, nurturing its ecosystem, and ensuring compatibility seems like it'll cost more time and money than would have been necessary if they had gone with an existing language.

Unlike C and Objective-C, Swift enumeration members are not assigned a default integer value when they are created. In the CompassPoints example above, North, South, East and West do not implicitly equal 0, 1, 2 and 3. Instead, the different enumeration members are fully-fledged values in their own right, with an explicitly-defined type of CompassPoint.

+100 for that. This will help developers avoid a whole class of bugs.

Enumerations also support associated values. Enums in .NET are very poorly defined. Looks like Swift got it right.
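The Barcode example from the Swift book shows the associated-values point: each case can carry its own payload, which pattern matching extracts again:

```swift
enum Barcode {
    case upc(Int, Int, Int, Int)
    case qrCode(String)
}

func describe(_ code: Barcode) -> String {
    switch code {
    case let .upc(system, manufacturer, product, check):
        return "UPC: \(system)-\(manufacturer)-\(product)-\(check)"
    case let .qrCode(text):
        return "QR: \(text)"
    }
}

print(describe(.qrCode("ABCDEFGHIJ")))  // "QR: ABCDEFGHIJ"
```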

My first instinct was to be cautious about new languages from Apple - Dylan was supposed to be something awesome until Apple cancelled it. But I only learned of the existence of Dylan years after it was cancelled. Looked awesome, but it was so niche I didn't want to spend time learning it.

So I took a moment to look at why Dylan was cancelled.[1] Very interesting stuff. What it came down to was:

- Apple was in dire financial straits

- Apple needed to axe all projects that didn't show commercial viability

- At the time, when Apple was transitioning to PowerPC, Dylan was 68K-only, and needed another year or two to be ported

- Most damning, the project was not finished - it wasn't even in the optimization stage

None of these factors are in play here. So. My worries are assuaged. I do want to learn this, and it looks really easy to pick up so far.

I'm really curious now about two (unrelated) things:

1) Is this good enough to build web apps with?

2) How would one manage the transition of an Obj-C based project to a Swift-based one? Assume I don't have the budget or manpower to perform a ground-up rewrite.

We're writing a story for The Next Web on Swift. If anyone's interested in being interviewed for an article, flick me an email at owen@thenextweb.com with brief answers to some or all of the following questions. I'd love to talk to anyone who's used Objective-C before and share your opinions/experience:

1) How does Apple releasing Swift make you feel as an Objective-C developer?

2) Are you excited to code using Swift?

3) What about Swift makes you most excited?

4) Do you worry about upskilling to Swift?

5) How do you think Swift will change the way you work?

6) What concerns do you have about Swift?

Keen to understand how this impacts people and share that if you have time to talk to me :)

500+ comments and the term "asynchronous" does not appear once. It is a platform pain point, several languages have baked-in support for async scenarios, and yet Apple comes up with a whole new language that ignores it, and a forum full of language geeks discusses that language without anyone pointing out that async support is missing.

Swift reminded me of CoffeeScript a little, in a good sense (judging by what they showed during WWDC demo). Complexity and low-levelness of Objective-C is (was?) how I justified my reluctance to program for Apple devices, so I'll be looking forward to Swift.

Oh God, they just compared the speed of Objective-C, Swift and... Python! It's nice to see Swift being faster than Objective-C, etc., but what has Python got to do with coding native iOS/OS X apps? Of course it's going to fail at speed when compared to a statically compiled language.

What a weird and pointless comparison, imo (I mean the inclusion of Python, seems so random to me).

Reading people compare Swift to other languages is pretty hilarious. OCaml... Haskell... CoffeeScript... Ruby... Go... Kotlin... JavaScript... Scala... No one is saying it, so I will: it looks like damn Java 8.

It is probably not a good sign that it can be immediately compared to every modern (and not so modern) language in existence.

- class are reference types, structs are values types, much like D and C#

- runtime-dispatched OO interfaces called "protocols". Blurs the distinction between runtime and compile-time polymorphism. Classes, structs and enums can implement a protocol. Available as first-class runtime values, so protocol dispatch will be slow, like in Golang.

- enumerations are much like OCaml ADTs: can be parameterized by a tuple of values, value types, recursive definitions (nice)

- worrying focus on properties.

- strange closure syntax

- optional chaining, another anti-feature in my eyes

- normal arithmetic operators trap on integer overflow (!). This must be incredibly slow.

The documents for ios 8 show all examples in objective-c. Can't wait for them to be updated to swift. I'd love to start with 'getting started' and work my way through rest of the docs. I'm a programmer but could never stomach objective-c.

One thing I'm very interested in knowing is how this affects the whole 'hybrid/web app' space.

Many web developers (like myself) have used Phonegap/Cordova in conjunction with tools like the Ionic Framework for our apps, primarily due to the nearly esoteric (for some of us) nature of Obj-C, but Swift almost looks like JS, which certainly has motivated me to learn it and use it in future apps.

I wonder if the aforementioned tools will lose market share because of that. Let's see.

I found the notion of "Optionals" surprising and a bit hard to handle at first. In Objective C it was really easy to lazily allow values to be nil and still do things on them, so it's a bit of a departure.

Thinking about it a bit longer, is it because of the clear distinction between non-nullable values and optionals that the compiler can optimise the code so much more? (I am thinking about the "xx times faster than Objective-C" claims.)
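For readers coming from Objective-C's silent messages-to-nil, a sketch of the idioms that replace them (names invented):

```swift
// An optional wears the "might be nil" in its type.
let parsed: Int? = Int("42")      // Int("forty-two") would yield nil

// Optional binding: the body runs only when a value is present.
if let n = parsed {
    print("got \(n)")
}

// Optional.map: the whole expression becomes nil if the input is nil.
let doubled: Int? = parsed.map { $0 * 2 }

// Force-unwrap (!) is the explicit, crash-on-nil escape hatch.
print(doubled!)  // 84
```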

There's one thing that bothers me about Swift, and I feel like I must not be getting it. For the most part it looks like a very well-designed language, and the choices they made are extremely pragmatic. But the way collection mutability is determined seems positively insane. You can't have a mutable reference to an immutable array, or vice-versa. I don't get the reasoning behind that.

The demo from the WWDC keynote is quite impressive. Unfortunately, this site seems to have been slashdotted. (Basically, Swift is "Apple acquires their own LightTable.") It's touted as a language for parallelism. I'm curious about its concurrency primitives. Since distribution is shown as a top feature, I'm going to guess that it has an Erlang-like actor model.

Having ARC and not needing GC will end up being a big fundamental advantage for its parallelism story. (The problem with GC, is that one thread does work, then a GC thread comes along and possibly causes an additional cache miss.)

A lot of commenters here are asking whether it will be open sourced, I'm curious, specific to those who think it should be open sourced: why? I'm not really curious about the philosophical reasons, but really the practical ones. How would Swift being open source help you as a developer? It's clearly targeted at iOS and Mac OS X, so does this mean you won't write Mac OS X or iOS apps if it's not open source, or did you hope that you could write Swift code on other platforms?

Anyone know how Swift might achieve its claimed speedup vs. Objective-C? I can't see how it could get the advertised numbers without method inlining, which appears to be incompatible with the dynamic object model that it inherits from Objective-C.

I'm a bit surprised by this move. I see that there are some advantages to this new language, but Objective-C is not as unapproachable as the unwashed masses make it out to be.

If Apple wanted to add official support for a new language I would think it would have been a better move to use something that already has an established following and could potentially attract new developers over. Something like Ruby/Python/Lua would seem to fit the bill nicely.

We've already seen Ruby can be done successfully on the Mac with MacRuby and RubyMotion, but they never got full support from Apple.

Adding an additional programming language that binds me only to Mac platforms doesn't give me a whole lot of incentive.

Swift is designed to make many common C and Objective-C errors less likely, but at least one class of bugs could be ascendant: off-by-one errors in ranges. Swift's ".." and "..." range operators are reversed compared to Ruby and CoffeeScript.

Swift's way is arguably more sensible: the longer operator makes a longer range. But switching the way two similar-looking operators work, as opposed to at least two other languages popular with the target audience, is bound to lead to errors as programmers switch contexts.

Just the fact of having the two operators in the language together is dangerous, since they look similar and switching them will lead to weird bugs instead of immediate compile-time or runtime errors. Switching their meanings makes this more pernicious.

Time to prime our eyeballs to look out for this one.

[1] Swift book: Use .. to make a range that omits its upper value, and use ... to make a range that includes both values.
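For the record, Swift as it eventually shipped renamed the half-open operator, which makes the two much harder to confuse:

```swift
// `...` includes both bounds; the half-open operator (written `..` in
// the launch beta, later renamed `..<`) omits the upper one.
let inclusive = Array(1...5)   // [1, 2, 3, 4, 5]
let halfOpen = Array(1..<5)    // [1, 2, 3, 4]
```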

Swift builds on the best of C and Objective-C, without the constraints of C compatibility. Swift adopts safe programming patterns and adds modern features to make programming easier, more flexible, and more fun. Swift's clean slate, backed by the mature and much-loved Cocoa and Cocoa Touch frameworks, is an opportunity to reimagine how software development works.

So, I picked up Objective-C a few weeks ago, and I've been struggling (only coming from a Python background, with only the CS-knowledge I've picked up along the way). I just figured it would be fun to be able to make some apps. What would your advice be? Stick with Objective C, or switch over to learning Swift? Swift looks a lot more friendly, but I don't want to sell myself short. I'm also thinking big picture, where learning Obj-C might eventually be helpful in learning other languages.

It seems perfectly serviceable, but I have to admit that my reflex response on opening the page was 'Oh god, another language?'

I can't tell if Apple is proposing this as a great new language everyone should use, or whether it's only intended for developers using Apple hardware and so represents a sort of lock-in strategy. I don't have an opinion on the language itself - it seems to have several neat features that make it easier/safer than competing languages like js, but presumably there are a few shortcomings as well.

In the playground REPL, am I missing an easy way to display errors as they happen?

It seems any error messages aren't visible by default. Xcode shows a red "!" disc next to the line, and that's it.

The usual shortcuts for "Jump to next/previous issue" are disabled. Opening the issues tab with command-4 works, but it's empty. Apparently I have to mouse over and click on the tiny red disc to see any error message at all, and then it displays as text that can't be selected or copied.

EDIT: ctrl-command-M turns "Show all issues" on or off. It seems to be a little buggy, which may be why it's off by default. Hopefully we'll get the ability to copy the error text in the next refresh.

Perhaps it was just for the benchmark, but it seemed ambitious that they're testing encryption algorithms with it already. Does anyone know if Swift, by design, could help avoid issues like they've had recently with Secure Transport?

Swift uses Automatic Reference Counting (ARC) to track and manage your app's memory usage. In most cases, this means that memory management just works in Swift, and you do not need to think about memory management yourself. ARC automatically frees up the memory used by class instances when those instances are no longer needed.

The `enum` part of the language seems to be Haskellish algebraic types - like you can have enum "cases" with parameters in addition to just named enumerations .. and these enums can have methods. Cool!

As someone whose biggest hurdle was the syntax of Objective-C, this is absolutely massive for me personally. Just the other day, my friend and I were discussing how hard Objective-C is to properly learn. Of course, the jury is still out on Swift until I read further on it, but it can't possibly be worse than Objective-C.

Question: It sounds like the Xcode 6 beta is available on the dev center but I can't find it. Do you have to be a paying developer to have access to it, or does anyone know if it's going to be made available for free to (unpaid) registered developers?

This is a fascinating development. I wonder how Swift will impact the many cross-platform mobile frameworks. Objective-C was a big barrier for many beginners and small companies and a free and easier development language provided by Apple and supported with good docs and third-party tutorials will likely command a good amount of mind-share. It's going to be an interesting few months in the mobile development world.

Slightly disappointed that I won't be able to try it out yet, because I'm not a Mac developer. I can't get the Xcode 6 beta without it, so I'd have to cough up $99 to try a new language... It seems to me like that might hurt its adoption.

One of the things I look for in a language right off the bat, as it's a sign that powerful features can be built as libraries later, is some type of reflection api. There appears to be none (though attributes seem cool).

Swift looks promising and looks like a step in the right direction. However, looking at the reference and everything, I fail to find an answer to this question. Does the concept of private members (methods and variables) in Swift's object system not exist? It looks like every single variable is exposed fully without any way to prevent it from being so.

I'm super excited about Swift as Objective-C was always a barrier for me as I dislike it very much. This was the greatest news from Apple today, I hope to see compilers on other platforms as well soon.

Noob here. I don't understand... so what happens to Objective-C? Why would you code an iOS app with one language instead of the other? Why would you use both? That just sounds like a pain. Is Swift the evolution of Objective-C or something?

Looks neat, but I'm disappointed that Apple didn't go with Ruby for their next-generation language. Things like MacRuby and Ruby Motion make it seem like that was a possibility, albeit a pretty distant one.

This is probably the biggest announcement from a developer perspective. Swift looks like a language in which you can code as fast as you code in languages like Ruby or Python, while having the speed and performance of a language like Objective-C.

That experience just killed a potential programmer for you right there, Apple.

I had a few hours to kill, and was pumped to jump on the next Apple cash cow and help us both, but you literally killed my ability to download the manual or learn anything more about it for a few days, by which time I'll probably be onto something else.

The BBC obituary on Shulgin[1] calls him the "Godfather of Ecstasy"[2], but he was far more than that.

He synthesized and carefully chronicled the effects of hundreds of psychoactive compounds on himself and a small, dedicated core group of explorers of human consciousness.

His efforts were published in two massive, definitive tomes called PiHKAL[3][4] and TiHKAL[5][6], the titles of which stand for "Phenethylamines I Have Known and Loved" and "Tryptamines I Have Known and Loved", respectively.

These volumes contained detailed chemical synthesis instructions for the compounds he created, along with "trip reports" and ratings[7] of the compounds' psychoactivity, ranging from:

PLUS / MINUS (+/-) "The level of effectiveness of a drug that indicates a threshold action. If a higher dosage produces a greater response, then the plus/minus (+/-) was valid. If a higher dosage produces nothing, then this was a false positive."

to

PLUS FOUR (++++) "A rare and precious transcendental state, which has been called a 'peak experience', a 'religious experience,' 'divine transformation,' a 'state of Samadhi' and many other names in other cultures. It is not connected to the +1, +2, and +3 of the measuring of a drug's intensity. It is a state of bliss, a participation mystique, a connectedness with both the interior and exterior universes, which has come about after the ingestion of a psychedelic drug, but which is not necessarily repeatable with a subsequent ingestion of that same drug. If a drug (or technique or process) were ever to be discovered which would consistently produce a plus four experience in all human beings, it is conceivable that it would signal the ultimate evolution, and perhaps the end of, the human experiment."

His chemistry lab was DEA-licensed to handle "illegal" (scheduled) compounds, though he often synthesized entirely novel compounds which were not scheduled because neither the compounds nor the laws scheduling them existed yet.

Shulgin tirelessly educated the public and the law-enforcement community on the effects and value of psychedelic and psychoactive compounds, and wrote a highly informative Q&A column.[7]

Shulgin's pioneering work inspired generations of chemists, self-experimenters, and explorers. He was well known, loved, and respected as one of the most highly accomplished psychedelic chemists in history. His presence and guidance will be deeply missed.

I had dinner with him once at a conference; he was amazing. When asked about the safety of self-testing novel substances (he of course starts at insanely low doses, but still), he said that he's learned to identify the signs of grand mal seizures, and if he feels one coming on, he simply sticks himself with a couple hundred milligrams of phenobarbital, straps himself in, and goes for a ride. Then he gets back to work.

MDMA may have been discovered in a Merck laboratory, but Alexander (Sasha) Shulgin devised an excellent DIY MDMA synthesis that could be attempted outside a laboratory environment. Shulgin has been accused of intentionally designing his MDMA synthesis in ways that may have reduced yields, but which utilized precursors that were simpler to obtain by DIY chemists.

Shulgin's decisions to facilitate DIY may be responsible for the proliferation of MDMA, which will only increase in importance as MDMA is given more attention in mainstream psychological research. As a psychologist myself, I suspect Shulgin's gentle subversion (a spirit that persists through PiHKAL and TiHKAL) will ultimately be viewed as a heroic act that brought attention to an important therapeutic tool.

"(with 100 mg) I had weighed correctly. I had simply picked up the wrong vial. And my death was to be a consequence of a totally stupid mistake. I wanted to walk outside, but there was a swimming pool there and I didn't dare fall into it. A person may believe that he has prepared himself for his own death, but when the moment comes, he is completely alone, and totally unprepared. Why now? Why me? Two hours later, I knew that I would live after all, and the experience became really marvelous. But the moment of facing death is a unique experience. In my case, I will some day meet it again, and I fear that I will be no more comfortable with it then than I was just now. This was from the comments of a psychologist who will, without doubt, use psychedelics again in the future, as a probe into the unknown."

"The Shulgin Rating Scale is a simple scale for reporting the subjective effect of psychoactive substances at a given dosage, and at a given time. The system was developed for research purposes by the American biochemist Alexander Shulgin and detailed in his book PiHKAL"

PLUS FOUR (++++) [...] If a drug (or technique or process) were ever to be discovered which would consistently produce a plus four experience in all human beings, it is conceivable that it would signal the ultimate evolution, and perhaps the end, of the human experiment.

Some of the most amazing moments I've had in life were on MDMA. I'm sure this is true for hundreds of millions of other people who've taken it. This substance has revolutionised our world in many ways - music, fashion, art, architecture, etc.

Apart from MDMA, Shulgin synthesised, experimented with, and wrote about countless substances and plants which affect the mind or spirit.

A great explorer, and from what I've read, a great human being as well.

Today I work in IT security; in my former life, he was a great inspiration.

I had almost forgotten him... I was in a closed circle of psychonauts; we were about 60 people.

We would get our hands on the most exotic substances and share amongst us, and compare trip reports.

It was my entire life: 2C-B, 2C-I, 2C-T-7, 2C-E, DIPT, 5-MeO-DIPT, LSD, etc. So I have tried many of his creations, and I idolised him. Then in 6 months 4 of the group died: 1 suicide, 3 ODs (not on any of Shulgin's creations, of course). Then I quit, dropped all my friends, started taking life seriously...

But I have one thing to remind me of that time in my life... a book I inherited from one of my now-dead friends:

He actually corresponded with Shulgin himself, doing experiments using cactuses injected with, umm... some variety of DMT, I think, to make the cactus metabolise it into something else. Shulgin advised him, and my friend did the experiments.

"How long will this last, this delicious feeling of being alive, of having penetrated the veil which hides beauty and the wonders of celestial vistas? It doesn't matter, as there can be nothing but gratitude for even a glimpse of what exists for those who can become open to it."

With all the praise of self-experiments (or experiments on a dedicated core group), there is this fact: we know that even a single use of a psychedelic drug may be "a life-changing experience". We also know that some substances may cause irreversible changes in the brain (e.g. glue-sniffing). So IMHO it doesn't sound like a rigorous science, who knows what changes those countless tests had caused and how those changes affected subsequent tests.

Can we start a crowdfunding campaign for a museum of ecstasy with legal public samples? Anyone know German law? How would this go down in Darmstadt, where it was first discovered by Merck? I for one will donate significantly.

This just solved a huge problem I've been struggling with. This is beautiful - I don't actually want to know the information I've been trying to access, but it will make the experience better for the user. I now realize I don't HAVE to know - the browser knows, and that's all that matters. I just have to teach the browser what to do.

Obvious question - how was the list of URLs compiled? Some are really specific like YouTube channels. On the other hand there are only 15 categories and there are probably a lot of people that would not get a single match or only something very generic like Wikipedia.

I know that the `:visited` exploit is handled by the browsers so that you can't figure out by javascript what is going on...

but what if you used just CSS to figure it out? For instance, what if you generated the CSS which had a unique image it requested via the `background-image` property, stored the data on the server, then just requested the data from the server after the fact?

Do the browsers prohibit the usage of url-based css properties on CSS selectors with `:visited` or something? Does anyone have a link/reference to how the exploits were patched up?

Heh. I clicked a few before I realized what was going on (looking at the status bar shows the link, which somewhat gives it away). You could prevent this by adding mouseover/out and onclick logic that removed the :href on hover and just colored itself red.

I had a very similar idea a while back, except I was measuring onAnimationFrame times with a carefully crafted CSS stylesheet to determine which links were being painted as :visited automatically and completely hidden from the user.

Accuracy varied a lot between computers but in ideal circumstances (only browser running) it would have ~90% accuracy on each of 25 links I was testing against - the test took about 8 secs to run though.

Interestingly it never worked particularly well in chrome - chrome seemed to stop painting :visited elements after a certain amount which prevented it from working.

From the number of squares, I thought it might end up doing something even more 'clever' i.e. generate a square for each of the most recent n URLs from feeds of m news sites, then analyse, for example, words in headlines of those articles to determine what I'm interested in. Lots of potential for data analysis once you have someone's browser history.

Hahaha. I was giving it the benefit of the doubt before viewing source, and so I was wondering what happens when I push gray instead of red. :P That is probably why I got some weird interests in my results.

This is really clever. One interesting use for this would be to target ads at people who visit certain sites, or to customize your site's landing page to direct visitors toward areas they might be interested in.

> Marin's team sent over a list of hundreds of technical, legal, and business questions that we'd need to answer for the deal to go through... Tracking down document after document was tedious beyond compare.

Having your ducks in a row as much as possible can make a huge difference in the complexity, risk, and overall stress of doing a sale/acquisition.

I don't mean Day 1, of course, but once you're starting to have conversations around acquisitions, it's really helpful to make sure your books are clean, that you have clearly documented your software stack, all of the third-party code you use and licenses, all of the contracts and MSAs you might have signed with your customers, employment agreements with contractors, and so on.

It's annoying, but once you have it done it's relatively easy to keep up to date.

When we sold our last startup, our CFO had done this 10+ times before, and on the first day of the due diligence, he handed over a URL for a data room with hundreds of documents, categorized and neatly organized. The acquirer's lawyers said it was the easiest DD they'd ever seen.

> What you realize, though, is that partnerships are rarely a real thing.

This is tangential to the core of the article but I can't stress enough how true this is. In the two companies I've co-founded, we've been approached for partnerships by huge companies (Oracle and Adobe) and tiny, 1-person, pre-revenue shops. In almost every instance, it's been a net loss in time and money. The tiny shops just want help making inroads into your industry and customer base. The huge companies just want to show they have "partners" to their direct reports or sales leads. They'll ask you to build out some integrations (on your dime) and scrap the entire project 6 months later (true story). To them, it's a rounding error but to a startup, the financial and opportunity costs can really hurt. There's a reason Gail Goodman calls partnerships a "mirage" [0].

It's great that this call turned into an acquisition for Perfect Audience. I would advise everyone to take Brad's advice: take the call and maybe a meeting, but just one. Unless the meeting goes well and the strategic fit is too obvious to ignore, just say "no" to partnerships.

I once got to meet a bunch of M&A/corp dev people from Google/FB and other companies together and asked them, "What would you do if you were a founder and wanted to streamline an acquisition?" Across the board, the top response was "Have all your paperwork in order from day 1 - employment agreements, IP assignments, every single thing you can think of."

I just started working at my first startup. Given that the employees are so crucial in building the product that is sold, why is it so uncommon for significant proceeds from M&As to go to employees rather than founders?

I realize that VCs like to take the lion's share, but even with what remains most of the stories I hear are of founders taking large payouts while employees get relatively little.

I applaud Perfect Audience for recognizing the value of employees and allowing them to participate in the windfall.

"One thing that did cut through the exhaustion was a task I'd been anticipating for more than six years: writing the Facebook post in which I announce to friends, former friends, frenemies, ex-girlfriends, college roommates, future wives, and family members that I was not in fact an obscure failure but a new, minor footnote in the annals of Silicon Valley startup successes."

Amen. This has personally been the hardest thing for me about being a tech entrepreneur. My friends all assume that overnight I'm going to be the next Zuck (I'm not) and wonder why I don't have time for them. Setting aside the fact that I'm a fairly privileged person, putting my heart and soul into a startup means many of my relationships have taken a back seat. That sucks, but you soon realize the relationships that are most important to you will ultimately wait.

Yeah, huge tech firms reached out to our company too. After going out of our way (many miles) and spending a ton of money to demo, they were horribly rude. They literally took their huge name/foot and squashed us like a bug. They promised us the moon and the sun, and when we arrived it was hell how they treated us, saying things as they showed us to the door like "You better run fast, the race is on." This was after they baited us for our secret sauce with promises of helping us out and/or more (we didn't divulge everything). Also, they blocked our tech from working during our demo (wth?).

Well, after that experience, when other big tech firms reach out (one recently asking to "let us understand how your technology works"), I'm like, HA, screw you!

Partnerships are a time sink. One that could go nowhere, get you feeling squashed like a bug or actually provide a win.

Though that's how it all goes with this entrepreneurship game. Game on!

This is a success story no doubt; a $25 million exit in under 3 years and with just ~$1 million in funding is obviously a better outcome than what the vast majority of startups will ever realize. But it also highlights just how hard it is to realize a meaningful windfall as a startup employee.

Even if you assumed that the 12 non-founder employees equally split 50% of the company (which is almost certainly high), likely none would net $1 million after exercising their options and paying taxes. What's worse: the vast majority of this deal was paid for in stock. The Marin Software stock chart over the past two years is not very inspiring, which is especially interesting given how good the market has been to so many other tech/software companies. In an all or mostly stock deal involving a public company, you are ideally acquired by a company with a rich valuation. That's not the case here.

The $2.7 million in equity retention grants, if split equally amongst 12 employees, adds $225,000 for each, but that too is stock and the employees have to stick around and work for it. I don't know much about the acquirer, but working for stock that has for some reason languished during one of the most impressive bull markets in history isn't a very compelling proposition.
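The per-employee math here can be checked directly. A minimal sketch; note the 50% employee pool and the equal 12-way split are the comment's own deliberately generous assumptions, not disclosed deal terms:

```python
# Back-of-the-envelope numbers from the discussion above.
deal_value = 25_000_000
employee_pool = 0.50 * deal_value   # assumption: half the company to 12 non-founders
non_founders = 12

per_employee_gross = employee_pool / non_founders    # pre-tax, pre-exercise, mostly stock
retention_per_employee = 2_700_000 / non_founders    # retention grants, also stock

print(round(per_employee_gross))     # 1041667
print(int(retention_per_employee))   # 225000
```

Even under these generous assumptions, gross proceeds barely clear $1M before option exercise costs and taxes, which is the point being made.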

This all seems lost on the founder of the company. I can't help but wonder if it's lost on the employees too.

"Total strangers on the Internet were speculating on why we sold, how much we might have made, and what our revenues might have looked like."

This is what the Internet does, and for better or worse, I hope my (advertised) speculation[1] was taken in the vein it was offered: without judgement or malice.

Now that we know more about the deal, I'd like to add my congratulations to you and the employees. I've sold two companies (well ... my part of them) so I know the anxiety that comes right before and right after the sale. It sounds like you've made a very wise decision and I wish you the best as part of Marin.

"... it became clear that ad retargeting, in which you show ads to people who recently visited your website, was where MARKETEERING dollars were going" (emphasis mine).

That's an interesting turn of phrase: it evokes Disney's Imagineers, who can be loosely said to be engineers with a heavily creative bent, and applies it to the act of marketing, thereby implying a more heavily creative and technically adept form of "marketer". I wonder if "X-eer" is an inchoate language trend?

Can we get the following information from the founder, or does anyone know?

1. When was Perfect Audience started?
2. How much revenue did it have at the time of selling?
3. What was the rough (though not exact) percentage of the company held by the owners at the time of selling?
4. What software technologies did they use for this platform?

> Perfect Audience started as an ad design product called NowSpots, which was itself spun out of a previous company called Windy Citizen, a local news aggregator that I bootstrapped (entrepreneur speak for self-financed).

This demo unfortunately uses an incorrect perspective transformation. There is no reason to resort to trig if you represent the camera plane as a vector, step along it one pixel at a time, and let the wall height vary linearly with the distance along the camera vector (taking lines to lines). In addition to being correct[1], this has the added benefit of being faster if implemented well.
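A minimal sketch of the trig-free approach: a standard grid DDA that walks the ray from cell boundary to cell boundary. This is hypothetical illustrative code, not the demo's actual implementation; the grid layout and function name are made up.

```python
def cast_ray(grid, px, py, dx, dy):
    """Walk a tile grid from (px, py) along direction (dx, dy) until a
    wall cell (value 1) is hit; returns the distance in units of the
    direction vector.  grid is indexed as grid[y][x]."""
    mx, my = int(px), int(py)
    # Distance along the ray between successive x (or y) cell boundaries.
    ddx = abs(1.0 / dx) if dx else float("inf")
    ddy = abs(1.0 / dy) if dy else float("inf")
    step_x = -1 if dx < 0 else 1
    step_y = -1 if dy < 0 else 1
    side_x = (px - mx) * ddx if dx < 0 else (mx + 1 - px) * ddx
    side_y = (py - my) * ddy if dy < 0 else (my + 1 - py) * ddy
    while True:
        # Advance to whichever cell boundary comes first; no trig needed.
        if side_x < side_y:
            mx += step_x
            side_x += ddx
            hit_dist = side_x - ddx
        else:
            my += step_y
            side_y += ddy
            hit_dist = side_y - ddy
        if grid[my][mx]:
            return hit_dist

# Wall at x = 3, ray fired straight along +x from x = 0.5:
print(cast_ray([[0, 0, 0, 1]], 0.5, 0.5, 1.0, 0.0))  # 2.5
```

With the ray direction expressed as view direction plus a camera-plane offset per screen column, the returned distance is already perpendicular, so wall height is just a constant divided by it.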

It is very interesting to me that you need to multiply the distance-to-wall by the cosine of the angle to transform the image from fisheye to normal. It makes me wonder, why is it that our eye in real life sees straight lines as straight, the way this demo renders the image?

To illustrate the question, see https://en.wikipedia.org/wiki/File:Panotools5618.jpg: why do we see the world as in the bottom image instead of the top one? After all, our eye really is at different distances from different parts of a straight wall, so it sounds logical that we would see the fisheye effect described in the article. Is the rectilinearity of the image we see caused by the shape of the lens in our eye, or by post-processing in our brain?
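For reference, the correction being discussed is tiny in code. A sketch with illustrative numbers (the screen height and function name are made up):

```python
import math

def column_height(euclid_dist, angle_from_center, screen_h=200):
    # The raw ray length grows toward the screen edges even for a flat
    # wall facing the camera; multiplying by the cosine of the ray's
    # angle from the view direction projects it onto that direction,
    # so equal wall heights render equal and straight lines stay straight.
    perp = euclid_dist * math.cos(angle_from_center)
    return screen_h / perp

print(column_height(1.0, 0.0))  # 200.0 for the central ray
```

An off-center ray to the same flat wall has a longer Euclidean length but the same perpendicular distance after the cosine factor, which is exactly the fisheye removal.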

I've seen some people suggest that voxels are like sprites for 3D programming (as far as sheer simplicity goes), but this strikes me even more that way. How does this compare to using actual 3D/voxels? Can you still have interesting physics, or do you miss out on a lot?

I know that the arrow keys have actual arrows on them, but so many modern games have trained us to use WASD to navigate, that if you're going to insist upon the arrow keys for navigation, you should probably mention this somewhere before the link to the demo.

Great work. I had a huge amount of fun converting your last article (a terrain renderer in a tiny amount of JS) to WebGL, and that was only fun because of your clean, easy-to-understand code. Thanks for sharing.

Raycasting brings back memories! I remember poring through raycasting techniques to make my Wolf and Doom clones as a teenager. When I was almost done, I demoed this for a group of friends at our school's computer club, and boy were they impressed, up until the point the clipping algorithm failed and objects stopped disappearing when they fell out of the player's field of view. Was teased for months after that.

I remember being amazed how simple raycasting was when I wrote a similar (though much simpler) engine in Java for a high school project. The "engine" itself was like ~200 lines of code in just 2 or 3 functions. Raycasting is a really clever technology. Cool demo!

This is incredible. I remember doing something in C with OpenGL, with triple the line count and not nearly as impressive results, during a bachelor's computer graphics class. This would have been a much more interesting lab.

In Norwegian the official name is "krøllalfa", meaning "curly alpha". Hip in the 90s, but I don't think I've heard anyone call it that in years. I was actually surprised when I learned that "at" was the proper English name and usage. I had always assumed it was a symbol that had been co-opted into network addresses because it was accessible from the keyboard and kinda looked like an a.

One advantage of the international variants is that they're not ambiguous, whereas in English you might have to explicitly specify "the at symbol" when speaking.

As a side note: A long time ago, before the internet and international shopping, I mainly thought of $ as the variable character in BASIC.

No one actually uses Klammeraffe in German. It's just "at" (with an English pronunciation most of the time). I wonder how widespread the use of those alternative names for "at" is in the other languages he lists.

In German, Klammeraffe is this weird, unwieldy nickname that used to be somewhat fashionable a long time ago (mid-nineties maybe, whenever many people were first confronted with email addresses). I remember it being used back then whenever people were explaining the internet in the media (TV, radio, books), though I don't think it ever got widely used outside that context.

In Japanese, the sign itself is called "attomaaku" ("at mark") but it's pronounced as "atto" when dictated. So someone's email would be johnsmith-at-gmail-dot-com, and if you ask a Japanese person to pronounce the symbol, they would say "atto". However, if you show them the symbol and ask them _what it is_, they probably would say "attomaaku".

> A pictogram is an ideogram that conveys its meaning through its pictorial resemblance to a physical object.

By this definition, @ is not a pictogram just because it is named for a pictorial resemblance. It must convey meaning through that pictorial resemblance. A "monkey tail" or "elephant trunk" or "sea snail" does not convey the meaning of "at", unless I'm missing some cultural context.

木, on the other hand, conveys the meaning of "tree" through visual representation of roots below the ground and branches on top [1].

I would have mentioned that the purported Latin origin of the symbol is much closer to the English way of saying it:

> Some historians believe that the @ symbol first appeared as a contraction of the word "ad", which loosely means "towards". Scribes may have altered the word by exaggerating the upstroke of the d.

So at least on Twitter, "at" is pretty consistent with the original meaning of the symbol.

All I can think is how much longer it must be to speak tweets aloud in other languages. As opposed to "at Jim Lipsey, at Gruber, at Chockenberry, at The Talk Show", now it's "chee-o-cho-la Jim Lipsey, chee-o-cho-la Gruber..."

Fun fact about HN's bad visual design: submissions with very short titles typically get huge amounts of upvotes as people try to click on the submission link, and hit the upvote button by accident. Since you can't revoke upvotes for dumb submissions, the number climbs without limit.

Note, however, that the article does not distinguish between what the sign is called, and how it is read. I call it an "at sign". I read it "at". Now, in Dutch it's called a monkey tail (said in Dutch, of course). But that may not be how it is read in Dutch.

"Yes, we have specifically expanded the scope of our Vulnerability Rewards Program to include End-To-End. This means that reports of exploitable security bugs within End-To-End are eligible for a reward."

Should be an interesting trove of JS tricks:

"JavaScript crypto has a very real risk of side-channel attacks.

Since JavaScript code doesn't control the instructions being executed by the CPU, and the JavaScript engine can perform optimizations outside the code's control, there is a risk of security-sensitive information leaks. End-To-End requires user interaction for private operations in normal use, mitigating this risk. Non-user-interaction actions are rate-limited and done in fixed time. End-To-End's crypto operations are performed in a different process from the web apps it interacts with. The End-To-End library is as timing-aware as it can be, and we've invested effort to mitigate any exploitable risk."
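A toy illustration of the timing-leak class being described here (in Python rather than JS, purely for illustration): a naive comparison exits at the first mismatch, so its running time reveals how much of a secret a guess matches, while the stdlib's `hmac.compare_digest` examines every byte in fixed time.

```python
import hmac

def naive_eq(a: bytes, b: bytes) -> bool:
    # Exits on the first mismatching byte: runtime depends on how far
    # the guess matches the secret, which is the side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# hmac.compare_digest does the same comparison without the early exit.
print(naive_eq(b"secret", b"secret"))             # True
print(hmac.compare_digest(b"secret", b"secrex"))  # False
```

The same class of leak is much harder to control in a JIT-compiled JS engine, which is what the quoted FAQ is gesturing at.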

"Please note that enabling Chromes "Automatically send usage statistics and crash reports to Google" means that, in the event of a crash, parts of memory containing private key material might be sent to Google."

I hope that has more than a FAQ warning when they release it to the Chrome Store. Otherwise....:/

It isn't perfect but it is probably the best in-browser option given the constraints available.

Just tried this out, and it works great! I had to build it using the instructions on the wiki, but nothing too painful. It doesn't just integrate with Gmail, but with textareas all around the web. When you are typing in a textarea and press the extension icon next to the hamburger menu, it pops open a menu containing the text that you were typing on the site, and you are given the options to encrypt/sign a message. When done, it replaces the contents of the textarea on the site with the signed/encrypted message.

It works quite nicely, and I like it. I would like to see some kind of keybase integration, though it's not hard to import my tracked users into the extension by exporting my gpg keyring and importing it again.

"Please note that EC support was added to GnuPG 2.1 beta in 2010, but it hasn't been released as a stable version yet. To communicate with other people that don't use End-To-End, you will need to either generate a key in GnuPG and then import it, or build GnuPG 2.1 yourself."

So basically, out of the box this doesn't interoperate well with non-beta versions of GnuPG, which are what everyone else is using for end-to-end e-mail encryption. That's annoying.

If it is a chrome extension and is installed via the Chrome Web Store, it can be updated silently in the background if I'm not mistaken. So in theory, wouldn't it be possible to serve Google with a NSL and force them to silently push a modified update to a targeted user that reveals the private key?

There are javascript implementations of aes, cbc, pkcs7 and more, all released with Apache 2.0 license. If the quality is what you'd expect from Google they could become valid alternatives to the other implementations out there.

Isn't this contrary to Google's goals as an advertising business? If people are using end-to-end encryption, they won't have cleartext emails to mine, &c. I need to wonder what the catch is, because there is definitely one: does Google own all the keys, or does Google secretly own all the keys?

At the risk of sounding like a fanboy: this is really fantastic! This could actually be a viable, secure answer to mail encryption.

And this is awesome too: "we have specifically expanded the scope of our Vulnerability Rewards Program to include End-To-End. This means that reports of exploitable security bugs within End-To-End are eligible for a reward."

Eleanor Saitta (@Dymaxion) had a few things to say about this on Twitter:

On the one hand, I'm happy Google is trying to make GPG usable within GMail: https://code.google.com/p/end-to-end/ . On the other hand, this leaves many ?s.

It sounds like all you get from "end-to-end", other than a name that's going to cause horrible confusion, is a bare minimum of GPG functions. No TOFU, no pushing users to encrypt by default, no better management of keys, no attempt to stop metadata surveillance.

It's good GMail users will have an easier time with GPG, but if it keeps them on a broken-by-architecture centralized service, we all lose. This doesn't seem to go far enough in making crypto usable (no indexing solution, for instance), but it will slow development of alternatives.

I admit Google is kind of in a bind here: if they want to help GMail users, they're also necessarily slowing the evolution of a safe net. Mostly I wish they hadn't called it "end-to-end". Because, you know, words mean things, and like "Off the record", that means something else.

I'm surprised Google weren't willing to spend the internal security resources on End-To-End to be able to stand behind it at time of release. All told, it pretty much smells like "keep engineers happy" + "win points with the net freedom community as cheaply as possible."

Google, if they wanted to, could do some pretty revolutionary stuff in the secure comms space, but that would cost actual cash. Ssh, no one wants to talk about how Silicon Valley business models depend on surveillance.

Disappointed with the secondary support for RSA/DSA (i.e., pretty much all existing keys). Sadly, Google never were very good at interop with others :-/

As I understand it, everyone not using this/Gmail now has the option of either not being able to communicate securely with the people that start using this, or running unsupported versions of GnuPG :-/ (Or trying to explain how to securely generate, export, and import RSA/DSA keys into End-To-End, somewhat defeating the whole usability benefit...)

Can anyone tell if this addon has been built in a suitably abstract manner such that the core can be used to build similar extensions for other browsers? I.e., would it be possible to take this code and wrap it in a Firefox extension?

Unless I'm mistaken, the author appears to be implementing OpenPGP in javascript. This has already been done by OpenPGP.js. That project is several years old, is active, and has been independently audited.

Is this simply reinventing the wheel? OpenPGP.js can easily be used in an arbitrary browser extension.

I have no affiliation with the OpenPGP.js project besides working on a small project for personal use.

Could someone better versed in current cryptographic trends than I am comment on that choice? We know the NSA has found weaknesses in certain implementations of elliptic-curve cryptography in the past, and I was under the impression there was a preference in the community to move away from them in general, given the unknown extent of the integrity concerns.

It always strikes me that the parts of Python that create problems for people trying to optimize it are all things of relatively small importance, like __new__/__del__ semantics and changing the stack via inspection. I wish Python 3 had just let go of the slow, scripting-only parts.

Attn: Kivy could really use this for mobile deployments, but we use Cython, and almost everyone needs CPython C modules. We need to investigate making drop-in replacements for Python.h and other CPython headers to stub out reference counting etc., which MicroPython doesn't use. The compiler will just skip the calls entirely in some cases.

If these drop-in replacements are technically feasible, not only does Cython magically work, but so does a lot of the Python ecosystem. There's probably more work to get linking and other aspects working, but this might also be a model for moving to alternative Python implementations in general. As long as straight Python "just works" and the headers are available for compiling C modules, we're very close to having a sensible alternative to cpython that can grow without being wedded to it.

This is pretty neat. I'm gaining an appreciation for microcontrollers, and using high-level languages on them is attractive.

That said, the alternative I'm exploring is to upload a standard Firmata firmware to the microcontroller, then drive it remotely, say from python on a full computer (like raspberry pi).

I think the interesting area comes when you can actually put a fairly "smart" microcontroller firmware on the device (GRBL) and then program it remotely, say with a scripting language. At that point, the boundary between a firmware that is a device controller and a firmware that is an open-ended, remotely drivable VM starts to break down. Interesting area.

> No unicode support is actually implemented. Python3 calls for strict difference between str and bytes data types (unlike Python2, which has neutral unified data type for strings and binary data, and separates out unicode data type). MicroPython faithfully implements str/bytes separation, but currently, underlying str implementation is the same as bytes. This means strings in MicroPython are not unicode, but 8-bit characters (fully binary-clean).
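For comparison, the CPython 3 behavior the quote is describing: str and bytes are strictly separate types, and str is a sequence of Unicode code points rather than 8-bit characters.

```python
s = "héllo"            # str: a sequence of Unicode code points
b = s.encode("utf-8")  # bytes: the 8-bit encoded form

assert isinstance(b, bytes) and not isinstance(b, str)
assert s != b          # str and bytes never compare equal in Python 3
print(len(s))          # 5 code points
print(len(b))          # 6 bytes, since "é" encodes to two bytes in UTF-8
```

MicroPython keeps the type separation but (per the quote) backs str with plain 8-bit storage, so the two lengths above would not diverge the same way.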

I am deeply disappointed by the "development" of OS X. It seems that Apple has long ago gutted their x86 OS group due to dwindling profits and lack of real competition. There are a handful of reasons people buy a Mac, and I would largely put them into three groups:

* Students looking for a stable and/or sexy device for their university.

* Artists using applications that don't exist on other platforms (or that prioritize Mac platforms).

* Hackers that want a Unix-like system with a BSD userland, or want Anything But Windows on a laptop.

Sadly, these groups are small groups with eclectic needs that won't be met or improved by real systems engineering with the kernel and core modules. What do I mean by real systems engineering?

* Major kernel development

* Novel and/or modern filesystem support

* Fundamental or deeply integrated "platform" features

In Linux land, every major kernel release brings these features. There are tangible improvements to filesystems, to core features that enable new things to be developed on top of them in every Linux release. These are, largely, absent on OS X. The system is too closed, and the result is that things like Time Machine or even security features and ACLs are hacks upon hacks. Full disk encryption and home folder support is, again, hack-ish, and largely built on work other people did. OS snapshots is essentially an "rsync" to another drive with a smart "restore" utility that repairs changes.

The major kernel development that Microsoft undertook with Windows Server 2003 and Vista is still paying dividends. Folder shadow copies became integrated into fully consistent backups with built-in snapshots. Full disk encryption improved - though home folder encryption is still tragically stuck with NTFS "EFS" support (lackluster, at best.) UAC, AppLocker, and integrity levels brought foundational improvements to the security model. Networking stack changes brought DirectAccess, a woefully underused and under-marketed technology. Storage Spaces and ReFS, though years too late, are interesting alternatives to ZFS/BTRFS. Transactional NTFS was woefully underused, but maybe it will return with ReFS. The core improvements Microsoft is making to the NT kernel are still worthwhile, though. Hyper-V is a fantastic technology, and could really blow people's minds when it's baked into the client OS. (For reference, Hyper-V powers the Xbox One's dual app/game personality. It allows isolating the management OS from games running on it, and also keeps the management OS from interfering with game performance with resource limiting. And they both share high performance access to the GPU.) Ah, I could go on. Reading about new stuff in kernel development is a joy.

Of course, I could go on ad nauseam about Linux changes since 3.0x, but http://kernelnewbies.org/ does a better job than I will.

The result is tragic: Microsoft invested in platform features and then didn't sell users on what it could do with Vista. Apple continues to apply lipstick to the OS X pig and sell users on changes to the window manager and built-in applications.

Since nobody has mentioned it yet here, I'm really glad to see AirDrop will finally work between OS X and iOS. It's bothered me for a while that they have these two different things called "AirDrop" which were not compatible with one another.

I'm really looking forward to this. Unlike iOS 7, the flatter design here doesn't make me feel like a bunch of amateur artists got hold of a free copy of Adobe Illustrator. I actually like the new look quite a bit. Now, to pray that they've made some under-the-hood progress on multi-monitor support.

Moving work back and forth from desktop to mobile also sounds really amazing. I get a hint of it when working with gmail or drive, but this sounds much more deeply integrated. Google will have to respond, and this makes me happy.

> With this new design, OS X...now looks a bit more like iOS 7, but there is still quite a bit of depth. Indeed, more than flat, the design almost seems to focus more on translucency than anything else.

The above is an incomprehensible collection of words to me. I am not sure if this is because of my lack of an intimate connection to Apple products, terrible writing or some combination of the two.

I honestly don't understand this design direction. I know it's nice to have a change but from the few screens I have seen on the Verge it looks like something that came from one of those "I redesigned OS X" blog posts.

Not a great article: new features trumpeted include Spotlight's ability to search for mail messages and contacts, and a Private Browsing mode for Spotlight - both of which are pretty long-standing features.

These things tend to get rushed out, but maybe TC could have waited just a few more minutes to weed out the obvious stinkers.

I'd be happy if they fixed Mail's connection to Exchange (it drops randomly, a known issue), or the terribly slow SMB; mounting Windows drives is just a nightmare. There is a fix, more like a hack really, forcing the OS to use an earlier version of SMB.

I'm not a fan of the flat design personally, but the redesign is pretty sharp. I like the minimal Safari UI; it's nice when the browser lets the webpage be the main focus, and I think that is something Safari does best.

If possible, could someone from the Apple community please tell me if Yosemite will be faster/lighter than Snow Leopard? I don't want an OS that requires 8 gigs of RAM to run "fast" like Mavericks does.

As seen in the screenshot of the new notification center calendar, the life of an Apple user begins at 10:00 with a CrossFit session. It's not that you go to work after that. A relaxing talk with Anne on the phone, maybe talking a little business on the side, but nothing too rough. After lunch you don't start to work either. Just let out all those wise thoughts gathered while living your Apple lifestyle in a fresh stream, like you do.

I know it's hardly the most important thing in an operating system, but god, that looks ugly. This dumb flat fad cannot end soon enough. I hope Mavericks gets security updates for a while, because I don't have plans to upgrade.

The Handoff feature is interesting. I hope it's not just some lame cloud sync that takes ages because your 3G/wifi is spotty. E.g., if I was writing an email in Mail.app, I'd want to be able to shut my Mac and resume writing on my iPhone exactly where I left off. Same with Safari and other shared apps.

I wish Apple would spend their resources on finally fixing some of the most broken fundamentals (photo sync, notifications, finder...), rather than letting Ive further trash the GUI and celebrating that as some sort of accomplishment...

"This is about consumers not getting what they paid for from their broadband provider. We are trying to provide more transparency, just like we do with the ISP Speed Index, and Verizon is trying to shut down that discussion."

This is a genius move. It's exactly what needs to happen to get consumers to realize that it's their ISP fucking them over, not Netflix.

The average joe isn't outraged enough about net neutrality. If only they'd start doing this to other known bad actors (*cough* Comcast *cough*), that might just be what the doctor ordered.

I wonder why they ponied up the money to the protection rackets first, and only then started pointing fingers. Maybe the agreement gave them access to some better data? If not, this should have been done months ago!

By the way, don't bother with the comments on the article page unless you want to lose all faith in humanity :(

I feel like Verizon's treading VERY dangerous waters here. If they sue Netflix for libel over this, then they're going to have to go through discovery. Since the claims center around network congestion, that means it'd be fair game for Netflix to go after every scrap of paper they have about the state of their internal network, oversubscription strategies, data about advertised vs. actual customer performance, any history of traffic shaping, and they'd be able to depose employees about all of these things.

As much as I'd love to see all of that information come to light in a court battle, somehow, I don't think Verizon would...

Wow, what a stupid move on Verizon's part. I have to believe the Streisand effect will far and away outpace any remedial action they might try to achieve here. I get that they're irritated by their customers calling them up knowing more about how broken their network is than the tech answering the phone, but this is going to be the new reality, with Google, Netflix, and no doubt Amazon providing more information to their customers about exactly why their product is having issues.

It's frustrating that Verizon and friends can make grandiose statements in their advertising about "unlimited bandwidth", "faster wifi", "stream 5 things at a time", etc., or outright lie to customers. My mother just called Comcast to downgrade her service, and the CS rep said "WiFi won't work with our Economy Plus internet plan".

Yet when another entity does something as simple as showing an error message that casts them in a negative light, they are willing and able to threaten legal action.

They stretch the truth as far as they can, yet give not an inch when confronted with truths they don't like.

Remove government so we can add customers! But we need government so we can slap Nextflix when they say something we don't like!

I don't have a good answer how to fix it, it's just very plain to see, and frustrating.

Verizon FiOS customer here. There has been a noticeable drop in the quality of streams in the past couple of months. The picture went from excellent > barely functional (after the FCC ruling) > watchable (after the peering agreement), but still not great. Kudos to Netflix.

I believe the solution is to forcibly separate infrastructure from service. The company that provides the infrastructure should not be the same company that provides the service to the customer.

There are several nations around the world where this is the case. It reduces the barriers to entry for ISPs, creating an environment where there are dozens of ISPs to choose from with differing levels of customer service and pricing plans.

The primary problem with Comcast and Verizon is that they can leverage their customer base as a negotiation tactic, rather than negotiating solely on the state of the network.

1) ISPs insisted on selling home bandwidth as functionally unlimited. They did this primarily so they would only have to offer expensive plans. Grandma can't get the 300-meg email plan for $8, only the $80 plan. And then everybody tried to use it like it really was unlimited.

2) DRM. DRM makes it so the net can't cache. 90% of Netflix bandwidth is likely the same 10% of videos, yet every stream has to make the full trip through the wires. A local cache at the ISP to improve efficiency is impossible.

In the cease and desist, Verizon's lawyers allege that Netflix can't possibly know whether the network slowdown is coming from Verizon's network or other parts of the internet. I actually chuckled when I read this, because after reading the Netflix tech blog (here: http://techblog.netflix.com/ ) and seeing Netflix's open source contributions (here: https://github.com/Netflix), I have a feeling Netflix probably has a LOT of data to back up these assertions.

Sue their pants off for the copyright infringement of their customers (and all the other illegal things customers can do). This is why common carriers exist: they give up ultimate control over the content they transfer, treating it all equally, in exchange for the protection of being indemnified from the content itself. This is why the U.S. Postal Service can't be prosecuted for transporting a death threat letter. Make it about money and they'll jump on the common carrier bandwagon before you can blink.

It doesn't state it anywhere, but I assume Netflix can be pretty sure of where the bottleneck is happening before transmitting that message, correct? Given all the other factors that can contribute to a slowdown.

Netflix is angry because Verizon isn't giving Netflix the bandwidth that Netflix paid for[0]. Netflix should ignore this C&D on free speech grounds, and tell Verizon to get lost, because corporations are people[0] and have free speech.

There is an easy and practical solution to this whole problem: If Verizon doesn't like the amount of bandwidth their customers use watching Netflix, they can always choose to block Netflix completely on their network, and inform their customers that they've done so. It would be honest, clean, fair, etc. Customers would be paying for exactly what they are getting, everyone could be happy with the arrangement.

Of course there would be a massive revolt of their customers if they did so. Because their customers WANT Netflix. So instead, Verizon will try very hard to make it "sorta" work and blame Netflix while trying to get money from them.

Someone please clarify my doubts. When I run a server, I have to pay for bandwidth costs. For example, my website hosted on Linode gets a 20 TB data transfer limit, and I expect end users to be able to view content worth 20 TB of up/down transfer. Netflix runs its own servers, but the ISP providing connectivity must already be charging Netflix for a certain bandwidth and data transfer limit. If the ISP is charging for data transfer already, I expect the ISP to provide the entire service. For example, let us assume that Verizon charges Netflix $1 per TB of data transfer, and Netflix uses 20,000 TB of data in a month. Then Netflix owes Verizon $20,000 a month, and Verizon has to serve the data according to the bandwidth agreed to.

Is Netflix not paying Verizon for the bandwidth and data transfer?

If Netflix is not paying Verizon then why does my hosting provider charge me for bandwidth and data transfer?

This seems very naive on VZW's part; it's extremely easy to show that there is congestion at the ingress of one of your peers. Even if VZW is not a direct peer, it's pretty easy for them to get metrics off their clients deployed at the last hop.

Whatever happened to the right to free speech? There are of course restrictions on free speech, such as you cannot yell fire in a crowded building unless there really is a fire, but this is not one of those scenarios. The behavior of Verizon and the other ISPs is grossly offensive!

Netflix should also be including the phone numbers of city council members in the zip code of the account owner, and a link to example legislation for use of eminent domain seizure on last-mile cables.

I'm sure most internet providers cap their bandwidth to services like this. I am confident that the main Swedish ISP, Telia, is capping as we speak.

I can see that ping times increase, and it always recurs around 9-10pm on Fridays. Around 9:30 buffering speeds up, meaning most people have given up and switched to regular TV. One could argue that this is the time that Netflix/Youtube/HBO/ViaPlay or any other service is used the most, but my suspicion is that capping occurs at those times, to validate the fast-lane/slow-lane argument when it arrives here in Sweden too.

I am currently collecting statistics to validate this. However, it's insane that an ISP can cap and provide lesser service on certain ports and argue that the fault is at the entertainment provider. It's also insane that when going from a "torrent way of life" to becoming a paying streaming customer, I can't even watch my favourite show, when I want, on a 100/100Mbit line.
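For what it's worth, a minimal sketch of that kind of statistics collection (the target host, schedule, and log file are all hypothetical, and Linux-style `ping` output is assumed):

```python
import csv
import subprocess
import time
from datetime import datetime
from typing import Optional

TARGET = "streaming-host.example.com"  # hypothetical endpoint to measure

def parse_rtt(ping_output: str) -> Optional[float]:
    """Extract the round-trip time in ms from `ping` output."""
    for token in ping_output.split():
        if token.startswith("time="):
            return float(token[len("time="):])
    return None  # packet lost or output not recognized

def sample_rtt(host: str) -> Optional[float]:
    """Send a single ping and return its RTT in ms, or None on loss."""
    try:
        out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                             capture_output=True, text=True, timeout=5)
        return parse_rtt(out.stdout)
    except (subprocess.TimeoutExpired, OSError):
        return None

def collect(samples: int = 60, interval_s: float = 10.0) -> None:
    """Append timestamped RTT samples to a CSV for later analysis."""
    with open("rtt_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            writer.writerow([datetime.now().isoformat(), sample_rtt(TARGET)])
            time.sleep(interval_s)
```

Comparing Friday-evening samples against a weekday baseline would at least show whether the slowdown is time-correlated, even if it can't prove who is doing the capping.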

ISPs are in the business of providing fast connections and high bandwidth, and that's what I'm paying for, however I want to use it. It shouldn't matter if I'm streaming from YouTube or downloading illegal porn. Yet it looks like they want to sell me bad gasoline that doesn't take me anywhere, and then say my engine is at fault.

Well, I think we should introduce a tax on commuters, because as it turns out they are 90% of the highway traffic. If they want to get to work on time they can use the fast lane for a little extra, and if they don't pay we just make them drive 10 miles per hour. I think this is fair. Oh, and by the way, if they complain about it, it's just them trying to influence policy that was set by good corporations, so I don't understand why we should change it. :)

Is Verizon actively slowing the connection to Netflix.com itself or is the connection between Verizon and Netflix's provider (Cogent?) simply limited? In other words, are all Cogent customers suffering poor performance with Verizon customers?

Because the former is wrong and demanding more money from a single company hosted on a service provider you have specific peering agreements with is extortion.

The latter however, well, no one said there were unlimited pipes between every transit provider on the planet and if you're someone large like Netflix, sometimes you need to pay for transit on more than one provider to get the performance you need when your own provider can't or won't do so themselves. It has been like that for decades.

Netflix may be reaping what they have sown here. They agreed to buy paid peering from Verizon, so if they don't have enough capacity into Verizon the onus is on them to buy more. OTOH if there is some kind of problem inside Verizon then the error message is justified.

This is exactly why I dropped Netflix. I want a high-def movie, not their whining about someone else's network. I have no problems streaming from Amazon. I have a choice of streamers. Other streamers work better. I have less choice of wired ISPs and they're harder to switch. Netflix would rather whine and preserve its profit margins than pay up. OK, fine, but since their whining does not get me the movie I want, I cut it off.

Netflix will find that most other people also do not care to hear their whining. I can understand why Netflix does it though...it's $8 a month. Between production costs, licensing costs, marketing, and profits, all $8 a month leaves room for is crappy movies, poor investment, and a continuous drive to shift costs to ISPs and whine about it.

I wonder if Watterson really believes the gag line for the third strip, "Nah, the art form's dying"? He might, being (apparently) a neo-luddite of some sort. But there have never been as many, or as good, comics as there are now, or as many busy, thriving comic artists. Hint: they are not found at gocomics.com.

Is it just me or are the strips... just not that funny? All three of them are basically the same joke, with one containing a minor reference to Watterson's page formatting preferences. Now that I think of it, that's really par for the course (or maybe a birdie) for newspaper comic strips in general. I guess it just seems strange to see the world's best living cartoonist come out of the fortress of solitude and do something other than leave the audience in tears of profundity.

I've always wished I could draw. Even XKCD's minimalist art style is still an artist drawing stuff, which is beyond me. I can't draw a straight line with a computer (they're very heavy). C&H was not only great art, it was an amazing look at adult life despite being about a kid and a stuffed tiger.

Great to see the credit to Bret Victor and Chris Granger's Light Table. Apple's resources can really help move forward these new ideas of what an IDE can be. If Swift is successful, a whole generation of young developers will use and improve on these ideas. Very exciting to see what happens.

It's amazing that Apple managed to go from hatching the idea in mid-2010 to releasing a fully working framework 4 years later, with tight IDE integration and a huge amount of testing and compatibility, without a single leak (that I've heard of).

It is nice that he mentioned Light Table. I would not be surprised if Swift ends up really benefitting Clojure adoption indirectly. I think one of the big hangups for newcomers is that, if you don't have experience with a Lisp, it's often difficult to understand the benefits of interactive development. If a large amount of new programmers become exposed to it, they'll be more open to other options that provide similar or better interactivity.

Really excited for Swift. I've tried and failed many times to learn Objective C and felt that the barrier to entry was a bit too high for me. As someone who writes JavaScript for a living, Swift is very inviting.

I feel like Swift is a good preview of the future of programming, and it seems that in the future we will have two really different kinds of software engineers. As we make programming mainstream and easy, we will see new people able to use languages like Swift and develop good apps without having the slightest idea of what is happening underneath. We used to have at least a common background between software engineers, but I think that is slowly going to disappear. Is it good or bad? I can't make up my mind yet, but I'm seriously considering more and more going back to lower-level languages, as I feel the upper levels are going to be crowded by the younger generations.

Swift "greatly benefited from the experiences hard-won by many other languages in the field, drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list." I have been wondering whether this language will in fact be open source: it is said to have taken from other programming languages, some of which are open source, yet the decision to make it open source has not been made? Lock-down.

Extremely excited about this language's versatility. Also great to see the work of my school's Alumni and professors going into production. This is the third time I've come across the LLVM compiler in industry use (albeit nothing to the scale of iOS' language) - anecdotally during my internship search this last semester. Will be very interesting to take Professor Adve's compiler course soon.

I've been using Dash for more than a year now. I love it. It is great for quickly looking up things, and best of all it works for multiple languages. I regularly write code in Clojure, Perl, Java and C, I also use Redis and PostgreSQL, and Dash helps with all of that.

My only wish is that someday I could get Intel's x86 manuals and ARM Cortex M0 and M4 instruction set documentation in Dash.

Dash is great and a big shout-out to the developer (@kapeli) who is really responsive to support requests. I found that the backbone docset was actually using the Edge version not the latest stable release. He had it fixed in a few hours.

Dash has been one of the most amazing tools to improve my day-to-day workflow. It's incredible.

I love to travel, and specifically I love to travel to places that don't have wifi. Oftentimes I take fly fishing trips to Montana, or shorter trips to the Smoky Mountains, and during these times I need to be able to work an entire day without internet. Dash is the only reason I can do this effectively.

Dash + Alfred + Sublime are probably my most used tools on any given day (aside from Spotify, which is rarely ever turned off).

Bought this a while back and was very impressed, definitely a worthwhile purchase if you ever spend some time without much internet access. The integration with Alfred + the fuzzy searching is just the icing on the cake.

Also as a little side note, I thought the way it handled the UI for tabs was interesting, though it does leave little room to grab the window and drag when you've got a few open.

Just bought this recently. I feel I've already made my money back multiple times over in time saved versus Google searches. The low latency and absence of unrelated results help me stay in the flow. For me the trick was to assign a global shortcut to invoke the tool.

I like the idea; I bought it and have it open all the time, yet I don't find myself using it that often. That's probably because I know most of the tools I work with from memory (AngularJS), and the documentation I do have to look up sometimes (UnderscoreJS) I actually prefer to see in the browser: the navigation in the browser version has a better subdivision into Underscore modules (functions, arrays, objects, etc.), which Dash's index is missing.

(subtle feature request: subcategories for the underscore docset, or headers/sections in the method listing)

Great developer too. I put in a docs request (for ColdFusion) and he constantly sought feedback from me to ensure it was presented in the best way possible, especially when he was unsure about something himself.

Definitely a requirement on the next non-Internet-accessible development opportunity!

I would like to see a utility that would collect (readability-ified) urls and package them nicely for Dash/Zeal. This would make it easy to build an ultra-custom collection of useful info - a searchable offline bookmarking tool. Best of all would be something that knew how to periodically refresh this archive.

Also, these tools should include a timeline tracking what was useful so that as I return to projects/problems I can scroll back and pick up where I left off.

This is pretty sweet. How do the docsets get prepped for download? Does the author scrape the HTML doc pages at the tool's site? For example, in the case of Node.js, the menu nav on the left side of the official docs [0] isn't found in Dash's docset.

I've really found this app incredibly helpful; I use a really wide variety of libraries and APIs and not having to go to each site has saved me tons of time. Maybe it's not for everyone, but I've loved it. Worth trying out.

Is there a way to tab into the content for a query (right side column) instead of having to mouseover and scroll? Also it would be nice to be able to search the content area as well so I can more effectively jump to the material I think I need.

Dash has become a part of my standard workflow in the last few months. It's great and it's always getting better. @kapeli responds quickly to feedback/questions on Twitter. I use it with Alfred and the vim plugin.

When Dash first came out, I liked it a lot, and found it better than Google for finding what I needed in almost any language I used.

But for some reason that even I don't really know, I stopped using it. I just checked the App Store on this computer, and it says Install, not Buy, which means I already paid for it long ago, and could have been using it this whole time. If only the developers could figure out why I stopped, they could probably make a lot more money.

That said, I do still see an App Store notification pop up every once in a while saying Dash needs to be updated, and it is pretty annoying how often that happens compared to any other app.

Call me stupid, but I can't get a simple question answered by reading the page: what is Dash? A website? A locally run server listening on 8080? A desktop application? From the screenshots I guess it is probably an OS X app, but is it so hard to state that clearly somewhere at the top?

Since most (1) web browsers do not use OpenSSL, CVE-2014-0224 is not going to be a big concern for people browsing over SSL, but it is a concern for machine-to-machine communication, where OpenSSL on both ends is common.

Given that this also affects 0.9.8 there are going to be lots of backend systems that need upgrading.

(1) Apparently Chrome on Android is the odd man out in using OpenSSL, but I don't know if it is vulnerable to this problem.

It seems OpenSSL will accept ChangeCipherSpec messages much too early. CCS in TLS means "we've finished the handshake/renegotiation and will now start using the new keys".

It looks likely that a MITM can send CCS to both ends during handshake, and have them agree on the empty master secret (and therefore trivial application data encryption keys). This is pretty bad as far as TLS bugs go (as bad as "goto fail", but not as bad as "heartbleed").

Given that accepting TLS messages only within the right constraints is fundamental to correctness of TLS and openssl seemingly can't get this right (this, and heartbeat messages before/during handshake), it seems likely this isn't the last problem of this kind.
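A toy sketch (in Python, and emphatically not OpenSSL's actual code) of the state-machine guard that was missing: a ChangeCipherSpec should only be honored once the key exchange has produced a real master secret.

```python
class TlsStateError(Exception):
    """Raised when a TLS message arrives outside its allowed state."""

class ToyTlsEndpoint:
    def __init__(self):
        self.master_secret = None  # only set once key exchange completes
        self.keys_active = False

    def finish_key_exchange(self, master_secret: bytes):
        self.master_secret = master_secret

    def on_change_cipher_spec(self):
        # The missing check: reject an early CCS instead of deriving
        # session keys from an empty master secret, which is what let a
        # MITM force both ends onto trivially breakable keys.
        if not self.master_secret:
            raise TlsStateError("CCS before key exchange finished")
        self.keys_active = True
```

With the guard in place, an injected early CCS raises an error instead of silently activating keys derived from an empty secret.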

The large volume of vulnerabilities coming out of OpenSSL is worrying, but it likely reflects the increased effort being put into auditing and fuzzing the code after Heartbleed. What is more worrying is the many other critical pieces of software that have nowhere near the level of scrutiny that OpenSSL is currently receiving.

I noticed (in that commit, anyway) there were no tests changed. Is it pretty standard not to test things like this? If I find a major bug in code I write, I usually write a test first and TDD until it's fixed.

When I saw Jerad's post on Hacker News, he looked like a good fit for a developer role at Kaggle. Our team interviewed him remotely and I met him in person and verified that he has the talent, curiosity, and life-long interest in software development that'll make him a great addition. We're really excited to bring him aboard.

Thanks HN for helping us make a rare find!

P.S. If you're into machine learning and/or have an interest in developing a great site for a community (like Jerad), we're always looking for more to join us: http://www.kaggle.com/careers

I had a similar experience. I thought "eh, why not?" and got a surprising number of high quality leads. I started a remote working contract for a group in London on the 12th and it's been working quite well.

I had kind of the opposite experience; interviewed one or more times with a bunch of the "Who's hiring" companies and only one had the decency to send an email with an update; the others went into radio silence.

I too was impressed by the response from posting on that thread. I had recently moved to Germany to be closer to my partner and got spotted by a company that was really interesting to me (MenschDanke [0]). Had a few interviews, met the team and all that went well and I start working on Monday :)

I too had a good experience and had the opportunity to speak with many interesting people... however, the only thing lacking was the visibility of my preferences. I wasn't surprised that the majority of people who contacted me were from SF, but everyone who did seemed oblivious to the fact that I'm based in Canada, not interested in relocating, but would love to work remotely.

I agree that the Who Wants To Be Hired thread is valuable and helpful. I was a bit late to the party, but I still had 4 companies and 2 recruiters contact me, one of which turned into an on-site interview at a YC company. I would definitely love to see this thread continue to be posted every month.

Interesting that this thread has gained so much traction. I created an "Ask HN" submission this morning asking about people's experiences with the monthly "Freelancer? Seeking freelancer?" threads, which got ignored and promptly buried!

Looks gorgeous. I can't help but wonder if people unfamiliar with technology and ecommerce would be deterred by such a form? It might give some users the impression that the website is "copying" the credit card. It would be interesting to test the opinions of non-tech-savvy users.

This is neat, and I love the look and feel. The only problem is that it's completely unnecessary and possibly confusing for potential customers. But that's just my opinion. It would be interesting if someone would put this on their own payment page and share the conversion metrics.

There's a few annoyances that threw me off:

1) Like others mentioned in this thread, I first tried to enter credit card details directly on the card. I was initially blind to the text inputs underneath the card. If the card was initially hidden, then faded in to the left/right of the input form that might alleviate this confusion. The problem is the card is so beautiful and neat looking I immediately anchor to the card instead of the input form.

2) When entering an invalid date or credit card number there's no visual feedback. It's very common to be blind to your own input errors, which necessitates clear communication of error state.

3) Even more confusing, I can't tab to the CVC field when the date is invalid. But I can tab to the name field when the credit card number is invalid. This inconsistent behavior initially made me think the form was "broken".

If you're not the merchant of record using this form could very well violate your terms of service for whatever credit card company you have your merchant account with, and/or the IPSP terms of service.

Using this can result in termination of your merchant account, or a temporary suspension, depending on how they take it.

The reason why is that the logos are only allowed to be used by the merchant of record, which for many of the parties interested in this will be their IPSP. So verify prior to doing this that you actually are allowed to use the card association logos (and, for that matter, that you're authorized to capture the card details!).

Please be careful, if you lose your merchant account it could be a while before you get it back, if ever.

It's a nice display, but as others have said it invites attempts to type on the card itself (which doesn't work). It also doesn't scale with increased font sizes; the card remains the same size and text within it wraps or gets cut off.

However, how does this stack up against the recent UX trend of moving away from skeuomorphism? Why should a CC number be represented by an actual plastic card that comes to life and flips around on my screen?

I believe the future of plastic credit cards is limited given the security loopholes, etc. Companies like Square, Google, etc. are already championing transferring money over native internet identities like email addresses.

I once saw a detailed analysis of skeuomorphic credit card input, and it turns out the general population is less likely to fail on a non-moving credit card input box, largely because of difficulties with entering the CVC code. This combines both the skeuomorphic nature of Skeuocard by Ken Keiter (http://kenkeiter.com/skeuocard/) with the simplicity of creditcardJS (creditcardjs.com). I love it!

This is great! There's also Skeuocard (http://kenkeiter.com/skeuocard/) which offers a bit more of a skeuomorphic design. I think I favour the form as part of the card, however this may be confusing/inaccessible to some people - nice work on the library!

I really want to like this because the design and functionality of it are very cool. Like others, I tried to start typing on the card, so there is definitely an element of confusion that isn't there with a normal form. The last place I want to risk any confusion is the stage at which the customer is trying to pay me! A variation of this where you could type on the card might be interesting.

Used it in production today on an internal tool that we do some billing/card running on. After some small struggles with the jQuery options, it worked as it says on the tin.

Interesting reaction from the test group when deploying... the test group being a small group of coworkers. Without any announcement, they were immediately untrusting of it and thought it was somehow capturing the credit card information maliciously. This is the security climate we find ourselves in, heh. But after I reassured them, they thought it was pretty neat/fun.

For me the whole thing took too long to load. I didn't even see the card until I had filled in most of the data in the fields (and I was thinking, 'What's so different about this?'). Then the card loaded and all the data I had put in already was automatically deleted and I had to do it all again. If I was buying something on a whim, this might just be enough for me to think 'Forget it'.

More generally, while I do think that entering card details is a bit of a ball-ache, I don't think the solution is having a picture of the card on screen...

Thank you for this. I've never understood why credit cards are printed with spaces between the numbers, presumably to reduce transcription errors, yet 99.9% of web forms force you to enter the number without any spaces.
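Accepting spaces is a one-line normalization on the form's side; a minimal sketch with a standard Luhn checksum (nothing here is specific to any card network or to this library):

```python
def normalize(card_number: str) -> str:
    """Strip the spaces and dashes people naturally type or paste."""
    return card_number.replace(" ", "").replace("-", "")

def luhn_valid(card_number: str) -> bool:
    """Standard Luhn check: catches most single-digit transcription errors."""
    digits = [int(c) for c in normalize(card_number)]
    total = 0
    # Walk from the rightmost digit; double every second digit and
    # subtract 9 when doubling overflows into two digits.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

For example, `luhn_valid("4242 4242 4242 4242")` passes with or without the spaces, so there is no reason to reject spaced input.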

Tab from expiry to CVC doesn't work in the interactive version. Other than that, honestly, I was just impressed by the "boring" version. Very tidy form; nice to see a new take on it. Existing CC forms around are seriously painful to use. Really digging the card vendor detection, a neat touch.

Oh, wow. The first time I loaded the page the image didn't form correctly and it was just a block of text which was hideous. Then I read the top comment calling it gorgeous so went back, and now that it loaded properly, I agree! Quite awesome.

If you are just starting, you should have the simplest setup - everything on one server - and scale it only when it becomes necessary. Premature scalability adds complexity and slows down your iterations.

My setups usually consist of an nginx serving static content and proxying applications requests (doing gzip, etc). The data tier is initially collapsed into the application as described in http://www.underengineering.com/2014/05/22/DIY-NoSql/ This architecture allows very fast iterations while providing enough performance headroom; it can serve 10k simple (CRUD) http requests per second on a single core.
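To illustrate the collapsed data tier, here is my own minimal version of the idea (not the code from the linked post): an in-process dict fronted by an append-only JSON log that is replayed on startup.

```python
import json
import os

class DiyStore:
    """In-process key/value store with an append-only log for recovery."""

    def __init__(self, log_path: str):
        self.data = {}
        if os.path.exists(log_path):
            with open(log_path) as f:
                for line in f:  # replay every write since the last start
                    key, value = json.loads(line)
                    self.data[key] = value
        self._log = open(log_path, "a")

    def put(self, key, value):
        # Persist first, then update memory, so a crash never loses an
        # acknowledged write.
        self._log.write(json.dumps([key, value]) + "\n")
        self._log.flush()
        self.data[key] = value

    def get(self, key, default=None):
        return self.data.get(key, default)
```

Reads are plain dict lookups, which is where that kind of single-core throughput headroom comes from; durability costs one flushed log line per write.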

I am hosting all of my stuff on a single VPS instance in Docker/LXC containers. It is reasonably easy to migrate things out if I need larger hardware, but it's also very cheap.

Regarding scaling: a couple of years ago I ran a database on a single CPU core (because of licensing issues). It stored 50M rows a day and also executed various queries quite quickly. So I seriously doubt that most of us are going to need large clusters.

The one thing I really want from Digital Ocean is a guide that carefully explains how to set up the "private network" piece of the equation.

The "orange box" that represents the private network in each of the examples is taken for granted, but for someone coming from an application development perspective that piece isn't trivial to make. EC2 Security groups make that sort of box incredibly easy to make, but DO doesn't have anything like that.

Wouldn't it be much better to teach the concept of horizontal scalability applied to the application stack? Your server is a stack of interfaces: a frontend cache, a static content server, a dynamic content server and a database. You can horizontally scale each layer of the stack. Much simpler, and applicable to different scenarios.

However, this approach won't give you a viral article title like "eight server setups for your app" (replace eight with 2^n where n is the layer count).

The website is hosted on 1 droplet, and an additional droplet per customer is deployed through the Stripe and DO APIs.

DO lets you save a snapshot and load it onto a droplet. I have a snapshot that is basically a copy of my 'software': a LAMP stack with an init script that loads the webapp from a git repo.

Customer logs in at username.mywebapp.com

The beauty of this is that I never have to worry about things breaking or becoming a bottleneck. If one customer outgrows their resources, they won't affect anyone else's. It has linear scalability: new customer, new droplet. I don't need to worry about writing crazy deployment scripts, although I use paramiko to SSH into each server when I need to get dirty.

The main website is mostly static content. I could host it even on Amazon S3 but currently using cloudflare.

Updating the product code requires me to restart the droplet instance, so I test things out on a separate staging droplet first. Once things work there, I use the DO API to iterate through all the customer droplets and do a restart.
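A hedged sketch of that rolling restart: the droplet IDs and token are placeholders, the v2 endpoint path is my assumption (check the current DigitalOcean API docs before relying on it), and the function only builds the requests rather than sending them.

```python
API_BASE = "https://api.digitalocean.com/v2"  # assumed API base URL

def build_reboot_requests(droplet_ids, token):
    """Build one reboot-action request per customer droplet (not sent here)."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return [
        {
            "method": "POST",
            "url": f"{API_BASE}/droplets/{droplet_id}/actions",
            "headers": headers,
            "json": {"type": "reboot"},
        }
        for droplet_id in droplet_ids
    ]

# Sending (requires the third-party `requests` library):
# for req in build_reboot_requests([101, 102], "MY_TOKEN"):
#     requests.request(**req)
```

Rebooting droplets one at a time, and checking health before moving on, would keep most customers up during the rollout.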

It would be very helpful if DigitalOcean sold load balancers too, as Linode does, because the bandwidth limits are per droplet, which makes DigitalOcean illogical to use for this. Of course, we can use Cloudflare or similar, but it is still a need.

Virtually no mention of how the different server setups affect availability, which is very unfortunate. Availability and disaster recovery are two things I think are significantly more important than scaling, and your choice of server setup will affect both.

This is awesome! Great content for DigitalOcean to be pushing out as I am probably the exact audience they are looking for when they published this. E.g. I've never gone beyond a shared hosting setup but have been curious to try my luck at learning more of the stack by using the DO platform.

Thanks for the write up. It's the perfect time for me to be reminded about starting simple and changing the architecture as needed. I have prematurely optimized on one project in the past. It was painful. And after all that pain the mythical millions of unique visits never arrived.

As the "Startup Standards" begin to take shape, these guides prove to be extremely useful for the newcomers out there. Sure in 6-12 months it may become a bit dated (depending on the guide) but if kept up-to-date, they can be a powerful tool for a new company.

I propose an alteration to the typical LAMP stack: Replace Apache with Nginx and MySQL with MongoDB. Personally, the reduced resource use of Nginx is nice since I can run on a smaller "box". MongoDB is just a choice depending on the data set, but it does allow for sharding out horizontally without too much effort.

The effort D.O. puts into their community education is one of my favorite things about them. The few times I've had problems with a droplet configuration, inevitably someone had already posted a solution in the help section.

Excellent writeup! Next I'd like to see an article on deployment. What if I want my development team to be able to push code changes regularly to an app cluster via a git-based workflow, and have these deploys all occur with zero downtime? I think an article demonstrating how to use modern deployment tools such as Ansible or Docker to achieve those goals in a commonly used programming environment such as Ruby would lure quite a few developers away from PaaS towards something like DigitalOcean.

For now, though, those tasks are still "hard", which means that for many developers DigitalOcean is still hard to use relative to other emerging platforms such as Red Hat's OpenShift or Heroku. I know there are many shops who would love to jump ship from PaaS to a less expensive platform, but they feel the cost of rolling their own zero-downtime clustered deployment infrastructure is not worth the $ savings.

I suspect that if IaaS providers were to dedicate resources towards producing more educational material for developers with the aim of demonstrating how to achieve these deployment objectives on all the popular platforms using modern open source tools then loads of PaaS developers would jump ship.

For example: how can I use Ansible to instantiate 5 new droplets and automatically install a load-balancing server on one of them, while setting up the Ruby on Rails platform, with Ganglia, on the remaining ones? How can I run a load-balancing test suite against the newly created cluster, interpret the results, and then tear the whole thing back down again, all with a few keystrokes? How could this same script allow me to add additional nodes, and how does the resulting system allow for the deployment of fresh application code? How can it be improved to handle logging and backup?

I know it's possible to create a deployment system that answers the above questions in less than a few hundred lines of Ansible + Ruby, so I imagine it could be explained in a short series of blog posts, but you would probably need to hire a well-paid DevOps guru to produce such documentation. I bet if you ask around on HN...

This makes perfect sense in light of the growing rivalry between Apple and Google in the devices space (Android and iOS), and as such, it's a brilliant strategic move on Apple's part. DDG has made the first real inroads in search in over a decade, and it has done so quietly.

Snowden. People are now privacy-conscious, and aware of the tremendous amount of information disclosed in Web searches. DDG's huge traffic spike following the Snowden disclosures is testimony to this: https://duckduckgo.com/traffic.html

Google is the one to beat. Microsoft, Apple, Amazon, Facebook, Rackspace, and others see Google as their primary competition. This creates an alignment of interests among them.

Google has shown vulnerabilities. Failures to execute on a string of social efforts (most recently G+), an increasing sense of distraction, and possible signs of weakness in its core search business all suggest a vulnerable underside to Google. DDG isn't big enough to cause real damage yet, but it can certainly get Google's attention.

Democratization of search. There was a time when massive datacenter investments were necessary for search. That's no longer the case, and DC infrastructure keeps getting cheaper, both of which cut away at Google's core competency and advantage.

Google's lost its favored status among the technorati. While it's not clear who's won that crown, there's an increasingly strong sense among many that Google has failed at its "don't be evil" pledge, has disappointed users, and simply doesn't have the chops it once demonstrated.

Specialized search is making inroads. OpenStreetMap is taking on geosearch, Wolfram Alpha and Knoema specialized data search, Wikipedia is a basic more-or-less-trusted repository of actual information (as opposed to random Web sites), Amazon is a product and bibliographic research library. There are places to go for information which, if you've got a specific interest, are better than Google, and they're carving off bits of the search market.

So, yes, for the first time in 15 years, search looks like it may be ripe for a bit of disruption.

Don't get me wrong: Google does some things amazingly well. Date-bounded Web searches still draw me back (I did some here to turn up a few of the more obscure search contenders from the early 2000s), the Google Books Ngram viewer is fucking awesome, Google Trends isn't bad, and a few other elements. Reliability of Google services is amazing. But there are chinks in the armor.

Personally I'm hoping DDG will at some point pick up the slack from Google dropping the Discussions tab (forum search). Evidently it wasn't so popular, but I found it hugely useful for researching product decisions and haven't found a good alternative. Seems like it could be a good niche to cover.

Nice, but they should open things up more and let people install any search engine. Apple did open up quite a bit today (in terms of letting others access their platform in places where Apple was previously the sole decider), so it's not unlikely to eventually happen.

For months I have been in transition to DuckDuckGo. The only missing bit was native search in Safari for iOS and OS X. This definitely seals the deal. And it's good for the whole industry that a viable second choice exists, to bring balance to the force and some humility to Google's monopoly in search.

I really, really want to like DDG, but I can't seem to get away from a need to !g. Further, though it's obviously a loss in privacy, the quality of search results returned by a "context-aware" Google is just leaps better. It's a trade-off, I guess. I know HN is very much in support of the primacy of privacy... but there is no doubt search and ad relevancy go down in anonymity.

This is fantastic news for DuckDuckGo and for Apple. A nice solid search engine coupled with a nice new redesign. If I were Google, I would be a little bit worried that DuckDuckGo is steadily stealing users. They converted me.

So it looks like the language isn't open source and won't target non-Apple runtimes?

I'm not trying to troll, I just think that it's a pity that Apple tends to limit the ecosystem and applications of its otherwise-great languages. Building against LLVM ought to make it fairly trivial to make this cross-platform.

Feels like they looked at a bunch of programming languages, took all their favorite features, and then put them into one which still sits on top of the ObjC runtime. And then added some Apple syntactic craziness.

"The debugging console in Xcode includes an interactive version of the Swift language built right in. Use Swift syntax to evaluate and interact with your running app, or write new code to see how it works in a script-like environment. Available from within the Xcode console, or in Terminal."

Does this mean that we can use the Swift REPL in the Xcode debugger to explore running ObjC programs? That would be enormous fun, not to mention very powerful.

This will help young programmers solidify the connection between giving the computer logical commands and what is immediately output on the screen.

Reminds me of how excited I was when Processing (http://www.processing.org/) was released, which made it dead simple to interact with the screen and graphics. It didn't have live feedback, but it made it incredibly easy to understand OOP.

About time! Objective-C was becoming increasingly inadequate for Apple software development, and Apple knew it. They finally decided to do something about it. If Swift becomes as popular as Microsoft's C#, that would be a huge win for both Apple and developers.

Guys, I'm not sure if anybody's noticed yet, but, call it a coincidence (or a copy), another programming language with very similar features already exists. It's not C-based and some other things are quite different, but, yeah, see this: http://www.linux.com/news/featured-blogs/200-libby-clark/725...

I can't find anything in the book or on the website about concurrency. I'm not familiar with the Apple programming ecosystem, how is concurrency handled? Is the lack of language-level support a concern or is everything shuffled off into libraries and the runtime?
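As far as I can tell, concurrency in this first release of Swift isn't a language feature at all; it's shuffled off to the runtime and frameworks, chiefly Grand Central Dispatch (libdispatch) and NSOperation, exactly as in Objective-C. A minimal sketch of what that looks like from Swift, assuming the C-style GCD bindings of the initial release (the two helper functions are hypothetical):

```swift
import Foundation

// Hypothetical helpers standing in for real work.
func expensiveComputation() -> String { return "result" }
func updateLabel(text: String) { println(text) }

// Kick work onto a background queue via Grand Central Dispatch...
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    let result = expensiveComputation()
    // ...then hop back to the main queue before touching any UI.
    dispatch_async(dispatch_get_main_queue()) {
        updateLabel(result)
    }
}
```

So the answer to "is everything shuffled off into libraries and the runtime?" appears to be yes, at least for now; whether language-level support arrives later is an open question.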

Anything that provides access to GPU command queues is welcome. It's been clear for a while that OpenGL and D3D are ill-suited to modern ways of thinking about GPUs. This also explicitly supports multithreaded clients, each with their own command queues.

My concern is drivers: drivers for video cards on OS X haven't been as good as Windows drivers. That, and of course the specter of another platform-specific API. This invites a comparison with Mantle. I don't think either Metal or Mantle will "win", but they're good prototypes for the next generation of GPU APIs.

The fragmentation of OpenGL is enough of a headache, but at least it offers some semblance of "write once, run anywhere." The introduction of Mantle and Metal, plus the longstanding existence of Direct3D, makes me worry that OpenGL will get no love. And then we'll have to write our graphics code three, four, or goodness knows how many times.

I know: It's not realistic to expect "write once, run anywhere" for any app that pushes the capabilities of the GPU. But what about devs like me (of whom there are many) who aren't trying to achieve AAA graphics, but just want the very basics? For us, "write once, run anywhere" is very attractive and should be possible. I can do everything I want with GL 2.1: I don't need to push a massive number of polys, I don't need huge textures, and I don't need advanced lighting.

This is more surprising to me than Swift, and it will make for difficult platform decisions. But since four game engines are already working with it (Unreal hasn't committed), maybe it's not a bad idea at all.

You can thank AMD for this one. This is exactly why I supported Mantle initially: not necessarily because I thought Mantle would replace DirectX and OpenGL, but because it would push all the others to adopt similar changes in their APIs.

And this is exactly what happened, first OpenGL (through AMD's extension for now at least), then Microsoft with DirectX 12 [1], and now Apple, too.

Before you get too excited, though, remember Mantle "only" improved the overall performance of Battlefield 4 by about 50%. It can probably get better than that, but don't expect a "10x" improvement or anything close to it.

This seems to be Apple's answer to Google's RenderScript. It's too bad big companies (Google, Apple) are developing their own GPU software stacks instead of building upon and furthering existing frameworks such as OpenCL. OpenCL desperately needs a kick in order to catch up with CUDA; instead they are focusing on things like SYCL, hoping to catch up with already-superior projects such as C++ AMP. OpenCL should rather fix its poor specification and get implementers on the same page about it. The mobile community could have been a driving force. Instead, frustrated with what OpenCL is, mobile decided to roll its own. As always.

Saw lots of HN news on Apple. I own absolutely no Apple devices: while they are well made, the software ecosystem that comes with them is similar to what Microsoft had before, i.e. essentially closed, and that is important to me. It's what made me own zero Apple devices.

Am I alone here? I'm running Linux everywhere from home to my office for years, and my tablet/cellphone is Android-based.

To those wondering why Graydon is qualified in the title as just the "original" designer of Rust, it's because (as far as anyone seems to know) years of being a technical lead wore him down (lots of administrative and infrastructural tasks, little coding), and because the position of BDFL was foisted upon him unwillingly, to his chagrin. He stepped away from the project last year, though I still keep hoping that we'll entice him back someday!

He seems to generally like it: similar to Rust in a lot of ways, but a little higher-level, which is appropriate for its intended uses (iOS and OS X apps). Neither he nor I seem to think that it can replace Rust (especially if they don't open it up!), but it definitely seems like a compelling option.

EDIT: Does anyone know if it's possible to try it without a paid iOS / OS X dev license? I'd especially like to try the playground, but I'm not willing to dump $100 on it.

A lot of the features allegedly inspired by C# actually come directly from Objective-C. It's a decent discussion (I also skimmed the manual front to back after the presentation) but it's sad the writer appears ignorant of how advanced a language Objective-C is/was, especially since it significantly predated C++, let alone Java and C#.

He doesn't seem to know much about iOS or Objective-C, judging by that comment about parameter names probably being unused. You wouldn't be able to form meaningful method selectors when calling into the code from Objective-C without them, since the parameter names are part of the method name in Objective-C.
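To make that concrete, here's a small sketch (class and method names are hypothetical) of how the external parameter names carry over. In Objective-C they are literally fused into the selector, and Swift keeps them so the bridged method name stays meaningful:

```swift
import Foundation

// In Objective-C, the parameter names are part of the method name;
// the call below has the selector insertObject:atIndex:
//
//   [list insertObject:obj atIndex:2];
//
// A Swift method preserves those external names, so a bridged call
// sees the same selector rather than something anonymous:
class List: NSObject {
    var items: [AnyObject] = []

    func insertObject(object: AnyObject, atIndex index: Int) {
        items.insert(object, atIndex: index)
    }
}

// Called from Swift with the external name spelled out:
let list = List()
list.insertObject("hello", atIndex: 0)
```

Drop the names and there would be nothing left for Objective-C to build a selector from, which is exactly the commenter's point.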

Does he know anything about Obj-C? Because a lot of what he attributes to other languages was already well established in Obj-C, like named parameters, "protocols", etc., which he barely mentions. I think in light of Obj-C, the newer C# would deserve fewer mentions.

I think the author is seeing a little more than there actually is. There is a lot more overlap between Swift and C#/Java than with Rust. Actually, I see very little Rust in Swift, except for very trivial features that are present in 90% of C-based languages.

"Protocols get to play double duty as either concrete types (in which case they denote a reference type you can acquire with as from any supporting type) and as type constraints on type parameters in generic code. This is a delightful convenience Rust stumbled into when designing its trait system and I'm glad to see other languages picking it up. I'm sure it has precedent elsewhere." - C# again.

PS: Apple says "Swift is an innovative new programming language"! Just like everything else they do lately!

"I started work on the Swift Programming Language (wikipedia) in July of 2010. I implemented much of the basic language structure, with only a few people knowing of its existence. A few other (amazing) people started contributing in earnest late in 2011, and it became a major focus for the Apple Developer Tools group in July 2013.The Swift language is the product of tireless effort from a team of language experts, documentation gurus, compiler optimization ninjas, and an incredibly important internal dogfooding group who provided feedback to help refine and battle-test ideas. Of course, it also greatly benefited from the experiences hard-won by many other languages in the field, drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list."

Yeah, that's all good. But why call people who understand that programming environments should not be opaque (and should not violate elementary laws of design by obscuring readings of the system's state) "livecoding nerds"? Really, why? :)

I think this is the most important single line of the piece. G+ was pretty broken from the get go despite some promising ideas. But instead of focusing around what was working, Google simply amplified all the broken garbage -- then spread it around everywhere, making everything toxic, cancerous.

It's one of those many weird cases where you sit there, hands on your desk, mouth agape looking at some Google property that was fucked over by the G+ project and just ask yourself "doesn't anybody at Google use this garbage?". Because the issues are so immediate and so obvious, it's impossible that nobody raised some red flags.

Which leaves two possibilities:

- Google is composed of such inept socially awkward people that no red flags were raised and they all just proceeded on course doo dee doo doo dee (a scenario I find very hard to believe)

- Red flags were raised and simply brushed aside.

The first scenario is hard to believe because it presumes mass and gross incompetence on behalf of most of the employees at Google. But I know Googlers, I've been interviewed by Google, I've had various interactions with people from Google, and most of them just seem like normal folks from a variety of backgrounds.

As more and more leaks out, it sounds like the second scenario is where it's at, and the question is why. Was it just some dumb-headed attempt to extract any money possible for the major shareholders by turning the brand into garbage? Or was it an honest attempt at unifying the properties, just managed at an absolutely amateurish level?

It's all so senseless and stupid and now everything is broken.

The sad thing is, this is something I see all the time: one hopelessly broken pet project is carried by the good-idea fairy to some senior manager, and they begin a cascade of failures across the rest of the company on something they have probably convinced themselves is just a big gamble with lots of upside. By the time the damage is done and widely recognized, the exec is out the door on their golden parachute, leaving the remaining veterans to pick up the pieces and unfuck things. Except in this case, the ultimate party responsible holds half of the majority voting rights and continues to blissfully push socially inept product ideas. The only remediation is a long unfucking process and some possible minor impact on share price, meaning he can only buy two 300' yachts instead of two 350' yachts.

I don't get it. So Brin announces to the world that Google+ is the new sliced bread in 2011. Then he tells a small group of people that he thinks he personally should not have been involved with G+. Also the leader of the G+ project leaves (one month after the project started? One month after Brin had his candid talk recently?)

Where in this is the broken trust? What is the author actually upset about? Seems to me like in 2011 Brin and co. thought Google+ was the future. Now Brin simply is admitting that he personally might not have been the right person to take this on and perhaps it was a bad idea (not clear from the poorly written article). On top of that the author is trying to make a story out of the project leader leaving precisely because there was no story there.

I think this piece is terribly written and there is no story behind it. G+ is not my favorite product but I think this is just an outburst of anger that does not deserve our attention.

I jumped on board early when I received an invite. I despised FB and was looking for something that might actually resemble the tribe.net model of freedom and anonymity.

Unfortunately, I was told that I had to use my real name and signed up accordingly.

Everything was going along fine for about the first 9 months until I got into a small flame war with a woman in Canada about Scientologists (I used to work for some). That turned out to be the end of G+ for me.

It seems that the woman reported me for using a pseudonym, which, as me and a few of my friends knew, I obviously wasn't. I was livid! I immediately protested loud and clear in my timeline. One of my "hooped" IRL friends works at Yahoo! and told me that he had good connections at Google and could probably fix it for me, and that if he couldn't do that, he could at least vouch for me.

As he was trying to work his magic the pressure from Google was getting stronger. I had a big notice across my profile telling me that if I didn't provide legal proof of who I was that my account would be suspended in a week. I received the same threats in my gmail. So I started trying to work with them on this matter only to find that I was dealing with bots. I was beyond frustrated!

A few days later my friend came back to me and told me that there didn't seem to be much he could do. I sure as hell didn't want to send them my ID or birth certificate! So I caved in, scanned a court document with my full name on it and a judge's signature, and gmail'd it.

I should mention that by this point Google had decided to lock my profile and place a huge notice across it demanding documents.

It took almost a full 2 weeks for them to get back to me and say that my document was legit. Well, duh!!

With my newfound "legal" status I continued to use G+ for about another year or so. But as time marched on I became more and more disillusioned with Google and their products, and interacted less and less with G+.

Then June 5th 2013 happened and I was introduced to the world of Edward Snowden. I immediately went and deleted everything from my profile and timeline (no small chore!). I then put a notice on my "about" page stating that, due to privacy issues with Google and the NSA, this account was no longer active.

I now only use my gmail account, have been a happy DDG and IXquick user since before this all went down, and haven't been back to G+ since.

As an intensive user of Google products everywhere, I feel the Microsoftization. This includes documentation full of corporate jargon, the bloated and confusing Hangouts fiasco, and the frustrating way of connecting multiple identities together.

Did they hire corporate UX/UI/branding/marketing/documentation people from Redmond recently, after Larry Page took over as CEO?

What made Google special in the past was having principles and walking the talk.

Those were the days when AltaVista wanted to force people into watching noisy pop-up advertisements with annoying colors before you could search anything, and this small company decided to just display text.

The days when everybody was into portals, trying to enclose the web inside gatekeepers' hands, and Google brought freedom.

Those days are over. Just the other day I had them trying to change my name in Gmail and to complete the information I gave them back when Gmail was invite-only, like my birthday or a picture of me.

When I refused, I had them INSULTING ME!! Something like "it seems you are so alone". Wow: if you don't use their "social private web", or any other social site, you are alone, even if you have a blog with thousands of people visiting and real friends you can talk to, kiss, or hug.

My problem with Google+ isn't the unification of Google's social platform. That makes absolute sense. The problem with Google+ is that it's way too opinionated.

It's a platform, not a product. A platform has to bend to the needs of its users, and those "users" aren't necessarily the people posting the comments - it's also the people hosting the comments on their YouTube pages and whatnot.

I appreciate wanting Plus to be backed by a "real" ID, but pseudonym support that fully anonymizes the user (and controls over whether pseudonymous users are allowed to post to your pages) should have been a day 1 feature, for example.

Using and loving G+ and Hangouts to this day. I like how easy it is to get a group chat or a video chat going in the browser, and I like the increased control over sharing I have, especially relative to FB. (I can even share with people who don't want to log in because they hate the service. My FB friends cannot.)

I'm trying to sift through the complaints to see if they're relevant to me, but haven't had much luck so far.

1. Everything

Complaints on the order of "it broke everything" just seem hyperbolic and silly.

2. Nymwars

I think they should allow pseudonyms, but I don't blame the company for trying to build something tied a little tighter to real-world identities after fighting a decade-long war against fraud and spam behind the scenes. I feel it's within their prerogative to say they're building an identity service, because pseudonym-based logins are already widely available. Faulting them for that choice is a bit like saying you don't like Gmail because you think email is stupid.

3. YouTube

Among the other major complaints is that they broke YouTube comments, i.e., the worst den of inane and offensive comments on the internet since 4chan. Good for them; the team deserves a medal.

4. Adoption

Probably the other tacit criticism is that Google launched a service that didn't immediately trounce all other social media sites, delivering everything for everyone. It's used by a mere 350 million people. It's been criticized for that number being only a third of its registered base, but that seems perfectly on track or better than estimates for other social media sites. Twitter's active userbase is probably roughly 20%, for example.

It's weird that a site with 350 million active monthly users is considered an embarrassing failure. I'm sure lots of services would be happy to trade userbases with G+.

It had a few cool features. It wasn't world changing. I feel like it hit some of the Segway effect: a victim of its hype more than of its failings.

5. Aesthetics.

I feel this is the most inarguable complaint. Some people don't like the style of G+, don't like its approach to usability, or find its sharing system needlessly complex or confusing. By all means, these individuals should not use the service. I don't like the look and feel of Pinterest. I shouldn't use Pinterest. To each her own. I worry some authors subtly shift this argument from "I don't like the feel of it," or even, "My friends don't like it," to "It is a failure of design that no one should use." Seems a bit unfair.

I believe there are good usability guidelines, but I don't subscribe to the belief that there is a perfect one-size-fits-all, that all implementations of any service will eventually converge to one platonic form. Competition is good because we all like different things, and each of us finds different styles more intuitive.

I'd be happy to consider other arguments, but so far allegations of the service's abject horribleness seem somewhat exaggerated.

I think it is pretty refreshing for an executive to be self critical and admit big mistakes. Sergey sounded authentic in that interview.

Scott Forstall was axed for Apple Maps, but seriously: you rewrite a Maps service from the ground up and race to release it in iOS 6, of course it's going to be beta quality for a long time, since these things take time to mature. I highly doubt the decision to include it in that state was solely Scott's.

I like to see companies admit major strategic mistakes as opposed to pretending everything is awesome for all time. (And no, Tim Cook's letter was a kind of non-apology; only a single sentence really admitted any mistake: 'We fell short of our commitment'.)

To change the status quo you need to provide enough value to motivate that change:

Google Search: the search experience was completely disrupted. From that moment, people could focus on what they needed (no disturbing ads) and be more efficient.

Gmail: Google innovated and greatly simplified the email experience. You can easily measure the importance of Gmail to people by the importance of Gmail to the Google brand.

Chrome: as an early adopter, I could especially feel the speed difference. I always knew it would only be a matter of time until Chrome controlled the market.

Google+: I never understood what value Google was adding to social networks. Facebook at the time didn't need to be disrupted, either. After some time G+ went in the direction of LinkedIn, but couldn't add enough value there to make people change. IMHO, Google+ weakens the Google brand. As simple as that. Should it be closed? That is a good question.

For some reason, my email address is now linked to a name that is not mine. I've not yet bothered to figure out how to change it, but I wish for the sake of trans people that this error had been more common.

> OPINION: One month after creator and leader of Google+, Vic Gundotra, quietly quit, Google chief Sergey Brin told a conference audience last week that involvement in Google+ was "a mistake." He made the exact opposite statement in 2011.

Whose involvement are we talking about here? Brin's, Gundotra's, or Google's?

> If only someone could have stepped in and course-corrected Google+.

>

> Oh, right. Someone could have.

>

> The same someone that just told the world, "heh, oops" and walked away to go retreat back into himself, and play with his cars.

Is that someone Brin (who could have, and has plenty of money to buy cars) or Gundotra (who could have, and left the company a month ago, with enough money to play with cars, I suppose)?

I wish both Google and Microsoft would understand that you can't force change down users' throats. It needs to come naturally. They need to want it, and have it grow organically.

Sure, forcing them will definitely bring you bigger "adoption" (for lack of a better word) faster, but it will also build up a lot of resentment, potentially negating, in the long run, any advantage you might have gained from ramming the change through.

A lot of people didn't understand Twitter in the first 3+ years, but it still managed to grow organically, because people wanted to join it over the years. Google tried to push Google+ to its 1 billion users within 2 years, with seemingly very little advantage for the users. What did they expect?

Same for Microsoft when it comes to pushing Metro to PC users who have been perfectly happy with their PC interface, but Microsoft wanted to force them to use a tablet interface on a PC. Why? Because Microsoft said so, and because they would get to flash "bigger numbers" to developers for "Metro users". The actual experience of the user on a desktop was barely a distant concern.

If you're a big corporation, and you can't grow a new business organically, then tough luck. Maybe you shouldn't be in that market then.

I'd love to see Google+ turn into a LinkedIn and Facebook killer. I just have no use for Google+ at the moment. I don't really like the tiles display and would prefer a list.

The YouTube integration doesn't bother me at all because 1) I don't post YouTube comments, and 2) it's easy enough to just create a separate account for using with services that you don't want associated with your main Google account.

It sounds like Sergey is saying that his involvement in Google+ was a mistake, not that the company going down that path was a mistake. I think the author is taking the word "mistake" out of context:

"It was probably a mistake for me to be working on anything tangentially related to social to begin with."

And BTW, I am thinking that the EU privacy-policy fiasco is probably related. And to think that the EU courts recently ruled that current data protection laws require search engines to remove results on request.

"Brin told ... that ... he was kind of a weirdo and "It was probably a mistake for me to be working on anything tangentially related to social to begin with." - I respect him more now. Being late to market was probably a bigger mistake though.

Google+ made it just a little too clear that Google is in the business of remembering everything about those who interact with it.

The attitude of "Google knows best what's good for you, and doesn't have to justify itself or even acknowledge your objections" also doesn't mesh with what a social network should be, in the minds of many.

I don't think that G+ was a mistake. The only real issue is the real-name policy, though I fail to see how they'd be able to enforce it. People could create alternate email addresses with fake names (and some did), and use them when they want to participate in social "i-events" without giving up their ID. It was like that before G+, and it would only take a small move from them to correct it. Of course, the downside of this is that they wouldn't be able to claim a number of real users. But could any social site?

Btw, am I the only one who finds the article title offensive, and unworthy of a place like ZDNet? I wasn't a regular reader of their columns, and I don't think this will help.

If you like e.g. "single sign-on", it should be your choice to set it up and participate. Not coercion. Not coercion that holds your existing investment in various products (of which Google was and is acquiring ever more) hostage.

If what you are offering is of benefit to your users (should I use the word "customers"? -- a whole other discussion), you should be able to sell it to them -- on an "opt-in", "I'd like to use this feature" basis.

As Google+ rolled out, it became evident that it was anything but this.

True names. Then the stories -- accurate or not -- of account deletions.

I was damned if I was going to risk my longstanding Gmail account for the sake of trying out Plus. Fortunately, the integration was not so quick and thorough that I was at that time compelled to participate in Plus in order to keep that account. (Sign up for Gmail now, and you get a Plus profile, like it or not.)

Plus has some nice technical features, and some of the conversations I intersect (under a separate Google identity that I can afford to lose) during my limited interaction with it consist of more thoughtful and interesting content.

But I'll never trust it -- Plus, that is.

Google showed us all, with Plus, the limits of their advocacy for us, the users.

I think many of the policies that have changed toward requiring the use of a real name are intrusive to privacy, but I have no pity for people who require privacy and lose it after willingly continuing to use a service that is known to conduct such practices.

That's a lot of counterhate. Not wholly unwarranted, but severely one-sided.

Corporate products are not for dissidents or the privacy-focused. Period. The end. You need to find alts designed to be private and/or pay to not be subsidized for the profitizing of YOU - whoever you are or want to be.

Google wants to fold you into their walled garden by tilting all their products towards each other. Shocking. I can't think of any other... oh yeah right... EVERY massive tech company does this. Otherwise one of the other massive tech companies will eat their lunch within 10 years. You are the frog. They are the scorpion.

Also, part of the force behind G+ is something you're seeing grow widely: enough people are harmed or disgusted by the level of gaming of anonymity that the trolls have achieved that the real-identity movement has grown pretty quickly.

I doubt large corporate interests will be able to find it profitable, over any minimally significant span of time, to preserve privacy and be a platform for social change/justice. The unintended consequence is also being a platform for the lulz. Don't be evil meets don't be bankrupt. If your platform is a cesspool, nobody will pay to swim there.

Oh look, it's another goth chick blogging about Google and what's bad and whatever... Go back to your cave... no one told you to get involved with Google+, and Google won't go down with you just because you based your life around Google+...

Why do people nowadays take everything for granted... Guess what: 90 years ago, people were going to the toilet... hmmm, in their GARDENS!

Call/SMS integration is great. It's worth noting that Google had absolutely everything they needed to do this years ago, and just... didn't. Hangouts is still inferior to iMessage today. It's a real shame.

Edit: this extensibility stuff might be enough to tempt me back to Apple from Android, at last. Third party keyboards, too (I've gotten quite attached to the Android swiping stuff). Honestly, at this point, I'm not sure what keeps me on Android. I confidently predict that iOS Active Notification usage will be far higher than on Android, even though Android has had it for years.

Health: it's stepping on the toes of many partners, but it might be groundbreaking. Healthcare is extremely hard to crack; it's a very closed, defensive system of people and bureaucracy. Apple might just have the power to do it.

Extensibility: intents are basically _the_ reason Android can work so much better in many cases than iOS. I hope MS will bring it to WP very soon.

Still no user accounts on iPad. The only thing I want, and 8 versions in, it's still not there. Why can't I create a login for my kids on my iPad that hides my mail, calendar, certain games, etc.? I don't understand why this isn't possible.

Currently, I use a screen-shotted contact screen as my wallpaper for my ICE contact - just in case the worst were to happen. This will let me put more information, and might even let me have a wallpaper again!

So so happy to see the SceneKit API make it to iOS. Even though 'minor' compared to some of the other announcements, it was the number one thing I was looking for in today's keynote, and it was nice to see it featured. Can't wait to start using it.

Apple's iOS support for older products has been stellar, though I'm betting iOS 8 will be the last update the iPad mini, iPad 2, and iPhone 4S receive -- they will have had a good run of four years (except for the mini) by the time iOS 9 is released.

I'm a long time hobbyist programmer, got my start back in the days of Apple IIe, got my first Mac in 1984...and I'm still not switching back to iPhone until I can write my own software and run it on my own phone without paying Apple for the privilege.

I'm waiting for two simple words: "Unknown sources". Guess I have to wait some more. Not sure how low Apple's market share will have to go before they start allowing it.

Can't say I love programming for Android, Java just doesn't feel right to me, but I'm sticking with it as long as I can write my own software, run it on my phone or tablet, share it with others, even sell it without Google's permission.

I was really hoping for split-screen multitasking (which Windows 8 on tablets does a good job of). I heard it was possible, but was having problems getting out the door in time. I really hope it comes out in the final version of iOS 8, because that's the one thing that would tempt me to get a Surface over the next iteration of the iPad.

It wasn't made clear in the keynote, and the page doesn't mention it, but I hope this includes sharing contacts with "Family Sharing". That'll be huge for helping my older family members keep a coherent address book.

"Touch ID - For the first time, you'll have the option of using Touch ID to sign in to third-party apps; there's no need to enter a password. Your fingerprint data is protected and is never accessed by iOS or other apps."

>"Plus, it also knows who you're talking to, which is crazy. By knowing who you're talking to, it will send up predictions that are right for the type of conversation you have with that particular person."

This is a bit scary... This means that Apple not only knows who I talk to but now actually maintains an index of how I talk with everyone. 1984 is getting closer and closer.

Dear Apple, please for the love of god and the good of everyone, get together with Google and iron out a common protocol for this stuff. Don't make this one of your competitive technologies designed to fragment the world into Apple and not-Apple. Home automation is just dying to take off and there's a pile of gold for everyone if you just show a tiny bit of cooperation to get it started ... And you can all still sue each other afterwards if you like about the design of the light switches or whatever turns you on, but can we please just let the industry move forward first?

Here's hoping the "common protocol" they mentioned in the keynote is Z-Wave (strongly suggested based on Tim's use of the word "scenes" and the vendor list they put up) and not some Apple-proprietary garbage. I'm still bitter about FaceTime being an 'open standard'.

The critical part here is not standards, and not even usability from a graphical UI perspective, but who is put in control by the technology. Home automation has always faced this difficult hurdle. In most families, the light switch, the thermostat, etc. belong to everyone who wants to use them and who is near them. I like Siri being in control, but she will have to be like a butler who can bridge conflicts if this is to make it beyond single-person households into family homes.

Google and Apple are rapidly moving into an area with a lot of activity. There has been a recent surge of home automation startups and platforms like Revolv (http://revolv.com/), SmartThings (http://www.smartthings.com/), and WigWag (http://www.wigwag.com/). One of the problems they are addressing is how to merge multiple different home automation standards (Z-Wave, ZigBee, Bluetooth Smart, 6LoWPAN). Presumably the new Apple HomeKit will be based on a single standard, Bluetooth Smart, given the Apple / iBeacon rollout.

There are also a bunch of more generic development platforms around, like relayr (http://relayr.io/) and Thingsquare (http://www.thingsquare.com/), that are targeting the device manufacturers directly. Will be interesting to see what impact Apple will have on the growth of this market, and the technology choices. Apple isn't always right (and neither is Google).

From what I've seen, it looks like you will modify your existing iOS app to register with the HomeKit subsystem. This will give HomeKit hooks into the existing control functionality in the app; e.g., Nest will register that it has a thermostat and how to turn the temperature up and down.

HealthKit. HomeKit. I think we need to resolve the ethical and regulatory issues in our industry regarding government/marketing use of this data before it's prudent to use any product based on these technologies or similar ones.

Now if a ton more hardware manufacturers would start designing around automation - like a washing machine that automatically started drying your clothes next, and an app that could see how dry they are and ask if you want to keep going, etc.

I hope this is not just handled through Bluetooth, but can be accessed via wireless as well. The idea of being able to leave air conditioning off in my house until I'm leaving the office, or to turn off all lights while outside of my home sounds incredible (or check to make sure that they're off). But if I have to be within my home to do everything, it'll turn this from a must-have for me to a nice-to-have very quickly.

Don't mean to spam, but can the open-source world just do something better? Clearly Apple is going to use some super Apple-only thing... and always will.

I tried a while back, and it looks awfully similar to what Apple is doing... I use it all the time; it manages my lights, my server, and my IR electronics, through the UI or on a schedule. Has macros, etc. It doesn't have the fancy detection features, although at the time those were just starting to be talked about. Wouldn't be a hard extension. https://github.com/dandroid88/webmote

There are better ones out there, but my main point is: if some recent grad can hack this out on some nights and weekends, why are we waiting for Apple, and Verizon, and AT&T, and Comcast, and Google...?

I've been working on my own pile of HA scripts, hacks, and controls -- just for the fun of it. When I heard Google was buying Nest and a security camera company, and now Apple is jumping in, I'm glad I'll have my own. I don't care for either company to know exactly what is going on in my house. I do want my house to yell at me when the A/C is running and windows are open, or the garage door is left open when I leave. There will always be a market for the big names, but this is one cloud I'll keep out of my house.

Building automation is already available for all your automation needs. Quite expensive, but it works well. I cannot see what Apple brings to the table other than control devices. Big automation vendors will probably just write a wrapper for the custom Apple protocol and be done with it.

So in a symbiotic bacteria/fungi/plant ecosystem, I can understand why CO2, H2O, and O2 might be able to remain in balance. But decomposition produces not only CO2, but CO, CH4, nitrate chemistry... I have trouble understanding why this cycle isn't at least a little bit leaky, since it seems like some of the trace gasses wouldn't automatically have all of the biological, solar, and marine sinks available in nature. I would be interested to know how the internal pressure of this experiment fluctuates due to phase transitions, especially chemical reactions which could potentially produce phase transitions that are not thermally or biologically reversible in the vicinity of STP.

What I'm more amused by is the level of scepticism over a self-contained ecosystem. For any ecosystem, contained or not, to persist, there has to be much greater tolerance of extreme conditions than we generally acknowledge. In truth, outside of absolute extremes where organic life is simply impossible because of either denaturing or outright destruction of organic material, life will exist. It's also entirely possible that this bottle now contains bacteria, fungi, or other organisms that are much more efficient at breaking down the organic material left by the dying plant matter.

A clear balloon containing an ecosystem would heat up due to the greenhouse effect. If it's sufficiently large and allowed to slightly expand it could float freely in the air. Imagine closed gardens flying freely around the world.

As someone who keeps both fresh-water and salt-water aquariums, it makes perfect sense to me that this is completely plausible. I think most aquarists are well aware that bacteria are the most powerful force keeping our aquarium ecosystems in balance.

By mass, the vast majority of living material in that bottle is almost certainly bacteria, some nitrifying, others denitrifying, and all of them decomposing, consuming, and recycling the various chemistry within.

Think about the earth as a sealed globe within space, and you start to understand how good a job bacteria does at balancing the chemistry of life.

I've actually seen something similar to this at a local museum called an "Ecosphere", which includes tiny shrimp living inside. They've been known to last for over ten years.

Assume the garden is somehow provably what it is claimed to be. Further assume that it is not cleaned manually because the usual culprits for blackening the inside, by pure chance, happened to be absent or long dead.

I am curious how one might go about reproducing this exact ecosystem in other bottles by cloning the original, without adding contaminants. Under these assumptions selling clones could become a commercial proposition.

Most aquarium plants grow very well this way, better than when they are submerged in an aquarium. Aquarium plants have two distinct forms, submerged and emerged, and the foliage can look completely different. The emerged form does better, since it doesn't need to compete with algae and CO2 is more available. To grow your own, fill a pickle jar with damp Miracle-Gro organic potting soil, plant whatever aquarium plants you can get your hands on, seal the jar, then enjoy.

So it has had water, and presumably some atmosphere was exchanged when it was watered in 1972, 40 years ago. The title is somewhat misleading, but 40 years is still a long time.

The stopper doesn't look like it's particularly well fastened and could potentially pop up and allow leaks. Impressive, but without closer inspection it looks like a slightly flawed execution to me. Maybe I'm just being pedantic. A picture of the starting point would be nice.

I was wondering how easy/difficult this phenomenon is to study in depth. If I wanted to examine any organisms in the jar, I would have trouble doing it while simultaneously maintaining the ecosystem's independence.

"A good software project is never answered. It is not a bolt to be tightened into place but a tendril seed to be planted and to bear more seed toward the hope of enfoliating among the world wide landscapes."

The emulator lets you do interesting things, like experimenting with mesh networking, that would require quite a lot of hardware to try for real. (Plus it's a lot quicker than flashing 15 nodes every time you make a bugfix!)

The last time I read about Contiki, all of the screenshots were running on a C64. And it looked awesome! It made me want to play with it on my C64. The current website is all boring network simulation stuff. Looks like a corporation.

But I'm happy to hear Open Source continues to make inroads into the embedded space. There are billions of devices out there, some of which people's lives depend on, running a terrifying array of proprietary and unmaintained software that is potentially broken in subtle (or not so subtle) ways.

It's interesting to see that Contiki is taking the lead in this space, since it was once going toe-to-toe with another open-source OS for wireless embedded networked devices, TinyOS. TinyOS had a large following in the research community, and I believe it was used in several commercial sensor network deployments by Dust Networks and Arch Rock, and by at least one other Korean startup company that I believe is still using it in their deployments.

Adam Dunkels has done a nice job of pulling Contiki out of the obscure research community and into the commercial space, and is riding the "internet of things" wave right now. We'll see if it lasts. I'm not familiar with what developments have taken place in the OS since maybe 2010 or so.

I'm using this wonderful operating system for my own side project: LED juggling props.

Aside from the strong hardware support and large community behind it, Thingsquare has recently released its slides from their training classes on Contiki, which give an excellent overview. Porting an already existing platform to my own custom hardware has been relatively painless compared to Linux or an RTOS, though it is difficult to make Contiki's makefile-based workflow work well in an IDE.

Cooperative protothreads are surprisingly easy to work with, and the IP/mesh networking stack is highly configurable at each layer. Combined with an excellent overall code quality, this is the very first open-source project I've ever really wanted to get involved in.

I am unsure about this. Its big advantage is its size and that it does not require as much hardware support as Linux (e.g. an MMU). The disadvantage is that it's not a *nix, so you lose the whole ecosystem (no POSIX).

In my opinion, the space it occupies (the "Internet of Things") is not well defined, and it may well be cheaper to use something like a full-blown small computer (like the Raspberry Pi) with Linux on it.

>> Contiki will soon face competition from the likes of Microsoft, which recently announced Windows for the Internet of Things [0]. But while Microsoft's new operating system will be free for devices less than 9 inches in size, it won't be open source. And Contiki has an 11-year head start.

What? Why even mention Windows here? These two OSes aren't even close to being in the same category, other than sharing the price tag of "free". I'd like to see Windows try to run in under 128 MB, let alone the 1 MB for Linux or the mere kilobytes needed for Contiki. A Windows mention here seems very out of place.

> In the Sarasota case, the U.S. Marshals Service claimed it owned the records Sarasota police offered to the ACLU because it had deputized the detective in the case, making all documentation in the case federal property.

The notion of "ownership" of public records is a bit tenuous to begin with. You can own the paper (except they obviously don't, and you can make a copy on your own paper), or you can own a copyright on particular documents (except that US government documents, including municipalities, are public domain).

Really they're asserting some quasi-classification right to prevent a record's release because they "deputized" the author, but it's pretty unclear under what actual statutory authority they're operating.

Yep, not an abuse of power at all. Seizing all the evidence to interfere with a legal proceeding. This seems perfectly legit to me! /s

I'd hope that the U.S. Marshals Service would have people fired for this, and that a judge would find whoever ordered it to be in contempt. However, I really expect this just to be ignored beyond the ACLU/news reporting on it. :/

"Recently, the Tallahassee police department revealed it had used stingrays at least 200 times since 2010 without telling any judge because the device's manufacturer made the police department sign a non-disclosure agreement that police claim prevented them from disclosing use of the device to the courts."

So just by signing an NDA, I'm not obligated to disclose information to the court?

Defending against this would be a perfect use of Florida State Guard, as such groups aren't controlled by the federal government. Unfortunately as of 1947, Florida now only has the Florida National Guard which is under federal control. https://en.wikipedia.org/wiki/Florida_State_Guard

For all the talk of 2nd Amendment (guns rights) advocates, I think they miss the obvious use cases: well organized (and locally controlled) militias, such as the Florida State Guard.

Serious question. At what point do you think the public truly hits a tipping point and lashes back at the government? Or are we just going to sit unorganized and go back to our facebook and nightly entertainment. Personally I think it will have to be an economy downturn worse than the housing bubble to get the public to organize.

"Recently, the Tallahassee police department revealed it had used stingrays at least 200 times since 2010 without telling any judge because the device's manufacturer made the police department sign a non-disclosure agreement that police claim prevented them from disclosing use of the device to the courts"

What complete and utter BS. Does that mean that the DA also had no idea about the use of the stingray, because the police couldn't tell them either? I think figuring out who should know (and who shouldn't) is probably very, very selective!

So, accept upon proof of claim that this wouldn't prejudice the rights of the people of the state if the court gives full formal equity to the federal marshals, and that this wouldn't be a classic case of champerty and maintenance [1].

Personally, I'm viewing this as another "pretend to fail" moment where they claim something is an insurmountable roadblock, when it's easily solvable.

People in the future are going to be so confused. "If the Americans were the first to systematically limit government power in a written constitution, why did they just start ignoring it all of a sudden?"

Author here - I didn't expect to see this here this morning. I'd intended to write a longer post :)

In any case, here are a few things I learned about Swift yesterday while building this. Please note that I have about four hours of Swift experience, so feel free to correct anything I say that's wrong.

1. To make properties on a class, you simply declare the variable on the class, e.g.:

class GameScene: SKScene {
    var bird = SKSpriteNode()
    // ...
}

2. The APIs generally have shorter names and it's really nice. E.g.

SKTexture* birdTexture1 = [SKTexture textureWithImageNamed:@"Bird1"];

becomes

var birdTexture1 = SKTexture(imageNamed: "Bird1")

If I understand it correctly, overloaded `init`s basically look like calling a constructor on the class, whereas class functions are called like this:

var flap = SKAction.repeatActionForever(animation)
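To make that distinction concrete, here's a toy class (the names are invented for illustration, not actual SpriteKit API) with an overloaded init called like a constructor and a class function called on the type itself:

```swift
class Texture {
    let name: String

    // Overloaded initializers -- both invoked as Texture(...)
    init(imageNamed name: String) { self.name = name }
    init() { self.name = "default" }

    // A class (type-level) function -- invoked as Texture.combined(...)
    class func combined(_ a: Texture, _ b: Texture) -> Texture {
        return Texture(imageNamed: a.name + "+" + b.name)
    }
}

let t1 = Texture(imageNamed: "Bird1")     // init: called like a constructor
let t2 = Texture.combined(t1, Texture())  // class function: called on the type
print(t2.name)                            // prints "Bird1+default"
```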

3. You can put inline blocks and it's great

var spawn = SKAction.runBlock({() in self.spawnPipes()})

4. The typing is really strong -- this takes some getting used to. For instance, `arc4random()` returns a 32-bit unsigned integer. This means that before you can use any operators on it, you have to make sure you're using compatible types, e.g.
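The code sample seems to have gone missing from this comment. A minimal sketch of the kind of conversion being described (the names `quarter` and `pipeY` are assumed from context, and a fixed value stands in for `arc4random()` so the snippet is deterministic):

```swift
// arc4random() returns UInt32, and Swift never converts integer types
// implicitly, so mixing it with an Int requires an explicit conversion.
let randomValue: UInt32 = 1234              // stand-in for arc4random()
let quarter: Int = 160                      // assumed: a quarter of the scene height
let offset = randomValue % UInt32(quarter)  // OK: both operands are UInt32
let pipeY = Int(offset) + quarter           // convert back to Int for Int math
// let bad = randomValue % quarter          // error: UInt32 vs Int mismatch
print(pipeY)                                // prints "274"
```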

If we didn't use `UInt32` to convert `quarter` we'd get an error. After you get the hang of this, it's actually really nice.

5. I use `var` everywhere and I'm pretty sure I should be using `let` a lot more. I haven't worked with Swift enough to have a strong intuition about when to use either.
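For what it's worth, the usual rule of thumb (my gloss, not the author's) is to default to `let` and only switch to `var` when the compiler reports that the value is mutated:

```swift
// `let` declares an immutable binding; `var` declares a mutable one.
let gravity = -9.8     // never reassigned, so `let`
var score = 0          // updated during play, so `var`
score += 1
// gravity = -10.0     // compile error: cannot assign to a `let` constant
print(score)           // prints "1"
```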

I should also mention that my code is just converted from Matthias Gall's code [1].

I also want to put in a shameless plug that the point of making this was to advertise the "Making Games with Swift" class that auser and I are building. If you're interested, put in your email here: https://fullstackedu.com

I intend to redo this more fully with Playgrounds. I've been looking for a way to teach kids programming for a while now (if you recall, auser and I built Choc [2] a few months back). I think Playgrounds in Swift are finally the tool we've been waiting for.

As a C# developer, I can read and understand the code without any issues. That's a good thing for Apple. I'm sure Objective-C is great, but it's too foreign for me, and I didn't want to toy with it for fun; it wasn't worth the effort. But I can write an app or two with this one.

Always had a problem with Objective-C, could never read it (Android dev), but this right here is pretty impressive. I like the mixture of language features. My only question: are you still locked into using a Mac to develop for iOS? I guess since the language is closed source, it depends on some OS X libs at compile time.

This may be a stupid question but is the language in some way tailored to game programming? Apple's examples at WWDC were game companies, their coding demo was a game, and this is the first project I've seen written in it - and it's a game.

I am really intrigued by the Objective-C interop capability of Swift, namely the interactions between blocks and closures / anonymous functions.

I can see my AFNetworking code becoming much, much more readable now, without the need to @weakify/@strongify self on both sides of the block; instead I can just add a blanket `[unowned self] in` at the top of the closure.
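A sketch of what that capture list looks like in plain Swift (a made-up `Downloader` class, no AFNetworking involved):

```swift
// `[unowned self]` captures self without a strong reference, breaking the
// retain cycle that forms when an object stores a closure referring back to it.
class Downloader {
    var onComplete: (() -> Void)?
    var finished = false

    func start() {
        // Without the capture list, self -> onComplete -> self
        // would be a strong reference cycle.
        onComplete = { [unowned self] in
            self.finished = true
        }
    }
}

let d = Downloader()
d.start()
d.onComplete?()
print(d.finished)   // prints "true"
```

Note that `unowned` traps at runtime if the closure outlives `self`; `weak` is the safer choice when the object's lifetime isn't guaranteed.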

Pretty neat! I am not an iOS developer, but if I understand correctly, this uses the new SpriteKit stuff included in iOS 8 for 2D rendering, right? Is this a threat to existing 2D/game engines? I'm not sure where SpriteKit fits into the existing stack for making a game.

It's pretty strange to get root access to a server, even though it's just a Docker VM. We can install anything we want, compile any C code we want, DDoS and spam anyone we want... The machine is also crazy loaded right now, with 100% load on all cores (according to the htop I installed from the package repo); it has almost run out of RAM, and disk space is decreasing fast.

As cool as this is, I am honestly curious how you could monetize such a service? When I have ideas I'm trying to implement, I always try to see how you can at least earn back the cost of running the service. I don't like the thought of taking seed money from investors without having a plan for making money. So how would you monetize this?

Edit: Please don't say advertising, that would be probably the most obvious choice, however still... Are there other ways?

Best not to advertise compatibility with everything unless you are sure that it really is compatible with everything. Otherwise it is too easy to mislead people (who will load the site, check for their favorite language, not see it and never come back)

This really is an awesome site, and I was really impressed when I first discovered it. It's like having a nitrous.io box provisioned on the spot in whatever stack you want, for something as simple as a 3 line snippet of code to a full on project (although probably not the best place for that). I hope more people start using it and contributing to the examples, it's really nice to be able to walk through full-stack snippets

It looks very nice indeed, but I wonder why there is .NET and not just C#. It might actually be better, because this way you can have all the web functions too, but it might take a bit more for simple things.

Awesome, I love the multiple-file approach. This is what I've started to realize holds back a lot of similar services (especially JSFiddle). I'm glad someone was able to capture something I've been looking for.

Exposing yourself to the direct, harsh feedback of the market is key. I've noticed that bad founders will do just about anything to avoid this. Instead of selling, which is hard, they spend their time going to conferences and meetups, trying to do PR, talking to biz dev people about partnerships, etc. It all sounds like work, but mainly serves to insulate them from the harsh reality that nobody wants their product.

"All too often, I've seen founders build some initially mediocre product, announce it to the world, find that users never show up, and not know what to do next. As well as not getting any users, the startup never gets the feedback it needs to improve the product."

I threw up a poll on my existing site; some people said yes, they would like a hosted SaaS version. I then spent 6 months making it, without speaking to anyone further. It's now been 6 months since launch, and it's just cobwebs.

Speak to people first! Don't waste 6 months or more just doing the 'easy' tech stuff. I found out only afterwards that no one would actually be willing to pay for it.

Maybe the HN crowd has a different view of marketing than I do. Our marketing team relies pretty heavily on getting user feedback. We'll listen to individual calls to make sure the site answers questions potential customers have. We'll run surveys and over the shoulder tests to understand intent, concerns, and confusion.

My background is in marketing, and I'm confused by this parody of a marketer who doesn't know how to gather and apply user feedback to the product and site.

I was at a board game convention, as a venue to launch my site gamerustlers.com, and although we got great reactions, what I really cared about was how many people walked up to the kiosk and actually signed up. The second metric was how many signed up on their phones. While in Beta the site is free so I can't call them "sales", but there is a HUGE difference between someone saying "Hey, great idea" and that person actually signing up, even when it's free to do so.

I disagree; however, I think it may be because of her definition of marketing. She states "Sales and marketing are two ends of a continuum." Marketing is creating, delivering, and communicating value to your users / customers. Startups need to do both. Well. You need to create a product that gives value to customers (whether that be through elimination of pain or creation of new value) and get it into their hands. That involves both sales and marketing.

I am running a one-man bootstrapped startup (I prefer to call it a business rather than a startup), inBoundio, which is marketing software, and for me, having 1 paying customer is more important than 100 users. I get paying customers through sales, and users through marketing.

The interesting thing to me is how quickly you transition from Sales oriented -> Marketing oriented if things are going well. Early product/market fit can act as a bit of a guide for when to do the transition.

I tried a lot of things to get the word out: Facebook marketing, AdWords, email marketing, exhibitions, brochure distribution, regular updates on the Facebook page, deals with up to 30% discounts, spying on Twitter for competitors and their customers to see what kinds of conversations they are having and what they are doing, and regular updates to the website for look and feel as well as making it faster and faster.

I reached a few affiliates, but they were asking for upfront money, so I stayed away.

The site was launched about 9 months ago and I have zero sales so far. That is making me sad, and sometimes I lose my morale, as you can see I have done a lot of work. I've spent countless hours during day and night. I am not sure what I am missing.

This is so unbelievably true. I worked at a firm where the opposite was true. The VP of Marketing spent a lot of time inviting himself to existing customer meetings, wasting exec time on magic quadrants, and hiring his buddies to do marketing collateral. Inevitably, every hour of their time took up four hours of executive, sales, and developer time. It was impossible to point to even one sale that they influenced. This could also be due to their incompetence, rather than a general condemnation of the topic.

Great article, but I have a situational question. Let's say a company has grown at a 10% weekly growth rate and is now at 500 users. But as they try to sell to more people, they realize they are no longer growing at 10%, and their growth rate is stagnant or decreasing week over week. Does it make sense to continue trying to sell, or to focus on user feedback and improving the product? I assume 'both' will be a popular answer, but why? If you know your product is currently subpar, why not just build until the next iteration is ready and then start selling again?

I am shocked at how many people have proclaimed that using the telephone to source opportunities is dead. We have proven this model to be extremely successful, and have tied incentives to ensure that we are promoting the right behavior. For instance, we reward our inside sales team for setting up qualified appointments and provide an additional bonus if their appointments turn into closed deals. Lists on the internet are in abundance, and should be leveraged to their fullest capacity. In my experience, if you are calling a prospect with genuine intent to uncover whether a problem or pain exists, and are respectful and intelligent in your dialog, you will uncover great opportunities at every turn. We try to help start-ups by providing the initial lead at SalesZip.com

In my three previous businesses, I hustled and cold-called my way to paying customers (or at least valuable pilot programs) each time. But these were enterprise (B2B) businesses that could cut relatively large monthly checks. The reward was absolutely worth the lift.

That said, I'm having a hard time making the leap that for some consumer internet products with hefty cold-start issues cold calling is still a viable strategy.

For a product that has no network effect and is useful for the first user (e.g. Google search), sure, I'll buy it. For a product that needs 10+ people to start getting useful (e.g. Facebook), sure.

But for a product that needs multiple thousands of users to start getting useful, how does cold calling still make sense? These 1x1 users would come to your product, say "Um, it's a ghost town.", and then leave, never to return. Wouldn't the founders be better off putting effort into PR (TC, Pando, etc)?

TL;DR: I'm not arguing that non-scaling hustle is not important -- I've seen the results myself, first hand. But doesn't the type of product really dictate how effective it will be, and therefore, how strongly it should be prioritized over other avenues?

Startups selling a business product need to focus on a salesforce, direct marketing methods that are profitable, and getting in front of real paying customers.

But startups producing social media products, or consumer applications that are freemium or passively monetized, will not benefit from sales. They need marketing via PR, social media, or viral mechanisms baked into the app early on.

I'm currently working on a start-up and can relate to this. The truth always hurts, and people worry that their dreams will be dashed, or balk at the need to correct things early on, which is usually tremendously hard work (but it takes a crazy amount of effort if the change comes much later on).

I targeted a low-price, sales-free model, until I realized cost is not the key issue; getting feedback is! Hearing what people want and need is crucial! It's the reason why small firms are more nimble: they move fast and are able to change rapidly based on the feedback they receive. Also important is that through talking, I noticed that many times people not only like to share painful experiences, they kind of impart their "ideal state" solution to you, which can be incredibly helpful both as a different perspective and as a spark for the imagination.

In fact, I would rather spend more time talking to people in person (which I am doing now) than rub shoulders and network. It's like delayed gratification: take on an incredible amount of pain upfront so there will be fewer (far fewer) hiccups later in development.

Would anyone be interested in purchasing software that allows you to securely wipe your phone or Linux laptop remotely?

Yes, you can securely wipe your own phone, but that's tied to the user account. What if you wanted to securely wipe data on the phones or laptops you give to employees (esp. less technically capable people who lose their phones)?

I noticed that most options only allow encryption and are Windows-only. However, since most developers use private source control (and BT Sync), you're likely not going to lose much work. I know I would feel better if my data was deleted.

"How should you measure if your manual efforts are effective? Focus on growth rate rather than absolute numbers. Then you won't be dismayed if the absolute numbers are small at first. If you have 20 users, you only need two more this week to grow 10%. And while two users is a small number for most products, 10% a week is a great growth rate. If you keep growing at 10% a week, the absolute numbers will eventually become impressive."
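The compounding claim in the quoted passage is easy to sanity-check. A quick sketch (my own figures, not from the article, and assuming the rate is actually sustained):

```python
# Project compound weekly growth: 20 users growing 10% per week.
def users_after(start: float, weekly_rate: float, weeks: int) -> float:
    """User count after `weeks` of compound growth at `weekly_rate`."""
    return start * (1 + weekly_rate) ** weeks

one_year = users_after(20, 0.10, 52)
two_years = users_after(20, 0.10, 104)
print(round(one_year))   # roughly 2,800 users after one year
print(round(two_years))  # roughly 400,000 after two years
```

So the +2 users this week only matter if the 10% rate holds; the hard part is sustaining the rate, not the arithmetic.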

Doesn't the 10% growth (but actually just 2 more users) thing sound like vanity metrics? I don't see how dressing up 2 more users as a percentage makes them any more meaningful.

Better than nothing, better than non-paying users maybe, but it reminds me of a publishing company I used to work for that once internally touted its 100% rise in video revenue (ignoring the fact that it had released 5 or 6 times as many products at the beginning of that month and had no video product with a projected profitable lifecycle).

Dressing up user acquisition in transparent vanity metrics seems unnecessary, especially when you are asking customers for brutal honesty.

This was a great article. I just wanted to add the logical extension: the need to be flexible and a willingness to "pivot." The process of getting your product out there manually can give really focused user feedback that will help to refine the product in a smart way. Painful at first, yes, but ultimately very valuable.

Great article and great advice. I do this every single day. Acquire one customer at a time, work with them patiently, learn from our interactions, and continue to build a better product. We acquire new customers through referrals, Google, and traditional sales. A very important part of the sales process is nurturing them through the trial period - get them to paid no matter what. If you aren't doing one on one sales and working with your customers you will never figure out what the "what" is and you will never be able to replicate it with technology.

I know I have 30 days to impress and win a new customer and convert them. The most useful tool that I have to help me with this is intercom.io. Their automated time and event based messaging can interact at key moments when I can't always be there. Any time they need me, I am one click away. It is a fantastic platform.

In the end, every business model will have a more optimal and less optimal emphasis on sales, marketing, user feedback, etc...

Most B2B and B2C startups may in fact need more focus on sales...but a B2B2C company may find that advice misleading. If your customer is not your end user, focusing on sales and not marketing can actually be quite dangerous.

Something doesn't add up to me: either we've lost the definition of marketing, or the marketers that start-ups hire are not doing their job.

Marketing is the way to get sales. We measure the success of marketing by sales. It's the whole purpose of it. If the focus on marketing is not being reflected in sales, then the marketing plan is not working. I think it's as simple as that.

The broadness of the audience is irrelevant when it's clear what the target for your product is - everything outside the target shouldn't count.

It all comes down to the old saying: if you try to please everyone, you end up pleasing no one.

If you study your target audience properly and with the right tools to analyze data, you can market narrow and deep. Sales should be the #1 priority, I agree, so you can continue to collect feedback and make product iterations, but saying marketing is broad and shallow is an incorrect statement. Digital marketing tools have evolved in the past few years, and it's a lot easier to measure success on specific tactics. The old-school mentality of marketing is "spray and pray," and if that is what one thinks marketing is, then one is not doing marketing correctly. The job of the marketer is to make the life of the salesperson much easier, so that the conversations they are having are meaningful and have a greater chance of conversion.

"At Y Combinator, we advise most startups to begin by seeking out some core group of early adopters and then engaging with individual users to convince them to sign up." Sounds like marketing and user acquisition to me...