This is more interesting for what it means for Opera than for what it means for Fastmail. First they abandon their browser engine, now a top-notch webmail offering. What does Opera even offer these days that you can't get elsewhere? What 'long term vision' is Rob speaking of?

I've been a Fastmail user for years, ever since I realized that their web UI was compatible with my Windows Mobile phone (pre-iPhone era).

Here we are now, and the rest of the net has caught up to mobile access, mostly. Though my initial reasons for using Fastmail have become moot points, I'll continue to use my Fastmail accounts with fond memories and hope for improved resistance to governments' exceeding their mandates.

I'm a big fan of Fastmail, and I'm tentatively excited about this announcement.

CardDAV/CalDAV sync is sorely missing, so it's good to see the developers talking about it.

I'd also like to see (and would pay for) options where the data is located in other countries, away from the United States. It's really symbolic, but also practical: I'd like my data closer to where I live.

I love Fastmail and have been a paid user for many years now, but I hate the new AJAXy interface, in particular the infinite scroll as currently implemented. Slow, annoying, and makes it difficult to reach old emails.

I just recently enabled SSL on my business website. It was anything but simple.

First of all, I had to get a dedicated server, because a bunch of other sites were running on the same server, and the hosting company doesn't offer additional IP addresses.

Then I wanted to get an SSL certificate. I picked Comodo, because they seemed to offer the cheapest full business validation certificate, but then accidentally bought a domain only certificate because their marketing was so confusing. Their friendly customer service walked me through a complicated process for changing my order.

To get the certificate issued, it took me a week to collect the documents they requested. I had to make sure my business was listed in the yellow pages so they could send me an automated phone call to verify my number.

After every step in the process, they told me to log into their online management area, which was offline from time to time.

I had to confirm my email address by clicking a link about a dozen times. Half of the emails were missing the confirmation link.

Twice I got an email telling me my order would soon be processed, and then nothing happened for two days. I had to open tickets in some online support area or send them emails to get them to continue processing.

All in all it took me a month to get SSL working. Now I understand why so many sites do not use HTTPS.

Summary: obtain a certificate from StartSSL. They provide free certificates as long as it's for an individual's site, not a company's. Their process for getting a cert is a little more difficult than the competition's, but konklone.com provides a nice step-by-step guide.

One gotcha that these kinds of tutorials don't mention is that your site might be blacklisted if Google doesn't say it's been 100% clear of trojans for the past 90 days. I hit this on my domain when I accidentally had an .exe file in my static files. I had to wait 3 months before I could get the certificate.

It is sort of worth noting that it's only free if you have a dedicated IP address. If you just have a cheap hosting plan somewhere, you'll need to pay them for said dedicated IP before you can set up SSL, generally.

I mean, we're not talking a huge amount of money. Webfaction is $5/month [1]. Still!

No need for a dedicated IP address. No need for wildcard certs, SNI, or any of that fancy stuff. Sure, it's ugly. But it works with every browser (even IE6), and it's not like anybody is actually going to type that into an address bar. You'll be redirecting your HTTP website to your HTTPS website anyway, won't you?
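
(For what it's worth, the redirect half of that is tiny. A rough, untested Python standard-library sketch, with the HTTPS address standing in for whatever ugly URL you end up with:)

    from http.server import BaseHTTPRequestHandler, HTTPServer

    HTTPS_BASE = "https://example.com:8443"  # placeholder for your "ugly" HTTPS address

    class Redirect(BaseHTTPRequestHandler):
        def do_GET(self):
            # Send a permanent redirect from the plain-HTTP site to the HTTPS one.
            self.send_response(301)
            self.send_header("Location", HTTPS_BASE + self.path)
            self.end_headers()

    HTTPServer(("", 80), Redirect).serve_forever()  # binding to port 80 usually needs root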

You can only have two of the following three: (1) shared IP, (2) pretty URLs, and (3) legacy client support. Choose which two you want to have.

Do people trust StartCom? Just curious ... I always wondered why you have all these very expensive cert providers who charge a lot for SSL certs, and then this mysterious company with ties to Israel is handing them out for free?

I know it's pure paranoia, but this would seem to be an excellent way to compromise a lot of SSL traffic if you were into that, and the Israelis are pretty famous for all kinds of spying activity that makes PRISM look tame. Just curious what others think about this?

Because certs are so easy to get, it's better to use fingerprint identification for sites, to make sure it's the right one. That's what I've been doing for ages, with HTTPS and SMTPS. I'll be blogging about it soon.
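
In case it's useful, a rough sketch of the check in Python (the host is a placeholder): fetch the server's certificate, hash the DER bytes, and compare against the fingerprint you recorded on first contact.

    import ssl, hashlib

    HOST, PORT = "example.com", 443  # placeholder host

    pem = ssl.get_server_certificate((HOST, PORT))   # certificate as PEM text
    der = ssl.PEM_cert_to_DER_cert(pem)              # convert to DER bytes
    digest = hashlib.sha256(der).hexdigest().upper()
    fingerprint = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
    print(fingerprint)  # compare against the value you noted earlier (trust on first use)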

Note that in addition to using it for your web site, you can also use this same certificate for e-mail, assuming you run your own mail server.

Long story short: I recently moved my e-mail from Google Apps to a machine under my control. As part of that project, I "redeemed" an unused SSL certificate I had purchased a while back for Postfix and Dovecot.

(While I paid for mine, you can use a self-signed one and most MTAs won't complain or refuse to deliver mail, if memory serves.)
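
(If you want to sanity-check that the mail server is actually presenting the cert you installed, here's a rough Python sketch of a verified STARTTLS handshake; the hostname is a placeholder, and with a self-signed cert the handshake will simply fail verification.)

    import smtplib, ssl

    HOST = "mail.example.com"  # placeholder; point it at your own Postfix box

    context = ssl.create_default_context()  # verifies against the system CA store
    with smtplib.SMTP(HOST, 587, timeout=10) as server:
        server.starttls(context=context)    # fails loudly if the cert isn't trusted
        cert = server.sock.getpeercert()    # the certificate the server presented
        print(cert.get("subject"), cert.get("notAfter"))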

This is kind of glossing over the point. We all know SSL is good and should be used everywhere. But the simple fact is that to have a fully capable SSL server you need two things: a certificate and a unique IP. There are firms now offering free certificates, but not everyone has the choice to select them. And IPs certainly aren't free on most hosts. Sure, there are always solutions, like moving to a self-hosted model and so on, but it is a significant inconvenience for most.

Excellent guide, but unfortunately StartSSL does not support all top-level domains. I went through the trouble of registering with StartSSL, and they even issued a client certificate for my email account at a .tk domain, but they refuse to issue SSL server certificates for any .tk domain.

Even though this is a perfectly legitimate top-level domain (yes, I paid for a real .tk domain and fully control the DNS settings just as with any other domain; it is not one of the free web-based redirect domains the Tokelau NIC also offers), StartSSL does not let you choose it when requesting a certificate. They have a drop-down of supported TLDs, but .tk is nowhere to be found (and you cannot edit the HTML to submit this domain anyway; it will be rejected by the server). Initially this appeared to be a simple omission, but investigating further revealed it was an intentional decision not to allow issuance of SSL certs to .tk due to "abuse".

Quite annoying to have purchased an apparently legitimate domain, only to discover it is considered "second-class" by certain online services. Now I am faced with a decision to buy another new domain at a more reputable TLD, switch all my servers and services over, or find another SSL issuer which supports .tk. CACert appears promising, also issuing certs for free, but sadly they are not widely accepted by browser vendors. A paid SSL authority would likely issue a cert for .tk, but at this point I'm inclined to not use SSL at all, or stick with my own self-signed certs (I mainly use my server for personal services, so wide accessibility is not a major concern, but having a "real" trusted cert would be nice).

Does anyone else have any experience with acquiring SSL certs for less popular TLDs? I picked .tk because a short and easily recognizable domain was available and I got in before many of the better names were snatched up, as happened with .com, etc., but perhaps giving in and buying a longer domain name at a popular TLD is worth it if it means StartSSL and other services will consider it more trustworthy.

In the switch to HTTPS everywhere, we have barely started. For every HN and Wikipedia with HTTPS there are 20 websites without (and whether the ones that do HTTPS really do it securely is yet another question).

Somebody should go through the top 10k websites and make a list, then repeat every few months.
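
A toy version of that survey is only a few lines. Here's a rough sketch that just attempts a verified TLS handshake on port 443 for each domain (the list here is a tiny hypothetical sample, not the real top 10k, and answering on 443 is of course only a weak proxy for "does real HTTPS"):

    import socket, ssl

    domains = ["news.ycombinator.com", "wikipedia.org", "example.com"]  # sample only

    context = ssl.create_default_context()
    for domain in domains:
        try:
            with socket.create_connection((domain, 443), timeout=5) as sock:
                with context.wrap_socket(sock, server_hostname=domain) as tls:
                    print(domain, "HTTPS OK,", tls.version())
        except (OSError, ssl.SSLError) as exc:
            print(domain, "no usable HTTPS:", exc)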

It's interesting: I did switch to HTTPS for all my sites, but Google Search still did not reveal search keywords to Google Analytics from users logged in at Google, if that's what was being referred to as "referrer information".

Did anyone get lucky with getting 100% of google search keywords after switching to SSL?

Using HTTPS everywhere doesn't really help much. It doesn't help at all if the surveillers either have your cert or have access to decrypted traffic inside the firewall. Any PII being sent over the wire should most definitely be encrypted, but encrypting my access to a news site isn't really hiding anything. The requested URL still needs to be unencrypted; you'd just be encrypting content that is already available unencrypted.

This isn't strictly related to this post, but I've always thought that the idea of paying a fee for SSL certificates was a bad one. Time spent buying and setting up an SSL certificate would be better spent making your site available as a Tor hidden service.

So, this is our new major release, and I'm going to share some stuff that should fit the audience here on HN better, and that isn't part of the main announcement :)

First, this is a release that fixes some important architecture mistakes we made in the 2.0.x branch of VLC. I'm speaking notably of the lag in responsiveness on volume change (which was shared on the mpv thread) and seeking, but also some grave video settings propagation issues. I wish we could have fixed and shipped that earlier, but we couldn't (long release cycle).

Then, this is the first official release of libVLC that is LGPL for most of what you need as a developer, including the right modules. SDKs for Win32/64, Mac OS X, iOS and Android are getting ready.
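
If you want a feel for libVLC as a library, a minimal sketch via the third-party python-vlc bindings looks roughly like this (the file path is just a placeholder; the same calls map onto libvlc_* functions in the C SDKs):

    import time
    import vlc  # pip install python-vlc; thin ctypes wrapper around libVLC

    player = vlc.MediaPlayer("sample.mp4")  # placeholder path
    player.play()

    time.sleep(1)  # give playback a moment to start
    print(player.get_length(), "ms long, volume", player.audio_get_volume())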

If you are a web developer, our VLC plugin now supports windowless mode, to fill the gap between Flash and HTML5 (it should work on IE6/7/8 without too much work).

If you are on Mac OS, the interface is finally polished after the major changes of 2.0.0 :)

Finally, we decided as a community that we will accelerate the major release cycle of VLC. The fact that we needed 1.5 years to get the fix for some critical audio core and video settings issues out is way too long. We will move towards a 6-month schedule with LTS releases.

Sure, there are other very good players on each platform, but we are doing our best so that you can play everything everywhere for free, using open source technologies :)

I really don't like the idea of the Playlist-driven interface forcing itself in front. I have no use for the Playlist; why do I have to see it, ever?

Even when I launch a file from Finder, I get a split-second blink of the Playlist. And when the clip stops, I see Playlist instead of the starting screen and can't drag and drop to play files to it anymore.

When I disable the Playlist by pressing its button on the interface, the expanding transition of the window when opening a file is oddly jumpy; hopefully an easy fix in future releases (I'm on OS X 10.8.5). The Playlist still appears at times.

The standalone Controller module from the interface: I miss it. Any chance of it ever returning?

Back to the two-year-old VLC 1.1.12 for me; it was much better thought out interface-wise (the Playlist is just a piece of functionality, not the driving feature, and the Controller is still there) and it still plays every file I need it to.

I remember when I first moved to Linux, six or seven odd years ago, and researched a good media player. Unlike others, I went with VLC rather than mplayer. It was one of the first projects that made me think "how are proprietary software companies not embarrassed to compete with this? It is SO much better!"

I have always been tempted to use VLC, and wanted to, but I have always stuck with MPC and its derivatives. Currently I am using MPC-BE.

The reason is rather simple. VLC on Windows is just plain ugly. You can tell it is Linux software ported to Windows. It doesn't even need to be complex and fancy. Take a look at MPC-BE: plain, simple and stylish.

And it isn't all just about the looks. The settings, menu placement, icons, etc.

Been using VLC for as long as I can remember, but recently I have had loads of audio/video sync problems with VLC. Really annoying since I don't want to use any other media player, and I've had to. Will this release do better?

The streaming and transcoding capabilities of VLC appear awesome but are hard to get to the bottom of. I tried to use VLC to convert an H.264 stream coming out of an IP camera (Foscam) into either a live FLV stream or an iPhone-compatible HTTP stream; it seems like it is POSSIBLE, but actually knowing which sequence of magic whispers to utter is the challenge :-)

I've recently switched to Media Player Classic for watching movies. I tried not to, because VLC works on Linux, but MPC's video quality is simply superior; when you put the two side by side you can see a difference...

I'm excited about where Valve is going with this, of course, but to be honest I'm concerned about controllers the most. Buying a good controller for a PC is not hard, but it's not simple either. Picture yourself as a "living room console guy" getting into PC gaming. You'd like to use a controller for a certain game.

Consider:

- You can use your XBox360, PS3 controller, or WiiMote, but that's not obvious. You'll need to do some research to figure out that you CAN do it as well as HOW to do it. Again, the steps aren't particularly complicated (especially for the XBox wired controller), but remember who we're targeting, here. If you don't know much about this stuff, you might be worried you'll break something or won't be able to hook your controller back to your console.

- If this doesn't occur to you or you'd rather not use your console controllers, you might be tempted to buy one of those gaming controllers you see at Radio Shack, Best Buy, or somewhere online. Chances are high that the controller you bought will be quite shitty in comparison to your console controllers. You'll notice everything from drifting inputs to cheap buttons to just plain uncomfortable hand feel. You'll convince yourself that you just picked wrongly, so you do some more research. You eventually come upon something pretty good, but it's expensive and it's STILL not your XBox 360 controller.

- If you get past all this (whether that's finding a good 3rd party controller or reusing your console controller), you're still not QUITE sure how each new PC game will react with a controller. Sure, maybe the mappings make sense, but you worry that you'll come upon something that requires an action the developers forgot to map to a controller button. Or maybe it'll just feel wrong because the controls for your particular game were clearly designed to work best for the physical characteristics of a mouse and keyboard. You know with enough tweaking this won't be a problem, but it still bothers you that you have to tweak anything in the first place.

Nothing I've outlined above is a problem for advanced gamers, but if something like a Steam Machine is ever going to take over the living room, it has to be a natural plug n' play experience with respect to input devices. And I mean natural for your mom or uncle, not for you.

Luckily it sounds like Valve will be addressing this head-on; I am more excited about what they have to say about this than about what the specs of any particular Steam Machine might be or what the beta might look like.

As a parent, if these things don't come with time-based parental controls, that would reduce their appeal A LOT. Windows, for all its warts, is great this way. The Windows-based PC in the living room, running Steam, has time-based parental controls configured at the OS level. This works great for everyone involved, reduces effort and contention. Also, kids do really well on fixed timetables.

I've opened a discussion thread in the Steam Universe forum, on this very topic.

> If you want. But Steam and SteamOS work well with gamepads, too. Stay tuned, though - we have some more to say very soon on the topic of input.

This excites me. Valve's bread-and-butter, as a gamedev company and not a game reseller, uses a pointing device. FPS games and Dota are both genres that do far better with a mouse.

Obviously, supporting gamepads will get the vast ocean of console-like games into the living room just fine. But for games originally designed for a mouse, a gamepad is a pretty sub-par experience. Do they have some new control device planned? Please? Pretty-please?

"Am I going to be using a mouse and a keyboard in the living-room?If you want. But Steam and SteamOS work well with gamepads, too. Stay tuned, though - we have some more to say very soon on the topic of input."

1) Top of the range, high spec machine running SteamOS (500-600)

2) Medium spec machine running SteamOS (250-400)

3) Basic machine running SteamOS that's designed for people who just want to stream games from their desktop PC into their living room (60-120)

So, according to the last answer, the 3rd announcement will probably be about a gamepad or an input device of sorts. That's a bummer. I know it was very unlikely, but HL3 announcement would have made me so happy. I'm still hopeful though, since it's been confirmed that Source 2 is in the works. Valve usually shows off their new engine with a new HL game.

I wish they would follow Apple's lead and tightly control the hardware. It would make it easier for devs to test and consumers to make a choice. Apple's wildly successful with their iPhone business model. Please try to avoid the fragmentation issues with Android. Windows already owns the home PC market, why go after it?

What MS-DOS was to PCs in the '80s, or what Android was to smartphones more recently, is exactly what SteamOS will be to home consoles in the present. I have faith that Valve will dominate the next generation of interactive entertainment, if of course they don't mess anything up.

I am not trying to be a downer, but I think this is going to be a huge flop.

Gaming appliances need to be focused at the gaming market, which Sony and MS own like the U.S. and USSR in the mid to late 20th century. Nintendo messed up with the Wii U and probably won't recover, and everything else is secondary, for now. I even think Apple's move into the TV gaming market will be mostly a bust, but I could be wrong, because the casual game market is strong.

I've personally not bought a single game from Steam. I know they are big, but I just don't have time for it. I'm not the target market though.

The exit visa is a system that legally enables indentured servitude. Any country that has this system (pretty much just Islamic mideast countries) should be shunned by the US and the UN (won't happen of course given the US has a couple of bases in Qatar and BP has a huge investment there).

The British engineering company Halcrow, part of the CH2M Hill group, is a lead consultant on the Lusail project responsible for "infrastructure design and construction supervision". CH2M Hill was recently appointed the official programme management consultant to the supreme committee. It says it has a "zero tolerance policy for the use of forced labour and other human trafficking practices".

Halcrow said: "Our supervision role of specific construction packages ensures adherence to site contract regulation for health, safety and environment. The terms of employment of a contractor's labour force is not under our direct purview."

So they've got a zero tolerance policy, unless you're talking about the actions of their contractors which is just, like, totally out of their control, man.

A lot of Indians also work in these countries because of poverty. I feel privileged to have been born into a relatively wealthy and educated family in India. If that were not the case, maybe I would have been one of the migrant workers like them.

If these countries have oil money in abundance, then why don't they provide good working conditions and pay more? A few million dollars is hardly going to move the needle for them. I smell corruption.

On a side note, is there any way we can donate money to his family? ~$1,500 with 36% interest is a lot of money for his poor Nepalese family, and I doubt they will be able to repay that debt. I guess they will have to work the rest of their lives just to repay it.

If potential employees had a slight opportunity to easily research conditions, they might choose another company with a better record. This would have the potential to drive disreputable shops out of business and promote those that care a bit more about basic human rights. It would be a daunting education and development task, but one that would be meaningful.

Why is this not front-page news of mainstream media? (I know why, it's because it's not profitable). But where is the outrage? Why is fighting back restricted to a few NGOs and some back office of the Nepalese government?

Meet Gary Webb, the Pulitzer-winning investigative journalist who committed suicide in the wake of the public tar-and-feathering (and financial impoverishment) he endured as his reward for bringing Ross and his exploits to the attention of our great nation:

I have a theory that cocaine, and later crack cocaine, were really the downfall of the United States from the late 70's to the late 80's. It wasn't Reaganomics, it wasn't the Japanese. All the white collar (and a lot of the blue collar) guys were doing powder cocaine, and the rest of the blue collar workers and the unemployed were all doing crack. Crime got worse, business got worse...

For me, following the link here comes up as my default set-up for iGoogle, the home page skin that Google will deprecate in another month or so.

I have used Google since the beginning. I was amused, when I updated my personal website at the beginning of this year, to discover that most of the pages on my site still had a paragraph specifically recommending Google, as if most people had never heard of it. That's how enthusiastic I was about Google when I first discovered it. (I discovered Google when it was still Backrub, by examining which search engine spiders visited my site.)

I assume that the top "Google RN" link refers to "RealNames", which was a cross between AOL keywords and an alternate domain name system. It's surprising to see that there, because canonical registries (i.e. Yahoo, RealNames) are kind of the antithesis of what Google was pioneering at the time (i.e. PageRank).

'and every fucking thing I think about, I also think, How could I fit that into a tweet that lots of people would favorite or retweet?'

That, as I see it, is the problem: not Twitter or 140-char limits or any of the other stuff Dustin raises; it's the desire for external validation (and I can't help but imagine him furiously clicking reload on his own blog post to see how fast his Kudos score is climbing).

My takeaway/advice is - try to recognise when you're being manipulated by gamification techniques and choose to be aware of them and ignore/resist them when it's in your better interest.

Does anybody _really_ think Picasso would have painted iPad trifles for immediate social media validation, instead of starting and completing Garçon à la pipe? I _seriously_ doubt that - from Wikipedia: "At the time of his death many of his paintings were in his possession, as he had kept off the art market what he did not need to sell. In addition, Picasso had a considerable collection of the work of other famous artists, some his contemporaries, such as Henri Matisse, with whom he had exchanged works."

Picasso _didn't_ paint for the twitterati - he painted for Picasso. Dustin should write for Dustin - not for Twitter. He's allowing himself to become distracted from achieving what he wants to achieve. That's not Twitter's fault. Procrastinators gonna procrastinate (he said hypocritically while wasting time on HN).

Much like television, smoking, or facebook, it's pretty easy for me to look at these products, look at what the users get out of them, and make the conscious decision not to use them. That isn't to say it's easy to quit smoking, but it's been pretty easy for me to never start smoking because I know what sort of personality I have.

Dustin's problem is similar in some respects to an addiction (which he alludes to), so perhaps the solution is treating it as such; forcing moderation on himself, or even complete detachment (the cold turkey approach). Of course, being involved in technology means Dustin is essentially an alcoholic working at a brewery, so disengagement may be especially difficult. But to throw up your hands and claim there's no way out strikes me as a bit defeatist. If you feel a technology is negatively impacting your thought patterns, perhaps you could find a way to use that technology less.

"I've stopped cooking for myself because TV dinners are so easy. But most TV dinners aren't great. But because they're so convenient, they have killed my desire to cook. And yet I see no solution to this problem."

The solution is to not solely crave affirmation from others. Be comfortable with yourself, and try to live a life that enriches yourself and those around you. If the parts of that that you share happen to enrich those you come in contact with, great ... but the internet celebrity that Twitter and other social platforms encourage (how many followers, how many retweets, how many favorites, how many likes, etc.) is fleeting at best.

I think the biggest contributor to the feelings Dustin is talking about is the way Twitter's design puts scorekeeping mechanisms front and center. Follower count is a scorekeeping mechanism -- if I have more followers than you, I'm "better" at Twitter than you are. Retweets are a scorekeeping mechanism -- if I get retweeted a lot, I'm better than you are. And so forth. Scorekeeping mechanisms are problematic because when you make them public, put them right up in the user's face, they turn the application into a video game. People see a connection between some actions and an increase in their "score," and that drives them to repeat the same behaviors.

Which is sort of what Dustin's getting at with the comparison to addiction, I think; Twitter is addictive in the same way that, say, Farmville is addictive. It's a Skinner box (http://en.wikipedia.org/wiki/Operant_conditioning_chamber) rather than a medium designed to facilitate discussion.

John Mayer speaking at Berklee College of Music had this to say on the subject:

> The tweets are getting shorter, but the songs are still 4 minutes long. Youre coming up with 140-character zingers, and the song is still 4 minutes longI realized about a year ago that I couldnt have a complete thought anymore. And I was a tweetaholic. I had four million twitter followers, and I was always writing on it. And I stopped using twitter as an outlet and I started using twitter as the instrument to riff on, and it started to make my mind smaller and smaller and smaller. And I couldnt write a song.

I don't do twitter, but I have wondered what it would be to write or read a longer work composed within the limits of 140 character chunks - parceled out over time. It is not so much the size limitation on tweets as it is the disconnect between them that causes the dissolution of bigger ideas.

"And yet I see no solution to this problem. I will forever be a slave to 140-characters..."

I'm having a hard time sympathizing with this.

It's not Twitter that "instantly takes complex ideas out of my brain, over-simplifies them, and ships them off to random people." It's ME. Twitter is just a medium; the solution is to care about those complex thoughts enough to see them through.

Not to say that the instant gratification of tweeting does not exist, or is easy to fight; it's a struggle, and something to be mindful of. But the battle is already lost when, as this article does, you shift all the blame to the service instead of looking inward.

"I sit on the couch watching whatever is on TV. It's not very entertaining but it's something to do, and after a while you get used to it. And yet I see no solution to this problem."

What are we, automatons? Farm animals?

This isn't rocket science, if you want to stop being a hack then stop being a hack. You have a brain, you have a developed intellect, if you have sufficient introspection to realize you're doing something you don't want to be doing then maybe try not doing that thing. I have a hard time believing that twitter is more addictive than alcohol or heroin or even television.

Nothing's forcing you to be a hack other than your own vanity. And there's nothing intrinsically superior about being addicted to seeking bite-sized chunks of personal validation through Twitter compared to seeking feelings of comfort, camaraderie, and friendship through television viewership. Yet if someone wrote about the perils of being a couch potato and the difficulty of stopping, we'd just laugh at them and move on.

Tweets provide a lot of efficiency. "Tweets aren't great because they compress otherwise complex ideas" is a decent summary of your essay, and tweet-able. Point made in 5 seconds of reading. Yes, you don't get the full immersion, but that's exactly why both mediums still exist. The tweet saves time and consequently gets a wider audience.

Qualys SSL Labs also has a great online tool that allows you to quickly analyze your configuration changes [1]. Highly recommended, and a great resource if you're just setting up your SSL certificate, too, to make sure you have it configured correctly.

> TLS v1.2 should be your main protocol. This version is superior because it offers important features thatare unavailable in earlier protocol versions. If your server platform (or any intermediary device) doesnot support TLS v1.2, make plans to upgrade at an accelerated pace. If your service providers do notsupport TLS v1.2, require that they upgrade.

Too bad CentOS is still stuck on TLS 1.0 and apparently will be for quite some time.
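
If you want to see which protocol version your own box actually negotiates (to go along with the SSL Labs report), a rough stdlib sketch looks like this, with the hostname as a placeholder:

    import socket, ssl

    HOST = "example.com"  # placeholder; point it at your own server

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("negotiated:", tls.version())  # e.g. 'TLSv1' on an old stack
            print("cipher suite:", tls.cipher())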

Stunningly interesting exceptions and, if I'm reading this right, not necessarily an improvement:

"Repeals the USA PATRIOT Act ... except with respect to ... the acquisition of intelligence information concerning an entity not substantially composed of U.S. persons that is engaged in the international proliferation of weapons of mass destruction."

"Requires orders ... to direct ... any person or entity mustfurnish all information, facilities, or technical assistance necessary to accomplish such surveillance

- in a manner to protect its secrecy and produce a minimum of interference with the services

- that such carrier, landlord, custodian, or other person is providing the target of such surveillance

- (thereby retaining the ability to conduct surveillance on such targets regardless of the type of communications methods or devices being used by the subject of the surveillance)."

All the negative waves [1] :-) This is primarily a bill where some folks can vote for it, feeling confident it won't pass, and get re-elected by "trying to shut down those rogue intelligence agencies." But it can also get people elected: just as the tea partiers got elected on the fear of government overspending, liberals can get elected on the fear of government oversight. Mixing up the opinions in congress is always a good thing in my opinion.

Natalie put a lot of work into this (and we're supposed to be on holiday!). There's lots of great stuff in here - not just about the overall startup experience, but also advice on talking to the press, raising money and building out the company.

"An immensely useful lesson to learn is how to correlate all the conflicting advice and apply it to your own situation."

This appears to be the single most important way to get use from YCombinator (or from reading Hacker News). Even if it seems obvious, keeping this advice in mind also helps to avoid posting indignant comments on other startup advice threads.

Reading this brought back a memory for me too, even if just as a bystander. I was lucky to be able to attend dConstruct 2010; it was the most wonderful design conference I've been to so far. All the presentations, by the likes of Merlin Mann, John Gruber, and David McCandless, were very inspiring.

It was a one day event and all the talks were held in the same space. At one point, the guys from Lanyrd came on stage and explained how the site worked. They asked all the attendees to tweet to @lanyrd and write that they are attending dConstruct. That way, everyone got automatically added on the Lanyrd site as attendees, with profile and everything. It was an impressive demo.

Until now, I didn't know that was the event where Lanyrd officially launched; it came across to me like they'd been polishing the app for ages.

Good Story! Your next startup should be one in which someone can easily add text on top of photos in their blog, and then allow readers to easily share those (nuggets of wisdom) photos on Social Media with the click of a button.

My own thoughts on the negligence and indolence of the PC industry are full of rage. But this guy makes my rants seem a little tame. I love it!

I believe I have an unpopular opinion about desktop PCs. The conventional thinking is that desktop computing is boring because a modern PC does everything it is intended to do just fine. That may be true, but the problem is that the industry is not interested in establishing new usage patterns: new things the PC should do.

At the end of last year, I started a series of rants about how modern technology sucks [1] with particular emphasis on the frustrating stagnation of desktop computing and the bothersome way every new portable computing device wants to be a center of attention.

I was pleasantly surprised that the author of the linked article hits the target squarely when he lists off what PCs need. The first item: better displays. He may be speaking more about laptops (and they are deserving of the shame), but allow me to rant a bit about my preferred computing medium: desktops.

The stagnation of desktop displays is, and has been for a decade, the crucial failure of desktop computing. Display stagnation is the limitation that allows all other limitations to be tolerated. It is the barrier that leads the overwhelming majority of users (and even pundits!) who tolerate mediocrity to declare everything else, from processors to memory and GPUs, as "good enough." I absolutely seethe when I hear any technology declared good enough (at least without a very compelling argument).

Desktop displays, and by extension desktop computing, are so far from good enough that it should be self-evident to anyone who observes users interacting with tablets or mobile phones(!) while seated at a desktop PC. Everything that is wrong with modern computing can be summarized in that single, all too common scene:

1. Desktop displays are not pleasant to look at. They are too small. They are too dark. They are too low-fidelity. And they often have annoying bezels down the middle of your view because we routinely compensate for their mediocrity by using more of them, side-by-side.

2. The performance of desktop computers is neglected because "how hard is it to run a browser and Microsoft Office?" This leads to lethargy in updating desktop PCs, both by IT and by users ("I don't want the hassle"). In 2013, I suspect many corporate PCs in fact feel slower than a modern tablet or even mobile phone.

3. Desktop operating systems are actively attempting to move away from (or at least marginalize) their strong suits of personal applications and input devices tailored for precision and all-day usage.

4. Desktop computers--and more accurately personal home networks--have lost their role as the central computing hub for individuals by a misguided means of gaining application omnipresence: what I call "the plain cloud." This is because no one in the desktop industry (Microsoft most notably) is working to make personal networks appreciably manageable by laypeople.

5. Mobile phones and tablets are often free of IT shackles and therefore enjoy more R&D (more money to be made).

Desktop displays stopped moving forward in capability in 2001, and in large part regressed (as the article points out) since then. Had they continued to move forward--had the living room's poisonous moniker of "HD" spared computer monitors its wrath--I believe we would have breathtaking desktop displays by now. In that alternate universe, my desktop is equipped with a 50+" display with at least 12,000 horizontal pixels.

4. Extremely fast processors and GPUs to deal with a much greater visual pipeline.

Such a computing environment is a trojan horse for today's tablets: it turns tablets into subservient devices as seen in science fiction films such as Avatar. The tablet is just a view on your application, allowing you to take your work away from the main work space briefly until you return. I say trojan horse, but that's not quite right because I actually want this subservient kind of tablet very much. I do not want a tablet that is a first-class computing device in its own right (even less do I want a phone to be a first-class computing device). I only want one first-class computing device in my life, running singular instances of applications for me and me only, and I want all my devices to be subservient to that singular application host.

For the time being, that should be the desktop PC. In the long haul, it could be any application host (a local compute server, a compute server I lease from someone else, or maybe even a portable device as envisioned by Ubuntu's phone). But for now, the desktop should re-assert its rightful role as a chief computing environment, making all other devices mobile views.

It's a rant, so I'm not going to critique this post too much, but I'd like to call this out:

[...] Intel is desperately trying to figure out what to do to combat the phones and tablets that are eating them alive from the ankles up. It is pretty obvious that the company both doesn't understand what the problem is and is actively shutting out all voices that explain it to them.

I don't think this is true. Intel certainly understands the market and where it's headed. However they are committed to x86/64. What Intel is doing in my view is taking a series of huge but calculated risks. They seem to be betting that:

- Laptops will stick around and have Intel Inside for quite a while. The market may be boring, but it will be there for years. Corporate America helps.

- Servers won't be switching to ARM any time soon (I'd argue this is the riskiest bet).

- The desktop and enthusiast/gamer PC market will be around for a while, and also won't be switching to ARM any time soon.

So all of these "shoe-ins" buy them time, and I believe they think that in time they can pull off the biggest risk of all:

- Intel is betting that the biggest differentiating factor is and will be performance per watt. They are willing to gamble that they will eventually eclipse ARM cores in this area. In their view, if they have an x86/64 core that trounces competing ARM architectures in ppw then phone, tablet, and set top manufacturers won't have a problem putting those chips in their devices.

Granted, I'm not saying I think Intel is 100% correct or that they'll succeed with their long term bets; I just don't think they are as clueless as this rant makes them out to be.

No doubt about it, though, UltraBooks DO suck.

EDIT: I'm going to revise my statement on UltraBooks. Not all of them suck. In particular, the Lenovo Yoga is fantastic.

What's actually happening is that the PC market is basically saturated with machines that pretty much do whatever anybody asks of them.

The market has pretty much plateaued. Pretty much everybody has a PC at home and work. Most households already have multiple computers. Heck, I know entirely non-technical powerwasher/gutter cleaner guys who have 2 or 3 computers. In fact, I don't know a single person older than 10 years old who doesn't have at least one Personal Computer of some kind.

Any commodity off-the-shelf PC will pretty much do whatever you ask of it (at least for most consumers). I used to replace my computer every year or two just so I could run modern software. I haven't felt compelled to do so for the last 6 years and even then I'm 50/50 on doing it. The rMBP my work issued to me is fantastic for virtualization, but unbelievable overkill for everything else I do (mostly email, word and web).

There's just not much of a reason to buy more machines outside of regular replacement rates due to failure and total obsolescence and new humans buying them as they get old enough.

It's not that PCs aren't coming back, it's that the constant growth in the market has plateaued.

Everybody was hoping China, India and Africa would explode as 3/5ths of the world's humanity moved into the middle class and needed computers, but the growth has been far slower than was hoped, and these first-time computer buyers won't really be constantly upgrading like previous markets did -- the market characteristics are such that it won't be a simple repeat of the 80s, 90s and early 2000s.

Smartphones and Tablets are an entirely new segment and still growing (though showing some signs of flattening out as well). That's why they're exciting, because those markets are still building out and upgrading. But there are signs that those segments are flattening as well.

Tablets and phones are awesome, but they're definitely not a replacement for a general purpose PC. Even my mother and father, who're quite the luddites, regularly need capabilities that don't work well on a tablet -- like doing taxes. Even if those things were magically fixed and working awesomely tomorrow, they'd still want a bigger screen than a tablet affords.

PCs aren't going anywhere; it's just that the market has to shift to sustaining the market, not growing it (which is infinitely more expensive, meaning loads more money sloshing around in the secondary markets). This is fundamentally the problem that both Intel and Microsoft are dealing with. Apple escaped it largely because they created new segments to grow into.

Heck, the one new market segment that PC makers did manage to get into, netbooks, they managed to screw up so bad that the entire segment was dead within just a few years. (If you think of where netbooks needed to go as a segment, the Surface Pro would probably be a reasonable outcome, except that market is totally hosed now and Microsoft has to rebuild it).

Ugh. Change tack[1], which is a sailing reference[2]. As for the actual content, I feel this analysis lacks nuance. Mobile is booming, of course, but the PC is not dead, nor will it be dead five years from now. There are a hundred use cases for which a desktop or laptop is the only practical solution. Fantasize all you want about businesses abandoning real machines for iPads; reality begs to differ.

What do users want and ask for vocally? Screens that aren't garbage quality, resolutions that are not worse than mainstream laptops from 2007, SSDs instead of error-prone and driver-dependent hybrid garbage, an OS that isn't grating to the user, decent Wi-Fi, good build quality, and a decent price.

Do they really? IMO most consumers couldn't care less about any of that; it's a tech-savvy minority that wants higher quality screens and SSDs. That's exactly the reason why we are seeing zero innovation in the PC monitor space: the market doesn't really care. It cares about price most of all, which is what leads to the popularity of low-res screens and slow HDDs in the first place.

Oh well... This rant makes little sense and carries a lot of anger. The phrase "the PC is over and the PC sucks" appears several times with little explanation other than citing the growth of other markets. The truth is there is no replacement for the PC, and it doesn't look like there will be a serious replacement any time soon.

People can't make movies, edit images properly, use a compiler, debug, use a nontrivial spreadsheet, etc. on phones or tablets. Until that changes, the desktop PC won't die. PCs might not be as popular as before, nor have the same upgrade cycle as before; they might have lost relevance as a growing market, but they are far from dead.

> During this time Windows 8 came out and PC sales dropped 15% in the first full quarter after launch.

I don't think it's all Windows 8's fault. The average desktop PC is just too powerful.

I've been using Visual Studio 2010/2012/2013 with an i3 and an SSD for years now and I rarely run against any sort of performance bottleneck.

To compare what sort of performance requirements I have: in the project that I work on I have a solution with 28 projects that takes about 50 seconds to build from a clean build. Visual Studio takes care of incrementally building the projects during normal development, so usually I'm looking at ~5 seconds to build then launch the debugger.

I have absolutely no need to upgrade. No need = no sale.

I'm using Windows 8 as my operating system. It takes one step forward and one step backwards. I'm looking forward to Windows 8.1 but there's nothing so seriously wrong with Windows 8 that I need 8.1.

When I'm sitting in front of my PC and using Visual Studio, I'm not thinking "I wish this was actually a docked tablet". I have an iPad for mobility.

PC sales are probably undergoing a bit of a course correction as people who are satisfied with tablets buy tablets instead of PCs. But I suspect PCs will be around for a long time to come and, until that day, there's nothing for them to "[come] back" from.

Argh, PCs never died. I am reading this on a PC in an office full of PCs. I can't develop code for other people's PCs on a tablet - I couldn't develop code for a tablet on a tablet, it would be horrible. I need a PC.

I don't really buy his arguments. I think the Surface 2 is a good example of where the PC and Windows 8 are headed. For most people such a tablet, with the option to use it as a desktop PC through a docking station, is all the computing they need. The Surface 2 seems to do this job very well and with Haswell finally has decent performance and battery life.

In 5-10 years, I am pretty sure that real desktop PCs will be for professionals only, while most consumers are using some mobile tablet/laptop hybrid.

The fact that tablet devices and phones have entered the market, reducing the need to do everything on a PC, doesn't spell the end of PCs. It just means they aren't the only go-to computer anymore, which is a good thing for everyone.

A rock solid PC in the home connected to a nice big monitor and other useful peripheral devices, is a good thing to have. Be it a compact PC, laptop or desktop, Windows or something else.

"Post PC" is a stupid agenda-driven term. We live in a "post horse and cart" world, but the PC has no inherent limitations preventing it from evolving. If you bother to look, there's currently more enclosures, cases, and interesting "desktop" configuration variety for PCs than ever before, cheaper than ever before.

Coming back? Where did they go? When did they arrive? I don't think there was ever a time where the PC enjoyed significant market saturation. If anything, the PC "bubble" is deflating back down to normal levels.

It may be popular and "obvious" to accept that Intel and Microsoft own the PC market and decide where it goes, but the reality is that the market makes the demands and Intel either meets them or doesn't. This was clear when AMD pushed 64-bit first and Intel adopted it. It is also illustrated by the fact that PC sales have declined along with the stagnation of Moore's Law. That last point seems counter-intuitive, but it shows that Intel can't force a market if it doesn't deliver.

Both CPUs and GPUs have been "as fast as they're going to be" for some time now. For some reason, next-gen GPUs are joining the theoretical ranks of "Moore's Law is more a threat to economics and security than a fruit of civilization," giving us 10% YoY speed improvements but doubling up on security and management overhead, added coupling, APIs for compilers only, and dedicating more silicon to hypervisors and management that should go to the programmer and his compiler.

IT has become a completely dysfunctional market at the macro scale. The demand side doesn't know what it wants or how to shop for it, and the supply side is too scared to deliver anything new.

At the micro level, those who know what they want are still taking it one step at a time with their own feet, to their own drummer, but the mess that is the macro market is destroying knowledge and value like a wildfire. The few programmers who know what the Internet should look like, instead of one built to be profitable for thing manufacturers, aren't able to keep up with the mess big software is making of the collective wisdom of the netizens and the internet infrastructure, both physical and social.

I think people are really underestimating where things like perception computing are going. Something Intel is also invested in.

But maybe people should start looking at which jobs require using a computer to get essential work done vs. not needing one and therefore not using it. People who really think that entire generations are not going to need computers to do work are seriously mistaken, especially in BRIC/developing countries. I don't think the question really is whether PCs are dying; the question should be what the hell I can do with a ~$1000 machine other than look at cat pictures. We can thank Microsoft mostly for that. Seriously, I think people really underestimate how turned off the entire industry is by Windows 8; especially when they need to upgrade, the only reasonable choice is Apple.

Remember, Apple is the only company that is actually increasing laptop sales (MBA models). Clearly there is a market; it's just not being served by the current parties.

This 'the PC is dead' nonsense will come full circle eventually. Phones and tablets are PCs; we just haven't yet got to the point where we can satisfactorily dock them with a full desktop accessory set.

I personally see a scenario where everyone has a nice big LCD screen, full sized QWERTY, and probably still a mouse, in their study at home but carry their 'beige box' in their pocket. Just 5-10 years out imho. Unfortunately I think Windows is still positioned best to make this happen.

I'm in the complete opposite boat - I think the laptop experience still generally sucks. Battery life has only improved to a great point in the past few years, and performance is still generally lacking.

Meanwhile a great desktop lasts longer than ever, is cheaper than ever, and does everything extremely fast. I have a 5-year-old desktop that outperforms a lot of laptops out there, including my new MacBook Air & my work laptop (not even a month old), and that desktop pales compared to my half-year-old desktop (which costs maybe $200 more than the cheapest 13" MacBook Air).

I think part of the shift in the market is due to the great state desktops have become as long lasting devices (& thus declining sales), and some of the improvements on more mobile devices - I'm highly skeptical of any call that the desktop is going away anytime soon though, because the mobile experience is still seriously lacking in the sweet spot of performance, battery life, weight, and price.

Total Apple fanboy rant. The latest Ultrabooks are superior to the MacBook Air IMO. They are faster with better battery life and cost less. I prefer Windows 8 and 8.1 over Mac OS X ML and over iOS 7. I will never buy another iPad or iPhone (I have an iPad 3 and iPhone 5 atm) as I prefer the flexibility of my Ultrabook, and Android phones have leapfrogged iPhones in almost all aspects.

"Then comes the hardware, you know the part Intel does. It sucks too. Why? Because for the last 5 or so generations it doesnt actually do anything noticeably better for the user. Sure the CPU performance goes up 10% or so every generation, battery life gets better at a slightly faster pace, and graphics improving extra-linearly but that is irrelevant if you arent benchmarking."

In a sane world, this would be a feature, not a bug. PCs are now mature enough that you can buy a decent machine and expect that it will not be hopelessly outdated in two years. This is a good thing.

The problem is that hardware and software manufacturers have a mutually beneficial relationship whereby new software just won't function without that extra 10% hardware capacity you get from a new computer. Even if it's a word processor or a not-terribly-impressive game. (Remember "DirectX 10 requires the power of Vista", which requires a much faster computer than XP?)

And the other problem is that doofuses like the article writer have been so thoroughly gulled by the planned-obsolescence treadmill that they actually think that's how it's supposed to be, and throw tantrums if this year's hardware isn't at least 10 times shinier and more sparkly than last year's.

For those who didn't make it to IDF, it felt dead. There was very little attendance in most sessions and the expo floor was also pretty much empty. They actually moved food into demo areas so it looked like there was buzz. I'm pretty sure the "outside of Intel" attendee count was remarkably low.

PCs are not coming back in the sense that they won't see growth like they used to, but at the same time they're not going away. To be fair, most people don't actually need a PC. My wife uses a Nexus 7 as her main computing device, and loves it (she prefers it to a PC).

As an aside, Chrome OS devices are gaining a lot of traction... probably because they're more than adequate for most people's needs as is, and developers can always switch on development mode for a full set of Linux-y features...

I just bought an overpriced ultrabook with Windows 8 (Microsoft tax). The Windows 8 experience is indeed terrible for a touch screen. It seems like someone made an amateur touch-screen mod for Windows 7. The product isn't ready. I can feel the Steve Ballmer signature in this product.

I just wanted a PC because I thought it would be easier to install a traditional Linux on it, but UEFI. Oh the humanity, UEFI is the most disgraceful scam the industry ever pulled. How could they be so wicked?

I don't want to live in a world that the only good option is a monopoly of Apple machines and software but the PC industry is not even trying.

I don't think PCs are dying. I think computing consumption is increasing so desktop productivity looks like it is declining.

I think when a dock for tablets or phones finally happens for consumers, they'll just get it. Desktop mode is not intended for using your fat fingers on a touch screen; "Metro" mode is for that. Windows 8 is all about getting off the bus in consumption tablet mode, docking at your desk, and having your dual monitors, keyboard and mouse light up as you go into productivity mode.

1. It's not sucking if it's only meeting 95 out of 100 of your requirements. My desktop can do everything my iPod can do, if differently. There is a very select group of tasks, i.e. e-book reading or a drawing surface, that the tablet excels at, but these are not shared by desktops. They are two different products, with the latter offering a miniaturized and inferior version of the former.

2. Users see laptops that run faster on desktop-relevant applications: games, spreadsheets and programs like Adobe CS and MATLAB. The rest don't count, as they aren't a motivation to buy a desktop.

- OpenGL/DirectX drivers actually re-write the code stream and fix existing bugs from developers for specific games/versions. This is useful for the ecosystem because AMD/NVIDIA know more about 3D than most devs, and how to be performant.

- It also really means NVIDIA will do the same if this catches on, and then there will need to be at least two different supported implementations for anyone going down this route (probably AMD + NVIDIA + {OpenGL, DX}). Changing out the 3D driver to be pluggable in a backend-agnostic way will be extremely hard/annoying to code around.

My first thought when I saw the slides, especially the "cross-platform part", was "yay, no more DirectX!!!". This might mean that we're finally going to have AAA games on Linux too (DirectX was a big obstacle to that).

This is the huge benefit of the console wins for AMD that Carmack spoke about at QuakeCon this year. It sounds like, thanks to the new consoles, this will put AMD in the driver's seat. Everyone else will be stuck on DX or OGL. If NV/Intel implement Mantle it may well require a hardware change, since AMD defines this API going forward.

Physics professor here. As others have said, this is a really cute idea in physics that touches on some neat properties of the universe (like fundamentally identical particles, conservation of lepton number, and the effects of time reversal on particle properties), but:

1. It's more akin to philosophy than to science: this suggestion either has no testable consequences, or, if it does, its predictions look obviously false (see below).

2. In all of our observations of the universe there seem to be many more electrons than anti-electrons, but this concept would seem to imply that the number should be exactly equal. You can try to get around that, but anything you do is a stretch.[1]

3. The formalism of quantum field theory naturally includes plenty of situations where electrons and anti-electrons form "closed loops" in time. (The classic example is something like a photon giving rise to a virtual e-/e+ pair that immediately annihilates back into a photon, but they can get a lot more complicated.) Those closed loops would not be connected to the hypothesized "one electron" bouncing back and forth through all of time, so this idea would fail to explain why they also look identical to the rest.

So in my book, the beauty of this idea lies entirely in the fact that it could be suggested at all. It doesn't actually match reality, but it does give some striking intuition about how particle physics works.

[1] You could suggest that there is some other (unobservably distant?) region of the universe where antimatter dominates, but there's absolutely no evidence to support that suggestion. As another idea, the quote in the link suggests that "maybe the extra anti-electrons are hiding in the protons or something". But anything of the sort would seem to eliminate the whole point of saying "it's all one electron" (which always has the same properties, etc.).

It is one thing to say that physical measurement of the first particle's momentum affects the uncertainty in its own position, but to say that measuring the first particle's momentum affects the uncertainty in the position of the other is another thing altogether. Einstein, Podolsky and Rosen asked how the second particle can "know" to have precisely defined momentum but uncertain position. Since this implies that one particle is communicating with the other instantaneously across space, i.e. faster than light, this is the "paradox".[2]

If this were true, there would be much more antimatter in the universe than is currently observed.

The idea is that since positrons can be viewed as electrons moving backwards in time, it may be the same electron weaving its way to the future and past countless times. But that would imply that the number of electrons in the universe = the number of positrons.

I don't see any benefit to using this framework over Bootstrap. The author lists the following pros and cons:

Pros:

- Published under the incredibly permissive MIT License (sure, I don't know enough about the different licenses to comment on this)

- Very well documented (bootstrap is very well documented, and has a huge community behind it)

- Seems to be easier to learn/use (that's subjective, but I think bootstrap is plenty easy to use)

- Has a Grid layout (yes... who doesn't use a grid layout?)

- Uses LESS (so does bootstrap)

- A very nice implementation of buttons, modals, & progress bars (again, subjective, but bootstrap has a great, and simple, button and modal implementation, haven't used the progress bars. also, <button class="btn"> is much better than <div class="button">, especially if you're going for semantics)

- Uses an icon font for many of its features (k, sure)

- Has some very useful extras such as the inverted class (so does bootstrap)

- Open to community contribution (so is bootstrap)

Cons:

- No image slider (bootstrap has this)

- No thumbnail classes (bootstrap also has this)

- No visibility classes (bootstrap has this)

- No SASS (does have LESS) (not really a con but ok)

- Not at a release >1.0

I'm not trying to diss the framework (I haven't used it), but the author is claiming that it is somehow superior to Bootstrap and Foundation and does not present any evidence supporting this claim.

Some of the Bootstrap v3 changes, like using class="glyphicon glyphicon-..." vs. just "icon-...", are things that seem inferior from a pure HTML standpoint, but were motivated by performance differences, especially on mobile devices. There are some hard-learned lessons in the existing frameworks that shouldn't be thrown away lightly. (In a static site it may just be a search & replace to change, but in a single-page app such things could be much more complicated to change after the fact...)

Will this fill the void of CSS frameworks for rapid WebApp development?

Bootstrap is fine, but even in responsive mode it is not made for developing sound application interfaces on mobile form factors. The grid adjusts, but the elements do not do a good job of resizing for touch over mouse use.

RatchetUI (http://maker.github.io/ratchet/) seems to be the best hat in the ring currently (one of the lead devs is one of the guys from Bootstrap and another is from Zurb), but the level of commitment to the project leads me to believe even the founders aren't sure what they want out of it.

$ create "Website that has a homepage with a picture of dogs and a picture of me. Oh, and a little blog with a twitter feed. Make it look cute, idk, like pink and blue, but not bold, but that washed out water color that's in right now." ---> Making... ---> Looking for pictures of puppies... ---> Writing several blog posts for you... ---> Created! http://my-website-blah-blah.tld

I find it interesting that the author of this is so earnest about getting an image slider. I can't remember where I saw it, but it made it to the front page of HN... it was a single-purpose site demonstrating that sliders/carousels are lazy answers to information hierarchy challenges. I agree, although I've been guilty of using sliders in the past. However, I'm trying to make amends and come up with design solutions that don't require a carousel or slider.

I don't have any research to back up that sliders are detrimental if not just ineffective. But I do see how they can be just convenient...and not in a good way. The fast food of design? Tastes good at first but doesn't really sustain?

This is not horrible. The architecture is pretty decent, and it seems a lot of thought went into categorizing the different aspects of the library. I especially like how each item is broken down into types, groups, states, and variations.

The UI makes some interesting choices, but I'm sure this could be skinned with a bit of elbow grease.

I'd say my only real complaint is the validation on forms - it seems particularly inefficient to declare an object in JS, pass in each form field, and set its rule as "empty". I would consider utilizing a data attribute and doing something like data-validate="empty". You could even accept a list of rules with something like data-validate="empty email", etc. The only downside would be figuring out a way to still accept custom messages - something like data-message="Please enter your first name." would be fine for one validation check, but passing in multiple messages for multiple validation rules would get ugly fast.
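
For illustration, here's a minimal sketch of that data-attribute approach (the attribute names like data-validate / data-message and the validator registry are just my reading of the suggestion above, not an existing API in this library):

    // Markup this assumes (hypothetical):
    // <input name="first" data-validate="empty email" data-message="Please enter your first name.">
    const validators: Record<string, (v: string) => boolean> = {
      empty: (v) => v.trim().length > 0,                  // "empty" = must not be empty
      email: (v) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v), // naive email check
    };

    function validateForm(form: HTMLFormElement): string[] {
      const errors: string[] = [];
      form.querySelectorAll<HTMLInputElement>("[data-validate]").forEach((field) => {
        const rules = (field.dataset.validate ?? "").split(/\s+/).filter(Boolean);
        const failed = rules.filter((rule) => validators[rule] && !validators[rule](field.value));
        if (failed.length > 0) {
          // One shared message per field keeps the markup simple; per-rule messages would
          // need something like data-message-empty, data-message-email, which is where it gets ugly.
          errors.push(field.dataset.message ?? `${field.name}: failed ${failed.join(", ")}`);
        }
      });
      return errors;
    }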

Interesting, but I wonder how well it will do? Bootstrap is already very widespread and benefits from network effects. Foundation is a nice alternative to Bootstrap that appeals to people looking for such (I'm a Foundation user myself). But a third framework? Can it get traction?

tl;dr -- While many frequent filers are legit book-writers/journalists/legit-folks-who-I-honestly-wish-well, sometimes they're essentially spammers, so I can see why some offices might keep a list of those who are to be put at the bottom of the pile because of their patterns (although nowhere I ever worked did, god I wish we did).

As someone who once in the past had to handle (an ungodly amount of) FOIA requests, should there be a list? I don't know, primarily because I don't know what the utility is. That said, some opinions derived from experiences:

1) Some people are serial filers. They haven't bothered to do the research up front to ask relevant questions and after a bit when you see their names you know that when you read the next paragraph it's going to be asking about a) things you know you cannot disclose (not for some BS reason but because of things that actually should remain secret), b) aliens (no we don't have them), c) blatant evidence of government conspiracy, generally of the NWO sort (duh and or hello? Project BlueBeam doesn't keep paper records!).

2) There were also frequent filers who weren't crackpots, FOIA spammers, or asking for something they knew in advance could not be provided. These were generally authors. We always got these people what we could, and even though my shop didn't keep a list, the other poor souls stuck on FOIA duty and I knew them by name. This meant that we basically knew what they wanted when they filed, which helped us get them their stuff faster with less overhead. (N.B. this did not stop us from cursing them for their curiosity and the resultant carpal tunnel.)

The FBI's preference for total secrecy, in complete contrast to the law, is a very scary fact indeed. There are many things that I love about living in America - you get to be at the pinnacle of tech development, you can make a good wage, but all that is starting to pale in comparison to the Government's agenda for KGB-like activities. The USA used to be the champion of civil rights. Now it uses that image to perpetrate abuses of public freedoms that would impress the NKVD.

Kind of funny how a lot of people are all bent out of shape about surveillance, privacy and what not. But then this guy published a list that the FBI has of people who have requested a lot of docs... and he included their full names. I don't think it is unreasonable at all that the FBI keeps a list of people who are making a lot of FOIA requests. That is their data to track if they want to. But I do think it is irresponsible for the guy to publish all those names.

Over the last forty years, a Moore's Law of processor speed, transistors per chip, information storage and I assume other things has operated [1].

Moore's Law of processor speed has definitely broken down in the last ten years, and I assume that this research is attempting to address this fundamental limitation. I believe Moore's Law of transistors per chip is still here, but without increasing speed this tendency is nowhere near as useful (we don't want a whole lot of slow cores, we want a few fast cores).

Moore's Law of data storage is still here but it's also not as useful without a similar exponential increase in system data throughput [2].

I am from Pakistan, and this damn island is all everyone is talking about. My Facebook feed is flooded with people planning to go visit the island :P. Personally, I don't see what all the fuss is about. Might just be the Rise of the Planet of the Apes people :P

Also, I really wish HN mods would not reflexively rewrite headlines. The news here was the remarkably sudden appearance of a decent-sized island, the fact that it emits flammable gas is distinctly beside the point.

From a different article about the island: When a devastating earthquake struck the remote Awaran district in Pakistan's Baluchistan province on Tuesday, it killed hundreds of people and left thousands homeless, as the government struggles to rescue those who need help.

IIRC, Pakistan is demographically a very young country, with overall low education levels and a lot of challenges. I will voice my hope that this oddity somehow brings them more help with the aftermath of the quake than they might otherwise be likely to get.

Could any seismologists / other geographers go into some more detail? Pockets of inflammable gas are stored under 200m of seafloor, seismic activity heats them up, they raise the whole seabed to the ocean surface, and then it drifts back down again?

I can't say it's making a lot of sense. What is the inflammable gas? Not methane, presumably. I guess it's some form of honeycombed rock with lots of little pockets, presumably lava that has rolled over the seabed, so not really "attached". I mean, how does it all work?

Fun stuff, especially if you're a U.S. geography nerd (states with the largest coastlines, states that border the most states, all that good but mostly arbitrary stuff)

Pedantic, I'm sure, but from the title before I clicked on it I was trying to think of the state whose shape might independently be considered the most concave (though that may be much harder to define). This version of concavity depends largely on the shapes of the states around it (e.g. if Nevada split into 6 horizontal states, suddenly California would be the winner).

Aaaaannnd now I'm playing FTL again. Lining up those 5-room beam strikes is just too much fun!

On topic, though, this is pretty cool. Rivers and coastlines seem to be the best way to get appropriately jagged borders. It's interesting to look at states across the map from east to west and see the shapes get simpler and more geometric over time.

It is an odd definition of concave. It would make more sense to me to require that the line joining two points on the state's border does not cross in and out of the original state when determining the number of other states it crosses... This doesn't really capture concave as a geometric concept either, but it is more aligned with the idea that it is a local property.

Any idea whether this depends on the projection used to get the map in the first place? I mean, the "straight lines" are really curves that lie on the Earth's surface, unless I'm missing something major.

It appears that the author makes a mistake in his attempts to simplify the problem, because although he is correct that he only needs to look at points on the edges, he goes on to suggest that he is looking only at corners of the polygon, and not at any of the (infinite number of) points between the corners.

*droppable: true or false (false by default). If true, dropping an image on the canvas will include it and allow you to draw on it

That's awesome! I was thinking that this would make a great coloring book, and being able to just drop lineart into the back of a canvas and have your kid go to town is awesome. Really nice work, Leimi!

A difficulty I had was making the lines smooth when the cursor positions are not sampled very fast by the browser. I'm curious as to what your approach was, since everything seems perfectly smooth. I'll be reading your code...
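
One common way to get smooth strokes out of sparse samples (a guess at a general technique, not necessarily what this app does) is to draw quadratic curves through the midpoints of consecutive samples, so each raw point becomes a control point instead of a corner:

    function drawSmooth(ctx: CanvasRenderingContext2D, pts: { x: number; y: number }[]): void {
      if (pts.length < 3) return;
      ctx.beginPath();
      ctx.moveTo(pts[0].x, pts[0].y);
      for (let i = 1; i < pts.length - 1; i++) {
        // End each segment at the midpoint of this sample and the next one;
        // the sample itself is the control point, which rounds off the corners.
        const midX = (pts[i].x + pts[i + 1].x) / 2;
        const midY = (pts[i].y + pts[i + 1].y) / 2;
        ctx.quadraticCurveTo(pts[i].x, pts[i].y, midX, midY);
      }
      ctx.lineTo(pts[pts.length - 1].x, pts[pts.length - 1].y);
      ctx.stroke();
    }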

This is one of the slickest implementations of a sketching app I've seen. The API is nice and the controls are self-explanatory. I've been working on a drawing app for the past few months (journeyship.com) and I'm considering replacing the main canvas with a sketching pad like this one.

For those of you who are curious about other projects out there that offer similar functionality, take a look at these:

Nice! Leimi - you should check out my project xBoard: https://github.com/eipark/xboard. It is somewhat similar, but focuses more on making the drawing canvas 'video'-like with a scrubber and recording functions. You should fork or pull pieces from it if you'd like that sort of functionality.

Seems to perform quite well, though the fill tool is a bit useless thanks to the antialiasing on the lines that can't be turned off. Considering the simple approach of this, I'd either remove the AA (or make it togglable) or get rid of the fill tool.

I did comment to say that it works well on iPad, but I would like to add that there are some interesting 'features' when using 2 fingers: they don't work at the same time, but on one board it's almost like an Etch A Sketch where one finger takes over from where the other left off, on another board it draws between the fingers, and on yet another board the draw point switches between where each finger is. Looks like fun to explore later...

This doesn't keep track of my cursor when I'm outside the boundary of the canvas. Are you listening to mousemove on the canvas element? If so, you might want to move that to the window or document, so you can keep track of the user's mouse even when it's not moving in the canvas.
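
Something like this is what I mean (the element id and stroke handling are made up for the example):

    const canvas = document.querySelector<HTMLCanvasElement>("#sketchpad")!;
    let drawing = false;

    canvas.addEventListener("mousedown", () => { drawing = true; });
    // mouseup and mousemove go on window so the stroke keeps tracking
    // (and properly ends) even when the cursor leaves the canvas.
    window.addEventListener("mouseup", () => { drawing = false; });
    window.addEventListener("mousemove", (e) => {
      if (!drawing) return;
      const rect = canvas.getBoundingClientRect();
      const x = e.clientX - rect.left;
      const y = e.clientY - rect.top;
      // ...extend the current stroke with (x, y), even if it's outside the canvas bounds...
    });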

I have a Wacom tablet that I routinely use for sketches. The only problem I always have is that most applications have trouble recognizing the eraser tip. I think Wacom has some sort of web plugin that comes with the driver when you install the tablet. Any chance of adding that style of tablet input?

Although it's a fun read, it's a classic example of diving into coding without giving a project its due diligence. Nothing derailed my projects more consistently when I started out than not understanding the user's needs. They'll never tell you what they want, only what they don't want after you deliver something.

I think that's the key difference with an experienced dev/BA: one who can actually sit with the stakeholders, build the system on paper, and go through each of the problems as the diagrams connect. What you end up with is the stated requirements (tip) and the unstated assumptions (iceberg).

These types of projects are easily spotted, as they're often called "quick" or "easy", which in layman's terms means no one's really thought about it yet.

For those who don't know about the author of this post: his name is Mark Jason Dominus, the author of one of the most awesome programming books that one can ever read.

Higher-Order Perl is available for free download. If you read it you will see some amazing insights into programming techniques most people would never have heard of or encountered in MegaCorp jobs. You will also gain a great appreciation for Perl in general and understand how it can be an amazing language of choice for a wide variety of problems.

I worked at Prudential about 10 years ago, as a FTE. Our small division mainly ran on a bunch of custom Access reporting applications. It wasn't quite cutting it, because Access, so it was decided that we would build a portal on the company's intranet. The only problem was that we, as accidental web developers, were not allowed to run development web servers on our dev machines, because they were locked down by corporate. We had to use an extra PC that, by some miracle, had IIS, and develop against that remotely. Good times.

The shittiest project I ever worked on was a php project that was converted from another language (I don't remember which one). This doesn't sound bad, except they used software to automatically convert it. The PHP code had no comments, minimal white space, and the variables were all hex. My job was to fix bugs.

The title reminds me of the shittiest software job/project I ever worked on- short and sweet: Got hired off craigslist. It was all php. Their main competitor was "the spreadsheet". Worked directly next to a cold caller that repeated the same phrase over and over again, fake laugh and all. They were all from the same church group and tried to convert me multiple times. Paid minimum wage.

In addition to being funny in retrospect, it was a good lesson to me to learn that no matter how shitty your current situation, you can always improve it.

That's not bad at all; it sounds more like business as usual. I think it's my regular workday. Btw, did the customer require extensive documentation, escrow, or require you to fix their data when they couldn't get it done (like invalid postal area codes linked to wrong addresses, fixing postal code / city information based on the postal area code, or looking it up based on street address)? Been there, done that.

Did you spend several weeks in meetings where they can't decide how their stuff works, and you just keep wondering if they'll ever decide what they actually want? Of course they're going to completely change it a week later. They don't have a clue how things should technically work, or even what the actual business process is. Because they have bought an integration, everything must just automatically work, right? We need to know what females in the 20-25 age range have bought during the last month. Err, but we don't record customer age or sex? Well, but our management team needs that information.

It's a sure alarm sign when they want to know how much "this" project will cost, but nobody really knows what "this" is. Also, it needs to be completed by the end of the month. I have declined so many projects, and clearly told customers why I'm not going to do anything for them at all, unless they accept it as an "agile" project with an unlimited budget. Then I'll promise them that I'll personally see that it gets done, but it's going to be expensive. - 15 years of ERP/POS/BI/CRM integration programming & consulting.

> These days I would handle this easily; after the first or second iteration I would explain the situation: I had based my estimate on certain expectations of how much work would be required; I had not expected to clean up dirty data in eight different formats; they had the choice of delivering clean data in the same format as before, renegotiating the fee, or finding someone else to do the project.

Great advice for dealing with issues we did not consider in our estimate.

Had he charged hourly instead of a fixed price, would this project have been less shitty?

I'm sure this is the very tough first lesson any new consultant with limited experience would learn. The summary is that the specs are incredibly important, they should be expensive to produce and they should protect you and your client.

The shittiest project that I worked on just finished, one where we were not allowed to write any test cases because we didn't have time. I just can't believe that I finished a fairly big project without writing a single test case. I hate my company, my role and managers, and the sales people who negotiate tight deadlines. Overall, don't work for any consulting companies out there. They couldn't care less about code quality. They just want money, and no ethics.

This is how life insurance companies operate on a daily basis. OP had a pretty shitty deal but he should count his lucky stars that he didn't have to interact with actuaries, who would have multiplied the same problems ten-fold.

I am working on quite a tedious project right now. It involves 10-year-old, quite extensive Visual Basic 6 programs. No source control was used. In our company it is common practice to hire interns for 3-month periods to work on production software. A mix of programming styles can be found in this project. Some functions return 0 when they fail. Others return 1. Or -1. Or False. Or "False". I love it!

If that's the shittiest project you ever worked on then you're a lucky guy. Users will use a software project for all sorts of political purposes that have nothing to do with you, and it does nothing but f up the process.

1. You should spend more time designing (away from the computer) so that you find a _problem_ and then come up with a reasonable solution, instead of putting together a list of features. (See Rich Hickey's talk "Hammock-driven development".)

2. Think of the sum you're going to charge for your consulting and then multiply it by 4 and charge that because you have to take risk and other factors into account.

I can highly recommend Flawless Consulting by Peter Block. He covers all these issues and many others. Self-awareness is critical to success. If you are inexperienced you need to be able to recognize/acknowledge your inexperience. Then read everything available on the subject and interview every expert you can identify. Clint Eastwood said it best: "A man's got to know his limitations". http://www.youtube.com/watch?v=_VrFV5r8cs0

Sounds like poor contract negotiation more than anything. Fixed quotes are very tricky things to navigate. I simply don't go there. If the client insists then I try to negotiate a fixed budget; then, when the budget is running dry, they can extend the budget or reduce scope. If they don't agree to that, I walk.

Interesting read. However, I helped build a web based system for the management of a bowel cancer screening programme. Considering what would be sent back on the testkits and processed into the system... it's the shittiest project I've ever worked on, but not for the same reasons :P

Wow! I am jealous. If that is the shittiest project he ever worked on, he really doesn't have too much to complain about.

Add to this that this was back in 1995. Companies had no clue what the internet was nor what they wanted to do with it, so this kind of cluelessness about what the customer wanted was pretty much standard for most companies up to at least 1998 - 1999.

The guy has either been tremendously lucky, or he has not worked on too many different projects / companies over the last 18 years....

I've often wondered why Apple doesn't have an "iPad Pro" that includes a Wacom or Wacom-like stylus system. I know that Apple is very much "no stylus" and I'm positive that was the right move for most uses, but there seems to be an obvious market here for creative types that need a real stylus (as in tip angle detection, pressure sensitivity, on-stylus buttons, etc).

I've been happy with my Surface Pro; the two changes I'm most looking forward to are the docking station and Haswell. The extra pixel density is a nice-to-have, but being able to pop it in and out of a docking station is killer. There are some other minor things I'd like addressed (the impossible-to-use microSD slot, for one. The always lost stylus is another), but I'll rebuy and probably hand the Surface Pro on to the kids.

The new docking station and the ability to drive high-resolution displays are a lot more important than sites are recognizing. We can finally truly have one machine for everything, on the go and docked at work or at home. No doubt it'll be even more of a pleasure in a Surface Pro 3 that's lighter and more powerful.

I don't really see the current crop of convertible laptop/tablets as solving this problem due to low perf and the fact that I'd never be able to code all day on those cramped keyboards.

I did a bit of research on tablets and hybrids recently since my fiancée needed a new machine. I ended up buying her a Sony Xperia Tablet Z (Android) since there really weren't any good Windows hybrids on the market except for the Surface Pro. However, the battery life for the first generation was terrible, and Surface RT isn't a smart investment because the OS will be [place bet]dead in the next few years cough[/end bet].

However, I personally want a compact machine with the full Windows 8 experience so I've been waiting for the Surface Pro 2.

If this thing has an all day battery life it will be an instant sale for me.

I had a Fujitsu Lifebook P1510 running Windows XP back in 2006... a 9" convertible tablet with a capacitive stylus. It's still my favorite machine that I've owned, and felt like the future (sitting on a couch, browsing the net & playing games).

It saddened me greatly that Microsoft forbade anything under 10" for Vista & Win7. The Surface Pro is the first Windows machine I've actually desired since then -- the only thing that held me back was the 4 GB cap on RAM. I already pre-ordered an 8 GB model and can't wait to play with it.

One thing that I hope people don't miss is that the problem "Google Alerts" solves is an information retrieval problem that is still unsolved (at least in the open literature ;-)

Conventional search ranking algorithms give you some score from 0 to 1, and the only meaning of the score is that a document with a higher number is more likely to be relevant than one with a lower number. The results usually are good at the top and they gradually get worse as you go down. You stop either when you're satisfied or when it feels like a waste of time.

Suppose, however, you wanted to search scientific papers or news articles about a topic and see the results ordered in time. All of a sudden the junky documents that were hidden are visible; the results are embarrassing even for world-class search engines.

You might say, "let's filter out documents that have a score less than, say, 0.8".

It doesn't work, at least not very well. You run into two problems. Search engines that crush TREC search evaluations have worse than 70% precision when the score approaches 1. Also, you'll see plenty of cases that are obviously a direct hit where the score is 0.5.

The difficulty of the problem is one thing, but the academic approaches people have taken in IR are another part of the problem. The methods used for most TREC evaluations are designed NOT to give search engines credit for "knowing what they know", because to score well on "knowing what you know" you need to do a super job on easy queries and recognize that they are easy queries, and if you don't do that, how well you do on hard queries won't shine through.

Another one is the whole idea that you need to normalize scores from 0 to 1. You don't. A while back I developed a topic similarity scoring system that just counted the number of traits two things have in common, rather than using a dot product or K-L divergence or anything like that. It turned out that when the score was 40 you knew the results had to be good, because 40 pieces of evidence is a lot of evidence. If you had 4 pieces of evidence, it was clear things were iffy. I might have gotten "better" results in some sense with a more complex algorithm, but the scores from the simple count were meaningful -- from my point of view, the "better" algorithms are stupider because they are erasing their knowledge about their own confidence.
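
Roughly speaking, the counting idea is as simple as this (an illustrative sketch of the idea, not the original system):

    // The score is just the size of the intersection of two trait sets,
    // so "40" literally means forty independent pieces of shared evidence.
    function sharedTraitScore(a: Set<string>, b: Set<string>): number {
      let count = 0;
      for (const trait of a) if (b.has(trait)) count++;
      return count;
    }

    const doc = new Set(["jazz", "saxophone", "1950s", "blue note", "hard bop"]);
    const topic = new Set(["jazz", "hard bop", "blue note", "trumpet"]);
    console.log(sharedTraitScore(doc, topic)); // 3 shared traits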

It's also a big problem in machine learning: often you use an SVM or Bayes or a neural network and you get some score, and if the score is greater than some threshold it is in the class, otherwise it isn't. Because these algorithms almost always get the wrong idea about the prior distribution, you can often make a "failing" machine learning algo very useful if you do logistic regression on the output and use that to convert the output into a probability score.
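
As a concrete sketch of that last trick (generic Platt-style calibration on a one-dimensional score, not anything from Watson or a particular library): fit p = sigmoid(a*score + b) to held-out (score, label) pairs by gradient descent on log loss, then use the fitted curve to turn raw scores into probabilities.

    type Example = { score: number; label: 0 | 1 };
    const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

    // Fit p(y=1 | score) = sigmoid(a*score + b) by gradient descent on log loss.
    function calibrate(data: Example[], lr = 0.1, epochs = 5000): (s: number) => number {
      let a = 0, b = 0;
      for (let e = 0; e < epochs; e++) {
        let da = 0, db = 0;
        for (const { score, label } of data) {
          const err = sigmoid(a * score + b) - label; // gradient of log loss w.r.t. the logit
          da += err * score;
          db += err;
        }
        a -= (lr * da) / data.length;
        b -= (lr * db) / data.length;
      }
      return (s: number) => sigmoid(a * s + b);
    }

    // Toy usage: raw classifier margins in, calibrated probabilities out.
    const toProb = calibrate([
      { score: 2.3, label: 1 }, { score: -1.1, label: 0 },
      { score: 0.4, label: 1 }, { score: -0.2, label: 0 },
    ]);
    console.log(toProb(1.0));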

Anyhow, if you want to learn about this and stop making 'stupid' intelligent systems, stop what you're doing and read the issue of the IBM Systems Journal about IBM Watson, because that's what Watson is all about -- it converts all of the signals it gets into comparable probability estimates, and then uses decision theory to take actions that maximize its utility function (i.e. "business value").

Google Alerts pretty much only alerts me of news stories. Unless it would show up in Google News, new links never make it to my e-mail.

For example, a customer posted a nice video review of Improvely on YouTube today, which I can find through Google by limiting the date range to today with the "Search Tools" button. No e-mail from Google, despite the alert set up for the brand name.

On the other hand, I have one set up for "Surface Pro" and get daily e-mails when the big tech blogs mention it. Smaller blogs and forums, which are no doubt talking about Surface often too, never show up in those alerts. The e-mails even say "News" up top [1].

A few years ago, every mention would trigger an alert. Something did change. 3rd-party apps like Mention [2] alert me more often.

Like others have said, Google alerts only notifies you of news articles.

My company http://www.Alertification.com takes a more general approach and alerts you when something on any public website changes. For example, you'll get an email or text message when an Amazon price drop occurs, when a college class opens up, or even when concert tickets go on sale.

The sample images are of two types: images which are mostly of the subject (cat or dog), and images which have a cat or dog in them, but are not necessarily focused on them.

In computer vision, these two types of images are traditionally handled separately. First, a detector for a class (like "dog" or "cat") is run across the image at all locations and multiple scales to find where the things are. Once you have the locations, then an image classification algorithm is run for each detection window to either confirm it, or to give you more information about the object.

The latter often takes the form of giving more fine-grained category information, such as what species of dog/cat it is. Both leafsnap [1] and dogsnap [2] are this type of program; i.e., they both assume that you've captured a single subject, roughly centered in the photo window, and that you already know that it's a plant/dog.

Sometimes you don't have to run a detector even if the object is not the focus of the image, if the context/setting can narrow down the answer for you. For example, if you were deciding between dogs and airplanes, it would be pretty unlikely to see a dog on a runway or a plane in a living room, so just by classifying the entire image, you can do reasonably well. That's not the case here, as dogs and cats will, for the most part, appear in pretty similar environments.

So if I were attacking this problem, I'd first see how many images were of the non-focused type. If not many, I'd basically ignore them and focus on building a classification system. Note also that if you're constrained to make a hard choice between only two classes, that's a much easier problem than a more open-ended "what is this?"

As many have pointed out, deep learning approaches seem to be the current state of the art on classification tasks such as these. But deep learning requires a lot of training data to be effective. A procedure I've been hearing many people use to great success is to use the Imagenet [3] hierarchy and images to train a deep learning classifier (i.e., as if you were going to compete in the Imagenet Large Scale Visual Recognition Challenge [4]). Then use the trained network, chop off the last stage (which makes the final prediction), and replace it with an SVM trained on your specific training data. In this way, you'd be using the network only as a feature extractor.

I think that if I were to do this, I would use facial landmark recognition (using something like a Haar classifier). Haar-like features have been used to aid in (human) facial recognition since 2001 to great success[0]. And recently, people have been thinking about using similar methods for animal tracking[1].

If one could locate the face in the test set, she could also presumably find some landmarks of interest: eyes, nose, mouth, etc. Considering that dogs typically have longer snouts, cats have pointier ears, etc, this data could be used to differentiate between a dog and a cat. There would be difficulty dealing with awkward angles and bad lighting though.

"You too could solve this problem, a get a Phd and joined that overcrowded labor market"

Just consider that if you have M categories and N PhD students who can each spend four years creating one clever algorithm to distinguish category i from category j, then you need M(M-1) PhD students for a complete classification system - which, when you consider how many categories there are in human knowledge, works out to more than can even be pumped out by excess student loans today, and exponentially more than can find tenured positions.

I.e., it's one more addition to the "deep but not wide" algorithms of computer vision. Twenty years ago we might have believed this adding-to would lead to something broad and general, but it's been twenty years and the trend is becoming clear.

I understand Kaggle wants someone to make an algorithm to "identify the entity", but if this is used as an alternative to CAPTCHA, is it not possible to defeat this HIP (Human Interactive Proof) by reading the image and the classification data from the same Petfinder.com and just doing image matching?

It may take some time to match from 3 million images, but doable right? Or am I missing something here?

A captcha of 8 characters has a space of ~26^8 (~208 billion) possible combinations in a brute-force attack. Dividing a set of 12 images between dogs and cats has a space of 2^12 (4096) possible combinations in a brute-force attack.
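
Just to make the gap concrete (plain arithmetic, nothing more):

    const captchaSpace = Math.pow(26, 8); // ~2.09e11 possible 8-letter answers
    const hipSpace = Math.pow(2, 12);     // 4096 ways to label 12 images as cat/dog
    console.log(captchaSpace / hipSpace); // the HIP's search space is roughly 5e7 times smaller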

So narrow and so useless. What exactly are dogs? Almost all cats look the same and are almost the same size. But dogs? Dogs vary greatly in size and looks. Some of what we accept as dogs today, if you took them back to the past before TV/computers, people back then wouldn't recognize them as dogs because of the looks or size. They would have to hear it bark and see it behave like a dog to classify it as such. If all they had was a picture, they might very well refuse and reject, say, pugs as dogs. So an algorithm to distinguish dogs from cats without context (behaviour, sound) will be more difficult.

This article lacks substance. Martha Stewart may not have it in for patent trolls at all. It may be that she's a big target and her counsel has advised her to make an example out of any patent trolls that approach her. This could also mean that she has lawyers who are interested in racking up big bills.

Interesting that Martha Stewart is suing Lodsys in the US District Court for the Eastern District of Wisconsin:

"On information and belief, Mr. Small [CEO of Lodsys] conducts Lodsyss business from an office located in Oconomowoc, Wisconsin, within this jurisdictional district. Accordingly, on information and belief, Lodsyss primary place of business and/or headquarters is located within this judicial district."

Maybe that court will be more reasonable than the one in Texas where the patent trolls like to file their suits?

The Myst series is probably one of the games that brings back the most memories. When my brother and I heard they were turning URU into an MMO we were pretty excited, but also scared that there wouldn't be a big enough user base. We were right, and the project 'died'. Then I told him they'd probably open-source it, and there we are!

I'm gonna play the Myst series again. Any programmer/hacker will love this series. They're real classic brain crackers, and worth the play. You will get pulled into the Myst worlds as if they were your own. It's so immersive!

I was a big Riven fan but I never tried URU - this seems like a great chance to do that. Could someone sum up what the multiplayer experience is like? I could never really imagine how this works. Are there puzzles that you solve together, something like in Portal 2?

This would be great and worth considering if helmets weren't disposable items. Obviously a crash will cause a write-off (and good luck convincing insurance your helmet was $2000). But helmets get stinky, they get dropped, they get exposed to UV. A five-year replacement cycle is recommended by the Snell Foundation. [1] Okay, so $400/year; some folks will part with that. You'll cry real tears if you drop it hard enough to crack that carbon fiber shell.

Then there's fit. Some people have a Shoei head, some an Arai head (helmet brands). For $2000, that sucker better not cause hot spots on my noggin.

Replacement shields? Shields are consumables, IMO. And I want a dark one for sunny days, a light one for when I'm out after dark. I want a new one when the original gets scratched up.

I like the idea, but a cool HUD is just one thing to consider when buying a helmet, and not high on my personal list if I'm wearing the thing up to twelve hours a day.

Safety gear is great and I wear a lot of it when I ride, but on a motorcycle you really depend on avoiding accidents -- The gear only helps so much compared to a car. It is sort of like picking the fast and nimble vehicle in a videogame instead of the tank -- except you actually die or get hurt.

Therefore distractions are perhaps the biggest safety concern (in particular here in south Florida, everyone is out to get me :-) ). I could see the view that maybe not worrying about navigating is actually less of a distraction, but for me I don't think that would be the case. Every second I spend looking at the HUD is a second I'm not:

* Looking for someone making a left-hand turn right in front of me
* Noticing who is texting while driving
* Noticing stuff on the road that could result in a loss of traction
* Watching cars with stuff that might fall off in front of me
* Observing people that look like they might run a light
* Focusing on good control of the bike
* Planning escape routes
* etc....

In short, accident avoidance in the long run takes constant focus (and unfortunately, a bit of luck too).

As a rider, I'm worried about what happens to the electronics, projector (+ lens/ bulb), reflectors, and screen in case of an accident. If the helmet deforms, will those shatter, possibly scattering shards around while my head is bouncing in there?

I would absolutely love this, but I'd wait to see what the actual crash tests look like before risking my head to one.

Edit: thinking about this more, I'm even more worried about trusting my safety to a company that has never even made a helmet before. I'd be much more confident if they had approached established helmet manufacturers and worked with them to get these new features into something that's confidently safe.

The focus on battery life, charging, UI and menu systems also makes me wary -- there's almost nothing about safety in the presentation, except for the material ('carbon fiber'), and nothing at all on crash testing.

I've been riding a motorbike for about a year now. In the beginning, having navigation hints voiced over Bluetooth was the only way. Now that I'm confident riding, I can just use the navi mounted on the handlebar and only have to look at it now and then. So I think always-on navigation on the visor is a bit too much. If they could find a way of showing just an arrow most of the time and only showing a map when needed ...

The key question here is the ergonomics. It takes time to adjust your focus near-far, and you can't just stick a GPS on your shield and make it work. It is distracting and dangerous. Cockpit HUDs have different spatial relationships with your eyes.

Projectors tend to put out a lot of heat. I would worry that my head would get hot wearing it. I would also be concerned about the weight.

While HUDs are cool, there is a reason that most major automobile manufacturers haven't put them in a car yet, and the reason has nothing to do with new technology, because HUDs have been around for a long time. The reason is that anything that goes between you and the windshield will obstruct your view. Any coating that is required to reflect the light from the projector to your eyes will interfere with light from the road, even with the projector turned off. You have one fatal crash wearing this helmet and the insurance companies will make mincemeat out of this company. I'm guessing that $2000 doesn't include much for the company to put towards an insurance premium.

I've been wondering recently if iBeacon is the first showing of a 2 year long plan to crush NFC and take over POS payments, or whether Apple released iBeacon just for better Passbook functionality, and then the world anointed it as the NFC killer and all-hail-the-new-king - and now Apple is scrambling to come up with a comprehensive plan to do what people think iBeacon was meant to do.

(They've delayed any detailed specifications for weeks now.)

Doesn't matter to me. Anything that finally gets Bluetooth LE on a roll is fine with me. It has awesome potential.