Guys, does V8 still deoptimize on ES6 features? For example, would using, say, const/let in a function prevent V8 from optimizing it as a whole? That was the case some time ago, when these features were still behind a flag.

ES6 support is nice, especially if there is no performance hit for using ES6 features. I slightly prefer TypeScript, but the extra language features in ES6 really make JavaScript development more fun for me. Thanks to the newly re-combined Node team!

Does anyone have a good source of information about the major changes between Node and io.js? I haven't kept up with the community recently and I'm curious as to what the delta really is and what merging io.js back into Node will mean practically.

The io.js site mentioned nothing about the merge when I checked yesterday, and neither does the nodejs site, which is a bit odd.

The new version number follows io.js's versioning scheme rather than nodejs's existing one, which is interesting too.

I just began a PHP device-configuration-management project and was strongly persuaded by a senior PHP developer that I should use nodejs instead, as he thinks nodejs _is_ the future and many big guns are using it for real deployments (Netflix, LinkedIn, PayPal...). Just in time to try the fresh nodejs release for the new project.

Any one of these diagrams is the sort of thing that demonstrates a principle and how something works in one fell swoop, where a lecturer at uni might spend twenty minutes with a couple of diagrams and plenty of hand gestures and produce an inferior result. The combination of them all together makes this the sort of thing that would be a superb basis for more complete educational/training materials.

Great job, Steven. I don't think I have ever seen anything like this before on the web. This was extremely fun to watch/read. I'll be taking a look at MathBox.js and see if I can build something fun with it as well.

Exceptional visualisations! At least for me, seeing things like the camera zooming so it's placed between the perfect-vector world and rasterised world just makes it so clear. A picture is worth a thousand words, but a visualisation like that may be worth ten thousand.

This is incredible. I've learned a lot of this before, but without the awesome slides; this presentation makes most of it so easy to understand that it becomes obvious that that's how these problems should be approached.

It really does go to show how much of a difference presentation makes in aiding learning and (more importantly) understanding.

That's beautiful. WebGL makes you realize that all the "designers" fooling around with CSS are playing in the kiddie pool.

Of course, as soon as you use WebGL, users expect the visual quality of an AAA game. What you tend to get is crap like this.[1] It's possible to get the GPU to do great things for you.[2] But that's a programming exercise. Good 3D content is expensive. Most of the WebGL demos available either have very little content, or are recycling old video games.

All this technology, already deployed, and little good content for it.

This is great, I'm a complete noob and I learned a lot! Something that I didn't understand, though, is how sampling rate and the justification for Apple's Retina displays are related (slide 31). I probably just don't know enough about either, but I'd greatly appreciate it if someone could explain. :)

Great presentation, although I didn't finish it because the load times between steps got too annoying (it seems like slides only load when you switch to them, instead of preloading at least the next one or two).

Does anyone know if the author has written or presented on his workflow as he goes from idea, to concept, to rough draft, to finished product? I'd really love to learn how he goes about it... Pixel Factory was so dense and clearly thought out, beautiful, intuitive. Wow.

I never studied math in college, in fact I barely studied math in high school. It was always daunting to try to parse the notation and figure out how the abstract symbols corresponded to some piece of reality.

When I became a self-taught developer I found my math skills continuously lacking. I started teaching myself on Khan Academy and found I was picking it up a lot better because of the simplicity of the language and the good examples. I finally realized I learn math best visually.

Interactive lessons like these are great. There are things that can be improved about this book (load times and enhanced interactivity), but all in all this is a great resource for people who learn best visually. I'll come back to this soon in my future self-education.

I always see linear algebra on HN and many people comment on how they never understood the subject. This makes me ask: what exactly is it that people don't get about linear algebra? What makes it appear to be a difficult subject?

As someone who has used linear algebra almost every day in some form over the last decade, it's hard to get a perspective on which aspects are challenging to a beginner. And since I TA courses that involve linear algebra, it's good to know where the problems are.

In terms of programming and linear algebra - please consult someone who is actually knowledgeable about the subject if you're implementing it in code.

Linear algebra without error analysis is very dangerous. Many, many things are theoretically useful but can't be used in practice. You can't calculate determinants, you can't count unique eigenvalues, you can't use certain decompositions.

Unfortunately this isn't really a topic you can do a quick tutorial on and then start writing new algorithms.
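The determinant point is easy to demonstrate: in floating point, an exactly singular matrix rarely produces a determinant of exactly zero, so testing `det(A) == 0` is unreliable; rank has to be judged from singular values with a tolerance. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# The third row is the sum of the first two, so in exact arithmetic
# this matrix is singular and its determinant is exactly 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

d = np.linalg.det(A)
# In floating point, d comes out as a tiny number that is not
# reliably 0.0, so `d == 0` is the wrong singularity test.
print(d)

# The robust approach: compare singular values against a tolerance,
# which is what np.linalg.matrix_rank does internally.
rank = np.linalg.matrix_rank(A)
print(rank)  # 2, i.e. rank-deficient
```

The same caution applies to counting "unique" eigenvalues: two eigenvalues that are equal in theory will generally differ by rounding noise in practice, so any comparison needs an explicit tolerance.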

This is actually pretty cool! I spent the last few weeks going through Gilbert Strang's popular OCW course[0] & I'm sure this would serve as a great companion. I can't wait for the chapter on eigenvalues to be published, as that is something I don't yet grasp intuitively. Great work and thanks for making this free and accessible!

I'm sure that this is a wonderful book, but please note the error right off the bat in equation 1.3: the tangent of the angle in question is b/a, NOT a/b as stated. Probably a good idea to keep a sharp eye on the math as you go along in this thing.

70 seconds to load a chapter? That would be a terrible benchmark even for some of the heaviest websites out there!

This may not be a popular opinion but I (and many ordinary readers like me) see that link as a website. Not a book.

It feels heavy and overwhelming to see a large number of 3D diagrams and visual depictions on just one web-page. Having to scroll down to read the full chapter with all that animation and "motion" is probably a bad move too. Given that this is supposed to come off like a book you can probably ditch the scroll.

Ideally, you'd want to present a few concepts at a time in small, easy-to-understand chunks with just 1 or 2 figures per page, and let the reader flip/click over to the next section like it happens with an iBook or Kindle book or even a real physical book.

IMHO the idea of ripping a book apart at its spine and forcing the loose design of websites onto it is a complete no-go for avid book readers. Especially for mobile and tablet users (probably even for desktop users! Why else would everyone insist on downloading PDF, ePub or other artifacts?). But I'm sure that a section of developers over here wouldn't agree with my opinion. So take it all with a pinch of salt.

At some point, I'm hoping to get enough of this to solve this problem: you've taken a picture of, say, the Mona Lisa in its rectangular frame, but because of crowds you weren't in line with dead center, instead you were 5 meters back, 1 meter high, and 2 meters to the side, and not even pointed at the center. Your photo now contains some quadrangle that is a projection of the rectangle. I'd like to tag the four corners and have an algorithm map the photo to its original rectangle - I intuit there's enough information in the photo and the four tagged points, given that the original is actually a rectangle.
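For what it's worth, that's exactly a planar homography: the four corner correspondences give eight equations, enough to pin down the eight free parameters of the 3x3 projective map (in a real pipeline you'd reach for something like OpenCV's getPerspectiveTransform). A sketch of recovering it, with hypothetical function names and made-up corner coordinates, assuming NumPy:

```python
import numpy as np

def find_homography(src, dst):
    """Solve for the 3x3 homography H (h33 fixed to 1) mapping each
    src point (x, y) to the corresponding dst point (u, v):
    4 correspondences -> 8 linear equations -> 8 unknowns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), same for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, x, y):
    # Project a point and divide by the homogeneous coordinate.
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# The four tagged corners of the frame in the skewed photo...
photo_quad = [(132.0, 80.0), (510.0, 110.0), (495.0, 420.0), (90.0, 390.0)]
# ...mapped to a true rectangle (hypothetical output size):
rect = [(0.0, 0.0), (530.0, 0.0), (530.0, 770.0), (0.0, 770.0)]

H = find_homography(photo_quad, rect)
```

To produce the rectified image you'd iterate over the output rectangle's pixels and sample the photo through the inverse map, so every output pixel gets a source colour. Your intuition is right: four tagged corners plus the knowledge that the original is a rectangle is exactly enough information, with the caveat that the photo alone can't fully determine the rectangle's aspect ratio.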

Slow to load (possibly due to HN traffic), but once it does, it seems like it's got the makings of a great learning tool. Waiting for matrix chapter since that's where I stopped learning Linear algebra on both my past two attempts (Gilbert Strang made my mind explode as I tried to comprehend past 4 dimensions. And then I just got lazy). Really want to pick this up because without linear algebra it's easy to get lost in all the major streams of Machine Learning. At least that's what I felt when I tried to skip linear algebra and move on to ML.

Sigh. I sometimes wish I paid more attention to my studies while I was in school instead of goofing off and playing card games :'(

I was thinking of working in a lab that does research in the field of computational biology. However, I never took a linear algebra course before so I always felt like it would be a waste to make an attempt. I did a quick skim and this looks very promising. If I can comprehend this, then maybe I will be of some use in the lab. Thanks for sharing :)

This looks amazing and well timed, as I've been attempting to learn linear algebra on the side. However, everything after Vector Products is "coming soon". I will definitely use this once it's all there, which I hope is soon. Heck, I'd be willing to pay for it if it were ready now.

Is this available in an off-line bundle? I am becoming increasingly wary of online books/training/applications that cannot be read locally. If I am going to take the time to read through a full book (possibly weeks of reading), I want to be able to use/reference it in 5 years like my paper books.

Most of the value of a good math book is that, years after reading it, you can use it as a reference to look up the things you will inevitably forget.

I just bought a home, and just started a considerable renovation. I'm putting in new water pipes, new electrical wiring, etc. I thought of putting "smart" devices (i.e. switches, alarms, thermostats, etc.) given the "advantages" these promise.

After considerable research, it's not worth the hassle or the money. Let's put aside the fact that these are considerably more expensive and won't break even for years (some smart devices simply never break even).

The main reason I decided not to have any of these installed was how cumbersome they are to operate. Each appliance/brand has its own app/portal, which does not connect to other brands, making it impossible to have an overview of your "smart home". Even scarier, some of these devices are operated by startups; god knows if they will be alive next year. Good luck getting that app to work with iOS 10! It's a true headache, and it's even a headache for contractors, who have no clue how these work. It's going to take some time (and education) before there's an OS that makes a smart home smart...

And don't get me started on the smart baby monitors, etc... if my siblings and I were brought up just fine in the 80's without being in a "smart onesie", I'm sure kids can do just fine today.

Commentary about the silliness of the avalanche of IOT devices being created right now aside (99% of consumer internet startups are based on dumb ideas and fail, but that doesn't mean there is no market or trend!), it's inevitable that this stuff is going to get traction in the market and it's a vast market. I doubt it's going to happen based on a bunch of edge-case $99 devices though.

The big trend here is the cost of wifi-enabled microprocessors dropping down to nearly nothing. Last year we were excited about the Raspberry Pi dropping prices down to $30 for sensor-enabled hardware on the network.

This year you can buy a wifi-enabled microcontroller for _$3_ (search esp8266). And that's not even in volume. At that price, pretty much anything consumer electronics companies build can be addressable on the network.

Add to that voice control, which is crude but usable and built into every phone already and improving quickly. The idea of walking into your house and looking for a light switch is going to feel like walking up to your TV to change the channel did 30 years ago when the remote went into wider use.

I find the economic arguments about not saving money using IOT devices a little amusing, on HN especially. My guess is that almost everyone reading this forum spends a shitload of money buying techno gadgets for reasons beyond "it saves me money."

I went to an Internet of Things meeting in SF about two years ago, and it was about like this. A Samsung executive was touting an Internet-enabled refrigerator, which was basically just a refrigerator with a tablet built into the door, with no special sensors, costing more than a refrigerator plus a tablet. I asked him why they'd built the product, and got an honest answer. He said the market was three types of people:

- People who just had to have the latest thing (early adopters).
- People who like to show off their houses to other people (the granite kitchen counter crowd).
- People who just like to buy expensive stuff and will buy the most expensive thing.

I talked to a HVAC engineer there. The room we were in was an old industrial building in SF. It had skylights with chains and toothed pulleys for opening them, openable windows, curtains for both, ceiling fans, both spotlights and light cans, a video projector and powered screen, and a standard HVAC system controlled by a standard thermostat. Controlling and coordinating all that would be a good "internet of things" application. He pointed out that companies which installed that sort of thing wanted it to work, and not generate service calls. Engineering, installing and connecting all the motors and sensors to run that room properly would be a big job. Motorizing the old skylights alone would need custom engineering.

That's the problem. Internet of Things stuff that's actually useful requires more than buying some plastic gadgets. Just an HVAC system for the home able to open and close windows would do more for heating cost and air quality than Nest's gadget, which, in the end, just turns heat and A/C on and off.

1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.

And this (though you should read the full thing):

Another problem with the net is that it's still technology, and technology, as the computer scientist Bran Ferren memorably defined it, is stuff that doesn't work yet. We no longer think of chairs as technology, we just think of them as chairs. But there was a time when we hadn't worked out how many legs chairs should have, how tall they should be, and they would often crash when we tried to use them. Before long, computers will be as trivial and plentiful as chairs...

So, since Douglas was writing, (a) a lot more of us are operating in the 15-35 category where technology is cool, and (b) a lot more of the stuff around us is technology in the sense that it doesn't quite work yet. It's become pretty much standard in startup-technology land to make the case that some technology "ticks all the boxes," saving time and money and generally being utilitarian and awesome. People who want to buy technology because it's cool play along. They need some way of justifying an internet-of-things coke machine, which they want because it's new and exciting.

Internet-of-things is still at the stage where we're throwing things against the wall. Most of it is not useful, or barely useful, and the people who buy it do so because they want to... for fun.

That doesn't mean none of it is useful, or that some generally useless thing isn't useful for you; it just means you have a two-legged chair.

One of the big problems with IoT is the cost of the connectivity bit of the hardware. You want these things to be low powered but that costs money. You want these things out in the field, but providing constant power is a nightmare.

I've been looking at Automatic Number Plate Recognition networks using Raspberry Pi 2s, transmitting only the number plate to do transit route analysis. By the time you've added a battery, a GSM module, and a solar charging panel, it's suddenly become a £150 piece of hardware.

IoT is so, so interesting, but I think the hype around it is driving money into the domain and people are just ramming the devices anywhere they can.

As someone actively working in the smart home space, I refrain from calling our business IoT for exactly this reason. I've even challenged the team to stay away from smart home. We do very little "smart" home stuff and instead rely on cleverly designing a set of devices that don't require an application or future technology (AI, voice control) to work properly. They also don't take up space in your home and combine the functionality of two or more devices into one.

I'd like to think I've been a bullhorn for the "IoT is stupid" movement, but I think the author did a great job of calling it out as well.

I work at a market research/consulting firm that specializes in the embedded devices market. This means we cover any semi-specialized device with a CPU that is not a desktop, laptop, or tablet.

The consumer-facing home IoT stuff (Nest, smart-fridge, smart-car etc.) gets a lot of press because it's exciting and it appeals to the least common denominator - anyone from an electrical engineer to a nanny can see how these devices might affect their lives.

Most of the (pretty astronomical) growth of the embedded device market is driven by the applications of industrial connectivity. Think aerospace & defense, automotive, medical, municipal, retail automation. The industries that don't make for sexy headlines.

Ultimately, I believe the entire IoT movement is going to contribute substantially to the economy in the form of cost-savings. Companies will be able to access and analyze a lot more data which will hopefully enable leaner operations due to process refinement and resource conservation. It's a good time to be in the security and analytics business.

While cost savings are great for the bottom line, we also need to find a way to create new markets and generate new, useful products. Hopefully the government has invested enough in R&D to enable the next internet to begin to take root sometime soon, whatever that may be.

From my perspective, it would make sense that virtual reality would be a huge paradigm shift in the way that we create and consume information, which seems to be an underlying theme driving many advances in technology and overall quality of life.

IoT seems to be the maturation of internet connectivity - what's next in the world of technology?

I want to bring an alternate viewpoint into the discussion. For some, a product like Leeo may sound superfluous, and the added value it provides may not seem to justify the rise in cost ($99+). (Note there is some added value, however trivial it may be.) Others may say no to a product like https://nest.com/ or https://on.google.com/hub/ based on their financial flexibility and their lifestyle (even if you see it as 'obviously' needed).

I do think, in this case, the best judge is the free market. If any product maker provides added value at a price point where there will be enough buyers and they see profit, their business will run successfully; otherwise it will fail like any other business. How can my opinion decide what is a good product? It is the market that should decide it!

There are lots of independent products that each solve mostly one problem - cars, for example.

Every product design does not have to solve multiple problems, it is just that the users need to feel that it justifies its cost based on the value it provides.

The Internet of Things is starting to look like SkyMall. Time will tell if the concept gains traction with the majority of people. At this point I don't see the single mom working in food service for minimum wage buying her children electric onesies.

We should be working on improving existing technologies. Not dreaming up a million more that all inherit the same flaws as the ones we already deal with.

Home automation always was, and continues to be, a puttering-around hobby or a sucker's game.

A friend bought a house that had a late 70s state of the art home system. Central radio/vinyl/8track player, intercoms, and a broken CCTV setup. Also cool stuff like central vacuum.

The big difference between that house and the modern gadgetry is that the 70s stuff was hard-wired and still works. All of the IoT crap that is on the market now will be completely unusable in a decade.

Internet-enabled house jewelry? Feh. It will be fun for a week, then boring.

Here's what I want: stuff that will make me a better neighbor and citizen of the world.

Specifically: real-time smart energy and water consumption meters. Wouldn't it be great to get some sort of alert if there was a pipe burst or even a water trickle? Wouldn't it be fabulous to track electricity consumption? That could motivate the creation of sets of light bulbs, each of which consumes a different prime number of watts. Then your smart meter can say, "hey chump, you left a light on in the attic. Turn it off."

Combined with a smart grid and demand-pricing of public utilities (yeah, fat chance, I know), this kind of thing could make a dent in my carbon footprint.
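One nit on the prime-wattage idea: sums of distinct primes can collide (2 + 5 = 7, the same total as a 7 W bulb alone), so the meter couldn't always tell which combination is on. Powers of two make every total decompose uniquely. A toy sketch with made-up bulb names:

```python
# Hypothetical setup: bulb i draws 2**i watts, so the meter's total
# reading is literally a bitmask of which bulbs are lit.
BULBS = ["attic", "hall", "kitchen", "porch"]  # 1 W, 2 W, 4 W, 8 W

def bulbs_on(total_watts):
    """Decode a whole-house wattage reading into the set of lit bulbs."""
    return [name for i, name in enumerate(BULBS) if total_watts & (1 << i)]

# A reading of 5 W can only mean the 1 W and 4 W bulbs:
print(bulbs_on(5))  # ['attic', 'kitchen']
```

Real bulbs obviously don't draw exact integer watts, but the same idea works with well-separated nominal draws and tolerance bands around each expected total.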

FWIW I find the Whistle to be a good product. I can easily find out when our dog walker dropped our dog off, and therefore how long she has been alone. At the risk of stating the obvious: just because the author doesn't have the problems that these devices purport to solve doesn't make them superfluous.

Some of these products strike me as being created because one of the owners of the startup thought it was cool, not because they went out and actually tried to identify people's household problems and figure out ways to solve them.

There are always two kinds of developments. One is where you have a huge problem and people try different things to solve it. The other is where you have new capabilities and don't know yet what to do with them. One is not worse than the other. Given some time there will be reasonable use cases. Think back to the first iPhone and Android: nobody really knew what to do with a smartphone yet. Now everybody has at least one and uses it way too often. The Internet of Things is just one of the next areas. Let's just calm down and let the market work out what's reasonable.

My favorite question for any smart home product is "what happens if company X goes out of business?" Had to learn to ask that the hard, expensive way but now all my connected things have the ability to run off the local grid.

> I asked a young man working at the Target store how visitors felt about their every action being tracked and he said that they'd come to accept it. And that was that.

I think this is completely true. I've done research in this area, and people under the age of about 22 have no concept of privacy whatsoever (it should be noted that these people were 12 when Facebook started and basically hit their teenage years just as Facebook opened up to the general public).

Here is one of the anecdotes I collected: when one of them arrived at college, she posted a picture of her school ID and her key and said, "I've arrived!". I pointed out that with just the info in the photo someone could make a copy of the key and get into her dorm room. She said, "eh, that won't happen".

I walked through Target's Open House in SF a few weeks ago; I'd recommend visiting if you're in the area. It's a pretty slick product-display space. Each "room" has a projector which gives an overview of four or five products in a room, and how they tie together in your life. One of the rooms had a Kinect mounted above, next to the projector; not sure what it was being used for.

The main lobby has a couple long tables with all of the products on display which were demo'd in the rooms along with some interactive Surface-like table which detects if you get near it and moves floating sprites around. They had displays on the wall listing the most popular products, and a few sales people to answer questions. IIRC there were approx 40-50 products displayed. Kudos to Target for setting the space up.

Everything being sold felt like it would fit perfectly inside a Brookstone, or a Sharper Image back when they still had retail stores. Most of them were "vitamin" products rather than "aspirin", which plays into some of Allison Arieff's criticism in the article: "What the products on display have in common is that they don't solve problems people actually have."

That's very fair to say. There were a few items which did solve real problems, like Nest which can help reduce heating costs, but most things sold didn't fit into that category. Many were "neat" things which you could entice someone with disposable income to splurge on.

Refuel looks like it would be much better pivoted towards the beverage industry. The BBQ going out is not a problem; beer running out is a party-killer. It looks for all the world like a WiFi-connected scale that tells you when the weight of the tank is getting a bit light.
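A weight-based gauge really is that simple: subtract the tank's stamped tare weight and divide by the fuel capacity. A sketch with hypothetical numbers (a common US 20 lb propane tank has a tare weight around 17 lb, stamped "TW" on the collar):

```python
def fuel_fraction(measured_lb, tare_lb=17.0, capacity_lb=20.0):
    """Fraction of propane (or beer) remaining, clamped to [0, 1]."""
    frac = (measured_lb - tare_lb) / capacity_lb
    return max(0.0, min(1.0, frac))

# A 22 lb reading means (22 - 17) / 20 = 0.25: a quarter tank left,
# i.e. time for the "grab a spare before the party" alert.
print(fuel_fraction(22.0))  # 0.25
```

The same arithmetic transfers directly to a keg: swap in the empty keg's tare weight and the beverage capacity, and alert below some threshold fraction.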

We're poking fun at these, but this just caricatures the entire bay area startup scene. There are so many companies solving rich-people-problems that really shouldn't exist or at least are highly unlikely to scale.

About 15 years ago I was working with a developer that was adding scripting support to their hardware / software combination - an X10 module controller. He was expounding on the greatness of his smart home system, for example, the lights would go out when he got into bed. I asked him what he would do if he wanted to read in bed. He seemed genuinely confused and replied that the bed was ONLY FOR SLEEPING.

All the automation, setup, scheduling and monitoring we are building now needs to be able to deal with people not being consistent. Cf. self-driving cars.

Like a lot of tech products, I think these devices have a niche appeal; even though many of these doohickeys are answers in search of a problem, people will buy these products (Target hosted this expo, after all).

Unlike a selfie stick or edible gold pills however there is a deeper ethical issue inherent in selling products that transact so much data about your life (and metrics about your family and home) for the purpose of creating marketable data sets about every mundane aspect of living. Not to mention how vulnerable a person or family becomes once these devices are integrated into their house, children, car, BBQ, etc since so little attention is given to making these devices secure.

This is why we started EarthData.io. It hasn't flown, due to my failing to raise money, but I still believe in the premise that every 'thing' accessible to every 'app' in 'near-real-time' is where this all heads.

As long as the connected devices are all connected via their own, standalone cloud, whether it's proprietary, open source or purchased, we're not going to see the true value of IoT and the ROI of connectedness will be squelched. Yet this is how the device manufacturers still view connected devices: A marketing lever to lock their customers into their hardware.

As someone who is building IoT devices (www.flair.zone), I would say that many of these complaints resonate with me. There have been two motivating factors behind what we are doing: building the Internet of Useful Things, and not building the Internet of Expensive Things. So far that has worked well for us, and we haven't even launched officially.

Nest (as a company) is an interesting case to examine with respect to this article. The thermostat in and of itself was a much-needed upgrade for some, and Dropcam has a ton of potential for more complete automation triggering, but the Protect was a pretty marginal value-add if you ask me. Fires just aren't that big of a problem statistically, and while a smoke alarm that can call the fire department is great in theory, in practice people are leery of false alarms when they could be incredibly expensive. And the 'Works with Nest' integrations are fascinating: it's largely a bunch of companies that want to be associated with Nest and its perceived superiority from a brand/acquisition/something(?) perspective, and then integrate these super low-value-add enhancements. Like the Whirlpool integration: '[if we know when you are getting home, we can refresh your clothes so they stay wrinkle free]'. Such a ridiculous proposition for an integration.

Leeo was particularly crazy. It was a case of 'top-tier founders' whom all the VCs in the valley love, with $30M in investment before leaving stealth mode. Everyone assumed they must be onto the next big thing, but it was in fact a giant letdown. I am sure the pitch was great: we are going to put a microphone in each room and have voice command in every room. But somehow they lost sight of that and it just became a smoke-alarm relay. The other angle, maybe, was that they could convince insurers to subsidize them like (GE/Wink)? I would love to see the total number of dollars invested in residential smoke detectors by consumers annually, and the number of house fires in the US/world (and aggregate damage/loss of life), all compared to the stealth-mode investment of this company... The Internet of Things will happen, and some devices will add substantial value by better managing energy and adding real convenience, but the author correctly found some really questionable value-adds and called them out.

When I saw the Kolibree, the "smart" toothbrush, I realized we had passed the inflection point on the declining marginal utility curve for this stuff. Too much of IoT is solutions in search of problems. It won't be long before we see app-controlled "smart" intrauterine devices.

There sure are a lot of stupid smart devices, yeah. There's a few useful ones out there, too. Sturgeon's Law ("90% of everything is crap") has not been revoked by the magical act of putting sensors in things.

Home automation is the least likely IoT category to succeed, at first, anyway. The low hanging fruit is in things like public infrastructure monitoring by instrumenting the municipal maintenance and transit fleet. Many enterprises are going to find they can do with a lot fewer desks if they instrument their work environment and spread workers out into co-working spaces.

The people instrumenting these environments are also more capable of calculating the benefits. Without analysis, it's just shiny toys.

There is plenty of merchandise that people buy that they don't need and can do without. Many of the products in the article are just technology variations and extensions of the type of products that sold for years in Sharper Image, Brookstone or SkyMall (or on infomercials).[1] Just something attractively priced that, if marketed correctly, will find a small or maybe even a large market because it's in front of people and an impulse buy (as opposed to buried on a shelf at a Walmart). Focus people and single out the product, in other words.

[1] There was a commercial last year that I watched for a stripped-screw removal tool. The price was attractive and I thought "hmm, you never know when you might need this". I then searched Amazon and found and purchased the most highly rated product in that category (I wasn't going to order from an infomercial). I knew this product existed prior to that, of course (my Dad used them when I was a kid), but until I saw the infomercial I had no motivation to seek this particular tool out. After seeing the infomercial I wanted one, so I bought it. It actually did come in handy when I had to pull a stripped screw from a washing machine.

I'm glad the "Internet of Things" is being held to task by a mainstream media outlet. The Internet of Things is just a marketing term being pushed onto consumers by Cisco, Qualcomm, Google, etc., because selling more radio chips and putting more sensors in the home directly benefits these companies.

But it's offensive marketing because these companies haven't even bothered to frame the issue in terms of solving people's real-world problems. You want to sell an overpriced thermostat or smoke detector? Fine, but don't tell me it's a revolution.

A lot of smaller players are getting swept up in the hype, and wasting time and money thinking consumers will jump at the opportunity to pay 10X the price for something that interacts with their phone. Prove me wrong, but I'm not buying it.

>>Privacy and Security. Every one of these items is connected to the Internet

And we've seen how this has been handled by companies recently. Their idea of security is somewhere between non-existent and EPIC FAILURE status. No thank you. I have enough problems trying to lock down my Windows PC.

>>>I asked a young man working at the Target store how visitors felt about their every action being tracked and he said that they'd come to accept it. And that was that.

Maybe the young man's generation has accepted it, but for those of us who have seen firsthand what can happen when data gets into the wrong hands, it's not even remotely okay.

>Like you, I once had many products that each fulfilled a separate function: a landline, a cellphone, a camera, a video recorder, a stereo, a calendar. Now, I have one product that does all of those things: a smartphone. This level of product integration was a revolution in product design.

Is the smartphone really a revolution in product design or just the inevitability of technological convergence? There is essentially no fundamental difference between products listed. Sure, the user function may differ, but the actual implementations are all based on the same phenomena: stored information manipulable through electromagnetic fields.

How early on was a device like the modern smartphone conceived? I'd wager not long after the discovery of silicon transistors.

Two years, and instead of adapting one of PLENTY of available DSI screens, they opt for a convoluted DSI-to-parallel conversion, with a ten-year-old resolution to boot. Color me not impressed :(

This is just like when they released the camera module. Instead of opening the MIPI interface to developers, they shipped a binary blob locked to one particular camera module from one vendor, because fuck you, that's why. (Well, actually, one of the RPi/Broadcom engineers said something like "people wouldn't be able to figure out how to color correct/debayer because it's a trade secret of the camera module manufacturers, so why bother".)

Before shooting this down as expensive, can we stick to comparing like with like? The device is intended to have a long life span so educators can build quality teaching resources based on the platform.

Even though most people will use RPi as headless server or connect it to TV, it is good to have a decent "default" display option that works out of the box. The display looks very elegant in provided photos. It should be a great choice for hobby projects.

I use RPis for bespoke installations for clients. One of the problems has been offering an easy way to make adjustments to the apps the RPi is running without a keyboard/mouse/monitor setup or having to SSH in. This is a great way to offer the ability to make changes. Looking forward to trying one out

A lot of people are comparing this to just buying an Android tablet and saying it doesn't make sense. You're probably right :)! The Pi has so many more use cases beyond typical Android use, however, that this product does make sense.

My example is that you can rig the Pi to work with your own receiver as a wifi FLAC player with this device: https://www.hifiberry.com/. You have to control it over wifi, but having a console that I can walk up to and interact with will be awesome. It will also be great for people like my father-in-law: I wanted to build him a device with all 60s/70s/80s rock for Christmas, but I didn't want to have to set up a wifi router and get him a device just to control it.

I had Remzi for two courses at UW, one of them being Operating Systems. He's the best professor I've ever had, and this book is an amazing tool for learning the basics of an Operating System. It's a quick read, and I would recommend it to anyone looking for a free Intro to OS resource.

I've learned from and taught with the "dinosaur book" [1], and for the price tag it's pretty bad. It's a nice overview, but it has several problems. First of all, the section on CPU scheduling is pretty sparse and confusing. I skimmed through this book and it seems on par. But one thing this book skips is Rate-Monotonic and Earliest-Deadline-First scheduling, which I found to be rather difficult algorithms, because whenever I researched them I would find other professors using screenshots from the dinosaur book, which doesn't help explain them at all. I would be happy to share my notes on them.
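For anyone else who found those two algorithms hard to pin down: the standard schedulability tests are short enough to sketch. This is a minimal illustration (task parameters are made up), using the classic Liu & Layland utilization bound for Rate-Monotonic and the exact utilization condition for EDF when deadlines equal periods:

```python
# Schedulability checks for periodic tasks given as (execution_time, period).

def utilization(tasks):
    return sum(c / p for c, p in tasks)

def rm_bound(n):
    # Liu & Layland bound: n * (2^(1/n) - 1); a sufficient (not necessary)
    # condition for Rate-Monotonic (fixed-priority, shorter period = higher
    # priority) scheduling.
    return n * (2 ** (1 / n) - 1)

def rm_schedulable(tasks):
    return utilization(tasks) <= rm_bound(len(tasks))

def edf_schedulable(tasks):
    # Exact condition for Earliest-Deadline-First when deadline == period.
    return utilization(tasks) <= 1.0

tasks = [(1, 4), (2, 5), (1, 10)]   # (C_i, T_i), illustrative values
print(utilization(tasks))           # ≈ 0.75
print(rm_schedulable(tasks))        # True: 0.75 <= 3*(2^(1/3)-1) ≈ 0.7798
print(edf_schedulable(tasks))       # True
```

Note the asymmetry this makes visible: a task set can fail the RM bound but still pass the EDF test, which is exactly the comparison the textbooks are trying to teach.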

I really wish there were an open source project that took developers and/or students from start to finish of an operating system. I should preface that by saying it should be easy to understand and use. I know about xv6, and I feel that's too complex. I've found MikeOS [2], but I will have to study it and break it into pieces.

In any case, I really think this practice should be more widespread. Unfortunately, I've found many people offer "lazy criticism": they point out that something is wrong but don't want to offer any help to make it better. The Rook's Guide to C++ is a perfect example of this. Yeah, it's not perfect and doesn't contain all the C++ knowledge you could ever want (there has been a lot of negative criticism of the book), but that's not the point: it's designed for people who know nothing about programming to learn C++ in a 16-week course. Its goal isn't to replace Stroustrup's expert C++ book.

The most useful thing I have picked up from this is the notion of interposability. It captures the basic idea behind both LD_PRELOAD hacks on Unix and the way servers can be stacked in Plan 9. A very useful new term.
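For anyone unfamiliar with the idea: interposition means slipping a shim between a caller and the implementation it thinks it's calling. LD_PRELOAD does this at the dynamic-linker level for C programs; here's the same idea expressed in Python as a rough analogy (not how LD_PRELOAD itself works):

```python
# Interpose a shim in front of time.time: callers are unchanged, but every
# call now passes through our wrapper, which can observe or alter the result.
import time

real_time = time.time  # keep a handle to the "real" implementation

def fake_time():
    # The shim can log, filter, or rewrite the value before the caller sees it.
    return real_time() - 3600  # pretend it's an hour earlier

time.time = fake_time          # interpose
print(time.time() < real_time())  # True: callers now see the altered value
time.time = real_time          # remove the shim
```

The LD_PRELOAD version is the same shape: your shared library defines a function with the same name as a libc symbol, the linker resolves callers to your copy first, and your copy uses `dlsym(RTLD_NEXT, ...)` to reach the real one, just as `real_time` does above.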

Looks funny, but I recommend you remove all proprietary Blizzard graphics from GitHub and possibly just recreate the repository without them. If you want to host assets, it's better to keep them in a different repository.

Otherwise you'll clearly get a DMCA takedown, because Blizzard has a long history of shutting down any project even remotely copying their products, no matter whether it's done for fun or anything else.

This is awesome, but as others have said, the Blizzard legal team will come knocking on your door very soon. If I were you and if you're serious about continuing working on this, I would take this down immediately, get in touch with their legal team and see if you can work something out with them to get their blessing on this.

They're not evil; it's their job to protect copyrighted assets. Without them, Blizzard would be out of pocket and SC2 might not have been created. From my experience, there are some really friendly people there, but you have to get on their good side, and I'd say you've already gone about this the wrong way (by putting their copyrighted stuff on GitHub).

Nice game. A few days ago I found something very interesting: an HTML5 game that is a combination of StarCraft and Clash of Clans, with much better graphics and real-time AI, but the game doesn't seem to be ready yet: http://ageofsalvation.com/ It is the most advanced HTML5 game I have seen, though.

I make games all the time for myself, sometimes using copyright protected images. But I can't share the source code or publish the game in Google Play if I do that.

If I want to share or publish, I have to use free stuff or make it myself. If I want to use copyright material and publish it, I need permission and a contract with the owner of the IP, a license or agreement of some sort to use their stuff.

All this applies to everyone, not just in the US, if you want to do business. If it's just for fun, that's fine, but you can't distribute it in any way, including on GitHub.

As someone who still plays the game sometimes, this will take way too much effort to reproduce satisfactorily. Cursors need to change on mouseover, or when a command is selected. There need to be hotkeys, ctrl groups, etc.

Not sure if you're going to finish this, because it is going to be quite hard to polish.

Not only is this another impressive demonstration of incompetence, but by disseminating the idea that people's luggage could be secure and backdoored at the same time, they're actually destroying what security existed before. They should pivot to become the Transport Endangerment Agency.

I doubt these images really did all that much for people who wanted a set of TSA keys. The locks themselves are widely available and it's easy to reverse engineer a key if you have the lock (especially multiple copies of the lock to destroy and test on).

If anything, this just made it easier for casual lazy people to get a set of images for keys they'll never make :)

Seems to still be a lot being done in the name of security theater in the US, and just wasting dollars on the TSA, for what appears to be very little effect.

Who's really profiting there? Is it just for the employment of people that otherwise wouldn't have a job? Or are the majority of citizens there really made to feel more secure by having them? I'd have to say some defense contractor is getting a bit fatter off this.

If anyone happens to have questions about these keys: we don't physically have them, but we plan on making our own versions and finding the right blanks. If you think you can help, or want to know more, you can always reach out to me or @Irongeek_ADC.

If you happen to know SolidWorks and how to trace objects, I'd really like to get to know you.

So luggage handlers can open my suitcase, put in some drugs, and at the other end I can get caught for having drugs in my luggage?

They should set up a service where you check in your luggage, they check it for drugs or other illegal stuff, they seal it, and on arrival you get your suitcase with the guarantee that it had no drugs at check-in.

I'm kinda surprised that key number 2 on the imgur mirror is a dimple lock. Those are generally used for higher-security applications than crap TSA travel locks; they're expensive, too. Key 4 doesn't surprise me, though.

The whole proposition here is ridiculous: "we must assume any adversary can open any TSA 'lock'".

No shit.

We're not talking about a bank vault here -- it's luggage. Does anyone, anywhere, have any expectation whatsoever that a luggage lock provides meaningful security? I think I opened my mom's luggage lock with my sister's hairpin when I was 6 years old, and I have zero lock picking skills.

It's sad that so many comments concentrate on whether luggage is secure in the first place. Of course it is not. The real issue is that having a backdoor makes a new class of attacks possible. A wilful or accidental leak, for example. Or you can reverse engineer the master key if you have enough locks.

The big impact is that one leak kills the security of all locks (of that type).

I don't think this would necessarily be the case when looking at (publicly) backdoored encryption. Here, you could have an individual backdoor key for each "lock". Of course, the mass storage of backdoor keys makes a mass leak also more probable.
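The per-lock-key scheme can be sketched in a few lines. This is a hypothetical illustration (the secret and IDs are made up, and HMAC is used here as a simple key-derivation function, not any real escrow design): each "lock" gets its own key derived from one master secret, so leaking a single derived key compromises one lock, while leaking the master secret compromises every lock at once:

```python
# Hypothetical per-lock backdoor keys derived from a single master secret.
import hashlib
import hmac

MASTER_SECRET = b"master-secret-held-by-the-authority"  # made-up value

def lock_key(lock_id: str) -> bytes:
    # HMAC-SHA256 as an illustrative KDF: master secret + lock ID -> 32-byte key
    return hmac.new(MASTER_SECRET, lock_id.encode(), hashlib.sha256).digest()

k1 = lock_key("suitcase-001")
k2 = lock_key("suitcase-002")
print(k1 != k2)                         # True: each lock has a distinct key
print(k1 == lock_key("suitcase-001"))   # True: the authority can re-derive it
```

Note the trade-off it makes visible: the authority never has to store the per-lock keys (it re-derives them on demand), but that only moves the single point of failure into the master secret, which is exactly the "one leak kills everything" problem again.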

My google-fu is lacking, but recently (last year?) an inmate escaped thanks to their cellmate who was a master jeweler & had a full kit in his cell. A photograph of the guard's keys was smuggled in and the jeweler cut a key for the inmate to escape.

I generally use either a pelican case with abloy protec 2 321 or 330 padlocks (essentially the least pickable), or a pacsafe anti theft suitcase with tsa lock and seals. Not perfect, but beyond casual or even local LEO surreptitious entry.

I usually pay to wrap my luggage with plastic (see www.cnbc.com/2014/04/02/travelers-pay-to-protect-luggage-with-plastic-wrap.html). I do it more to protect the suitcase than its contents. I think it defeats the purpose of a TSA lock. Is it still allowed in the USA?

There's a comment in the original post about bags with firearms requiring a non-TSA lock. Has anyone travelled with a firearm as a maneuver to secure their luggage? Seems lengthy, but it probably works. I'd imagine you need to check in at a different area and not the front desk?

Edit: I just watched the YouTube video posted below. Looks like we're just dealing with a flawed system.

For luggage, it's usually a zipper with two sliders: you just pull on the tape to separate it, look inside, do whatever you like in there, and then run the sliders back and forth to re-close it, so the lock never really mattered.

The TSA has resulted in millions of dollars in stolen items and hasn't caught a single "terrorist". Its procedures are a joke; it is irradiating everyone or, if they opt out, molesting them, which is a crime in all 50 states... not to mention that every single TSA search is a violation of 18 USC §242.

The existence of this organization proves that both Bush and Obama, and both the Democratic and Republican parties, are corrupt and irrational... and more interested in their own power than in benefiting the country.

It's hard to feel bad for an industry that just flat out refuses to offer the products and services its customers demand. Fifteen years ago I considered it ridiculous that TV shows weren't offered online. Popcorn Time should never have been able to get a foothold in the first place, because people should have been able to access those services legally. The video game industry has somewhat learned its lesson now that we have Steam, which has been great for gamers and developers (especially smaller developers). So, yeah, it's illegal, and I understand why we have copyright laws. But people have been bitching about this and taking copyright law into their own hands since Napster. Reading articles like these is like watching a YouTube video of someone obnoxious getting their ass kicked. I don't condone violence, but you get zero sympathy from me.

> "Somebody told me that popcorn time is the Netflix killer and I think that isn't true. I think it's not a piracy problem, it's a service problem. You have to give the users what they want in a fair price."

This is a pretty common sentiment on HN and other places on the internet that's used to justify piracy but I don't think it really applies to most people who say it. The person in the video is from Buenos Aires and really can't get his hands on movies and TV shows short of going to the United States.

But if you're in the United States and want a TV show, then between iTunes, Amazon Instant, YouTube rentals, Google Play, the Microsoft Store and your cable provider, the movie is probably a $2-$3 HD rental, or $5-$6 if it's just been released. If you have time to browse HN, you're not poor enough to justify pirating over a $6 rental.

Of course the counterargument is "but the DRM formats don't work on my TV/car/fridge", but that doesn't work here, because Popcorn Time is designed for desktop viewing and deletes the videos on reboot, not for transferring to other devices. And "the only movies with high piracy rates aren't easily available" doesn't work either, because the most pirated shows are the ones most easily available: Game of Thrones is on HBO Go, The Walking Dead is on amc.com, and Kingsman: The Secret Service and Seventh Son are on every rental service listed above. [0]

"I am convinced that the Popcorn Time-killer is going to be a Netflix without borders. They should remove national restrictions for films,..."

Does someone have some insight into why Netflix has time and geographic restrictions on content? I can understand, in some cases, publishers not wanting to let their movies or TV out to foreign countries. (Maybe waiting for a marketing push, or a broadcast deal to be reached in a new market/locale) But I can't really wrap my head around movies and shows being phased in and out. (Example: Recently saw that some of the Transformers movies will disappear in a week or two).

From a technical standpoint, I can't see the issue being that they can only have a certain number of films viewable. From an economic standpoint, I would think that whenever someone watches a show, a portion of their monthly Netflix fee goes to the creators of that show, so there's always an incentive for the creators to let Netflix show their content.

The most blatant of all is that Popcorn Time is not a site, it's an application (which is why it's been so hard to block).

It uses existing sites (like YTS and The Pirate Bay) to find magnet links to content, then streams it using a torrent streaming library.

Also: "Mr. Robot is not available elsewhere, apart from on Popcorn Time." What the hell? If a series is available on Popcorn Time, it's inherently because it's available somewhere else, as they don't host any content.

But the one that bothers me most is that they mention how before Popcorn Time, piracy involved: "Aggressive advertising banners, websites popping up unexpectedly and strange porn ads".

Well, guess what? Popcorn Time is an application that most people download in binary form, so it could steal your personal data, inject advertising in other sites, use your computer as a proxy... etc. It's not a step forward.

Napster, Gnutella and friends forced the music industry to adapt their business models to include digital distribution -- and we've learned that consumers are still willing to pay for services that are reasonably priced, DRM-free, and easy to use even with piracy as an alternative.

Hopefully Popcorn Time will do the same for movies. Netflix and friends have made great strides -- but they are still hobbled by DRM and geographical restrictions, as the article points out.

Never heard of the site until this article but this is the problem we've seen all too much. Most of the time, there's no paid service offering what we want. If there is, the price is unreasonable or the service is ridiculously locked down. This is the same thing that happened with music in the 90s etc. Finally, something like Spotify came around and made it so that music was actually AVAILABLE for us to explore not "you have to purchase this if you even want to know what the artist sounds like."

Say you want to watch football games online, and you find it's going to cost you $20/game for only your home team's games, and they cut out the announcers or double up on the commercials to pay for both the network AND the game (hypothetically). Then you find you can watch it on a third-party streaming site for free, with your favorite announcer doing commentary. You're bound not to want to pay the ridiculous price, and to move to some less-than-wholesome service.

There's no real solution to any of this aside from a paradigm shift. Yes, money is the motivation for creating a lot of this stuff. However, people are just going to continue to find ways around terrible ridiculous lock downs.

The MPAA spends who-knows-how-many millions of dollars hiring lawyers and PIs to go after volunteer programmers in countries all over the world, when they could just be spending that on an online distribution system for movies that would provide the service that Popcorn Time currently does. There's demand for streaming movies and up-to-date releases with people willing to pay, why not meet it?

This kind of parallels craigslist, which has turned the classifieds market into a multi-billion-dollar sinkhole (http://theweek.com/articles/461056/craigslist-took-nearly-1-...). Except craigslist has critical mass and can't be easily replaced, whereas a concerted effort by the movie industry to innovate instead of stagnate could easily produce a service preferred over Popcorn Time.

It boggles my mind how glossy and polished and professional the site is, and that it gives credit to the people who make it happen, but gives no recognition to the people who make the content that everybody feels entitled to.

Seems like this is an idea whose time has come. Once the source is out, how much would it take for a new team to take up the quest? This time they could make sure to create new anonymous identities (how to do this properly?). I don't know the in and outs of such things.

Why has the time come? Watching content on the owner's web site is a horrible experience. You get the same ad multiple times in a row, or the volume on one is barely audible and the next is blowing out the windows. Worse than it ever was on cable or broadcast. From what I hear, Popcorn Time makes all of this go away.

> Creators and makers should have the right to determine how and where the work they own is distributed. Popcorn Time has no legitimate purpose; it only serves to infringe copyright, thereby preventing creators from earning money for their work. The film and TV industry is comprised of hundreds of thousands of men and women working hard behind the scenes to bring the vibrant, creative stories we enjoy to the screen. Content theft undermines that hard work and also negatively impacts the audience's experience online by often directing them to low-quality versions of movies and shows or to sites infected with malware and viruses. - Stan McCoy, President and Managing Director of the MPA in Europe, the Middle East and Africa.

The initial line of reasoning in this quote is flawed. Creators and makers don't have the right to pre-determine judgement of a particular piece of software, or of its possible use by users, based on some claim to "rights". It's the pre-judging part that is wrong here, not the asserted ownership of the content in a hypothetical violation. Judging my use of a particular piece of software before I use it is stupid, narrow-minded, and blames people before the fact. Would they also limit my use of an operating system to run the software? Or a computer to run the OS? No. Why? Because Apple makes lots of money doing those things.

Because this line of reasoning is flawed, it's not a big surprise Stan quickly drops into bias hacking the audience by making arguments that Popcorn Time "negatively impacts audience experience" and contains "malware and viruses". Given the fact they are willing to spread falsehoods is an indication they themselves are in cognitive dissonance over the whole thing.

Not that they don't spread falsehoods about their own content all the time to us via commercials, billboards, flyers, ads on websites, reviews, etc., etc.

I'd like to see the creative industry move toward an Open Source model over the coming years in an attempt to move us away from these confrontational rationalizations which are being driven by increasing demands around revenue. Perhaps this Open Source model would also allow us to better illustrate the problem of mass production of low quality movies and content. These low quality movies "have no legitimate purpose and only serve to infringe on moviegoer's rights, thus preventing them from enjoying their night and wasting their money on yet another crappy flick".

America is spent out. The US personal savings rate is under 5%. Everything else gets spent, and the saved money gets spent later. There's no "pent-up demand" waiting to be unlocked by advertising.

Advertising is thus a net loss for Americans. All that effort adds to cost. For some products, including movies, long-distance phone service, and many prescription drugs, the advertising cost exceeds the manufacturing cost.

This is an argument for a tax on advertising. Advertising expenses should not be deductible business expenses at all.

Note that neither Amazon nor WalMart advertises much, compared to other large businesses. Target spends more on ads than WalMart does, although WalMart is much bigger.

As much as I dislike some of the current trends (and I do digital media for a living mind you), I do have to point out that this stuff wouldn't be done if it didn't work.

Ultimately this implies that there are enough people out there who engage with, or... dare I say... want... the bullshit that their collective voice outweighs that of those who do not, simply because those are often the users who click ads, share things, and otherwise generate more value and revenue for publishers than those who do not.

While the arms race to fight this stuff is commendable (I myself run at least NoScript at home and it is beautiful), I can't help but think the only way to win is to not play.

By that I mean coming up with revenue alternatives for publishers that not only generate more revenue than this approach, but also provide a direct incentive to not use these things.

If such a magical solution existed, they would switch of their own volition. Instead, they focus their efforts and dollars (and, by extension, the focus of an entire industry built on those dollars) on adding more items to the list of bullshit.

This page talks mostly about interface bullshit, but people are also tired of content bullshit. To paraphrase Harry Frankfurt, this is content produced not to conceal truth, but without regard for it whatsoever. If to lie is to murder truth, to bullshit is to manslaughter it.

Producing bullshit is more profitable because it still attracts eyeballs (and therefore ad revenue), but is much less costly to produce. That's why the presence of large amounts of ads, needless pagination, and interface bullshit is a reasonable indicator of content bullshit.

The vast majority of working adults in the US are employed in making and selling things people don't need. A world without bullshit is total, utter economic collapse. It's the end of capitalism. It's hundreds of millions of people with nothing to do all day and no way to sustain themselves, an inevitable civil war with the landlord class, and a revolution that manages to install a government that's quite possibly worse.

Every piece of bullshit you see is how a great many people pay their mortgages and feed their children. Casting them out to the street is unlikely to make things better.

My way to avoid bullshit is to read only a very highly curated Twitter feed. Anybody who mentions a "big" news story gets booted. If it's on the front page of the New York Times, you get booted. I'll find out about it just by looking at the random media device blaring mainstream bullshit in every airport and doctor's office waiting room, so quit thinking you're the new Paul Revere by retweeting. I value niche information very specific to things I am trying to accomplish.

Even though this proposition is in bold 20-pt type, no arguments were offered to support it. It isn't obviously true, and indeed there are reasons to suspect the converse. Dare I say it, but a bald emotionally-appealing assertion of this sort seems sort of like... bullshit?

As a couple other posters mentioned, be sure to try out the understated "Turn bullshit on?" link on the upper right of the page. It really sells the point.

I sort of hoped that after clicking "I am a racist" to dismiss the "Like us on Facebook" page, that the popup chiding my brazen admission would have hijacked the OK button to post my admission to all logged in social media sites.

But unfortunately Brad seems to be too honorable for that, even after people doubly confirm that they want the bullshit. And, counter to reality, the pulsing red "Turn this bullshit off" link works as advertised.

I was just thinking yesterday how inundated we are by ads these days. You see ads on television, the radio, the internet, billboards, public transportation, sports stadiums/jerseys, and in magazines, guerrilla marketing, product placements and celebrity endorsements, not to mention PR (which is just advertising by other means). Talk about mental pollution!

Does anybody know if there actually are any studies showing that these ads (1) help businesses attract clients and (2) do so without alienating more clients than they attract? Or are they all just for businesses that don't care about keeping customers anyway (e.g. weight-loss fads)?

I couldn't help but notice that the blog associated with the site (linked at the bottom of the manifesto) is hosted on Tumblr and runs all of its offsite links through Tumblr redirects so that the clicks are all tracked. Surely that qualifies as exactly the sort of bullshit that this person is inveighing against.

This book should have a spot on everyone's desk in hard copy. Use it as a coaster, walk around with it in the hallways, take it to meetings. No need to preach from it though - its very presence will be enough of a sign to others re: your tolerance levels of the amount of bullshit stinking up the current situation.

Perhaps it's a solved problem. Must it be framed as a binary thing where there's either bullshit or no bullshit? What if there's a middle ground: those who find bullshit interfaces and/or bullshit content abhorrent use tools to improve it or completely avoid it, while those who aren't the wiser continue to go along with it?

You want to argue that bullshit content is what's keeping people uninformed? I say no, it takes a certain innate sense to rise above the natural flow of misinformation. Some people can only be guided by rhetoric - they make their decisions based on consensus in their local network and too easily trust people who claim to stand for it.

What we have is a war: between those who guide the senseless and those who exploit them. Take your pick.

This isn't just about advertising. It's equally about bad or distracting design.

But ultimately it's about concentration. I believe, for the most part, that multitasking is a myth. When I am reading something difficult, or that I would like to remember in detail later, I need to focus on it exclusively and read it without interruption. That means ad blockers, print view, etc.

> As the landslide of bullshit surges down the mountain, people will increasingly gravitate toward genuinely useful, well-crafted products, services, and experiences that respect them and their time

This sounds like wishful thinking to me. Marketing and design are surely down to a psychological science by now.

This is by far the most ironic discussion in HN history! To be fair, I can't back up that statement with evidence... but wait, none of the comments here, or for that matter the content on this death-to-BS website, is backed up by anything other than personal opinions and anecdotal theories.

You would hope that a rallying cry of Death to BS would invoke a slight bias towards withholding BS and focusing on facts that make a difference.

Do I agree that there is way too much noise online? Yes, but complaining about it is as noisy as things come!!!

It sounds like the author resents having to compete for attention with "bullshit", but I'm not sure if there's a realistic alternative. You're going to throw out the baby with the bath water I'm afraid.

As for advertising, paying for entertainment and information with some time and attention is not really bullshit. It's a voluntary exchange, and both sides would not engage if they did not have some inclination that they would be better off than without the exchange. There are other things you can exchange for entertainment and information, and you can completely opt out.

Another piece of bullshit I am running into with alarming frequency is clicking on a search result that leads to a local newspaper like the Des Moines Register, which pops up a request for a subscription. If I dismiss the request, the story appears but with the text replaced by white rectangles. That's just rude. I'm from out of town, for god's sake; I just want to read one story. The New York Times or the San Jose Mercury are more respectful: they give you 10 stories a month.

On the flight/product plus insurance pattern, if it's difficult to cancel the insurance, I wonder what the effect of cancelling the entire product purchase would have on this problem. Just cut to the chase, pull back all your money, leaving a nice, obvious "fuck you" in the resulting vacuum. That is, assuming they haven't inserted the same bullshit cancellation for the purchase itself.

Well, actually, I like the "bullshit" on pages (that being, of course, an exaggeration) because it makes it far easier to filter out pages with a low signal-to-noise ratio. I think content consisting of only bullshit with low substance is the far worse disease, and it seems to spread just as quickly.

This is the lowest-ranked comment right now, but I think it is great, and I want to rephrase it in a way that maybe more people can appreciate (and that hopefully doesn't get me downvoted like crazy).

So the reason I take offence at "advice" or a movement like this (not sure what it's supposed to be) is that it makes the speaker and everybody who associates with it look incredibly good, while barely bothering to offer any proof that it is actually good advice.

I am aware that this sounds cynical and I beg you to resist the temptation to downvote and/or ignore this comment. Instead I'd invite you to ask yourself: Could this argument have something to it despite the fact that it's pretty uncomfortable?

Going on, why does getting behind this make us look good? It shows that:

* we are not ignorant of questionable business practices in our field

* we don't prey on the (intellectually) weak in order to sustain our businesses

* we value ideals like craft more than money (ignoring that most of these practices are not driven by greed at all but are the only way to ensure the survival of some companies, which brings me to the next point, that)

* we are not afraid to "stick it to the man" (even though "the man" is probably a complete strawman and we don't have to fear any real retaliation for expressing this opinion)

Now there is nothing wrong with advice that makes us look good per se, but is it also good advice?

> People's capacity for bullshit is rapidly diminishing

Again, this may sound good, but it could have been said at any point in time and been true. The question is: is it diminishing faster than new ways of bullshitting arise?

And maybe it is not diminishing at all. Take gambling, for example. It is obviously "intentionally deceptive or insincere" in that it won't make you rich (it is in fact mathematically proven to make you lose money), yet people seem to have gambled for thousands of years and will probably go on doing so for thousands of years to come.
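The "mathematically proven" part is just an expected-value calculation. As a quick sketch, using European roulette as the standard textbook example (illustrative numbers, not anything from the article):

```python
from fractions import Fraction

# A $1 "straight-up" bet on one of 37 numbers pays 35 to 1.
p_win = Fraction(1, 37)
payout = 35   # profit when the number hits
loss = -1     # stake lost otherwise

# Expected value per $1 wagered: always negative, so the
# bettor loses money on average no matter how they play.
ev = p_win * payout + (1 - p_win) * loss
print(ev)  # -1/37, i.e. about -2.7 cents per dollar
```

The same computation, with different numbers, shows a negative expectation for essentially every casino game, which is what "proven to make you lose money" means here.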

The attempt to link bullshit to Sturgeon's law is also pretty weak, IMO. It's not like anybody set out to put something in the bottom 90%; it's what happens to, well, 90% of things, and it's not at all clear that it was BS that put it there. In other words: naively looking at the top 10% and saying "none of those is doing BS" does not mean the lack of BS got them there. All it says is that "in the top 10% you don't have to BS (because you can afford not to)", or even just "BS doesn't get you any further in the top 10% (and that's why nobody is doing it)".

Finally, Buzzfeed is certainly in the top 10% of "lighthearted entertainment on the internet" and it is BS (and only BS) that got them there, because that's the kind of environment that "lighthearted entertainment on the internet" is like. No amount of shaming will change anything about that (but would still make us look good, so produce some quality content over there already!).

tl;dr: Beware of advice that sounds good. People will like to offer it even when it is not practical at all, or only under very specific circumstances that may not apply to you.

> Right now, a reply to Justin Bieber by a 16-year-old fangirl goes into the ether, never to be seen again. There is zero incentive in the product to interact with celebrities on Twitter, because no one will see the responses.

This seems like speculation. Empirically, do a search for "@justinbieber" (click on "live") or look at any of his tweets, and you'll see innumerable 16-year-old fangirls who have found some incentive to tweet at him. There's also the subphenomenon of these 16-year-old fangirls getting incredibly excited when those tweets do get seen and interacted with, which indicates, one, that they don't go into the ether, and two, people have a genuine hope of interaction.

I've seen this in practice, because I do actually follow certain parts of popular culture and music and trashy television (not Bieber, as it happens, but enough others) and occasionally look at what they're up to on Twitter. It happens without fail for every celebrity.

So I wonder if the author is actually reporting on how actual people actually use Twitter, or extrapolating from the eyes of a non-16-year-old non-fangirl who cares about things like reply threading.

There might be no fixing Twitter. The reason Twitter grew, imho, is the rich ecosystem of developers they had. Those developers, time and time again, found new ways to use Twitter and did the development, marketing, and educating of the public. The result was rich engagement and growth.

Everyone had a different reason for using Twitter because there were so many apps. Now, those apps are gone. How do you go and tell the developer community to come back? How do you trust Twitter? The answer is you don't.

Most of Dustin's suggested extensions are things other people should have built on Twitter. Of course, it also keys into Dalton's App.Net plan, where Twitter should have been the stream, people should have used countless applications to make the stream more discernible, and Twitter should have focused on ensuring the backbone stays in place.

Funnily enough, that is how Twitter originated. Others built the clients and Twitter focused on the core. They lost that direction and wanted to "own it all" like Facebook, but they took that direction rather too early.

Take tweetstorming as an example, which is a niche need. My team built a tweetstorming app, http://writerack.com. It pulls and pushes all its content from and to Twitter. In an ideal world, Twitter should support it and similar apps rather than making Twitter.com more convoluted with the aim of doing everything themselves.

If Twitter had supported third-party developers, someone would have built a killer app for using Twitter to follow and interact with live events. That would have brought another set of people onto the platform, and that extends to other use cases too.

Hopefully, Twitter gets it right because I have come to really find Twitter useful.

> Twitter has turned into a place where famous people and news organizations broadcast text. That's it.

It has? I don't follow any news organizations or famous people. Well, a couple of Hugo-winning authors, but that's not Bieber-famous. And my timeline is a vibrant place full of friends talking with each other. It's like an IRC channel where I get to decide who's there. And it works great for that.

> Second, and this one is obvious to almost everyone, Twitter needs to focus on realtime events. When I open Twitter during a major debate in the US, or when a bomb has exploded in Bangkok, there should be a huge fucking banner at the top that says "follow this breaking event".

Whenever there is a major thing going on my timeline will tell me about it. Because my friends will be retweeting stuff, or tweeting news articles they saw about whatever the thing is elsewhere. I know when there are conventions going on. I know when riots are happening. I know when there is a videogame speedrun charity marathon happening. Well, I used to until I decided to preemptively block the hashtags for those. I know when my friends are musing about their gardens, or their resumes, or their angst about their core skills. I even know when some of my friends are feeling frisky if they've trusted me with access to their private accounts where they occasionally post half-naked selfies. And in the middle of that I get all these weird blips of surreality from various art project bots I follow. I don't need a "huge fucking banner" telling me to follow a breaking event, because my friends will be talking about it.

When I have a problem with some software or some corporation, if I use their @name while bitching about the problem, there is a pretty decent chance they will reply and help fix it.

Yeah, every kid who tweets at Bieber isn't going to get a reply. Duh? Would they expect a reply on other social media? Does Bieber even run his own account? There are a lot of celebrities with mostly-dormant accounts run by their social media specialists, and they're boring as fuck because they're not really there. But a lot of people who are famous, but not Mega Corporate Media Distribution Famous, actually do run their own twitters.

Who the hell is Dustin following here? Does he actually have any friends who use Twitter as their primary mode of communication? Are all his friends on Facebook or G+ or something else instead? Because it sure sounds like he's not using Twitter anywhere near the way I use it.

My problem with Twitter is that it feels more and more like a desert. I get more and more bot 'followers' and more ads in my timeline. The signal-to-noise ratio has decreased incredibly and continues decreasing, unfortunately.

As I've written in past comments (https://news.ycombinator.com/item?id=10094396), and as this post suggests, rebuilding developer relations and improving integrations would go a long way. There's a lot of potential locked in the platform right now; they should work on letting developers get access to it more easily and strive to remove the cloud of uncertainty that has built up around it (i.e., will they shut me down if I do something too popular that colors outside the current lines?) It would benefit everyone, especially Twitter.

Twitter doesn't seem to have capitalized on what it's good at, or on anything, really.

For personal: Facebook has the network effect, complex relationships, share-anything-and-everything, privacy, groups, etc. Younger generations more focused on sharing are using Instagram, Snapchat, and all the messaging apps.

For work: LinkedIn gives value in seeing work histories, connections, companies, etc (although still a bad product but without competition)

For news: The mainstream just use news sites, search and Facebook or get alerts from all the other apps/networks/reddit and there's RSS which is way nicer for following blogs and niche news.

What Twitter has been good at is allowing people to have an easy public voice (although nobody might see it, it's there) without being tied to a personal identity, and giving them the chance to talk to people they might never be able to reach otherwise. You can tweet at politicians, celebrities, top executives, and companies, and can reasonably expect a reply or exposure. That's really powerful and a great equalizer. It's also good for real time, obviously, working like a constant stream of consciousness of the collective you follow.

However, like the article says, that's it. There's no movement on the product itself. Terrible UI with broken conversations, broken sharing, broken lists; no new features like deduping tweets, non-chronological ordering, or better developer APIs; and the ads product isn't great either.

It's kind of sad that the network that originally began as messaging based around SMS/phones has been completely overtaken by all the other messaging and sharing apps, while still keeping completely unnecessary limitations like 140 characters. There's just no focus here...

I think the disaster that is Twitter is a product of its culture. If you read the history of the business, it started with a group of people who arrived in the business quite randomly. There wasn't much thought given to building a balanced team or a strong culture. Effectively, Twitter was an accident that came out of another business.

Most worryingly, the founders couldn't agree on what the purpose of Twitter was, and illuminatingly people in this discussion still don't have a clear idea of its purpose. Is it a news broadcast system, to follow current events? Or is it about sharing your personal life with your friends?

From a technical point of view, I find Twitter very confusing. I read that they were at over a million users before having any kind of backup strategy. They rewrote their systems from Ruby to Scala, but then seemed to regret that decision; their decisions on shutting down API access to third parties have been really nasty... this kind of thing makes me worry that they don't have clear leadership.

And then there's the politics of infighting, and some of their executives being "overthrown" over time... I can't see how you can create a good culture when people at the top are behaving like that. It's hardly rocket science - just focus on the product and your users.

I think creating a culture from the beginning is a lot easier than changing an ingrained culture, so my view is that Twitter is screwed. Failing a Jobsian turnaround, the best they can do is sell and sell fast. I can seriously see Twitter losing out to a startup. Any thoughts on whether they will survive?

I really, really want to like Twitter, but I just can't. Most of the content in my stream is crap, as I (apparently) don't know the "correct" people to follow. Finding the "correct" people to follow is difficult, and even then they sometimes spew multiple boatloads of crap. :/ I wish there was a way to filter some of it out and only keep the good stuff.

Conversations are almost impossible to follow. Once you locate a good tweet, it's a confusing process to find all the related tweets. Sometimes they are below the tweet (which is confusing, as I don't usually read from the bottom up) and sometimes they are buried inside the tweet. Grrrr!

Finally, putting non-text media in a tweet is turning out to be horrible. At least when tweets used to be ASCII, I could reasonably read through them. Now, I have pages and pages of little silent movies that start playing when they come into focus. How annoying is that!?

I really like the "World News Headline" feature that Dustin proposes and would probably use Twitter more if it had something like that. However, given that Twitter is transforming itself into a Vine/Instagram clone, I probably won't be hitting the tweet box much in the future. :(

"There is zero incentive in the product to interact with celebrities on Twitter, because no one will see the responses."

Maybe true for the 1MM+ follower people, but ironically, this is 90+% of how I actually use Twitter. I tweet at a mid-to-high-volume (100k-500k) individual, and occasionally get a response, more often a fave, and every so often a retweet.

For < 50k follower people, I almost always get a response if what I sent was thoughtful.

Also, I love looking at the responses to tweets - and often respond to those responses, and get a thread going with the responder - often dropping out the original person who tweeted altogether.

I'm not saying all is well in twitter world - but I quite enjoy (perhaps too much) the back and forth/threading/responding that twitter offers. I really, really don't need any more.

My main issue with Twitter now is the recent (undocumented) change to the mobile apps for "suggested tweets." (example: http://i.imgur.com/AfsmQgW.png )

As mentioned by other commenters, Twitter has a discovery issue. Twitter's solution is to put "suggested tweets" below the normal replies, but without any clearly discernible division (just "Suggested by Twitter" in light gray). Way too many times I have accidentally read suggested tweets instead of normal replies while instinctively scrolling to the bottom, and I get very confused.

The bigger issue is that no developer trusts Twitter any more. Just like LinkedIn, Twitter has been unkind to the developers who helped make it popular. Remember that things like lists, hashtags, and media embeds were all brought to Twitter before Twitter did them itself. But the developers of these innovations were treated badly, and hence no developer wants to develop for the Twitter platform any more.

Twitter needs to focus more on developers. The underlying concept behind the site is really solid. Allowing people to build cool things that add value and bring in new users is good for everyone. So make it easy for them! Their API is not great, and I cannot for the life of me figure out why they don't release official libraries for the major languages.

Twitter's problem is simpler: it is great for power users, shite for everyone else.

Twitter needs to curate the content I see better - especially for newer users. Twitter is boring as sin until you follow a few interesting people, then it becomes overwhelming as it adds too many more.

Twitter needs to focus on the feed being more malleable, both with and without personal effort from me.

Very interesting point regarding the illusion of interaction that Instagram provides. I do find Twitter's displaying of replies and retweets (to the first 100) questionable.

As for how Twitter can improve in that aspect, how about a horizontally-scrolling feed of users who retweet, and a less-annoying version of such for comments? They seem like relatively easy design choices.

Twitter is very confusing and useless to anyone who wants to spend less than one hour per day on it. So, if they focused on interactivity between anonymous and famous users, and on "live events", most people who do not want to live on Twitter would have a reason to open the app at least a few times a day.

I seem to recognise a pattern where Twitter and Apple's App Store fail in the same way: failing to give end losers (er, users) intelligent filters so they can decide what they want to see and, equally important, what they do not want to see. The conspicuous absence of those options speaks loud and clear as to the platform owner's intentions.

Not sure I agree about #1; maybe my Twitter experience is different from others'. First, if I want to communicate with others I take it to FB/email/messenger of choice. Sure, public conversation is nice, but it's painful over Twitter and I'm not sure how it could be made better. Everyone talks about how a threaded view would be nice, but people fail to consider that Twitter conversations are never 1-to-1; it's usually multiple people tweeting at one person. Having a conversation on something as open as Twitter is like trying to have a conversation with the President during his speech. Not everyone can talk at once, and no matter how you do it, the interface will drown some out. Combine that with the fact that you can tweet anything at anyone (unlike a Facebook/HN thread, which is usually around a specific topic), and you get a very constrained opportunity to have actual conversations.

However, Twitter does a better job at problem #1 than Instagram does. Bieber is not replying to fans over Instagram, and I doubt people are actually communicating with celebrities via Instagram comments. Have you seen Bieber's (or any music celeb's) Instagram? It's a wasteland of spam, self-promotion, and emoji. I doubt Justin Bieber has a higher reply rate on Instagram than on Twitter; it's very easy to see that Justin Bieber engages fans on Twitter, not so much on Instagram.

That said, as someone who uses Twitter heavily but never tweets, my most useful function for it is a realtime news feed (of not just news orgs, but people, parody accounts, comedians, tech nerds, sports news, ...). I place as much emphasis on the ability to "respond and have conversations" in the success of Twitter as in the success of Buzzfeed and other news orgs (I doubt you need an active comments section to have a good news site; most of it is garbage anyway).

The second and third points are apt, though. Facebook's "trending" seems a lot more useful than Twitter's; however, I'm not sure how useful either is without constant curation. Even if Twitter had a super sophisticated algorithm to automatically detect topics, without curation you end up with garbage. Facebook's trending is just as useless even once you have the reason why it's trending.

Lastly, FTA:

> Twitter has turned into a place where famous people and news organizations broadcast text.

I'm not sure how Twitter can fix this, but my response is: if this is what Twitter has become, then it's because you made it that way. Reddit now has a subreddit, /r/BlackPeopleTwitter, which would give you a very different idea of what Twitter is if your Twitter experience were like that.

Tech celebrities such as Guy Kawasaki have had teams of tweeters working for them for years: it seems reasonable to assume large numbers of people are employed to interact with show biz personality obsessives...just like all the faked autographs the Monkees etc used to mail to their fans...

If the MM+ tweeter is halfway savvy, they'd do what Zuck does and reply to some small % of their followers. I'm sure if they did that, a 13-year-old follower would irrationally hope that their tweet was seriously read and would engage in the conversation.

One of the strangest things about Twitter is that its search seems broken. Sometimes when I'm trying to locate past tweets authored by myself or by someone else, I can quickly find them on Topsy, but almost never directly on Twitter.

Twitter has begun to feel stale. Considering how much I love learning from people like pmarca and pg through it, this is something that worries me.

I'm a power user and have been on Twitter since 2007. I used 3rd-party apps until they fucked them all. I'm now forced to use their glittery official app full of ads and suggestions and things I don't care about. I follow less than 50 people and use Twitter 24/7 literally. I never miss a tweet.

> First, for normal users, Twitter feels too much like a one-way broadcast system. It needs to feel more like a community, with meaningful two-way interaction. Right now, a reply to Justin Bieber by a 16-year-old fangirl goes into the ether, never to be seen again. There is zero incentive in the product to interact with celebrities on Twitter, because no one will see the responses.

Let's force Justin Bieber to sit down and read the thousands of replies he gets to each of his tweets.

Let's also make it so when I click on a Justin Bieber tweet, my browser downloads a webpage of 50MB with all the responses so I can read them all.

> Second, and this one is obvious to almost everyone, Twitter needs to focus on realtime events. When I open Twitter during a major debate in the US, or when a bomb has exploded in Bangkok, there should be a huge fucking banner at the top that says "follow this breaking event".

No, please, please no, NOOO. Some of us are simply not interested in real-time events and use Twitter to talk to our friends. If a bomb explodes in Bangkok, I simply don't care; if I did, I'd use the search engine. And knowing Twitter, they would probably make the banner mandatory, or make you dismiss it each time (along with a nice "Did you like this?").

> Third, Twitter has fucked up multimedia integration. Why the hell does adding a photo or video use up some of the 140 characters I want to use for my description? Why does it crop my photo? Why does it not show full-width images in the feed?

Because Twitter is a text-only social network... Or at least that's what it was.

> Fourth, let's talk about third-party payloads/integrations on Twitter. They have never felt native, and they are still, after three years, in a bizarrely dire state.

Same response as before: I think media integrations should not be encouraged.

> And that leads me to the final thing I want to talk about, which is also the most important: Twitter has fucked up its platform. Twitter has turned into a place where famous people and news organizations broadcast text. That's it.

So don't follow them. Follow only humans with real feelings that are not using Twitter to earn themselves money.

> The fact that automatic tweets from apps are considered rude is one of the biggest failings of Twitter's product team. Twitter should be the place for apps to broadcast realtime information about someone.

So you want to read automated tweets all day? Don't be silly. Who wants to read "Johnny has favourited this vid on YouTube!!" or "Mike has uploaded this pic to Instagram!!" all day? You? No, nobody; that's why these integrations with automatic tweets are RUDE. If I want to know what you uploaded to Instagram, I'd follow you there, jackass!

Clearly, such tricks may already be used by some expert detectives, but given the folklore surrounding body language, it's worth emphasising just how powerful persuasion can be compared to the dubious science of body language.

What strikes me is the degree to which the folklore is mixed with the (so-called dubious) science. I was always interested in the topic, and had indeed read a book as a teenager, with ideas such as the iconic example of Clinton touching his nose.

About 5 years ago I returned to the topic; I learned that one of the biggest authorities on it was Paul Ekman, and I read a couple of his books.

Surprise surprise, the main takeaways were ideas such as:

(...) not to jump to early conclusions: just because someone looks nervous, or struggles to remember a crucial detail, does not mean they are guilty. Instead, you should be looking for more general inconsistencies.

Or

There is no fool-proof form of lie detection, but using a little tact, intelligence, and persuasion, you can hope that eventually, the truth will out.

Ekman repeated all over the place that there is no body language for lies, only for emotions, and that the emotions can have a variety of causes. And that was already clear last century! If some security entity has bought something that promised to spot lies, it was probably folklore-based and not science-based.

"Thomas Ormerod's team of security officers faced a seemingly impossible task. At airports across Europe, they were asked to interview passengers on their history and travel plans"

It is so sad that nowadays it is not seen as absurd that some kind of policeman is asking passengers about their travel plans. The journalist gets excited that new methods of catching "cheating passengers" are being developed.

Apparently in the brave new World we have created this is considered normal.

This seems to me like the technique that is already employed by Israeli airport security agents. They have a normally-flowing inquisitive conversation with every passenger boarding flights to or from the country, and they are very good at detecting when the details don't add up or when the person is acting too uncomfortable.

Not sure that trying to "trap" the liar is always the right way to go. Once I had to work with a freelance webmaster, and we were trying to cancel his contract and make him transfer the domain name to us (an association I was working for). I confronted him by politely asking for a written copy of his contract, because I thought he was lying about its length. He was insulted and claimed there was a verbal agreement from several years ago. I don't know if he was right or wrong about that, but after that he would make things up about basically anything (such as that changing the owner/registrar of our domain would cause his other customers' files to be deleted, etc.). We then offered him money to break out of the "contract" early, but by now he was already stuck in his lies: if he said it was technically possible to move the domain name, he would also expose his earlier lies.

After almost a year of arguing with him, he offered to transfer the domain before the end of the "contract". But for "technical reasons" it had to be done after the hosting company had shut down the website and email, which caused some downtime for us. In retrospect, it would have been much easier if I hadn't questioned him in the first place.

You will find the techniques described in the article used when you are traveling into Israel. I've gone through this many many times, and am still often taken aback by the weird questions I sometimes get asked by their officers. Still, the whole process is rather smooth and I've never been detained or even treated unfairly.

I read "What Every Body Is Saying" by Joe Navarro, an ex-FBI agent, a few years back, and one of the things it spent a lot of time on was tearing apart the notion that we can recognise liars by body language without knowing them well first. People do have "tells", but as the article says, they vary wildly from person to person.

They're still interesting to look out for, though, as they're helpful hints that let you steer your conversation to probe at areas that make someone nervous and/or to figure out what someone's different tells mean.

From what I understand you typically need a 'base-line' of normality to know when someone has deviated from that. People can get nervous or make mistakes at any point. The best way to set the base-line is to have a general conversation to put them at ease and then ask the more consequential questions. This could lead to longer queues though which would be an unfortunate side effect.

Lie detection via facial cues / body language strikes me as the sort of thing best done with a neural network: thousands of tiny noisy cues, each with very weak correlations, that need to be combined with solid statistics. Humans can't process this many cues at once, and bias drowns out the signal, but a smart NN hooked up to a powerful camera is another story.
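The statistical intuition here (many weak cues combining into a strong signal) can be sketched without any neural network at all. The following toy simulation uses entirely made-up numbers, not anything from the article: each hypothetical "cue" is barely more likely when lying (55% vs 50%), yet summing per-cue log-likelihood ratios, naive-Bayes style, separates liars from truth-tellers fairly well.

```python
import math
import random

random.seed(0)

N_CUES = 500
P_CUE_LIE, P_CUE_TRUTH = 0.55, 0.50  # weak per-cue correlation (hypothetical)

def sample(lying):
    """Simulate which cues are present for one person."""
    p = P_CUE_LIE if lying else P_CUE_TRUTH
    return [random.random() < p for _ in range(N_CUES)]

def log_odds(cues):
    """Sum of per-cue log-likelihood ratios (assumes cue independence)."""
    total = 0.0
    for present in cues:
        if present:
            total += math.log(P_CUE_LIE / P_CUE_TRUTH)
        else:
            total += math.log((1 - P_CUE_LIE) / (1 - P_CUE_TRUTH))
    return total

liar_scores = [log_odds(sample(True)) for _ in range(200)]
truth_scores = [log_odds(sample(False)) for _ in range(200)]

# Classify "liar" when the combined log-odds is positive.
accuracy = (sum(s > 0 for s in liar_scores) +
            sum(s <= 0 for s in truth_scores)) / 400
print(f"accuracy: {accuracy:.2f}")
```

No single cue is usable on its own, but 500 of them together give well-above-chance accuracy; an actual NN would additionally learn the cues and their weights from data rather than having them hand-specified, and real cues would not be independent.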

The 'active' method in the article is useful, but has a limitation: you need to be able to ask questions in real-time.

Collating the answers and automating truth evaluation would be a pretty interesting AI problem. It should also be possible to have an AI formulate the statistically optimal questions to ask, Akinator-style.

P.S. That would be the killer app for Google Glass, right there. It would also increase your chance of being thrown out of pubs by 1000%, but hey.

Seems to bear a lot of similarity to standard deposition/examination technique. Because a lot of important things are decided on a paper record the standard techniques involve asking open ended questions and drilling down to details, often going back to cover the same ground again, hoping to elicit testimony that's implausible or contradictory.

These sort of things really bother me, because they cause me a lot of trouble.

I am really uncomfortable in front of people, and no matter how honest I am, I show my discomfort in interacting with people quite a lot. I look around the room while talking to people, I shuffle around, sometimes I sweat just talking about the weather... I obviously have social anxiety, but it's awkward to start a conversation with anyone by saying, "Oh hey, this interaction is going to be really awkward because I have social anxiety." I tried it for a while and most people seemed to just blow it off.

It's caused me quite a few issues with friends who frequently think I'm lying when they ask 'truth-seeking' questions. More importantly, authority figures tend to misinterpret this as me being deceptive or uncooperative.

---

The most recent example was a police officer who came to my house to see if I had seen my neighbor's car recently. The car had been stolen and they were trying to determine the last known time it was present. As usual, during the rather normal questions I was rocking back and forth, chewing my nails, and shuffling my feet. I'm painfully aware of these things and consciously try to stop each little tic, one by one.

It didn't take long for the officer to ask why I was so nervous, and then he promptly switched the subject and asked if I had any hobbies. I'm fairly obsessive about my hobbies, and I pretty much immediately started rambling about what I was doing. I suspect I stopped most of my 'nervous tics', because he interrupted my ramble to ask why I was lying about not having seen the car lately.

It was extremely off-putting, since I wasn't lying. I got really nervous again, and started thinking about how I had 'screwed up' the interaction. Instead of responding to his question, I simply told him that I had really bad social anxiety and this questioning was really difficult for me. That didn't help at all.

I ended up with another officer at my door, with more questions that I had no answers to, and I became more and more nervous. Needless to say this went on for ~30 minutes just standing at my front door until I suppose they realized that I was either telling the truth, or was a completely unreliable witness (I was!).

---

Things like that aren't rare for me. It sucks, and early in my life it caused me to simply lie a lot. People almost always thought I was 'up to something' or 'not telling the truth', so I would just go with it. I eventually learned the value of consistent honesty, but I am treated the same regardless.

It goes without saying that having this in my head during every single personal encounter causes me even more anxiety and uncertainty about my responses to someone.

edit: I noticed I actually started rocking back and forth and itching my head randomly while writing this post... bleh.

I grew up very deaf. I went to mainstream schools, I did everything everyone else would do... except hear much at all. I wore hearing aids which helped somewhat, but that just makes all that jumbled noise louder, which isn't that helpful.

I have been deaf since birth. So at a very early age I picked up body language, micro-expressions and of course lip-reading, which were an integral part of how I communicated!

> "The problem is the huge variety of human behaviour; there is no universal dictionary of body language"

Um. Yes there is. Everyone uses body language. Everyone uses their mouth, their eyes and their hands. Take sign language. While it's not the same in every country, someone fluent in any sign language can understand BSL (British Sign Language), NZSL (New Zealand Sign Language), ASL, etc. Why? Because sign language is the most literal thing you can think of. If I look at someone and point to them, then point to someone else, what do you suppose that means? The only issue is local dialect/slang, which is easy enough to figure out.

I'd like the BBC to try this on deaf people and see what the results would be. It would be extremely different. Even for people who just wear hearing aids: a frown does not mean anger... it means they're trying to understand you. If that person misheard a previous question, but then didn't mishear it the second time, are they lying? No.

For myself in particular, when I was trying to have a conversation with people I had a few difficulties. For this, think of dyslexia, say, the brain's language processor. For some people with dyslexia an example sentence could look like: "I ___ to ___ shop ___ the ______ ____ ___ car". It's exactly the same for someone who is deaf. However, they need to be working that language processor in their head at 300% capacity. Not only are you lip-reading and using sound from your hearing aids, you're factoring in context, location, the person talking to you, body language and so on. So a deaf person will then fill in those blanks in my above example and hope they got it right. Except, by that stage, more has been said and you're now trying to remember what was said just a few minutes ago. Then you're defeated.

However, if you watched my body language in an airport you'd probably shoot me, or whatever customs does. I'd be the ideal 'liar' that this BBC article refers to.

Since I got my cochlear implant, (I jumped from 4% hearing to something like 80% upwards) my world has grown incredibly. Not only do I have my previous skills, but I can now add verbal input into my once stressed language processor. It's incredible what I can pick up on. Now that I have that extra sense/input, I find that I can tell whether someone is not being truthful or honest. Another poster here said that "give them enough rope to hang themselves with" and that's very true. Someone rambling? Watch their hands. Someone straight to the point, confident, and uses no body language -- very confident of themselves. So simply throw them off. Does their attitude change? If it does, what does that mean? Context comes into play here, and customs simply don't have the time. Nor will Police.

Someone trying to explain the minute details of their drive to work, watch their eyes and see where they go (looks you in the eye, wall, phone?). Then stop. Who exactly remembers details that they've got no reason to remember? So they'll tell the short version, 'cos they have done it 1,000 times. Then if you're probed such as this article says... you'll end up getting anxious, and contradict yourself. "oh, maybe I did take Stuart Street...".

I am a firm believer that body language is really a good way to determine language nuances, even across different languages. It works. I've been friends with people who didn't know English, but I could communicate with them effectively enough. Giving someone a few weeks of body language training is going to do squat. Getting experts, again I'm not sure -- have they ever had to rely on it? Perhaps they should wear headphones with white noise and interrogate people, with someone who is listening -- and compare notes.

I kind of feel like writing a blog post to refute this article, with proper examples etc. Would anyone be interested?

I apologise if I sound disjointed; it's 3am in NZ right now, and I just had the need to go "no, this is not quite right".

P.S. When I went to Singapore, a customs person glared at me and nodded to the guy with the gun and so I smiled and I said, "Hi! I hope you haven't had a horrible night so far -- hopefully my documentation is in order and you'll not have to deal with boring stuff!" and she went from >:{ to :-) and nodded to the gun guy walking behind me, who turned around back to his spot. I got all that from a split second glance. It's actually even easier for me now with my implant to do this sort of thing in case actual spoken communication is required.

EDIT: As per article, it is common sense -- but you need to know someone well enough to take judgement, which these guys have no time for. Speaking for myself, I learned over a long period of time to do that as quick as possible. Otherwise, I'd have been left to fail.

The whole procedure seems to hinge on catching some people assigned to engage in a naive effort at deception - people who have constructed a story from whole cloth. It seems likely that anyone attempting a sophisticated act of deception wouldn't invent a story but rather take their true experiences and rearrange them to fill the holes where the things they wouldn't describe are, giving them an unlimited number of true details to recite. Spending some time on learning the rearrangement would give someone a stronger grasp of their supposed itinerary than the average person has of their actual itinerary.

Which is to say this probably catches confused people and people hiding harmless but embarrassing facts but probably isn't useful against "determined evil doers".

Cops have been doing something like this for a long time. Most times I've gotten pulled over for simple speeding, there is a question about where I'm going, where I'm coming from, and sometimes a few more "casual" questions as well. And I'm not even suspicious looking.

I also wondered why where I was coming from had any relevance to the speed I was currently driving but always sort of figured it was some kind of fishing technique.

I lie every time I go on vacation by myself. After having to explain myself one time for absolutely no reason, now I just don't even bother. Apparently traveling alone for pleasure is highly suspicious.

I tell them I'm traveling on business, or act vaguely rude to the border agent with very curt responses. Apparently they're good at filtering the real liars, because I've never been hassled since.

Academics spend years writing books nobody will buy. That's practically the definition of academic writing. They go into an incredible amount of depth for a very, very niche audience: other academics interested in, e.g., the experience of women in Japanese detective novels of the 19th century. (n.b. Actually a book, assuming I am remembering all the details correctly. An advisor spent years on it.)

This is exactly the result the market should hope for: a profitable method which gets academics to write down very-incredibly-niche ideas and then put copies of them exactly where someone would expect to find very-incredibly-niche ideas. The alternative to these $150 books is not $7.50 books. It is "no book." They do not meaningfully trade off with equally-in-depth blog posts, because academics are not scored on, and hence do not as a matter of practice actually sit down and write, book-length blog posts.

These may sound like stories of concern to academics alone. But the problem is this: much of the time that goes into writing these books is made possible through taxpayers' money. And who buys these books? Well, university libraries, and they, too, are paid for by taxpayers. Meanwhile, the books are not available for taxpayers to read unless they have a university library card.

In the US, taxpayers are said to be spending $139bn a year on research, and in the UK, £4.7bn. Too much of that money is disappearing into big pockets.

What is this garbage? Why not let an interesting article about a specific problem (academic publishing scams) stand on its own? Why pour on sensationalized, over-simplified, misleading "context?"

According to the study those numbers came from, the $139bn the federal government spends on "research" is part of a larger pool of "funding for research" that includes non-government sources, which is around $450bn. Universities (not just the libraries) are getting about $60bn from the big pool.

In other words: the study, at least the results referenced by The Guardian article, does not say ANYTHING about the flow of money from taxpayer wallets to university libraries. Based on the evidence presented, the fraction of university library funding from taxpayer dollars could be anywhere from 0% to 100%. This is blatant dishonesty from The Guardian.

Oh, and one more thing: the "good" part of the article is written by an anonymous source with no verifiable facts (no publisher names, no book names). I have no real reason to dispute its authenticity, and the idea certainly seems plausible, but I cannot verify any of the claims being made without basically starting my own study from scratch.

This article could easily be expanded to include software developers as well as academics. I've seen several excellent developers with strong reputations contributing to the open source community get "hoodwinked" into rushing out shallow books for companies like Apress. I've seen other developers also get sucked into the game, only to see less income from the work in one year than you'd need to buy a modest used car -- after months of at least half-time effort. The market and problem for these books is a bit different from the market described in this article, but the symptoms for the writers are often the same.

Broadly speaking, cheap e-books from non-stellar academics are an incredible amount of data leaked for free: top peer-reviewed journals are not cheap at all and relevant conferences proceedings are often run on a pay-for-view scheme. The non-stellar academic from Windy Hill University, Nowhere, will sell a good summary of his/her field and an up to date literature review for less than $20. Bargain.

One quality specialized publisher is Now Publishers: http://www.nowpublishers.com/ Quality is managed by having leading academics have a say in editorial decisions.

Their conditions are somewhat better than more established publishers'. Authors still hold the copyright over the material, and according to the librarians at my institution, their campus and course packs are much cheaper than other publishers'.

And students are being hoodwinked into buying them. The books also often include minutiae which change from edition to edition, so you have to get the nth edition in order to take your class - because in physics, for instance, there are Q&A in the books which differ between editions.

The main book we had to get was written by two of the lecturers, and was 300. It was so half-assedly bound that you had to cut the pages open.

I wonder how much of this is simply a function of the massive overpopulation of graduate programs and the growing need to fill out CVs with publications? I know that this has been happening for years in the form of obscure journals that no one reads or cares about.

One of the best emails I've received in a while was from the UVA alumni association letting me know that I would retain library access, including access to JSTOR, because there is no way I could afford to outright purchase everything I want to read.

This is why, if you want to be an author publishing through the more or less conventional publishing industry, you need a first-rate agent, and a knowledge of who is a first-rate publisher.

There's a big difference between a global publisher placing your book in their professional publishing imprints versus an obscure publisher leeching a bit of money out of libraries. The first-rate publisher will also get your expensive book into the subscription programs they market to corporate subscribers, for example.

If you have a good agent, they will steer you toward the goals you are pursuing, if you communicate those goals clearly. That's where some knowledge of which publishers have the best and/or most widely read books in your field is important.

If you want to get your book into the more-widely sold trade paperback format, you need to tune your proposal to that goal. Most publishers require you to do some competitive analysis in your proposal. That's going to be important guidance for where your book ends up.

I used to live in India in the early 1990s and I can't wait to go back and visit this beautiful land of kind and generous people, where Hindus, Muslims, Sikhs and Christians are united by the fabric of 'Jai Hind' and where the whole nation comes to a standstill when a game of cricket is being played... lol

When I look at these architectural gems, I wonder how it might have been when it was busy with life. I am sure these structures served as focal points of communities and unfortunately now lie in absolute ruins of dilapidation and neglect.

Any construction engineers here? How would one go about building something like this? Do you have to dig a hole big enough to build the whole thing in, then re-fill the sides with sand? Is it possible to gradually go deeper, i.e. build a level under an existing level, or do you have to wait for the dry season and then rush to get it completed before the lowest levels are flooded come the next wet season? That seems impossible, given the size of some of these things.

I'm wondering because I'd like to build a wine cellar similar in concept to these things (a spiraling staircase with an entrance from above only), but I want to get some insight into building methods before inviting contractors. (For anyone who likes underground structures, see www.spiralcellars.co.uk.)

Hello all - I'm Victoria, the author of this 3-year-old stepwell story, but the one on ThisIsColossal last week went viral and spawned all this miraculous hoopla. Barely anyone even read that article, and I couldn't get anything published anywhere until last week.

I've been working in total obscurity for years, so please, if anyone wants to know more or I can answer questions, I'm happy to. Though honestly, there's a whole bunch I don't know, and no-one does.

If I'm using your site incorrectly, forgive this poor newb. My crazed-geek-genius-artist-brother frequents it and is impressed by me, finally. Btw if you're interested (since I owe him) here's his most recent Kickstarter campaign:

Nice architecture, but man, those things must've been a public health nightmare. Anything that got tracked in on anyone's feet when the water was low would eventually have wound up in the water supply when the level increased.

I have visited many of the places mentioned in the article. In terms of architecture they are beautiful, but now all the places are in bad shape. The government and local population show no interest in preserving the history.

They are very beautiful when dry. However, navigating these steps when there is water is not easy -- they get very slippery when the water level goes down but algae is still present in the supposedly drier steps. A cylindrical design where the steps go around the well with sealable openings may provide similar benefit with little maintenance overhead.

OK, so a hole with steps is great when numerous people must walk down to the water level. But keeping junk out, while letting people in, is a hard problem. As historic places, they of course ought to be maintained. But today, deep cisterns with pumps make much more sense. Or maybe better, groundwater recharge, because you get natural filtration.

When I visited one of these in Agra last Fall, I assumed it to be either a bath or an aquarium to keep exotic animals. Some had so many tiny windows and grills, I thought they might be designed to use as laundry places. It's interesting that all of these were simply wells.

I took some friends on a sightseeing tour of the country last Fall - one of the most memorable sights was Chand Baori (Abhaneri), about 2/3 of the way between Agra and Jaipur. The place was absolutely stunning, and it had maybe 3 visitors during the two hours we were there.

It can only be stopped at if traveling by car, and while it's a tad more treacherous than plane/train, it's the only way I'll travel between the two cities if I have the time. You'll get to experience some amazing scenery along the way, and it cost no more than $100 for the journey in a large A/C car!

Thanks OP! I was at the Adalaj stepwell in Gujarat a couple of months back. It's amazing to see these structures' usefulness to the common people and the beautiful carvings. Inside the stepwell the temperature is very cool, like A/C, despite the burning ~43 degrees Celsius outside.

That's the saddest part of this article to me. Many people have lost innocent-until-proven-guilty in their minds. Tina has a good response in the article:

> The answer is that I, and other public defenders, don't represent criminals. We represent poor people who are facing criminal charges, charges on which they are presumed innocent until proven guilty in court. We represent members of our communities who have a right to real and meaningful legal representation, even if they are poor.

The bulk of legal work, however, is process-based: repetitive, routine, administrative, and something that could actually be done through machine learning and AI. Examples of process-based work are document review and legal research. Document review is when parties to a case sort through and analyze the documents in their possession to determine if they are relevant to the case at hand. Legal research is the process of identifying and retrieving the information lawyers need to support legal decision-making.
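To make "document review" concrete, here is a toy sketch of the relevance triage such tools automate. It is purely illustrative: the function names and the naive keyword-matching approach are my own invention, and real e-discovery systems use trained classifiers rather than keyword counts.

```javascript
// Hypothetical sketch: rank documents for review by keyword relevance.
// Real systems use trained classifiers; the idea is the same -- score
// each document against the issues in the case, review best-first.
function scoreDocument(text, keywords) {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  let hits = 0;
  for (const w of words) {
    if (keywords.has(w)) hits++;
  }
  return words.length ? hits / words.length : 0; // fraction of relevant terms
}

function rankForReview(docs, keywords) {
  return docs
    .map((doc) => ({ doc, score: scoreDocument(doc.text, keywords) }))
    .sort((a, b) => b.score - a.score); // most relevant first
}
```

Even a crude ranking like this changes the economics: a reviewer's time goes to the documents most likely to matter instead of being spread evenly across the whole pile.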

If LegalTech were to do the lion's share of a public defender's process-based legal work, they would be able to focus on their advisory work. This would allow a public defender not only to defend more individuals but, most importantly, to provide proper legal help to everyone they are defending.

The inequalities and problems in the justice system could be seriously helped/fixed with better adoption and implementation of technology. The problem is that tech must be embraced not just by the individual lawyers and defenders it would help the most, but also by the decision makers, the government agencies and law firms who have the final say on whether to bring tech into their organizations.

The good news is that there are strides being made towards bringing in tech to augment lawyers' capabilities; the bad news is that no speed is fast enough, as there are a ton of people who require proper legal representation right now and are missing out.

Kind of off-topic, but I emailed Tina immediately upon reading this last week. It really moved me. I'd like to chat with any lawyers working as public defenders, pro bono or otherwise, to get a better idea of the day-to-day challenges they have with managing case load and communicating with clients -- things technology could help with. Does anyone know a good place/community/group to join?

I ask because myself, friends, and close family have all had experience with a public defender's office at some point, and the article is spot on about many clients being poor, assumed guilty, and lacking resources. I'm currently helping families navigate the criminal justice system with a company I started, but it's been on my mind (and heart) to begin looking at providing resources to PD offices as well, as it's so important that these cases, particularly low-level and drug offenses, are handled efficiently. I would love to get a better idea of the day-to-day challenges faced by an active public defender.

Also, if anyone is interested in an amazing documentary about the courage and dedication PDs have, check out Gideon's Army: http://m.imdb.com/title/tt2179053/. I think it's on Netflix.

Slightly OT: I wonder what will happen when marijuana becomes legal across the nation and body cameras are more widely used. Will there be a reduction in overall prison population? Will people get better representation due to video evidence? Will there be fewer bad-apple cops? All of these may become true to varying degrees. It will be interesting to see how we as a society handle these issues in the future.

> At that point, he realized that the client had never been served to appear for the court date on which he allegedly jumped bail.

Why is it the public defender who had to notice that? Why didn't someone else notice? Why was he even arrested in the first place? There are more problems with the system...

> When people ask how to push back against police misconduct, how to decrease the costs of mass incarceration and how to ensure fairer treatment of our nation's most disenfranchised citizens, part of the answer lies in fully funding public defenders' offices and enabling us to represent our clients in a meaningful manner.

If the justice system is a funnel, these public defenders are at the very bottom. Adequate funding may relieve pressure, but the long term solution is a better filter at the very top. One solution: https://news.ycombinator.com/item?id=9802861

A close friend of mine is a public defender, and gave me some disturbing stats.

According to the American Bar Association, it is unethical for a defense lawyer to take more than 400 cases a year; 350 if dealing with juvenile cases; three if they're capital (death-penalty) cases. Every PD my friend knows regularly handles double those limits, because they have no choice and the states aren't interested in upholding the ABA's limits.

This is Amazon's wet dream. Your app isn't an app at all, it's just a collection of configs on the AWS Console. When and if the time comes to migrate off of AWS, you realize you don't actually have an app to migrate.

Interesting that the article talks about load tests but omits any results.

I was trying out an API Gateway + Lambda + DynamoDB setup in the hope that it would be a highly scalable data capture solution.

Sadly the marketing doesn't match the reality. The performance, both in terms of reqs/sec and response time, was pretty poor.

At 20 reqs/sec - no errors and majority of response times around 300ms

At 45 reqs/sec - 40% of responses took more than 1200ms, min request time was ~350ms

At 50 reqs/sec - very slow response times and lots of SSL handshake timeout errors. I think requests were throttled by Lambda, but I would expect a 429 response as per the docs rather than SSL errors.

My hope was that Lambda would spin up more functions as demand increased, but if you read the FAQs carefully it looks as though there are default concurrency limits. You can ask for these to be raised, but that doesn't make scaling very real-time.
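As an aside, figures like "40% of responses took more than 1200ms" are just latency percentiles computed over raw samples. A minimal sketch of the arithmetic, using the nearest-rank method and hypothetical sample data:

```javascript
// Compute a latency percentile from raw response-time samples (ms).
// Nearest-rank method: the p-th percentile is the smallest sample such
// that at least p percent of samples are at or below it.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

With samples `[300, 310, 320, 1250, 1300]`, the 60th percentile is 320ms while the 90th is 1300ms, which is why a load test report should always quote tail percentiles rather than just an average.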

I see a lot of people disagreeing with the overall direction of "less servers, more services". I totally get it, I used to be one of those people, but I think the shift to "less hassle development" is inevitable.

5 years ago people used to debate whether we should use a virtualized server or a physical one. You still see similar discussions, but rarely - we have all more or less agreed that using AWS/Rackspace/etc. is good for a business in the majority of use cases.

I think 5 years from now we'll still be debating servers vs. services, but the prevailing wisdom will be that "services" have won.

It is pretty cool, but not really serverless: you are still handling HTTP requests via Amazon API Gateway, and in general you are relying on and paying for quite a lot of Amazon services. Not sure how much better this approach is than serving ImageMagick via PHP, for example; it would be good to see some numbers.

Are servers really that hard to manage these days? This seems like way more work and pretty limited in what it can really do, especially compared to a few lines of code in any decent web framework that can perform a lot faster.

I'm playing with these exact things now and it is very enjoyable so far.

My main worry is not on the technical side but on how things are charged for. If I build something that starts to get used I am covered in terms of scalability, but not in a way that protects me from 'cost scalability' so to speak. I know I can set up billing alerts and hit a big 'shutdown' button in response to high load, but what I don't think I can do is throttle these services based on the money I want to budget/spend. With my own servers I have a hard cost limit with a hard scalability limit, or rather I just accept that my response times will degrade or requests will fail once I've allocated all I can afford.

Is there something in AWS for 'cost throttling'? It may be a gap in their services, especially for people who want to build things that might get traction.

Beware, link-bait! The title should really be "Microservices without non-Amazon Services", which if you remove the double negative really says "Microservices with Amazon Services", which is, well... not that interesting IMO. I'd rather write against Cloud Foundry, which abstracts away AWS.

I made a pretty cool Lambda this week: taking email from the Mandrill inbound email API, processing it through Lambda, then posting it to my Redmine Docker server. After a lot of fiddling (Lambda doesn't support x-www-form-urlencoded) it now works great.

It figures out the language/runtime I'm using (Java, Ruby, Go, NodeJS, PHP), builds the code with a buildpack, then hands it off to a cloud controller which places it in a container. My code gets wired to traffic routing, log collection and injected services. I can deploy a 600Mb Java blockbuster using 8Gb of RAM per instance or I can push a 400kb Go app that needs 8Mb of RAM per instance.

I don't need to read special documentation, I don't need special Java annotations.

I just push. And it just works.

I'm talking about Cloud Foundry. It runs on AWS. And vSphere. And OpenStack. It's opensource and doesn't tie you to a single vendor or cloud forever.

I worked on it for a while, in the buildpacks team, so I'm a one-eyed fan.

Seriously: why are we still talking about devops? It's a solved problem. Use Heroku. Install Cloud Foundry. Install OpenShift. And get back to focusing on user value, not tinkering.

Disclaimer: I work for Pivotal Labs, part of Pivotal, which donates the largest amount of engineering effort on Cloud Foundry (followed by IBM).

Yea, that's cute, except that DRM can handle pretty much all of it. You can split hairs and complain about GPU scheduling, which is inherently rather difficult because scheduling at the command queue level is a problem very much like the halting problem. The real issue isn't that we don't have the pieces we need, but rather that we can't get all the players to agree on using the same ones. On Windows you have one entity (Microsoft) that controls the WLK, and unless you pass it you won't be certified; on GNU/Linux "a working driver" can be anything from "not catching on fire on boot" through "actually brings up display" to "oh, hey, a textured triangle!"

And I get it, everyone is frustrated, because ultimately displaying a bunch of pixels seems trivial, that is, until you mix in politics. You have NVIDIA, AMD, Intel and the community at large all pulling in different directions. With GNU/Linux graphics support having a marginal effect on the bottom line, there's little incentive to deal with it. And you'd still miss a controlling entity that could validate that "works on Linux" means anything but "compiles with some random kernel release".

Everyone who thinks that writing great graphics drivers can be a spare-time activity is delusional. The fact that we have Android with Gralloc (which in comparison to DRM is, well, a joke), Ubuntu with Mir, others trying out Wayland, and folks still stuck on X11 makes this all so much more complicated than it needs to be (and SteamOS is rather terrible in this regard too, which is a shame, because Valve is trying to do the right thing with Vulkan but SteamOS is just not a well-put-together distro, at least right now). It's not just a driver model problem, it's the politics of it all. Outside of Google adopting DRM instead of Gralloc (or Gralloc getting all of the features of DRM, effectively becoming DRM and replacing it on the desktop) there's probably little chance of unifying all the drivers under one coherent umbrella.

SICP is a classic text, and although it is old, it has not aged. I wonder if this is a sign of a stagnation in the field of computing, that as the machines we use become exponentially more powerful, we still know very little about how to use them. In any case, SICP is still an excellent introduction to computing and does not need to be updated just yet.

The name "SICP Distilled" feels very misleading. The programming language has been changed, in what I assume was an attempt to be more trendy, and the content has been changed to the point that it only superficially resembles the original text. There is no better language to explain the concepts of SICP than Scheme, and it appears the author understands this, as he had to remove sections of the text to compensate for Clojure's unsuitability. It appears that he changed or removed a large portion of the text, in fact, and added in their place new ideas which are arguably unrelated to the spirit of the original book. Perhaps it is merely the name "SICP Distilled" that makes me apprehensive, and I would be happy if it was marketed as something completely unrelated, with only a nod to SICP as its inspiration. However, it feels wrong as it is.

Peter Norvig wrote that SICP "is a very personal message that works only if the reader is at heart a computer scientist."[1] It is entirely possible that this project will bring some of the most important ideas of SICP to those who do not fit that description. But is that a goal we should be striving to achieve? This question makes me think back to a portion of the quote, on the very first page of SICP, by Alan Perlis: "Above all, I hope we don't become missionaries. Don't feel as if you're Bible salesmen. The world has too many of those already. What you know about computing other people will learn. Don't feel as if the key to successful computing is only in your hands."[2]

"To use an analogy, if SICP were about automobiles, it would be for the person who wants to know how cars work, how they are built, and how one might design fuel-efficient, safe, reliable vehicles for the 21st century."

On that note can anyone recommend SICP-equivalents for automotive, locomotive and aerospace engineering?

I've had a lot of experience with various musical languages (ChucK, Csound, Supercollider, Max/MSP, PD, Faust etc.) I've even designed a few music languages for myself. For what it's worth, I did go to a music school and graduated with a bachelors degree in music, with a strong focus in computer music composition.

In the case of Alda, I'm a little underwhelmed. To be fair, it seems that the author of this software has done a good job meeting his goals. The syntax does look quite intuitive; basing it off of Lilypond syntax was a good choice IMO. I wouldn't call it a programming language, and it can't do everything I want, but it seems to fit certain types of music quite well. Here is my big issue:

> In the near future, Alda's scope will be expanded to include sounds synthesized from basic waveforms, samples loaded from sound files, and perhaps other forms of synthesis.

I don't want to say Alda sounds "bad", because soundfonts, samples, and basic waveforms have their place for certain styles of music, but it is certainly quite limiting. Considering that there are massive books written about the "perhaps other forms of synthesis", it doesn't seem like sound itself is too big a focus here.

From what I'm reading, Alda is essentially a MIDI file generator. It doesn't actually produce music; rather, it sends MIDI instructions for how to play music somewhere and leaves it up to some other program to make the sounds. All of the other music languages I've mentioned can actually make music internally.

Have you ever considered using other music languages with Alda? Overtone is just Supercollider + Clojure, so I'm sure you could do something similar. I imagine it would be pretty trivial to get Alda to write a Csound score, since it's basically just a text table. PD has libpd, which I've had limited success with. I might as well mention my project Soundpipe as a possible DSP engine as well: www.github.com/PaulBatchelor/Soundpipe.git
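To illustrate the "text table" point: a Csound score is just lines of `i` statements with start time, duration, and instrument parameters. Here is a minimal sketch in Python; the note-event format is invented for illustration, and it assumes an instrument 1 that reads amplitude and frequency as p4/p5 (a real Alda-to-Csound bridge would map Alda's parsed events instead).

```python
# Sketch: turning a list of note events into a Csound score, i.e. a plain
# text table of "i" statements. Event format is a made-up placeholder.

def note_to_freq(midi_note):
    """Convert a MIDI note number to frequency in Hz (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def events_to_score(events):
    """events: list of (start_sec, dur_sec, midi_note, amplitude) tuples."""
    lines = []
    for start, dur, note, amp in events:
        # Csound score statement: i <instr> <start> <dur> <p4=amp> <p5=freq>
        lines.append(f"i 1 {start:.3f} {dur:.3f} {amp:.2f} {note_to_freq(note):.2f}")
    return "\n".join(lines)

# Three notes: C4, D4, E4
melody = [(0.0, 0.5, 60, 0.5), (0.5, 0.5, 62, 0.5), (1.0, 1.0, 64, 0.5)]
print(events_to_score(melody))
```

The output is pasted straight into the score section of a .csd file, which is why score generation from a higher-level notation tends to be the easy part.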

I'm not a musician of any kind, but I wanted to congratulate the creator of Alda. I believe that (almost) everyone on Hacker News admires and respects those who make interesting projects. We are a community of hackers that are interested in reading about projects like yours. I see you just created your HN account. Congratulations!

I found a few of the comments here a bit critical of Alda, but that's just the way Hacker News works. There is such a large, diverse, intelligent, and involved community on Hacker News that there are always people that have keen insights or have already tried out related ideas resulting in useful, or not so useful, observations. Inside the most critical comments may be the best suggestions.

Keep us posted on the development of Alda; here on HN, everyone is cheering you on.

This is pretty neat. But I found it quickly getting very hard to read towards the bottom.

I feel that if there's a computer interpreting some sort of language, it should have a built-in notion of the circle of fifths -- why should I give the computer absolute notes, when everything in music is so relative? Maybe viewing the notes in terms of do, re, mi would make sense, within the context of being in some key; then jumping to another key (like the dominant, or the minor version of the current key). Computers make computation cheap, which means it should be easy to think in terms of abstractions (that's how musicians think about music, then unwind those abstractions into absolute notes when writing the music down).

But then I look at the current syntax, how it attempts to be extremely compact, and worry that it won't be possible to extend to better abstractions very well.

I'd like to add to this discussion by mentioning the ChucK music programming language (http://chuck.cs.princeton.edu/). Here is a TedX talk by the co-author, Ge Wang, talking about the joys of digital instruments (http://www.ted.com/talks/ge_wang_the_diy_orchestra_of_the_fu...). In the talk, Ge Wang talks about leading the Stanford Laptop Orchestra, which uses ChucK to program all sorts of crazy digital instruments, and also talks about "Ocarina" iPhone instrument. Cool stuff!

I get the thinking behind this, and it seems a neat little tool. However, I can't help thinking that the problem the author cites with modern GUIs being too distracting to use is more likely to be solved long term by the likes of StaffPad[0]. This allows you to physically write your notation on a blank digital canvas, meaning the only difference to writing on a blank sheet of physical paper is the tactile contrast of using a stylus rather than a pen/pencil - something that is improving all the time.

This is a pretty advanced language, I like it a lot! I want to share what I was experimenting with lately. It's a very easy language for producing music in the form of binaural audio, which helps me sleep with my insomnia. Sbagen is the name, and you create .sbg files: http://uazu.net/sbagen/ (Idoser is based on the same source.) Use this to create or edit .drg files from or with .sbg files: http://binaural-spot.blogspot.de/2010/08/drg-author-installe... There is also an online converter from .drg files to .sbg, and the .sbg files actually show you the source. You can create wav files, but they are very large. Use the .sbg files, and in case you want this on mobile, here is an Android version that can handle these kinds of files: http://www.normalesup.org/~george/comp/binaural_player/

1. Is there a way to play a selected segment of the music rather than the whole file? While composing, replaying a segment over and over again is quite common. Don't want to start from the beginning every single time.

2. Is there a way to produce sheet music (PDF or PNG) from the source? That would be really useful.

3. Is there a way to format the notes into measures? I know the | is optional and ignored. Can the program reformat the source and automatically divide the notes into measures?

4. Emacs integration would be great: edit in Emacs, then hit ^X^E to execute the program and play the selected segment.

This is neat, of course. Yet, being classically trained myself and a software engineer, both for 30 years, I have yet to find an alternative music notation system that would make me proclaim I would use it.

When I look at a traditional score I see and hear the music in my mind pretty much instantly. There is zero cognitive load, at least at the conscious level. With any kind of text-based notation you have to read text.

Another point, perhaps minor, I think I can say that most of the world does not learn notes as "CDE" but rather "do" "re" "mi". That's certainly how I learned it because I did not study music in the US. Perhaps you've accounted for this?

For me paper and pencil is still the best approach and the most natural experience.

Notation is a powerful tool, in fact it is a tool for thought. I learned this in CS when I studied APL and used the language professionally for nearly ten years. Music very naturally lends itself to a sophisticated notation. I am not sure taking that away can ever produce a better outcome.

Someone more qualified on the subject can probably explain why it is our brains have evolved to be "tuned" for notation. My hypothesis is we evolved powerful image recognition capabilities coupled with high speed classification and semantic processing. We can instantly recognize shapes (predators, something flying towards you, etc.) decide what it is, access a library giving us options and decide how to react. All of that in a very small fraction of a second. Notation benefits from us having developed such capabilities in our brains.

It isn't my intent to criticize this effort. Without attempting new ideas there can never be any progress. Kudos for trying.

I'll end with a question: What would you say is the use case for Alda?

I absolutely agree with everything in this post. When I was a post doctoral fellow, my principal investigator would publish at least one paper a month. She was celebrated in the department.

(a) The papers were published in journals like Journal of Green Donkey Testicles, Journal of Differentiation of Dying Mouse... journals that I had never heard of, with no impact. Every tiny bit of an experiment conducted in the lab would get published, without a full picture.

(b) Much of the data was turned into results by making everything 'statistically significant'. I would do experiments and see no freaking difference between control and experimental, yet, through the magic of statistics, she would find the difference. It was lame and depressing.

(c) The above is an isolated example. There are countless smart, diligent, and hard-working professors who continue to push the boundaries of science (e.g. my amazing PhD prof, whom I dearly love and admire). Unfortunately, their time is plagued by writing grant after grant, fighting inter-departmental politics, dealing with the Chair of the department on a regular basis... basically stuff that distracts them from having the time to relax, think, and innovate.

(d) Commercialization of innovations in schools and universities is butchered by the IP policies, whereby the University takes 1/3, the commercialization office takes 1/3, and the poor researcher is left with the rest. This kills innovation + tech commercialization and a researcher's desire to be an entrepreneur.

I met and worked with JF for 6 months. I learned an enormous amount working with him. His creativity in experimental design and his approach to answering questions was inspiring. Sadly, the fact that he is leaving academia does not surprise me. People who care more about doing good science than about publishing (those are absolutely NOT the same things) rarely make it in academia. Funding for true basic research has contracted significantly and scientific communities have become incredibly risk averse with regard to who and what they give grants to. The peer review system reviews based on social norms within that field, not on what is actually good science. Finally, training and education are still based on the guild system. People who actually want to advance the state of human knowledge, not just have an academic position, find this environment toxic.

Best of luck to JF in his future endeavours. Academia is the true loser here.

The CS field gets a lot of bashing for gravitating a lot around conferences rather than journals. Because, you know, journals are supposed to be the serious venue for the grown-ups. But actually, CS conferences (at least the ones I've published in) have a double-blind review system that feels much fairer than the single-blind in the top journals. Of course it's far from perfect (more often than not the reviewers can guess the affiliation of the authors anyway) but things like almost needing to talk to the editor to publish papers, or the editor using author name as an important acceptance criterion, do not happen AFAIK. In general my experience with reviews has felt much fairer in conferences with double blind system than in the typical journals with editorial boards full of sacred cows. And I don't say that out of spite for rejection, because in fact I've had more rejections in conferences than in journals.

A pity that in my country (Spain) the bureaucratic requirements for funding, tenure, etc. are one-size-fits-all and basically conferences count almost nothing and journals are everything, even if in my particular subfield no one cares about journals. So I end up playing a double game: publishing some papers where I know I should to find the right audience, and others where I am forced to to survive.

I'm in the latter half of a PhD. I love it. I work insane hours entirely out of my own choice, because it is the most rewarding and enjoyable thing I've ever been a part of.

The idea that I've found something that I love, that is challenging, that (I hope) I'm relatively good at, and that has a definite net positive good for us as a species/society, yet I may not be able to pursue this long term because of the immense challenges facing academia as a whole (catalyzed, I would argue, by tragic lack of funding) is really concerning, on both a personal and a societal level.

I don't really see what people mean when they "agree" or "disagree" with this article and the likes of it. Aside from expressing his own disappointment, the author points out what is wrong with academic research, that's true. But pointing out what is wrong, though not useless, is a lot easier than suggesting a better alternative (even a pretty lousy one).

It's easy to imagine how everyone should be free to explore whatever he wants in his own free time, with his own money, in his own home lab (although even this isn't true, because currently even the most basic stuff needed for research in chemistry or biology is illegal to freely buy and sell, as it can be used to produce drugs or bombs, or because of some other "national security" bullshit). But what the author is talking about isn't his own time and money: he expects to be provided with everything he needs for research, and with the means to live and prosper. And if someone is about to give you all that, your promise that you'll eventually discover something great isn't really enough for them. Quite understandably so. So ideally they would like to make sure both that you won't use all the money and lab equipment given to you to smoke crack and do nothing, and that you are actually able to discover something great. Which, I guess, even you yourself won't promise, because you don't know.

So, in fact, even with all that bureaucracy we cannot have any guarantees. And the author wants the system not only to work, but to work without putting too much pressure on him and his colleagues. How does he imagine that happening? He doesn't explain clearly enough.

Usually we only hear this from the disgruntled. It is valuable to hear this perspective from someone who perceives themselves as happy and successful.

> because I know how they were obtained.

This is similar to one of the reasons I left graduate school.

I realized that everyone who plays ball and puts in the hours gets a PhD. And I saw incomprehensible postdoc hires. Lots of things didn't even look like significant accomplishments (or even all that hard, and I'm no rockstar).

I hope the author isn't also similar to those friends of mine in another way: assuming that the marketing job that private industry has done when comparing themselves to academia is true. That things are more rational and productive outside academia. That they won't just enter another game played by chickens with no heads that has slightly different rules. That they won't spend most of their time doing useless bullshit that the system demands they produce even though it does nothing to further the goals they are supposed to be advancing.

I liked the piece, so I do hope the author ends up preferring the different kind of pointless make-work the alternatives provide him.

I feel the same thing in huge companies in which the employees are insulated from market forces. Aren't we supposed to be, like, building stuff that users want? Rather than just trying to get promoted and game the system?

IQ and motivation are independent variables... it's a shame when people with high IQs exert tons of effort in small-minded directions, or just toward fighting each other.

Another possible (unspoken) reason why Garipy is resigning is presumably that he's been a postdoc[0] for the last 3-4 years[1]. Unfortunately, the nature of modern biomedical research is such that you need an army of researchers to do the mechanical grunt work at the lab bench that leads to papers. This has led to a glut of postdocs, who are underpaid, overworked, and have little hope of ever obtaining a permanent academic position[2]. And even at top universities, industry positions are quite difficult to get without putting time and effort that you don't have to spare into extensive networking[3].

During his postdoc, Garipy's had one second-author publication in Nature Neuroscience[4], which probably wasn't enough to get a tenure-track professorship. If, like the first author on that paper, he had gotten an assistant professorship at a prestigious university[5], he probably wouldn't be airing his dirty laundry. Also note that he is a YouTuber[6] and has a book coming out according to his Twitter profile[7], so he's probably trying to leverage any notoriety he gets from ragequitting his postdoc to jumpstart his career in science journalism.

I keep hearing about how miserable things are in academia and have come to perhaps a surprising conclusion: research and education need to be broken up. I know that there's arguments that the two should stay entangled, and that new research feeds quickly into education blah blah blah. But I personally think that the world is far better off with large, dedicated, quasi-commercial R&D labs and institutions like Xerox-Parc, Bell Labs, Howard Hughes Medical, Battelle, Microsoft Research etc. and that those research labs operate off of a "licensed innovation" model.

It feels like these places are struggling (I might be wrong), but I'd argue for a vast expansion of this system, on par with the university system, without being burdened by the needs of either one. Offer competitive industry pay and work on demand for commercial and public interest.

A kind of kernel of this already exists, either big National Labs that try to spin out mature research paths into companies (giving the researchers a shot at making it big as CEO or CTO of these new companies) or dedicated commercial R&D firms that get hired to produce product ideas for commercialization. But I think it should be institutionalized in the same way universities are, rather than running as independently as they do now. And then universities should get out of the research game altogether.

I don't have real concrete ideas on how this should be done, but it would provide a better career track for smart people.

I work as a researcher in a grant funded lab, I can see where this post is coming from. There is a lot of needless politics in academia, and often the right things are not rewarded. However no matter what set of actions you choose to reward, people will find a way to game that system.

The criticism in this post, while at least partly true, is too cynical to make a difference in any way. Sure, there are a lot of poor and useless papers published and terrible labs in the US University research system, but the search for knowledge is moving undeniably forward under this system.

I will give an example of how the system is broken. I am a master's student, and my advisor assigns me full reviews without even asking, in fields I do not understand even a little bit, not even close to my area. I review them for him. Some dude's acceptance at a conference is in my hands. I do not feel in any position to review most of the papers I have been given, and talking to my advisor does not help. So I just accept them all if they do not have major style issues, and that's it.

FWIW, I have a PhD and spent five years as an assistant professor. I then went on to take a different job as a lecturer for a few years before leaving academia completely. The general complaint JF raised rings true. In my field (learning sciences), there was a TON of jank publications. The majority of the published work still is stuff that exists solely to increment the author's publication count. The term "least publishable unit" was used unironically.

The focus is flawed. There is still a lot of good work being done in the system, but that work is really only 10% of the work that is done. I had to choose between artificially inflating my pub count and not making tenure. In the end, I decided to walk away - I personally don't have the kind of perseverance necessary for that.

Of course, I now make significantly more money and am actually appreciated by my colleagues (rather than viewed as competition), and still get to contribute real work in the field in a private sector nonprofit instead. And I get to program too (in LS at least it was all publications; the software you created didn't count for anything).

As someone who completed a PhD and knows many people who went on to academic positions, there tend to be three stages for people who come to accept the status quo.

1. A desire to make big breakthroughs in the field.

2. A frustration with the slow progress being made in the field, the apparent inability of the field to produce big breakthroughs, and the proliferation of papers whose net contribution to knowledge is small or zero.

3. An acceptance that the stagnation in the field is (1) partly an artifact of it being hard to recognize progress when it is happening and (2) a consequence of the fundamental nature of the discipline, e.g. all the simple elegant theories have been explored already.

It's not that there are not problems in academia, but most academics (at least in my field) don't consider these to be a major barrier to progress. I was particularly wary of the claim:

> I will still publish my book, The Revolutionary Phenotype, which contains an important novel theory on the emergence of life.

Surely a novel theory on the emergence of life would be of great interest in the field? At worst I imagine you would have to dress it up in some mathematical model.

This article speaks to me of the most destructive force in our human world: the social collective. It seems that the majority of his ills are sourced directly from the dire circumstances of the mass collective operating on itself in a negative way - that there is something broken in the peer-acceptance process; perhaps it is indeed impossible to advance science without disassociation from the collective reality of all scientists, who - psycho-socially - desire to attain a social goal as an imperative before any kind of natural observation or 'progress' otherwise; i.e. the complaints of the author would be best addressed to nobody in particular; it is the fact of the anonymous-crowd-mass which produces the conditions degrading science today. There are simply too many social machinations in play. The desire for acceptance at a banal level (grant money), the desire for acclaim at a banal level (peer review), the desire to be heard above the din of the masses, at a most banal level (publication requirements) - all of these banal instincts have accrued much cachet in the zeitgeist as reasons for doing things.

tl;dr sometimes you have to shake the sheets if you want to get a good sail on. No great explorer, adventurer, discoverer, scientist, engineer .. ever .. got that way by following the processes of the status quo. A fact that many of us must discover, and learn to stomach: life is not special for the majority of people. That includes scientists. It includes people who think they deserve otherwise. If you want to exceed and excel, and propel the species forward: beware the collective. It will eat you.

This echoes closely (for me) what I experienced as a grad student before ultimately abandoning it.

In 2005, I was pursuing my then-dream of joining the ranks of academia. I was a history & philosophy grad, and my field was 20th-century intellectual and cultural history (so, you know, a whole lot easier than and not at all like the real sciences). I'd grown somewhat tired of the expected vulture-like hovering about what had been the standard historiographical approaches of the past 30 years. Not because I did not find them valuable, insightful, meaningful, or worth continuing. I absolutely did. I continue to find them incredibly insightful. However, it just wasn't quite what I was looking for. I thought I had something better, something nobody was doing at all at that time in the field.

For an entire year, I found myself locked in an endless struggle of presenting my case, arguing my thesis and its philosophical framework and merit, with every member of the department. I couldn't succeed in convincing a single prof to head my committee. Not one. There were long and impassioned debates. They asked a ton of questions, really forced me to dig further into proving the merits and value of the idea, constantly put me on the spot to really flesh out how I was going to support this idea.

At first, I thought I was simply failing to make my case. I could accept that. It drove me to work harder to make my case. I slowly began realizing something else was up when, without fail, every prof hit a point of being no longer interested in hearing my arguments. This was signified--every single time--when they suggested they'd be willing to lead my committee if I would choose a topic that matched their research. They offered to take me on as their RA because I had so thoroughly proven my ability to quickly gain depth and breadth of understanding in a given topic. They even granted me a TA position by the end of the year to sweeten the pot (I'd been attending with no financial assistance at that point, paying the bill myself).

After this happened with the last remaining prof, I finished the semester out, then emailed them all a thanks-but-no-thanks letter. I left the program.

A month later, I received an email from one of the professors. It was a personal heads-up and invitation to attend an upcoming conference the university would be hosting. The keynote speaker just happened to be an expert on a philosopher who featured prominently in inspiring and underpinning my proposed work. The keynote topic was a talk about that philosopher's work, and a musing on how it needed to be included in historiography alongside the analytic categories employed for so long in modern historical scholarship. There was even a light-hearted mention by this prof of how much it sounded exactly like everything I'd been arguing in the department for a year, and how she thought I wouldn't want to miss hearing what an expert had to say on the topic.

That's deeply unfair in the case of a non-native speaker: almost no one reaches foreign language proficiency at the level you're implicitly demanding, which is basically perfection.

I don't know if this author is a native speaker, but his name suggests he may not be. On HN, you should err on the side of charitability about this, especially since grammar peeves, though we all have them, are the quintessence of off-topic.

I've been meaning to set aside more time to better understand ML as it applies to NLP. I've wanted for a while to understand how to, say, perform unsupervised training on my iMessage, Facebook and Slack history (which spans many years now) to tag messages, threads, and people with topics, sentiment, frequency, etc. I want to correlate that data with other time-series metrics that I either already have or can acquire easily enough, such as weather, medications taken, and apps I have open on my desktop (which could signal context), to see if there are any interesting correlations. I have always thought that depression, anxiety, and AD/HD have had their own impacts on my life that are impossible to quantify without this sort of exercise, which could help me understand myself better and help create a toolchain that would let others get the same kind of insight into their own data. I had always planned to make an open source kit of scripts, or maybe Python notebooks, out of the endeavor.

Things that I get stuck on, which then take the wind out of my sails: tokenization (specifically, handling Unicode emoji entities and ascribing meaning to them: do I use them as tags/signals, or replace them with synonyms?), lemmatizing (do I go down the rabbit hole of simplifying all lines of chat to their most basic words?), grouping lines of dialog (if my reply was within ten minutes, consider it part of a "conversation" object), and how best to timestamp things (everything is individually stamped, but for some correlations the time of day is the important bucket; for others, it may be the calendar day/season/busy work day).

It's such a huge domain, I keep spinning my wheels trying to feel like I'm going down a path that will lead to some form of success, no matter how small.
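Of the sticking points listed, the "conversation object" grouping is the most mechanical: a single pass over time-sorted messages, starting a new group whenever the gap exceeds ten minutes. A sketch, assuming messages are simply (timestamp, text) pairs:

```python
# Sketch: group time-sorted messages into "conversation" objects,
# starting a new conversation whenever the gap exceeds ten minutes.

from datetime import datetime, timedelta

def group_conversations(messages, gap=timedelta(minutes=10)):
    """messages: list of (datetime, text) pairs, assumed sorted by time."""
    conversations = []
    for ts, text in messages:
        # Start a new conversation if there is no previous message, or the
        # gap since the last message is too large.
        if not conversations or ts - conversations[-1][-1][0] > gap:
            conversations.append([])
        conversations[-1].append((ts, text))
    return conversations

msgs = [
    (datetime(2015, 9, 1, 9, 0), "hey"),
    (datetime(2015, 9, 1, 9, 4), "lunch?"),
    (datetime(2015, 9, 1, 13, 30), "that was fun"),
]
print(len(group_conversations(msgs)))  # 2 conversations
```

Making the gap a parameter also lets you experiment later with per-contact or time-of-day thresholds instead of a fixed ten minutes.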

Another topic I've tried getting into is using ML to process my Hearthstone logs (live, not historical) to try my own approach on an unreleased project I recently read about that sought to predict opponents' cards. My thought was to create a series of dicts from popular "net decking" sites and compute cosine similarity against the cards an opponent has already played. The other project used game histories to predict the "next card"; I was seeking to predict which archetype my opponent is likely playing, since my own domain knowledge would figure out their likely "signature moves" once I had that clue. I'd maybe expand on it to predict how likely it is the opponent can go lethal on their next turn, given the cards they likely have and the ones on the board. With that topic, I've been trying to figure out state machines and various data structures in Python. Computing similarity I've figured out (using Counters seems to work), but the mechanics of doing so against potentially hundreds of "net decks" is challenging. Is a comprehension the way to go? A matrix function? I have no idea.
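For what it's worth, the Counter-plus-comprehension version of the archetype matching is only a few lines. The deck lists below are abbreviated placeholders, not real net decks:

```python
# Sketch: rank candidate deck archetypes by cosine similarity between the
# cards an opponent has played and known "net deck" card lists.

from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two Counters treated as sparse vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_archetypes(played, net_decks):
    """Score every archetype against the cards seen so far, best match first."""
    seen = Counter(played)
    scores = {name: cosine(seen, Counter(deck)) for name, deck in net_decks.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

net_decks = {
    "face_aggro": ["Leper Gnome", "Wolfrider", "Kill Command"],
    "control": ["Shield Block", "Brawl", "Grommash Hellscream"],
}
print(rank_archetypes(["Leper Gnome", "Wolfrider"], net_decks)[0][0])  # face_aggro
```

A plain dict comprehension like this scales fine to a few hundred decks recomputed every turn; packing the decks into a NumPy matrix only starts to pay off well beyond that.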

I guess I'm making this big dump of my own thoughts to see if anyone has any pointers, guidance, example projects, or general knowledge to share or direct me to that could help me figure this stuff out, because I'm excited to learn and I learn best when I have a pet project to which I can apply my newfound knowledge.

What are the best D3-based visualization/charting libraries when it comes to being highly customizable, styleable, and performant? There are dozens of them, but after some research the most interesting ones seem to be:

* Vega - Talked about in this thread

* nvd3 - Meant to work in similar style as D3, supports extension. Seems popular

* Epoch - Real-time charts are purpose-built to be performant and low-overhead. Limited number of visualization types

* D4 - Extends D3 instead of wrapping it. Separation of data from view

* C3js - Easier API. extendable.

* rickshaw - Mainly for time series data. Supports extensions. Works in similar style as D3.

While this looks stellar, and having a serializable format is cool too, I am personally not a fan of gigantic configuration files like this.

Being declarative is much better than being imperative and configuration files seem like a natural fit for a declarative system, but they lack expressiveness. They are hard to make generic and lend themselves to repetition and fragility (in my experience).

I'd say that composition wins over configuration. If you provide a domain specific language with a set of useful primitives, users can leverage it to describe what they want with more flexibility and freedom.

For concrete examples within js-land, look at gulp.js[1], connect[2]-style middleware, and JSX[3]. All of them describe their structure with code, in a composable, pluggable, reusable fashion.

That being said, with a robust enough representation like Vega's, I bet you could write code that dynamically builds the final JSON structure.
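That last idea, dynamically building the JSON spec out of composable pieces, can be sketched in a few small functions. This is Python rather than JS for brevity, and while the top-level keys loosely mirror Vega's ("data", "marks"), the helper functions themselves are invented, not part of any library:

```python
# Sketch: "composition over configuration" -- small functions that each
# build one piece of a Vega-like spec, composed into the final JSON.

import json

def data(name, values):
    """An inline dataset entry."""
    return {"name": name, "values": values}

def mark(mark_type, data_name, **props):
    """A mark bound to a named dataset, with arbitrary visual properties."""
    return {"type": mark_type, "from": {"data": data_name}, "properties": props}

def spec(width, height, datasets, marks):
    """The top-level chart description."""
    return {"width": width, "height": height, "data": datasets, "marks": marks}

chart = spec(
    400, 200,
    datasets=[data("table", [{"x": 1, "y": 28}, {"x": 2, "y": 55}])],
    marks=[mark("rect", "table", x="x", y="y")],
)
print(json.dumps(chart, indent=2))
```

Because each helper is an ordinary function, repetition collapses into loops and shared defaults, which is exactly the expressiveness a raw configuration file lacks.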

Having worked with many different graph tools and languages (Matlab, Matplotlib, ggplot, gnuplot, Origin, D3, Raphael, Three.js, ...), I strongly believe that declarative languages are the right tool for describing visualizations. This library therefore seems to be a step in the right direction. As some people here already pointed out, pure JSON might not be flexible enough to avoid a lot of repetition for real-world use cases though, but I think it's a good start.

I think what could make this into something really useful would be the addition of special directives. MongoDB is a good example for this, as they have enriched their query language with a special operator syntax (e.g. $in, $all, $or) that allows the user to specify e.g. logical constraints.
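The operator idea is easy to demonstrate with a toy evaluator: a handful of special keys ($in, $or) layered on top of plain key/value matching. This is a minimal illustration of the concept, not MongoDB's actual matching semantics:

```python
# Toy evaluator for MongoDB-style operator constraints, showing how a few
# special directives add expressiveness to an otherwise plain JSON query.

def matches(doc, query):
    for key, cond in query.items():
        if key == "$or":
            # $or: at least one sub-query must match.
            if not any(matches(doc, sub) for sub in cond):
                return False
        elif isinstance(cond, dict) and "$in" in cond:
            # $in: the field's value must be one of the listed values.
            if doc.get(key) not in cond["$in"]:
                return False
        elif doc.get(key) != cond:
            # Plain key/value equality.
            return False
    return True

doc = {"type": "line", "axis": "x"}
print(matches(doc, {"type": {"$in": ["line", "area"]}}))          # True
print(matches(doc, {"$or": [{"axis": "y"}, {"type": "line"}]}))   # True
print(matches(doc, {"axis": "y"}))                                # False
```

The same pattern would fit a visualization grammar: the spec stays declarative JSON, but a small, well-defined operator vocabulary handles the logical constraints that pure key/value configuration can't express.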

Recently I developed a similar descriptive language for describing patterns in source code ASTs, which uses YAML as a default output format and features some regular-expression like operators that make matching of complex patterns containing e.g. repetitions, references and loops possible (for some examples, see http://docs.quantifiedcode.com/patterns/language/index.html).

Personally, I have always preferred YAML over JSON as a serialization language, since it is much more concise, easier to write (after some getting used to) and comes with handy features like anchors/references, which make e.g. self-referencing documents or variable definition much easier.
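For anyone unfamiliar with them, anchors (&) and aliases (*) let you define a value once and reuse it elsewhere in the document; the merge key (<<) is a YAML 1.1 convention that many, but not all, parsers support. A small illustrative fragment:

```yaml
defaults: &defaults        # define an anchor
  color: steelblue
  opacity: 0.8

marks:
  - <<: *defaults          # merge the anchored mapping in
    type: rect
  - <<: *defaults
    type: line
    opacity: 0.5           # local keys override merged ones
```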

This is a very cool project...I haven't yet had a chance to use it in production, but the fact that Winston Chang and Hadley Wickham are using it to render interactive graphics via R...i.e. the ggvis [1] library, i.e. the interactive successor to ggplot2...makes me think that it must be a pretty solid library.

Having worked with dc.js and D3 for interactive visualizations, I'm eager to try something like this that could simplify charting. When re-charting from an API, a lot of glue code needs to be written. With Vega though, at least from first impressions, it seems all you need to do is pass a payload and you can immediately re-render; there's no need to run it through reducers or unpack and massage the data to make it fit. I'm looking forward to trying it.

Also worth noting is Vincent (https://vincent.readthedocs.org/en/latest/), a Python API for Vega, which has been my preferred method of Python data visualization for the last while -- it's been de facto deprecated since the author is not planning to rewrite it for Vega 2, but it works great.

Hello. I'm the author of Black Screen, and I'm upset this post has appeared on Hacker News. The terminal is at a very early stage; I don't even use it myself. Still, it's nice to see that people are showing some interest.

One of these seems to pop up every few months, which is great, since terminals are old, crusty, awful things. But it seems like they always die out in development before they can run existing programs like Vim, which makes them fun POCs and not much else :/

When I saw this post I thought it was going to be an xterm with an oldschool CRT look. I got someone to build me one of those for playing roguelikes a few years ago - see here if anyone is interested: http://ubuntuforums.org/showthread.php?t=1884955

I was running IPFS as "my own pastebin for files" for a while (it's great!), but was wondering what they could do to improve adoption/popularity. This move is amazing! Useful, interesting for people who care, and visible to others.

Reading this felt kind of like the first time I read a writeup on Bitcoin. There's the same sense of throwing out some old, formerly immutable rules here, the excitement of something that's going to test some boundaries and inevitably clash with some authority (how can you, for instance, comply with the EU's "right to be forgotten" when that information is scattered into a redundant mesh?). Interesting times ahead for IPFS.

I'm very excited to see what happens with IPFS. The article talks about replacing HTTP, however, and that is definitely a tricky task.

Someone in this thread already asked one of my questions (So is this primarily for static websites?) but my second question is: So is this primarily for personal websites?

I'm having a hard time finding a good way for Facebook, for example, to monetize their website. Targeted ads go out the window with mostly static content. Even more so though, what about Netflix? How is DRM done? How do you make sure only the correct users can access your objects?

edit: Also, doesn't a "permanent web" have an inevitably fatal flaw that you can't free space?

This is great news. Dapps [1] are coming, and DHT [2] / storage technologies like IPFS (although IPFS is more than just a DHT) are the other much-needed side of the coin - no pun intended - to make that reality happen. Exciting times ahead.

[1] decentralized app
[2] distributed hash table

Well, I'm not 100% convinced this is going to take off, but I'm intrigued. I'm going to make an effort to get Fogbeam Labs' website up and running on IPFS shortly as well. I'm curious to see where this goes.

So... can one load a game on the PS4 at home using that, or is it just intended to emulate the PlayStation on a PC? I don't understand the intention behind making this open source, given all the piracy-laden history of the PlayStation consoles.

Kinda feels like it's a little short for an SDK too, but I'm no expert.

Aaand... Looking at their landing page[1] I can see they've chosen to go with a horizontal page layout. Because they use a non-standard page layout, they have a black stripe bumping in and out of the viewport instructing the user that they can/should scroll. This is quite a common layout among artsy portfolio themes[2].

Has anyone ever tried to sidescroll a web page lately? On my Yosemite MBP with the latest Firefox/Chromium I just get a jitter and no movement. Sometimes the scrollbar moves in the wrong direction and then dies.

> At times I think back to when websites were produced in Flash. For all its downfalls (and there were a lot) one thing was always true. Flash sites rarely looked the same.

The author clearly doesn't remember what it was actually like to arrive on a Flash site, and play the fun games of 'where has the designer hidden the navigation today?' and 'oh crap, how do I turn the sound off?'.

Design patterns are there to ensure that functionality works in a roughly consistent manner across different sites, so instead of having to spend ages figuring out an inscrutable interface, the user can easily buy a product or find the information they want and get on with their day.

This is not a trend. This is lots of people slowly figuring out how content should be structured for maximum usability in a web context. Layout conventions will develop over time, as new ideas are incorporated and technology changes, but that's a good thing.

As has been pointed out, books have looked roughly the same for the last few hundred years, but design innovation has only increased as technology and our understanding of the conventions involved have improved.

The visual design area is more susceptible to trends - a few years ago everything was glossy, then with 'Flat UI' everything became dark blue and a sickly shade of green. But that's ok too. Except for the green, that was horrible.

The danger is with 'cargo cult' design. That's where the complaint against generic themes is valid: a style is used because it's popular, without thinking about whether it's actually the best fit for the content and what's to be achieved.

I could not disagree more. Don't fix what isn't broken (anymore). I believe that after years of designing websites, we found something that works, and works well. Consumers land on sites and see something familiar. It makes for a comfortable and easier web. I'm all for this "standard" in web design.

It's just one example of fashion in tech. Around ten years ago there was another fashion for web sites - all the panels had rounded corners (and it wasn't supported by CSS, so people created the rounded corners from pieces of images - very unproductive waste of time).

Non-tech people, when ordering a web site, often just don't accept things which look different from other web sites they have seen. At that time it was difficult to convince people that rounded panels with borders weren't necessary. People are often unable to judge for themselves, so they rely on what others do.

There are many other examples of such unhealthy fashion: the Spring framework in Java, XML, SOAP, gray text on web pages (even though it violates W3C accessibility recommendations), not using tables in markup (even when I want a tabular layout), etc, etc.

On the other hand I agree that uniformity can help people to consume information, and also inventing unique design is often a waste - the content is the most important part. Still, there are many cases of harmful fashion.

Honestly, I'm fine with most websites sticking to a similar layout, as it helps me navigate them faster. Plus, it's just trendy right now, so it'll pass like all the web design trends before it.

Having said that, this specific layout is garbage in my opinion - not because of its design, but because of the way it's used. It's incredibly rare to see a company use this type of layout without filling every single space with utter bullshit about generic buzzwords, and it just takes up so much space. I can't count how many start-up websites I visit where I have to scroll down for pages just to figure out what they even sell, because everything up front is large, generic images that don't mean anything, followed by lots of very general phrases and buzzwords.

For that matter, all books look the same too. And yet everyone knows how to use books. You hand someone a book and they never look at you funny asking how to get to Chapter 1.

His article is negative, but I for one have been able to traverse websites more quickly and easily because they adhere to some now-common conventions. Of course websites need to be original, but not SO original that they require the user to adjust their assumptions about what to expect from a site while it's loading.

I think it's safe to say, at least from my view, that Bootstrap is the reason for it. Bootstrap made this format easy and clean, and it works well with mobile. Websites will look like this until someone comes out with the next thing that's easier and/or cleaner and/or works better in mobile and then a couple years later THAT will be the format you're seeing everywhere. I don't think this is a bad thing. At least it's clean and works well on mobile...

I take issue with the current 'standard' design, but only indirectly. I feel like giant home screens give companies the freedom to create a great looking webpage without any actual content - like a giant landing page. Since they all look the same, it's easy to compare and contrast.

I can recall a number of times scrolling through the entire home page for a company, only to still be confused about what the product actually does. I see a huge banner image, coordinating colors, tons of whitespace, very high-level text content...but little that says, "Our product will specifically do this, that, and the next for you!". I have to click around to find that out. By that time, I'm quite annoyed, and I'm not sure if your product is worth my effort.

If the mentioned style is simple and represents/introduces the product/service well, then why not? What annoys me is when designers/developers overdo it, e.g. scroll hijacking, or lots of heavy JS which introduces horrible lag and unnecessary pop which ruins the user experience.

Sorry, but `novolume.co.uk` is stepping into the category of overdoing it.

Huge, super-low-contrast arrow buttons to switch articles. Why?

A slim, narrow italic serif font that makes my head hurt and my eyes twitch, and isn't readable (some characters are unrecognisable, e.g. '&'). AFAIK, serif fonts are supposed to be more readable than sans-serif, but that is not the case here.

A custom scroll bar - why the hell do you need to replicate a perfectly good native widget my browser already has? (This seems to be a new trend, probably replacing scroll hijacking.)

Crazily tilted social buttons that change shape and colour on hover (and are low-res) - why make it so complicated?

At least `novolume.co.uk` loads and renders fast, is responsive and does not have lagging UI.

He's right that most bootstrapped startup websites look the same, because they don't have designers on their team, their founders aren't trained in design, and they don't have the money or time to really flesh out the design. They just follow easy examples that are passable or in vogue. Or worse, maybe they just buy a template.

But OP is wrong once you talk about startups that get money. I mean for some well known ones, just look at Stripe, Mattermark, Branient, Mixpanel, Filepicker, Buildzoom... these sites aren't the same at all. If you spend time studying the design of hundreds of YC startups you'll see what I mean... almost to the point where I wonder if YC specifically instructs their startups not to copy other YC startups.

We've arrived at this design from years of design evolution, and no one person is responsible. All products seem to ultimately converge on some optimal universal archetype. Websites, books and radio towers are no different in this sense. The same will be true of mobile apps someday, but I don't think it's the case at the moment.

- Benefits propositions right below those, exactly where you'd expect them to be if you're familiar with a Web browser

Sounds like a damn good approach to me. I mean, I'd be happy to see an even more efficient design that measurably increased conversion rates for most products, but if there's nothing currently out there, I'm OK with the state of the art :)

All startup websites look the same. They all use this template because it perfectly addresses their goals (grab your attention, explain a new kind of product, convince you to sign up) while also being familiar and repeatable. That doesn't concern or surprise me.

It so happened that I was given 5 different pens over the course of a week. Each pen had a different interface: twist, pull, slide, press - not one was of the traditional variety of clicking the top. The following week I had to go to the bank to sign something. The banker handed me a pen. I pulled, twisted, pushed, and could not figure it out; then I realized it was of the traditional variety, and sheepishly clicked the top.

All websites should look the same, but they don't. Sadly. It's all just information in some kind of format. A video as an mp4, or some text as... well, text in whatever way your machine stores it, but mixed with a bunch of irrelevant other text.

But then there is "design", and you get stuff like inconsistent search and inconsistent site layout; you never know where to look or what to look for, and you miss things because they are placed somewhere you are not used to looking. It's a mess.

Websites fall into functional categories. Sites within a given category look the same. Sites about a product need a powerful visual "grabber" element that communicates key brand points along with their name; then they need to provide key informational points in an easy-to-digest manner, often segregating the audience by interest. The big banner, three subtopics layout is a popular way to achieve this.

But not all websites are about introducing a product. Some are about getting immediate social interaction. Some are about exposing deep information in a set of categories. Some are about hierarchical display of the newest possible information. They don't tend to use the banner/three subheadings layout.

I think it's important not to mess with the customer's mental model. In particular, the shopping cart pattern was refined over a period of time to make the eCommerce user experience as frictionless as possible, so when someone wants to get through the cart, you want as few surprises for the customer as possible. The hamburger icon is a great example of how long a UX pattern can take to filter down into the collective mindset of users... "oh, this is the menu". I still get clients asking what the funny stack of lines is, and I end up adding "Menu" right next to the icon.

So my question is this: is it safe to try new things, or is it better to stick with existing patterns we know work? Or is there a blend of both? Is it better to let larger operations (Facebook, Google, Apple, etc.) forge the way with mass assimilation of UX patterns? I do think my first instinct is right (first sentence), but I would love to hear the experiences of other UX people about integrating new and fresh UX patterns.

I believe it's a good thing. When websites follow some kind of blueprints it makes it a lot easier for the user since they can recognize it. The negative side is of course that it might slow innovation, although slow isn't always bad.

All "normal" stairs, windows, roofs, tables, chairs look the same. There are good reasons for stable architectural patterns. There are good reasons for web design patterns too. Go back to the 90's and early 00's. There were so many different styles. A few won and became the ancestors of today's styles. Many more lost and got extinct. Sure, there are other styles that nobody thought about that are better than what we have today. They'll get created from time to time, copied and refined.

Who cares about the looks, really? The looks are there just to present the content, in a way that fits all the devices that are used to view it.

Sure, you can do some artsy-fantsy-pantsy stuff now and then, but then again, it's about the content too; in that case the whole artsy website is the content.

And this guy has the most generic-looking blog structure himself, so why is he pinning the blame on somebody else? I'm so glad I don't have to go through those "designed just because design" Flash websites anymore and try to figure out a different structure for each page.

I've often thought that porting a game engine to the browser with a suitably robust object library and hyper-intuitive developer interface would be just what is needed to jump-start a move away from rectangles and columns. And doubly so now that VR is about to hit its stride. I want what "The Lawnmower Man" promised me, dang it!

The designer seems to be harking back to a time when pretty much the only browser was one on a laptop and you could reliably assume a 764-pixel width.

These days, you have no idea what is browsing your website and more than likely it is somebody on a phone. So priorities change.

Content makers want their message in front of as many people as possible. To achieve this, you make it work on a small screen. This brings good design constraints and stops design for the sake of design.

Let's make websites like architects make buildings. It's more important to be original than beautiful or functional. Let's put the navigation menus at the bottom of the page and the disclaimer at the top. Let's randomize the order of the links and elements so that every user has a unique and original experience on every visit. In fact, why use English? That's so boring. Let's use hieroglyphs!

Most websites are awful. My monitor is almost 2000 pixels wide, and they insist on packing their whole text into a little 10cm strip of it. I see so many websites that want me to read about 5 words per row when there would be space for about 30 if they weren't too lazy to use the whole screen.

A website needs to convey an idea in the most convenient form possible. Due to the rise of smartphones, most websites have trended towards a mobile-first layout - scroll to view the entire content, without any redirects between multiple pages. I would be interested to see a professional designer's perspective on this.

> RAM is not stressed: of the 1015MiB of RAM, 182MiB are free, and only 6MiB of swap is used. We typically don't worry about RAM on Linux systems until the swap space used exceeds twice the physical RAM

That's a bad metric. The question is rarely "does this machine use too much swap". The problem comes when performance is degraded because pages that were evicted out to disk are now needed again, and those processes wait for I/O. swapin and swapout are the relevant figures.

edit: (If you're only using 6M of swap, it doesn't really matter, of course. But if a system uses any substantial amount of swap, you want to check rate, not quantity.)
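For reference, the cumulative swap-in/swap-out counters live in /proc/vmstat on Linux as pswpin/pswpout (pages swapped since boot); tools like vmstat derive their si/so columns from deltas of these. A small parser sketch (the file path and counter names are real; the helper itself is just illustrative):

```javascript
// Parse the pswpin/pswpout counters out of /proc/vmstat text.
// Sample the deltas over time to get swap-in/swap-out *rates*.
function swapCounters(vmstatText) {
  const counters = {};
  for (const line of vmstatText.trim().split("\n")) {
    const [key, value] = line.split(" ");
    if (key === "pswpin" || key === "pswpout") {
      counters[key] = Number(value);
    }
  }
  return counters;
}

// On a Linux box you would feed it the real file:
//   const fs = require("fs");
//   swapCounters(fs.readFileSync("/proc/vmstat", "utf8"));
console.log(swapCounters("pswpin 10\npswpout 20\nnr_free_pages 5"));
// { pswpin: 10, pswpout: 20 }
```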

Just a heads up, @BrianCunnie, you are dead. I'm not sure why. I don't see anything in the comment history to justify shadowbanning, maybe someone with more insight into how and why people are banned can elaborate. Sorry this is top-level, you can't directly reply to dead comments.

What exactly is the purpose of logging all unhandled breakpoint instructions encountered in user-mode code to the syslog, and then continuing...? On Windows, a breakpoint instruction encountered outside of a debugger causes the standard error dialog to appear, and if you choose not to attach a debugger, the process terminates. IMHO that's more reasonable behaviour, as breakpoint instructions shouldn't appear in code that's not being debugged (either by a debugger or itself, as is sometimes the case like the author does here.)

Because unprivileged user-space code is able to force the kernel to waste a lot of time printing messages, slowing the whole system down.

This reminds me of something I hit 2 years ago, where several Android mobile devices were slow to call signal handlers because the kernel was booted with user_debug=31 (on production devices), which printks stack traces before calling signal handlers. The kernel command line argument was hardcoded on some Code Aurora branches for Qualcomm boards, IIRC.

> Think about the common debugging scenario where the user sets a conditional breakpoint: "break if x > y". Testing that condition is going to take 15ms each time.

Wouldn't the implementation of a software breakpoint look like:

if x > y: bkpt

so it'd only be slow in the case where it actually needed to break? I guess I can imagine you could implement it the other way (always break, check the condition after breaking, resume if the condition isn't met), but it's not obvious to me why you'd do it that way.
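A toy cost model makes the point concrete. The 15ms trap cost is the figure quoted above; everything else here is illustrative:

```javascript
// Two strategies for a conditional breakpoint ("break if x > y"):
//  A) debugger-side: trap on EVERY hit, evaluate the condition in the
//     debugger, resume if false -- each hit pays the full trap cost;
//  B) inline: patch the condition check into the target, so the trap
//     only fires on hits where the condition already holds.
const TRAP_COST_MS = 15; // round-trip cost quoted in the article

const costDebuggerSide = (hits) => hits * TRAP_COST_MS;
const costInline = (hits, matchingHits) => matchingHits * TRAP_COST_MS;

// 10,000 hits, condition true only once:
console.log(costDebuggerSide(10000)); // 150000 ms
console.log(costInline(10000, 1));    // 15 ms
```

The gap is exactly why patching the check inline (strategy B) seems like the obvious choice, space constraints aside.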

Just guessing, but if you're patching the binary to insert the breakpoint, you might not have enough space? But then you need space for the "bkpt" instruction too...

The author almost immediately writes off Common Lisp for lack of "frictionless access to a rich ecosystem of code written in the same language as your software", then recommends Clojure. Most of the ecosystem you have access to in Clojure is, in fact, not built in Clojure, and most Clojure libraries were/are wrappers around Java or Javascript.

Footnote 1 makes no sense, suggesting that the only way to get access to "a bunch of other useful code" is to embed it into a C program and that understanding your dependencies is somehow easier in Javascript.

In reality, Quicklisp[1] offers effortless access to over 1200 libraries and programs. It's nowhere near the 200 thousand packages in npm, but the overall quality is good and these libraries cover a surprisingly large number of things.

The post tries to present Clojure as "Lisp, but with access to open source". Clojure is a fine programming language, but it's also completely different from Common Lisp. CL has high-quality optimizing compilers performing extensive type checking, a powerful object system, easy access to C libraries, etc. If you need Java interop, you can just use ABCL[2].

Here are some heretical thoughts. The language is irrelevant. The text editor is irrelevant. The OS is irrelevant. The size of your monitor is irrelevant. All your productivity hacks are irrelevant. The only relevant thing is your ability to formulate and solve problems.

You might say the language can help with both the formulation and the solution but I'd say that just comes down to what language you're most comfortable with and how good of a problem solver you are. So you can use lisp and I'll use ruby and at the end it'll all be a wash because the fundamental bottleneck will always be the speed at which you can formulate and solve problems and how quickly you can respond to market dynamics. All other choices are accidents of history.

If you're asking yourself what kinds of problems is a Lisp good for you should probably watch this: https://www.youtube.com/watch?v=8X69_42Mj-g (A computational chemist built himself his own Common Lisp implementation because no other language available was powerful enough for his needs...)

It basically boils down to: if you're solving a truly new and interesting problem for which the current libraries and ecosystems don't matter that much anyhow (because what you do is too bleeding-edge / ahead of everyone else, so you'll write your own better stuff anyway), then you might want to choose a Lisp.

And if your problem is so bleeding-edge and exotic that you also need to build your own programming language for it, then you might just as well build yourself a Lisp (like the guy in the video did) and add the features you need to it, because this would be easier than any other approach and allow you to spend more time on your problem and not on the language...

What's with the whole "modern" meme when people talk about Clojure? SBCL only forked off from CMU CL in 1999. Clozure CL is still well supported and actively developed. It's not like CL was written in the 1960s and has never changed since. The final standard was published in 1994; the alpha and beta of the JDK weren't released until a year later. Given that Clojure leverages so much of the JVM, what exactly makes it the modern, de facto standard for new applications? Don't we already have an ANSI standard for such a Lisp?

Is it modern because it can invent without the restriction of a specification? That's a good thing, and plenty of other Lisps are doing that... but what makes it "modern", and why is it the standard? Is its relative immaturity a feature when you're taking on the world in a startup?

Either way it's not much of a secret anymore but I still don't see many startups advertising that they use Lisp -- even Clojure is rather rare. I hope more people give Lisp a try and kick butt.

The author seems to not know that there is open source in Common Lisp as well as most every other language. Open Source per se is orthogonal to whether Lisp or Language X is a great language to have in one's toolkit.

Clojure mindshare is quite low compared to Scala as far as languages built on top of the JVM go. Personally I think Clojure and ClojureScript are pretty cool, but not so cool as to tempt me away from Python, JavaScript and CL.

For a startup, don't even worry about what language your competitors, if any, are using. Simply produce the best product you can, get it to market, and break even before you even begin to think about these other things. HINT: in this age there isn't one area or a few areas with a bunch of competitors, but millions of unique, cool niche ideas and products. Build a better system for pushing out new app and product ideas consistently on the side. It isn't like the case of competing web stores at all. Also, today there is a large market for buying startups that produce some webapp product that grows enough legs to more than break even.

You don't predicate your business on a secret but on producing a good product people want at a price point they are happy to pay. Secrets are hard to keep. Particularly when the author just told us about his secret weapon and advises as many as possible to use the same "secret".

Back in the "old days" we would joke that if we wanted to derail our competition then we should simply give them our source code. Their best geeks would be bemused and befuddled for months. Today we are drowning in reams of "free code" that may or may not be suitable. Then we have a side helping of hundreds of web APIs to pick from for various subparts of a product. So much time is spent understanding and adapting open source and choosing among and massaging our data to use so many external APIs that it is a wonder anything gets done. :P

Aside, the size of the NPM database ... is that really indicative of the amount of JS code out there? For example, how many trivial modules like "isarray" are there, whose sole active code is this single line?

return Object.prototype.toString.call(arr) == '[object Array]';

Is npm really the largest codebase, larger than PyPI, Maven and RubyGems?

On a more subjective level, does npm solve more nontrivial problems than any other programming language repository?

This seems like a little bit (probably a lot) of hyperbole around clojure.

While I love clojure, selling it as a "secret sauce" you just sprinkle on your startup and it succeeds is definitely not the message I think the community should be trying to send.

Lisps have benefits, they're pretty well documented (just the fact that lisp is multi-paradigm in and of itself is a large benefit), but they're not infallible. If you give macros to the wrong programmer, you will slow your entire team down.

Looks like a subtle ad for Clojure. Which I don't mind, because I am slowly falling in love with Clojure. I am not sure if it's a "secret" or whether we should "write all the things in Clojure", but I'd love to see more companies use it, including startups. I wouldn't mind working for one. Combine the JVM's power with Om/Reagent and I think it's a solid stack. Although I am not an advanced Clojure programmer, I wonder how it's doing in production?

I've never quite bought into the lisp idea that maximum "power" or "expressiveness" is always good. I could see it being an advantage for library authors who could present better, safer interfaces to the library's functionality. But if you have a startup, how much time do you have to build libraries and refine their interfaces?

I doubt that these qualities are so good that they would be a material advantage for application developers. And certainly not material enough that it would affect the outcome of a startup (unless it makes the engineers more motivated).

That being said, I hope clojure succeeds, and it's on my list to learn sometime (though not in the top 3).

But we need types people! Properly static ones that can be used to encode your intentions and handle inconsistency before even running the program. And yes I know that Clojure has an optional type system. But optional means that you cannot rely on having types in the libraries that you use. So you can't fit your stuff together with their stuff and have reasonable expectations that it will work once the user does that one thing you didn't think would be a most reasonable thing to do (of course it is, what were you thinking! And tell me again why you didn't just let the computer do that thinking for you?).

I've been programming Clojure on the side for about a year now. It all started when I did the exercises in the Clojure for the Brave and True book, and simultaneously picked up Emacs and Cider (a nice IDE indeed, and keyboard-based).

Clj is just code like anything else. Anything you can do in Clojure you can do in Java. But with that language, I find myself creating things which I could not in other languages. Solutions come easier. Programs don't become long, but they get a lot done, and are performant.

I dig it, and really recommend it to anyone who is bored with their day job.

This was a great read. I've always heard about Clojure and now I have a better idea as to what it is. I'm still going to stick with what I've heard all along: it doesn't matter what language you write in, it matters how you write it.

In the expressjs example, I wonder how practical it is to do all that interop instead of just using plain JS? I mean, it's awesome that ClojureScript can leverage all those JS libs, but is it worth writing lots of interop just to use ClojureScript? Of course, as ClojureScript libraries grow this will not be an issue, but it is nowhere close to JS for web dev.

For me, a more practical approach for server side would be to use Clojure instead of Clojurescript, the problem is Clojure does not have the rich ecosystem of web libraries as nodejs does.

I tried Clojure, but immediately got the impression that the run-time type system is ill-designed. I cannot recall the details, but I remember odd things like different run-time types being used for empty lists than for non-empty lists. A simple program doing a "switch" on some run-time types turned out to be really cluttered. But of course, there's always the possibility that I was doing something wrong :)

I can't believe an article about Lisp left out QuickLisp. It's the most popular Common Lisp package manager and has over 1200 open source libraries in it, and most of them are written in Common Lisp, or are bindings to native C/C++ libraries. Using libraries written in the same language you're using has a lot of advantages. Using Java libraries from a Lisp seems clunky.

Also, judging solely by the number of available packages is not a good measurement. Browse through NPM, and it's pretty obvious there are a LOT of duplicates, and that people have published libraries for the silliest little things. Search for "PNG", for example, and in the first page of results there are two different png diff libraries, a few libraries for determining if a buffer contains a PNG byte-stream, multiple stream and un-stream libraries, etc. That's all done with a single library in CL.

And finally, I don't think Clojure is that great. It's better than Java, and it's what I'd use if I had a lot of legacy Java code to work with, but it wouldn't be my first choice for a new project written in Lisp. I would hazard a guess that it's popular because it's less clunky than Java, without the over the top type system of Scala.