It's nice to hear about a situation closer to most startup outcomes. When Matthew notes that someone sent him the acquisition terms and the founders made out well while line level / low level managers basically got nice severances, it's something everyone here should keep in mind. It's great if founders make out well as long as everyone else does too; it's when those outcomes diverge that it rankles.

If a potential employer tries to put nonsense like a 2-year noncompete -- during which, of course, they don't pay you -- plus ownership of side projects into your contract, it's a sign you're dealing with assholes. If you can, you should just walk away. Many of us live in CA, so some of this can't happen, but a startup here tried something similar: their 16-page (!!!) employment contract specified that if I used my personal media device or laptop for any business purpose, they had the unlimited right to inspect/search them on demand. By my reading, even something as tenuously related as two-factor auth codes for work Gmail being sent to my personal cell, or taking a business call on my cell, would have counted. The founder tried to blame it on boilerplate their lawyers inserted; he couldn't even take responsibility for what was, after all, the contract he was asking me to sign. I walked away. Read those contracts.

"There's irony in any tech company confronting the government on privacy matters, considering how much heat many take for mining their own customer information and using it for advertising and other profitable purposes."

See, I don't find this very ironic. In fact, my only real issue with data mining and analysis by these sorts of companies is the way governments can demand this info without my approval.

If Microsoft or Google or Apple or Amazon offer me a service and state that "hey, we'll provide this service for no cash outlay but data you submit to our servers will be analyzed to tailor search results, advertising, and other behavior to your usage" I can opt into that knowing that I'm trading targeted ads for free email or hosting or whatever. If I don't think that's a good deal, I don't use the service. If I think "OK, ads are a fair price for this stuff" then again, I'm cool with that.

But just because I agree to let Google read my location to send me traffic warnings before heading out to work doesn't mean I want the FBI to grab that data without my knowledge so they can determine if I might be a troublemaker. Just because I agree to let Amazon use my Amazon searches to suggest other products I might want doesn't mean I want the DEA demanding that info to decide if the gardening gear I purchased was for tomatoes or growing cannabis.

I'm perfectly aware that you pay for the things you get, whether it's directly with cash or indirectly from advertisers who pay for access to your eyeballs. Those are things I can consent to or decline. But when people with guns and the ability to throw me in jail can demand access to that info without my knowledge, I'm no longer agreeing to the same thing.

It's like signing a contract where someone else has the ability to change the fine print after I've signed it.

Microsoft has shown that they are quite willing to access individuals' private data if they have a financial stake in it [0]. Yes, they eventually backtracked under public pressure (after trying very hard to justify how it's totally okay because they were going to pay a lawyer to rubber-stamp things in the future), but it's rather hard to listen to their general counsel talking about how they value privacy on principle given their history. It's quite obvious they only care about privacy insofar as it affects their bottom line.

The article also conflates (intentionally?) this issue with the mass-surveillance issue, bringing Snowden into it and insinuating that this ruling would have an effect on that, which is just silly [1].

The whole "Company F" section is interesting (I hadn't heard before that Microsoft is challenging the claim that it was willingly providing user data to the NSA), but it's a bit hard to square with the leaked documents, which list Microsoft as the first participating partner in the PRISM program [2].

I am not a lawyer, but it seems to me that an American court has the power to demand that an American citizen produce an item or information under his control, even if it happens to be in another country (e.g., a man getting divorced can't drive his car and all his gold and jewelry into Canada to shield them from his ex-wife). I imagine that most other countries would behave similarly: being within their borders and subject to their jurisdiction, they can compel someone to do something.

If that's indeed the case, then it seems that an American corporation -- a legal person with a presence in the United States -- may be compelled by a court to produce items or data it controls outside of our borders.

The thing we need to do is to limit the power of the subpoena generally.

"50 countries including Australia and the US may be signing away rights to ensure sensitive customer data remains in its country of origin ... the draft document reveals that the United States and the European Union are pushing to prevent signatory countries from preventing the transfer of data across nation borders."

It's good to know there are people like Brad Smith standing up to government demands for full access to people's data. It brings up an interesting privacy contradiction. While storing data locally seems best for privacy, if it's on a networked computer, there are still ways for people to get it, and unless you have really good lawyers, nobody is going to challenge governments across the world if they want to access it. By moving data to the cloud, we are creating incentives for companies like Microsoft to fight against government intrusion.

This is sad. For all the misguided hate against the US, there is a lot of very justified hate that comes from these sorts of attitudes from its government and enforcement agencies. They should have more respect for the laws of other countries, especially somewhere like Ireland, which could, not unreasonably, be called a crime-free paradise compared to the US.

It's terrifying when law enforcement doesn't understand the difference between right and wrong...

It's important to remember that this is the same company that snooped through the emails and files of one of their users while looking for evidence of piracy. They came clean about their snooping moments before court documents were publicly released that detailed what they did.

Actually, it decompresses to a 5.8MB PNG. However, many graphics programs may choose to use three bytes per pixel when rendering the image and because it has incredibly large dimensions, this representation would take up 141GB of RAM.
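One cheap mitigation follows from the file layout: a PNG declares its dimensions in the IHDR chunk at a fixed offset, so a service can reject absurd sizes before decoding a single pixel. A minimal stdlib sketch (the signature and offsets come from the PNG specification; the pixel cap is an arbitrary assumption you'd tune for your service):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data):
    """Read width/height from a PNG's IHDR chunk without decoding pixels.

    Layout: 8-byte signature, then the IHDR chunk
    (4-byte length, b'IHDR', 4-byte big-endian width, 4-byte height, ...).
    """
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG")
    if data[12:16] != b"IHDR":
        raise ValueError("malformed PNG: IHDR must be the first chunk")
    return struct.unpack(">II", data[16:24])

MAX_PIXELS = 50_000_000  # assumed cap; ~150MB at 3 bytes/pixel

def looks_like_a_bomb(data):
    width, height = png_dimensions(data)
    return width * height > MAX_PIXELS
```

This doesn't replace ratio checks on the compressed stream itself, but it stops the "141GB of RAM" case without touching the decompressor.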

If you follow the "related reading" link on the bottom of TFA, you come to a page by Glenn Randers-Pehrson discussing how libpng deals with decompression bombs. On the bottom of that page you find the following curious note; anyone know what to make of it?

"""[Note for any DHS people who have stumbled upon this site, be aware that this is a cybersecurity issue, not a physical security issue. Feel free to contact me at <glennrp at users.sourceforge.net> to discuss it.]"""

PNGs also have optional compressed text metadata chunks, and it's possible to sneak a decompression bomb into one of those as well. You can get about a factor of 1000 in the compression -- 1MB of 'a' winds up being about 1040 bytes. You can have multiple iTXt chunks, and it appears that the chunk size is only limited to 2^31-1 bytes.
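The quoted ratio is easy to check with the stdlib's zlib (a rough sketch; exact byte counts vary with zlib version and compression level):

```python
import zlib

payload = b"a" * 1_000_000          # 1 MB of 'a'
compressed = zlib.compress(payload, 9)

print(len(compressed))                  # on the order of 1 KB
print(len(payload) / len(compressed))   # compression factor roughly 1000x
```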

Having dealt with and printed a lot of very large images, e.g., 60k x 60k pixels, I have been on the lookout for image processing software that never decompresses the entire image into RAM, but instead works on blocks, scan lines, or blocks of scan lines, staying in constant memory and streaming to and from disk. For example, the ImageMagick fork GraphicsMagick does a much better job of this than ImageMagick. What other software is out there that can handle these kinds of images?

I used to work on a scanning SMTP/HTTP proxy and even back then it wasn't unknown for people to send crafted decompression bombs to attempt to crash the services. We handled it by estimating the total uncompressed size upfront (including sub archives) and throwing out anything with a suspiciously large compression ratio.

I imagine that .pdf files are another avenue for mischief. They contain lots of chunks which may be compressed in varying ways.

That's cool. Presumably the same "attack" could be applied to any file format that uses DEFLATE.

From a legal standpoint, I'd be wary about following through with the author's suggestion of "Upload as your profile picture to some online service, try to crash their image processing scripts" without permission. Sounds like a good way of getting into trouble.

Everyone's focusing on this being a PNG problem but actually if my server unzips a 420 byte file into a 5M file of any kind, I'd say that's the first red flag. Assuming some sort of streaming decompression, you could write an output filter that shuts off the decompressor when it's seen a factor of X bytes. A reasonable factor would be 10 - which in this case would have halted bzip decompression at 4kB.

This would probably be a trivial patch to bzip2. But I like the idea in general of passing a "max input/output ratio" to any process or function that might yield far more output than input.
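A minimal sketch of that kind of output filter using Python's zlib (the same idea applies to bzip2 or any other streaming decompressor; the 10x factor is the assumption from the comment above):

```python
import zlib

def safe_decompress(data, max_ratio=10):
    """Decompress zlib data, refusing to emit more than max_ratio * input size."""
    limit = max_ratio * len(data)
    d = zlib.decompressobj()
    # max_length caps how many bytes decompress() may return; asking for
    # limit + 1 bytes lets us detect that the cap would be exceeded.
    out = d.decompress(data, limit + 1)
    if len(out) > limit or d.unconsumed_tail:
        raise ValueError(
            "output exceeds %dx input: likely a decompression bomb" % max_ratio)
    return out
```

A compressed 1MB run of a single byte (about 1KB on the wire) trips the check immediately, while ordinary payloads pass through untouched.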

This is actually a pretty cool idea (although perhaps badly explained, given the other comments here).

Here's how it could work: IPFS ("In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository") is a globally distributed hash-addressed versioned filesystem. (see: http://ipfs.io/)

This is one of the most crucial things we need to make free software viable again. In 2006, I wrote that the only solution to the problem of proprietary services was to "build these services as decentralized free-software peer-to-peer applications, pieces of which run on the computers of each user": https://www.mail-archive.com/kragen-tol@canonical.org/msg001...

We still have a long way to go, but it's heartening to see so much work toward solving the problem! Perhaps one of the systems mnot links to will evolve to solve the problem; perhaps it will be something that we haven't started to build yet.

This is crucial to the future of civilization and to the longevity of your personal work. Nearly all the effort that went into proprietary software in the 1980s and 1990s has been lost rather than becoming part of the cultural heritage of humanity, in the way that Emacs and GCC have. Similarly, everything you invest today into proprietary web services is ultimately destined for the dumpster, whether it's code you write to build them or data you store in them. We need an alternative that has a chance of lasting.

There's already a quite large distributed "HTTP" being used everyday: BitTorrent's DHT network. URIs are just the keys of the distributed hash table. Keys are also mutable so one can change the content stored at specific keys. Right now it's being used to serve very large files and not HTML/CSS/JS files. Things like Project Maelstrom are a step in the right direction.

Problem is that it's hard to find things, just like it was hard when the Web started. There are opportunities for the next "google" of this new DHT space.

We are going to get a distributed something. He mentions a lot of the existing efforts.

I think these are tough problems but actually mostly solved in different projects that are out there. The hardest part is making the ideas work together and agreeing on protocols.

The solutions that become popular could really help quite a few people. I see it as possibly being the key to society's overall struggle for effective organization.

Right now I believe we need a small number of very flexible distributed protocols to be used as widely as possible, and have most if not all other systems built on top of them. That will mean a high degree of automation in systems integration. If we can do that and solve problems like privacy, synchronization, and latency issues at the same time, we could leverage that type of system for addressing things like inequality and efficient use of resources.

I met Mark and Tim Berners-Lee at Extensible Summit and was very happy that they are still actively fighting for the World Wide Web in its full distributed, decentralized glory.

I do work on synchronization in distributed systems, and would like to add my database, http://gunDB.io/, to the list. Why? Because it answers his questions in the "Some State and Processing Really Wants to Be Centralised" section. If you want more info on this, check out the github repo, or ask me.

Mark's "Modifying The Web is Scary" section is important, I do see a lot of people reinventing the wheel but it isn't too hard to get everything to work over PATCH (sadly a verb which didn't take off but is in the specification) and upgrading to WebSockets.

> Well, for one thing, you must always remember the immortal words, DON'T PANIC

So true. A colleague of mine managed - on his second or third day on the job - to delete every single user account in our Active Directory. After an hour, we gave up trying to restore the AD (it was an SBS2008, so no AD recycle bin) and simply restored the entire DC (at the time, our domain only had the one DC) from backup. Surprisingly, most of our users took it very well and used the time to get some paperwork done or clean up their desks or something like that. Still, it was one of the most stressful days of my life. So we kind of panicked. In retrospect, I think another hour or so of research might have saved us the eight hours of restoring that server (did I mention that our backup infrastructure really, really sucked at the time?).

In smaller disasters, I've found the ability to remain calm most valuable, though. Having your boss breathing down your neck impatiently can instill a deep desire to simply do something, just to show that you are working on the problem. But if you don't understand what's wrong, at best you are wasting time, and possibly making the problem even worse.

I had to do something similar a long time ago, in the mid-90s, when Linux switched from libc5 to glibc6. In this case, I hadn't deleted everything; rather, I'd stupidly upgraded libc locally.

After learning a valuable lesson in exactly how dynamic libraries work and the recommended process for a live libc upgrade (don't do it if the ABI changes), I fixed it by using my IRC client -- which was already running and so unaffected -- to get a statically linked copy of /bin and /sbin from another machine, via DCC Send...

I can't remember how I got root; either su was statically linked (believable since it's setuid) or I had a logged-in root session. I did have to use the tcsh "echo *" trick for file listings and the shell built-in cd...
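For anyone who hasn't needed the trick: it relies on the shell expanding globs itself, so you can list a directory even when every external binary (including ls) is unrunnable. A quick illustration:

```shell
# The shell, not /bin/ls, expands the glob, so this works even
# when no external binaries can be executed:
cd /bin
echo *        # every (non-hidden) filename, space-separated
echo .* *     # include dotfiles too
```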

Yip, had a manager do that on a client's site. Best part was the kit was so new that only a few existed in the country, the install set for the OS had not arrived, and there were no backups. It was a new machine that had been partially configured, awaiting tapes.

Luckily another client had the same RS/6000 (I think the third in the country outside IBM) and we were able to borrow their install DAT to bring AIX back to life.

Oddly, we had a similar problem with an RT/6150 (nobody admitted causing it), which involved copying files from a working system onto the holed system to fill the gaps and get it limping along. The eventual reinstall that weekend took most of the weekend, only for us to find that floppy disk 70-odd was corrupt. Much fun.

But *nix is great, as there is always more than one way to get things done, and on many systems that can also be true.

Still, it was a good education not only in backups, but in backup integrity, as you never know when you'll want to read them back.

Alas, their quality has been going down recently. The most important feature -- remembering your position inside an article between app invocations -- is not working. For long articles, if I stop in the middle and want to resume later, there's an 80% chance that Pocket will happily drop me right back at the beginning.

Besides, their rendering for articles with code sucks, so I almost always use "web view", a decision Pocket also forgets every other time.

So I end up using Pocket as a convenient keyboard-shortcut to save articles, but on my phone actually open them into Chrome, which has no problem remembering the location in a tab.

When I see stats like this, or like the 400MM users/40 devs, that WhatsApp had at one point, I can't help but think back to, say, 1985. What would it take to develop and scale a software product to that number of users?

I worked for MultiScope in 1991. We had to order discs and have disc labels printed, copy the compiler onto the discs, have manuals and boxes printed, stuff and shrink-wrap the boxes, ship to Ingram Micro for distribution, and then wait 2-4 weeks for our product to show up on the shelves at Egghead. I recall 5 developers, and we were ecstatic to ship 4,000 copies of a major new version.

That gets me thinking in terms of leverage. The leverage that 2015 Internet technology affords a single developer is a potent economic force.

Great read, but not a fan of the headline. The underlying concept is good -- growth and headcount don't always need to scale together -- but user count is too relative to the industry, company, or product. In some cases, scaling to 1,000 users would be a bigger feat than scaling to 20M users.

I think I remember in the early days there being pushback from content providers about not getting clickthroughs, ad impressions. What's the status of this type of service re: copyright? Neither Pocket nor users have any right to transform / create derivative works -- is there some loophole here about personal use and not re-distributing?

Is it a copyright violation to make a cross-stitch version of a tweet for your living room? To provide a meme generator service that uses NYT headlines?

It's interesting - the article mentions them having a lot of projects on the docket, but doesn't go into detail on most of them. I use Pocket every day and a few bugs notwithstanding, I'm very happy with it. In a weird way, them having that many projects worries me because it means it might bloat outwards from what it is today.

With two people per 1k sq-ft apartment... you need ~100-fold improvement over the conventional wisdom to get full 100% sustainability. So let's assume you can magically get 10x gains in "land use" efficiency, and that you can stack those units 10 levels high (piping 1/10 of the 1kW/m^2 solar incidence to each level). But herein lies the rub: Urban density exceeds 2 people per 1k sq-ft of building rooftop area. So 100% rooftop sustainability is probably a no go.

Of course something is better than nothing (is it truly better than rooftop solar?), but it's worth pointing out that it can't be a panacea.

Interesting as the silk and the family's history are, it seems odd that Vigo teaches a few people how to weave it, but not how to make it shine. Or at least the article didn't specifically say whether she teaches the process of lemon juice and spices.

This strikes me more as keeping it a secret within the family than as protecting people from God. Sure, the business failed, but that's no reason to keep the process of making it shine a secret. If she is hit by a bus or drops dead, that's it for the knowledge.

This is an interesting history. And perhaps it's nothing more than the uniqueness that makes it interesting; however, Ms. Vigo seems very well suited as an ambassador for this dying tradition. She has the lineage, the myth, and the aura to make it interesting for a new crop of artisans, now that there is more interest in traditional methods.

This is brilliant because the main cost of running gear is power draw (PDUs / electrical circuits). Having an OEM/ODM blade ARM setup à la SGI CloudRack/Supermicro is the way to drive costs to the floor, in a Backblaze/Google way. Unfortunately, it's a "Dell/Walmart model" hypercommodity where such a business has to maintain massive customer subscriptions to stay cash positive and still just trickles in $.

It's an interesting space, but if I were launching a cloud IaaS/VPS, I would probably optimize for the other extreme of "Apple model" premium/full-service expensive hosting that has fantastic uptime, gear and sales/support for enterprise/startup and IT/web operations... There's some more money in that and less headaches. (The most money seems to be in the upper-middle pricepoint area.)

For every comparison we should take into account that Scaleway offers dedicated hardware, not a VPS.

Also, I think it's important to note that they (currently) only offer one very "small" server model, so your whole application would have to scale horizontally really well to run on such infrastructure. So you can't have a few big database servers and a lot of small stateless application servers, which I believe is a very typical architecture today.

Scaleway is a highly interesting player in the IaaS market, as they're one of the few currently offering ARM-based servers. Will we see more ARM servers in the next couple of years from more vendors?

(The title currently just says '2,99 per month'. This should probably be changed to "2.99" instead, since a lot of people would expect HN to default to USD. Even I (living in Germany) was surprised it was in Euro.)

Vim clones are a dime a dozen; this is actually an entirely different thing, and although I can't imagine actually using it for anything... it's interesting to see people work on different approaches to text editors.

A simple hackable Python text editor may not be useful in the long term for anything (for all the reasons about distributing Python and Python's performance limitations), but it's certainly viable for prototyping interesting features.

Interesting project. Vim is riddled with issues and we badly need a replacement (coming from a dedicated Vim user). Currently VimR and NeoVim are in the lead for tackling some of the hard problems (async calls, a real plugin system, obvious features like fuzzy finder with good UI integration, etc).

I found this part amusing: "What did Bram Moolenaar say". After using Vim for a few years, and seeing his design choices, I wouldn't personally be interested in his opinion of an editor!

Am I wrong in thinking that this is a Tk Text widget (or possibly several), with a selection of functions atop? Plugins for the most part are just key press callbacks which manipulate this Text widget and associated state?

I know every project must start somewhere but this doesn't seem to have considerable substance for a purported next gen vi(m), and given its heavy reliance on pre-made tools (like text areas), without much abstraction, it seems like it would be hard to get over the hump to make it competitive with existing editors.

Interesting concept. I especially like that you've made it a little bit more fun, which could make it more accessible. If I'm not mistaken, these are bite-size tutorials? Will it eventually lead to a finished program? Building something that worked and was functional/moderately useful was what piqued my interest to dive further into development.

Hi, looks good -- an interesting way to learn about Java. IMHO it would be nice if there were more room for user input (choosing from two options is not enough). The questions at the end are a good example of adding user interaction, rather than just revealing the answer.

There is something strange in 03-04, second line starts with "6 myBoolean ...". I don't understand it.

What are some good sites for learning Java, as it's used inside of large organizations? I already know how to code, but have run screaming from Java whenever I started looking at it. But where I work, it's ubiquitous.

I'd rather not learn on my phone. I have this giant desktop sitting in front of me...

This is the well-known CRT fault attack, nothing new. SSL implementations that don't verify their signatures leak the private key if their signature routine has a bug - this is essentially a hardware problem. Verifying your signatures is fast, though, so doing it is worthwhile hardening (NSA can potentially use cosmic rays for this attack, probably nobody else).

The findings are very similar to the classic "Ron was wrong, Whit is right" paper - if you scan the entire Internet, you will find broken hardware. You will also find SSH servers with their root password being `12345678`.
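For anyone unfamiliar with the attack mentioned above: an RSA implementation using the CRT speedup computes the signature mod p and mod q separately, and a fault in just one half leaks a factor of the modulus via a single gcd (Boneh-DeMillo-Lipton / Lenstra). A toy sketch with deliberately tiny primes (real keys are 2048+ bits; all concrete values here are illustrative):

```python
import math

# toy key material: small primes for readability only
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

m = 42  # message representative (would be padded/hashed in real life)

# CRT signing: one exponentiation mod p, one mod q, then recombine
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)

def crt(sp, sq):
    h = (pow(q, -1, p) * (sp - sq)) % p  # Garner recombination
    return sq + q * h

s_good = crt(sp, sq)
s_bad = crt(sp, (sq + 1) % q)  # fault injected into the mod-q half only

assert pow(s_good, e, n) == m  # the correct signature verifies

# The faulty signature is still correct mod p but wrong mod q,
# so p divides s_bad**e - m while q does not, and gcd factors n:
print(math.gcd(pow(s_bad, e, n) - m, n) == p)
```

This is exactly why verifying your own signatures before releasing them is cheap, worthwhile hardening: a signer that checks `pow(s, e, n) == m` never emits the leaky value.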

Thanks a lot for sharing. I am an utter novice at investing. Until now I have looked more at technicals (and gambled) when it came to stocks. Needless to say, the current market scenario has left me burnt.

I am trying to do a lot more fundamental analysis. (Currently trying to get my head around Benjamin Graham's "The Intelligent Investor" -- with inputs from Buffett himself.)

But this mode of investing needs a lot more mental strength than I thought -- it is almost a strict regime that needs to be followed.

I wanted to know how regular stock investors in the HN community go about doing this, i.e., studying the macro/micro factors and company fundamentals on a regular basis, and making investment decisions accordingly.

I sit a few buildings down from Cindy's office and have also been studying colormaps quite a bit in the last 3 years. It's interesting how many plotting packages get this wrong but are finally catching up.

I switched from Matlab to Python years ago and was sad to see pyplot using the default rainbow palette still. However, there was some good work done by Chris Beaumont to improve the plot quality. See: http://plotornot.chrisbeaumont.org/ You can easily import these styles into matplotlib using rcparams.

Matlab now uses a colormap with roughly perceptually linear luminance that they call Parula. Good job, Matlab.

I want to talk about the Luv and Lab colorspaces. There are several places on the net (even in the literature) that are wrong about these colorspaces, saying Lab is for emissive displays and Luv is for reflected light. This is actually not true. (If anything, it is reversed.) See: https://groups.google.com/d/msg/scikit-image/DIRaSXJoEes/2jD... and the Berns reference.

The interesting thing about colorspaces (and the colormaps built in them) is that working in a perceptual space like Luv/Lab yields a non-linear (and non-convex) gamut in the sRGB space used by most monitors. There is more "headroom" in the magenta hues than in, say, green. However, you then have to look at monitor output as a function of hue and human sensitivity -- given a red object and a blue object with the same reflectance under the same illumination, the blue object will appear darker to humans. So there are many transfer functions at work here, which makes it challenging to pick a colormap that is perceptually uniform, has the maximum number of perceivable differences, and has the appropriate number of hues for best representing your dataset.
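The unequal perceived lightness of hues is easy to quantify: converting the sRGB primaries to CIE L* (lightness) puts pure red around L* ≈ 53 and pure blue around L* ≈ 32, even though both are "full-intensity" channels. A rough sketch of the standard conversion (D65 luminance weights; formulas from the sRGB and CIELAB definitions):

```python
def srgb_to_lightness(r, g, b):
    """sRGB components in [0,1] -> CIE L* in [0,100] (D65 white)."""
    def linearize(c):  # undo the sRGB gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    # relative luminance Y with the Rec. 709 / sRGB weights
    y = 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

    # CIELAB lightness: L* = 116 * f(Y/Yn) - 16, with Yn = 1 here
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

print(round(srgb_to_lightness(1, 0, 0)))  # pure red:  ~53
print(round(srgb_to_lightness(0, 0, 1)))  # pure blue: ~32
```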

Whenever you write an email, you should envision how it'll read as evidence in a court transcript. Envision a jury reading over your shoulder. That's essentially what will happen if there's a preservation order and a court case.

This is just one of tons of reasons why email is overused. Live, interactive, two-way conversations are better for most things. Making better use of interactive conversations does require some planning and discipline, to keep a list of what you need to discuss sorted by person. But the benefits of that practice are numerous, and increased privacy and plausible deniability are comparatively minor ones.

A few reasons to prefer interactive discussion to email:

- plausible deniability and increased security

- reduced chance of misunderstanding

- less time spent (and potentially wasted) carefully crafting the perfect message, because you can monitor your recipients' reactions in real time and dynamically alter your delivery depending on which parts are immediately understood and agreed upon

- collaboration on the ideas, interactive and rapid, rather than a simple one-way transfer

Taking a conversation offline provides evidence of intent, because if you're trying to cover your tracks, you probably know what you're doing is wrong.

I can't tell if this is big business trying to make investigating white collar crimes harder or the federal government trying to drum up support for mass surveillance. I'm leaning towards the former based on where the article is.

This article is a waste of space. Clearly if your work emails are being subpoenaed by a federal investigator, you're already under suspicion (the article is talking about federal insider trading investigations). All this means is that if you refer to an out-of-band conversation, then they will look there.

When I was a child I loved the USA, thought it was a great country, and dreamed of visiting it one day. Now that I'm grown up and can finally afford intercontinental travel, I am seriously afraid to even enter the USA. I might be arrested for no reason, or have my cash forfeited, also for no reason... I think the few remaining free countries in the world are Switzerland, Singapore, Australia, and maybe Hong Kong.

I usually defer to a different mode of communication when it's more convenient or appropriate to the discussion. Instead of trying to type a novel to explain something, I'd prefer to engage in a conversation in order to identify parts of the topic that can be skipped. This usually reduces the time cost to communication.

> Because Go gives us interfaces and closures we can write much more elegant, generic APIs with a flavor similar to Ruby or Lisp and this is the direction the language naturally wants us to take. Personally I like to use the empty interface for plumbing and only pin things down to specific interfaces or concrete types where I need to for performance or correctness.

That's a lot like using opaque pointers in C. What is it about Go that makes people assume its shortcomings are beautiful designs?

I learned yesterday that `x == nil` can return false even if x is nil so long as x is an interface type. But it depends on whether x is actually nil or a nil value with a specific type.
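A minimal reproduction of the gotcha (type and function names are mine): an interface value carries a (type, value) pair and compares equal to nil only when both halves are nil, so returning a typed nil pointer through an error interface yields a non-nil error.

```go
package main

import "fmt"

type myErr struct{}

func (*myErr) Error() string { return "boom" }

func mayFail() error {
	var e *myErr // nil pointer of a concrete type
	return e     // boxed into a non-nil (type=*myErr, value=nil) interface
}

func main() {
	err := mayFail()
	fmt.Println(err == nil) // false, even though the pointer inside is nil
}
```

The usual fix is to return a literal `nil` on the success path rather than a typed nil variable.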


My other pet peeve is that a method with a non-pointer receiver that tries to modify the receiver object will silently drop those modifications on the ground, because the object is copied. Which makes some sense, except that Go likes to convert to pointer receivers automatically, so the caller can't tell that anything is wrong. The only difference is one character in the method definition. Everyone I know hits this bug at some point and loses half an hour before they learn to look for it. You could almost say "all method receivers must be pointers" except that you need to refer to interface types without the pointer.
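A sketch of that half-hour bug (names are mine): the value-receiver method mutates a copy, and because Go automatically takes the address when calling a pointer-receiver method, the two call sites look identical.

```go
package main

import "fmt"

type counter struct{ n int }

// value receiver: the method gets a copy, so the increment is silently lost
func (c counter) IncByValue() { c.n++ }

// pointer receiver: the method mutates the caller's object
func (c *counter) IncByPointer() { c.n++ }

func main() {
	c := counter{}
	c.IncByValue()   // compiles and runs, but c.n is still 0
	c.IncByPointer() // sugar for (&c).IncByPointer(); c.n becomes 1
	fmt.Println(c.n) // 1
}
```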

I was hoping this article would have more substantive advice for new Go developers. It seemed to mostly have very general advice that applies to most other languages like: "Don't try and write language X in language Y. Keep complexity down by not over using complex language features."

I'm writing my current hobby project, a podcast fetcher, as my first project in Go.

The project has been going generally well but there have been a few annoyances so far:

* Why are you not able to easily version git dependencies? Go's solution to this problem is to tell you to create an entirely new git repository for each major version. Really? If they didn't want to go full blown dependency versioning with something like CocoaPods, they could at least let you specify a git branch or tag.

* The database/sql abstraction does not support multiple result sets. Therefore database drivers, like the popular MySQL driver, don't support multiple result sets either. This really limits the kinds of stored procedures you can call.

* The debugger support is bad. I have to fall back to using print statements for most of my debugging.

> I resisted the recommended workspace configuration, as described in How to Write Go Code. Don't bother, especially in the beginning

I've had the same dev folder structure across platforms, languages, jobs and decades. I basically had to abandon that structure when starting Go. I fought & fought and at the end of the day it is just easier to use their expected workflow. It was (is?) galling but it was the only way to stop fighting the tools and get work done.

I was expecting an empty page with a huge "don't" in it. Instead, there were interview excerpts from Go developers and an ambiguous list of "best practices". It would be more meaningful if this were backed by some actual practices with code examples.

I just don't see what people think others would understand from teleological statements like: "write Go the way it wants to be written".

So, if Go is as bad as the comments so far make it out to be, what are some alternative languages? Specifically, a compiled language that can be deployed without worrying about dependencies. That's the feature that has had me looking into learning Go. I want to be able to just copy one file to my server and run it, no need to install anything extra on the server to make my program work.

It's not all Go-specific: despite the comment about 'C style' heap allocation and pointer usage, you will find your C code gets better if you avoid this as well. The heap is a last resort, not the first.

> LookingGlass is meant to be run on a local, headless (without monitor), always-on computer. Installation consists of copying a disk image to an SD card, inserting that into a Raspberry Pi, and plugging it into your local network (preferably behind a router).

Appliance designs such as this have 0 chance of gaining significant use.

The author should consider rolling this solution into software packages that run on operating systems people actually have.

ol' Randal is going to get (or some permutation thereof). Figuring that Randal is pretty smart, I bet he has a piece of code to parse out that. Still, anyone here have a good hack that can just nuke days of his time whilst completing this form? Only other one I can think of him using is (for Matlab):

The most interesting question, to me, is the one about which words you know the meaning of.

About half of them aren't real words. I assume this question is used partly as a gauge of vocabulary (how many of the real words do you recognize) and partly of honesty (how many of the fake words do you claim to recognize).