I’ve got one of these tiny fanless systems as a desktop. You can screw them to the back of the monitor, or simply have one lying at the back of the desk somewhere. Works great for me, and no big desktop needed.

I come to the same conclusion from the other side: I agree with the idea of filtering by tags in principle, but I couldn’t care less whether a new tag is adopted or not, since either way it will (or won’t) be there for me to filter on at my discretion.

The discussion about the tag itself is just bikeshedding to me. It would be nice if I could filter it.

It’s not tech we don’t have, it’s just a willingness to build systems that are not economically optimal. Could we build 100mi^2 of solar panels? Yes! Is that ever going to be cheaper than if we just picked the most financially efficient path? Super unlikely.

For fun(!) I wanted to make a client for MPD with Elm and display:grid. It now works more or less and was easy enough to do, but I’m not really happy with the default websocket library from Elm. Maybe there is one with a more explicit connection state? To finish it this week I want to make the design presentable.

What in particular are you looking for in controlling the connection state? The websocket library is mostly built around the idea that you shouldn’t need to worry about handling the connection or how messages are sent. You can use the low-level functions to put together an explicit model of the state if that’s what you need.

I would like to know the state of the connection, so I can display it, or hide certain parts of the GUI when there is no connection. The current magic reconnects by the high-level module can take quite a while with the built-in exponential backoff (and reloading somehow does not force a new connection to be established).
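
As a rough illustration of why those reconnects can stall: with exponential backoff the wait doubles on every failed attempt. This is a generic sketch of the idea, not Elm’s actual library (the function name and defaults are my own):

```python
# Hypothetical reconnect schedule: the delay doubles per failed
# attempt, up to a cap -- a common shape for exponential backoff.
def backoff_delay(attempt, base=1.0, cap=60.0):
    """Delay in seconds before reconnect attempt `attempt` (0-based)."""
    return min(cap, base * 2 ** attempt)

# Attempts 0..6 wait 1, 2, 4, 8, 16, 32, 60 seconds -- after a few
# failures the user can easily stare at a dead GUI for half a minute.
```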

The low level functions are really rather low level :) Guess I’ll have to dig in them anyway…

While there are more packages for the glibc variant than the musl variant, I would not characterise this as “not many packages”. Musl is quite well supported and it’s really only a relatively small number of things which are missing.

Void has good support for ZFS, which I appreciate (unlike say Arch where there’s only unofficial support and where the integration is far from ideal). Void also has an option to use musl libc rather than glibc.

Void has a great build system. It builds packages using user namespaces (or chroot on older kernels), so builds are isolated and can run without elevated privileges. The build system is also quite hackable, and I’ve heard it’s easy to add new packages.

Truly minimalist. The fish shell package uses Python for a few things but does not have an explicit Python dependency. The system doesn’t even come with a crond (which is fine; the few scripts I have running that need one I just wrap in a script with a sleep).

Has a well maintained musl-libc version. I’m running musl void on a media PC right now, and when I have nothing running but X, the entire system uses ~120MB of RAM (which is fantastic because the system isn’t too powerful).

That said, my go-to is FreeBSD (haven’t gotten a chance to try OpenBSD yet, but it’s high on my list).

I’d use Void, but I much prefer rc.d, which is why I like FreeBSD. It’s so great to use daemon_option= variables to do things like enabling a client-only firewall, easily running multiple uwsgi applications, running multiple instances of tor with different configs (useful for relays; it doesn’t really make sense for a client), setting the resolver with dnscrypt_proxy_resolver, setting general flags, etc.

For so many services, all one needs to do is set a couple of basic options, and it’s nice to have that at a central point where it makes sense. It’s much easier to see how configuration relates when it’s all in one place. I know it doesn’t make sense for everything, but when I have a server running a few services working together it’s perfect. Somehow it also feels nicer on the desktop, because it can be used a bit like a GUI system-management tool.
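
For readers who haven’t used it: rc.conf centralizes service configuration as plain shell variable assignments, all in one file. A sketch of the kind of thing described above (the specific service names, profile mechanism, and flag values here are illustrative, not copied from a real system):

```shell
# /etc/rc.conf -- one central point for service configuration
# (illustrative values, not a tested configuration)
pf_enable="YES"                       # enable the pf firewall
pf_rules="/etc/pf.conf"               # client-only ruleset lives here
tor_enable="YES"
uwsgi_enable="YES"
uwsgi_profiles="app1 app2"            # run multiple uwsgi applications
uwsgi_app1_flags="--ini /usr/local/etc/app1.ini"
uwsgi_app2_flags="--ini /usr/local/etc/app2.ini"
dnscrypt_proxy_enable="YES"
dnscrypt_proxy_resolver="cisco"       # pick the resolver in one line
```

Because every service follows the same `name_enable`/`name_flags` convention, skimming this one file shows how the pieces relate.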

In Linux land one has Alpine, but I am not sure how well it works on a desktop. Void and Alpine have a lot in common, even though Alpine seems more targeted at servers and is used a lot for containers.

For advantages: If you like runit, LibreSSL and simplicity you might like it more than Debian.

However, I am using FreeBSD these days, because I’d consider it closer to Linux in other areas than OpenBSD is. These days there is nothing that prevents me from switching to OpenBSD or DragonFly though, so it’s about which trade-offs you choose: OpenBSD is simpler, DragonFly is faster and has recent Intel drivers, etc.

For security: on the desktop I think that, other than me doing something stupid, by far the biggest attack vector is a bug in the browser or another desktop client application, and I think neither OS will save me from that on its own. That’s not to say it’s meaningless, or that mitigations don’t work, or that it’s the same on servers; it’s just that this is my threat model for this system and use case.

The built-in Firefox Reader mode is a godsend. I feel much more comfortable reading long texts in the same font, page width, background color + the scrollbar on the right now gives me a pretty good estimate of reading time.

Good start… now what about templating alternatives like Telegram’s Instant View? Millions of links being sent over IM every day are being rewritten into new templates created by a third party. Sure, it addresses the speed / mobile accessibility concerns, but it’s also very heavy-handed backend processing that’s a black box to users.

More like a mix of Facebook’s Instant Articles and Reader Mode from Safari/Firefox (also Pocket). It doesn’t require special markup on the webpage, but it has crowdsourced rewrite rules that remove cruft from webpages. It loads processed webpages from their server though, unlike Reader modes.

At least Telegram leaves links posted to chat as-is: looking like links, underlined, leading to the original URL. It adds an “Instant View” button alongside, which looks like a button and opens the instant-article popup.

To my understanding it is just a different content/service provider that wants to create a platform to give the same experience to its users. There are some differences in Telegram’s implementation and in the effort that needs to be put in by the site’s developers, but it is still served from a cache kept on their servers. On this point you get a good loading experience for, say, Medium articles in Telegram, but you don’t get the same experience outside of it.

Unfortunately AMD historically hasn’t had the management and the stockholder returns to take on Fortress Intel. So the Intel board hires weasel CEOs to exploit the situation. Ironically, the tech is more than good enough.

It already is. An across-the-board 30% hit is fairly common on cloud services. So the hit is worse than, say, Apple and its battery/clock-down issue, but clearly the Intel weasels think they can outlast it: what are you going to do, not buy more Intel?

Finish a system which makes RSS feeds from release versions, with Wikipedia as the source. This gives me a single place to keep track of the things I need to follow for work, without too much noise. Works pretty well for things like git, Kibana, or Ruby; not so much for very small projects which don’t have a Wikipedia page :)
Mostly done, needs a few things still: https://verssion.one and/or my github.
After that the next side project to keep me distracted from real work…
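
The feed-generation half of a system like this is small enough to sketch. This is a minimal illustration of turning (project, version) pairs into RSS 2.0, not the actual verssion.one code:

```python
from xml.etree import ElementTree as ET

def release_feed(title, releases):
    """Build a minimal RSS 2.0 document from (project, version) pairs.

    `releases` is an iterable like [("git", "2.17.0"), ("Ruby", "2.5.1")];
    each pair becomes one <item> in the feed.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for project, version in releases:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"{project} {version}"
    return ET.tostring(rss, encoding="unicode")
```

The hard part, of course, is not emitting the XML but deciding which versions to feed into it in the first place.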

I plan to do something like this (track software updates), but I would directly scrape software websites or VCS web interfaces (e.g. fetch git tags on cgit) instead of relying on Wikipedia, to get instant updates. I wonder how package maintainers (or Wikipedia contributors) deal with this, because few pieces of software have an RSS feed for updates.

By the way, a user-friendly way to privately scrape websites (elements of pages, text files, etc.) automatically for all kinds of updates would be awesome. I know someone who uses a piece of proprietary software for that, and it is not so nice.

Going to the sites directly makes it really hard to filter out betas and other unstable releases. Even the GitHub “releases” pages don’t help much here; they happily list the -pre1 releases too. Regexp-ing version numbers is also less easy than it looks (192.168.1.1 looks like a perfectly fine version number), so you would need to twiddle for each and every software project, defining a source, what a version looks like, and how unstable versions work, and keep all of that up to date. On top of that there is Vim, which releases a new stable(!) version every 4 hours or so. :)
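
To make the “less easy than it looks” point concrete, here is a deliberately strict per-project pattern of the kind you would end up maintaining (this is a sketch, not code from any of the tools discussed):

```python
import re

# Deliberately strict: an optional "v", then exactly two or three
# numeric components, nothing else.  This rejects both prerelease
# tags like "1.0-pre1" and lookalikes like "192.168.1.1" (four parts).
STABLE = re.compile(r"^v?(\d+)\.(\d+)(?:\.(\d+))?$")

def stable_only(tags):
    """Keep only tags that look like plain stable version numbers."""
    return [t for t in tags if STABLE.match(t)]
```

Every project with a different tagging scheme (date-based versions, odd/even stable conventions, suffix styles) needs its own variant of this, which is exactly the maintenance burden described above.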

I had a look at all that, and at what is available on Wikipedia. I chose to use Wikipedia, since it’s Good Enough IMHO. And as a bonus it helps keep Wikipedia up to date!

I’m honestly surprised that it’s so little… I know $8,000 is a lot of money in many places, but given the scale of these attacks I’d have thought a lot more money would have been raised. It must be incredibly tempting to pay: $300 in Bitcoin is nothing compared to the scale of the disruption some companies are experiencing. Hell, if I were CTO at one of the big firms I’d be getting some Bitcoin ready for a future attack where it turns out that paying the ransom is the only real option.

If a CTO’s reaction was to get Bitcoin ready for some future attack then that CTO should be fired - backups, regular system maintenance, planning and testing disaster recovery would all mitigate this form of attack.

If a CTO cannot ensure that the disaster recovery or business continuity processes work - then in my opinion that CTO has failed, both in their duty to the organisation and in their role as a leader.

I have worked at big and small organisations in both the public and private sectors, and worked with both outstanding and incompetent CTOs. As a leader you are setting the standard that those who work for you will follow.

In a system that size, the CTO can set expectations, but should assume something will be missed. Having a few grand of BTC handy seems like a perfectly reasonable call in case something goes badly wrong.

A CTO should be responsible for DR? Are you sure you don’t mean the CIO? A CTO should be able to raise the issues with their counterparts, and a CTO making sure that DR is implemented is certainly appropriate, but I don’t understand why they would be responsible for it.

Yes, fair point - DR, et al. is the CIO’s responsibility (certainly from an operational perspective), rather than the CTO’s (assuming the CIO role exists and it hasn’t all been folded into a single CTO role).

Yup, good thinking. I agree it’s little, but I guess they didn’t want to price out the general public. A smarter thing to build in would be to have it analyse how many computers the code can see on the network and price itself dynamically, accordingly.

I’ll never understand why people fight over programming languages. Do carpenters fight over which type of hammer is best to hit a nail with? No, they just build the thing, using whatever tool is best for the job.

If we could find two carpenters who had come to identify with their hammers on some level, via some aspect of them (brand, material, shape), I’ll bet the answer is yes! They would have a spirited and wide-ranging argument about whatever details mattered to them, and it would probably be really fascinating to watch for all of us who assume “a hammer is a hammer”.

Ultimately, I take your point, but the propensity to identify with a programming language is, for whatever reason, higher among programmers than hammer-identity is among carpenters.

(Aside: I would be incredibly sad to see Clojure begin to decline in popularity earlier than Scala)

Your analogy is broken. Programming languages are nothing like any tool a carpenter uses, and they are orders of magnitude more complicated. When a carpenter builds something, the tools they use are not embedded in the resulting artifact. But programs are extensions of the language they are implemented in. I often have to read stacktraces from “production” software I’ve installed, and to understand those stacktraces and what the issue is I need to understand the language. If I want to fix a bug in a program I need to understand how it was implemented and what it was implemented with. For the desk I own it doesn’t matter if the original creator used a nail gun or a hammer. The choices programmers make in how to implement a solution stick with it for its life.

You could have made the desk in oak, or plywood, or cheap pine. You could have made a solid desk with drawers and an inkwell, or just some boards, or one with just three legs. You could have asked a non-carpenter for your desk and gotten something perfectly usable made from iron, or from glass, or poured from concrete. Seems like you’re also stuck with those choices for its life.

The comment I was responding to talked about tools such as hammers. When I buy a desk, the material it’s made of is actually often a requirement, whereas the language software is made with is often not an explicit requirement. Analogies suck, but I don’t think your response really changes the discussion in any meaningful way.

In an ideal world, we could pick tools based on the needs of the job. It’s not one, though. Most of the time, whether it’s a library, framework, team decision, or relic of the past driving usage, the choice of programming language was made for you and is never within your control.

The fight is over which tools other people have, because we benefit when others have the same tools as us.

I don’t like framing it as a “fight”. Instead, I would prefer framing it as collaboration, or simply sharing ideas. Ruby has certain ideas, Clojure has other ideas; we take good ideas when they are applicable to the problem domain. No need to fight over which is best. Just say “hey, that feature X is a good idea” and use it if it works for you. If it doesn’t work for your situation, that’s fine too.

Nix can produce debs and rpms. Or, when you build, you can capture the path to the binary in the store and tar up its “closure” (the executables of your package and all of its dependencies) to copy to any other computer. You don’t even need Nix on the target computer!

If you would like some help with either of these, I’m sure we’d love to help in #nixos on Freenode.

Man in the middle, or if you’ve exploited whoever is serving the JS, it can certainly be bypassed. But in that case, why bother making a comment when you can just insert JavaScript wherever you want in the page itself.

JavaScript hackery in the comment? Only if you wish to attack yourself. All other clients/users will still sanitize it away, because they don’t have your manipulated variant that does not have sanitization. That said, improper sanitization or broken sanitization would be vulnerable, but that’s merely because it’s, well, broken.

If I’m wrong (I have this vague feeling that I am), please, do tell me.

I interpreted sanitization to mean “this comment is approved for submission into the database”. If you can override that sanitization function to always return true, then you can submit whatever comments you like.

It would be something of a waste to sanitize after-the-fact. Imagine watching the page load hundreds of spammy comments, then hide them. Not only would it look odd, but it would slow down the page load.

Even worse, a comment could attempt SQL injection to attack the server. SQL sanitation should always be done on the server.

Ah, yes, of course. SQL sanitation, and anything of a similar nature, should be done server-side. I was thinking more along the lines of unexpected HTML appearing in comments as opposed to an attack on the server.
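
Both server-side points (escaping unexpected HTML, and keeping SQL injection out) can be sketched in a few lines. This is an illustrative Python example; the schema and the escape-on-write policy are my own choices, not from the project under discussion:

```python
import html
import sqlite3

# Throwaway in-memory database standing in for the comment store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")

def store_comment(raw):
    """Store a comment, neutralizing HTML and avoiding SQL injection."""
    safe = html.escape(raw)  # server-side: "<script>" becomes "&lt;script&gt;"
    # Parameterized query: the driver handles quoting, so comment text
    # can never be interpreted as SQL, no matter what it contains.
    conn.execute("INSERT INTO comments (body) VALUES (?)", (safe,))
    return safe
```

Because both steps happen on the server, a client that has stripped out its own sanitization code gains nothing.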

Additionally, comments could be sanitized on an as-needed basis, and be hidden by default.

“It would be something of a waste to sanitize after-the-fact. Imagine watching the page load hundreds of spammy comments, then hide them. Not only would it look odd, but it would slow down the page load.”

Spammy comments (XSS or not) are solved by a moderator: stripping the content and marking it “flagged as spam” or “flagged as inappropriate”, like other news sites do.

I’ll be adding comment updates (for moderators only?) in the next release.

P.S. Might add realtime support after that since the current http backend is powered by uWebSockets

“Man in the middle, or if you’ve exploited whoever is serving the JS, it can certainly be bypassed. But in that case, why bother making a comment when you can just insert JavaScript wherever you want in the page itself.”

Right. Barring any openssl vulnerabilities, serving the comments/assets over https should prevent that?

It should, but an attacker could, through various means, convince the target to trust their certificate, perhaps by asking them to “click some buttons” to enable access to a different website. Also, the vast majority of people would probably ignore warnings that are given by browsers about certificates, depending on the warning. Or, perhaps the attacker has crafted their own certificates. Or some of your HTTPS-ified code happens to request something over HTTP for some reason.

“That’s the approach I was thinking of, and then just have the engine recompile the comments thread whenever a new comment is accepted.”

This was what I arrived at in a high-level design, too. It’s also simple enough to generate that the generator could also be highly optimized and safe with current tooling. The analysis part is overhead that would already happen in a dynamic design so is immaterial.

Yeah but that would kind of defeat the purpose of a static blog. An option would be to let a commenter commit a file to the blog (not sure how) and then somehow include the file in the blog post. Recompiling could be done with git hooks but the other stuff I am not sure about.

Blogs are fairly static; updates are seldom.
Commenting, however, is dynamic and would be better implemented with a database IMHO, especially if you consider the deeply nested nature of comments. How would you model that efficiently with static files?
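
One common answer to the nesting question, which works whether the flat records live in a database table or in static files: store each comment with a parent pointer and build the tree at render time. A sketch (the field names `id`/`parent`/`replies` are hypothetical, not from any particular engine):

```python
def thread(comments):
    """Nest a flat list of comment dicts into reply trees.

    Each comment is a dict with an "id" and a "parent" key
    ("parent" is None for top-level comments).
    """
    by_parent = {}
    for c in comments:
        by_parent.setdefault(c["parent"], []).append(c)

    def build(parent_id):
        # Attach each comment's children recursively under "replies".
        return [{**c, "replies": build(c["id"])}
                for c in by_parent.get(parent_id, [])]

    return build(None)
```

The flat storage stays trivially appendable; only the (cheap) nesting step cares about structure.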

For me it boils down to efficiency. Blogs are most efficiently served by static files, and comments by a database.
(edited for formatting)