I think I’d expect it to report the result in either min² or h². It’s not wrong per se, but to take a non-time example, I think if I gave you dimensions of a road in terms of how many meters wide and km long it is, and then asked for the surface area of the road, I’d expect an answer in either m² or km², not m·km.

edit: Having actually read the help, it looks like it does support this, but only if you request the result in specific units. Otherwise it just naively compounds the units of the input. One of the examples in the help is that 6Mbit/s * 1.5h -> Gb gives you 4.05Gb. But if you type just 6Mbit/s * 1.5h, you get 9Mbit·h/s; it doesn’t attempt to simplify the units by default.
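The compounding behavior is easy to check with plain arithmetic; a quick sketch in Python (no units library, just standard SI prefixes — note the help example's 4.05 figure lines up with giga*bytes*, not gigabits):

```python
# Reduce 6 Mbit/s * 1.5 h to base units, then express in larger units.
rate_bits_per_s = 6 * 10**6        # 6 Mbit/s in bit/s
duration_s = 1.5 * 3600            # 1.5 h in seconds

total_bits = rate_bits_per_s * duration_s   # the s and 1/s cancel out

print(total_bits / 10**9)          # 32.4  -> 32.4 Gbit
print(total_bits / 8 / 10**9)      # 4.05  -> 4.05 GB (gigabytes)
```

The "9 Mbit·h/s" answer is the same quantity with the hour and second left uncancelled: 9 Mbit·h/s × 3600 s/h = 32 400 Mbit.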

A number of times I’ve signed up on a site where the account can’t be logged into until the verification link comes through e-mail, and the e-mail either never arrives or comes with significant delay (20-60 minutes). If I had to go through that every time I logged in, I would not use that site.

If we’re starting a list of people who clear their cookies regularly, I’m on it. Irregularly, but on average every other day or so. Cookies accumulate a lot of tracking information very fast, so it’s only reasonable to wipe them regularly.

Not 100% satisfied with battstat, so I’m thinking about refactoring the code to add printf(1)-style formatting instead of space-delimited replacement tokens.
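A minimal sketch of what printf(1)-style tokens could look like (the directive names `%p` and `%t` are made up for illustration, not battstat's actual tokens):

```python
import re

def render(fmt: str, fields: dict) -> str:
    """Expand %-directives from a field table; %% is a literal percent sign."""
    def sub(m):
        key = m.group(1)
        if key == "%":
            return "%"
        return str(fields[key])
    return re.sub(r"%(.)", sub, fmt)

print(render("battery at %p%%, %t left", {"p": 87, "t": "1:42"}))
# -> battery at 87%, 1:42 left
```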

I finally got around to setting up a blog (Ghost running on docker) but I’m not sure how much or how little information I want to share on it, and what topics to cover. Getting in the habit of writing more was my intention so we’ll see how long it lasts.

There’s a lot of name-calling: imbecile, crackpot, etc. I wish the author would couch their skepticism in polite terms. I get it: over the years there have been all sorts of disproven free-energy machines and what have you. THIS one has yet to be disproven, so I’d not go around slinging that language just yet.

If someone is able to figure out the cause of the force they measure, it may be interesting. But I can’t find enough motivation to spend a minute on this stuff, because I am confident that whatever the actual reason turns out to be for why they think they’re seeing a reactionless force, it will be dumb.

For now we have observations that do not fit existing knowledge, so why not investigate rather than dismiss? That’s all. I’m confident that they’ll figure out the mechanism of force generation as well, but I’m not as certain that it will be a ‘dumb’ reason. It may fit existing knowledge in some way, but show a novel application - which is totally ok. The point of the device AFAIK is not to prove ‘everything you know is wrong’, but rather ‘this might be useful, though we don’t understand how it works yet’.

Sadly, news reporting tends to go for the sensationalist ‘everything you know is wrong’ angle all too often, which is perhaps what the author is reacting to. I don’t know. But until we understand this EM drive, I don’t see a good reason to prance around calling everybody involved with it an idiot.

Disproving very simple things can take a lot of effort. However, something valuable is usually learned if it isn’t simple instrument error: a gap in understanding here, an important fact there, and, rarely, a fundamental part of physics. Light wasn’t accepted as a wave until someone was willing to do the dumb experiment meant to show how dumb it was to think that light could be a wave. Poisson really thought it was the STUPIDEST thing to insinuate. Sure, this is very likely not the situation we are presently in, but if we never tested, we’d never actually know.

A little wiki excerpt.
“Poisson studied Fresnel’s theory in detail and, being a supporter of the particle theory of light, looked for a way to prove it wrong. Poisson thought that he had found a flaw when he argued that a consequence of Fresnel’s theory was that there would exist an on-axis bright spot in the shadow of a circular obstacle, where there should be complete darkness according to the particle theory of light. Since the Arago spot is not easily observed in everyday situations, Poisson interpreted it as an absurd result and that it should disprove Fresnel’s theory.”

I fell in love with Pocket Ref and want to do something similar, targeted for travel fun.

It’s meant to be a reference manual for all the things you can do to avoid boredom. The idea is that you travel with this tiny manual everywhere and can discover new games and things to do, especially without your phone: from kids’ games and card games to conversation topics and a meditation tutorial.

The idea came to me after spending a weekend in a mountain cabin with my wife where we decided that we wouldn’t waste “idle time” playing phone games. We revisited some board games, explored around, looked at the night sky, and spent hours with insightful conversations you don’t usually have at home.

I just started, but I’ve already compiled some games and things to do. I’d love to get new ideas, especially from all around the world. I’m in Spain, which has some unique kids’ games and card games, and I suppose all countries in the world have their own. It would be amazing to include games (though not only games) from all around the world.

I’m totally open to contributions, just drop me a message or an email!

My goal is to self-publish it as a pocket book that you can physically bring with you (plus an ebook for those who want search and more). $12 seems reasonable; I was thinking of exactly that price range, depending on costs, build quality, and the end result.

I can prepare a questionnaire once it’s at a later stage; the book could benefit a lot from having traditional games from all around the world – though I’ve already done some research and most of them are pretty similar!

also with any luck i’ll release a little poc dashboard thing that queries the bitbucket/confluence apis via haskell and loads them up in the jupyter dashboard project thing!

also, probs be doing a bit more work around the fp/haskell conference some friends and i are organising here in melbs; probs around august-september. maybe some of you other melb lobster peeps will come along!

i’m trying to use the bitbucket api via haskell/wreq to make a jupyter dashboard thing using ihaskell. the hardest part is figuring out how to actually authenticate to the bitbucket api! it’s not the world’s friendliest…
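fwiw, the authentication itself is usually just HTTP Basic auth, and the header is easy to build by hand in any language. a sketch (python here rather than haskell, purely for illustration; the credentials are placeholders — on Bitbucket Cloud you’d use an app password from your account settings):

```python
import base64
import urllib.request

# placeholder credentials, not real ones
user, password = "someuser", "some-app-password"

req = urllib.request.Request("https://api.bitbucket.org/2.0/repositories/someuser")

# Basic auth is just base64("user:password") in the Authorization header
cred = base64.b64encode(f"{user}:{password}".encode()).decode()
req.add_header("Authorization", f"Basic {cred}")

print(req.get_header("Authorization"))   # prints the Basic auth header
```

in wreq the equivalent would be setting the `auth` option to `basicAuth` with the same credentials.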

maybe some people aren’t aware, but forking is actually useful even if you never want to change the project: you might simply not want it to one day disappear on you. this is important when, say, you specify projects as submodules or dependencies in your own projects.

this has happened to me personally before, and hence i now fork any project i’m interested in, as opposed to starring, so i know that it will always be accessible (as long as i keep my account; but this is something i am more in control of).

With a known timestamp (and ability to construct your own user IDs just by registering), there’s only about 24 bits of ‘entropy’ left in the user ID. Except those 24 bits are the counter, which is sequential.
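To make that concrete, here is a toy sketch with an assumed layout (timestamp in the high bits, a 24-bit sequential counter in the low bits; the real split may differ):

```python
# Hypothetical 64-bit ID: high bits are a millisecond timestamp,
# low 24 bits are a per-interval counter. Layout assumed for illustration.
def make_id(timestamp_ms: int, counter: int) -> int:
    return (timestamp_ms << 24) | (counter & 0xFFFFFF)

def split_id(uid: int):
    return uid >> 24, uid & 0xFFFFFF

uid = make_id(1_700_000_000_000, 42)
print(split_id(uid))     # (1700000000000, 42) -- the structure is transparent

# If the timestamp is known, only the counter is unguessed -- and since it
# is sequential rather than random, even brute force is generous:
print(2 ** 24)           # 16777216 candidate IDs at most
```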

While I don’t argue that history on a hosted site is a bad idea by default, and doubly so when we consider all the terrible issues you listed, I wanted to reflect on one part of your comment: “Why would you want to save your shell history ‘in the cloud’, I mean really”

I have a shared history, and I love it. Context, host, timestamp, exit code and all that, easily searchable. I do a lot of trial & error research on throw-away hosts, and having previous history is very useful there.

More to the point: throw-away hosts? What kinds of things are you hosting? My question stems from my working at a web agency: I work on a lot of different hosts, often from several different clients in a single day, but I can’t think of an instance where I’d be doing similar things on different hosts and wanting that history available to use.

(Maybe one case where I’d want this: one of my clients uses EC2 scaling groups. Usually the instance I debug on yesterday doesn’t even exist today. That gets mighty annoying.)

I use throw-away hosts to reproduce issues customers are facing, and once the host is no longer needed, it gets purged from existence. I burn plenty of hosts a week, but the knowledge I gain by doing stuff on them is something I want to keep. I work from Emacs, pretty much all the time, including ssh'ing into the target systems.

I capture all commands, and save them on my workstation, where Emacs runs (this is trivial with eshell and a bit of Emacs Lisp). So I have a single, unified history, with timestamps, output, exit code and whatnot. I have a small script that parses this and pushes it into Elasticsearch, and I can query that from Emacs again to have easy and convenient access to it. I also have a key combo that captures the last command and allows me to tag it, or even turn it into a snippet I can paste later and fill in the blanks. This also gets indexed by ES.
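I don’t know the commenter’s exact schema, but the push step can be as simple as emitting Elasticsearch bulk-format lines (the field names below are invented for illustration; a real setup would POST this body to the `_bulk` endpoint):

```python
import json

# One parsed history entry per dict; field names are made up.
entries = [
    {"ts": "2016-05-01T10:00:00Z", "host": "throwaway-1",
     "cmd": "systemctl status foo", "exit": 3},
]

def to_bulk(entries, index="shell-history"):
    """Build an Elasticsearch bulk body: action line, then source line, per doc."""
    lines = []
    for e in entries:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(e))
    return "\n".join(lines) + "\n"   # bulk bodies must end with a newline

print(to_bulk(entries))
```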

The global history and the capture file are in git, and I clone them whenever I need to work on a different machine: for example, if I work from home, I just git pull the stuff and run a reindex in the background to update ES on my home PC. It would be trivial to put ES behind a VPN, but it is faster if my searches are local.

There are probably better, more efficient ways to set this up, but this has worked remarkably well for me so far.

Have you done much with org-babel? I’ve considered using it for “executable” playbooks, similar in some ways to what you describe, though with your stuff, the creation of playbooks becomes almost trivial, as you can look back on and edit out the right commands…

I was going to say that a poor man’s version could be increasing your history limit to 10,000 and setting up a .bash_profile to do something like git commit + push on your history file on every command.
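Something like the following in `.bash_profile` would be a rough sketch of that (paths and the repo setup are assumptions, and pushing shell history to a remote carries obvious secret-leakage risk):

```shell
# Assumes ~/.bash_history lives inside a git repo with a configured remote.
HISTSIZE=10000
HISTFILESIZE=10000
# After every command: flush history to disk, then commit and push quietly.
PROMPT_COMMAND='history -a; (cd ~ && git add .bash_history \
  && git commit -qm "history: $(date -u +%FT%TZ)" && git push -q) >/dev/null 2>&1'
```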

Phabricator. It’s used successfully by Wikimedia, LLVM, FreeBSD, Blender, and many more communities. A bot to help bridge would be great (e.g. submit a pull request on Github, the bot creates a Phabricator review and directs the submitter there).

I’ve run a small/mid-sized project on here for the past few months, and I’ve been quite happy with it. It does everything I need, except that the primary gitlab.com instance does not allow commenting over email, though this can be enabled for private installs.

IMO, BitBucket is superior to GitHub in every way except CI/CD integration, which I believe they are working on. It’s still possible to at least kick off Jenkins jobs and whatnot, but it’s a bit janky and there is no feedback yet. Otherwise, I find BitBucket to be very well done.

EDIT: I’m responding to the above from a feature/quality perspective. Not based on the xkcd cartoon.

Bitbucket recently got CI status integration. As an Atlassian employee I’ve seen some really cool Bitbucket and CI integration being used internally. I’m sure some of this slickness will be shown using public projects soon.

i use both, and find bitbucket worse in most of the web user experience: no searching, you can’t easily tell sources from forks, and the dashboard shows repos rather than the activity of people you follow as the primary thing (i use this on github a lot).

The two things you mention are two things I basically never use. Most of the repositories I interact with are ones I’m using locally and have in my various tooling already and most of the programming I do is in organizations where forks aren’t really useful at all. BitBucket has robust branch permissions which I make more use of.

The Pull Request system, which is my main use for any tool like this, is significantly superior to GitHub’s for my use cases. It has Reviewers, real Approve buttons, and Tasks, all of which I use a lot. I don’t really care about the social/activity aspect that GitHub is aiming for; I mostly care about a tool around development, which I find BitBucket does a lot better. I also have to use GHE at work, which I find very aggravating.

I used self-hosted gogs for a bit, but ended up returning to github because I missed the social/community features. Sure, they technically exist on gogs too, but who’s going to sign up for my gogs instance just to, say, post an issue, or star/watch it?

One can use cgit and use email for reviews. No need to create an account. Although the barrier to entry may be a little higher, since not many people use git format-patch/am, this is more an issue of familiarity than something inherent to the process. I like it more than github’s pull requests, as it is easier to go back and forth.
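The round trip is short; here is a self-contained sketch of the format-patch/am flow (temp dirs and throwaway identities, with the mail step replaced by a file):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Maintainer's repository
git init -q upstream && cd upstream
git config user.email a@example.com && git config user.name Alice
echo hello > file.txt && git add file.txt && git commit -qm "add file"
cd ..

# Contributor clones, commits, and formats a patch (this is what gets mailed)
git clone -q upstream contrib && cd contrib
git config user.email b@example.com && git config user.name Bob
echo world >> file.txt && git commit -qam "append world"
git format-patch -1 --stdout > ../0001.patch

# Maintainer applies the patch from the mailbox file
cd ../upstream
git am -q < ../0001.patch
tail -n 1 file.txt
```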

For open-source projects with outside contributions/contributors, dead right. For my purposes though gogs is ideal. I’ve been using it for personal projects for a few months. Works well enough that I moved all my private repos from Github onto it and saved myself cash money. Fast, simple and regularly updated, often with nice new features that so far have all seemed pretty well-tested and working. For my v low-complexity requirements, natch. YMMV.

Well, sure, in terms of raw git operations, no reason - but private repos can still have multiple contributors, and even single-contributor projects can benefit from organisational tools like the issue tracker, milestones, wiki for notes, etc. Mostly though I just like the UI, the graphical, easily-click-through-able display of a range of projects at a glance, and the visual diffs are simple and easy to get at. Sure, none of this is anything Github/Bitbucket/etc doesn’t do, but it does all the bits that I need and like, well enough for me, for free, on my server.

I agree that there’s no shortage of OSS GitHub alternatives out there, and most of them work really well.

What kills me is the lack of a hosted free-software alternative to Google Groups. I have a couple projects on librelist.com, but it’s been down for almost a month now, and I haven’t gotten a response about what’s up. Hosting your own mailing list is really easy to screw up.

I see no one has mentioned Launchpad yet. Launchpad supports git repositories now, and they’re improving it steadily. The Launchpad blog has info on their progress.

Keep in mind that I work for Canonical, who started Launchpad and who employ everyone I know of who works on Launchpad development (I’m not really up on who’s doing what, though). There are other organizations who use LP, e.g. Openstack.

My own opinions of LP are mixed. I like it, and I used it heavily for a couple of years, but eventually moved to git, and moved off to mostly use GitHub, back before LP added git support.

LP’s bug tracking is more featureful than github’s issues. There are lots of other features that may or may not be useful, such as PPAs, translation support, blueprints, etc etc.

i’m spending a lot of my free time figuring out how to deploy a yesod thing to some docker image; it’s proving more annoying than i anticipated, but at least i might get an article about how to do it out, after i’m done!
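a rough sketch of the direction i’ve been circling (image names, paths, and the binary name are guesses, not a tested recipe): build the binary with stack in one stage, then copy it into a slim runtime image:

```dockerfile
# build stage: compile the yesod app with stack (image tag is a guess)
FROM haskell:8 AS build
WORKDIR /app
COPY . .
RUN stack setup && stack build --copy-bins --local-bin-path /app/bin

# runtime stage: just the binary, runtime libs, and static assets
FROM debian:stable-slim
RUN apt-get update && apt-get install -y libgmp10 ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /app/bin/my-yesod-app /usr/local/bin/my-yesod-app
COPY --from=build /app/static /app/static
WORKDIR /app
EXPOSE 3000
CMD ["my-yesod-app"]
```

libgmp is the usual runtime dependency GHC-built binaries trip over in minimal images.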

My category theory is shaky, but a common problem in merge resolution is the following: Alice adds a line. Bob adds a line (the same line). Sometimes the correct resolution is to add that line once. Sometimes it is to add both lines. Sometimes it is to add a single, different line. How does the theory handle this?
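A concrete version of the ambiguity, with patches modeled naively as line insertions (a toy model, not the article’s formalism):

```python
# Toy model: a patch appends a line to a file (a list of lines).
base = ["a", "b"]

alice = base + ["new line"]   # Alice's version
bob   = base + ["new line"]   # Bob's version: the very same insertion

# Three resolutions, all "correct" depending on intent:
merge_once  = base + ["new line"]                 # they meant the same change
merge_both  = base + ["new line", "new line"]     # independent changes collide
merge_other = base + ["different line"]           # a human rewrites both

# No function of (base, alice, bob) alone can choose among these,
# because the inputs are identical in all three scenarios.
print(alice == bob)   # True: the ambiguity is invisible to the merge tool
```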

it doesn’t; i guess this is the point jordi is making above (about exploring the actual patch-resolution mechanisms): this detail is lost in the bit where the author makes patches a monoid without defining how patches should compose. which seems like a crazy hack to me.