But I think one shouldn’t try to “force” one editor to behave like another. I use vi/vim on servers, and it would be chaotic and counter-productive to try to make it emulate my emacs setup. If you have a genuine interest in using Emacs, try either to understand or accept the way it suggests you do things – there’s probably a reason the hackers behind it made some behavior the default (even if it was just a simple compromise that you’re free to change!).

Yeah, I’m aware of that. indent-tabs-mode is certainly part of the problem; replacing runs of 8 spaces with \x9 characters is certainly a stupid default to be immediately killed :) But my question was about the <Tab> keycode rather than the character that gets displayed.

I’m definitely not trying to turn Emacs into Vim. I’m willing to leave all my Vim packages behind for new equivalents, and even learn the default chorded keybindings. But switching <Tab> from “indent selection” to “insert spaces to next multiple of 8” seems like such a trivial bit of configuration!

Imagine I’m trying to edit a file, and I hit <Tab> and it does nothing. It feels like a heavyweight operation to figure out what mode and minor mode I’m in and debug my configuration to get smart indentation to work for the right minor mode just so I can get back the ability to press a specific key. I just want to insert a few spaces. I’m happy to add an elisp fragment into my .emacs to get back that ability, but I’m not going to go learn about minor modes and whatnot. At this point I’m just not ready for that kind of commitment :)

This is by the same guy who does the delightful Pepper & Carrot webcomic (and the cat here in these illustrations looks very much like Carrot). There’s a long wait between episodes, but they’re all gorgeous.

This article doesn’t list any true “hacks”, just very standard stuff (which you should definitely know).

Here’s a much more interesting page, imo. “Round up to the next highest power of 2 by float casting” is quite delightful. The FastInvSqrt trick and creating tiny ELF files might count as “low level bit hacks” as well.
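For a flavor of what’s on that page, the non-float version of the power-of-2 round-up is the classic shift-and-OR bit smear. A minimal sketch in shell arithmetic (assumes 32-bit values and v > 1):

```shell
# Round v up to the next power of two: subtract 1, smear the highest set
# bit into all lower positions, then add 1 back.
v=300
v=$((v - 1))            # so an exact power of two maps to itself
v=$((v | v >> 1))
v=$((v | v >> 2))
v=$((v | v >> 4))
v=$((v | v >> 8))
v=$((v | v >> 16))
v=$((v + 1))
echo "$v"               # 512
```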

Here’s one of my own: y[i] = u[i] + f_c * (y[i-1] - u[i]) is a lowpass filter (f_c sets the cutoff), which can be implemented using fixed-point maths quite easily, and it’s much lighter than the standard biquad filter. Though it’s only a first-order IIR filter, so the quality might be poorer.
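Since the whole point is that it works in fixed point, here’s a minimal sketch of the same one-pole filter in integer-only shell arithmetic, rearranged as y[i] = y[i-1] + a*(u[i] - y[i-1]) with a = 1 - f_c in Q15 (the coefficient value is illustrative, not tuned to any particular cutoff):

```shell
# One-pole lowpass in Q15 fixed point: y += (A * (u - y)) >> 15,
# where A = round((1 - f_c) * 32768); A=3277 corresponds to f_c ≈ 0.9.
A=3277
y=0
for u in 32768 32768 32768 32768; do  # full-scale step input
  y=$(( y + ((A * (u - y)) >> 15) ))
  echo "$y"                           # rises toward 32768: 3277 6226 8880 11268
done
```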

Was surprised to find two apps updated this morning. Thought I might have accidentally hit the go-ahead on the notification from the play store while silencing the alarm. Auto-update remains off, however.

Suppose I’ve isolated an issue to this bug fix commit. In what version of gRPC was that commit released?

GitHub tells you it’s on the v1.8.x branch, so if you head over to the v1.8.x branch, you can see it landed after v1.8.5, so it must have been released in v1.8.6. Easy enough, right?
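For what it’s worth, git can answer that part directly with git tag --contains (and git branch -r --contains for branches). A self-contained sketch in a throwaway repo, with made-up commits and tag names mirroring the gRPC example:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
commit() { git -c user.name=x -c user.email=x@x commit -q --allow-empty -m "$1"; }
commit "some earlier work"
git tag v1.8.5
commit "bug fix"
fix=$(git rev-parse HEAD)
git tag v1.8.6
# Which tags contain the fix commit? Only v1.8.6, so that's the first release with it:
git tag --contains "$fix"
```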

Well that’s not the whole story. That commit was also cherry-picked over to the v1.9.x branch here, because v1.9.x was branched before the bug was fixed.

Besides, that was silly to begin with. Why did you have to go to the v1.8.x branch and then manually search for it? Why couldn’t it just tell you when it got merged? That would have been nice.

Many projects maintain many release branches. Some just backport bug fixes to older releases, some have more significant changes. Sometimes a bug fix only applies to a range of older releases. Do you want to track all that with no notion of descendants? It’s not fun.

Even just looking at pull requests, it would be nice to see whether a pull request eventually got merged in or not, what release it got merged into, and so on. That’s all history too.

So no, you can’t always see the past in git. You can only see the direct lineage of your current branch.

I used to find this hella handy at Fog Creek, especially for quickly answering which bug fixes were in which custom branch for some particular client. We actually made a little GUI out of it, it was so helpful.

(Interestingly, while Kiln supports that in Git too, it at least used to do so by cheating: it looked up the Mercurial SHAs in the Harmony conversion table, asked Mercurial for the descendants, and then converted those commits back to their Git equivalents. Because Harmony is now turned off, I assume either they’ve changed how this works, or no longer ship the Electric DAG, but it was cool at the time.)

That doesn’t address the cherry-picking case though. I’m not aware of any built-in tooling for that. Generally Git avoids relying on metadata for things that can be inferred from the data (with file renames being the poster child of the principle), so I’m not surprised that cherry-picks like this aren’t tracked directly. Theoretically they could be inferred (i.e. it’s “just” a matter of someone building the tooling), but I’m not sure that’s doable with a practical amount of computation. (There are other operations Git elects not to try to be fast at (the poster child being blame), but many of them still end up not being impractically slow to use.)
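To be fair, git can infer patch equivalence on demand from patch IDs (a hash of a commit’s diff); that’s what git cherry and git log --cherry-mark use to spot cherry-picks even though nothing ties the two commits together. A throwaway-repo sketch (branch names mirror the gRPC example; git init -b assumes git ≥ 2.28):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
G() { git -c user.name=x -c user.email=x@x "$@"; }
G commit -q --allow-empty -m "base"
G checkout -q -b v1.8.x
echo fix > f.txt
git add f.txt
G commit -q -m "bug fix"
G checkout -q -b v1.9.x main
G cherry-pick -x v1.8.x >/dev/null   # -x also records the source sha in the message
# Commits with an equivalent patch on the other side are marked '=':
git log --oneline --left-right --cherry-mark v1.8.x...v1.9.x
```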

Does Fossil track cherry-picks like this though? So that they’d show up as descendants? In git the cherry-picked commit technically has nothing to do with the original, but maybe Fossil does this better. (It’s always bothered me that git doesn’t track stuff like this - Mercurial has Changeset Evolution which has always looked suuuuper nice to me.)

This is actually illustrative of the main reason I dislike mercurial. In git there are a gajillion low level commands to manipulate commits. But that’s all it is, commits. Give me a desired end state, and I can get there one way or another. But with mercurial there’s all this different stuff, and you need python plugins and config files for python plugins in order to do what you need to do. I feel like git rewards me for understanding the system and mercurial rewards me for understanding the plugin ecosystem.

Maybe I’m off base, but “you need this plugin” has always turned me away from tools. To me it sounds like “this tool isn’t flexible enough to do what you want to do.”

Right but how much manipulation of grafts can you do without a plugin? I assume you can do all the basic things like create them, list them, but what if I wanted to restructure them in some way? Can you do arbitrary restructuring without plugins?

Like this “extras” field, how much stuff goes in that? And how much of it do I have to know about if I want to restructure my repository without breaking it? Is it enough that I need a plugin to make sure I don’t break anything?

In fairness, I haven’t looked at mercurial much since 2015. Back then the answer was either “we don’t rewrite history” or “you can do that with this plugin.”

But I want to rewrite history. I want to mix and blend stuff I have in my local repo however I want before I ultimately squash away the mess I’ve created into the commit I’ll actually push. That’s crazy useful to me. Apparently you can do it with mercurial—with an extension called queues.

I’m okay with limited behavior on the upstream server, that’s fine. I just want to treat my working copy as my working copy and not a perfect clone of the central authority. For example, I don’t mind using svn at all, because with git-svn I can do all the stuff I would normally do and push it up to svn when I’m done. No problem.

And I admit that I’m not exactly the common case. Which is why I doubt mercurial will ever support me: mercurial is a version control system, not a repository editor.

For the past several years, as well as in the current release, you still have to enable an extension (or up to two) to edit history. To get the equivalent of Git, you would need the following two lines in ~/.hgrc or %APPDATA%\Mercurial.ini:

[extensions]
rebase=
histedit=

These correspond to turning on rebase and rebase -i, respectively. But that’s it; nothing to install, just two features to enable. I believe this was the same back in 2015, but I’d have to double-check; certainly these two extensions are all you’ve needed for a long time, and they have shipped with Hg for a long time.

That said, that’s genuinely, truly it. Grafts aren’t something different from other commits; they’re just commits with some extra metadata. Git does something similar: git cherry-pick -x appends a “(cherry picked from commit <sha>)” line to the commit message, and the raw commit (git cat-file commit <sha>) explicitly exposes the author versus the committer. That’s the same kind of thing going on here in Mercurial.

And having taught people Git since 2008, oh boy am I glad those two extra settings are required. I have as recently as two months ago had to ask everyone to please let me sit in silence while I tried to undo the result of someone new to Git doing a rebase that picked up some commits twice and others that shouldn’t have gone out, and then pushing to production. In Mercurial, the default commands do not allow you to shoot your foot off; that situation couldn’t have happened. And for experienced users, who I’ve noticed tend to already have elaborate .gitconfigs anyway, asking you to add two lines to a config file before using the danger tools really oughtn’t be that onerous. (And I know you’re up for that, because you mention using git-svn later in this thread, which is definitely not something that Just Works in two seconds with your average Subversion repository.)

It’s fine if you want to rewrite history. Mercurial does and has let you do that for a very long time. It does not let you do so without adding up to three lines to one configuration file one time. You and I can disagree on whether it should require you to do that, but the idea that these three lines are somehow The Reason Not to Use Mercurial has always struck me as genuinely bizarre.

Right but how much manipulation of grafts can you do without a plugin?

A graft isn’t a separate type of object in Mercurial. It’s a built-in command (not an extension or plugin) which creates a regular commit annotated with some metadata recording where it came from. Once the commit is created, it can be dealt with like any other commit.

And how much of it do I have to know about if I want to restructure my repository without breaking it?

Nothing. Mercurial isn’t Git. You don’t need to know the implementation inside-out before you’re able to use it effectively. Should you need to accomplish low-level tasks, you can use Mercurial’s API, which, like most properly designed software, hides implementation details.

But I want to rewrite history. (…) Apparently you can do it with mercurial—with an extension called queues.

It’s also worth noting that mercurial’s extension system is there for advanced, built-in features like history editing. Out of the box, git exposes rebase, which is fine, but that hands a huge potential footgun to an inexperienced user.

The Mercurial developers decided to make advanced features like history editing opt-in. However, these features are still part of core mercurial and are developed and tested as such.
This includes commands like “hg rebase” and “hg histedit” (which is similar to git’s “rebase -i”).

The expectation is that you will want to customize mercurial a bit for your needs and desires. And as a tool that manages text files, it expects you to be ok with managing text files for configuration and customization. You might think that needing to customize a tool you use every day to get the most out of it to be onerous, but the reward mercurial gets with this approach is that new and inexperienced users avoid confusion and breakage from possibly dangerous operations like history editing.

Some experimental features (like changeset evolution, narrow clones and sparse clones) are only available as externally developed extensions. Some, like changeset evolution, are pretty commonly used; however, I think the mercurial devs have done a good job recently of trying to upstream as much useful stuff from the ecosystem into core mercurial itself. Changeset evolution is being integrated right now and will be a built-in feature in a few releases (hopefully).

I never could get into these jump tools. zsh with auto_cd and cdpath is good enough for me. I just type dotf<Tab><Enter>, it completes to dotfiles/ and cds into ~/src/github.com/myfreeweb/dotfiles (because that …myfreeweb/ directory is on the cdpath).
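For anyone wanting to replicate that, the zsh side of it is just two lines of config (paths taken from the example above):

```
# ~/.zshrc
setopt auto_cd                        # a bare directory name cd's into it
cdpath=(~/src/github.com/myfreeweb)   # completion and cd also resolve names from here
```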

Another really useful thing about zsh autocompletion is that it matches on any portion of the directory name. So, for example, when I had multiple projects like project-server, project-hq, and project-mobile, I only needed to type e.g. -mob<Tab> and it would autocomplete to project-mobile. It was very convenient because I switched between these three directories all the time.

Autojump utilities, such as fasd and pazi (mine), do that as well. Some of them (fasd, not pazi) even have shell autocomplete too so z hq would go to project-hq if that were frecent.

One other cool feature they have which tab-completion sorta mimics is picking from items in a list. With those, z -i project would give an interactive menu of all items with “project”, while with tab completion, double-mashing tab does a similar thing.

If auto_cd is good enough for you, that’s totally fine; I just want to point out that autojump utilities also have tab completion and partial matching of path parts.

I think it comes down to this: if someone’s reading your code, they’re trying to fix a bug or otherwise trying to understand what it’s doing. Oddly, a single, large file of spaghetti code, the antithesis of everything we as developers strive to do, can often be easier to understand than finely crafted object-oriented systems. I find I would much rather trace through a single source file than sift through files and directories of the interfaces, abstract classes, and factories of the sort many architect nowadays. Maybe I have been in Java land for too long?

Anyways, I think you hit the nail on the head: if I’m reading somebody’s code, I’m probably trying to fix something.

Leaving all of the guts out semi-neatly arranged and with obvious toolmarks (say, copy and pasted blocks, little comments saying what is up if nonobvious, straightforward language constructs instead of clever library usage) makes life a lot easier.

It’s kind of like working on old cars or industrial equipment: things are larger and messier, but they’re also built with humans in mind. A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch–this is similar to how new cars are all built with heavy expectation that either robots assemble them or that parts will be thrown out as a unit instead of being repaired in situ.

You two must be incredibly skilled if you can wade through spaghetti code (at least the kind I have encountered in my admittedly meager experience) and prefer it to helper function calls. I very much prefer being able to consider a single small issue in isolation, which is what I tend to use helper functions for.

However, a middle ground does exist, namely using scoping blocks to separate out code that does a single step in a longer algorithm. It has some great advantages: it doesn’t pollute the available names in the surrounding function as badly, and if turned into an inline function can be invoked at different stages in the larger function if need be.

The best example of this I can think of is Jonathan Blow’s Jai language. It allows many incremental differences between “scope delimited block” and “full function”, including a block with arguments that can’t implicitly access variables outside of the block. It sounds like a great solution to both the difficulty of finding where a function is declared and the difficulty in thinking about an isolated task at a time.

It’s a skill that becomes easier as you do it, admittedly. When dealing with spaghetti, you only have to be as smart as the person who wrote it, which is usually not very smart :D.

As others have noted, where many fail is too much abstraction, too many layers of indirection. My all time worst experience was 20 method calls deep to find where the code actually did something. And this was not including many meaningless branches that did nothing. I actually wrote them all down on that occasion for proof of the absurdity.

The other thing that kills you when working with others’ code is functions/methods that don’t do what they’re named. I’ve personally wasted many hours debugging because I skipped over the function that mutated data it shouldn’t have, judging from its name. Pro tip: check everything.

Well, I wouldn’t say “incredibly skilled” so much as “stubborn and simple-minded”–at least in my case.

When doing debugging, it’s easiest to step through iterative changes in program state, right? Like, at the end of the day, there is no substitute for single-stepping through program logic and watching the state of memory. That will always get you the ground truth, regardless of assumptions (barring certain weird caching bugs, other weird stuff…).

Helper functions tend to obscure overall code flow since their point is abstraction. For organizing code, for extending things, abstraction is great. But the computer is just advancing a program counter, fiddling with memory or stack, and comparing and branching. When debugging (instead of developing), you need to mimic the computer and step through exactly what it’s doing, and so abstraction is actually a hindrance.

Additionally, people tend to reuse abstractions across unrelated modules (say, for formatting a price or something), and while that is very handy, it does mean that a “fix” in one place can suddenly start breaking things elsewhere, or that instrumentation (ye olde printf debugging) can end up with a bunch of extra noise. One of the first things you see people do for fixes in the wild is duplicate the shared utility function, append a 2 or Fixed or Ex to the function name, and patch and use the new version in the code they’re fixing!

I do agree with you generally, and I don’t mean to imply we should compile everything into one gigantic source file (screw you, JS concatenators!).

I find debugging much easier with short functions than stepping through imperative code. If each function is just 3 lines that make sense in the domain, I can step through those and see which is returning the wrong value, and then I can drop frame and step into that function and repeat, and find the problem really quickly - the function decomposition I already have in my program is effectively doing my bisection for me. Longer functions make that workflow slower, and programming styles that break “drop frame” by modifying some hidden state mean I have to fall back to something much slower.

I absolutely agree with you that when debugging, it boils down to looking and seeing, step by step, what the problem is. I also wasn’t under the impression that you think that helper functions are unnecessary in every case, don’t worry.

However, when debugging, I still prefer helper functions. I think it’s that the name of the function will help me figure out what that code block is supposed to be doing, and then a fix should be more obvious because of that. It also allows narrowing down of an error into a smaller space; if your call to this helper doesn’t give you the right return, then the problem is in the helper, and you just reduced the possible amount of code that could be interacting to create the error; rinse and repeat until you get to the level that the actual problematic code is at.

Sure, a layer of indirection may kick you out of the current context of that function call and perhaps out of the relevant interacting section of the code, but being able to narrow down a problem into “this section of code that is pretty much isolated and is supposed to be performing something, but it’s not” helps me enormously to figure out issues. Of course, this only works if the helper functions are extremely granular, focused, and well named, all of which is infamously difficult to get right. C’est la vie.

Anyways, you can do that with a comment and a block to limit scope, which is why I think that Blow’s idea about adding more scoping features is a brilliant one.

On an unrelated note, the bug fixes where a particular entity is just copied and then a version number or what have you is appended hits way too close to home. I have to deal with that constantly. However, I am struggling to think of a situation where just patching the helper isn’t the correct thing to do. If a function is supposed to do something, and it’s not, why make a copy and fix it there? That makes no sense to me.

It’s a balance. At work, there’s a codebase where the main loop is already five function calls deep, and the actual guts, the code that does the actual work, is another ten function calls deep (and this isn’t Java! It’s C!). I’m serious. The developer loves to hide the implementation of the program from itself (“I’m not distracted by extraneous detail! My code is crystal clear!”). It makes it so much fun to figure out what happens exactly where.

A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch

I’ll add that one of the motivations for improved structure (e.g. functional programming) is to make it easier to do those patches. Especially anything bringing extra modularity or isolation of side effects.

I think it’s a case of OO in theory and OO as dogma. I’ve worked in fairly object oriented codebases where the class structure really was useful in understanding the code, classes had the responsibilities their names implied and those responsibilities pertained to the problem the total system was trying to solve (i.e. no abstract bean factories, no business or OSS effort has ever had a fundamental need for bean factories).

But of course the opposite scenario has been far more common in my experience, endless hierarchies of helpers, factories, delegates, and strategies, pretty much anything and everything to sweep the actual business logic of the program into some remote corner of the code base, wholly detached from its actual application in the system.

I’ve seen bad code with too many small functions and bad code with god functions. I agree that conventional wisdom (especially in the Java community) pushes people towards too many small functions at this point. By the way, John Carmack discusses this in an old email about functional programming stuff.

Another thought: tooling can affect style preferences. When I was doing a lot of Python, I noticed that I could sometimes tell whether someone used IntelliJ (an IDE) or a bare-bones text editor based on how they structured their code. IDE people tended (not an iron law by any means) towards more, smaller files, which I hypothesized was a result of being able to go to a definition more easily. Vim / Emacs people tended instead to lump things into a single file, probably because both editors make scrolling to lines so easy. Relating this back to Java: it’s possible that because everyone (with a few exceptions) in Java land uses heavyweight IDEs (and because Java requires one public class per file), there’s a bias towards smaller files.

Yes, vim also makes it easy to look at different parts of the same buffer at the same time, which makes big files comfortable to use. And vice versa, many small files are manageable, but more cumbersome in vim.

I miss the functionality of looking at different parts of the same file in many IDEs.

Sometimes we break things apart to make them interchangeable, which can make the parts easier to reason about, but can make their role in the whole harder to grok, depending on what methods are used to wire them back together. The more magic in the re-assembly, the harder it will be to understand by looking at application source alone. Tooling can help make up for disconnects foisted on us in the name of flexibility or unit testing.

Sometimes we break things apart simply to name / document individual chunks of code, either because of their position in a longer ordered sequence of steps, or because they deal with a specific sub-set of domain or platform concerns. These breaks are really in response to the limitations of storing source in 1-dimensional strings with (at best) a single hierarchy of files as the organising principle. Ideally we would be able to view units of code in a collection either by their area-of-interest in the business domain (say, customer orders) or platform domain (database serialisation). But with a single hierarchy, and no first-class implementation of tagging or the like, we’re forced to choose one.

Storing our code in files is a vestige of the 20th century. There’s no good reason that code needs to be organized into text files in directories. What we need is a uniform API for exploring the code. Files in a directory hierarchy is merely one possible way to do this. It happens to be a very familiar and widespread one but by no means the only viable one. Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway. We could just store that on disk as a single structured binary file with a library for reading and modifying it.

Yes! There are so many more ways of analysis and presentation possible without the shackles of text files. To give a very simple example, I’d love to be able to substitute function calls with their bodies when looking at a given function - then repeat for the next level if it wasn’t enough etc. Or see the bodies of all the functions which call a given function in a single view, on demand, without jumping between files. Or even just reorder the set of functions I’m looking at. I haven’t encountered any tools that would let me do it.

Some things are possible to implement on top of text files, but I’m pretty sure it’s only a subset, and the implementation is needlessly complicated.

IIRC, the s-expr style that Lisp is written in was originally meant to be the AST-like form used internally. The original plan was to build a more sugared syntax over it, but people got used to writing the s-exprs directly.

Exactly this: some binary representation would presumably be the AST in some form, which lisp s-expressions are, serialized/deserialized to text. Specifically:

It happens to be a very familiar and widespread one but by no means the only viable one.

Xml editors come to mind, providing a tree view of the data, as one possible alternative editor. I personally would not call this viable, certainly not desirable. Perhaps you have in mind other graphical programming environments; I haven’t found any (that I’ve tried) to be usable for real work. Maybe you have something specific in mind? Excel?

Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway

The resulting parse can depend on the environment in many languages. For example the C preprocessor can generate vastly different code depending on how system variables are defined. This is desirable behavior for os/system level programs. The point here is that in at least this case the source actually encodes several different programs or versions of programs, not just one.

My experience with this notion that text is somehow not desirable for programs is colored by using visual environments like Alice, or trying to coerce gui builders into the layout I want. Text really is easier than fighting arbitrary tools. Plus, any non-text representation would have to solve diffing and merging for version control. Tree diffing is a much harder problem than diffing text.

People who decry text would have much more credibility with me, if they addressed these types of issues.

I can take the ACM version more seriously, since it presumably entails some means of enforcing this contract. Without that, this is just… well, a nice expression of good intentions. But, ACM membership isn’t much of a requirement for practicing as a “computing professional”, nowadays.

When being kicked out of the ACM for violating their Code means that your career is effectively over, then we’ll be on par with the other engineering disciplines – doctors and lawyers aside. I think we’ll get there eventually, but it may take quite some time. The professionalization of civil engineering, for example, took many decades of collapsing bridges and the like.

To really be enforceable it’d need more than ACM being able to kick individual computing professionals out; it’d also need ways to effectively enforce it against the employers of computing professionals, who are often ultimately the ones asking employees to do unethical things (there are also “rogue” unethical acts, but I don’t think it’s the biggest part of them). In legally regulated areas of engineering that’s done with laws that make it very bad for employers to pressure or retaliate against engineers doing certain kinds of work. If you’re fired for refusing to do something that violates civil-engineering ethics, you can sue, and the company can also be subject to fines/sanctions. I don’t see a near-term mechanism where someone at Google or Amazon can say “no” to a manager’s request, citing a professional code of ethics, and be legally backed up in doing so, which is what would be needed to give it teeth.

If we’re going to use terms like “good” or “bad” here, it would help to qualify for whom. To the point, “bad” for practitioners who expect to make high wages with little or no formal training, accreditation, or personal responsibility for the consequences of their mistakes (honest or otherwise) may well be “good” for the general public. It can get pretty complicated, especially once you start considering the employers of engineers as ethical agents too.

Yes, I’m generally in favor of professionalization, but I’m not exactly holding my breath. I think it will happen inevitably, if slowly, as a consequence of our field maturing and society realizing how potentially dangerous our work really is.

I agree with your analysis in theory, but I have a near 100 kg objection: me.

I have no formal training. I’m completely self-taught. And I really feel this as a limit and as a pain.
But I’ve found several academics and formally trained developers with very weak understanding of their own field.

And in my professional work it happens even more frequently. I can honestly say that I often meet very incompetent people with both high technical responsibilities and high accreditation from university. And I can also honestly say that several very skilled developers I know are self-taught geeks.

I hope I didn’t give the impression of being in favor of premature professionalization! I completely agree, the field of computing is still too young to have a really meaningful and enforceable code of ethics, because we can’t yet ground such a code in a strong consensus about normative practice. All the talk of “best practices” mostly goes to show this lack of agreement. Even a brief comparison with, say, the International Building Code shows how weak these norms are.

When there is broad and stable academic consensus about safe and unsafe practices in computing, then perhaps a generation later we’ll be able to hold practitioners to a standard of professional conduct. Again, there is a rich history of this kind of thing in the other engineering professions. The details will depend on historical circumstance, but the general trend is pretty clear I think.

But, even then, I don’t see that having a brighter line between amateur and professional programmers should necessarily discourage amateurs. For example, the aircraft manufacturing industry in the US is very highly regulated. But amateurs can build non-commercial aircraft for their own use without being held to any engineering quality standards at all. The risk in home-built aircraft is mostly assumed by the builder-pilot, rather than the public.

This is essentially what happens in civil engineering, and to a lesser (but still extant) extent in mechanical engineering. I don’t have anything but anecdotal evidence to support that being a good thing, but I and other people I know who work in mechanical design generally support it. Having the stakes be that high for corner-cutting means that a professional engineer’s sign-off on something really carries weight.

I know this post will sound really bad no matter how I say it, but I wonder how much of the sexism we’ll see, in the present (unlikely) or the future (more likely), will come from fear rather than misogyny.

Women are becoming a touchy subject and, in today’s world where a trial is decided by the public before it goes to court, a false rape accusation does more damage than the trial itself (at least imo). If I were an employer I’d be worried about female employees, not out of hatred or anything, but because they would hold so much power to screw me over.

I personally don’t care what gender you are, or religion, or species. I even like talking to assholes as long as they have something interesting to say. (Sadly I tend to be a bit of an asshole myself.) But I would still be scared of talking to random women in a context like a conference, because I might say something that puts me in a really bad place. It feels like I would be talking to someone with a loaded gun in my face.

I think the best friends I have are those who made me notice my mistakes instead of assuming the worst of me, while the tech scene today seems more like a witch-hunting marathon to me.

On that subject, why does the world have to work with cues and aggressive stances? Why can’t we be honest with each other? I see it every day: someone above me expects everyone to catch on to her cues, and if they don’t, they’re the bad guys, without the other side ever being told anything.

Most angry tweets and blog posts about this topic are from people who either kept everything in or burst out in anger, and the other side got defensive or responded just as aggressively (kinda to be expected, honestly). I would love to see examples of people who were made aware of their behavior and everything went fine after that.

a false rape accusation does more damage than the trial itself (at least imo).

A genuine rape accusation also does more damage than the trial itself. In both cases, the victim is affected. It’s only how we perceive it that’s different.

I think somewhere along the line communities started to encourage angry reactions as a way of maximising engagement. Somewhere along the line, we forgot to be kind by default, in a way we weren’t offline. I meet people who spend a lot of time in online communities, and you can see (amongst some people) that their online behaviour leaks into their personal offline behaviour, but rarely the other way.

I think the recent furore over Equifax’s CSO having a music degree is a good example of this. Nobody should care about someone’s degree, but a MarketWatch piece designed to provoke angry responses provoked angry responses on the Internet. The Twitter algorithms designed to increase engagement increased engagement, and the Internet went Twitter-crazy.

There has to be a way to use a combo of the tools we use for engagement to promote de-escalation and de-engagement. Deprioritising inflammatory content to make the world a better place is not losing out. It’s winning.

That’s what I really love about lobsters. People may have issues misinterpreting context and social cues here, but generally people are kind to each other.

[Note: Before reading this, readers should probably know I have PTSD from a head injury. The side effects of nervous eyes, mumbly voice, and shaky hands apparently make me look like an easy target for male and female predators alike. I’m like a magnet for assholes, who I usually deal with patiently, dismiss, or stand my ground against. Mostly I ignore them. This issue required special treatment, though, since I was always treated very differently when it was something like this.]

As far as the scenario you’re worried about goes, it’s a real thing that’s happened to me multiple times. Not rape claims, fortunately, but sexual harassment or discrimination. I think I was getting false claims to managers two or three times a year, with dozens making them to me directly as a warning or rebuke but not to my bosses. They just wanted me to worry that they could or would destroy me. Aside from the random ones, it was usually women who wanted a discount on something, wanted to be served ahead of other customers, or (with employees) not wanting to do the task they were given since it was beneath them or “man’s work.” Saying no to any of that was all it took…

However, I was in a service position dealing with thousands of people plus dozens of workers due to high turnover. With all those people, just a few claims a year plus dozens of threats shows how rare this specific kind of bully is. Those that will fully push a false, gender-oriented claim are rare but highly damaging: each claim led people [that didn’t know me well] to assume I was guilty by default since I was male, interrogations by multiple supervisors or managers, and a waiting period for final results where I wondered if I’d lose my job and house with no work reference. Employment gaps on resumes make it harder to get new jobs in the U.S. I got through those thanks to what I think were coworkers’ testimony (mostly women) and managers’ judgment that the good and bad of me they’d seen didn’t match up with the straight-up evil stuff a tiny number of women were claiming.

Quick example: As a team supervisor, I always gave jobs to people in a semi-random way to try to be equal in what people had to do. Some supervisors seemed to cave in if a worker claimed the work was better for another gender, esp labor vs clerical vs people-focused work. When giving an assignment, the most shocking reply I got was from a beautiful, racially-mixed woman who had been a model and so on. A typically-good, funny worker who had a big ego. She said the specific task was a man’s job. I told her “I enforce equality like in the 19th Amendment here: women get equal rights, equal responsibilities.” She gave me a snobby look then said “I didn’t ask for that Amendment. Keep it, get rid of it, I don’t care. (Smirked and gestured about her appearance) I don’t need it. And I’m not doing man’s work.” I was a little stunned but kept insisting. She grudgingly did the job but poorly on purpose to disrupt our workflow. I had to correct that bias in my head where I assumed no woman would ever counter laws or policies giving them equality, outside maybe the religious. I was wrong…

Back to false claims. That they defaulted against males, including other men who got hit with this, maybe for image reasons or just gender bias, led me to change my behavior. Like I do in INFOSEC, I systematically looked for all the types of false claims people made, esp what gave them believability. I then came up with mitigations, even down to how I walk past attractive women on camera or go around them if off-camera. The specific words to use or avoid are important, esp consistency. I was pretty paranoid, but I was supporting a house of five people when lots of layoffs were happening. The methods worked, with a huge drop in threats and claims. Maybe the bullies had fewer superficial actions to use as leverage. So, I kept at it.

This problem is one reason I work on teams with at least two people who are minorities that won’t lie for me. The latter ensures their credibility as witnesses. Main reason I like mixed teams is I like meeting and learning from new kinds of people. :) It’s a nice side benefit, though, that false claims dropped or ceased entirely when I’m on them, for whatever reason. I’m still not sure, given I don’t have enough data on that one. I also push for no-nonsense women, especially older with plenty of experience, to get management roles: (a) since I’ve always promoted women in the workplace on principle and because mixed teams are more interesting; (b) as a side benefit, a woman who’s dealt with and countered bullshit for years will be more likely to dismiss a false claim by a woman. When I finally got a female boss, esp one who fought sexism to get there, the false claims that used to take serious investigation were usually handled in minutes by her. There was just one problem while she was there, with a Hispanic woman… highly attractive with excellent ability to work crowds… who wanted my position and launched a smear campaign. It almost worked, but she had previously tried something on the same manager she needed to convince. Her ego was so strong she didn’t think it would matter, because she’d win her over too. Unbelievable lol. She left in a few months.

So, yeah, I’d probably not go to one of these conferences at all. If I do, I’m bringing at least two women, one non-white, who barely like me but support the cause. If they leave me, I’m either going outside or doing something on my computer/phone against a wall or something. I’m not going to be in there alone at all, given this specific type of bully or claim will likely win by default in such a place. Normally, though, I don’t mind being alone with women if there are witnesses around in a mixed crowd, I’ve gotten to know them (trust them), or they’re one of the personalities that never pull stuff like this. I’ve gotten good at spotting those thanks to the jobs I did working with strangers all day. I get to relax more than you’d think from this comment, though, since the vast majority of females on my team, other teams, and among customers like me or are at least neutral. The risk-reducing behaviors are so habitual after years of doing them I barely notice I’m doing them until I see a post like this.

Not funny note: There was also real sexism and harassment against women, esp from the younger crowd. We had to deal with that, too. On rare occasions, there was some physical assault, and stalkers that required police and other actions to deal with. One of the problems in many organizations is people will say the woman is making it up. Then, justice won’t happen. Our women were honest enough and male assholes brazen enough that we usually knew who was lying. Similarly when the women were bullshitting about harassment. In many other places or in trials, the defense was that the woman might have been making it all up to spite the male. The reason that defense often works is because of the kind of bullies and lies I describe above. I get so pissed about false claims not just since they impacted me but because a steady stream of them in the media is used to prevent justice for real victims. That combination is why I write longer and fight harder on this issue.

a false rape accusation does more damage than the trial itself (at least imo)

In our society, a woman reporting a rape has to deal with a lot of shit from a lot of different people. Stuff like victim blaming, “What did you wear?”, and “Oh, you must’ve been reckless” already makes it very hard for women to report rape when it happens. If anything we should be more concerned with women not reporting rape cases rather than with false reports – especially since the latter is very small compared to the former. Sorry for not providing any sources, I’m on mobile right now.

My favorite part is when you use the phrase “witch hunting” to somehow excuse the fear of women being around.

I could not find a gender-neutral term that carried a similar meaning. This is definitely a fault on my part (my English vocabulary is not that rich), but I was referring to the act of persecution by one or more individuals with the intended result of ruining someone’s life, humiliating them, etc.

Oh, so very little. Do not fear for misogyny; it will be around forever.

What little hope for humanity and its self-improvement you seem to have. I understand the feeling.

My point was not whether misogyny will go away (it won’t), but how much of the perceived misogyny will be out of outright hatred rather than fear of consequences. Someone who doesn’t interact with women will be perceived as misogynistic, but maybe he just wants to stay safe from ending up in a really bad situation? My “gun pointed at your head” analogy still stands. It feels uncomfortable, and you can’t expect people to behave normally in those situations.

You seem to be the exact type of person I’m talking about, all going on the aggressive thinking I’m your worst enemy, not giving me the benefit of the doubt. I personally find it really hard to express my thoughts (it’s not just a language barrier, sadly), and getting attacked like that makes me really demoralized and demotivated to even talk. When I am not allowed to talk my mind without people instantly getting so aggressive at me, how am I supposed to not fear doing it?

I personally find it really hard to express my thoughts (it’s not just a language barrier, sadly), and getting attacked like that makes me really demoralized and demotivated to even talk. When I am not allowed to talk my mind without people instantly getting so aggressive at me, how am I supposed to not fear doing it?

I’m sorry that I sounded aggressive, because I was not trying to be. I’m still not angry, nor replying out of spite or hate. :) I’m not a native English speaker (either?), so it can be that. Oh, and I also never thought of you as my worst enemy.

I could probably hug you right now, seriously, although I’m a little unsure how to understand your analogy that interacting with women is like having a gun pointed at your head.

As far as I can tell, we agree that misogyny will not go away – try to destroy an idea… – but we kinda disagree about how we should deal with it. I am not in a position to lecture anyone on the topic, and deeply nested threads tend to go off-topic easily, so I’ll happily reply to your emails if you’d like to.

I hate to link to it, but I think that what best describes my analogy is a scenario like what ESR described. With no proof (even though the source claimed there had been attempts already), either back then or now, that was ruled as “unlikely” at best, but the fact that it doesn’t sound completely ridiculous and could actually be put into action by a malicious group worries me.

I honestly don’t think most women are like that at all, and as you said, this is going a bit off topic.

About “how to deal with it”, I’m not proposing a solution, I do wonder if being more straightforward with people and less “I’ll totally blogpost this unacceptable behavior” would make anything easier though.

For example, the author quotes Berry’s paragraph about not taking anything for granted, yet instantly concludes that the assumption that females are less technical is a big drag for women in tech. What about a little understanding? With so many women in sales and PR positions, the guy might be just tired as hell of having to deal with marketers (although the CTO title should have spoken for itself.)

Both literal witch hunts and the more recent metaphorical sense were frequently directed at men. The notion that “witch” is female is an ahistorical modern one and simply not part of what the word means in the context of a “witch hunt”.

The people arrested during the Salem Witch Trials (around 150, in 1692–93) and killed (24: 20 executed, 4 who died in jail) weren’t all women. A cursory scan of the accused shows plenty of male names (although it does seem to bias towards women).

The post content here is a man relating his experience of seeing his cofounder get talked over and ignored because she is a woman, so you immediately comment about… how bothersome it is that a woman might one day accuse you of sexual assault?

What the actual fuck is wrong with you? You should be thoroughly ashamed of yourself. Delete your account.

What the actual fuck is wrong with you? You should be thoroughly ashamed of yourself. Delete your account.

I generally avoid these topics like the plague, but this is the exact reason why. It’s absolutely appalling to me that anyone thinks this is a good response to any comment ever. If you are trying to persuade people or this person, then you have completely failed in backing up your comments with anything but insults. If you aren’t trying to persuade anyone, then you are just a troll who enjoys yelling at someone who is clearly (based on the other comments in this thread) trying to genuinely learn. You took a teaching moment and made it a display of hatred.

If you are trying to persuade people or this person, then you have completely failed in backing up your comments with anything but insults

This assertion is completely absurd. I’ve been this asshole, been told off and/or beaten up, and learned better. Violent complaint is precisely how signalling to people that their behavior is utterly abhorrent works in society.

I’ve been this asshole, been told off and/or beaten up, and learned better.

I’ll just say that I find this comment immensely more helpful than your previous comment. If you’d like to expound on how specifically you’ve “been this asshole” in the past, and what you’ve learned from the experience I’d wager that’s much more likely to convince Hamcha (and the rest of us) to change their mind and behavior.

I questioned the reason she was ignored and proposed a motivation for which people might fear dealing with women.
I also questioned what would have happened if the guy had put any effort into making the issue clear to the people he’s talking shit about, rather than dropping vague clues before making accusations with circumstantial evidence.

Normal people can have conversations with members of the opposite or same gender without constantly panicking about rape allegations. Do you specifically avoid female waiters at restaurants or cashiers at supermarkets? Is this somehow different from talking to a woman in a nontechnical role? If not, why do you think it is reasonable to pretend a woman who codes is any different? Hell, how on earth can you pretend the possibility of rape allegations is a valid reason to pretend that a person does not exist while in a meeting with multiple participants?

Your regurgitation of sexist crap is shameful. Your lack of thought about it is bewildering. Delete your account.

I posted it sort of to vindicate my comments in The Return of the Hipster PDA thread, and because it actually came up just today on a site I read, The Imaginative Conservative. I decided to find and link the study to avoid discussion devolving into politics, but the original article that led me to the survey may be worth reading for some.

So when I read “The Imaginative Conservative” I had no idea what that could even mean, but I feared the worst (which probably says something about our times). I must say, however, that despite being pretty damn far to the left myself I enjoyed some of the articles.

This is something I really appreciate about my current workplace. We’ve got laptops and insane amounts of compute power around the office, but the unspoken culture is to prefer paper notebooks. It’s nice not to feel out of place going scritch-scritch instead of click-clack during meetings.

(It does get a little odd meeting groups of external people, however.)

Many years ago, when taking my degree in physics, I noticed that for those subjects where I had made longhand notes during lectures I had a much better grasp of the details of the subject immediately afterwards - revision consisted of a quick scan over my notes. In the subjects with handouts, I was having to go over the handouts after the lectures in order to make sure that I understood everything. As a result I made a point of trying to keep notes even during lectures that came with comprehensive handouts.

I’m not sure whether the note taking itself was helpful, or whether it was simply that it absolutely forced me to engage with all of the material during the lectures: Without having to make notes, I could convince myself that I was paying attention when in fact I might have been missing bits and pieces of the material that could significantly affect my comprehension later & mean I had to spend extra time going over material that, had I taken notes, I would have been familiar with already.

Lately, I have found that it is extremely hard for me to think without having pencil and paper; I write down what I want to do, I sketch a few data structures, jot down some pseudocode I think may be relevant, etc. I find that I cannot do the same kind of exploration at a keyboard, even using a great program like org-mode.

I’ve found that Mercurial’s plugin system lets you build any workflow you want straight into source control. I also don’t think Mercurial is dying off, just that Github has really pushed Git up and nobody has tried to do something similar for Mercurial.

That’s good to hear. I use Mercurial on all my personal projects and strongly prefer it to Git, but reading the blog posts and announcements from Atlassian, it’s really felt like the development velocity there has much more been on the Git side of Bitbucket.

I started using Mercurial for work, and have since grown to prefer it over Git. In large part because of its extensibility, but also its ease of use. Mercurial makes more conceptual sense to me and is easy to figure out from the cli/help alone. I rarely ever find myself Googling how to do something.

I still like Git though, and it’s likely better for people who don’t like tinkering with their workflows.

Sort of. It’s not a known issue that BidMerge (note that we’ve shipped BidMerge, which is an improvement over ConsensusMerge as a concept) produces worse results than Git. I really meant it when I said I’d appreciate examples, rather than handwaving. :)

I was using hg pre-3.0 (via Kiln). The problem that BidMerge is intended to solve is the problem which gave us so much trouble. I can’t speak to how well BidMerge would have fixed that, as the company is no longer in business.

It may well have technical advantages, but if you’re working on a project that other people will one day work on, I’d strongly urge you to use git. Being able to use a familiar tool will be far more valuable to other contributors. Look at e.g. Python, which chose Mercurial years ago but has recently decided to migrate to git.

As Boojum already said, use your favourite compressor and be done with it. ;) Usually, you would not decompress and compress the data inside your filter (for instance written in C), but rather use UNIX pipelines:

png2ff < image.png | bzip2 > image.ff.bz2

We just let the bzip2(1) tool do the job for us…

libpng is one big mess. Most people I know just copy-paste code examples from the web. The fun thing here is that even after hours of research, I couldn’t find examples of how to read 16-bit PNGs properly. Given that libpng is also undergoing a huge API change, it generally is not fun to use.

In the end, I just dug through the documentation myself and wrote png2ff, and now it handles 16-bit PNGs just fine.
Feel free to write yourself a small wrapper library or something; however, it only hides the real complexity of the image libraries, effectively making your program very slow in pipelines (but well, at least my code is short! /s).

Besides, for a pipeline, you would not want to convert to and from compressed image formats every time. Separating this into external tools is the only sane solution, also in regard to what the future might bring.

I don’t think you get the point. Using imagemagick, how exactly can I get to the raw pixel data in my program?
Let’s say I have an image x.png and want to invert the colours, or something more complex which I would need to program myself. What exactly should I use? What should I do?

All those image libraries are a pain in the ass to use, so they are not an option (e.g., if you suggested just using libpng in C, hell no!)

Unless you’re doing something exotic and specific to a particular image format, there’s no need to ever read or write the files directly because you can use ImageMagick or some other library. The ImageMagick command line tools are just convenient utilities implemented using the ImageMagick library. That library has bindings to dozens of languages, and most people already have it installed. It gives direct access to raw pixel data, as well as higher level functionality like drawing lines and shapes, filtering, etc.

I just don’t understand creating a new image format to avoid learning an API.

I think that there is value in having very simple, easy to parse image formats; less so for storage than for custom operations. You don’t necessarily want to have to pass raw image data, still less encode and decode from something like PNG at every junction in your pipeline. I take no position on farbfeld, but when working in video, where you do often create a lot of custom one-off manipulation tools, using something trivial to decode without having to worry about implicit typing is really helpful.
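To illustrate how little code a trivially decodable format needs, here is a sketch of the “invert the colours” filter asked about upthread. It assumes farbfeld’s published layout (8-byte “farbfeld” magic, 32-bit big-endian width and height, then 16-bit big-endian R, G, B, A components per pixel); it is my own example, not part of the farbfeld tooling:

```python
import struct

def invert_farbfeld(data: bytes) -> bytes:
    """Invert the colours of an in-memory farbfeld image.

    Layout assumed: 8-byte "farbfeld" magic, 32-bit big-endian width
    and height, then one 16-bit big-endian value per R, G, B, A
    component of each pixel.
    """
    assert data[:8] == b"farbfeld", "not a farbfeld image"
    header, pixels = data[:16], bytearray(data[16:])
    # Each pixel is 8 bytes; invert R, G and B, leave alpha untouched.
    for i in range(0, len(pixels), 8):
        for off in (0, 2, 4):
            v = struct.unpack_from(">H", pixels, i + off)[0]
            struct.pack_into(">H", pixels, i + off, 0xFFFF - v)
    return header + bytes(pixels)
```

Dropped between a decoder and an encoder in a pipeline, a filter like this never has to know anything about PNG, JPEG, or any other container.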

I actually wasn’t getting your point; now I do. Well, thing is, ImageMagick is a massive dependency, and their API is not simple enough for my taste. Use what you prefer, use ImageMagick. :) However, in my opinion it should be much simpler, and you should not “force” your users to install something as heavyweight as ImageMagick.

I still don’t understand how people can routinely put back code into a master branch without ensuring that it even builds correctly, let alone functions as designed. When an accident happens and you break the build, or the software in general, it seems prudent to back the change out of the repository until you can understand it. Whatever the mistake was should be understood, so that you (and your colleagues) can learn from it and avoid it in future.

“I still don’t understand how”: a blameless postmortem is an interesting tool to try to find out. The idea is that things that in hindsight look negligent might have seemed perfectly sensible at the time. Finding out why they seemed sensible might show gaps in tooling or training. (Eg. tests were run but not on exactly the changeset that got merged; tests have been failing for engineer X for weeks but she’s ignoring them because they pass in CI; etc.)

GitHub has a CI-integration “protect this branch” feature: you can configure master so that a PR can only be merged if a particular CI check has passed on the branch being merged.

I agree wholeheartedly that it is important not to lay blame (or worse) for mistakes. Doing a post mortem analysis of mistakes is a crucial part of avoiding repeating the same mistake over and over; to ignore the problem is to become negligent.

If you run the tests on a patch that is not the same as the one you eventually merge, you didn’t really run the test. Discovering that this is true, and understanding that even small changes can result in unanticipated defects is an opportunity to take ownership of, and learn from, a mistake. To continue routinely putting back changes where you did not test the actual patch is, subsequently, negligence.

If the tests routinely fail on your machine (but not on others) and you ignore those failures without understanding the cause: that’s negligence as well. Every engineer in a team is responsible for the quality of the software. This is a core part of avoiding blame games – if everybody owns, analyses, learns from and shares their mistakes, nobody need point upset fingers at others.

Certainly the blameless postmortem idea is only going to work if you do something with the findings. If the kinds of mistakes you’re talking about carry on happening regularly, then yes you have a problem.

People will still make mistakes, though. That’s the nice thing about a tooling solution like that GitHub configuration: a whole class of mistakes simply can’t happen any more.

Where I work, it usually happens due to portability. Someone checks in code that builds fine on their preferred dev platform and assumes it will work on the others. We have an abstraction layer that helps with differences in the system libraries, but things like mistaken capitalization in an #include will work on Windows but not Linux. Conversely the GCC linker is more forgiving about mixing up struct vs. class forward declarations than VS.

Yeah, developing for more than one platform can make it much more tedious to make sure your code is tested before it goes back. If this kind of failure happened more than once or twice, though, I would probably consider adding some kind of #include case check tool to be run out of make check. We do this already for things like code style checks.

You could conceivably make it relatively easy to install the checks as a pre-commit hook in the local clone via some target like make hooks. Pushing code to a feature branch to trigger some kind of CI build/test process before cherry-picking into master could help as well.
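The #include case check mentioned above can be quite small. Here’s a hypothetical sketch (the function name and interface are my own invention, not an existing tool): it walks each path component of the include and compares it as a string against the actual directory listing, so a mismatch is caught even on a case-insensitive filesystem like Windows’ NTFS:

```python
import os

def check_include_case(include_path, search_dirs):
    """Return True if include_path names a file with exactly matching case.

    Each path component is compared as a string against os.listdir()
    output, so "sys/foo.h" fails when the file on disk is "Sys/Foo.h",
    even if the local filesystem would happily open either spelling.
    """
    for base in search_dirs:
        cur, ok = base, True
        for part in include_path.split("/"):
            try:
                entries = os.listdir(cur)
            except OSError:
                ok = False
                break
            if part not in entries:  # exact, case-sensitive comparison
                ok = False
                break
            cur = os.path.join(cur, part)
        if ok:
            return True
    return False
```

Run over every #include in the tree from make check (or a pre-commit hook), this would flag the Windows-only builds before they ever reach master.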

I had close to a dozen build failures in the space of an hour because someone built live-environment integration tests into the CI test process, and they depended on a dev service that was down. “Fixing” the build entailed rerunning it unchanged once the depended-upon service had been restarted. It has always been my experience that broken CI builds are due to unforeseeable problems or circumstances outside the developer’s control, not a lack of due diligence on the developer’s part; so these “build-breaker shaming” devices seem incredibly counterproductive to me.

I had it happen when I changed to a job where I had to use a different IDE from the one I was used to. I was used to making the kind of change that would show up immediately as a failure in my IDE if it was incorrect; if not, I would habitually commit to master, confident that it would work. Running a command-line build or unit tests was simply not justified in terms of the cost given the level of confidence I tended to have in such a change. With the new IDE my confidence was entirely misplaced and I broke a lot of master builds until I adjusted.

A related cool ido-find-file trick that I stumbled over a while back was $ENV/. At work we use environment variables to switch between builds, and I have a few others that I use as bookmarks to shorten paths. I was quite pleasantly surprised the day that I absent-mindedly typed one into ido and saw it get expanded.

More seriously, I take issue with all the dodging around about the various definitions and meanings of engineer and silliness about things like why graphics designers and hedge fund-managers aren’t engineers. But then there’s this line:

An engineer is a professional who designs, builds, and maintains systems.

By that definition, yes, I am a software engineer. I design software systems, build (implement) them, and maintain them.

I do think that one of the things going for most professional engineers is that they have the collective clout to push back and make sure they have enough time to do a solid job. If I tried to hire a civil engineer to design a suspension bridge for me but only gave them a week to do it, I’d be laughed out their office. No engineer would agree to that. Yet, too many people fail to see how absurd it is to ask the equivalent of a software practitioner.

That “It’s really pretty simple, just think of …” tooltip totally kills me. I wonder why git seems so obvious in retrospect, but so daunting to the newcomer. Is it the unpredictable commands?

Kind of reminds me of learning to pay attention to state in codebases. Once you catch on to the unwieldiness of variables with too wide a scope (e.g. globals), it leaps off the page in less well-written code.

If you were going to do a 10 minute git intro focusing only on the data structure used by git and none of the arbitrary command interface, what would you cover?

That’s really nearly everything. Then to actually use it, you map manipulation of the above into commands, which is not as straightforward as it perhaps should be but is also not as difficult as I think people make it out to be (commit, reset, branch and rebase hit probably more than 95% of uses).

I would call it the “staging area” (just seems less confusing to me) and while it’s essential to use, from the user perspective it’s mostly a technical detail of git commit. Advanced use of the staging area, while very powerful, is not integral to advanced use of git.
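To make the “technical detail of git commit” point concrete: the staging area is what lets you commit less than your whole working tree. A small sketch in a throwaway repo (paths and identity are made up):

```shell
#!/bin/sh
# Sketch: the staging area sits between the working tree and HEAD.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
printf 'one\n' > file
git add file; git commit -q -m "initial"

printf 'one\ntwo\n' > file
git add file                        # stage THIS version of the file
printf 'one\ntwo\nthree\n' > file   # keep editing; the stage is unchanged

git diff --cached --stat            # staged change: the "two" line
git diff --stat                     # unstaged change: the "three" line
git commit -q -m "add two"          # commits only what was staged

grep -c '' file                     # working tree: 3 lines
git show HEAD:file | grep -c ''     # the commit: only 2 lines
```

Day to day, though, `git add`-then-`commit` hides all of this, which is the sense in which it’s mostly an implementation detail.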

(As an aside, the lack of a staging area is one of the things I really miss when using Mercurial, especially since it gets a half-assed accidental partial implementation by not automatically tracking new files anyway.)

The commands are certainly the stumbling block in my case. I get along fine with Mercurial, and I used Monotone before that. I have no problems conceptualizing a DAG. But the Git command line just doesn’t agree with my tastes.

Emacs is weird for modern users, but it’s consistent with itself and natural for the early 1980s. A frame is always a frame, a buffer is always a buffer. This hasn’t changed in over three decades of Emacs' existence. The world around Emacs changes, but Emacs fundamentals remain immutable.

Git is not consistent with itself. Over the years, they couldn’t make up their mind whether it should be called a cache, an index, or a staging area. Thus, there are vestigial remnants of --cached and --staged in git diff, but the manpages usually call it an “index”. They couldn’t decide if interactivity should be done with --patch or --interactive, so git add and git rebase flip-flop between the two. The help system is all over the place, with git -h foo, git help foo, git foo --help, and git foo -h all doing slightly weird and inconsistent things. The famous git koans contain more examples.
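The --cached/--staged duplication is easy to demonstrate: in git diff they are synonyms for exactly the same thing. A quick sketch in a throwaway repo (the demo identity is made up):

```shell
#!/bin/sh
# Sketch: git diff --cached and git diff --staged are synonyms.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo hi > f; git add f; git commit -q -m init
echo bye >> f
git add f                      # put a change in the index

# Both spellings produce byte-identical output:
git diff --cached > a.patch
git diff --staged > b.patch
cmp -s a.patch b.patch && echo "identical"
```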

The git UI is about as intelligently designed as a human body full of appendices, blind spots in the retina, and fragile knees.

I used to use the command-line wrapper Easy Git to fix inconsistencies in Git like the ones you mention. Easy Git changes commands, flags, and documentation to refer to concepts consistently. For example, it has eg stage and eg unstage commands, and extra documentation like eg help topic staging. It also has usability improvements like telling you in eg status when a rebase is in progress.

Sadly, the version of Easy Git on its website has not been updated in a long time, and some commands no longer work with the latest version of Git – most notably eg status. But I’ve just discovered a mirror of a more-recent version that may be usable.

Where it more often rubs me the wrong way is when the problem wasn’t very hard to begin with, but is now annoyingly complex because of git (or sometimes, because of some other infrastructure’s specific way of using git).

The traditional way to send in a smallish patch to an open-source project: edit some files, run ‘diff’, mail the patch. This is pretty easy for me to do. And most projects, unlike the Linux kernel, move pretty slowly and don’t have a ton of concurrent development, so issues with patch-tracking and branches are uncommon.
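That whole workflow fits in a few commands, and the maintainer doesn’t even need git on their end. A sketch with made-up file names (in real life you’d mail fix.patch rather than apply it locally):

```shell
#!/bin/sh
# Sketch: the traditional diff-and-mail patch workflow with diff(1)
# and patch(1). Directory and file names are invented for the demo.
set -e
work=$(mktemp -d); cd "$work"
mkdir -p orig proj
printf 'hello\n' > orig/greet.txt
cp -r orig/. proj/
printf 'hello, world\n' > proj/greet.txt      # the 1-line fix

# Contributor: produce a unified diff (diff exits 1 when files differ).
diff -u orig/greet.txt proj/greet.txt > fix.patch || true

# Maintainer: apply it to their own copy with patch(1);
# -p1 strips the leading orig/ and proj/ path components.
cp -r orig maint
(cd maint && patch -p1 < ../fix.patch) >/dev/null
```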

But now many projects (even those slow-moving ones) want contributions through git, which requires a whole song-and-dance around cloning repositories and upstream masters and local branches and pull requests… all for a 5-line patch! The song-and-dance gets even worse with the project-management stuff people have put on top of it. Github isn’t that fun to begin with, but hardly the worst… my least favorite experience was probably making a 2-line bugfix to Mediawiki, which required not only dealing with git’s nonsense, but also with gerrit’s nonsense. I’d estimate I spent about 20 minutes tracking down the bug, 2 minutes fixing it, and an hour figuring out how to submit the patch.

Github seems to recognize the process is really heavyweight for small changes, so they’ve made it possible to edit projects without dealing with git at all, if you’re doing very small changes: just click “edit” in the web interface, and the whole clone/branch/commit/pull-request circus is hidden behind the scenes. I’ve used this a few times and it was a much nicer experience than the verbose way of doing it. Maybe they’ll expand this to handle more cases, which would mitigate the issue.

IMO, the format-patch workflow is under-appreciated. It requires no more infrastructure than a channel of communication and already integrates well with git (as one should expect, since Linux works that way and git was explicitly written to serve Linux’s needs).
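As a sketch of that workflow, here are both ends of it between two local clones, with no hosting infrastructure in between (repo names and identities are made up; in real life the .patch file travels by email):

```shell
#!/bin/sh
# Sketch: the format-patch / am workflow between two clones.
set -e
work=$(mktemp -d); cd "$work"
git init -q upstream
(cd upstream
 git config user.email "m@example.com"; git config user.name "Maintainer"
 echo v1 > app.txt; git add app.txt; git commit -q -m "initial release")

git clone -q upstream contributor
(cd contributor
 git config user.email "c@example.com"; git config user.name "Contributor"
 echo v2 > app.txt
 git commit -q -am "fix the bug"
 # Turn the commit into a mailable patch file:
 git format-patch -1 -o ../outbox >/dev/null)

# Maintainer applies it, preserving authorship and the commit message:
(cd upstream && git am -q ../outbox/*.patch)
```

The patch file carries the author, date, and message along with the diff, so nothing is lost relative to a pull request.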