Is there any way to specify that the current project uses the wasm target, so one could just use cargo build instead of relying on npm? I tried rustup override, but I keep getting an error about the wasm target not being found, even though I just installed it on nightly.

So, yes, you can use cargo build to create the .wasm binary; you just have to supply the --target wasm32-unknown-unknown flag. However, to get the generated JavaScript API glue, you also need to run wasm-bindgen.

The npm run build-* commands just package them both up in one step for convenience.
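Roughly, the two steps look like this (the crate name and paths here are placeholders for your own project):

```shell
# Add the wasm target to the nightly toolchain (once per toolchain).
rustup target add wasm32-unknown-unknown --toolchain nightly

# Build the .wasm binary directly with cargo.
cargo +nightly build --target wasm32-unknown-unknown --release

# Generate the JavaScript glue. The input path and crate name
# (my_crate) are placeholders, not from the original discussion.
wasm-bindgen target/wasm32-unknown-unknown/release/my_crate.wasm --out-dir pkg
```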

If maintaining a popular free and open source software project is producing stress… don’t do it!

Really, just stop. Maintaining it, I mean. Unless you have contractual obligations or it’s a job or something, just tune it all out. Who cares if people have problems. Help if you can, help if it makes you happy, and if it doesn’t, it’s not your problem and just walk away. It’s not worth your unhappiness. If you can, put a big flag that says “I’m not maintaining this, feel free to fork!” and maybe someone else will take it over, but if they don’t, that’s fine too. It’s also fine if you don’t put a flag! No skin off your nose! You don’t owe anything to anyone!

Now I’m gonna grump even more.

I think this wave of blog posts about how to avoid “open source burnout” and so forth might be more of a Github phenomenon. The barrier to entry has been set too low. Back in the day, if you wanted to file a bug report, you had to jump through hoops, and those hoops required reading contributor guidelines and learning how to submit a bug report. Find the mailing list, see which source control they used (if they used source control), see what kind of bug tracker they used (if they used one), figure out what to submit where… Very often, in the process of producing a bug report that would even pass the filters, you would solve the problem yourself, or at the very least produce a very good report that nearly diagnosed the problem.

Now all of this “social coding” is producing a bunch of people who are afraid of putting code out there due to having to deal with the beggar masses.

I totally agree that your own needs are the top priority if you are an OSS provider. Nobody has a divine right to your time.

I do think that having people be able to report bugs easily is really good. For even relatively small projects, this also serves as a bit of a usability forum, with non-maintainers able to chime in and help. This can give the basis for a supportive community so the owner isn’t swamped with things. Many people want to help as well.

Though if this is your “personal project”, then it could be very annoying (I think you can turn off issues in GH luckily?).

Ultimately though, the fact that huge projects used by a bazillion tech companies have funding of around $0 is shameful. Things like Celery, used by almost every major Python shop, do not have the resources to package releases because it’s basically a couple people who spend their time getting yelled at. We desperately need more money in the OSS ecosystem so people can actually build things in a sustainable way without having to suffer all this stress.

Hard to overstate how much a stable paycheck makes things more bearable.

“Back in the day, if you wanted to file a bug report, you had to jump through hoops”

This is where I disagree. Both the maintainer’s and other contributors’ time is valuable. Many folks won’t contribute a bug report or a fix if you put time-wasting obstacles in their path. That goes double if they know the obstacles were put there intentionally. I remember I filed one for Servo on Github just because it was easy to do so. I didn’t have time to spare to do anything but try some critical features and throw a bug report at whatever I found.

I doubt I’m the only one out there who’s more likely to help when it’s easy to do so.

The problem is that projects don’t survive on such drive-by fixes alone. Yes, you fixed a bug and that’s a good thing, but the project would probably still run along just fine without that fix. And you never came back. In the long term, what projects have to care about are interested people who keep coming back. The others really don’t matter that much.

Sure but the question was about how high the bar for such drive-by contributions can be while still keeping a project healthy, based on the premise that making drive-by contributions too easy can result in toxic community behaviour overwhelming active maintainers.

The “height of the contribution bar” as quality control is, in my experience, a myth. Denying low-quality contributions is not.

I’ll explain why: the bar to unfounded complaints and trolling is always very low. If you have an open web form somewhere, someone will mistake it for a garbage bin. And that’s what sucks you down. Dealing with those in an assertive manner gets easier when you have a group.

The bar to attempting a contribution should be as low as possible. You want to make people aware that they can contribute and that they can get started very easily. You will always have to train newcomers: projects have workflows, styles, etc. that people can’t all learn in one go. Mentoring also gets somewhat easier as a group.

Saying “no” to a contribution is hard. Get used to it; no one takes that off you. But it must be done.

Also, there’s a trend of blaming people who voice their frustrations for “not respecting the maintainers”. Pretty often, those complaints have some truth in them. Often, a “you’re right, can we help you fix it yourself?” is better than throwing screenshots around on Twitter.

I agree with you but quality control is, again, a separate question. I wasn’t talking about quality control. The question is about how to best attract only those people with an appropriate kind of behaviour that won’t end up burning out maintainers, and whether a bar to contribution can factor into this.

I think JordiGH’s point was that if someone has to jump through some hoops to even find the right forum of communication (which mailing list and/or bug tracker, etc.), then just by showing up at a place where maintainers will listen, a contributor shows they have spent time and engaged their brain a bit to read the minimum necessary amount of text about how the project and its community work. This can be achieved, for instance, with a landing page that doesn’t directly ask people to submit code at the push of a button, but instead directs them to a document explaining how and where to make contributions.

If instead people can click through a social media website they sign up on only once, and then have their proposed changes to various projects appear in every maintainer’s face right away with minimal effort, because that’s how the site was designed, is it any surprise that mentoring new contributors becomes relatively harder for maintainers? I mean, seriously, blog posts about depressed open source maintainers seem to mostly involve people using such sites.

I’d considered this, but do we really have data proving it? And on projects trying to cast a wide net vs. those that don’t? I could imagine that scenario would be fine for OpenBSD, which aims for quality, while a Ruby library or something might be fine with extra little commits over time.

“Really, just stop. Maintaining it, I mean. Unless you have contractual obligations or it’s a job or something, just tune it all out. Who cares if people have problems. Help if you can, help if it makes you happy, and if it doesn’t, it’s not your problem and just walk away. It’s not worth your unhappiness. If you can, put a big flag that says ‘I’m not maintaining this, feel free to fork!’ and maybe someone else will take it over, but if they don’t, that’s fine too. It’s also fine if you don’t put a flag! No skin off your nose! You don’t owe anything to anyone!”

Totally. In this scenario, you should just quit cold turkey.

The rest of the post is more advice that I’ve found myself giving multiple times to people who do want to keep maintaining the project, or be active in their larger community, but aren’t super focused on that particular library anymore.

There’s a lot of poor communication out there, with unstated assumptions on each side, in all kinds of relationships, not just open source, and that drives a lot of frustration and resentment. There are dozens of books on the subject in the self-help aisle of bookstores. The points in the article are all good advice, but I think the best advice is to make clear on what terms you volunteer your work, and not be ashamed to say “I don’t want to do this, but feel free to do it or fork it” if it’s not scratching your itch.

Personally, I’ve turned away issues resulting from old or bleeding-edge compiler or library releases, and from OSes or equipment I don’t run (doesn’t behave on Windows XP? doesn’t work with a Chinese clone of the hardware? Hell if I know…)

I considered doing that if I got the resources. My idea was to just port the Rust compiler code directly to C or some other language, especially one with a lot of compilers. BASICs and toy Schemes are the easiest if you want diversity in implementation and jurisdiction. Alternatively, a Forth, Small C, Tcl, or Oberon if aiming for something one can homebrew a compiler or interpreter for. As far as certifying compilers, I’d hand-convert it to Clight to use CompCert, or to a low IR of CakeML’s compiler to use that. Then, if the Rust code is correct and the Clight is equivalent, the EXE is likely correct. Aside from the Karger-Thompson attack, CSmith-style testing comparing the output of the reference and CompCert’d compilers could detect problems in the reference compiler where its transformations (esp. optimizations) broke it.

The four-dimensional universe we inhabit has three dimensions of space and one of time. But what would it be like to live in a universe where the roles were divided up more evenly, so that there were two of each: two dimensions of space, and two of time?

But what I really like about Egan is that despite having some of the wildest “hard SF” ideas in SF, he still has really novel social arrangements and characters. In this case, the main characters are a pair of symbiotic organisms, one that can walk around and the other that is immobile but can echolocate for the other. Can you imagine what kind of relationship they might have? Egan actually answers this question really well, and in ways I didn’t see coming.

Egan definitely isn’t for everyone, but if your interest is piqued, then I would wholeheartedly recommend the book (and his earlier novels as well! Diaspora, Permutation City, pretty much all of them) to you.

I’m in the middle of my first project with Rust. It’s a small compiler for a tiny functional language. I recently got the parser (handwritten recursive descent) working. Including tests, the project is currently ~650 LOC. I haven’t written anything significant in Rust outside of this.

Full algebraic data types + pattern matching add so much to compiler code. They provide a very natural way to build and work with ASTs and IRs. I’ve found the ADT experience in Rust to be roughly on par with the ADT experience in OCaml. I will say that pattern-matching on Boxes is a little annoying (though this is probably a product of my inexperience writing Rust). I like having full ADTs much more than templated classes/structs in C++.
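As a small illustration of what ADTs buy you in compiler code (the AST shape here is made up for the example, not taken from the parent’s project):

```rust
// A toy expression AST for a tiny functional language. Recursive
// cases go through Box so the enum has a known size.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Pattern matching makes tree-walking evaluators very direct:
// one arm per constructor, and the compiler checks exhaustiveness.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(l, r) => eval(l) + eval(r),
        Expr::Mul(l, r) => eval(l) * eval(r),
    }
}

fn main() {
    // (1 + 2) * 3
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
        Box::new(Expr::Num(3)),
    );
    assert_eq!(eval(&e), 9);
}
```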

Also, the build system and general environment for Rust has been great. I have a single C++ project that’s nicely set up, and I usually just copy that directory over, rm a bunch of stuff, and start fresh if I need to start something in C++. Getting anything nontrivial working in OCaml is also a huge pain. I believe every time I’ve installed OCaml on a machine, I’ve needed to manually revert to an older version of ocamlfind. Cargo is an incredible tool.

I chose to use Rust because I felt like compilers are a good use case for Rust, plus I wanted to learn it. It’s really nice to have pattern matching + for loops in the same language. (Yes, OCaml technically has for loops as well, but it really doesn’t feel the same. It’s nice to be able to write simple imperative code when you need to.)

This all being said, I’ve had plenty of fights with the borrow checker. I still don’t have a good grasp on how lifetimes + ownership work. I was a bit stuck on how to approximate global variables for the parser, so I had to make everything object-oriented, which was a bit annoying. I would also love love love to be able to destructure Boxes in pattern matching without having to enable an experimental feature (I understand that this can make the pattern matching expensive, as it’s a dereference, but I almost always wind up dereferencing it later in the code).
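For what it’s worth, a common stable-Rust workaround is to dereference the Box in the match scrutinee rather than in the pattern (the nightly-only feature alluded to above is box_patterns). A minimal sketch with a made-up enum:

```rust
enum Expr {
    Num(i64),
    Neg(Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        // Dereference the Box explicitly (&**inner: &Expr) to match
        // on its contents, instead of nightly `box` pattern syntax.
        Expr::Neg(inner) => match &**inner {
            Expr::Num(n) => -*n,
            other => -eval(other),
        },
    }
}

fn main() {
    let e = Expr::Neg(Box::new(Expr::Num(5)));
    assert_eq!(eval(&e), -5);
}
```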

The &'a [u8] in the tuple is the rest of the input that was not consumed while parsing the Parseable<'a>.

Regarding variables that are “global” for the parser (they probably aren’t really “global”, b/c you probably don’t want two threads parsing independent things to stomp on each others' toes…), I would make something like a ParseContext and thread a &mut ParseContext as the first parameter to all the parser functions:
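Something along these lines (the struct name, fields, and parser function are illustrative, since the original code isn’t shown):

```rust
// Illustrative: parser-wide state lives in a context struct instead
// of globals, so independent parses can't stomp on each other.
struct ParseContext {
    errors: Vec<String>,
    next_node_id: u32,
}

impl ParseContext {
    fn new() -> Self {
        ParseContext { errors: Vec::new(), next_node_id: 0 }
    }

    fn fresh_id(&mut self) -> u32 {
        let id = self.next_node_id;
        self.next_node_id += 1;
        id
    }
}

// Every parser function takes `&mut ParseContext` as its first
// parameter, threading the "global" state explicitly.
fn parse_number(ctx: &mut ParseContext, input: &str) -> Option<(u32, i64)> {
    match input.trim().parse::<i64>() {
        Ok(n) => Some((ctx.fresh_id(), n)),
        Err(e) => {
            ctx.errors.push(format!("not a number: {}", e));
            None
        }
    }
}

fn main() {
    let mut ctx = ParseContext::new();
    assert_eq!(parse_number(&mut ctx, "42"), Some((0, 42)));
    assert!(parse_number(&mut ctx, "x").is_none());
    assert_eq!(ctx.errors.len(), 1);
}
```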

Just yesterday I set up Racket again and started going through the Redex tutorial. All I can say is “wow!” This is perhaps the best introduction I’ve had to any software library. The documentation is absolutely fantastic, and everything is going perfectly smoothly.

The way the book is presented, as building a series of small libraries that you then leverage to make a larger, more complex application, is absolutely wonderful. The best-done “practical” book I’ve read.

Even when Electrolysis is finally released into the wild, though, Mozilla will be exceedingly cautious with the ramp-up. At first, e10s will only be enabled for a small portion of Firefox’s 500 million-odd users, just to make sure that everything is working as intended.

Will there be an about:config setting for the rest of us to use if we want e10s?

Yeah, there is a config for it; you can actually do it right now if you want. At first, only users who have no extensions installed will have it enabled by default. If you do have extensions installed and want to try e10s anyway, check this page to see if they are all compatible. If you see “shimmed”, it means the extension should work, but will likely slow things down a lot.

It will be slower than baseline/non-e10s performance. Traditionally, addons in the privileged chrome context could synchronously access JS objects/methods/whatever in content. Those two contexts are now in different processes, so shimming the access patterns some addons used involves blocking on IPC calls.

It’s a pretty faithful port of Haskell’s QuickCheck, and even shares a similar implementation strategy with Arbitrary and Testable traits. (Traits are similar in many respects to Haskell’s typeclasses.)
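The core generate-and-check idea, stripped of the actual crate’s machinery (this hand-rolled sketch is not the quickcheck API, and the PRNG and property are made up for illustration):

```rust
// Illustrative only: a tiny generate-and-check loop in the spirit of
// QuickCheck, using a small deterministic PRNG instead of the real
// crate's Arbitrary/Testable traits.
fn next_rand(state: &mut u64) -> u64 {
    // xorshift64: deterministic and dependency-free, fine for a sketch.
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

// Property under test: reversing a vector twice yields the original.
fn prop_double_reverse(xs: &[u64]) -> bool {
    let mut twice = xs.to_vec();
    twice.reverse();
    twice.reverse();
    twice == xs
}

fn check_property(trials: usize) -> bool {
    let mut seed = 0x9E3779B97F4A7C15u64;
    for _ in 0..trials {
        let len = (next_rand(&mut seed) % 16) as usize;
        let xs: Vec<u64> = (0..len).map(|_| next_rand(&mut seed)).collect();
        if !prop_double_reverse(&xs) {
            return false; // the real QuickCheck would now shrink the input
        }
    }
    true
}

fn main() {
    assert!(check_property(100));
}
```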

Just to make it clear BTW: My general impression of your work is that it is very good. I just haven’t put in enough time and don’t have enough familiarity with Rust (yet!) to properly commit to saying that in the post. I’d like to at some point.

Awesome! We’re possibly making a Rust API for SpiderMonkey’s Debugger API (the only interface is in JS right now, but Servo doesn’t want to support privileged JS), and the JS fuzzer has been incredibly helpful for catching and fixing bugs in the existing interface. My thinking is that to get the equivalent for the Rust interface, we should be using quickcheck.

Working on emulating MESI (the memory cache coherence protocol) in Rust to get a better understanding of how it works. Have it mostly working, but the miss rates reported by my benchmark/exercising code seem to be off or something. For example, my false-sharing test case is way slower than when each cache operates on a unique block (as expected), but despite that it isn’t reporting the higher miss rates I would expect from cache lines being invalidated by other caches’ writes. Need to dig in more.
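For readers unfamiliar with it, the core of MESI is a four-state machine per cache line. A heavily simplified sketch of a few transitions (a real emulator also models bus requests, responses, and write-backs, all omitted here):

```rust
// The four MESI states a cache line can be in.
#[derive(Clone, Copy, PartialEq, Debug)]
enum State {
    Modified,
    Exclusive,
    Shared,
    Invalid,
}

// Local read: a read from Invalid is a miss and fetches the line,
// becoming Exclusive if no other cache holds it, Shared otherwise.
// Reads from M/E/S are hits and change nothing.
fn on_local_read(s: State, others_have_copy: bool) -> State {
    match s {
        State::Invalid => {
            if others_have_copy { State::Shared } else { State::Exclusive }
        }
        hit => hit,
    }
}

// Local write: the line ends up Modified. (From S or I this first
// requires invalidating other caches' copies via the bus.)
fn on_local_write(_s: State) -> State {
    State::Modified
}

// Snooped remote write: any copy this cache holds is invalidated.
// This is exactly the effect a false-sharing test should surface
// as extra misses on the next local access.
fn on_remote_write(_s: State) -> State {
    State::Invalid
}

fn main() {
    assert_eq!(on_local_read(State::Invalid, true), State::Shared);
    assert_eq!(on_local_read(State::Invalid, false), State::Exclusive);
    assert_eq!(on_local_write(State::Shared), State::Modified);
    assert_eq!(on_remote_write(State::Exclusive), State::Invalid);
}
```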

AFAIK, they are the only folks doing concurrent marking (marking happening concurrently with the mutator) and parallel marking (more than one marking thread). SpiderMonkey does much of sweeping concurrently with the mutator, and compaction is done in parallel but not concurrent with the mutator. We don’t do parallel or concurrent marking; we’re looking into concurrent marking as the next big architectural change for the collector.

I work with embedded devices because I figure that’s where the largest future growth for the industry is.

So I am always squeezed for ram / rom / mips and always will be.

Yes, I know Moore’s Law, but in the embedded realm that just means they want to make it physically smaller, cheaper unit cost, longer battery life and doing more stuff.

So I always will be programming “close to the metal”.

Conversely, when a recall costs millions or, worse, a bug can get someone killed… you get pretty paranoid about bugs and testing.

D has a largish list of features that address all sides of the problem.

Features that make the produced code as efficient as possible,

and features that make it as safe as possible,

and things that make the programmer as productive as possible.

All of these things really matter.

I also use Ruby for “Glue and String”: build systems, data mining, global code analysis, one-liners…

Why? The dynamic typing / duck typing allows the code to “just flow” from my fingers, and I build up the code progressively from something that just copies stdin to stdout to something that, with each tiny change, does more and more of what I need, each run having negligible compile/link/run time.

Curiously enough, D’s “auto” keyword, its “generic all the time” approach, and its fast compile times allow me to do the same… but in a type-safe-at-compile-time manner. And it’s way faster.

Thanks for the reply! Your reasons are very similar to the reasons that I am such a big fan of Rust. I’m really happy to see the ongoing resurgence of languages interested in close-to-the-metal performance (D, Rust, Nim). Most of my day-to-day programming is in C++, and I would love to see a more modern alternative with stronger safety guarantees gain widespread popularity. That language may be D, or Rust, or Nim (or some mix of all three), but no matter what, I think it will be a net gain for the world when a variety of basic memory safety problems can be statically eliminated in popular languages without a major performance hit.

Its fate has been clear since the day Mozilla decided to stop supporting Xulrunner. They had a vision of a rich portable application platform that was actually pretty compelling (you can build really cool alternate browsers like Conkeror using only JS on top of the Mozilla runtime) but since they’ve also decided to kill the extension mechanism it feels like any general-purpose functionality that isn’t needed to build their specific vision for Firefox is a casualty.

It’s a shame, because there are loads of people in the community with great ideas that wouldn’t be appropriate for mainline FF but can greatly enhance the browsing for some subset of people. For instance, when I have to use Firefox without the keysnail extension, I feel like I’ve lost twenty-seven IQ points and half my appendages.

I was working on a xulrunner-based open source product (Songbird) at the time. The cancellation was preceded by the kind of neglect we’ve seen of Thunderbird. It sucked to be abandoned, but even at the time I thought Mozilla was right to be focusing on the web platform rather than native cross-platform apps.

Facebook actually hit git’s limit a while back and contributed patches, etc., to Mercurial to work with it. Really interesting stuff. But, stemming from that observation and other experiences, I am a superfan of breaking up repos in DVCS systems. I maintain a Mercurial extension to coordinate many repos in a friendlier fashion than hg subrepos (guestrepo!).

I’m kind of persuaded that DVCS is a smell at a stereotypical company, though; I think there’s room for an excellent central VCS out there.

I think where we’re heading with Mercurial over the long term is a set of tools that makes doing centralized-model development painless with DVCS tools, while retaining most of the benefits (smaller patches, pushing several in a group, etc) of a DVCS workflow. I don’t think it’s a smell at all.

As for splitting repositories, there are definitely cases where it makes sense, but there’s also a huge benefit to having everything be in one giant repository.

(Disclaimer: I work on source control stuff for a big company, with a focus on Mercurial stuff whenever possible.)

FWIW, I use git with mozilla-central and find it a much more pleasing experience than hg (which I still export to when pushing to shared remote repos). That said, it is also what I am more familiar with, although I did use hg exclusively for a year or so.

I really enjoy having everything in the game repo for many reasons, such as the lack of syncing overhead, but it does tend to push the performance limits of version control.