Fine, it was closer to 3 minutes each. I thought it was a good idea to archive the xkcraftia backups in a git repository (since it's the default version control these days), thinking "oh, git can handle binary blobs well enough". Alas, it takes significantly longer to perform a commit than to copy the whole directory, and the commit is significantly larger than storing a copy. Note that these were commits for which nearly everything changed (especially after the update to Minecraft 1.13, in which every single piece of the world was repackaged).

Now borg was only twice as fast at committing a backup, but it did manage to de-duplicate a bunch of data, so the end result is slightly smaller than separate backups. Of course today's backup will be 'small' in both borg and git because less than a tenth of the files have changed, but now I've got borg set up, so screw (g)it.

ECMAScript is starting to look like a proper language what with arrow functions, destructuring, imports and template literals. Now all it needs is a super-strict mode (no side-effects allowed), a good type system and a lazy runtime.
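A quick sketch of those features together (all names here are made up for illustration):

```javascript
// import { fetchUser } from './api.js'; // hypothetical module import

const user = { name: 'Flumble', langs: ['js', 'haskell'] };
const { name, langs: [favourite] } = user;               // destructuring
const describe = (who, lang) => `${who} likes ${lang}`;  // arrow fn + template literal

describe(name, favourite) // 'Flumble likes js'
```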

Xanthir, since you're the standards guy around here, should I use quotes around object keys? Should I use on<x> rather than addEventListener for all eventing? Should I use semicolons to end statements?

Flumble wrote:Xanthir, since you're the standards guy around here, should I use quotes around object keys?

No, it's just a few unnecessary characters. Unless you're writing JSON, in which case it's required.

Should I use on<x> rather than addEventListener for all eventing?

Feel free to for small examples, but since only one source can use the onFoo attribute at a time, using aEL is always the more reliable method. We've been looking for a while at finally adding a .on() API that makes slightly better choices for events anyway.
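A sketch of the reliability difference (using a bare EventTarget, available in modern browsers and Node 15+; all names are made up):

```javascript
const calls = [];

// addEventListener: independent sources can each register a listener.
const target = new EventTarget();
target.addEventListener('ping', () => calls.push('listener A'));
target.addEventListener('ping', () => calls.push('listener B'));
target.dispatchEvent(new Event('ping')); // both listeners fire

// on<x>-style property: the second assignment silently clobbers the first
// (sketched with a plain object standing in for a DOM element).
const widget = { onping: null };
widget.onping = () => calls.push('first handler');
widget.onping = () => calls.push('second handler');
widget.onping(); // only 'second handler' runs
```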

In my defence, most of the stuff I write unambiguously continues a statement at the end of a line (with a binop or opening brackets) or ends a statement on the next line (with a keyword).

Unrelated thought: I've recently looked into the Assignment Problem and "implemented" (copy+paste another implementation and change the syntax to match the target language) the hungarian method in a new language, because it promised to be O(n^3) and was easy enough to move (and vaguely understand) the code.But the assignment problem can be expressed in a linear program (link because I keep forgetting the specifics) and linear programs can be solved in O(n^2.5) these days. So I wonder if I should just use a general solver in the future.Will a solution always have 1s and 0s for all worker-job pairings? (assuming I throw in some variation in the weights so you don't end up with e.g. two workers with exactly the same weights so putting 0.3 and 0.7 worker on a job is a valid solution)

Using an arrow function means that you can't use 'this' to refer to kmlReq in the callback. Referring to kmlReq directly is going to lead to fun things if you ever decide to load multiple shapefiles and refactor that thing into a loop, or if you're re-using kmlReq for a second request further down the function.
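A minimal sketch of the difference, using a hypothetical `fakeReq` object in place of the real XHR (browsers invoke onload with `this` set to the request):

```javascript
const results = [];
const fakeReq = {
  status: 200,
  onload: null,
  fire() { this.onload.call(this); } // mimics how the browser fires the handler
};

// A classic function expression: `this` is rebound to the request when fired.
fakeReq.onload = function () { results.push(this.status); };
fakeReq.fire(); // pushes 200

// An arrow function: `this` keeps its lexical value (the enclosing scope),
// so `this.status` is not the request's status.
fakeReq.onload = () => { results.push(this?.status); };
fakeReq.fire(); // pushes undefined
```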

Flumble wrote:Xanthir, since you're the standards guy around here, should I use quotes around object keys?

Are you using or planning on using the closure compiler? If you are, then quotes have a special meaning: unquoted property names may be minified, quoted property names may not.

(The ability to mangle property names puts the closure compiler way ahead of any other minifier I know, but the hassle of distinguishing between minifiable and non-minifiable property names is often more work than it's worth.)

Otherwise, I usually omit the quotes because it's less typing and it looks better with my current syntax highlighting when they're not string-coloured.

Flumble wrote:Should I use semicolons to end statements?

To elaborate on Xanthir's answer, automatic semicolon insertion is the devil. Unfortunately, placing semicolons doesn't help against its evils, nor can it be disabled.

It bears repeating that one of the following pieces of code is different than the others. If you can't spot which and why, then you need to install a linter and start placing semicolons today.
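One classic instance of the kind of trap being referred to (a sketch; the original snippets aren't shown here):

```javascript
// ASI inserts a semicolon immediately after `return`, so the object
// literal on the next line becomes unreachable dead code:
function broken() {
  return
  { value: 42 } // parsed as a block with a label, never executed
}

// Opening-bracket-on-the-same-line avoids the problem entirely:
function works() {
  return {
    value: 42
  }
}

broken()       // undefined
works().value  // 42
```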

Tub wrote:Using an arrow function means that you can't use 'this' to refer to kmlReq in the callback. Refering to kmlReq is going to lead to fun things if you ever decide to load multiple shapefiles and refactor that thing into a loop, or you're re-using kmlReq for a second request further down the function.

There's no funny business with all the reqs being one and the same reference despite being block-scoped. At least there wasn't with for (const v of [1,2,3]) {const w = v; setTimeout(()=>console.log(w), 50*v)}.

Tub wrote:Otherwise, I usually omit the quotes because it's less typing and it looks better with my current syntax highlighting when they're not string-coloured.

My highlighter (Atom) behaves really weirdly (an identifier is, in order of precedence: green in a const declaration or if it's a single capital; yellow if it's a CSS prop or HTML attr name and has a dot on its left side; orange if it's followed by an opening paren; purple if it has a dot on either side; otherwise grey), so only quotes make keys stand out. But the thing about possible minification sounds like a good enough reason not to use them. It's not like it makes a difference for the parser or makes it more/less valid JSON anyway.

Yeah I came across that too when advocating my non-use of semicolons. Opening-bracket-on-the-same-line styles save the day once more.

Tub wrote:

Flumble wrote:[..]hungarian method in a new language, because it promised to be O(n^3)[..][..]linear programs can be solved in O(n^2.5) these days.

[..]tell me, what does the n refer to in both of these formulas?

Hmm, for the hungarian method n≈max(workers,jobs) ⇒ O(n^3) and an LP encoding is n≈workers*jobs ⇒ O(n^5). Why do you ask? (does binary optimization have a better algorithm than real-valued optimization?)

Immediately after running, you'll get five alerts, all of which say "5". This is because all the functions grabbed the *same* variable binding to i, which was mutated in place by the loop, so at the end i is set to 5 and they all see that.
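Presumably the snippet under discussion looked something like this (a reconstruction, using an array instead of alert() so it runs anywhere):

```javascript
const alerts = [];
const callbacks = [];
for (var i = 0; i < 5; i++) {
  callbacks.push(() => alerts.push(i)); // every closure shares the single `var i`
}
callbacks.forEach(cb => cb());
alerts // [5, 5, 5, 5, 5] -- the loop already bumped i to 5 before any callback ran
```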

The solution to this used to be annoying, but luckily it's pretty trivial these days. If what you're looping over is iterable, you can just use a for-of loop and it works automatically:
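A sketch of the for-of version:

```javascript
const logged = [];
const cbs = [];
for (const i of [0, 1, 2, 3, 4]) {
  cbs.push(() => logged.push(i)); // each iteration gets a fresh `i` binding
}
cbs.forEach(cb => cb());
logged // [0, 1, 2, 3, 4]
```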

Yeah, "var" is the thing that breaks stuff. But *also*, it's not just the let/const keyword doing the work; it's important that the for-of loop is specified to create a particular (non-obvious, non-trivial) set of binding scopes that causes it to work correctly. I wasn't aware that plain-for would do the same scope stuff.

It examines what bindings are formed by the initializer, then generates a succession of nested binding scopes, each of which establishes new variables of the same name, initialized with the value that the previous binding scope's version of the variable had at the end of the loop! *Then* it executes the incrementor on the newly created inner-scope variable.

So the tl;dr is that let/const are not just scoping restrictions: the runtime will actually create a new variable each time program flow encounters a let/const declaration, for capturing purposes. Hence, implementing let/const requires more than resolving naming conflicts, enforcing temporal dead zones and preventing const assignments. It's impossible to emulate let/const using var with a simple compiler pass, and babel'ing some of today's examples exhibits interesting workarounds.

'for' loops have additional logic such that any variable declared in the initialization will also be considered a new variable each loop. Thinking about it, they have to be, if you want something like
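e.g. something like this (a sketch):

```javascript
const captured = [];
for (let i = 0; i < 3; i++) {
  captured.push(() => i); // each closure sees its own per-iteration `i`
}
captured.map(f => f()) // [0, 1, 2], not [3, 3, 3]
```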

Flumble wrote:Shouldn't an object literal be a valid expression statement? Why isn't the parser retrying it as an expression?

I dunno, I prefer the simplicity of "when we see an open brace, if we're expecting a statement, it's a block, if we're expecting an expression, it's an object"... sure, it's still not as clear as if we weren't overloading the characters in the first place, but at least it's simple and consistent.
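The rule in action (a sketch; eval is used here only to force statement vs. expression position):

```javascript
// Statement position: `{ foo: 1 }` is a block containing the labeled
// expression statement `foo: 1`, whose completion value is 1.
const asStatement = eval('{ foo: 1 }');

// Expression position (forced by the parentheses): it's an object literal.
const asExpression = eval('({ foo: 1 })');

asStatement      // 1
asExpression.foo // 1
```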

Flumble wrote:Unrelated thought: I've recently looked into the Assignment Problem and "implemented" (copy+paste another implementation and change the syntax to match the target language) the hungarian method in a new language, because it promised to be O(n^3) and was easy enough to move (and vaguely understand) the code.But the assignment problem can be expressed in a linear program (link because I keep forgetting the specifics) and linear programs can be solved in O(n^2.5) these days. So I wonder if I should just use a general solver in the future.Will a solution always have 1s and 0s for all worker-job pairings? (assuming I throw in some variation in the weights so you don't end up with e.g. two workers with exactly the same weights so putting 0.3 and 0.7 worker on a job is a valid solution)

The assignment problem, formulated as a linear program in the natural way, is a member of the class for which, barring ties, optimal solutions to the linear program have all integer values. Basically, all relevant submatrices of the constraint matrix have determinants in {0, +1, -1}.
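For reference, the LP in question is presumably the standard relaxation of the assignment problem (with cost $c_{ij}$ for giving worker $i$ job $j$):

```latex
\begin{aligned}
\min \quad & \sum_{i,j} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_j x_{ij} = 1 \quad \forall i, \\
& \sum_i x_{ij} = 1 \quad \forall j, \\
& x_{ij} \ge 0.
\end{aligned}
```

Its constraint matrix is totally unimodular, which is why basic optimal solutions come out integral (all 0/1) even without integrality constraints.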

I have a web page with a bunch of iframes in them (because I think that's the best way to show multiple pages side by side) all on the same domain, but I want to go back and forth in history in each frame independently. Unfortunately all frames push history to a global stack, so window.history.back() goes back in whichever frame last opened a page regardless of which frame made the function call.

Assuming there is a workaround, what's the least hacky way to accomplish it? (StackOverflow is a mess on this subject)

It searches for (non-commented out, not within string, not containing {}) //-style comments in CSS, which would break if that was actually parsed as a comment.

Backstory: I read Xanthir's post on Single Line Comments (//) in CSS a while back, which states that a large problem with adding single-line comments to CSS is that minifiers would cause single-line comments to comment out a whole stylesheet. That seems true enough; it's not a big problem for newly developed sites, which could just update their minifiers, but existing sites might already contain instances of // and break. But it made me wonder: what would happen if you changed the parsing rule for // comments to stop the comment at the first occurrence of { or }, or to revert back to the current parsing of // when a brace is encountered within the comment? It's a terrible hack, even by web standards, but it would make // comments in minified stylesheets mostly harmless, and for non-minified sheets it might be fine because it preserves author intent better.

So I wrote a regex that searches for instances of // that would change behavior, ran it on the 3 million most visited pages from webarchive (takes about 40 seconds using GCP BigQuery), and checked for affected elements on the found pages using Selenium. Turns out, roughly 1% of all sites are affected, and in 5-10% of those cases, it affects front-page styling (mostly minor padding changes).

I imagine that's too much breakage to propose making CSS support such comments, even if the {} parsing rule was palatable... but it was certainly a fun data mining exercise.

There does not seem to be a way for the user to combine keys in a workable way.

Of course it's possible to implement this as a Map of Maps of Maps.... with custom setters and getters, but that just ends up creating way too many Maps, especially for sparse collections.

This could be solved with a single hash table; the JavaScript engine could just combine the individual hashes. It keeps annoying me that the standard didn't include this, and that the language doesn't give me the tools to solve it properly myself.
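A common workaround (a sketch, not a standard API) is to serialize the key tuple into a string. Note that this only works for JSON-serializable keys; keys that should compare by object identity still need nested (Weak)Maps, which is exactly the pain point above.

```javascript
class TupleMap {
  constructor() { this.inner = new Map(); }
  // Collision-free for JSON-able key tuples, useless for object-identity keys.
  #key(keys) { return JSON.stringify(keys); }
  set(keys, value) { this.inner.set(this.#key(keys), value); return this; }
  get(keys) { return this.inner.get(this.#key(keys)); }
  has(keys) { return this.inner.has(this.#key(keys)); }
}

const m = new TupleMap();
m.set([1, 'a', 2], 'hello');
m.get([1, 'a', 2]) // 'hello' -- a fresh array with the same contents works
```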

This is something we're planning to solve, just in a more generic way, by adding simple collection types that are compared by value, rather than by reference. That way you could store a Pair or whatever, and then as long as x and y are the same object, all Pair(x,y) objects are ===.

(It's taking a while because the proposal to do so is serving several masters, and balancing the different desires in a worthwhile fashion is complicated. This is adding a fundamentally new type of object to JS, after all!)

I'm having trouble coming up with consumers (besides Set and WeakMap). Surely there's some user code that'd be prettier with these types, but you can always implement your own comparison function. I can't find anything besides Map/Set where you have to rely on behind-the-scenes equality checks. Would you mind elaborating? This seems like a huge addition, so there must be a good reason for it.

It won't solve my immediate near-term problems, but it's good to know that it's being worked on. For now, I'm looking into porting the number crunching to webassembly, where I can bring my own maps. Progress is slow (this is a hobby project, not a work project), but I'll report back eventually.

Value semantics types make entire categories of error less likely. They also permit a whole pile of optimizations, because identity no longer has to be preserved, just value.

Ie, if you copy a value semantics type, the two copies can be elided -- actually have the same identity, and not actually be two different objects -- without telling the programmer, because there is no way to detect the copy in-language.

Static single assignment optimizations open up, for which there is a pile of research and practical applications in other languages.

Now, you can sometimes do this with reference semantics objects, but it takes *work* to prove that you are permitted to do this (and the cost can blow up to do that). With value semantics the proof is free.

It gets even better when you start passing them to functions. When you read a value semantic value and don't modify it? You don't have to read it later to see if it changed. When you modify a value semantic value? Another value semantic value is guaranteed not to change.

Pass two pairs [a,b] [c,d] to a function that uses reference semantics, and modifying one could modify the other (they could refer to the same object!), and entering any unanalyzed code could result in a separate reference to one of them modifying it (which means you have to repeat any reads done after the code, and cannot cache anything).

But that isn't a JS expert talking, just someone who really really really appreciates value semantics in another language, and imagining how it could help JS compilers.


Before going WebAssembly, I wanted to try boosting performance with WebWorkers.

tl;dr:
- doing the task in the main thread: 2-3 ms
- doing the task in a worker: 20-100 ms

welp.

This is a webgl game. The task is to prepare an object for rendering, and I have >1000 of these objects. They are prioritized by visibility and distance, but those priorities can change every frame. I cannot just post every task to the workers, because it's not possible to revoke messages or to re-order messages worker-side. Thus I need to queue them in the main thread, then dispatch one at a time whenever a worker is free.

The internet told me there's a >1ms latency for messages. To hide those latencies I expected to either use more workers or to implement a queue-depth of 2-3 for each worker so they never idle. But 100ms? I have no idea how to make that work.

If the latency is greater than one frame, then I need to take care not to submit a job twice. As jobs are defined by (object-id, object-generation), I need - wait for it! - another multi-key Map (or Set).

For now, I've ignored that problem and just did some benchmarks on total work done per unit time (even if some of that work is discarded later), trying multiple combinations of worker counts, batch sizes, etc. I was too lazy for rigorous numbers or graphs, but the best I could get was 1-2 jobs per frame using workers, on an 8-core CPU. The single-threaded implementation easily does 3-5 jobs per frame.

Those objects aren't exactly small (~4k each), and neither are the replies (up to 50k each, but often empty), so communication overhead is to be expected. But copying a few K shouldn't be that expensive. The effective bandwidth of that IPC connection is lower than the bandwidth of my internet connection.
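If the cost is in structured cloning, transfer lists can hand over ownership of an ArrayBuffer instead of copying it. A sketch of the detach semantics via structuredClone (needs Node 17+ or a modern browser; in an actual worker the equivalent would be `worker.postMessage(data, [data.buffer])`):

```javascript
const payload = new Float32Array(1024); // ~4 KB standing in for one object's data

// Plain structured clone: a full copy, source still usable afterwards.
const copy = structuredClone(payload);

// Clone with a transfer list: ownership of the buffer moves to the clone.
const moved = structuredClone(payload, { transfer: [payload.buffer] });

payload.buffer.byteLength // 0 -- the sender's buffer is now detached
moved.length              // 1024 -- the receiver got the data without a copy
copy.length               // 1024
```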

The first thing I tried was pulling it through a linear integer solver, which of course didn't work, because it's a byte array so all operations are mod 256. So then I just encoded the code above into SMT statements:

and let Z3 do the magic. (And magic it is, because it's done immediately instead of bruteforcing 256^17 values.) It reports 'sat' and gives values for most symbols, but not all. Does that mean it still has free variables? If so, can I tell it to spew out some candidate values or a system of equations?

[edit] Silly me, of course there are free variables. There's no way s_0_0 can be used for anything, because it's overwritten by s_0_1 in the first assignment. And likewise for a couple of other elements in the array.

[edit 2] Now I do have a question that I won't figure out a minute later: old_arr[4] seems to be forced to 0x7b (123), but I don't understand why, since all the other elements seem happy to change at a whim. (That is, (assert (not (= s_4_0 #x7b))) is unsatisfiable.) It affects arr[5], arr[7], arr[1] and arr[9], but the set of dependent elements is wildly different between any two of them.

Why can't they just figure out a way to statically guarantee undecidable properties without restricting expressivity?


I've talked about the actual problem a few posts earlier: I have a collection containing 3d objects, and I need to iterate over them and sometimes (re-)generate their meshes. The complication arises because mesh-generation sometimes needs (read-only) information about other objects, which requires an immutable reference to the collection during iteration. Note that the collection is not a simple vector, so I cannot avoid the iterator by manually indexing.

Preliminary numbers say that mesh-generation is faster than it was using javascript, but I haven't reached feature parity yet.

For a suitably insane implementation of a slice, these two references might point to the same element, so the compiler will refuse the code. Just in case. The whole approach of mutating one element of a collection using information from another, without copying either, is not going to work. Which kinda sucks, because that's what I need to do.