It’s also very effective at keeping Tor out. reCAPTCHA will, more often than not, refuse to even serve a CAPTCHA (or serve an unsolvable one) to Tor users. Then remember that a lot of websites are behind Cloudflare, and Cloudflare uses reCAPTCHA to check users.

For the Cloudflare issue you can install Cloudflare’s Privacy Pass extension, which maintains anonymity but still greatly reduces, or removes entirely, the number of reCAPTCHAs Cloudflare shows you when you’re coming from an IP with a bad reputation, such as many of the Tor exit nodes.

I don’t hate it because it’s hard. I hate it because I think Google lost its moral compass. So the last thing I want to do is be a free annotator for their ML efforts. Unfortunately, I have to be a free annotator anyway, because some non-Google sites use reCAPTCHA.

Indeed. Also annoying is that you have to guess at what the stupid thing is trying to indicate as “cars”. Is it a full image of the car or not? Does the “car” span multiple tiles? Is it obscured in one tile and not in another? Which of those “count”, if so? Should I include all the tiles if, say, the front bumper is in one of them? (My experiments have indicated not.)

Or the storefronts: some don’t have any signage, so they could be storefronts, or not; with that little information, it’s literally unknowable by a human or an AI.

I’m sick of being used as a training set for AI data. This is even more annoying than trying to guess whether the text in question was using Fraktur and whether the ligature in question is what Google thinks is an f or an s. I love getting told I’m wrong by a majority of people who can’t read Fraktur and can’t distinguish an f from an s, or from, say, an italic i or l. Now I get to be told I can’t distinguish a “car” by an image-training algorithm.

I’m a law student from Europe, specifically Germany, so I can’t say anything about the legal situation in the U.S.A. Maybe I should have clarified that. For Germany, emulators operate under the exemption for private copies (§ 53 German Copyright Act, and the related § 44a for the ephemeral copy in RAM).

This, however, assumes that you obtain your emulatable software yourself. It does not cover the purchase of software ripped by anybody other than you. Specifically, § 53 of the German Copyright Act does not permit publishing anything you ripped. There are also some unhealthy paragraphs in the law, which I’d like not to be there, on the prohibition of DRM circumvention: §§ 95a ff. forbid circumvention of DRM (making the private-copy exemption pretty useless for DRM’ed content), culminating in a criminal-law paragraph, § 108b, that penalises circumvention of DRM under certain conditions. I find it cynical that § 95b(1)(Nr. 6)(a) specifically allows DRM circumvention under the premise that your private copy is on paper. That being said, I have no idea whether whatever Nintendo used or did not use on the Pokémon game cartridges counts as DRM or not.

If you did not only rip the software but also modified it, you are probably in breach of another paragraph as well, because § 69c(Nr. 2) makes modification of computer software dependent on the consent of the rights owner (this is different from all other kinds of copyright-protected works, where the modification itself does not require consent, only publishing the modification does). There might be more relevant sections; all of the above is what I thought of off the top of my head.

At least in Europe, I thus conclude that publishing software ripped from cartridges on the Internet is illegal. What about people downloading the software? That’s only illegal if this repository is “clearly illegal” (original wording of § 53). Given my lengthy legal explanation above, I wouldn’t say it’s “clearly” illegal, so users are probably fine. OTOH, since I now gave these explanations, to anyone who made it this far in this post it may now be “clearly illegal”. So you must decide yourself now. The familiar “ALL THE WAREZ FOR FREE!!” site, however, is probably “clearly” illegal.

It’s not ripped/modified software, though. It’s hand-written code which used Pokémon Red as a reference. If you want insight into their reverse-engineering process, look at pokeruby. Right now pokeruby falls into the “clearly illegal” category (since it’s full of raw disassembly), but once it is finished, it will be all hand-written C code.

That’s interesting. I’m sorry that I didn’t immediately understand. In that case, the judicial outcome depends on what you mean by “reference”. The process here appears to have been that the author walked through all the machine code and then produced a programme that does exactly the same as the machine code he looked at. For that matter, he could have written the programme in any other language as well.

Taking something as inspiration is of course not covered by copyright law in any way. If a programme is reproduced in all its instructions and structure, however, I would qualify this as a copy. It’s an interesting issue about which I need to think more deeply. It is a question of the definition of “copy”, then. And if it isn’t a copy, it might still be a “modification”. Both actions are reserved for the rights owner in the case of computer programmes.

On a side note: I haven’t checked, but if the author uses the original Pokémon graphics, then we’re at a copyright infringement there more easily than with the code.

Edit: Decompilation is specifically regulated as well, and usually forbidden (§ 69e). :-)

if the author uses the original Pokémon graphics, then we’re at a copyright infringement there more easily than with the code

I thought the Internet made this part of copyright law essentially meaningless? As far as images go, anyway. Sites like Serebii and Bulbapedia host these images, not to mention all of the screencaps and whatnot that are posted on Twitter/Reddit/4chan/whatever. It would be kind of weird to go after pokered for hosting those sprites when there are tons of other people/businesses who do the same thing.

the judicial outcome depends on what you mean by “reference”. The process here appears to have been that the author walked through all the machine code and then produced a programme that does exactly the same as the machine code he looked at. For that matter, he could have written the programme in any other language as well.

The translation process was probably just using a tool to disassemble code and labeling pieces. This is most likely an act of “decompilation” as 2001/29/EC understands it. In all likelihood, the original was programmed in assembly as well; C was very rare in the Game Boy days. The author could not have “written the programme in any other language”: the few compilers that do exist for the Game Boy yield code that is unsuitable for the constraints of the Game Boy. Furthermore, if 1:1 identical binaries are your goal, you cannot just rewrite it in C when the original wasn’t; there’s no realistic way to get identical results.

pokeruby first disassembled Pokémon Ruby and then added a pass converting it to C with the same goal, which is only possible because they have also unearthed the correct compiler.

And yes, by definition of having an identical ROM, there are all of the original assets, namely player-visible text, graphics, sound. They’re shipped as part of the repository.

(The judicial outcome is a total crapshoot anyway. Copyright law in the context of software makes for surprising decisions one after another. It’s simply unsuitable and doctrine in other countries has incessantly pointed it out, but due to international pressure from the United States of America, it happened anyway.)

The project has gotten rather large. It seems that while the legal situation is indeed far from clear, Nintendo and The Pokémon Company International are leaving this repo (or any of the other github/pret efforts) alone.

The funny thing is that I agree with Schwarze about Markdown as a presentation language.

And still I can’t find another language that can be consumed both as plain text (via cat) and as input to produce nicer presentations (as HTML and/or PDF).

In a perfect world I would hire a programmer to design and develop such language with me.
But in a perfect world, I could pay the bills programming in C for Jehanne, not in Javascript for banks.
Maybe in a perfect world we would have neither Javascript nor banks (nor blockchains, for what it’s worth)! :-D

So if you have a better solution to propose I’m really eager to listen. Really!

My requirements are rather simple: the language for Jehanne’s manual pages must be

1) easy to read in source form (in UTF-8)

1.5) semantic enough (maybe through the adoption of conventions) to build indexes and cross-references

2) precise enough to transform into a nice PDF

1 and 1.5 are more important than 2.

The only alternative to a (slightly customized) Markdown that I know a bit is AsciiDoc, which unfortunately is designed to be easy to write, but not easy to read in source form.

I have little to no horse in the race of the source format; my nausea is a reaction to the presentation of the rendered form – emacs (even though I’m an emacs user!), HTML, and (worst of all) web browsers.

I like man pages. In my pager, in my terminal. More terminal, less web.

Yes, I even thought about simple plain text. And it is still an option!

But I guess that one can design a simple typesetting language to be very readable in source form.

It’s just a matter of trade-offs. Markdown explores this design space.
I think we could have something even better, and I would actively help anyone who is interested in the design, but Jehanne is a full operating system and it leaves me with no time for this.

Still, implementing a compiler for such a hypothetical language would be the only way to evaluate the trade-offs and to ensure that it is formal enough.

I come to the same conclusion from the other side: I agree with the idea of filtering by tags in principle, but I couldn’t care less about whether a new tag is or isn’t adopted since it will or will not be there for filtering at my discretion.

The discussion about the tag itself is just bikeshedding to me. It would be nice if I could filter it.

Either you’ll need a hardware RAID controller that OpenBSD supports, or you’ll have to choose between disk encryption and software RAID. Of course, you should always be making backups, but sometimes it’s really nice if one of your drives can just fail and you don’t really have to care too much.

I wouldn’t consider Gilles’ method a hack at this point, now that online.net gives you console access. As usual, you first have to get the installer onto a disk attached to the machine. Since you can’t walk up to the machine with a USB flash drive, copying it to the root disk from recovery mode makes perfect sense.

As for installing, I would vastly prefer PXE boot. It’s not just about getting it installed. It’s about having a supported configuration. I am not interested in running configurations not supported by the provider. What if next year they change the way they boot the machines and you can’t install OpenBSD using the new system anymore? A guarantee for PXE boot ensures forward compatibility.

Or what if some provider that is using virtualization updates their hypervisor which has a new bug that only affects OpenBSD? If the provider does not explicitly support OpenBSD, it’s unlikely they will care enough to roll back the change or fix the bug.

You’re not paying for hardware; as Hetzner showed, hardware is cheap. You’re paying for support and for the network. If they don’t support you, then why pay?

Yeah I share your concerns. That’s why I’ve hesitated to pay for hosting and am still running all my stuff at home. It would suck to pay only to hear that I’m on my own if something changes and my system doesn’t work well after that change.

Given how often OpenBSD makes it to the headlines on HN and other tech news outlets, it is really disappointing how few seem to actually care enough to run or support it. It’s also disappointing considering that the user base has a healthy disdain for twisting knobs, and the system itself doesn’t suffer much churn. It should be quite easy to find a stable & supported hardware configuration that just works for all OpenBSD users.

It should be quite easy to find a stable & supported hardware configuration that just works for all OpenBSD users.

Boom! There it is. The consumer side picks their own hardware expecting whatever they install to work on it. They pick for a lot of reasons other than compatibility, like appearance. OpenBSD supporting less hardware limits it a lot there. I’ve always thought an OpenBSD company should form that uses the Apple model of nice hardware with desktop software preloaded for some market segment that already buys Linux, terminals, or something. Maybe with some must-have software for business that provides some or most of the revenue so not much dependency on hardware sales. Any 3rd party providing dediboxes for server-side software should have it easiest since they can just standardize on some 1U or 2U stuff they know works well with OpenBSD. In theory, at least.

I have two OpenBSD vservers running at Hetzner https://www.hetzner.com . They provide OpenBSD ISO images and a “virtual KVM console” via HTTP. So installing with softraid (RAID or crypto) is easily possible.

As of a week ago, there is no official vServer product anymore. Nowadays they call it … wait for it … a cloud server. The control panel looks different; however, I have no clue if something[tm] actually changed.

What’s the deal with cloud providers not making OpenBSD available? Is it technically complex to offer, or do they just not have the resources for the support?
Maybe just a mention that it’s not supported by their customer service would already help users, no?

As far as I know, it’s a mix of things. Few people ask for OpenBSD, so there’s little incentive to offer it. Plus a lot of enterprise software tends to target RHEL and other “enterprise-y” offerings. Even in the open source landscape, things are pretty dire:

Docker is becoming the de-facto way to distribute server-side software and obviously that’s inherently quite unportable.

Rust’s platform support has OpenBSD/amd64 in tier 3 (“which are not built or tested automatically, and may not work”).

OpenBSD doesn’t get people really excited, either. Many features are security features and that’s always a tough sell. They’d rather see things like ZFS.

For better or for worse, OpenBSD has a very small following. For everybody else, it just seems to be the testing lab where people do interesting things with OS development, such as OpenSSH, LibreSSL, KASLR, KARL, arc4random, pledge, doas, etc., which people then take into OSes that people actually use. Unless some kind of Red Hat of OpenBSD emerges, I don’t see that changing, either. Subjectively, it feels very UNIX-y still. You can’t just google issues and be sure people have already seen them before; you’re on your own if things break.

Rust’s platform support has OpenBSD/amd64 in tier 3 (“which are not built or tested automatically, and may not work”).

I can talk a little about this point, as a common problem: we could support OpenBSD better if we had more knowledge and more people willing to integrate it well into our CI workflow, make good patches to our libc and so on.

It’s a damned position to be in: on the one hand, we don’t want to be the people who inflict work on OpenBSD. We are in no position to ask. On the other hand, we have only a few people with enough knowledge to make OpenBSD support good. And if we deliver half-arsed support but say we have support, we get the worst of all worlds. So we need people to step up, and not just for a couple of patches.

This problem is a regular companion in the FOSS world, sadly :(.

Also, as noted by mulander: I forgot semarie@ again. Thanks for all the work!

I don’t have a smartphone. Never did. My dumbphone is almost always out of power.

As it turns out, if you never do the motion of going back, you’ll meet little resistance. E-mail still exists. I end up meeting people more often than others do, just because it’s the best remaining way to catch up and all. I, for one, like that state of affairs.

If the situation arises that I have to get a smartphone, I’d probably go with Apple’s iPhone line, purely because of its better security track record compared with Android-based devices. I’m not looking forward to tossing it every couple of years for a newer model when Apple drops updates again, however.

I really love legacy, and have been working on a DOS application that has been in use since 1986. I helped to patch the blob to solve clock-frequency issues around 2005, and virtualized it completely in 2015 (now allowing the app to reach files and print over the network!).

I really hate legacy, and have found enormous amounts of garbage, struggling not only with the incomprehensible and intangible structure of bloated software architectures, but also with the consequent motivational problems. I even had to disappoint the customer, who had invested a lot in me: despite the promising progress, fixing it for real would cost way too much.

Sometimes I like to tell junior programmers some war stories, especially when they complain about working with the code of others. I romanticize what I call “software archeology” and declare my love of unraveling the mysteries hidden behind the unknown. I do this for two reasons: I hope to motivate them beyond the point of misery (the trap in which you believe you cannot deal with the problem and give up), and I hope to give them another perspective, as follows.

Legacy is something to be proud of. It is the work of a precious generation (be it 30 years or 6 months ago), which dealt with perhaps completely different circumstances. Respect their work, just as you wish others will respect your own. Instilling this picture, that legacy is something great and is what you ultimately hope to produce, might result in work that one can be proud of: work that builds upon the great work of others, and tries to improve upon it!

“Your code could be legacy some day!” is a legitimate motivational phrase, in my opinion. There’s often a lot wrong with legacy code, but that’s because you’re often looking at it from a very different perspective. Understanding the original authors’ viewpoint is important. You might call it “code empathy”.

I have had similar experiences dealing with legacy. It’s easy to complain about certain design decisions, but really, sometimes it just seemed like a good idea at the time. Much can be learned from legacy code, too: tricks that nobody uses today, space and memory optimization, and such.

Grab a copy of some 70s or 80s source code and go to town with it sometime. Bring it into the 21st century. Enjoy the journey.

Yes! I often really enjoy working in fifteen year old legacy code for exactly that reason. Sure the abstractions may not be great, but it is useful code that has served the company well all that time. My main job when working in legacy code is to not break what it gets right.

This is all fine, but what turned me off a bit wrt consulting is a high frequency of modernizing legacy.

The code did not appear in a vacuum and there are always some of the original forces in place; budgets and schedules and such. Tradeoffs have to be made, and often these include not taking the upgrade path all the way to the latest version.

This leads to boredom, even though the customers are always super and the domains they work in differ from each other. It also raises the barrier to investing time in the latest and greatest, since the bulk of it would have to be done out of passion in free-time hobby projects.

I’m not sure if it qualifies as a “markup” language for your purposes as it’s closer to typesetting than markup, but troff also knows macros. In fact, using troff without macros is a fairly painful experience.

The fact that they exist at all. The build spec should be part of the language, so you get a real programming language and anyone with a compiler can build any library.

All of them:

The fact that they waste so much effort on incremental builds when the compilers should really be so fast that you don’t need them. You should never have to make clean because it miscompiled, and the easiest way to achieve that is to build everything every time. But our compilers are way too slow for that.

Virtually all of them:

The build systems that do incremental builds almost universally get them wrong.

If I start on branch A, check out branch B, then switch back to branch A, none of my files have changed, so none of them should be rebuilt. Most build systems look at file modified times and rebuild half the codebase at this point.

Codebases easily fit in RAM and we have hash functions that can saturate memory bandwidth, so just hash everything and use that to figure out what needs rebuilding. Hash all the headers and source files, all the command line arguments, compiler binaries, everything. It takes less than 1 second.
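As a rough sketch of that hash-based staleness check in Python (the manifest file and helper names here are invented for illustration, not taken from any particular build tool):

```python
import hashlib
import json
import os

def digest(path):
    """Hash a file's contents; BLAKE2 is fast and ships in the stdlib."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def stale_files(paths, manifest="hashes.json"):
    """Return the files whose contents changed since the last recorded build."""
    try:
        with open(manifest) as f:
            old = json.load(f)
    except FileNotFoundError:
        old = {}  # first build: everything is stale
    new = {p: digest(p) for p in paths}
    with open(manifest, "w") as f:
        json.dump(new, f)
    return [p for p in paths if old.get(p) != new[p]]
```

Because this compares contents rather than timestamps, switching to branch B and back to branch A leaves every hash unchanged and rebuilds nothing; a real version would also fold command lines and compiler binaries into the manifest, as described above.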

Virtually all of them:

Making me write a build spec in something that isn’t a normal good programming language. The build logic for my game looks like this:

if we're on Windows, build the server and all the libraries it needs
if we're on OpenBSD, don't build anything else
build the game and all the libraries it needs
if this is a release build, exit
build experimental binaries and the asset compiler
if this PC has the release signing key, build the sign tool

with debug/asan/optdebug/release builds all going in separate folders. Most build systems need insane contortions to express something like that, if they can do it at all.

My build system is a Lua script that outputs a Makefile (and could easily output a ninja/vcxproj/etc). The control flow looks exactly like what I just described.
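The commenter’s script is Lua, but the same shape can be sketched in Python (all target names, file names, and commands below are made up for illustration): the build spec is ordinary code that appends Makefile rules, so the control flow above maps directly onto plain if statements.

```python
import platform

def rule(rules, target, deps, cmd):
    # Append one Makefile rule; the control flow stays ordinary code.
    rules.append(f"{target}: {' '.join(deps)}\n\t{cmd}\n")

def generate(system=None, release=False, has_signing_key=False):
    system = system or platform.system()
    rules = []
    if system == "Windows":
        rule(rules, "server", ["server.c"], "cc -o $@ $^")
    if system == "OpenBSD":
        return rules  # don't build anything else
    rule(rules, "game", ["game.c"], "cc -o $@ $^")
    if release:
        return rules
    rule(rules, "assetc", ["assetc.c"], "cc -o $@ $^")
    if has_signing_key:
        rule(rules, "signtool", ["signtool.c"], "cc -o $@ $^")
    return rules

if __name__ == "__main__":
    with open("Makefile", "w") as f:
        f.write("\n".join(generate()))
```

Emitting a Makefile (or a ninja file) at the end keeps the fast incremental-build machinery while the logic itself lives in a normal language.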

The fact that they exist at all. The build spec should be part of the language, so you get a real programming language and anyone with a compiler can build any library.

I disagree. Making the build system part of the language takes away too much flexibility. Consider the build systems in XCode, plain Makefiles, CMake, MSVC++, etc. Which one is the correct one to standardize on? None of them because they’re all targeting different use cases.

Keeping the build system separate also decouples it from the language, and allows projects using multiple languages to be built with a single build system. It also allows the build system to be swapped out for a better one.

Codebases easily fit in RAM …

Yours might, but many don’t and even if most do now, there’s a very good chance they didn’t when the projects started years and years ago.

Making me write a build spec in something that isn’t a normal good programming language.

It depends on what you mean by “normal good programming language”. SCons uses Python, and there’s nothing stopping you from using it. I personally don’t mind the syntax of Makefiles, but it really boils down to personal preference.

The build spec should be part of the language, so you get a real programming language and anyone with a compiler can build any library.

I’m not sure if I would agree with this. Wouldn’t it just make compilers more complex, bigger, and error-prone (“anti-unix”, if one may)? I mean, in some cases I do appreciate it, like with go’s model of go build, go get, go fmt, … but I wouldn’t mind if I had to use a build system either. My main issue is the apparent nonstandard-ness between, for example, go’s build system and rust’s via cargo (it might be similar, I haven’t really ever used rust). I would want to be able to expect similar, if not the same, structure for the same commands, but this isn’t necessarily given if every compiler reimplements the same stuff all over again.

Who knows, maybe you’re right and the actual goal should be to create a common compiler system that interfaces with particular language definitions (isn’t LLVM something like this?), so that one can type compile prog.go, compile prog.c, and compile prog.rs and expect the same structure. It would certainly make it easier to create new languages…

I can’t say what the parent meant, but my thought is that a blessed way to lay things out and build should ship with the primary tooling for the language, but should be implemented and designed with extensibility/reusability in mind, so that you can build new tools on top of it.

The idea that compilation shouldn’t be a special snowflake process for each language is also good. It’s a big problem space, and there may well not be one solution that works for every language (compare JavaScript to just about anything else out there), but the amount of duplication is staggering.

Considering how big compilers/stdlibs are already, adding a build system on top would not make that much of a difference.

The big win is that you can download any piece of software and build it, or download a library and just add it to your codebase. Compare with C/C++, where adding a library is often more difficult than writing the code yourself, because you have to figure out their (often insane) build system and integrate it with your own, or figure it out, then ditch it and replace it with yours.

+1 to all of these, but especially the point about the annoyance of having to learn and use another, usually ad-hoc programming language, to define the build system. That’s the thing I dislike the most about things like CMake: anything even mildly complex ends up becoming a disaster of having to deal with the messy, poorly-documented CMake language.

I still think we can get way better at speeding up compilation times (even if there are always edge cases), but incremental builds are a decent target for making compilation a bit more bearable, in my opinion.

Function hashing is also just part of the story, since you have things like inlining in C, and languages like Python allow for order-dependent behavior that goes beyond code equality. Though I really do think we can do way better on this point.

A bit ironically, a sort of unified incremental build protocol would let compilers avoid incremental builds and allow for build systems to handle it instead.

I have been compiling Chromium a lot lately. That’s 77,000 mostly C++ (and a few C) files. I can’t imagine that going through all those files and hashing them would be fast. Recompiling everything any time anything changes would probably also be way too slow, even if Clang were fast instead of averaging three compiled files per second.

Codebases easily fit in RAM and we have hash functions that can saturate memory bandwidth, so just hash everything and use that to figure out what needs rebuilding. Hash all the headers and source files, all the command line arguments, compiler binaries, everything. It takes less than 1 second.

Unless your build system is a daemon, it’d have to traverse the entire tree and hash every relevant file on every build. Coming back to a non-trivial codebase after the kernel has stopped caching its files will cost a lot of file reads, which are typically slow on an HDD. Assuming everything is on an SSD is questionable.
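A common middle ground, which Git uses for its index, is to trust cheap stat() data first and hash file contents only when size or mtime moved; a sketch of that idea (the cache layout here is made up):

```python
import hashlib
import os

def changed(path, cache):
    """Stat first; read and hash the file only when size/mtime moved."""
    st = os.stat(path)
    key = (st.st_size, st.st_mtime_ns)
    entry = cache.get(path)
    if entry is not None and entry["stat"] == key:
        return False  # stat unchanged: assume contents unchanged, no read
    with open(path, "rb") as f:
        h = hashlib.blake2b(f.read()).hexdigest()
    dirty = entry is None or entry["hash"] != h
    cache[path] = {"stat": key, "hash": h}
    return dirty
```

In the common case nothing changed, so the build only pays for stat() calls, which hit the inode cache and never touch file contents; a touched-but-identical file costs one hash and is then cheap again.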

Everything that is not make: I’d like not to have to install external software just to build yours, thank you.

autotools: It’s 2017. If you check for rudimentary C89 compliance anyway, I think you can expect that stdlib.h does, in fact, exist. I’d rather have a solution for missing strlcpy, strlcat, and the arc4random family that also provides me with something I can actually use. If all I get is a define saying it’s broken, I’ll have to ship my own anyway, and at that point there’s no reason to actually use the stdlib. And if you could stop checking for FreeBSD 2.x in a C11 codebase, that’d be great.

make: Writing portable/POSIX-conformant Makefiles, i.e. ones that do not require installing GNU make on the system, is hell. A lot of the useful implicit variables are GNU extensions. Autotools forces GNU make anyway, too.

Non-GNU make (OpenBSD, I’m looking at you): Seriously, can we have some of the GNU implicit variables yet? Thank you.

I wonder if it wouldn’t be a better solution to actually teach people to work with legal terms so that they can understand terms of service themselves. Identifying the legally critical sections and skimming the rest is a vital (but difficult) skill.

The main benefit that I can see is actually having access to raw HTML. Sometimes you want to put a screenshot on your website so people can see what your tool does. mandoc does not support images in any way, though it has been on the TODO list for two years now.

Most people know some degree of HTML, but learning mandoc properly does take a few days of looking at well-formatted man pages and going over mdoc(7) as well as Practical Manuals: mdoc. mandoc -Thtml has the advantage that you already have written a “proper” man page that you can ship as such.

HTML is a less esoteric format than manual pages, and the HTML output of mandoc is less useful for styling than it could be, probably due to limitations of the original format.

If you try to say ‘I want the second entry in SYNOPSIS to be blue’, it seems hard/impossible to do, as the entries in SYNOPSIS are not distinguishable as such, and aren’t even distinguishable from CSS as separate entries.

The Andrew Project, from Carnegie Mellon, created a distributed computing environment around the same time and of the same scope as MIT’s Project Athena.

Andrew had a windowing system called “WM” or “Window Manager”. Eventually, Andrew WM was discontinued and the Andrew programming environment was ported to X.

I have two questions:

Andrew WM was a tiling windowing system. In its first incarnation it used a complex system of constraints to reposition windows; later it switched to a simpler tiling mechanism after users complained that the first system moved windows around too much. I cannot find documentation on either layout algorithm anywhere. Does anyone know where I could find it?

They created a window manager for X that mimicked the Andrew interface, but searching for “Andrew WM” doesn’t get me anywhere because that’s also the name of the original system. This would’ve been X11R3 or so, maybe earlier. Does anyone know if there’s still a copy of the X window manager floating around?

The first page I wrote when I was making the site was this one (https://ircdocs.horse/specs/) – and the initial drafts were a fair bit screechier than what’s there now. I wanted people to know that everything on ircdocs is pretty much just my thoughts (as opposed to the more consensus-based approach of IRCv3), and figured the horse TLD would make people take it less seriously.

Didn’t exactly work out, now that a fair number of devs are using it as a legit protocol reference. Still, gives the site some decent character and makes it memorable :P

Clean-looking and simple. Still had to validate the ones I was using against the man pages to spot any BS but definitely a time saver. Why couldn’t all man pages have at least one section with similarly nice-looking stuff? TLDR pages look close to that concept. Neat project. Hope it or something similar gets popular.

Man pages are usually nice references when you know what you want and just need to find the names of the correct flags. Learning a program by reading the man page feels a bit like learning a language by reading the dictionary.

Using them as a reference only is a good counterpoint. Some could be intended that way. Gotta wonder, though, given how many times veterans tell new people in various places that they should’ve just read the man pages.

OpenBSD is the only large software project I have seen where the man pages are the primary reference. Because of this, googling OpenBSD doesn’t seem to show much, but once you learn to use the man pages, you find that perhaps one reason things aren’t explained in forums is that the primary source had the info to begin with (another reason might just be popularity).

With Linux and Mac, the information seems to be so scattered that the best source is always a google search for someone who was puzzled before, though you must always check the date of the post and guess whether or not the info is still good.

I presume it’s supposed to be one of the last sections so that you get there immediately when you press G/End. The quality of the examples may vary wildly for command-line utilities, though. gpg(1) puts a number of other things in the way of its EXAMPLES section, which makes it a bit less convenient.