Here's the thing: they don't just run whenever. They're not like page code, capable of doing anything it wants at arbitrary times. It's completely event-driven, and those events are things the browser controls. (Things like "the page wants this URL from your server, tell me what to return" or "a page in your origin registered a push notification service and I just got a push, here's the data, what do you want to do with it?".) They're an extremely light version of a page, basically, that we can use to handle things that the page would like to do, but without having to actually start up the full page with all of its arbitrary powers and higher CPU/memory usage.

They're also killed regularly and restarted from scratch when needed, precisely so they *don't* drain your battery all the time. If a response to an event takes too long, it's just gonna get killed. (To handle things that might legitimately take long, like fetching something from a url and returning it, we have functions that'll return Promises for the thing you want, so the SW can just get the Promise quickly, hand it to the browser, and then shut itself down. The browser will handle the rest by itself.)
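That Promise-handoff pattern can be sketched outside a browser. Everything below (the cache Map, fetchFromNetwork, handleFetch) is an illustrative stand-in for the real FetchEvent/respondWith machinery, not actual SW API:

```javascript
// Illustrative stand-in for a service worker's fetch handler.
// In a real SW this would be: self.addEventListener('fetch', e => e.respondWith(...))
// Here the cache and the "network" are stubbed so the control flow is visible.

const cache = new Map([['/app.css', 'body { color: black }']]);

function fetchFromNetwork(url) {
  // Pretend network fetch; resolves asynchronously like the real thing.
  return Promise.resolve(`network response for ${url}`);
}

function handleFetch(url) {
  // The handler returns a Promise *immediately*; the worker itself can be
  // torn down while the browser waits for the Promise to settle.
  if (cache.has(url)) {
    return Promise.resolve(cache.get(url)); // answered from cache, no radio
  }
  return fetchFromNetwork(url); // fall through to the network
}

handleFetch('/app.css').then(body => console.log(body)); // prints the cached CSS
```

The key point the sketch shows: the SW's synchronous work is tiny (decide, hand back a Promise), which is what lets the browser kill it quickly.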

For serious, y'all, browsers have some of the best security devs in the entire computer industry. Take, like, a single moment to consider that they might have already thought of the attack surface before concluding that they're massive idiots. In particular, we're usually quite happy to throw permission dialogs onto unsafe things, or take other "we know better" actions that help protect users. When we allow something without a security dialog at all, it's probably because it's safe.

Sorry, but you'll just have to excuse me if I'm not enthusiastic about the prospect of J. Random Web Developer getting to run code on my computer from a page I'm not even currently on. If the developers did consider the potential for attack exploits and still concluded this was a good idea, then, well, no, sorry, they're not as good at security as all that. Because this is a really stupid idea.

"'Legacy code' often differs from its suggested alternative by actually working and scaling." - Bjarne Stroustrup

www.commodorejohn.com - in case you were wondering, which you probably weren't.

Xanthir wrote: It's completely event-driven, and those events are things the browser controls. (Things like "the page wants this URL from your server, tell me what to return" or "a page in your origin registered a push notification service and I just got a push, here's the data, what do you want to do with it?".)

This is well and good if the programmer is Nice. But if the web programmer is Naughty (or even incompetent; imagine that!), then what?

Could one (for example) set up a service worker to listen for URLs to get requested, and compile a list of what they are, to be returned at a later date to the service worker's home the next time the user connects to the site? Would it even be necessary for the user to connect in order for the service worker to phone home? Could the service worker for included content (such as an ad site) affect other aspects of the page (such as a menu item)? Just how much of my computer does a service worker have access to, and can you guarantee that this won't be expanded?

Can you really not think of ways for an Eve to exploit this? To me, this sounds more like a rootkit for the web. After all, even cookies are "just harmless text files", right?

Jose

Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

Sorry, but you'll just have to excuse me if I'm not enthusiastic about the prospect of J. Random Web Developer getting to run code on my computer from a page I'm not even currently on. If the developers did consider the potential for attack exploits and still concluded this was a good idea, then, well, no, sorry, they're not as good at security as all that. Because this is a really stupid idea.

Go read what I wrote again about when a SW is allowed to run. And then try actually reading about SWs at all, rather than knee-jerking to an imaginary stupid version you've constructed in your head based solely on what ucim posted.

Could one (for example) set up a service worker to listen for URLs to get requested, and compile a list of what they are, to be returned at a later date to the service worker's home the next time the user connects to the site?

No, SWs don't get to intercept URLs for other domains. A SW is tied to the origin of the page that installs it, and it can get asked about requests from a page on its origin to another page on its origin. A future bit of functionality ("foreign fetch") will let SWs respond to requests from other origins requesting things for their origin (so you don't actually need to wake up the radio on your phone, if the SW for that origin that's already on your phone can handle it). But at no point is a SW ever informed of requests to other origins, as that would be a massive security problem.
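As a rough sketch of that browser-side check (the function name is invented for illustration; this logic lives inside the browser, and it's nothing a SW can override):

```javascript
// Sketch of the browser-side filter: a SW registered for swOrigin is only
// consulted for requests whose *target* is on that same origin.
// swSeesRequest is an invented name; browsers do this internally.

function swSeesRequest(swOrigin, requestUrl) {
  return new URL(requestUrl).origin === swOrigin;
}

console.log(swSeesRequest('https://example.com', 'https://example.com/api/data')); // true
console.log(swSeesRequest('https://example.com', 'https://ads.example.net/track')); // false: never informed
```

Note that even a subdomain like ads.example.net is a different origin from example.com, so its requests never reach example.com's SW.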

Would it be even necessary for the user to connect in order for the service worker to phone home?

As I said in my previous post, yes, SWs are only woken up for certain events that the UA controls. They don't run continuously.

Could the service worker for included content (such as an ad site) affect other aspects of the page (such as a menu item)?

No. For a SW for an ad-site to be invoked at all, the ad must be running in an iframe pointing to the ad-server's domain. That SW is only capable of handling things for the ad domain. It has no access to anything going on in the main page.

Just how much of my computer does a service worker have access to, and can you guarantee that this won't be expanded?

Far less than a normal webpage does. SWs actually lose most of the funky DOM tooling that web pages have accreted over the years. The functionality of SWs will be expanded over time as we add more tools, but will always be scrutinized for security holes.

Can you really not think of ways for an Eve to exploit this?

I can think of ways for an imaginary "kind of like a SW, but worse, and created by people who don't understand security" concept to be exploited. But an actual SW is quite safe.

Xanthir wrote: they don't just run whenever. They're not like page code, capable of doing anything it wants at arbitrary times. It's completely event-driven, and those events are things the browser controls. (Things like "the page wants this URL from your server, tell me what to return" or "a page in your origin registered a push notification service and I just got a push, here's the data, what do you want to do with it?".)

One of the things the video said was possible was to go offline, then for the worker to sync with the server when you reconnect. My memory is that he said that it could do this without loading a page.

Is that true? Can the service workers make external connections (even back to the same origin) of their own accord? What permissions, if any, need to be granted by the user for this to be the case?

Edit:

A SW is tied to the origin of the page that installs it, and it can get asked about requests from a page on its origin to another page on its origin.

And to clarify, this honors directories too, right? I think I saw that in some description somewhere. For example, suppose I can control stuff in example.com/~evan, I can set up a SW to watch loads from example.com/~evan but not the rest of the domain, right?

Thesh wrote: So if a user connects to an unsecured wifi, can anyone just inject a service worker into any unsecured page and hijack it without the user knowing?

Service workers (1) cannot be installed over HTTP, they require HTTPS, and (2) last a maximum of 24 hours before being redownloaded and compared with the latest server version (unless offline, or maybe some other things), meaning that even if you somehow MITM an SSL connection your window of action will be limited.

Edit: server admins can also decrease, but not increase, the 24 hours by applying cache directives to the service worker file.
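That clamping rule can be sketched in a couple of lines. The 24-hour ceiling is the spec's; the function name is invented for illustration:

```javascript
// Effective time (in seconds) before the browser re-checks the SW script.
// Server cache directives can shorten the 24h ceiling but never extend it.
const MAX_SW_AGE = 24 * 60 * 60; // 86400 s: the spec'd maximum

function effectiveSwMaxAge(cacheControlMaxAge) {
  return Math.min(cacheControlMaxAge, MAX_SW_AGE);
}

console.log(effectiveSwMaxAge(3600));   // 3600: admin opted for hourly checks
console.log(effectiveSwMaxAge(604800)); // 86400: a week-long max-age gets clamped
```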

Is that true? Can the service workers make external connections (even back to the same origin) of their own accord? What permissions, if any, need to be granted by the user for this to be the case?

Not "of their own accord". While the page is open, it can request that the SW be sent a sync event when you come back online (explanation). That might fire immediately, if you're currently online, or be queued for later if you're currently offline; then, because it's a SW, it can do the sync even if the page has been closed in the meantime. (Note the "Permissions" and "The Future" parts of that page too, for some nuance and future direction.)
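A toy model of that one-shot behavior (SyncQueue is invented purely for illustration; the real entry point is sync.register(tag) on the SW registration, and the browser does all the bookkeeping):

```javascript
// Toy model of one-shot background sync: tags registered while the page is
// open are delivered exactly once when connectivity returns, then forgotten.

class SyncQueue {
  constructor() { this.pending = new Set(); this.fired = []; }
  register(tag) { this.pending.add(tag); }  // page requests a sync event
  goOnline() {                              // "browser" fires queued sync events
    for (const tag of this.pending) this.fired.push(tag);
    this.pending.clear();                   // one-shot: nothing repeats
  }
}

const q = new SyncQueue();
q.register('send-outbox'); // page is open, registers a sync, then goes offline
q.goOnline();              // back online: the one sync event fires
q.goOnline();              // no page open, no new registrations: nothing fires
console.log(q.fired);      // ['send-outbox']
```

The second goOnline() firing nothing is the whole point: without an open page to register fresh tags, the SW never hears "you're online" again.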

And to clarify, this honors directories too, right? I think I saw that in some description somewhere. For example, suppose I can control stuff in example.com/~evan, I can set up a SW to watch loads from example.com/~evan but not the rest of the domain, right?

Yes, tho this isn't absolute. The per-directory option is a hack that we had to add to support legacy sites. You only get real protection when you're on a subdomain.
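The per-directory rule is just a path-prefix match; as a sketch (scopeCovers is a made-up name for illustration, not real API):

```javascript
// A SW registered with scope '/~evan/' only controls pages whose path
// starts with that prefix; the rest of the origin is untouched.

function scopeCovers(scopePath, pagePath) {
  return pagePath.startsWith(scopePath);
}

console.log(scopeCovers('/~evan/', '/~evan/photos/index.html')); // true
console.log(scopeCovers('/~evan/', '/~alice/index.html'));       // false
```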

Yeah, the "unsecured wifi = pwn every page you look at by mitm'ing a SW" was, like, the very first attack we thought of when writing the spec. It's why SWs are HTTPS-only. (And why *most* new powerful features are HTTPS-only, for that matter.)

The "https, but the server was pwned, so now anyone who visited is pwned forever" scenario is why we put in the "must update at least every 24 hours" thing. If there's a 404 when we try to update, the SW is just uninstalled; if there's a byte-different file, we just update the SW to the new (hopefully fixed) code.
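That update logic amounts to a three-way decision, roughly like this (names invented for illustration; this runs inside the browser, at most every 24 hours):

```javascript
// Rough sketch of the browser's SW update check.
function updateDecision(fetchStatus, oldScriptBytes, newScriptBytes) {
  if (fetchStatus === 404) return 'uninstall';                  // script gone: drop the SW
  if (oldScriptBytes !== newScriptBytes) return 'install-new';  // any byte differs: replace
  return 'keep';                                                // identical: nothing to do
}

console.log(updateDecision(404, 'x', 'x'));       // 'uninstall'
console.log(updateDecision(200, 'old', 'new'));   // 'install-new'
console.log(updateDecision(200, 'same', 'same')); // 'keep'
```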

Xanthir wrote: Not "of their own accord". While the page is open, it can request that the SW be sent a sync event when you come back online (explanation). That might fire immediately, if you're currently online, or be queued for later if you're currently offline; then, because it's a SW, it can do the sync even if the page has been closed in the meantime. (Note the "Permissions" and "The Future" parts of that page too, for some nuance and future direction.)

OK, so let's see if I understand this right. While the page is open, it has to register some sync operations. It can potentially queue up a bunch, but they'll all fire at once.

Basically, the threat I'm trying to exclude, which it sounds like probably won't work but I'm not entirely convinced yet, is a site installing a SW that phones home whenever you get a connection. (I'm unconcerned with what it phones home with; that it does so is bad enough.)

Is the idea that the following scenarios would prevent this from being a threat?

User visits malicious site; site installs a SW

While the user remains online, the site can continue to request sync notifications but they will continue to succeed immediately, same as the page could do.

If the user closes the tab, the SW can't request more syncs(?) and there's no way to say "wait an hour then sync"(?), so the SW is basically done running at that point; if the user never visits the site again, it will never get another event and never run again.

If the user goes offline while the tab is still open, then it can install some SW sync notifications. Suppose the user then closes the tab. When they go back online, there will be a one-time phone-home event, but after that we're back in the previous case.

Is that basically it?

Edit: it sounds like the "periodic sync" thing mentioned at this site is something that's been thought about and determined distinct from what you can do now. And that would require user permission. So I'm becoming more at ease.

commodorejohn wrote:Good thing 24 hours isn't nearly enough time for a malicious actor to do any real damage.

If the server is pwned, the malicious actor can already do real damage, even without service workers.

(Aren't there similar security risks with normal HTTP caching? Do browsers have limits on that for non-HTTPS sites?)

Also, if I'm understanding correctly: A service worker can (aside from push notifications?) only run in situations where JavaScript would normally run from the server (i.e., when the user navigates to the site, or there's an iframe with the site), except that it can run when the user is offline (but navigates to the page)? So, like a script that runs on every page on the site, but it can be cached to run offline, basically? (ETA: in terms of when it can run, that is)

EvanED wrote: Is the idea that the following scenarios would prevent this from being a threat?

User visits malicious site; site installs a SW

While the user remains online, the site can continue to request sync notifications but they will continue to succeed immediately, same as the page could do.

If the user closes the tab, the SW can't request more syncs(?) and there's no way to say "wait an hour then sync"(?), so the SW is basically done running at that point; if the user never visits the site again, it will never get another event and never run again.

If the user goes offline while the tab is still open, then it can install some SW sync notifications. Suppose the user then closes the tab. When they go back online, there will be a one-time phone-home event, but after that we're back in the previous case.

Is that basically it?

Yup, the only way for a malicious site's SW to get a "you're back online!" ping is for you to have the malicious page open *when you go offline*, and then the SW only gets the one.

Edit: it sounds like the "periodic sync" thing mentioned at this site is something that's been thought about and determined distinct from what you can do now. And that would require user permission. So I'm becoming more at ease. :-)

Yup, it's quite distinct, precisely because it has more abuse potential.

Also, if I'm understanding correctly: A service worker can (aside from push notifications?) only run in situations where JavaScript would normally run from the server (i.e., when the user navigates to the site, or there's an iframe with the site), except that it can run when the user is offline (but navigates to the page)? So, like a script that runs on every page on the site, but it can be cached to run offline, basically? (ETA: in terms of when it can run, that is)

Not quite - SWs are less-powerful scripts that run whenever certain things happen. The most common "thing", which SWs were originally designed for, is that you have a page open and it makes a request to its own origin; if the SW is registered for fetch events, it'll get woken up so it can respond.

But there are a few things that'll wake up a SW when there's no pages open - right now that's just a background sync and push notifications, but in the future might include periodic syncs, or *other* domains making requests to your domain.

(Aren't there similar security risks with normal HTTP caching? Do browsers have limits on that for non-HTTPS sites?)

They don't, no, and that's a problem - if you're using long-term caching, your users can get stuck on an outdated (or in bad cases, pwned) version of your site for a long time, and there's nothing you can do about it. The 24-hour max limit on SW caching is actually *more* conservative than normal, balanced with the fact that we don't want to be constantly waking up your radio to check for updates to the SW (as that would defeat some of the point).

Good thing 24 hours isn't nearly enough time for a malicious actor to do any real damage.

I'll make myself more explicit here:

If your server is hijacked by a malicious actor, they can send a malicious page to your users with *enormous* cache times, effectively sticking your user to the malicious page *forever* (until something causes the cache to evict, or the user forces a cache refresh). This applies whether you're on HTTP or HTTPS; it's just easier with HTTP because you can MITM without having to pwn the server itself. There's effectively nothing a page can do about this once it happens.

A SW, on the other hand, is guaranteed to only be malicious for at most 24 hours after the hijack is fixed.

If I understand correctly, push notifications and other events that may repeat for all eternity do require user permission? Then it's all good with me. (It came across to me as if SWs were as free as OS services/daemons in that they could run indefinitely or continuously request new events, like setTimeout or other asynchronous functions with callbacks.)

Yeah, push notifications are opt-in, and repeated-sync will probably be opt-in as well. It depends on how our security and UX people figure it; for example, we might figure that someone who visits a news site every morning is okay to silently opt-in to auto-sync in the morning. This is similar to the heuristics that we use to automatically offer an "install this page to your Home Screen?" prompt when you visit a site that looks app-like enough times in a short period.

I've been day-dreaming a bit about CSS -- specifically, about an "image-animation" property. Mainly I would really like to be able to use ":not(:hover) { image-animation: disable; }" as a user stylesheet, but it seems like it should be more widely useful as well, for e.g. thumbnails, Twitter-like "only animate images in focus", not wanting to animate user-uploaded content, etc.

So I'm wondering, basically, is this a reasonable thing to propose for standardization? Or would it just get shot down, or not find interest, or has it been suggested before, etc. (And if so, what's the process like? Just opening an issue on csswg-drafts?)

I feel like it might have been suggested in the past, but I don't recall anything specific coming of it. Feel free to open an issue about it in csswg-drafts, with the usage examples you're citing (which are why Twitter doesn't actually do gifs; it transcodes them to videos so it can pause them >_<).

Xanthir wrote: I feel like it might have been suggested in the past, but I don't recall anything specific coming of it. Feel free to open an issue about it in csswg-drafts, with the usage examples you're citing (which are why Twitter doesn't actually do gifs; it transcodes them to videos so it can pause them >_<).

GIF isn't bad. It's just a format that was invented and optimized for blinking "under construction" signs of the 90s web, e.g. animated pixel art. And it works really well for those.

If people abuse it to encode full motion video, then you cannot blame GIF for not being efficient at it. GIF was created at a time when full motion digital video simply didn't exist on consumer devices; even TV stations were run (mostly?) analog. Is a current codec designed for full motion video better at encoding full motion video than some lossless pixel art format that's 20 years older? Well, duh. But would h.264 have worked on early 90s machines with 50 MHz CPUs and unaccelerated framebuffers? Not at all.

If you're annoyed by some website trying to send a 10MB GIF over your mobile connection, don't blame GIF. Blame the video industry for actively preventing royalty-free codecs for consumers; blame the browser vendors for not introducing a cross-browser alternative until 20 years later; blame Microsoft's apathy towards web standards for delaying those browser improvements. GIF is great at what it was meant to do, and the only reason it keeps being used for video is the lack of alternatives.

But really, the ideal solution is to take full-motion video, convert it to GIF, and then take that resulting GIF and convert that to MP4. Because the dithering patterns from dropping to 256-colour do great things to MPEG compression.

At least Twitter lets you upload actual MP4s and skip that whole step, even if it does transcode them down to a teensy bitrate. I'm looking at you, though, Imgur, and your "gifv" nonsense...


phlip wrote: But really, the ideal solution is to take full-motion video, convert it to GIF, and then take that resulting GIF and convert that to MP4. Because the dithering patterns from dropping to 256-colour do great things to MPEG compression.

Aren't you then supposed to import it into a Word document, and then save that out as HTML?

Jose


Is the goal here to allow people to enter arbitrary expressions in the config file? Or are you just trying to make it so they can enter numbers/strings/etc and have it load in as the appropriate types? Because in the latter case, it's probably going to be a lot cleaner to just read the values in as strings (from configparser or whatever) and convert them afterwards, according to what the field is supposed to be, rather than what syntax they happen to have put in the config file. So you would read:

phlip wrote: Is the goal here to allow people to enter arbitrary expressions in the config file? Or are you just trying to make it so they can enter numbers/strings/etc and have it load in as the appropriate types? Because in the latter case, it's probably going to be a lot cleaner to just read the values in as strings (from configparser or whatever) and convert them afterwards, according to what the field is supposed to be, rather than what syntax they happen to have put in the config file.

I would normally only expect literals and calls to builtins (i.e. float(...) and int(...)) with literals as parameters. The typing thing is a little more complicated - there are cases where a parameter can be a number like 0 or 1 or a string like "Open" or "Closed", but not the string "0" or "1".

phlip wrote: If you're absolutely tied to the exec thing, then what you want is:

That's more in line with what I was originally thinking when I first looked at this. It prevents use of the existing namespace - probably a good thing.

EvanED wrote: What about INI syntax via configparser, part of the standard library? I, personally, would not expect to have to quote strings (at least absent special characters).

Looking at the older files, strings indeed are not quoted (although they often omit the equals sign for some reason, just being "KEY VALUE", damn liberal parser). I suspect this is the road I will eventually walk, if I deviate from existing code at all.

Indeed, it appears that none of Internet Explorer, Edge, and IE Mobile support the 'tab-size' CSS property, at least according to MDN's documentation. I guess Microsoft really wants you to use a very specific tab size, or else...

You can still run a flash plugin in an isolated VM on a rented computer on a separate network somewhere across the globe to see HSR. And the most important videos are on the youtubes.

Is there a linuxy way to monitor services and send an email whenever a service dies or comes back up (and preferably every hour while something's down)? I'm currently using systemd timers to run a small script to log the status of things like the xkcd minecraft server, and I could run a sendmail command when the status is negative, but it seems there ought to be a better way. If there's no linuxy way, I'll look into graphite+grafana or <insert better suggestion>.

[edit] Actually, the minecraft monitor is the one thing run by cron instead of systemd, and it sends an email each minute the server is down.