Recent Activity

Today

We could ensure that people aren't confused by running a survey aimed at users who have this turned on. But I suspect that Google's reprocessing strips all of our JavaScript, given how extreme the transformation looks. And we need a way to detect when it's Google's proxy making the request. As it stands, it's unclear whether they use a different Google bot for this, or whether they repurpose the HTML they already index for their search engine (the latter would mean we can't even inject HTML for the Lite case).
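
If the proxy does identify itself in the User-Agent (the transcoding service has reportedly included a "googleweblight" token there, though that would need verifying against real traffic), detection could be as simple as:

```python
# Sketch: detect requests from Google's transcoding proxy by User-Agent.
# The "googleweblight" token is an assumption to be confirmed against
# real request logs, not a verified fact about current traffic.

def is_google_weblight(user_agent: str) -> bool:
    """Return True if the request looks like it comes from Google's
    Web Light transcoding proxy."""
    return "googleweblight" in user_agent.lower()


# Illustrative UAs: a transcoding request vs. a regular pageview.
proxy_ua = ("Mozilla/5.0 (Linux; Android 4.2.1) AppleWebKit/537.36 "
            "(KHTML, like Gecko) Chrome/38.0.1025.166 Mobile Safari/537.36 "
            "(compatible; googleweblight)")
regular_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0 Safari/537.36")

print(is_google_weblight(proxy_ua))    # True
print(is_google_weblight(regular_ua))  # False
```

That wouldn't help with the second case, though: if Google serves transcoded copies of HTML it already indexed, the original crawl is indistinguishable from regular bot traffic.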

Yesterday

@Peter can probably provide a screenshot of the settings where this is enabled, where it’s indeed not mentioned anywhere that enabling it means Google gets to intercept full URLs of everything you visit. In fact Google misleads the user into thinking that we served the request, since our domain is displayed in the URL bar, not Google’s.

Certainly. You should provide the WebP and PNG pair after verifying that they contain the same color information in GIMP, Photoshop, or any tool of that nature, along with a screenshot showing that they render differently side by side in Firefox.

We should expect browsers to render identical color information identically, irrespective of the image format it's expressed in... What we get from GIMP by looking at the files directly is the reference, and we're getting identical color values between the 3 formats we're serving. If there are color rendering discrepancies when displayed, it's a browser bug.

Fri, Mar 8

All changes have been deployed. The next phase will be moving the current firstPaint recordings from the NavigationTiming schema to the PaintTiming one. And it should all magically go to the right place, with the dashboards unchanged.

Thu, Mar 7

I can live with the cron being under my user. The script is indeed writing to the datasets mount with the --publish option. The performance site then pulls the TSV from there and publishes it in a more human-friendly form at https://performance.wikimedia.org/asreport/

Wed, Mar 6

In order to maintain our existing dashboards, particularly the breakdown ones, we could have the navtiming collector keep writing the metrics coming from the new schema to frontend.navtiming2. However, to account for oversampling, so that an oversampling campaign doesn't skew the overall firstPaint metrics, we need to make the PaintTiming schema oversampling-aware.
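
The routing described above could be sketched roughly like this; the event shape (an isOversample flag, a paint entry name and startTime) and the metric path are illustrative stand-ins, not the real schema:

```python
# Sketch: route firstPaint events from a PaintTiming-style schema to the
# legacy frontend.navtiming2 metric tree, excluding oversampled events so
# oversampling campaigns don't distort the overall numbers.
# Field names and metric paths here are assumptions for illustration.

def route_paint_timing(event):
    """Map a PaintTiming event to (metric_path, value), or None if the
    event should be excluded (oversampled, or not a first-paint entry)."""
    if event.get("isOversample"):
        return None  # keep oversampling campaigns out of the global metrics
    if event["name"] != "first-paint":
        return None
    return ("frontend.navtiming2.firstPaint", event["startTime"])


events = [
    {"name": "first-paint", "startTime": 812, "isOversample": False},
    {"name": "first-paint", "startTime": 95, "isOversample": True},
    {"name": "first-contentful-paint", "startTime": 901, "isOversample": False},
]
routed = [m for m in map(route_paint_timing, events) if m]
print(routed)  # [('frontend.navtiming2.firstPaint', 812)]
```

With a filter like this in the collector, the existing breakdown dashboards keep reading from the same metric paths while the client-side instrumentation moves to the new schema.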

The reporting script is getting very close to being finalised and ready for review. However, while generating the numbers for February, I noticed that the only 2 countries where there's enough data for the average transferSize to be stable across ISPs are France and Russia, which have been getting more CPU benchmark runs thanks to the performance survey. Since we haven't had complaints in those countries, I think it's time to run the CPU benchmark more often across the board.
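
The stability check amounts to only reporting a per-(country, ISP) average when there are enough samples behind it, something like the sketch below; the minimum-sample threshold and record shape are made up for illustration:

```python
# Sketch: per-(country, ISP) average transferSize, dropping groups with
# too few samples for the mean to be stable. Threshold is illustrative.
from collections import defaultdict

MIN_SAMPLES = 3  # the real report would likely use a much larger cutoff

def stable_averages(records, min_samples=MIN_SAMPLES):
    """records: iterable of (country, isp, transfer_size_bytes) tuples."""
    groups = defaultdict(list)
    for country, isp, size in records:
        groups[(country, isp)].append(size)
    return {
        key: sum(sizes) / len(sizes)
        for key, sizes in groups.items()
        if len(sizes) >= min_samples
    }


records = [
    ("FR", "Orange", 120_000),
    ("FR", "Orange", 130_000),
    ("FR", "Orange", 125_000),
    ("RU", "MTS", 90_000),  # too few samples: dropped from the report
]
print(stable_averages(records))  # {('FR', 'Orange'): 125000.0}
```

Running the benchmark more often across the board would push more countries above the cutoff.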

You can't check things server-side on a pageview served from Varnish; that's the issue here. A realistic pageview from a reader is one that only hits our edge cache layer. It needs to be a signal that the CentralNotice client-side code can see and act upon.

Yes, you can go ahead and switch Thumbor to mcrouter. Thumbor uses memcached for non-critical request throttling. Memcached can be down for a bit and Thumbor will continue working normally, albeit without some types of throttling.
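
The fail-open behaviour described above can be sketched as a throttle that simply allows the request when the cache is unreachable; the client interface and limits here are stand-ins, not Thumbor's actual implementation:

```python
# Sketch: a fail-open rate limiter. If the cache backend (memcached via
# mcrouter) is down, requests are allowed through, so the service keeps
# working without that type of throttling. Names are illustrative.

class FailOpenThrottle:
    def __init__(self, cache, limit):
        self.cache = cache  # any client with an incr(key) -> int method
        self.limit = limit

    def allow(self, key):
        try:
            count = self.cache.incr(key)
        except ConnectionError:
            return True  # cache down: fail open rather than block requests
        return count <= self.limit


# Fake clients standing in for a down / healthy memcached.
class DownCache:
    def incr(self, key):
        raise ConnectionError("memcached unreachable")

class UpCache:
    def __init__(self):
        self.counts = {}
    def incr(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]


print(FailOpenThrottle(DownCache(), limit=2).allow("req"))       # True
throttle = FailOpenThrottle(UpCache(), limit=2)
print([throttle.allow("req") for _ in range(3)])                 # [True, True, False]
```

Because the throttle only ever fails open, swapping the transport underneath (direct memcached vs. mcrouter) can't take thumbnail serving down.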

Did something happen on 2019-02-21? I was looking at the ulsfo RUM perf metrics for the change in that DC. No change is apparent after the 2019-02-19 config change, except that particular day (the 21st) stands out as having noticeably bad performance for ulsfo. Pretty much the same TTFB as eqsin users on that day.