Bugs

Before submitting a bug report:

- [x] Double-check that the bug is persistent.

- [x] Double-check that the bug hasn't already been reported on our issue tracker; existing reports should be labelled bug or bugsnag.

After the work I submitted for handling cabal dependencies was merged, I wanted to check a couple of my repositories for their new dependency information (e.g. https://libraries.io/github/alunduil/collection-json.hs). None of the dependency information has appeared, and when I use the re-sync button in the bottom right of the page, it updates the timestamp but still shows no dependency information.

I've run the cabal file through the cabal parser at http://cabal.libraries.io/parse by hand and it comes back as I'd expect. Let me know if any more details would help and I can troubleshoot further; I'm not sure whether this is related to #1963 either.

Bugs

Before submitting a bug report:

- [x] Double-check that the bug is persistent.

- [ ] Double-check that the bug hasn't already been reported on our issue tracker; existing reports should be labelled bug or bugsnag.

I'm not sure how I ended up in this state, but when I go to my /repositories page nothing is up to date and the page is marked as "currently syncing." The problem is that it's been in that state for over a week with no change. I don't know if it's something I did in GitHub with permissions (I don't think I've touched them), but I'd love to see it get out of syncing and show up-to-date information.

If any other information would be helpful, let me know and I'll get it ASAP.

```json
{
  "message": "[Request] ** [http://127.0.0.1:9200]-[500] {\"error\":{\"root_cause\":[{\"type\":\"query_phase_execution_exception\",\"reason\":\"Result window is too large, from + size must be less than or equal to: [10000] but was [15000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter.\"}],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[{\"shard\":0,\"index\":\"cpan_v1_01\",\"node\":\"euEoqisPSk68CnedNAzoZA\",\"reason\":{\"type\":\"query_phase_execution_exception\",\"reason\":\"Result window is too large, from + size must be less than or equal to: [10000] but was [15000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter.\"}}]},\"status\":500}, called from sub Search::Elasticsearch::Role::Client::Direct::__ANON__ at /home/metacpan/metacpan-api/lib/MetaCPAN/Server/Controller.pm line 125. With vars: {'request' => {'method' => 'GET','ignore' => [],'path' => '/cpan/release/_search','serialize' => 'std','qs' => {'q' => 'status:latest','fields' => 'distribution','sort' => 'date:desc','size' => 5000,'from' => 10000},'body' => undef},'status_code' => 500}\n"
}
```
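For context, the failure is the `index.max_result_window` limit quoted in the error: Elasticsearch rejects any deep page where `from + size` exceeds the window (10000 by default), and the request above asked for `from=10000, size=5000`. A minimal sketch of that check (the constant is just Elasticsearch's documented default, not anything MetaCPAN-specific):

```python
# Elasticsearch's default for the index.max_result_window setting.
MAX_RESULT_WINDOW = 10_000

def window_ok(from_: int, size: int) -> bool:
    """Return True if a from/size page fits inside the result window."""
    return from_ + size <= MAX_RESULT_WINDOW

# The failing request from the error dump: from=10000, size=5000 -> 15000.
print(window_ok(10_000, 5_000))  # False: 15000 > 10000
# The last page that still works with size=5000:
print(window_ok(5_000, 5_000))   # True: exactly at the limit
```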

The docs suggest using the scroll api (https://github.com/metacpan/metacpan-api/blob/master/docs/API-docs.md#being-polite), but the links in that section are dead.

More recent scroll api docs are here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html, but I couldn't get it to accept scroll_id as a parameter:
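For anyone hitting the same wall, the scroll pattern is roughly: the first search opens a scroll context with a keep-alive (e.g. `scroll=1m`) and returns a `_scroll_id`, and each follow-up call sends that id back instead of a `from` offset. A minimal sketch of the loop, run against a stubbed client so it's self-contained (the stub, index name, and page size are illustrative assumptions, not the real MetaCPAN endpoint):

```python
def scroll_all(client, index, body, scroll="1m", size=1000):
    """Yield every hit by following _scroll_id instead of from/size paging."""
    page = client.search(index=index, body=body, scroll=scroll, size=size)
    while page["hits"]["hits"]:
        yield from page["hits"]["hits"]
        # Subsequent pages are fetched with the scroll_id, not from/size.
        page = client.scroll(scroll_id=page["_scroll_id"], scroll=scroll)

class StubES:
    """Stand-in for an Elasticsearch client, for illustration only."""
    def __init__(self, docs):
        self._docs = docs
    def search(self, index, body, scroll, size):
        self._pos, self._size = size, size
        return {"_scroll_id": "stub-id", "hits": {"hits": self._docs[:size]}}
    def scroll(self, scroll_id, scroll):
        hits = self._docs[self._pos:self._pos + self._size]
        self._pos += self._size
        return {"_scroll_id": scroll_id, "hits": {"hits": hits}}

docs = [{"_source": {"distribution": f"Dist-{i}"}} for i in range(2500)]
es = StubES(docs)
results = list(scroll_all(es, "cpan", {"query": {"match_all": {}}}, size=1000))
print(len(results))  # 2500 -- all hits, well past the 10000-window ceiling
```

With a real client the same loop works unchanged, since `search(..., scroll=...)` and `scroll(scroll_id=..., scroll=...)` mirror the elasticsearch-py method names.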