There are some 3,627 geocaches in NSW that are not archived. If I set up a query to grab them all I have no issues at all, with both the GPX and the ZIP. The GPX file is just under 30MB. The ZIP file is just under 5MB.

There are some 1,930 geocaches in Tasmania that are not archived. If I set up a query to grab them all I have no issues at all, with both the GPX and the ZIP. The GPX file is just under 27MB. The ZIP file is just over 5MB.

The difference is logs. The Tasmanian geocaches are found much more frequently.

The download of the GPX and ZIP files from the state lists was an early implementation, from when we had very few geocaches per state and did not yet have the My Query function. While the data was small and there was no way to filter geocaches, this was a good approach.

As the number of geocaches grows, the "just give me everything" approach has caused a problem with the size of the data being returned from those lists for the GPX files, etc. We are now forcing you to get the data via a My Query. If we continue to maintain both output lists we run the machine out of RAM (memory). If someone would like to donate between $8K and $10K we could buy an additional database server. We would also like an additional $1,500 per year for the hosting and bandwidth costs. Pause for a donation to appear... ... ... No. Didn't think so.

You're done with setting the query up. Now you can use:
- the Count link to see how many geocaches there are
- GPX to get the GPX file
- ZIP to get the ZIP file

So taking (approximately) 10 seconds to set up a query, you can now run that query any time you like to get the whole state of Tasmania.

I don't recommend you do this every day, as that will cause us to exhaust our bandwidth and then no-one gets anything until the new month starts.

The Chicken Little "sky is falling" scenario is tired and worn out. The accusations of denying Tasmanians access to the entire state's data are unneeded passive-aggressive behaviour, and are tired and worn out. The "why don't you do everything I want" is tired and worn out.

The volunteers at Geocaching Australia put in tens of thousands of dollars' worth of free time, over hundreds of hours per year (I average between 750 and 1,000 hours of support at this site per year; at my usual rate of $100 per hour, that is between $75K and $100K of free service the site gets). We will provide free and open access to the data as per the mantra of Geocaching Australia.

Free: you will need an account to create a My Query, but there is no cost. Open: access to the data is (until we run out of memory) unlimited via a My Query and the new GCA API.

Most of the GCA API calls are restricted to groups of 500, some with pagination and some without.
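For the paginated calls, a client just keeps requesting pages of 500 until a short page comes back. The sketch below illustrates only that loop; the `fetch_page` function is a stand-in for a real HTTP request (the actual GCA API endpoint and parameter names are not given in this post, so nothing here should be read as the real API shape).

```python
# Hedged sketch of paging through an API that caps responses at 500
# results per request. Only the 500-per-page limit comes from the post;
# fetch_page is a local stand-in for an HTTP call to a hypothetical
# offset/limit endpoint.

PAGE_SIZE = 500

def fetch_page(all_items, offset, limit=PAGE_SIZE):
    """Stand-in for one API request; a real client would issue an
    HTTP GET with offset/limit query parameters instead of slicing."""
    return all_items[offset:offset + limit]

def fetch_all(all_items):
    """Accumulate results page by page; a short page signals the end."""
    results, offset = [], 0
    while True:
        page = fetch_page(all_items, offset)
        results.extend(page)
        if len(page) < PAGE_SIZE:
            return results
        offset += PAGE_SIZE

# e.g. the Tasmanian count from above needs four requests (500*3 + 430)
caches = [f"GA{i:05d}" for i in range(1930)]
assert len(fetch_all(caches)) == 1930
```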

If you're in the field and using an app that uses the API, I doubt you would need more than 500 geocaches from where you are right then and there, but if you use the nearest search you can have tens of thousands returned to you.

If you are using a My Query via the app, you are restricted to 500 geocaches via the JSON call. Because JSON is not compressed output (it's plain text), 500 is a limit we set to minimise the bandwidth while still maximising the number of geocaches you might like to find near you. You cannot do more than 500 in a day, so there is no intent to give you 10,000 geocaches via the My Query API JSON response.

If you are not using an app and the API, the My Query should return what you need so you can store the results offline (in something such as GSAK). For example, I use a macro in GSAK which gets my list of queries and returns the results for the query I select. At the moment it returns all 671 geocaches in Victoria unfound by me, as a GPX file. This is not restricted, but it would be preferable to get the ZIP file to minimise bandwidth.
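The bandwidth argument for preferring ZIP over raw GPX is easy to demonstrate: GPX is repetitive XML, which compresses heavily. The snippet below uses an invented sample record, so the exact ratio is illustrative only, but it is in the same ballpark as the real numbers quoted above (30MB GPX vs 5MB ZIP for NSW).

```python
# Rough illustration of why the ZIP download is preferred over raw GPX:
# repetitive XML-style geocache data compresses dramatically. The record
# below is a made-up sample, not real GCA output.
import gzip

record = (
    '<wpt lat="-42.88" lon="147.33">'
    '<name>GA00001</name><desc>Sample cache</desc></wpt>\n'
)
gpx = ('<?xml version="1.0"?><gpx>' + record * 2000 + '</gpx>').encode()
packed = gzip.compress(gpx)

print(f"raw: {len(gpx):,} bytes, gzipped: {len(packed):,} bytes")
assert len(packed) < len(gpx) / 5  # repetitive XML shrinks by far more than 5x
```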

No one can have the whole database at once, on demand. I have stated before and I will state again: if you are collecting stats, we can work with anyone to provide the data as needed, without sending GBs of geocaches and logs to them at once.

In very simple terms, you can get what you want and what you need; you just cannot have custom code that runs to the detriment of the rest of the users and the site. There are many ways to get the data. If you tell us what you need we can try and accommodate that. Demanding change without consideration of the server, the users and the site will be summarily dismissed.

Again, you are assuming how the data is stored and prepared for delivery. This is not possible without a major rewrite. The point is moot because you can't have that volume of data, and I shan't be re-enabling those links. Please use the correct tools for the correct job.

This was in relation to the note about the bandwidth usage for the queries, not about the nuking of the GPX links.

I had a discussion with a non-GCA cacher at an event last night and have probably convinced her to take a serious look at GCA caching. As a bit of research to help her see the benefits of this listing site, I checked her general stats page with the idea of emailing the page to her, and found a probable fault with the stats. As she has never logged a hide or find on a GCA cache, her verbosity stats show a longest log of 0 words and a shortest log of 9,999,999 words.
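That output pattern looks like the classic sentinel-value bug: min/max accumulators initialised to 0 and 9,999,999 are reported as real values when there are no logs to fold in. This is a guess at the cause based only on the numbers shown on the stats page; the function name and sentinel choices below are assumptions, not GCA's actual code.

```python
# Hypothetical reconstruction of the verbosity-stats fault: with zero
# logs, the initial sentinels (0 and 9,999,999) fall through unchanged.
# The fix is the empty-input guard.

SENTINEL_MAX = 9_999_999

def verbosity_stats(word_counts):
    """Return (longest, shortest) log length in words, or None if
    the cacher has no logs at all (the missing guard in the bug)."""
    if not word_counts:
        return None  # without this guard: (0, 9_999_999), as on the stats page
    longest, shortest = 0, SENTINEL_MAX
    for n in word_counts:
        longest = max(longest, n)
        shortest = min(shortest, n)
    return longest, shortest

assert verbosity_stats([12, 40, 7]) == (40, 7)
assert verbosity_stats([]) is None  # no logs: report nothing, not sentinels
```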

I noticed a post in the secret Senate forum a short time ago with a post time of 0946 18/7/17, yet the actual time was 0926 18/7/17. Is this a known fault, or just one of those mysterious computer things that we should ignore until the new forum software is up and running?