I would have preferred a simple "good idea" or "we'll look into that" to them just ignoring my idea. I guess if they don't want a smoothly running system and timely PQs, then they can do whatever they want... but one would think that they might be a bit nicer to the paying members and consider their ideas.

I don't think StarBrand speaks for Groundspeak.

Personally, I want to get the latest logs. I just wish that you could select a shorter period of time than 7 days, so that the number of updated caches stays under 500.

Quote:
"I would have preferred a simple 'good idea' or 'we'll look into that' to them just ignoring my idea. I guess if they don't want a smoothly running system and timely PQs, then they can do whatever they want... but one would think that they might be a bit nicer to the paying members and consider their ideas."

"No, you can't always get what you want
You can't always get what you want
You can't always get what you want
And if you try sometime you find
You get what you need"

_________________
Tupperware doesn't belong in the kitchen!

By selecting the last 7 days we would at least get smaller queries returned compared with what we get now. No sense in pulling what hasn't changed...

I also think that if you pull with the last-7-days option, you could probably combine some of the queries, especially the ones for the older caches, since they are no longer hit as frequently.

_________________
Hmm...

In the end, I can keep doing what I'm doing. I just thought that this was a simple and easy way to A) make what many users are doing easier, B) save strain on the GC servers, and C) make PQs faster to run.

Quote:
"In the end, I can keep doing what I'm doing. I just thought that this was a simple and easy way to A) make what many users are doing easier, B) save strain on the GC servers, and C) make PQs faster to run."

I'm guessing you don't know how much of a strain (or just the opposite) this may cause. I'm guessing that each additional 'filter' you add to the query adds processing time. It may reduce the size of the file when you're done, but it may be cheaper in processing time to apply fewer filters and send you more data. gc.com's problem doesn't appear to be bandwidth; it appears to be processing time on the database server(s).

I would just be cautious not to trim it too close. One week might be a slow week (and thus you'll get fewer caches), but the next week may not be. I'm thinking I'll try reorganizing my queries this way, but only set them up so the dates give me about 400 per query, so I have room for expansion.

_________________
Hmm...

Quote:
"I would just be cautious not to trim it too close. One week might be a slow week (and thus you'll get fewer caches), but the next week may not be. I'm thinking I'll try reorganizing my queries this way, but only set them up so the dates give me about 400 per query, so I have room for expansion."

Good point. I knew there was a caveat to this. I was able to trim my pocket queries down from 11 to 5. I'll go back and adjust the dates.

_________________
Tupperware doesn't belong in the kitchen!
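For anyone trying the same reorganization: the date-splitting step is easy to sketch in Python. This is just an illustration — `split_by_placed_date` is a made-up helper, and it assumes you can export a sorted list of your caches' placed dates (e.g. from GSAK). The 400-per-range target matches the headroom suggested above.

```python
def split_by_placed_date(placed_dates, target=400):
    """Group sorted cache placed-dates into consecutive ranges of at
    most `target` caches each, so that one Pocket Query per
    (from, to) range stays safely under the 500-cache cap."""
    placed_dates = sorted(placed_dates)
    ranges = []
    for i in range(0, len(placed_dates), target):
        chunk = placed_dates[i:i + target]
        # Caveat: several caches placed on the same day can straddle
        # a boundary; in practice you'd nudge the split to a day edge.
        ranges.append((chunk[0], chunk[-1]))
    return ranges
```

Each returned (from, to) pair then becomes the placed-date filter of one PQ.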

Quote:
"In the end, I can keep doing what I'm doing. I just thought that this was a simple and easy way to A) make what many users are doing easier, B) save strain on the GC servers, and C) make PQs faster to run."

Quote:
"I'm guessing you don't know how much of a strain (or just the opposite) this may cause. I'm guessing that each additional 'filter' you add to the query adds processing time. It may reduce the size of the file when you're done, but it may be cheaper in processing time to apply fewer filters and send you more data. gc.com's problem doesn't appear to be bandwidth; it appears to be processing time on the database server(s)."

It's just a guess - I'm not a MySQL expert.

I'm a DBA for a fairly hefty database that requires 98% up-time, so I understand what you are saying, but I disagree. If they create an index on their update_date or status_date column and filter on that one column, the response time would be extremely fast, and it would reduce overhead. Granted, they also need to filter on distance as well, but because you are reducing the number of returned results, this would still be very fast.
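To illustrate the point — and to be clear, nobody here knows gc.com's actual schema, so the `caches` table, the `update_date` column, and the data below are all invented — here's what that single-column index buys you, using Python's sqlite3 as a stand-in engine:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE caches (id INTEGER PRIMARY KEY, update_date TEXT)")
conn.executemany(
    "INSERT INTO caches VALUES (?, ?)",
    [(i, (date(2005, 1, 1) + timedelta(days=i % 365)).isoformat())
     for i in range(10000)])

# The single-column index suggested above.
conn.execute("CREATE INDEX idx_update_date ON caches (update_date)")

# A "changed in the last 7 days" filter then becomes a range scan on
# the index instead of a full-table scan over every cache.
cutoff = (date(2005, 12, 31) - timedelta(days=7)).isoformat()
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM caches WHERE update_date >= ?",
    (cutoff,)).fetchall()
print(plan)  # the plan should mention idx_update_date, not a table scan
```

The distance filter would then only have to run over the small set of rows the index scan returns.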

Quote:
"I'm a DBA for a fairly hefty database that requires 98% up-time, so I understand what you are saying, but I disagree. If they create an index on their update_date or status_date column and filter on that one column, the response time would be extremely fast, and it would reduce overhead. Granted, they also need to filter on distance as well, but because you are reducing the number of returned results, this would still be very fast."

Then I stand corrected - thanks.

The other thing I recall reading recently is that they were hiring outside help to assist them with correcting DB problems. I wonder if that is still going on, and if we will see new features/optimizations when that happens?

OK, you've had an interesting discussion while I've been off working. So, some of you are going to use "updated" queries instead of pulling all the caches.

After you do this, how are you going to tell which caches have been archived? Archived caches don't show up in your PQ, and now caches that have not been updated won't show up in your PQ.

You're going to have a database full of archived caches that don't look any different from caches that have not been updated. How will you know which is which?

It doesn't matter for me because I'm not using GSAK. I wrote a program to load my queries into a MySQL database and mark caches as archived if they haven't been updated in more than 7 days. But if the cache is unarchived in the future, it will be updated to active in the database.

_________________
Tupperware doesn't belong in the kitchen!
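That mark-stale-as-archived approach is roughly this — a sketch only, with sqlite3 standing in for MySQL, and every table and column name here made up rather than taken from the poster's actual program:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE caches (gc_code TEXT PRIMARY KEY, "
             "status TEXT, last_seen TEXT)")

def load_pq(conn, gc_codes, pq_date):
    """Upsert every cache that appeared in a Pocket Query: mark it
    active and stamp when it was last seen. A later unarchive makes
    the cache reappear in a PQ, which flips it back to active."""
    conn.executemany(
        "INSERT INTO caches (gc_code, status, last_seen) "
        "VALUES (?, 'active', ?) "
        "ON CONFLICT(gc_code) DO UPDATE SET "
        "status = 'active', last_seen = excluded.last_seen",
        [(code, pq_date.isoformat()) for code in gc_codes])

def mark_stale_as_archived(conn, today, days=7):
    """Anything absent from every PQ for more than `days` days is
    presumed archived, since archived caches stop showing up."""
    cutoff = (today - timedelta(days=days)).isoformat()
    conn.execute("UPDATE caches SET status = 'archived' "
                 "WHERE last_seen < ?", (cutoff,))

# Two PQ loads nine days apart, then the staleness sweep: GC1 never
# reappears in a query, so it gets marked archived.
load_pq(conn, ["GC1", "GC2"], date(2005, 1, 1))
load_pq(conn, ["GC2"], date(2005, 1, 10))
mark_stale_as_archived(conn, today=date(2005, 1, 10))
```

Note this inherits the limitation raised above: a cache that simply hasn't changed looks the same here as an archived one. The scheme accepts that and self-corrects whenever the cache next shows up in a query.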