Scenario: A single version 12 server is running 100 separate databases, all started at the same time (or as fast as 100 START DATABASE statements can run), and all lightly loaded (let's say "completely idle").

Idle databases take checkpoints every 20 minutes or so, and when the 20 minutes are up all heck breaks loose, performance-wise, as all 100 databases take their checkpoints one after the other.

If something ELSE is running, say on database 101, or even some other non-database process, you can just forget about latency, throughput, or whatever other measure of performance you're interested in... if the computer isn't a super-high-performer, it will be crushed for a minute or two, maybe more.

So... all I want is some method to get those 100 databases on separate checkpoint schedules, something cheaper than "upgrade the hardware".

Two candidates come to mind:

- setting the checkpoint_time option to (group-wise) different values for each database, or

- running explicit CHECKPOINT statements within each database at different times (in the expectation that this will influence the next "automatic" checkpoint), say via an event whose schedule start time (or frequency) is calculated from the database start time and the database number?
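For the first option, here is a minimal sketch. The minute values are arbitrary examples (CHECKPOINT_TIME is specified in minutes), and each statement would be run while connected to the corresponding database:

```sql
-- Option 1 sketch: give each database (or group of databases) its own
-- CHECKPOINT_TIME so their idle checkpoints drift apart over time.
-- The values are arbitrary; run each in the matching database.
SET OPTION PUBLIC.CHECKPOINT_TIME = 18;      -- database 1
-- SET OPTION PUBLIC.CHECKPOINT_TIME = 19;   -- database 2
-- SET OPTION PUBLIC.CHECKPOINT_TIME = 20;   -- database 3, and so on
```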

Wide variations in response time for ALL sybase.com processes are a way of life... what, you're not used to it yet?

As far as this forum is concerned, if you want a break from productive work but you like gambling, try the "Search" button: sometimes it takes a few seconds (too long, but not stupid long), sometimes over 30 seconds (that's just craaaazy)... and you can get the two experiences back to back, like a slot machine.

Anyway, based on years of experience, sybase.com performance is not going to become stably adequate any time soon. Folks do respond if you complain loudly enough, but the fixes never stick for very long, and you just end up getting a reputation as a whiner... don't let that happen to you!

Volker alluded to a solution in his comment on the question... but here is the idea in more detail:

Create a database start event in each of the databases that delays a variable amount of time and then executes an explicit checkpoint. The idea is to spread the checkpoints out evenly across the ~20 minute regular checkpointing interval.
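A sketch of such an event, assuming this is database number 42 of the 100; each database would get its own hard-coded offset (100 databases spread across a 20 minute window works out to 12 seconds apiece, so database 42 waits 42 * 12 = 504 seconds):

```sql
-- Sketch only: DatabaseStart event that waits a per-database offset,
-- then forces an explicit checkpoint. The delay '00:08:24' (504 s)
-- is this example database's unique slot in the 20 minute window.
CREATE EVENT stagger_checkpoint
TYPE DatabaseStart
HANDLER
BEGIN
    WAITFOR DELAY '00:08:24';   -- unique offset for this database
    CHECKPOINT;
END;
```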

In case all those databases share one common DBA account (or you know all of their DBA accounts), you could also use one particular database as a "checkpoint controller" and use remote data access to forward CHECKPOINT statements (via FORWARD TO) to each database in round-robin fashion.

That would avoid the need to alter each and every database... it would just need a remote server object - and possibly you could get by with one (or a few) remote servers and retarget them on the fly via ALTER SERVER ... USING :)
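A rough sketch of that controller idea, with made-up server names and connection strings (this uses FORWARD TO in its passthrough form; double-check the syntax details against your version's docs):

```sql
-- Sketch only: from the controlling database, define one remote
-- server object and retarget it at each database in turn.
-- Server name, host, port, and database names are hypothetical.
CREATE SERVER rmt CLASS 'SAODBC'
USING 'host=localhost:2638;dbn=db001';
-- (a CREATE EXTERNLOGIN may also be needed, depending on authentication)

FORWARD TO rmt;     -- start passthrough to db001
CHECKPOINT;
FORWARD TO;         -- end passthrough

-- point the same server object at the next database...
ALTER SERVER rmt USING 'host=localhost:2638;dbn=db002';

FORWARD TO rmt;
CHECKPOINT;
FORWARD TO;
```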