As a school assignment, I need to find a way to determine the L1 data cache line size, without reading config files or using API calls. I'm supposed to use memory read/write access timings to analyze and infer this information. So how might I do that?

In an incomplete try for another part of the assignment, to find the levels & size of cache, I have:

for (i = 0; i < steps; i++) {
    arr[(i * 4) & lengthMod]++;
}

I was thinking maybe I just need to vary the stride, the (i * 4) part on line 2? So once I exceed the cache line size, the line might need to be replaced, which takes some time? But is it that straightforward? The required block might already be in cache somewhere? Or perhaps I can still count on the fact that, if I have a large enough steps, it will still work out quite accurately?

You can use assembly and the CPUID instruction (it's a processor instruction, not an API call) to get this information. I know you are probably not looking for a solution like this one, but I think it's worth sharing anyway...
– Hugo Corrá, Oct 5 '12 at 14:31

This question might give you some ideas. It doesn't measure the cache sizes, but it does show significant performance drops at each cache level.
– Mystical, Oct 12 '12 at 0:04

7 Answers

Allocate a BIG char array (make sure it is too big to fit in L1 or L2 cache). Fill it with random data.

Start walking over the array in steps of n bytes. Do something with the retrieved bytes, like summing them.

Benchmark and calculate how many bytes/second you can process with different values of n, starting from 1 and counting up to 1000 or so. Make sure that your benchmark prints out the calculated sum, so the compiler can't possibly optimize the benchmarked code away.

When n == your cache line size, each access requires reading a new line into the L1 cache, so the benchmark should slow down quite sharply at that point.

If the array is big enough, by the time you reach the end, the data at the beginning of the array will already be out of cache again, which is what you want. So after you increment n and start again, the results will not be affected by having needed data already in the cache.

HW prefetching will probably detect the constant stride of n and load ahead of you.
– auselen, Oct 1 '12 at 21:21


I think this idea should work; however, try to take the n-byte steps in a random order to avoid prefetching, something like n + (r * c), where c is a power of two larger than any possible cache line size and r is a random value. You need to make sure that n + (r * c) stays inside your array, probably by using modulo.
– auselen, Oct 2 '12 at 5:42


I guess it is also fair to make some assumptions, like: the cache line size must be a power of two, at least 32 bytes and at most 512 bytes.
– auselen, Oct 2 '12 at 5:43


Your data array is only 32 KB, so the whole thing fits in L1 cache. Please pay attention to the first thing I said above: "Allocate a BIG char array (make sure it is too big to fit in L1 or L2 cache)."
– Alex D, Oct 5 '12 at 19:21


@JiewMeng, I'm not posting my code right now, because you will learn more by figuring out how to fix your code yourself. After you figure it out, I can send you my code for comparison.
– Alex D, Oct 6 '12 at 4:20

Have a look at Calibrator; the work is copyrighted, but the source code is freely available. The idea its documentation describes for calculating cache line sizes sounds much more principled than what has already been said here:

The idea underlying our calibrator tool is to have a micro benchmark whose performance only depends
on the frequency of cache misses that occur. Our calibrator is a simple C program, mainly a small loop
that executes a million memory reads. By changing the stride (i.e., the offset between two subsequent
memory accesses) and the size of the memory area, we force varying cache miss rates.

In principle, the occurrence of cache misses is determined by the array size. Array sizes that fit into
the L1 cache do not generate any cache misses once the data is loaded into the cache. Analogously,
arrays that exceed the L1 cache size but still fit into L2, will cause L1 misses but no L2 misses. Finally,
arrays larger than L2 cause both L1 and L2 misses.

The frequency of cache misses depends on the access stride and the cache line size. With strides
equal to or larger than the cache line size, a cache miss occurs with every iteration. With strides
smaller than the cache line size, a cache miss occurs only every n iterations (on average), where n is
the ratio cache-line-size / stride.

Thus, we can calculate the latency for a cache miss by comparing the execution time without
misses to the execution time with exactly one miss per iteration. This approach only works, if
memory accesses are executed purely sequential, i.e., we have to ensure that neither two or more load
instructions nor memory access and pure CPU work can overlap. We use a simple pointer chasing
mechanism to achieve this: the memory area we access is initialized such that each load returns the
address for the subsequent load in the next iteration. Thus, super-scalar CPUs cannot benefit from
their ability to hide memory access latency by speculative execution.

To measure the cache characteristics, we run our experiment several times, varying the stride and
the array size. We make sure that the stride varies at least between 4 bytes and twice the maximal
expected cache line size, and that the array size varies from half the minimal expected cache size to
at least ten times the maximal expected cache size.

I had to comment out #include "math.h" to get it to compile; after that, it found my laptop's cache values correctly. I also couldn't view the generated PostScript files.

I think you should write a program that walks through the array in random order instead of sequentially, because modern processors do hardware prefetching.
For example, make an array of int whose values are the index of the next cell to visit.
I wrote a similar program a year ago: http://pastebin.com/9mFScs9Z
Sorry for my English, I am not a native speaker.

I think it should be enough to time an operation that uses some amount of memory, then progressively increase the memory (operands, for instance) used by the operation.
When the operation's performance severely decreases, you have found the limit.

I would go with just reading a bunch of bytes without printing them (printing would hurt performance so badly that it would become the bottleneck). While reading, the timing should be directly proportional to the amount of bytes read until the data no longer fits in L1; then you will see the performance hit.

You should also allocate the memory once, at the start of the program, before you start timing.