Universe Is Code, Chaos Of Code

A cache is a small static RAM (SRAM) placed between the CPU and main memory; caches have also been called "slave memories". Main memory is generally dynamic RAM (DRAM) and has much more capacity than the cache. A cache hit occurs when the requested data can be found in the cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.

The cache keeps a copy of the most frequently used data from main memory, so reads and writes to that data are served by the cache. We only need to access the slower main memory for less frequently used data, because it is impossible to fit all of the data in the cache.
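The idea can be sketched as a toy software cache in Python (illustrative only, not real hardware): a dictionary acts as the fast cache in front of a slow data store, and repeated requests for the same address become hits after the first miss.

```python
# Toy software cache: a dict in front of a slow data store (illustrative only).
slow_store = {addr: addr * 2 for addr in range(100)}  # stands in for main memory
cache = {}                                            # stands in for the cache
hits = misses = 0

def read(addr):
    """Return the data at addr, serving from the cache when possible."""
    global hits, misses
    if addr in cache:          # cache hit: fast path
        hits += 1
    else:                      # cache miss: fetch from the slow store
        misses += 1
        cache[addr] = slow_store[addr]
    return cache[addr]

# A frequently used address misses once, then is served from the cache.
for _ in range(10):
    read(42)
print(hits, misses)  # → 9 1
```

The same pattern (one miss, then hits) is why keeping copies of frequently used data pays off.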

Relation Between CPU Speed & Memory Speed

The cache must run very fast, and its reaction time (latency) must be very short. For example, suppose main memory has a 100 ns access time. If the CPU had to wait on it for every access, its effective speed would be capped at 10 MHz, even if the CPU itself can run much faster. The memory's access time would need to be 1 ns to run the CPU at 1 GHz. Caching increases overall memory access speed because it makes the common case fast: most addresses are serviced by the cache, and we only need to access the slower main memory for less frequently used data.

What Is the Difference Between L1, L2 and L3 Cache in a CPU?
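The effect of the hit rate on effective memory speed can be checked with the standard average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The numbers below reuse the 1 ns and 100 ns figures from the text; the 95% hit rate is an assumed example value.

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: hit time plus the expected miss cost."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# A 1 ns cache in front of 100 ns main memory, with an assumed 95% hit rate:
print(amat(1.0, 0.05, 100.0))  # → 6.0 (ns on average, far better than 100 ns)
```

Even a modest hit rate moves the average access time much closer to the cache's speed than to main memory's, which is exactly "making the common case faster".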

Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in the CPU. Level 2 (L2) cache is often more capacious than L1; it may be located on the CPU or on a separate chip or coprocessor, so that it is not slowed by traffic on the main system bus. Level 3 (L3) cache is typically specialized memory that works to improve the performance of L1 and L2. It can be significantly slower than L1 or L2, but is usually double the speed of RAM. If we look at a CPU under a microscope, we see that most of the space is occupied by the L3 cache.

In multicore processors, each core may have its own dedicated L1 and L2 caches but share a common L3 cache. When an instruction is referenced in the L3 cache, it is typically promoted to a higher-level cache.

Set sizes range from 1 (direct-mapped) to 2^k (fully associative); a 1-way set-associative cache is the same as a direct-mapped cache. Larger sets and higher associativity lead to fewer cache conflicts and lower miss rates, but they also increase the hardware cost. Due to design and chipset constraints, the engineers who design a CPU architecture cannot easily change the cache's associativity, levels, or size. In practice, 2-way through 16-way set-associative caches strike a good balance between lower miss rates and higher costs. Nowadays, Intel uses 8-way to 12-way caches in its i-series CPUs (i3, i5, i7), but the exact sizes vary from CPU to CPU; more expensive CPUs generally have larger caches, and a larger cache improves the performance the user sees.

Examples of Hardware Caches
- SSD, HDD
- CPU
- GPU
- DSPs
- Some flash drives
- Translation lookaside buffer (TLB)
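How a set-associative cache maps an address to a set can be sketched as simple bit arithmetic (a simplified model; the block size and set count below are assumed example values): the low bits select the byte within the block, the next bits select the set, and the remaining high bits form the tag.

```python
def split_address(addr, block_size=64, num_sets=64):
    """Split a memory address into (tag, set index, block offset).

    Assumes block_size and num_sets are powers of two, as in real caches.
    """
    offset_bits = block_size.bit_length() - 1   # log2(block_size)
    index_bits = num_sets.bit_length() - 1      # log2(num_sets)
    offset = addr & (block_size - 1)            # byte within the block
    index = (addr >> offset_bits) & (num_sets - 1)  # which set to search
    tag = addr >> (offset_bits + index_bits)    # compared against stored tags
    return tag, index, offset

# Address 0x12345 with 64-byte blocks and 64 sets:
print(split_address(0x12345))  # → (18, 13, 5)
```

The cache then searches only the blocks of set 13 and compares their stored tags against 18; with higher associativity there are more blocks per set to compare, which is where the extra hardware cost comes from.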

Examples of Software Caches
- Web cache: used by web browsers, web proxy servers, and programs that use P2P networks.
- Disk cache: on Windows, the pagefile is a very good example of a software cache. It is created by the OS under certain conditions and is managed by the operating system kernel.

Inside of a Cache

A cache is divided into many blocks, each of which contains a valid bit, a tag for matching memory addresses to cache contents, and the data itself. Larger block sizes can take advantage of spatial locality by loading data not just from the requested address but also from nearby addresses into the cache. Associative caches assign each memory address to a particular set within the cache, but not to any specific block within that set.

Mehmet Çağrı Aksoy