
Cache Memory course

A computer has different types of memory that are distinguished by their access speeds. The faster a memory is, the more expensive it is and the smaller the amount of it available in the computer. The microprocessor's internal registers, integer and floating-point, form the memory it accesses fastest, but they are very few in number. Then come the RAM, the hard drives and finally the tapes, which are very slow but can store very large amounts of data.

Caches sit between the microprocessor's internal registers and the RAM. They are built from fast memory, but they are much smaller than main memory.

When discussing memory access speed, we must distinguish between latency and throughput. Latency is the time that elapses between the request for data and the arrival of the first item of data. Throughput then measures the rate at which data is transferred in the steady state, that is, once the latency has elapsed. For a hard disk, the latency is relatively long because the read head must first be positioned mechanically and then wait until the right sector passes under it.
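
As a rough illustration (the numbers here are assumptions chosen for the example, not values from the course), the total time to obtain a block of data is approximately

    total time ≈ latency + (amount of data) / throughput

For a disk with a 10 ms latency and a sustained throughput of 100 MB/s, reading 1 MB takes about 10 ms + 10 ms = 20 ms: latency accounts for half the total even though the transfer itself, once started, is fast.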

For some time now, there has been a growing gap between the speed of microprocessors and the speed of the dynamic memories used as the main memory of computers. SDRAM increases throughput but does little to reduce latency. To prevent the microprocessor from wasting time waiting for data from memory, caches built from faster static memory are interposed between the microprocessor and the main memory. The purpose is similar to that of disk caches, which speed up access to hard disks.
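
To see why interposing a cache pays off, here is a back-of-the-envelope example (the hit rate and access times are assumptions, not figures from the course). The average access time seen by the processor is roughly

    average access time ≈ hit time + miss rate × miss penalty

With a 1 ns cache hit time, a 95% hit rate and a 60 ns penalty for going to main memory, the average is about 1 + 0.05 × 60 = 4 ns, much closer to the speed of the cache than to that of the main memory.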

Table of contents

  • Memory Hierarchy
  • Principles of locality
  • Cache memory
  • Direct mapping
  • Set associative
  • Fully associative
  • Considerations regarding memory writes
  • Instruction cache and unified cache
  • Common definitions
  • Cache Organization
  • Side note: byte address, block address
  • Design code suitable for the cache
  • Rearrange loops to improve spatial locality
  • Use blocking to improve temporal locality (a short C sketch of the last two techniques follows this list)
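
The last two items above lend themselves to a brief illustration. The following is a minimal, self-contained C sketch that is not taken from the course PDF; the array size N, the tile size B and the function names are assumptions chosen for the example. The first function keeps the column index in the inner loop, so the accesses walk through memory with stride 1 and every cache line brought in is fully used (spatial locality); the second tiles a matrix multiplication so that each B x B block is reused while it is still resident in the cache (temporal locality).

#include <stdio.h>

#define N 512          /* matrix dimension (assumed value for illustration) */
#define B 64           /* tile size; must divide N and the tile should fit in the cache */

static double a[N][N], b[N][N], c[N][N];

/* Spatial locality: C stores m[i][j] row by row, so keeping j in the
 * inner loop walks through memory with stride 1 and uses every byte of
 * each cache line that is loaded. */
static double sum_row_major(double m[N][N])
{
    double total = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)     /* stride-1 accesses */
            total += m[i][j];
    return total;
}

/* Temporal locality: blocked (tiled) matrix multiply a += b * c.
 * Each B x B tile of b and c is reused many times while it is still
 * resident in the cache instead of being reloaded from main memory. */
static void matmul_blocked(void)
{
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = ii; i < ii + B; i++)
                    for (int k = kk; k < kk + B; k++) {
                        double r = b[i][k];
                        for (int j = jj; j < jj + B; j++)
                            a[i][j] += r * c[k][j];
                    }
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            b[i][j] = 1.0;
            c[i][j] = 2.0;
        }
    printf("sum of b: %.0f\n", sum_row_major(b));
    matmul_blocked();
    printf("a[0][0] after blocked multiply: %.0f\n", a[0][0]);
    return 0;
}

Compiled with any C compiler (for example gcc -O2), the blocked multiply usually runs noticeably faster than a naive triple loop on matrices that do not fit in the cache, because each tile is loaded once and then reused many times before being evicted.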

File type: PDF
Downloads: 568
Submitted on: 2016-12-31

Take advantage of this Cache Memory course to improve your computer architecture skills and to better understand how memory works.

This course is adapted to your level, as are all the other memory PDF courses, to further enrich your knowledge.

All you need to do is download the training document, open it and start learning about memory for free.

This tutorial has been prepared for beginners, to help them understand the basics of memory in computer architecture. After completing it you will have reached a moderate level of expertise, from which you can take yourself to the next level.

This tutorial is designed for students who are new to memory concepts but already have a basic understanding of computer architecture.

