
Computer Memory – CompTIA A+ 220-1101 – 2.9

Let’s have a look at computer memory.

What is Computer Memory?
In computing, memory is used to store data for immediate use in a computer or on related digital electronic devices. It is often referred to as primary storage or main memory.

Memory operates at a high speed compared to the computer’s storage, which is slower but less expensive and higher in capacity. The disadvantage with memory is that it is volatile, which means the data is lost when power is lost.

Memory is utilized by programs, disk caching, and write buffering. In modern operating systems, the computer will attempt to utilize any installed memory in the computer to make it operate as fast as possible.

To understand memory better, let’s have a closer look at how it works.

How Memory Works
Memory consists of transistors, capacitors, and other components with the sole purpose of storing data in the form of a one or a zero. There is a lot of complexity in how memory works, but when you break it down, it works a lot like a spreadsheet.

You don’t need to know the complexities of memory for the exam, but I will go through it a little bit so you get an understanding of what you are buying. Also, having a little knowledge helps to understand how new memory like DDR5 works.

For the computer to access data in memory, the process is essentially this: select the row, select the column, and read the data. Different memory modules will be faster than others at selecting the row and the column. Shown here are some example timings for a memory module. Unless you’re an enthusiast trying to get the best performance, you won’t need to worry about memory timings. Lower is generally better.

To access memory, the first step is to select the row; this is given by the third memory timing. Once the row is selected, the next step is to select data on that row. There is a delay before data on that row can be accessed; this is the second memory timing value. This delay applies to every access of that row, so if multiple bits of data on the same row are needed, each access pays it. If you change rows, the row timing applies again in order to open the new row, plus the delay to access data on that row.
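
To make the idea concrete, here is a simplified latency model in Python. The two-timing model and the cycle counts are illustrative assumptions only, not real module specifications; real modules have more timings than this.

```python
# Simplified sketch: opening a row costs the row timing, and every access on
# the open row pays the column delay. The cycle values are made up.
ROW_DELAY = 16      # cycles to open (select) a row - hypothetical value
COLUMN_DELAY = 16   # cycles before data on the open row can be read - hypothetical

def access_cycles(accesses):
    """accesses: list of (row, column) pairs; returns total cycles spent."""
    total, open_row = 0, None
    for row, _column in accesses:
        if row != open_row:        # changing rows: pay the row timing again
            total += ROW_DELAY
            open_row = row
        total += COLUMN_DELAY      # the column delay applies to every access
    return total

print(access_cycles([(1, 5), (1, 6), (1, 7)]))  # same row: 16 + 3*16 = 64
print(access_cycles([(1, 5), (2, 6), (3, 7)]))  # new row each time: 3*32 = 96
```

Staying on the same row avoids repeatedly paying the row timing, which is why access patterns make a difference to performance.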

There are also additional timings which determine how long you need to wait before more data can be accessed. Shown here are some example timings that you will find on a memory module. For the first three, lower is better; for the last one, higher is better, because it determines how often a refresh is performed on the data inside the memory. Memory is like a bucket. If the bucket is full of water, the value is a one; if the bucket is empty, it is a zero. You either fill or drain the bucket when you want to change the data; however, there is one problem. Memory is like a bucket with a hole in the bottom. Over time the water leaks out, the bucket becomes empty, and the charge is lost. Memory essentially loses its charge if it is not refreshed at regular intervals.
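
The leaky-bucket idea can be sketched in a couple of lines. The 64 ms figure is the commonly used DRAM refresh window; treating a cell as simply valid or invalid is, of course, a big simplification.

```python
# Sketch of the leaky-bucket idea: a DRAM cell's charge decays, so it must be
# refreshed (re-written) before the value becomes unreadable.
REFRESH_INTERVAL_MS = 64   # typical window within which every row is refreshed

def cell_still_valid(ms_since_refresh):
    # The stored charge is only trusted while we are inside the refresh window.
    return ms_since_refresh < REFRESH_INTERVAL_MS

print(cell_still_valid(10))    # True  - charge still high enough to read
print(cell_still_valid(200))   # False - refresh was missed, data is lost
```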

This is a pretty simple explanation of the timings of memory modules and does not take into account other factors like the speed of the memory. Unfortunately, to work out the effective speed of a memory module, there are a few things to take into consideration and some math involved. We have another video which goes into this process in a lot more detail.

We now have a basic understanding of how memory works. You don’t need to remember any of this information, but it does help later on in understanding how DDR5 has changed things. I will now look at a few basic terms used when talking about memory.

DRAM (Dynamic Random-Access Memory)
Dynamic Random-Access Memory or DRAM is the most commonly used memory type today. It is fast and is accessed dynamically. The only downside is that it needs to be refreshed periodically, otherwise data is lost.

Typically, DRAM is integrated onto a single chip. For example, in this memory module, you can see there are eight DRAM chips visible. A single chip can work in isolation, but in the case of memory modules, they work together in parallel. Before we look at that, we first need to look at something else.

SDRAM (Synchronous DRAM)
Memory used with computers nowadays will typically be SDRAM. SDRAM stands for Synchronous DRAM. You generally won’t hear the term Synchronous DRAM used as it will just be called DRAM. To understand what this means, let’s first consider what Asynchronous RAM is.

To understand how it works, consider that memory is like a big warehouse. In order to get data out of the warehouse, a worker needs to get it for you. So, you ask the worker to do this.

In the old days, this was the way computers would operate. The CPU would request data and then wait for it to be returned. This would be like the worker saying, “Wait here while I get it for you.” The CPU had no way of knowing how long the data would take to get and thus the CPU just had to wait. This was not an efficient way of doing things.

Now let’s consider Synchronous DRAM, called SDRAM. This time when the data is requested, rather than asking you to wait, the warehouse worker gives you a time to come back. When you come back at the specified time, you can be assured that the data requested will be ready. The CPU no longer has to wait for the data to be retrieved. It can get on with other things, knowing that the data will be there when it returns. You can see why SDRAM is used nowadays; it is just a better way of doing things.
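
As a rough illustration, the difference can be sketched like this. The five-cycle latency is a made-up number and real memory controllers are far more sophisticated; the point is only that a known, clock-synchronised latency lets the CPU schedule other work instead of stalling.

```python
# Asynchronous style: the requester stalls until the data arrives.
# Synchronous style: the latency is a known number of clock cycles, so other
# work can be scheduled and the data collected on the agreed cycle.
ACCESS_LATENCY = 5   # hypothetical cycles for the memory to return data

def asynchronous_read(start_cycle):
    # Nothing else happens while waiting for the data.
    ready_at = start_cycle + ACCESS_LATENCY
    return ready_at, ["CPU idle"] * ACCESS_LATENCY

def synchronous_read(start_cycle):
    # The CPU knows the data will be ready at a specific cycle and fills the
    # gap with other work.
    ready_at = start_cycle + ACCESS_LATENCY
    other_work = [f"other work at cycle {c}" for c in range(start_cycle, ready_at)]
    return ready_at, other_work

print(asynchronous_read(100))
print(synchronous_read(100))
```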

Now let’s have a look at what else has been done to improve performance.

Double Data Rate (DDR)
In order to improve performance, computers generally now use Double Data Rate or DDR memory. To understand how this works, let’s first consider Single Data Rate. In order to keep the computer and memory in sync, a clock signal is used.

Using Single Data Rate, data is sent at one point in the clock signal. This means there is a direct relationship between clock speed and performance.

If you have not guessed it already, Double Data Rate transmits at two points in the clock signal, essentially doubling the amount of data that can be sent in the same clock cycle. The first DDR memory was released back in 1998, and all modern computers now use DDR memory.
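
A quick worked comparison makes the difference clear. The 800 MHz figure is just an example I/O clock (it happens to correspond to DDR3-1600).

```python
# Single data rate transfers once per clock cycle; double data rate transfers
# on both the rising and falling edge, doubling the transfer rate.
io_clock_mhz = 800                 # example I/O clock

sdr_transfers = io_clock_mhz * 1   # one transfer per clock cycle
ddr_transfers = io_clock_mhz * 2   # two transfers per clock cycle

print(f"SDR: {sdr_transfers} MT/s, DDR: {ddr_transfers} MT/s")
# SDR: 800 MT/s, DDR: 1600 MT/s
```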

Now let’s have a look at some of the other terminology that is used concerning memory.

Dual Inline Memory Module (DIMM)
When talking about memory, you will hear the term Dual Inline Memory Module or DIMM. A DIMM is a memory module with two independent rows of electrical contacts, one on each side of the circuit board, and it can have memory chips mounted on one or both sides. Since each chip can work in parallel with the others, having more chips increases the performance of the memory module.

Now that we have covered enough theory about memory, let’s have a look at some.

Double Data Rate 3
The exam objectives list three different types of memory; these are all DDR memory. The first is generation three, known as DDR3. There were other generations before this, but they are not listed as exam objectives. With each new generation, improvements are made which increase the speed, performance, and amount of memory supported.

In the case of DDR3, the memory speed is 800 to 2133 Mega Transfers per second (MT/s). Mega Transfers per second means millions of transfers per second. This unit of measurement came about due to technologies like DDR being able to transfer two items in one clock cycle. Traditionally, the clock rate was a good measure, but nowadays it does not give a good indication of how much data is actually being transferred, since in this case the data transfer rate is double the clock rate.

Transfers per second is also used to describe how much data is transferred over other technologies such as a bus. Some buses will have parity or other data added. This overhead effectively reduces the amount of real data that can be transferred. Transfers per second gives you a real indication of how much effective data can be transferred once the overhead is removed. Be careful when comparing different products to make sure you are looking at a common unit of measurement. Some manufacturers will list data transfer rates including the overhead to make you think a device will perform faster than it actually does.

In the case of DDR3, the transfer rate is 6.4 to 17 Gigabytes per second. The figure listed here does not include overhead. As new generations of DDR are released, the clock rate increases as well. I have not included the clock rate because, as we will see shortly, there is a lot to consider when working out how fast memory will perform. When comparing different memory modules, it is generally easier, when possible, to just look at the data rate to see how the memory will perform.
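
The data rate follows directly from the transfer rate if you assume the usual 64-bit (8-byte) wide memory data bus. A rough calculation, with the bus width as the only assumption:

```python
# Peak data rate = transfers per second x bytes per transfer.
def peak_gb_per_s(mega_transfers, bus_width_bytes=8):   # 64-bit bus assumed
    return mega_transfers * bus_width_bytes / 1000      # MB/s -> GB/s

print(peak_gb_per_s(800))    # ~6.4 GB/s  (DDR3-800)
print(peak_gb_per_s(2133))   # ~17.1 GB/s (DDR3-2133)
print(peak_gb_per_s(3200))   # ~25.6 GB/s (DDR4-3200)
```

The same arithmetic reproduces the DDR4 and DDR5 figures quoted later in this video.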

With DDR3, the maximum size of one memory module is 32 Gigabytes. There are not many memory modules available at that size. Also, you should consider whether a memory module of that size is supported by the motherboard that you are using. Not all motherboards will support memory modules that large.

When looking at different memory modules, each will support a maximum clock rate. If your motherboard has a lower clock rate, the memory module will reduce its clock rate to match that of the motherboard. Memory timings also need to be adjusted for that clock rate to get good performance. Let’s have a look at how this is achieved.

Serial Presence Detect (SPD)
When a computer is switched on, it performs a power-on self-test which includes configuring hardware. The computer and the memory module need to agree on a clock rate and memory timings. To allow this, there is a small chip on the memory module, called the Serial Presence Detect or SPD chip, that contains configuration data referred to as ‘profiles’.

The manufacturer may provide multiple profiles for the memory module. Generally, high-performance memory modules will contain multiple profiles, while lower-performance memory may not. To see the profiles your memory modules support, you can use the free software CPU-Z. In this example, you can see the different profiles at the bottom under the SPD tab.

In your computer setup, you may have the option to select which profile you want to use. Some setups will also have the option to configure your own settings. Normally, you will only need to plug the memory module in and it will be configured automatically. If you want to get more performance out of your memory, you may want to change these settings; however, this increases the risk of data corruption. If you are having problems with the memory and can’t afford to replace it, you can try relaxing (increasing) the timings. This may make the memory more stable.
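
As a hedged sketch of what happens at the power-on self-test: the firmware reads the profiles stored on the SPD chip and runs the memory at the fastest setting the platform supports. The profile data and motherboard limit below are entirely made up for illustration.

```python
# Hypothetical SPD profiles stored on a memory module (made-up values).
spd_profiles = [
    {"name": "Profile 1", "mt_per_s": 4800, "timings": "40-40-40"},
    {"name": "Profile 2", "mt_per_s": 5600, "timings": "36-36-36"},
    {"name": "Profile 3", "mt_per_s": 6000, "timings": "40-40-40"},
]

motherboard_max_mt = 5600   # assumed limit of the motherboard/CPU

# Pick the fastest profile the platform can actually run; faster modules are
# clocked down to match the motherboard.
usable = [p for p in spd_profiles if p["mt_per_s"] <= motherboard_max_mt]
chosen = max(usable, key=lambda p: p["mt_per_s"])
print(f"Running {chosen['name']} at {chosen['mt_per_s']} MT/s ({chosen['timings']})")
```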

Let’s now look at the next generation of DDR.

Double Data Rate 4 (DDR4)
DDR4 improved on DDR3 in that it offered higher transfer rates, lower voltage, and greater memory density. You will see this trend with new generations of DDR. The main objectives are to increase storage and performance.

The memory speed of DDR4 increased to between 1600 and 3200 Mega transfers per second. The data rate increased to between 12.8 and 25.6 Gigabytes per second. The maximum size of memory modules also increased to 64 Gigabytes. Some manufacturers have created 256 Gigabyte memory modules which are intended for enterprise use. You probably won’t see memory modules of that size available for sale through regular outlets.

The last memory I will look at is DDR5, but before I do that, there is another topic I need to look at.

Error Correcting Code (ECC)
Another type of memory available is Error Correcting Code or ECC. ECC has nine chips on one side rather than eight. On this memory module there is an extra chip in the middle – I will cover that in a moment.

ECC, as its name implies, can correct a single flipped bit in a word of data. Although modern memory modules are generally reliable, there is still a small risk that data can become corrupted. For business applications such as banking systems, it is not worth taking this small risk if it can be avoided. Thus, you will generally find ECC memory used in business applications but rarely in home systems.

In the case of multiple bits being flipped, ECC can detect but not correct them. The specific ECC implementation determines how many flipped bits it can detect. When ECC detects this, it will generate a stop error, which is essentially a blue screen error. The logic here is that it is better to reboot the computer and start again rather than continue to run with corrupted memory.
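
To give a feel for how single-bit correction works, here is a minimal Hamming(7,4) sketch. Real ECC DIMMs use a wider code (typically a SECDED code over each 64-bit word, which also needs an extra overall parity bit to detect double-bit errors); this small version only shows the correction side of the idea.

```python
# Minimal Hamming(7,4) sketch: 4 data bits protected by 3 parity bits.
# A single flipped bit can be located and corrected from the parity checks.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4                       # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                       # covers codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                       # covers codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]     # codeword positions 1..7

def decode(codeword):
    c = codeword[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3         # position of the flipped bit, 0 = none
    if syndrome:
        c[syndrome - 1] ^= 1                # correct the single flipped bit
    return [c[2], c[4], c[5], c[6]]         # recovered data bits

data = [1, 0, 1, 1]
stored = encode(*data)
stored[5] ^= 1                              # simulate one bit flipping in memory
print("corrected:", decode(stored) == data) # True - the error is fixed transparently
```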

In order to use ECC memory, it requires a motherboard and CPU that support it. You will generally find ECC memory used in servers and high-performance workstations. For general computing you will find the motherboard does not support ECC memory. ECC costs more than non-ECC memory, and this is the main reason you don’t find it in low-end and home computers.

Now, let’s have a look at the extra chip in the middle of the memory module. This chip is a buffering chip. All the chips on the memory module work in parallel with each other. The buffer chip takes all the data from the other chips and combines it together. This makes the memory module a little more reliable when transmitting data and allows error checking to be performed. There is a cost to this, and it adds a little overhead. Depending on the CPU and the implementation of ECC, the process of checking the data may be passed to the CPU to perform.

When a memory module has this extra chip, it is commonly called a Registered DIMM. The logic behind the name is that the data from the chips is stored in registers before being transferred to the computer. This may also be called buffered memory.

The chip will be present only on one side of the memory module. If no chip is present, the memory is unbuffered. The vast majority of ECC memory modules on the market will be buffered. There are, however, some that are not. All non-ECC memory modules are unbuffered nowadays. Previously there was buffered memory that was non-ECC, but it never took off.

The key thing to remember about ECC memory is that it will only work in motherboards that support ECC. ECC and non-ECC memory both use the same connectors, so if you get them mixed up and plug them into the wrong motherboard, they simply will not work. Furthermore, buffered and unbuffered memory are not interchangeable. Some motherboards may have slots for both, but generally it is one or the other.

Now that we have had a look at ECC, we can now have a look at DDR5.

Double Data Rate 5 (DDR5)
As before, DDR5 offers increased capacity and decreased power use, and it also increases the data rate. DDR5 has a memory speed of 4800 to 6400 Mega Transfers per second. This makes for a data transfer rate of 38.4 to 51.2 Gigabytes per second. The maximum size, for the moment, is 512 Gigabytes for a memory module. This may increase later on; only time will tell.

You may have noticed that there is a whole heap of chips in the middle of the memory module. This is different to the other memory modules we have looked at so far.

DDR5 Voltage Regulators
In order to get higher speeds, DDR5 has changed the way power is delivered to each of the DRAM chips. Previously, voltage was delivered directly to each DRAM chip from the motherboard. With DDR5, voltage is delivered to voltage regulators on the memory modules. Now we know what all those extra chips on the memory module are for! Each manufacturer is free to design their own methods for voltage regulation. It is possible that your memory modules could have a different number of chips or different types of chips.

The voltage regulators are designed to provide consistent power to the DRAM chips. The reason they are on the memory module is to reduce the distance to the DRAM chips. A shorter distance means less voltage is required. If the distance were longer, there would be more chance of signal loss or corruption, and the easiest way to increase signal quality is to increase the voltage. Increasing the voltage, however, means more heat.

When they came up with DDR5, they needed a way to increase the transfer rate; however, increasing the transfer rate will also increase the heat. Increasing the heat causes a whole heap of problems, thus ideally you want to reduce heat. The simplest way to do this is to reduce the voltage.

Thus, with DDR5, by using voltage regulators they were able to reduce the distance the voltage had to travel. This increased signal quality which, in turn, allowed them to reduce the voltage. Reducing the voltage reduces the heat and, thus, is the reason DDR5 can run at such a high transfer rate.

If you want to change the voltage for overclocking, this will require memory modules that support it. You will be able to change memory timings, which will give you some control, but voltage is controlled by the voltage regulators on the memory modules. Before DDR5, this function was controlled by the motherboard. If your memory modules don’t support changing the voltage, this will limit what you can achieve with overclocking.

DDR5 allows much higher storage amounts than DDR4, but that causes some problems. Let’s have a look.

DDR5 Includes ECC
With DDR5, the memory cells inside the DRAM are more densely packed than with previous DRAM. Having the memory cells more densely packed means there is an increased chance of defects and errors. In order to make DDR5 more reliable, ECC is built into each DRAM chip.

The ECC inside each chip can correct a single flipped bit. It can also detect multiple flipped bits; however, it can’t correct them. So far, this sounds just like ECC memory; however, there is a difference.

DDR5 also comes with full ECC. The difference here is that there is an extra chip which checks the final output. This means the final output to the CPU is checked. Single-bit errors can be corrected before they reach the CPU, while multiple-bit errors can be detected but not corrected. This is different to standard DDR5, which only checks at the single chip level. Since DDR5 modules already have voltage regulator chips and the extra ECC chip can be quite small, it may be difficult to tell standard DDR5 memory from full ECC DDR5 memory just by looking at it.

It is more likely that memory problems will happen at the chip level; however, it is still possible for them to occur in other places. Thus, there is still a need for DDR5 to have full ECC and we will most likely still see it used in servers.

Having some ECC by default helps with reliability when memory is more densely packed. The speed has also increased, which causes some problems. Let’s have a look.

DDR5 Banks
DDR memory has its data grouped into banks. Essentially, it is like having multiple big Excel spreadsheets filled with ones and zeros. To improve performance, DDR5 increased the number of banks to eight. Previously, DDR4 had four banks.

The reason that banks are used is that there is a wait period before the same bank can be accessed again. With DDR5, the speed is so fast that with only four banks, a bank would not have enough time to refresh before being needed again; thus, eight banks were required. Having eight banks gives each bank enough time to refresh before being accessed again.
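
A back-of-the-envelope calculation shows the idea. The two figures below are made up purely to illustrate the arithmetic; they are not real DDR timings.

```python
# If a bank is busy (recovering/refreshing) for a while after each access, you
# need enough banks that by the time you cycle back to one, it is ready again.
import math

bank_busy_ns = 40          # hypothetical time a bank is unavailable after an access
request_interval_ns = 5    # hypothetical time between successive accesses

banks_needed = math.ceil(bank_busy_ns / request_interval_ns)
print(f"Need at least {banks_needed} banks to avoid stalling")   # 8
```

As the request interval shrinks with faster memory, the number of banks needed to hide the recovery time grows, which is the pressure that pushed DDR5 to more banks.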

There is one more change that DDR5 makes to improve performance.

DDR5 Bus
The bus used to transfer data from the memory module to the memory controller has been divided into two 32-bit buses. Previously, it was one 64-bit bus. The buses can work independently of each other, but this is not the same as dual channel. I will cover dual channel in a later video, but essentially dual channel allows two memory modules to be combined to give increased performance. In this case, there are two buses, but we don’t have any control over them.

DDR Compared
Shown here are the different DDR generations that I have looked at in this video. You will notice that with each generation the speed and capacity increased. You will also notice that each memory module has a notch in it. With each generation, the notch is in a different spot. This notch prevents the memory module from being inserted into the wrong slot or the wrong way around.

Before ending this video, there is one more memory module that I need to look at.

Small Outline DIMM (SO-DIMM)
Small Outline DIMM or SO-DIMM is a smaller version of other memory modules. For each generation of memory modules there is a SO-DIMM version. Since they are smaller, the maximum capacity of these modules is less than the larger versions. SO-DIMMs also have fewer chips on them, which means less parallel processing. Thus, a SO-DIMM will generally not perform as well as the larger versions. Normally, when you make things smaller in computing there is a trade-off; in this case, it is maximum capacity and performance. Keep in mind that a good quality SO-DIMM may still outperform a low-quality full-sized memory module.

End Screen
That concludes this video from ITFreeTraining on computer memory. I hope you have found it informative and I hope to see you in more videos from us. Until the next video, I would like to thank you for watching.

References
“The Official CompTIA A+ Core Study Guide (Exam 220-1101)” pages 66 to 70
“Mike Myers All in One A+ Certification Exam Guide 220-1101 & 220-1102” pages 121 to 137

Credits
Trainer: Austin Mason https://ITFreeTraining.com
Voice Talent: HP Lewis http://hplewis.com
Quality Assurance: Brett Batson https://www.pbb-proofreading.uk
