Storage Devices
In this video from ITFreeTraining, I will have a look at storage devices. Storage devices are what hold our customers’ data. Our customers rely on this storage to help keep their data safe. Loss of data may result in lost productivity, or the work may never be able to be recovered; thus, it is important to keep it safe.
Hierarchy of storage
Before I start having a look at storage devices, I will first have a look at the hierarchy of storage. Understanding this will give you an idea of what you are trying to achieve with each of the different storage types.
To start with, I will look at primary storage. Primary storage is used to run software and store data that the computer needs to access quickly. There is a small amount found in the CPU in the form of registers and cache. Registers are small storage locations inside the CPU that hold the values it is currently working on. Cache is a fast second copy of some of the computer’s memory, used to improve performance.
This memory may be fast; however, when the power is switched off, the data is lost. Thus, this storage is often referred to as volatile.
The next storage I will look at is secondary storage. Secondary storage is in the form of your hard disk drives, optical drives, and flash drives. Essentially this is storage that is easily accessible by the computer. Even if the storage is not always online, such as a CD-ROM or flash drive, it can still be made accessible very quickly.
The advantage of this storage is that it keeps its data when the power is switched off. Also, price-wise, this storage costs less than primary storage, so essentially you can have more of it. The disadvantage is that this storage is slower than primary storage.
Since secondary storage is slow compared with primary storage, it is difficult to work with directly, so data is generally copied into the computer’s memory to be worked on and written back as required.
The last storage I will look at is tertiary storage. Tertiary storage is offline storage. In the old days it was quite common to have a large room full of tapes or other storage media. This was generally used for physical long-term backups. You don’t tend to see this as much nowadays. A lot of companies, in order to protect their data, will use an off-site storage location in case the primary site is destroyed by an environmental disaster.
Having storage like this requires someone to be employed to fetch the storage media when required and load it into a drive. As technology improved, this manual process was replaced by robots. The robot retrieves the storage media and loads it into a device. Either way, there is a delay between the request being made and the storage media being loaded into a device so it can be accessed. As before, data from tertiary storage is transferred to and from the computer.
Nowadays, we are finding that a lot of these types of storage are being replaced by cloud storage. Cloud storage is always available and, since it is off-site, it protects the organization from data loss if the computers on-site are lost for any reason. There is also cloud storage that is only available at certain times or on request, for example Amazon Glacier. This storage can replace off-site backups and is a lot cheaper than storage that is always available online.
At the top of the hierarchy you get the fastest speed but the highest cost per byte of data. As you go down the hierarchy, the price per byte goes down, and so does the speed at which you can access the data. You can see the trade-offs between the different types of data storage. I will next have a look at the different types of storage devices that are available.
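Before doing that, here is a small Python sketch that puts very rough numbers on the hierarchy just described. The figures are broad, order-of-magnitude assumptions for illustration only; real values vary widely by product and generation and are not taken from the video.

```python
# Ballpark, order-of-magnitude figures for illustration only; real values
# vary widely by product and generation (all of these numbers are assumptions).
hierarchy = [
    # (level, typical access time, relative cost per byte)
    ("CPU register / cache (primary)", "~1-10 nanoseconds",  "highest"),
    ("Main memory (primary)",          "~100 nanoseconds",   "high"),
    ("SSD (secondary)",                "~0.1 milliseconds",  "medium"),
    ("Hard disk (secondary)",          "~10 milliseconds",   "low"),
    ("Tape / offline (tertiary)",      "seconds to minutes", "lowest"),
]

for level, access_time, cost in hierarchy:
    print(f"{level:35} access: {access_time:20} cost per byte: {cost}")
```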
Hard Disk Drives (HDD)
To start with, I will look at Hard Disk Drives (HDD). Hard disks were first developed in 1956. They started to be used a bit more in the 60’s and as time passed became very commonly used in computers.
The first hard disks were very large. As time went on, they got smaller and smaller. A typical hard disk nowadays is three and a half inches in size. For laptops or small devices, there is also a two and a half inch hard disk. This is just a smaller version of the larger hard disk. They are generally slower and have less storage capacity than the larger hard disks. Their small size makes them popular in laptops, and nowadays solid-state drives have replaced these smaller hard disks in a lot of cases.
Hard disks work by using magnetic storage to store data. Essentially, the media inside the hard disk is made of a magnetic material. The magnetic material can be read or it can be changed. This allows data to be stored on the hard disks for long periods of time even if the hard disk does not have power. Let’s have a closer look.
How Hard Disks Work
Although there are a number of different manufacturers of hard disks, the basic design remains the same. The hard disk contains one or more platters. Each platter is made of a strong material to give it strength and then has a thin magnetic layer applied. There may also be other layers applied to protect the platter from damage.
These platters are spun at high speeds. In order to read the data on the platter, a read-write head is used. This head will read data on the platters or modify it as required. In order to read or write the data, an actuator is used which moves the head across the platter.
This allows the head to read and write data anywhere on the platter. Now that we have a basic idea of how a hard disk works, let’s next have a look at what to look for when selecting a hard disk.
Hard Disk Specifications
The first consideration when purchasing a hard disk will most likely be its size, that is, the amount of data that can be stored on it. You can see in this graphic that hard disk capacity has been increasing as time goes on. Back in the 80’s and early 90’s, hard disk capacity was measured in Megabytes; it was not until the early 90’s that typical capacities went over 100 Megabytes. Nowadays hard disks mostly start in the Gigabytes. As time goes on, the minimum size keeps going up. Later in the video I will look at the reasons why. When this video was created, the largest hard disks on the market were over 20 Terabytes.
Hard disk technology keeps improving, and it is estimated that, with improvements in hard disk technology, one day we could have hard disks as large as 100 Terabytes on the market.
The next consideration is the transfer rate. This will be determined by a number of factors. Generally, the hard disk manufacturer will provide different data rates and will quote the maximum transfer rate that is achievable. Keep in mind that a hard disk requires a head to be moved across the platter to access the data. If the data is spread all over the platters, this process takes longer, as the head has to move across the platter to locate the data.
The time taken to move the head is referred to as the seek time. The longer the seek time, the longer it takes to locate data and thus the slower the average transfer rate. Keep this in mind, as a hard disk with a fast transfer rate but a slow seek time will perform poorly if the data on it is being accessed randomly, for example if the hard disk holds a lot of small files. If data is being accessed sequentially, for example when playing a large video file, the transfer rate will matter more than the seek time.
The next consideration is the speed and interface of the hard disk. The platters in the hard disk spin at a certain number of revolutions per minute, or RPM. When the head of the hard disk moves, it needs to wait in the required position until the data it wants to access passes under the head. The faster the hard disk spins, the less time it needs to wait for the area containing the data to come under the head. Also, if the hard disk is spinning faster, more data can potentially travel under the head in a given time, so the hard disk can transfer data faster.
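As a rough illustration, the average rotational latency is half a revolution, so it can be estimated directly from the RPM. The sketch below uses assumed example figures (a 9 ms average seek time and a 150 MB/s sustained transfer rate, not from any specific drive) to show how seek time and rotational latency dominate small random reads, while the transfer rate dominates large sequential reads.

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    # On average the platter has to turn half a revolution before the
    # requested data passes under the head.
    return 0.5 * (60_000 / rpm)   # 60,000 ms per minute / revolutions per minute

# Assumed example figures, not taken from any specific drive.
seek_ms = 9.0                 # average seek time
rpm = 7200
transfer_mb_per_s = 150.0     # sustained sequential transfer rate

rotation_ms = avg_rotational_latency_ms(rpm)          # ~4.17 ms at 7200 RPM

# Reading a small 4 KB file: dominated by seek + rotation.
small_read_ms = seek_ms + rotation_ms + (0.004 / transfer_mb_per_s) * 1000

# Reading a large 1 GB video sequentially: dominated by the transfer rate.
large_read_ms = seek_ms + rotation_ms + (1024 / transfer_mb_per_s) * 1000

print(f"Average rotational latency: {rotation_ms:.2f} ms")
print(f"4 KB random read:     ~{small_read_ms:.2f} ms (mostly seek + rotation)")
print(f"1 GB sequential read: ~{large_read_ms / 1000:.1f} s (mostly transfer)")
```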
It is harder to read data at faster speeds, so faster-spinning hard disks tend to have lower capacity. The hard disk will also have an interface that the computer uses to transfer data, and it is important to consider which one is used. The interface supports a maximum data transfer rate, and if it cannot keep up with the maximum speed of the hard disk, the hard disk will underperform. For example, if the hard disk is SATA 3 but the interface on the computer is SATA 2, the hard disk will be limited to SATA 2 speeds.
Keep in mind that faster spin speeds can also affect the reliability of the data on the hard disk and its life span. This brings us to the next consideration, which is reliability and data integrity. The hard disk manufacturer will give you some specifications that indicate how long the hard disk is likely to last. Some manufacturers design different hard disks for different purposes. For example, hard disks intended for light use may be designed to use less power, while hard disks that are constantly under load may be designed to last longer but cost a bit more.
If you are using a hard disk in a data center, you will most likely want a hard disk that is more reliable. For the home computer, saving a little bit of money may be more important to the buyer than the reliability of the hard disk. The manufacturer may also release a failure rate. This specification will give you an indication of the probability of the hard disk failing, and is generally given as a percentage per year.
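Reliability is often quoted either as a Mean Time Between Failures (MTBF) figure in hours or as an annualized failure rate. Under a simple constant-failure-rate assumption the two can be related roughly as in the sketch below; this is an illustration of the relationship, not any manufacturer’s method, and the MTBF values are assumed examples.

```python
import math

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Approximate probability of a drive failing within one year,
    assuming a constant failure rate (exponential model)."""
    hours_per_year = 24 * 365
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# Assumed example MTBF values for illustration only.
for mtbf in (600_000, 1_000_000, 2_000_000):
    afr = annualized_failure_rate(mtbf) * 100
    print(f"MTBF {mtbf:>9,} hours -> roughly {afr:.2f}% chance of failure per year")
```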
Before I look at other storage types, I will first have a look at the future of hard disks. This will give you an idea for your future planning and what you should consider using hard disks for.
Hard Disk Future
The first thing to consider about hard disks is the base cost per unit. This includes the case, the motor and the electronics. Essentially, this means that the basic parts of a hard disk come with a cost. Regardless of how much data the hard disk can store, this base cost does not change. You can see why you won’t see very small capacity hard disks on the market. If you are going to pay for the parts of a hard disk, you are going to at least put a decent amount of storage inside to offset the base cost. Given this design of hard disk has been around for a long time, it seems likely the basic design will remain the same; what will change is the technology inside it.
We are starting to reach some hard limits with how much data can be packed onto a single platter. To get around these problems, some other changes have to be made. One of the changes is filling the hard disk with helium rather than air. The advantage of this is that helium causes less turbulence and friction than air does. This means that more platters can be added to the hard disk allowing for greater capacity.
The downside of this is that helium is very difficult to contain. If you consider a helium balloon, the balloon will float because it is lighter than air. However, as time goes on, the helium will escape from the balloon and the balloon will no longer float.
You may be thinking, why not make the inside of the hard disk a vacuum? The reason is that the heads of the hard disk basically float above the platter. As the hard disk spins, the air or helium pushes the head upwards, causing it to float in close proximity to the platter. Without air or helium in the hard disk, the head would hit the platter and the drive would not work.
The other downside of helium-filled hard disks is that the extra difficulty in manufacturing them makes them expensive to build. For these reasons, you don’t see many of them on the market. Maybe in the future we will see more of them.
There are also other improvements we may see in the future. One promising approach is HAMR, which stands for Heat-Assisted Magnetic Recording. HAMR uses a laser to heat the platter before the head performs a write. Essentially, storing a one or a zero on the platter requires the magnetic material to be set to a particular orientation: one orientation for a one and another for a zero. Heating the platter beforehand allows the magnetic material to be set with more accuracy, which allows more data to be stored on the platter.
The downside of this approach is that it is expensive to build and has low reliability. Although the platter only holds the heat for a split second, long enough for the head to write to it, the process can damage the platter over time. Although it is a promising technology and some hard disks on the market use it, it looks like it is starting to be phased out, but only time will tell.
The next promising technology is MAMR. This stands for Microwave Assisted Magnetic Recording. This is essentially the same idea as HAMR; however, to heat the platter, a spin torque oscillator is used. This is cheaper and has greater reliability than using HAMR. As time goes on, technology improves, and we will have to see which technology ends up being used. You never know which way technology will go.
It appears the trend is toward larger-capacity hard disks, which are still the cheapest option, and with new technology it is unlikely this will change anytime soon. Some have predicted that there will be 100 Terabyte hard disks by 2030. Will this happen? Maybe; only time will tell.
Although hard disks have been in use for a long time, the seek time, that is the time it takes to move the hard disk head, has caused random access of data on a hard disk to be slow. The next technology that I will look at addresses this problem.
Solid-State Drives (SSD)
Solid-State Drives address the limitations of hard disk drives. Solid-state drives, or SSD’s, use integrated circuits, and since they use integrated circuits there are no moving parts. Generally, SSD’s use flash memory to store data. You can see here the inside of an SSD drive. There will be a number of chips that store the data. There will also be one or more additional chips, the memory controller(s), which control reading from and writing to the flash chips.
SSD’s entered the market in 1991 and were originally better for random reading, while hard disks still performed better for sequential reading. With improvements over the years, solid-state drives now outperform hard disks in both random and sequential reading.
There are some disadvantages; the cells that contain the data in a solid-state drive wear out the more they are used. Also, capacity wise, solid-state drives cost more per byte than hard disks do.
Essentially, the more you write to one cell in the SSD, the more chance it has of wearing out and no longer working. A modern SSD will attempt to even out the writing of cells, so all the cells are written to roughly equally; this is known as wear leveling. If a cell is written to significantly more than others, it will wear out faster.
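The wear-leveling idea can be sketched as always choosing the least-worn free block for each new write. The Python below is a highly simplified illustration of the concept only, not how any particular SSD controller actually works.

```python
# Simplified wear-leveling idea: track how many times each block has been
# erased and always write new data to the least-worn block.
erase_counts = {block: 0 for block in range(8)}   # 8 imaginary flash blocks

def pick_block_for_write() -> int:
    block = min(erase_counts, key=erase_counts.get)  # least-worn block
    erase_counts[block] += 1                         # writing wears it a little
    return block

for _ in range(20):
    pick_block_for_write()

print(erase_counts)   # wear ends up spread roughly evenly across all blocks
```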
The performance of SSD drives will slow down when they are almost full. This is not a problem with hard disk drives. The reason is that an SSD also stores data in blocks. When writing data to a block, the block may not be completely used. When an SSD has lots of free blocks, this is not a problem, since it simply puts the new data in an unused block. When all the blocks are partially full, this is not possible, so the SSD has to do some rearranging in order to write the new data.
Think of it like shelves on which you store items in boxes. When you have free boxes, you can just put the items in the boxes and place them on the shelves, even though this does not make efficient use of the space in the boxes. Sooner or later the shelves will get full, and the only way to get more items onto a shelf is to rearrange the items in the existing boxes to fit the new items in. This involves taking the boxes off the shelf, rearranging the items and putting the boxes back on the shelf.
To put this in SSD terms, an SSD with free blocks can write new data with a single write command. If the SSD is becoming full, it may need to read a block, add the new data to that block and then write the block back. That means a read and a write rather than just a write. Depending on how the data is laid out, it may take even more operations if data has to be rearranged across multiple blocks.
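To put rough numbers on this, the sketch below counts the flash operations needed for one small write in the two situations just described: a mostly empty drive with a free block available, and a nearly full drive that must read an existing block, merge in the new data, and write the whole block back. The block size and counts are illustrative assumptions only.

```python
# Illustrative model only: an SSD block holds several pages, and a page can
# only be rewritten by rewriting the whole block it belongs to.
PAGES_PER_BLOCK = 4

def operations_for_small_write(free_block_available: bool) -> dict:
    if free_block_available:
        # A free block exists, so the new page is simply written into it.
        return {"page_reads": 0, "page_writes": 1}
    # No free block: read a partially used block, merge in the new page,
    # then write the whole block back (read-modify-write).
    return {"page_reads": PAGES_PER_BLOCK, "page_writes": PAGES_PER_BLOCK}

empty = operations_for_small_write(True)
full = operations_for_small_write(False)
print("Mostly empty drive:", empty)   # one page write
print("Nearly full drive: ", full)    # several reads and writes for the same data
print("Write amplification:", full["page_writes"] / empty["page_writes"])
```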
Modern operating systems, when not under load, will rearrange the data on an SSD to make it more efficient at writing new data; however, they won’t do this while under load. You can now see why it is advisable not to completely fill a solid-state drive. Although opinions differ, try to keep utilization no higher than about 75% to 90% to get the best results. Less is better if the SSD is being written to a lot.
SSD drives like the ones shown were a great first step; however, their performance has become so good that they can now outperform the SATA interface and protocol. The SATA 3 interface is limited to 600 Megabytes per second, which SSD drives can now exceed. The protocol was also designed around hard disks, which access data through a single set of heads, so it was built around one queue of commands. An SSD, on the other hand, has multiple chips, so it is possible to access several of them at the same time, something the protocol was not designed to do. Let’s have a look at the next technology that addresses these problems.
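Before moving on, here is where the 600 Megabytes per second figure comes from: SATA 3 signals at 6 Gigabits per second, and its 8b/10b encoding means every 8 bits of data are carried as 10 bits on the wire. A quick sketch of the arithmetic:

```python
# SATA 3 line rate is 6 Gigabits per second; 8b/10b encoding means only
# 8 out of every 10 bits on the wire carry data.
line_rate_gbit = 6
usable_fraction = 8 / 10

usable_mb_per_s = line_rate_gbit * 1000 * usable_fraction / 8   # bits -> bytes
print(f"SATA 3 usable bandwidth: ~{usable_mb_per_s:.0f} MB/s")  # ~600 MB/s
```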
M.2
To address these issues with solid-state drives, a new technology called M.2 was created; its formal name is Next Generation Form Factor, or NGFF. Rather than using a cable to connect the drive, the storage is plugged directly into the motherboard.
For solid-state storage, there are two different versions. The newer version is M Key, which uses the NVMe protocol; NVMe stands for Non-Volatile Memory Express. The keying refers to a notch cut out of the board, as shown. In the case of M Key, the notch is in a particular position. There are 12 different positions the notch can be located in; however, in the case of solid-state drives, only two are used, at least for the moment.
M Key supports four PCI Express lanes. It also supports SATA and SMBus devices. Your motherboard will need to have a connector with the same keying as the device. Shown here, you can see the notch lines up with the connector on the motherboard.
High-performance M.2 storage will use M Key. This is for two reasons. Firstly, it needs the extra bandwidth that having four PCI Express lanes provides. Secondly, the NVMe protocol was designed with solid-state devices in mind, so it gives better performance.
M Key seems to be the current standard used on motherboards. On older computers you may find B Key used. However, this is not the only thing you need to consider. An M Key device will most likely use PCI Express as its interface, which means the computer also needs to support this interface. Older computers may not support it, or may have only limited support; for example, they may not support booting over PCI Express.
So, to have backwards compatibility, B Key can be used. B Key is compatible with the SATA interface, so essentially should work with older computers. The computer will see the storage as a SATA drive and not a PCI Express drive. Thus, the interface won’t be as fast as PCI Express, and the protocol will be the older protocol.
So, the problem is, how do we keep backwards compatibility but also allow newer M Key devices to be used? The solution is B plus M Key. Storage devices that use this connection have both a B Key notch and an M Key notch, which means they can be put into a connector that is either B or M Key.
You will find that, on the market, the M.2 storage will be M Key or B + M Key. This way, we can use M Key on motherboards that support it. Since B + M Key has both notches, the storage devices can be used in either M Key or B Key motherboards.
The advantage of using this connector is that we now have storage that is backwards compatible with older systems but can still be used in newer systems. The disadvantage is that, due to having two notches, this reduces the number of pins on the connector. This means that when using PCI Express, the storage will be limited to two PCI Express lanes.
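To get a feel for what that limitation means, the sketch below uses the commonly quoted figure of roughly 985 Megabytes per second of usable bandwidth per PCI Express 3.0 lane; the exact value depends on the PCIe generation and protocol overheads, so treat the numbers as approximate.

```python
# Approximate usable bandwidth per PCI Express 3.0 lane (~985 MB/s); the
# exact figure depends on the PCIe generation and protocol overheads.
mb_per_s_per_lane = 985

for keying, lanes in (("M Key", 4), ("B + M Key", 2)):
    print(f"{keying}: {lanes} lanes -> ~{lanes * mb_per_s_per_lane / 1000:.1f} GB/s")
```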
If you want maximum compatibility, purchase an M.2 storage device with the B + M notches. If you want performance, then purchase the M Key version. However, check to make sure that the computer you are putting it in will support it. If it does not fully support it, you may need to make changes in the BIOS in order to use it or it won’t work at all.
The last consideration is the size. The size is given as the width followed by the length, in millimeters. For storage, they are basically all 22 millimeters wide, so the size starts with 22. The length can vary; in this example the length is 80, giving a 2280 module. In order to use it, make sure the motherboard has the correct screw holes. If you are not sure, the screw holes will generally have the size printed next to them.
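Since the size code is just the width and length run together, a hypothetical helper like the one below can split it apart; the function name and the list of codes are assumptions for illustration.

```python
def m2_dimensions(size_code: str) -> tuple[int, int]:
    """Split an M.2 size code into (width_mm, length_mm).
    Storage modules are 22 millimeters wide, so the code starts with '22'."""
    width_mm = int(size_code[:2])
    length_mm = int(size_code[2:])
    return width_mm, length_mm

for code in ("2230", "2242", "2280", "22110"):
    width, length = m2_dimensions(code)
    print(f"M.2 {code}: {width} mm wide, {length} mm long")
```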
Solid-state drives started out being very expensive; however, the cost has come down significantly. Hard disks are still the cheapest option; however, I will next look at a technology that was briefly around when solid-state drives were still very expensive.
Hybrid Drives
Hybrid drives, or SSHD’s, are essentially a hard disk combined with a solid-state drive. The solid-state portion is used for frequently accessed data. A hybrid drive looks just like a regular hard disk from the outside, and the inside also looks just like a hard disk.
What is different is the circuit board inside the hard disk. If I compare this with a regular hard disk circuit board, you can see the extra chips on the circuit board. These extra chips are the flash memory and also controller chips for the solid-state part of the drive. Frequently used data will be kept on the flash memory. Basically, the hybrid drive is attempting to keep a cached copy of the most used data on the solid-state drive for higher access speed.
When hybrid drives were developed, solid-state drives were very expensive, and the idea was to trade off cost against performance. However, they were not on the market for long and are no longer sold, due to the reduced cost of an SSD. Nowadays, rather than buying a hybrid drive, most users will purchase a solid-state drive to run the operating system and hold their data, since this gives the biggest performance boost. If that is not enough space, they will add a hard disk drive to the system to increase its capacity, or even a second solid-state drive. It is then just a matter of the user making sure the data they want to access quickly is on a solid-state drive.
So far, I have looked at storage that does not have changeable media. I will next look at storage that does have changeable media.
Optical Discs
The first changeable media storage that I will look at is optical discs. There have been many different types of optical discs released over the years, most of which have now disappeared, with a few still in use. I will cover the main ones since they are the ones you are most likely to come across.
The first one is the CD, or compact disc. CD’s have been around since the 80’s and have an upper capacity of 700 Megabytes. It is possible to get smaller-capacity CD’s, such as 650 Megabytes, but since it is an old technology most manufacturers just sell the biggest size, as there is little difference in price or demand. There were other standards, such as LaserDisc; however, they were never widely adopted.
In 1996, DVD was released, which stands for Digital Versatile Disc or Digital Video Disc. Other formats appeared as well; one of the more notable was HD DVD, which later competed with Blu-ray to succeed DVD. For those of you who are old enough, you may remember the VHS and Beta format war in the late 70’s and early 80’s. Consumers had to choose between different formats, with VHS eventually winning out over Beta, and consumers don’t like having to choose between formats and hoping for the best. So HD DVD, although a good format, was dropped early on so the industry could put its efforts behind just one format.
DVD holds 4.7 Gigabytes on a single layer or 8.5 Gigabytes with two layers. Two layers essentially means there is a second layer of data that can be written to. This is achieved by the laser refocusing to access the second layer. The downside of this is that, if you are watching a movie, there may be a pause in the playback while the DVD player is refocusing its laser.
DVD turned out to be very popular. In 2006, the next generation of optical disc, Blu-ray, was released. There were other formats; however, Blu-ray was the only one widely adopted, and even that adoption has been slow; I will explain why in a moment.
Blu-ray provides 25 Gigabytes of storage on a single layer and 50 Gigabytes of storage on dual layer. This is a big improvement over DVD; however, the problem with its adoption was that DVD already provided good quality movies. Although Blu-ray is better, consumers were reluctant to switch to the new media due to the extra cost over DVD and with only a small perceived improvement in quality. Most customers were happy with the quality they got from DVD at the time and did not want to pay more for this small perceived quality improvement. This was not such a problem when moving from CD and earlier media because the improvement was more noticeable.
Blu-ray has been on the market for over ten years now and is really due for something to replace it; however, nothing has. This is mainly because optical media is being phased out by online services. With online services offering video up to 4K that is easy and convenient to access, there is not as much demand to purchase optical media. Since there is not much demand, it is hard to make money from Blu-ray, so a replacement is not really likely.
Currently on the market, in the games industry, there are some consoles that use Ultra HD Blu-ray, and some movies are also available in that format. If you want to play movies back at 4K, you will need to purchase this format. Ultra HD Blu-ray provides 50 to 100 Gigabytes of storage, so about twice that of Blu-ray. Not bad, but in the future we are likely to need more.
Although no winner for a replacement format is clear at the time this video was created, the one that shows the most promise is Archival Disc. This format provides from 300 Gigabytes up to one Terabyte of storage. There are devices available on the market that can read this format; however, it appears that, at least for the moment, they are aimed more at backup storage and archiving. Whether this technology will later be used for published media, only time will tell.
Now that we are on the topic of archiving, this brings me onto the next storage device.
Tape Drives
Before I start, I will first point out that Tape Drives are not currently on the CompTIA A+ exam objectives. However, I will cover them briefly because they are still often used in companies for backups. Even if they are currently not being used, many companies will have years of backup tapes which you may need to access in order to retrieve some lost data.
Tapes nowadays generally come in a plastic case with the tape contained inside. The tape is inserted into a tape drive in order to be read or written to. Since the tape has to be wound to the required location, the tape can only be read or written to sequentially.
For example, if the data you require is at the end of the tape, the tape will need to be wound to the end in order to access it. Thus, it is a slow process to seek to the required position, but once the data is found, reading and writing can be done quickly. Since data is not available quickly, tapes are generally used for backups and archiving.
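As a rough, back-of-the-envelope illustration of why seeking dominates, the sketch below compares winding to data at the far end of a tape with the transfer itself. All the figures are assumed round numbers for illustration only, not the specification of any real tape format.

```python
# All figures below are assumed, round numbers for illustration only.
tape_length_m = 1000        # assumed physical tape length
wind_speed_m_per_s = 10     # assumed high-speed wind/seek speed
transfer_mb_per_s = 300     # assumed sustained read speed once positioned

seek_seconds = tape_length_m / wind_speed_m_per_s   # wind to the far end
read_seconds = 10_000 / transfer_mb_per_s           # then read 10 GB (10,000 MB)

print(f"Worst-case seek (wind to the end): ~{seek_seconds:.0f} s")
print(f"Reading 10 GB once positioned:     ~{read_seconds:.0f} s")
```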
Many companies, if they use tapes, will purchase an auto loader. An auto loader, as the name suggests, will automatically load tapes in as required. In this example, the auto loader will hold 16 tapes. Different auto loaders will have different capabilities. For example, some may have robots that can access a large storage area of hundreds if not thousands of tapes. Some auto loaders will also have multiple tape drives.
Tape systems like this may require tapes to be inserted and removed on a daily basis. For example, if your site has a requirement to have off-site backups, the tape drive may require tapes to be removed each day in order for them to be taken off-site. You may also need to put tapes in. Some auto loaders will have barcode readers, so they know which tape is in the drive. Your job may involve swapping tapes out as required.
There are a lot of technologies on the market that are changing the way we store data, and tapes are not used as much as they used to be. For example, some companies are storing their data in the cloud rather than using tapes. This meets the need to have an off-site backup, and storing data in the cloud can be a lot easier than having to worry about changing tapes each day. With people working more at home, it makes sense to have data in the cloud.
The largest capacity demonstrated on a single tape is over 300 Terabytes, although commercially available tapes store considerably less. If tape can continue to store large amounts of data at a cheap price, we may continue to see it used in the future. However, it faces stiff competition from other technologies. Only time will tell whether tapes keep being used or become obsolete.
USB Flash Drives
The last storage device that I will look at is USB flash drives. USB flash drives are small and re-usable. A flash drive contains a controller and flash storage. The controller manages how data is stored on the flash memory in the drive. This takes up a little extra room in the flash drive, but not too much.
When this video was created, the largest USB flash drive was two Terabytes in size with a prototype of four Terabytes being tested. Who knows how large these devices will get, but at present you can store a lot of information on them and keep the flash drive in your pocket making it easy to transport.
To make the storage even smaller, the USB interface and its controller can be removed, leaving only a small card containing the flash memory. These are called SD cards. They come in a number of different sizes, with SD being the largest, followed by miniSD and then microSD.
The size does not make a difference to how they work, in fact some SD cards will contain an adapter so they can be used in different devices. For example, a miniSD card may come with an adapter to convert it to SD size. It is just a matter of using the right card in the device or if the SD card is too small, using an adapter to convert it to a larger size. You won’t be able to get an adapter to make it smaller for obvious reasons!
Since SD cards do not have their own interface controller, they need to follow a particular standard in order to be used in a device. The standard they follow also determines how much data they can store. Shown here are the different standards and how much data they can store.
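The on-screen table is not reproduced in this transcript, but as a rough summary the capacity limits defined by the SD card standards are along the lines of the sketch below.

```python
# Maximum capacity defined by each SD card standard (simplified summary).
SD_STANDARDS = {
    "SD (SDSC)": "up to 2 GB",
    "SDHC":      "more than 2 GB, up to 32 GB",
    "SDXC":      "more than 32 GB, up to 2 TB",
    "SDUC":      "more than 2 TB, up to 128 TB",
}

for standard, capacity in SD_STANDARDS.items():
    print(f"{standard:10} {capacity}")
```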
In order to use SD cards, your device needs to support the relevant standard. Generally, you will find that the manufacturer may not say which standard they support and will instead give a maximum capacity the device will support. Generally, you need to make sure the SD card you purchase does not store more data than the device supports. Devices are generally backward compatible, so you don’t need to worry what type the card is, just make sure its capacity is at or below the maximum the device supports.
End Screen
That concludes this video on storage types. In later videos I will have a look at some of these storage devices in more detail. I hope to see you in those videos, and I would like to thank you for watching.
References
“The Official CompTIA A+ Core Study Guide (Exam 220-1001)” Chapter 6 Paragraph 84-91
“CompTIA A+ Certification exam guide. Tenth edition” Page 289
“Computer data storage” https://en.wikipedia.org/wiki/Computer_data_storage
“Hard disk drive” https://en.wikipedia.org/wiki/Hard_disk_drive
“HAMR HDD capacities to scale from 4TB in 2016 to 100TB in 2025” https://hexus.net/tech/news/storage/85769-hamr-hdd-capacities-scale-4tb-2016-100tb-2025/
“Solid State Drives” https://en.wikipedia.org/wiki/Solid-state_drive
“M.2” https://en.wikipedia.org/wiki/M.2
“Hybrid drive” https://en.wikipedia.org/wiki/Hybrid_drive
“Tape drive” https://en.wikipedia.org/wiki/Tape_drive
“Picture: Robot Arm” https://commons.wikimedia.org/wiki/File:Closeup_of_robotic_arm_in_StorageTek_tape_library_at_NERSC_(1).jpg
“Picture: Tape vault / Robot arm 2” https://en.wikipedia.org/wiki/File:StorageTek_Powderhorn_tape_library.jpg
“Picture: 5.25 inch hard disk” https://commons.wikimedia.org/wiki/File:5.25_inch_MFM_hard_disk_drive.JPG
“Picture: Hard disk exposed” https://en.wikipedia.org/wiki/Hard_disk_drive#/media/File:Laptop-hard-drive-exposed.jpg
“Picture: Inside Hard Disk” https://commons.wikimedia.org/wiki/File:Hard_drive-en.svg
“Picture: Trolley” https://pixabay.com/illustrations/hand-truck-hand-trolley-steekkar-564242/
“Video: Console hacking code” https://www.pexels.com/video/matrix-console-hacking-code-852292/
“Video: Hard disk spinning” https://www.pexels.com/video/a-computer-component-in-operation-3289569/
“Picture: Balloons” https://pixabay.com/illustrations/balloons-kawaii-animals-penguin-5570585/
“Picture: Solid State Drive” https://commons.wikimedia.org/wiki/File:Super_Talent_2.5in_SATA_SSD_SAM64GM25S.jpg
“Picture: Inside Solid State Drive” https://commons.wikimedia.org/wiki/File:Embedded_World_2014_SSD.jpg
“Picture: CD” https://commons.wikimedia.org/wiki/File:CD_icon_test.svg
“Picture: Ultra HD Blu-ray” https://upload.wikimedia.org/wikipedia/en/2/21/Ultra_HD_Blu-ray_%28logo%29.svg
“Picture: Archival Disc” https://en.wikipedia.org/wiki/Archival_Disc#/media/File:Archival_Disc_logo.svg
“Picture: Blu-ray disc” https://en.wikipedia.org/wiki/Blu-ray#/media/File:Blu-ray_Disc.svg
“Picture: DVD” https://en.wikipedia.org/wiki/DVD#/media/File:DVD_logo.svg
“Picture: SD Cards” https://commons.wikimedia.org/wiki/File:SD_Cards.svg
Credits
Trainer: Austin Mason http://ITFreeTraining.com
Voice Talent: HP Lewis http://hplewis.com
Quality Assurance: Brett Batson http://www.pbb-proofreading.uk