Hey, I'm Rob Witcher, and I'm here to help you pass the CISSP exam. We're going to go through a review of the major topics related to asset classification in Domain 2 to understand how they interrelate and to guide your studies. This is the first of two videos for Domain 2. I've included links to the other Mind Map videos in the description below.
Asset classification is fundamentally about ensuring that assets receive the appropriate level of protection. What is an asset? Anything of value to the organization: people, buildings, equipment, software, data, and intellectual property are all assets, among many other things. In security, we often speak about data classification.
We should be talking about asset classification, which encompasses data classification and clearly implies that we should be classifying all the assets of the organization and protecting them appropriately. The first step in the asset classification process is creating and maintaining an asset inventory, a catalog, a listing of all the assets from across the organization.
For every single asset there should be a clearly defined owner. It is critical to determine who the asset owner is as the owner is accountable for the protection of an asset. The owner is best positioned to determine how valuable an asset is to the organization, and thus what classification the asset should be assigned.
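To make this concrete, here is a minimal Python sketch, with purely hypothetical asset and owner names, of what an inventory entry with a clearly defined owner might look like:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    name: str                              # what the asset is
    owner: str                             # who is accountable for protecting it
    classification: Optional[str] = None   # assigned later by the owner

# A toy inventory: every single asset has a clearly defined owner.
inventory = [
    Asset("Customer database", owner="VP of Sales"),
    Asset("Payroll server", owner="Director of HR"),
]

for asset in inventory:
    assert asset.owner, f"{asset.name} has no accountable owner!"
```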
As I already mentioned and want to emphasize here, the reason we classify assets is so that we can identify how valuable they are to the organization and therefore the appropriate level of protection required. Before we can begin classifying anything, we first need to define the classification levels, the classes, and clearly define who is accountable and responsible for what.
All of this should be documented in the data classification policy. Standards, procedures, baselines, and guidelines should then be created based on the policy. Procedures will define step-by-step instructions for classifying data based on the classes defined in the policy.
Baselines will define minimum security requirements for each class. Remember that point for the exam. Classification is a system of classes ordered according to value.
For example, public, proprietary, and confidential may be the three classes that an organization defines, with public being the least valuable and confidential being the most. Different organizations will choose different classes based on whatever best suits their needs. So don't memorize any particular classification scheme as they vary significantly from organization to organization.
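Here is a small sketch of that idea, using the hypothetical public/proprietary/confidential scheme from above: an ordered enum captures classes ordered by value, and a simple mapping stands in for the baselines that define minimum security requirements for each class:

```python
from enum import IntEnum

# A hypothetical scheme -- real organizations define their own classes.
class Classification(IntEnum):
    PUBLIC = 1        # least valuable
    PROPRIETARY = 2
    CONFIDENTIAL = 3  # most valuable

# Illustrative baselines: minimum security requirements for each class.
BASELINES = {
    Classification.PUBLIC:       {"encrypt_at_rest": False, "mfa_required": False},
    Classification.PROPRIETARY:  {"encrypt_at_rest": True,  "mfa_required": False},
    Classification.CONFIDENTIAL: {"encrypt_at_rest": True,  "mfa_required": True},
}

# Because IntEnum is ordered, classes can be compared by value.
assert Classification.CONFIDENTIAL > Classification.PUBLIC
```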
Labeling is noting the classification of an asset on the asset itself. Labeling is essentially the what: what the classification of the asset is. For example, putting a label on a backup tape noting that the tape is top secret.
This is labeling. Marking is the how: how the asset should be protected based on its classification. Marking involves noting the handling instructions on the asset based on its classification,
in other words, how the asset should be protected. And the final major piece here is categorization, which is the act of sorting assets into the defined classes. How do we go about protecting assets based on their classification?
We can begin by having clearly defined roles of who is accountable and responsible for what. The data owner, also known as the data controller, is the most important role, as the owner is accountable for the protection of the data. The owner will define the classification for data, and the owner is then accountable for ensuring the data is protected accordingly. Data processors, as the name implies, are responsible for processing data on behalf of the owners. A typical example of a data processor is a cloud service provider.
They are storing and processing data on behalf of the owner. Data custodians have technical responsibility for the data, meaning custodians are responsible for ensuring data security, availability, and capacity, that backups are performed, and that data can be restored. They are responsible for the technical aspects of the data. Data stewards, on the other hand, have a business responsibility for the data, meaning stewards are responsible for ensuring data governance, data quality, and compliance.
Essentially, data stewards are employees from the business who are responsible for ensuring the data is useful for business purposes. And the data subject is the individual to whom any personal data relates. It is data about them.
We can also think about how we would protect data based on whether it's at rest, on a storage device somewhere, or in motion across a network, being used, archived, or even defensibly destroyed. We'll start with techniques for protecting data at rest. One of the major techniques we can use is encryption.
We can use one of the many excellent encryption algorithms, which we'll discuss in Domain 3, to encipher the data and turn it into ciphertext. The ciphertext is then well protected unless an attacker can get their hands on the correct encryption key to decipher the data or discover a flaw in the encryption. We can further have strong access controls in place, which I've discussed in Domain 5, to ensure that only properly authenticated and authorized individuals have access to the data. We can implement controls like multi-factor authentication and have good logging and monitoring in place to make sure users are accountable for what they do with the data.
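As a concrete illustration of encrypting data at rest, here is a minimal sketch using the Fernet recipe from the Python cryptography library (authenticated symmetric encryption); the specific algorithm doesn't matter for the concept:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the key itself must be protected, e.g. in a KMS/HSM
f = Fernet(key)

plaintext = b"Quarterly results: CONFIDENTIAL"
ciphertext = f.encrypt(plaintext)   # this is what actually sits on the storage device

# Without the correct key, the ciphertext is useless to an attacker.
assert f.decrypt(ciphertext) == plaintext
```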
To ensure data is not accidentally lost or destroyed, we can have all sorts of different data backup and data resiliency controls, which I discussed in Domain 7 and will link to below. The next major grouping of controls we can look at for protecting data are for data in motion, data that is in transit across a network. All of these data in motion controls involve encrypting the data in some fashion while it is in transit across a potentially insecure network. End-to-end encryption means that we encrypt the data portion of a packet right from the sender, and the data remains encrypted through all the nodes, the switches, routers, firewalls, etc. that it passes through on the way to its intended recipient. The data is then only decrypted once it has reached the recipient.
The data is never in plain text while in transit. It is encrypted and decrypted only at the endpoints. A perfect example of end-to-end encryption is a VPN, a virtual private network, which I'll discuss in Domain 4. The downside of end-to-end encryption is that the routing information, the source and destination IP addresses for example, must be in plain text and visible to anyone.
So end-to-end does not provide anonymity. Link encryption differs significantly in that the data is decrypted and then re-encrypted at every node it passes through from source to destination.
So the packet, including the header, is encrypted at the source and sent to the first node, which decrypts the packet, looks at the destination address to determine who to send the packet to next, re-encrypts the packet, and forwards it on to the next node, which then does the same decryption and re-encryption process. The advantage of link encryption is that the routing information is hidden in transit, but the huge downside is that the data is decrypted at every node. Link encryption is not the best for protecting data.
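The contrast is easy to see in a toy simulation. This sketch (hypothetical addresses, with Fernet standing in for whatever cipher is actually used) shows that end-to-end encryption leaves the header readable at every hop while the payload never is, whereas link encryption hides the whole packet on the wire but exposes all of it at every node:

```python
import json
from cryptography.fernet import Fernet

message = {"src": "10.0.0.1", "dst": "10.0.0.9", "data": "secret report"}

# End-to-end: encrypt only the data, once, with a key shared by the two endpoints.
e2e = Fernet(Fernet.generate_key())
packet = dict(message, data=e2e.encrypt(b"secret report").decode())
# Every hop can read packet["src"] and packet["dst"] to route it,
# but no hop can ever read the data.

# Link encryption: encrypt the WHOLE packet, hop by hop, with one key per link.
links = [Fernet(Fernet.generate_key()) for _ in range(3)]
wire = json.dumps(message).encode()
for link in links:
    on_the_wire = link.encrypt(wire)   # even src/dst are hidden on this link...
    wire = link.decrypt(on_the_wire)   # ...but this node sees the full plaintext packet
```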
And now let's talk about onion networks. This is a cool idea to provide confidentiality of the data and anonymity, making it very difficult to determine who the sender and receiver are while the data is in transit. Here's how onion networks work. The sender will pre-determine a series of nodes that a packet is going to pass through on its way to the destination.
The sender will then encrypt the entire packet multiple times. Each layer of encryption will use the encryption key of a specific node. And thus, when the sender sends the packet, the first node will decrypt the outermost layer of encryption, which will reveal the next node to send the packet to. The next node receives the packet, strips off the next layer of encryption, which again reveals the next node to send the packet to, and so on and so on, until the packet finally reaches the destination, which will finally decrypt the data stored within the packet.
The big advantage here is that each node along the way only knows which node the packet came from and the next node, but not the ultimate source and destination. And each node has zero access to the encrypted data within the innermost layer. A perfect example of an onion network is Tor, the onion router. The big downside, of course, is performance.
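Here is a little sketch of the layering idea, with three hypothetical nodes and Fernet as a stand-in cipher. Each layer holds only the next hop plus the still-encrypted inner onion, so no single node learns both the source and the destination:

```python
from cryptography.fernet import Fernet

route = ["entry", "middle", "exit"]
keys = {n: Fernet(Fernet.generate_key()) for n in route}  # each node holds only its own key

# The sender builds the onion inside-out: the innermost layer is the real payload.
onion = b"secret report"
next_hop = b"DESTINATION"
for name in reversed(route):
    onion = keys[name].encrypt(next_hop + b"|" + onion)
    next_hop = name.encode()

# Each node peels exactly one layer, learning only where to forward next.
for name in route:
    hop, _, onion = keys[name].decrypt(onion).partition(b"|")
    print(f"{name} forwards to {hop.decode()}")

print(onion)  # b'secret report' -- revealed only at the destination
```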
Data in use is inherently more vulnerable than data at rest because, by definition, data in use must be accessible to people and processes to view and edit the data. The major controls we can put in place to protect data in use are good access controls and potentially data loss prevention controls to monitor and control what a user is doing with the data.
And if you want to get really fancy, you could potentially use homomorphic encryption, but you don't need to know that for the CISSP exam. Data archiving is moving data that is no longer being actively used into a cheaper storage solution for long-term retention. From a security perspective, we need to ensure we retain archived data for a sufficient period of time to meet requirements as defined by the data classification policy and continue to protect the data based on its classification. Just because the data has been archived on a tape somewhere does not mean we get to forget about protecting it.
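In practice, the retention requirement often boils down to a simple schedule keyed off the classification. A minimal sketch, assuming purely hypothetical retention periods:

```python
from datetime import date, timedelta

# Hypothetical retention periods; a real data classification policy defines these.
RETENTION_YEARS = {"public": 1, "proprietary": 5, "confidential": 7}

def destruction_due(archived_on: date, classification: str) -> date:
    """Earliest date the archived data may be defensibly destroyed."""
    return archived_on + timedelta(days=365 * RETENTION_YEARS[classification])

print(destruction_due(date(2020, 1, 1), "confidential"))  # roughly seven years later
```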
The final way we protect data is actually related to how we destroy data when we no longer require it. There are laws, regulations, and contracts which require us to defensibly destroy data, which means we must securely destroy the data and render it unrecoverable in a manner that will stand up as reasonable and consistent: we can prove the data is unrecoverable. There are many ways to destroy data, and some are much better than others. So let's go through the three main categories
and then the specific techniques. The first and very best category is destruction, which means we physically destroy the media: the hard drive, the tape, the zip disk, whatever media the data is stored on. The next best category is known as purging, which means using logical or physical techniques to sanitize data, thus making it so the data cannot be reconstructed.
And finally, the worst category is known as clearing, which means using logical techniques to sanitize the data, thus making it so the data may not be reconstructed. That's not super reassuring.
May not be reconstructed. Okay, now let's look at the techniques starting from best to worst. The best is of course to physically destroy the media. Ideally melt it. Burn it to the point that all that is left is some smoke and maybe a puddle of metal.
There's no way you're getting that data back. The next best method is to shred, disintegrate, or drill a hole in the media. These techniques are not nearly as good because with the right tools it is possible to read data even off of little shredded pieces of hard drive or tape.
Degaussing is applying a very strong magnetic field to magnetic media like hard drives or tapes. The strong magnetic field destroys the data. The reason degaussing fits between destruction and purging is because it may render the media permanently unusable, thus essentially destroying the media. Crypto-shredding is the idea that, to destroy the data, we encrypt the data with an excellent algorithm, like say AES-256, and then we destroy every single copy of the encryption key. With the encryption key destroyed, we have effectively crypto-shredded the data and made it unrecoverable.
Crypto-shredding fits between purging and clearing. So long as the key is never recovered, the encryption is never brute-forced, and no flaw is found in the algorithm, then the data cannot be recovered; it has thus been purged. But if any of those were to happen, the data may be recoverable, and thus it has only been cleared.
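Crypto-shredding is simple to sketch: encrypt the data, store only the ciphertext, and then destroy every copy of the key. The `del` below stands in for what would really be a secure key-erasure procedure in a KMS or HSM:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive records")  # only ciphertext is ever stored

# Crypto-shredding: destroy every single copy of the key.
del key  # in reality: securely erase the key from the KMS/HSM and every backup

# With no key left, the ciphertext cannot be decrypted -- provided the
# algorithm holds and no copy of the key survives anywhere.
```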
Overwriting, wiping, or erasure all refer to writing all zeros, all ones, or some combination of those to all the sectors of a storage device, replacing the original data with this overwritten data. This process can be done multiple times, but even so, research has shown that pretty much no matter how many times you overwrite the data, some of the original data may be recoverable. Thus, this is a clearing technique.
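Overwriting is straightforward to sketch. Note that on SSDs and journaling filesystems a file-level overwrite like this may never touch the original physical blocks, which is exactly why overwriting only counts as clearing:

```python
import os

def overwrite_file(path: str) -> None:
    """Overwrite a file's contents with zeros, then ones, then zeros again."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in (b"\x00", b"\xff", b"\x00"):
            f.seek(0)
            f.write(pattern * size)
            f.flush()
            os.fsync(f.fileno())  # push the write down toward the storage device
    os.remove(path)
```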
And the worst method for destroying data is to format the drive. This is the worst technique because formatting by default leaves most, if not all, of the existing data on the disk, meaning the data may be easily recovered with the right tool. And here is a summary of the different data destruction methods, from best to worst. The final thing we need to think about related to asset classification is that we need to periodically review and assess the classes we have created and what classification assets have received.
Laws, regulations, and business requirements all shift over time, which may require a change in the classes and the classification of assets. And that is an overview of asset classification within Domain 2, covering the most critical concepts to know for the exam. If you found this video helpful, you can hit the thumbs up button, and if you want to be notified when we release additional videos in this Mind Map series, then please subscribe and hit the bell icon to get notifications.
I'll provide links to the other mind map videos in the description below. Thanks very much for watching, and all the best in your studies.