Transcript for:
Security Fundamentals for Certification Exam

Welcome to Domain 1 of the Security Plus Exam Cram Series 2024 Edition, and here in Domain 1 we'll focus on general security concepts. We'll begin with a look at the categories and types of security controls before moving on to coverage of an array of fundamental security concepts. We'll explore the impact of change management on security, and we'll close out Domain 1 with a look at the importance of appropriate cryptographic solutions.

Domain 1 helps us establish the foundation for everything else we cover in the Security Plus syllabus. And as always, we'll go line by line through every skill measured in the official exam syllabus. Important stuff.

Let's get started. Welcome to the Security Plus Exam Cram Series 2024 edition in this installment covering every topic in the official exam syllabus for Domain 1 of the Security Plus exam. Because it's so often requested, I've included a PDF copy of this presentation available for download in the video description so you can review at your leisure as you prepare for the exam.

And I've also included a clickable table of contents in the video description so you can move forward and back through topics as necessary as you prepare. And as with the previous release of the Security Plus exam, I recommend the official study guide from Sybex, which includes 500 practice questions, 100 flashcards, and two practice exams, as well as the companion practice test manual, which brings another thousand practice questions and two practice exams. And be sure to register for the online resources so you can leverage these questions in an electronic format.

I believe it's all the practice quizzing you're going to need to prepare yourself for exam day. And I will leave you links in the video description to the least expensive copies on amazon.com. And that brings us to domain one, where we will focus on general security concepts. And we're going to go line by line through every topic mentioned in the official exam syllabus. So section 1.1 focuses on comparing and contrasting the various types of security controls. This is a fairly short but very important section. We'll start with the categories, which include technical, managerial, operational, and physical.

Now what's different here versus past versions of the exam and other exams out there is the inclusion of the operational category. Really just a more granular way of considering the control types, which have not changed. They are preventive, deterrent, detective, corrective, compensating, and directive.

I'll give you two bits of advice for exam day. Number one, you should know some examples of each for the exam. I'll help there. And know that controls can fit into multiple types based on the context of the situation. I see folks get wound up on this fact as they're working through their practice exams and their exam prep, so I'll take you through a logical way to think about this to ensure you can get the right answer on control-related questions on exam day. So we'll start with categories of security controls. We have technical controls.

These are hardware or software mechanisms used to manage access to resources and systems and to provide protection for those resources and systems. Next, we have physical. These are security mechanisms focused on providing protection to the facility and real-world objects. Then we have managerial, which are the policies and procedures, administrative controls really, defined by an organization's security policy.

The managerial controls use planning and assessment methods to review the organization's ability to reduce and manage risk. And then we have that operational category, which helps to ensure that the day-to-day operations of an organization comply with their overall security strategy, primarily implemented and executed by people instead of systems. I think of operational as people enforcing the managerial controls, supporting physical security, and using the technology we've implemented through technical controls to ensure that we comply with our overall security strategy.

Let me give you some examples here. We'll start with technical. We have encryption, smart cards, passwords, biometrics, access control lists, firewalls, routers, and intrusion detection and prevention systems.

Again, it's the technology. Next, we have the physical. Guards, fences, lights, motion detectors, dogs, cameras, alarms, locks, protecting what we can touch.

Next we have managerial, policies and procedures, hiring practices, background checks, data classification, security training, risk assessments, vulnerability assessments. But the focus here is all of these practices laid out in policies and procedures the organization follows. And then we have the operational category, which would include things like conducting the awareness training, configuration management, media protection, the doing.

So to summarize those a couple of different ways, we have technical, which is the implementation of the hardware and software technology. The physical controls, which are tangible, touchable. Managerial controls, which are really policy and procedure based.

The process is documented. And then the operational, people doing stuff. So to visualize these categories, we have our assets, the focus of our protection. And we have our managerial, technical, and physical controls if we're looking at this historically. The policies, which give us guidance on the what.

The technical controls, we're implementing the hardware and the software to help with the how. And then a layer of physical security around our facilities, devices, and other assets. It's important to remember there is no security without physical security.

If I can get into your facility, get into your data center, get into your wiring closet, there is no technical control that can then stop me. There is no managerial policy that's going to prevent me from doing damage as an attacker. Now let's insert that operational layer. People-centric activities.

Conducting the awareness training. Ensuring the backups have completed. Making sure the media is stored appropriately so we can use it for recovery when necessary. Implementing the managerial policies.

Supporting the technology and the physical. And while these categories are important, it's really the types of controls that are going to come up in questions on the exam. But before we dive into control types, I want to sync up on the definition of a security control.

Security controls are security measures for countering and minimizing loss or unavailability of services or apps due to vulnerabilities. You'll often hear the terms safeguards and countermeasures used interchangeably. But to put a finer point on it, safeguards are proactive controls.

They reduce the likelihood of occurrence. And countermeasures are reactive. They reduce impact after occurrence of the security event. Now let's dive into control types. We have the deterrent control, which is deployed to discourage violation of security policies.

Preventive controls, deployed to thwart or stop unwanted or unauthorized activity from occurring. Detective controls, deployed to discover or detect unwanted or unauthorized activity. Compensating controls, which provide options to other existing controls to aid in enforcement of security policies.

They're supporting or redundant controls. Next, we have corrective controls, which modify the environment to return systems to normal after an unwanted or unauthorized activity has occurred. And directive controls, which direct, confine, or control the actions of subjects to force or encourage compliance with our security policies.

And you'll notice I've highlighted the key descriptors of each type along the way here that you'll want to remember for the exam. Now let's look at some examples of control types together. We have preventive controls, deployed to stop unwanted activity, and examples here include fences, locks, biometrics, alarm systems, data classification, penetration testing, and access control, all of which can prevent unwanted behavior. Next, we have deterrent controls, deployed to discourage violation of security policies.

This control picks up where prevention leaves off. Our examples here, locks, fences, security badges, guards, lighting, cameras, alarms, separation of duty, security policies, and security awareness training. Do you notice the overlap in control types here?

The fact of the matter is every security control is generally going to fall into one control category, but will map to multiple control types. And that lock, while preventive, is also a deterrent. It is a psychological barrier.

Locks create a visible and tangible barrier. Even an unlocked padlock hanging on a gate sends a signal that not just anybody should be walking through there, and it also conveys increased perceived effort when it is locked. It makes the would-be trespasser think twice.

But stick with me and I'll show you how to navigate the overlap on the exam here in a moment. Next, we have detective controls, deployed to discover. These include security guards, dogs, motion detectors, job rotation, mandatory vacation, audit trails, and intrusion detection. These all allow us to detect or discover unwanted activity. Directive controls direct, confine, or control actions: policies and procedures, standards, guidelines, physical signage directing behavior, verbal instructions, contracts and agreements, all intended to direct or encourage a specific behavior. Corrective controls restore systems to normal. Backups and restores, patching, antivirus or anti-malware, forensic analysis, and disciplinary action all play a direct or indirect role in returning our systems and environment back to normal.

And finally, compensating controls, which provide options to existing controls to aid in enforcing and supporting our security policy. They are additional backup supporting controls.

These could include security policies, personnel supervision, monitoring, work task procedures, and when I say security policies, that could be anything from segregation of duties to dual control, mandatory vacations, background checks. All right, so it's time to address the overlap we see here in type. So we have one control that maps to multiple types or functions.

And you saw it in those previous examples. A single security control can be identified as multiple types depending on the context of the situation. And that is just a fact of life. Security controls are designed to work together and their functions often overlap. For example, a security camera system is both deterrent, it deters unwanted entry, and detective.

It records potential security incidents for later review if, as a deterrent, it doesn't do its job successfully. So context matters. The classification of a control can depend on how it's implemented and the specific risk it's addressing.

So a context-based example. We have an access control list that can be primarily preventive if it blocks unauthorized access, or detective if it mainly logs access in a scenario for later investigation. Perhaps the access control list showed that an individual should be granted access to the file repository, but they then deleted sensitive data that shouldn't have been deleted. Well, at that point the activity was logged and can be investigated later. So when we take this knowledge to the exam, it comes down to the language.

Exams often use specific words or phrases to hint at a control type. So let's look at some keywords for each of the six types that you can use to reason your way to the right answer on an exam. Words like warning, a sign, visibility, perception.

These indicate a deterrent control. Preventive: access control, authentication, firewall, encryption. These all prevent access.

These are preventive in nature. We have policy, procedure, standard, guideline, all designed to direct good behavior, so they are directive. Monitoring, auditing, logging, alerting, all designed to detect behavior, so that's a detective control. Backup, restore, incident response, patching, all correcting negative conditions, a sure sign of a corrective control. And alternative, backup, redundancy, supporting, all signs of a compensating control.
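If it helps to see that keyword logic laid out, here's a quick study-aid sketch in Python. The keyword lists and the function are my own illustration of the cue words above, not anything from the official syllabus, and the exam won't ask you to write code.

```python
# Study-aid sketch: map exam keywords to the control type they usually signal.
# The keyword lists mirror the cues discussed above; extend them as you study.
CONTROL_TYPE_KEYWORDS = {
    "deterrent":    ["warning", "sign", "visibility", "perception"],
    "preventive":   ["access control", "authentication", "firewall", "encryption"],
    "directive":    ["policy", "procedure", "standard", "guideline"],
    "detective":    ["monitoring", "auditing", "logging", "alerting"],
    "corrective":   ["backup", "restore", "incident response", "patching"],
    "compensating": ["alternative", "redundancy", "supporting"],
}

def likely_control_types(question_text: str) -> list[str]:
    """Return the control types whose keywords appear in the question text."""
    text = question_text.lower()
    return [
        control_type
        for control_type, keywords in CONTROL_TYPE_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(likely_control_types("Which control restores service? Consider backup and patching."))
# ['corrective']
```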

So keep this information in mind, and I think security control-related questions on the exam should be quite easy for you to get through successfully. That brings us to 1.2. Section 1.2 asks us to summarize fundamental security concepts.

So read that as foundational concepts that apply across all security vendors. So 1.2 begins with the CIA triad, confidentiality, integrity, and availability, non-repudiation. Authentication, authorization, and accounting, sometimes called the AAA protocols. We'll dig into authenticating people, systems, and authorization models here. The purpose and outcomes of a gap analysis.

We're going to go deep on zero trust, both at the control plane and the data plane. And the language we see here tells us that CompTIA is pulling a page from NIST Special Publication 800-207, which covers zero trust. So we see the terminology listed here for zero trust represented directly in NIST's industry standard document. We'll dive into the elements of physical security, touching on bollards, access control vestibules, fencing, lighting, guards, cameras, and we'll touch on four types of sensors.

Even if you've been around security for a while, those may not all be clear to you, so we'll dive deep on those. And we'll wrap up 1.2 with deception and disruption technology, the honeypot, honey net, and supporting components. And what do we see in that physical security topic? We see security controls. So I want you to be thinking about control categories and control types as we go through this content, reinforcing what you learned in our previous installment.

So let's dive into the CIA triad, which as a security professional, you should know by heart. So CIA stands for confidentiality, integrity, and availability. We see it represented in the triangle: one, two, and three, beginning with confidentiality.

So access controls help ensure that only authorized subjects can access objects. We'll dig a little deeper in this session but think of subjects as people and objects as resources such as data for the moment. Next we have integrity which ensures that data or system configurations are not modified without authorization.

That the file sent exactly matches the file received, that the system files on an operating system are not modified without authorization. And availability, because authorized requests for objects must be granted to subjects within a reasonable amount of time. Confidentiality and integrity have no value without availability.

Next, we have non-repudiation, which guarantees that no one can deny a transaction. And the most common method to provide non-repudiation is digital signatures, which prove that a digital message or document was not modified intentionally or unintentionally from the time it was signed. That document could be an email message, a contract, any artifact that's part of a transaction.

Digital signatures are actually based on asymmetric cryptography, a public-private key pair. It's the digital equivalent of a handwritten signature or a stamped seal, and it provides non-repudiation in a publicly verifiable manner via the public key.
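To make that a bit more concrete, here's a minimal sketch of signing and verifying a message with an asymmetric key pair, using the third-party Python cryptography package. This is purely for context; the message is made up, and the exam won't ask you to write any code.

```python
# Minimal sketch of sign/verify with an asymmetric key pair,
# using the third-party "cryptography" package (assumed installed).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Wire transfer #1042 approved"  # hypothetical message

# Only the holder of the private key can produce this signature...
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# ...but anyone with the public key can verify it. verify() raises
# InvalidSignature if the message or signature was altered.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```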

Stated another way, non-repudiation is the ability of one party to defeat or counter another party's false denial of an obligation with irrefutable evidence. So the digital signature on that message, on that e-signing of a contract, gives us a publicly verifiable assertion that both of these parties were involved in the transaction, and it cannot be denied. Side note: do remember that shared accounts and identities prevent non-repudiation. Simple example: if I have a Twitter account and three people have access using the same credentials, I can never prove who posted a tweet. We'll often hear the concept of AAA mentioned in the context of several protocols that provide authentication, authorization, and accounting services.

And we'll touch on those protocols here and there throughout the series, but I want to focus right now on these three concepts. So we have authentication, where a user or service proves identity with some type of credentials, like a username and a password. And then we have authorization, where the authenticated users are granted access to resources based on the roles and or permissions assigned to their identity.

And accounting refers to the methods that track user activity and records these activities in logs. It tracks user activity and resource access as part of the audit trail. And these three concepts really are prefaced by a fourth. They go hand in hand. So I want to take another pass at this from a slightly different angle.
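Before we take that second pass, here's a quick sketch of what accounting looks like in practice: a structured audit-log record tying an action back to an identity. The field names and values are purely illustrative, not from any particular product or standard.

```python
# Minimal sketch of an accounting/audit-trail record: who did what, to which
# object, from where, and when. Field names here are illustrative only.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def record_event(subject: str, action: str, obj: str, source_ip: str) -> None:
    """Append one audit record tying an action back to an identity."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject": subject,      # the authenticated identity
        "action": action,        # what they did
        "object": obj,           # the resource they touched
        "source_ip": source_ip,  # where the request came from
    }
    audit_log.info(json.dumps(event))

record_event("alice@example.com", "DELETE", "hr-share/payroll.xlsx", "10.0.4.17")
```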

Let's talk about identification and authentication. So identification is where a subject claims an identity, and identification could be as simple as a username for a user, as simple as an Active Directory account. And authentication, again, is where the subject proves their identity by providing authentication credentials.

The matching password for a username, for example. And this leads to authorization. So after authenticating subjects, systems can authorize access to objects based on their proven identity. And then there's accountability. Auditing logs and audit trails record events, including the identity of the subject that performed the action.

So we have authorization that comes after authentication and accountability that provides proof of it all. So identification plus authentication plus auditing gives us accountability. So why is accountability important?

Let's go through the hows to get to the why. So accountability is maintained for individual subjects using auditing. Logs record user activities and users can be held accountable for their logged actions.

So how does this help us? Well, it directly promotes good user behavior and compliance with the organization's security policies. Generally speaking, users are going to behave when they know their actions are being audited, when they are being logged, and it provides an audit trail for investigation if the fact that we're logging doesn't deter that bad behavior, or if heaven forbid we have a security breach, a compromised identity.

We're going to have that audit trail so we can go back and piece together the sequence of events. And this discussion can extend beyond users to systems and devices as well. It's common in modern enterprises that systems and devices will have identities also.

Two good examples: virtual machines in the cloud will have a managed identity, managed by the platform, created and deleted with the VM, sharing its lifecycle, and used by the VM when it accesses resources such as data, so we have an audit trail. And client devices will often have machine identities in a mobile device management platform, often tied back to the identity provider platform, and that can be leveraged to make decisions around authentication and authorization of the user on the device. Which brings us to our next topic, authorization models. So you want to be familiar with all these models for the exam, beginning with non-discretionary access control, which enables the enforcement of system-wide restrictions that override object-specific access control. Role-based access control is an example of a non-discretionary authorization model.

In discretionary access control, every object has an owner, and the owner can grant or deny access to any other subject at their discretion. This model is considered to be user-based and user-centric. A good example of discretionary access control is the NTFS file system on Windows, widely used for more than a couple of decades now. Next, we have role-based access control, a key characteristic of which is the use of roles or groups. So instead of assigning permissions directly to users, the user accounts are placed in roles and administrators assign privileges to the roles.

These are typically mapped to job roles. Then we have rule-based access control, a key characteristic of which is that it applies global rules to all subjects. The rules within this model are sometimes referred to as restrictions or filters. A good example of rule-based access control is a firewall that uses rules that allow or block traffic for all users equally.

And finally, we have mandatory access control. A key point about mandatory access control is that every object and every subject has one or more labels. These labels are predefined and the system determines access based on assigned labels. An example of mandatory access control that comes immediately to mind is military security, where the data owner doesn't set access.

If data is top secret, they don't determine who has top secret clearance. Nor is that individual data owner allowed to down-classify data, so they couldn't down-classify data from top secret to secret just to allow someone access. And next we have attribute-based access control, where access is restricted based on an attribute on the account, such as department, location, or functional designation.

For example, admins may require user accounts have the legal department attribute in order to view contracts. Now, just to be sure you're clear for the exam, let's touch on subjects and objects directly. Key concepts in access control for sure.

So subjects are the users, groups, and services accessing resources known as objects. And the objects are the resources, files, folders, shares, printers, databases, any resources being accessed by the subject. And the authorization model determines how a system grants users access to files and other resources.
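Before we move on, here's a minimal sketch contrasting a role-based check with an attribute-based check, just to show where the decision logic lives in each model. The roles, permissions, and attributes are hypothetical examples I'm adding for context.

```python
# Minimal sketch contrasting RBAC and ABAC checks. Roles, permissions,
# and attribute names are hypothetical.
ROLE_PERMISSIONS = {                      # RBAC: privileges assigned to roles, not users
    "hr-analyst": {"read:employee-records"},
    "hr-manager": {"read:employee-records", "update:employee-records"},
}

def rbac_allows(user_roles: set[str], permission: str) -> bool:
    """Grant access if any of the subject's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

def abac_allows(user_attributes: dict, resource_attributes: dict) -> bool:
    """Grant access based on attributes, e.g. the department must match."""
    return user_attributes.get("department") == resource_attributes.get("department")

print(rbac_allows({"hr-analyst"}, "update:employee-records"))          # False
print(abac_allows({"department": "legal"}, {"department": "legal"}))   # True
```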

But these two come up pretty often in discussions of access control, so just make sure you have them straight in your head for the exam. The syllabus also calls out gap analysis, which is a common task performed on a recurring basis and often in preparation for external audits. So in a gap analysis, auditors will often follow a standard like ISO 27001 and compare standard requirements to the organization's current operations. And deficiencies versus the standard will be captured in the audit report as gaps, sometimes called control gaps.

A control gap is a discrepancy between the security measures an organization should have in place versus the controls they actually have in place. The outcome is an attestation, which is a formal statement made by the auditor on the controls and processes in place and as to whether or not they are sufficient. And both internal and external auditors should have independence in the audit process, but attestations from external auditors tend to carry more weight and higher confidence, because the auditor is not employed directly by the organization. Zero trust is called out in great detail in section 1.2 of the syllabus. So zero trust is an approach to security architecture in which no entity is trusted by default.

And zero trust is based on three principles. Assume breach, verify explicitly, and least privilege access. Zero Trust has largely replaced the old trust but verify model, which was based on a network perimeter strategy, where everything inside the perimeter was automatically trusted. And it's supported by defense in depth that advises a layered approach to security. To think about it another way, Zero Trust really addresses the limitations of that legacy network perimeter-based security model.

It treats identity as the control plane, and it assumes compromise and breach, verifying every request. Again, no entity is trusted by default.

In Zero Trust, we verify identity, we manage devices and apps, and we protect data. So let's talk access policy enforcement in the context of Zero Trust. We have the policy enforcement point, which is responsible for enabling, monitoring, and terminating connections between a subject, like a user or a device, and an enterprise resource. The policy enforcement point acts as the gateway that enforces access control policies. So when an access request occurs, the policy enforcement point evaluates the request against predefined policies and applies the necessary controls.

For example, a policy enforcement point might enforce multi-factor authentication for access requests from unexpected locations, which would imply that enforcement is dynamic based on conditions and context around the request at the time of the request. And then we have the policy decision point, which is where access decisions are made based on various factors like user identity, device health, and risk assessment.

The PDP evaluates the context of an access request and decides whether it should be allowed, denied, or subjected to additional controls. The policy decision point considers the five W's, who, what, when, where, and why. But to state it in short, the policy enforcement point enforces policies at the connection level, while the policy decision point makes access decisions based on contextual information. The exam syllabus calls out several key elements of zero-trust network architecture in the control plane and the data plane.
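Before we unpack those elements one by one, here's a minimal sketch of the division of labor we just described: decision logic sitting in the policy decision point, enforcement sitting at the policy enforcement point. The signals and thresholds are made up for illustration; they're not drawn from NIST SP 800-207 or any vendor's product.

```python
# Minimal sketch of zero-trust access policy evaluation. The signal names and
# thresholds are illustrative, not from NIST SP 800-207 or any vendor product.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool
    trusted_location: bool
    user_risk: str  # "low", "medium", or "high"

def policy_decision_point(req: AccessRequest) -> str:
    """Decide: allow, require MFA, or block, based on the request's context."""
    if req.user_risk == "high":
        return "block"
    if not req.device_compliant or not req.trusted_location:
        return "require_mfa"
    return "allow"

def policy_enforcement_point(req: AccessRequest) -> None:
    """Enforce the decision at the connection: the gateway applies the controls."""
    decision = policy_decision_point(req)
    print(f"{req.user}: {decision}")

policy_enforcement_point(
    AccessRequest(user="alice", device_compliant=True, trusted_location=False, user_risk="low")
)
# alice: require_mfa
```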

So in the control plane, we have adaptive identity, threat scope reduction, policy-driven access control, the policy administrator, and the policy engine, all of which you'll need to be familiar with. These drive the policy-based decision logic for zero trust. In the data plane, we have implicit trust zones, subject and system, and the policy enforcement point. This enforces the decisions defined in the control plane.

This enforces the decisions defined in the control plane. If you're wondering where these elements of zero trust network architecture come from, they are described in detail in NIST Special Publication 800-207. Let's unpack all of these, beginning with the control plane. Adaptive identity changes the way the system asks a user to authenticate based on the context of the request.

So the policy decision point is going to look at elements like location, the device the user is coming from. Is that device healthy? Are they using an approved app? Is there any risk associated with this user?

Threat scope reduction is really an end goal of zero trust network architecture, which is to decrease risks to the organization. Policy-driven access control means controls based on a user's identity rather than simply their system's location. Probably the most popular policy-driven access control out there is Conditional Access in Microsoft's Entra ID, formerly Azure Active Directory, which is used with Office 365. In fact, I'll show you around a conditional access policy here in just a moment so you can get a sense of what a system like that looks like.

And we have the policy administrator, responsible for communicating the decisions made by the policy engine. This is an element of the system, not a human person. And then we have the policy engine, which decides whether to grant access to a resource for a given subject. Another example here is Entra ID, the identity platform used with Office 365. But the policy administrator and the policy engine together make up the policy decision point. Moving on to the data plane, we have implicit trust zones, which are part of traditional security approaches in which firewalls and other security devices formed a perimeter.

Systems belonging to the organization were placed inside the boundary. So we see subject and system called out here. The subject is a user who wishes to access a resource and a system is a non-human entity, often the device used by the user to access the resource. And then we have the policy enforcement point. When a user or system requests access to a resource, the policy enforcement point evaluates it against predefined policies and applies the necessary controls.

Microsoft Entra ID is a good example of a policy enforcement point. We're going to visualize these concepts a couple of different ways for context. So let's consider conditional access in Entra ID.

So the system will look at the signals around the request: the user, their location, the device, the application, the real-time risk of that user. If the user's current risk level is high based on recent activities, that's going to influence the decision.

It will verify every access attempt. It may just allow access if conditions are good. It may require MFA, some additional authentication, to deal with any concerns around location, device, or risk, or it may block access altogether. But if the user meets the bar, if all the conditions of the request are acceptable, they'll gain access to the apps and data they're requesting.

So let's look at a logical diagram of the zero trust concepts we've been talking about here. So we have the control plane and the data plane. And in the data plane, we have the policy enforcement point. In the control plane, we have the policy decision point, which is comprised of the policy engine and the policy administrator.

We have our system and subject, which make the request, and the policy enforcement point will enforce the final decision there and, if granted, give the subject and system access to the enterprise resource. There are certainly many supporting systems and functions here, from identity management, to PKI, data access policies, activity logs, threat intelligence. But these are the core components. So the policy enforcement point is where security controls are applied, it's where they're enforced, and the decisions are made in the policy decision point. So we talked about concepts like adaptive identity, so I want to give you a quick tour of conditional access in Microsoft Entra ID so you can see how those conditions around access come together in a policy. But again, this is just for context to help you connect the dots. Security Plus is vendor agnostic.

So I'm going to switch over to a browser and I'll go to the Microsoft Entra Admin Center and I will look for the conditional access area. I will look at the policies and we'll take a look at an existing policy. So Exchange Online requires compliant device. So I'll look at the settings of this policy so you can get a sense of the conditions.

You see here I can apply this to specific users, all users, or specific users and groups. I can exclude specific users and groups if I wish. I can specify the target resources. In this case, we're targeting Exchange Online. And I can target this as broadly as all my cloud apps.

I can go very broad or very narrow. And then when I look at my conditions here, I see I can look at the user risk, for example, so I can make decisions based on the user's risk level. And I can look at their sign-in risk, so if we have concerns about the sign-in itself.

But you'll notice here it mentions the sign-in risk level is generated based on all real-time risk detections. I can apply this to specific device platforms. You see I can drill down and apply a policy that applies only to Windows or Mac or Android or iOS, for example.

And we can look at locations. So I can exclude trusted locations if I wish. Maybe I don't want to prompt users for additional authentication factors when they're on a managed device in a known trusted location like the corporate office.

We get sign-in fatigue and unhappy users when we're overdoing it in that respect. So we have to establish our boundaries based on our confidence. And I'll go over here and look at my access controls. So I can grant access based on conditions here. So I can grant access, but I can require multi-factor authentication, require a specific strength of authentication.

I can require a device to be marked as compliant, or to be joined to my organization, like joined to my Entra organization, for example. I can require an approved app, and you'll notice I can require any one of these selected controls, or I can apply them all and say you must meet all of these conditions. And the more sensitive the operation, the more likely I'm going to go that route of requiring multiple conditions in that respect. But that's how adaptive identity flows in the Microsoft ecosystem, and you'll find similar concepts across many platforms out there.

So if you don't have any exposure, hopefully that gives you a bit of context. Let's move on to physical security. It's important to remember there is no security without physical security. Without control over the physical environment, no amount of administrative or technical access controls can provide adequate security.

If a malicious person can gain physical access to your facility or your equipment, they can do just about anything they want, from destruction of property to disclosure and alteration. So physical security is that first outer layer of protection. And we'll go through the physical security controls mentioned in the exam syllabus in order, beginning with the bollard, which is a short, sturdy, vertical post, usually made of concrete, steel, or some heavy-duty material. They can be fixed in place or retractable, but they act as physical barriers preventing vehicles from forcibly entering a restricted area.

They often delineate pedestrian areas, parking lots, and sensitive zones to minimize accidental damage, but they're primarily used to control traffic flow and protect buildings or areas from vehicle-based attacks. Next, we have the Access Control Vestibule, which is a physical security system comprising a small space with two interlocking doors, only one of which can be open at a time. It's designed to strictly control access to highly secure areas by allowing only one person at a time to pass through.

This will protect against tailgating, where a user slips through an entry based on someone else's badge when they themselves don't have a badge. It also prevents piggybacking, which is just like tailgating but typically with bad intent to gain access to a restricted area.

You don't really need to be too worried about the details between those two. They both describe a situation where somebody tries to follow someone with a badge into a secure area without using a badge of their own. And the Access Control Vestibule will really help block unauthorized access of any kind. You may have previously heard the access control vestibule called a mantrap.

The naming has been updated in recent years, but two names for the same thing. So fences are called out on the exam. So let's talk about the characteristics of fences.

Typically, efficacy comes down to their height and their composition. So a fence of three to four feet deters the casual trespasser. A six-to-seven-foot fence is too difficult to climb easily. It might block vision, which provides additional security if folks standing on the ground can't see what's behind the fence. On the other hand, an eight-foot fence topped with barbed wire will deter determined intruders.

And then we could even employ what's called a PIDAS, a Perimeter Intrusion Detection and Assessment System, which will detect someone attempting to climb a fence. A PIDAS is an expensive control and it may generate false positives. So a fence tends to be a deterrent control and a PIDAS is a detective control. Focus on the fence for the exam.

But to augment fences, some orgs may also erect stronger barricades or zigzag paths to prevent a vehicle from ramming a gate. So really think of that as a layered defense, as defense in depth, where we're adding additional supporting controls, compensating controls of a fashion, if we go back to our previous installment.

So next we have video surveillance. So cameras and closed circuit TV systems can provide video surveillance and reliable proof of a person's identity and activity. And many cameras nowadays include motion and object detection capabilities, which will kick them into action when necessary, when there's activity to capture.

That makes combing through camera footage for meaningful events much easier after the fact. We have security guards, a preventive physical security control, and they can prevent unauthorized personnel from entering a secure area. They can recognize people, and they can check an individual's picture ID for people they don't recognize. Access badges can electronically unlock a door and help prevent unauthorized personnel from entering a secure area.

So if we were to put a label on these, video is detective, security guards are preventive, access badges are preventive, maybe you can see how each may also serve to deter potential attacks. Just like a lock is a psychological barrier as well as a physical barrier that may deter bad behavior, a video camera can do the same thing. If someone sees that video camera they may simply think twice. It discourages them from acting. With lighting, we need to think about location, efficiency, and protection.

So in terms of location, installing lights at all entrances and exits to a building can deter attackers from trying to break in. In terms of efficiency, a combination of automation, light dimmers, and motion sensors can save on electricity costs without sacrificing security. They can automatically turn on at dusk, automatically turn off at dawn, they can even be motion detecting.

And we need to protect the lights. If an attacker can remove the light bulbs, it defeats the control. If the attacker can break the light bulb, it defeats the control.

So either place the lights high enough that they can't be reached or protect them with a metal cage. And your lighting is a deterrent control. There are four types of sensors called out in the syllabus. The first is infrared, which detects heat signatures in the form of infrared radiation emitted by people, animals, or objects.

Infrared sensors are often integrated into security cameras and alarm systems to improve detection capabilities. Next we have pressure sensors, which are designed to detect changes in pressure on a surface or in a specific area, such as a person walking on a floor or stepping on a mat. Pressure sensors are used in access control systems to ensure that only authorized individuals can enter. Microwave sensors use microwave technology to detect movement within a specific area.

They're often used with other types of sensors to reduce false alarms. Ultrasonic sensors emit high frequency sound waves and measure the time it takes for the sound waves to bounce back after hitting an object or a surface. Ultrasonic sensors are commonly used in parking assistance, robotic navigation, and intrusion detection. In the category of deception and disruption, we have the honeypot.

Honeypots lure bad actors into doing bad things, and they let you watch them. But honeypots should only entice, not entrap, because under U.S. law, evidence gathered through entrapment may not be admissible in court. For example, allowing them to download a fake payroll file might be considered entrapment.

The goal of a honeypot is really to distract attackers from real assets and isolate them in a padded cell until you can track them down. A group of honeypots is called a honeynet. Be familiar with both of these for the exam. Then we have the honey file, which is a decoy file deceptively named so it attracts the attention of an attacker. And the honey token is a fake record inserted into a database to detect data theft.

These are all intended to deceive and disrupt attackers, divert them from live networks, and allow observation by our security team. The goals are to detect, isolate, and observe. That brings us to section 1.3, which asks us to explain the importance of change management processes and their impact to security. So we'll be focused on business processes impacting security operations, from approval to testing to back-out plans and maintenance windows. We'll look at technical implications, and finally documentation and version control. So these are really more about what these processes solve for and why we use them. And we're about to cover every one of them right here, so buckle up. And I'm going to take one step further right out of the gate and mention configuration management, because when we make changes, often we are affecting system or application configuration.

And if we manage these disciplines correctly, it can prevent security-related incidents and outages. That's our top-level goal. So to cover off configuration management just briefly, it ensures that systems are configured similarly, that configurations are known and documented. It ensures that a true current state is known to all, and perhaps more importantly, that our intended current state is actually enforced, and in an automated way where possible. We can automate some of that using baselining, which ensures that systems are deployed with a common baseline or starting point. Imaging is a common baselining method, for example, in virtual machines or even in desktops. But I can establish baseline configurations for just about any service.
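To make baselining a little more concrete, here's a minimal sketch that compares a system's current settings against a documented baseline and reports any drift. The setting names and values are hypothetical; real tooling would pull current state from the system itself.

```python
# Minimal sketch of enforcing a configuration baseline: compare current
# settings against the documented baseline and report drift. Setting names
# and values are hypothetical.
BASELINE = {
    "password_min_length": 14,
    "firewall_enabled": True,
    "rdp_enabled": False,
}

def find_drift(current: dict) -> dict:
    """Return settings whose current value differs from the baseline."""
    return {
        key: {"expected": expected, "actual": current.get(key)}
        for key, expected in BASELINE.items()
        if current.get(key) != expected
    }

current_config = {"password_min_length": 8, "firewall_enabled": True, "rdp_enabled": False}
print(find_drift(current_config))
# {'password_min_length': {'expected': 14, 'actual': 8}}
```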

And in the world of CI/CD, continuous integration and continuous deployment, I can often automate implementation of that baseline through a pipeline, through a DevOps pipeline. And then we have change management, our focus here, which is the policy outlining the procedures for processing changes. Change management helps reduce the risk associated with changes, including outages or weakened security from unauthorized changes. To do this right requires changes to be requested, approved, tested, and documented. Going a step further in change management, I want to clarify the difference between change management and change control.

You'll often hear these two terms used interchangeably, and the difference in their meaning may not always be clear. So change control refers to the process of evaluating a change request within an organization and deciding if it should go ahead. In this process, requests are generally sent to the change advisory board, often called the CAB, to ensure that it is beneficial to the organization. So essentially, change management is the policy that details how changes will be processed in an organization.

And change control is the process of evaluating a change request to decide if it should be implemented. So change management is guidance on the process, and change control is the process in action. Now let's talk through business processes impacting security operation, because any change management program should address a few important business processes, including approval, which ensures that every proposed change is properly reviewed and cleared by management before it takes place. This ensures alignment across teams and really throughout the organization. Changes should always have clear ownership.

We want to clearly define who is responsible for each change by designating a primary owner. And that owner will be the key decision maker and sponsor of the change. Stakeholder analysis identifies all the individuals and groups within the organization and outside the organization that might be affected by the change.

So this enables the team to contact and coordinate with all relevant stakeholders. Impact analysis is review of the potential impacts of a change, including any side effects. This ensures the team is considering potential impact to systems and stakeholders.

And we have testing, which first and foremost, confirms that a change will work as expected by validating it in a test environment before production rollout. From a process perspective, test results should be captured in the change approval request. This will be one of the core questions every change approval board is going to ask.

That same board will also want to talk about your back-out plan, which provides detailed step-by-step sequences that the team should follow to roll back if the change goes wrong. This ensures systems can be quickly restored to an operational state if we have a problem. And often, as a matter of policy, organizations won't allow a change to be approved if it hasn't been tested and if it does not include a back-out plan.

And then we need to think about when a change should be rolled out, which is where maintenance windows come into play. This is a standing window of time during which changes can be implemented while minimizing impact to the business, often outside of business hours. There are certainly inconsequential changes that can happen during business hours, but when we think about critical services, it's going to be outside of business hours. And often, the maintenance window is defined in customer contracts.

And when you roll all of these processes up together, these elements together can define a standard operating procedure for change management. And remember, any change that affects system or data exposure may impact security. So we need to make sure we update our documentation, our data flow diagrams, and potentially do threat modeling to identify any new attack surfaces and address any new potential vulnerabilities with appropriate security controls.

So shifting gears, let's talk through the technical implications that need to be considered as part of the change management process. Do we need to update allow or deny lists on our firewall? Are there any restricted activities here, potentially involving sensitive data? What are our expectations of downtime?

Any application restarts? Impact to legacy applications? And what other dependencies are there in the service chain?

We need to check all of these boxes in our planning process, and at the end of the day, we're looking to address any new exposures, even temporary exposures, of our data or our systems. Why? Well, to avoid service disruptions and security vulnerabilities.

As system configurations change, attack surfaces may change as well, and we need to plan for that throughout the change process. So let's drill down on each of these technical implications. We'll start with allow and deny lists. So firewall rules, application allow and deny lists, and access control lists may all need to be updated. Some activities may need to be restricted, like data updates during database replication or migration.

If you have an orders database being updated while you're replicating data, you could lose orders. So we need to think about that. And we need to consider any potential downtime, because some changes may cause service interruptions, which result in direct impact to the business. This is where our maintenance window comes into play.

Next, application restarts. So we put controls around risky activities like application and service restarts, whether that's taking a security function offline for its update or taking down a business application. We need to think about how that's going to affect service availability, and if we're taking down security-related functions, how that affects our security posture during the time that system is offline.

And then we have to think about legacy applications. So modifications to legacy apps that may not support some changes like component or service version updates. Legacy applications are a big reason many organizations still use a hybrid cloud because the advantage of the public cloud is your services are always up to date.

And sometimes the organization is not ready to update certain applications and services. And in some cases, you may have a legacy application that's coasting to end of life. And so you need to maintain that aging service until the business is ready to retire it. And legacy applications bring with them special security concerns.

There are certainly vulnerabilities, because an application that was developed or architected many years in the past was created without awareness of modern security concerns. There are going to be risk factors that the architects didn't think about or could not be aware of 10 or 15 years ago.

And then we need to think about dependencies. So we track dependencies between systems and services to identify downstream effects of current and future changes. If I'm updating a backend API or database, am I making a change that's going to impact the applications that leverage that data or that API, for example?

So let's move on to documentation. So documentation helps us understand the current state of and the changes to our operating environment. This is a weak spot of many organizations and a real concern when it comes to security. Documentation provides team members with a repository of information about the way that systems and applications are designed and configured. It serves as an ongoing reference for current and future team members.

And your change management processes should ensure that changes are not closed out until all documentation and diagrams are updated. It is a continuous process across new deployments and changes, and there may be multiple teams involved in keeping documentation of a system or service fully up to date. And we have to remember that documentation applies not only to the environment, but to policies and procedures that direct operation and support of that environment.

At the end of the day, there are some upsides and a downside we need to think about from a security perspective. So on a positive note, documentation provides benefits to IT and security operations, to business continuity and disaster recovery efforts, to incident response, and to future design and planning iterations. Having a good picture, an accurate picture, of current state is going to be helpful to everyone trying to secure and support that system or service. And we need to remember that you cannot fully secure a system or service for which you do not have a true picture of current state.

If you're implementing security controls based on inaccurate information, you may be leaving security vulnerabilities open to potential attackers that no one is aware of. And we'll close out 1.3 on version control, which is a formal process used to track current versions of software code and system or application configurations. Most organizations use a formal version control system that is integrated into their software development processes. And for most organizations, this is some platform based on Git, which is the most widely used version control system in the world, invented by Linus Torvalds, the creator of Linux.

Developers modify the code and they check it into a version control system that can identify conflicts in their changes with those made by other developers. And any version control system that is Git or based on Git is going to do so with great accuracy. It also tracks the current dev, test, and production versions of code. And when we think about the DevSecOps discipline, security is everyone's responsibility.

So we're going to be scanning the code that's being checked into that Git repository. There will likely be multiple types of security testing involved from very early in the development process, and just one of those can be scanning of code that's checked in to our Git repository. Code for different environments is typically tracked in Git using code branches. We might have a dev branch, a test branch, a main branch for production.

For the exam, though, focus on the function of version control, not on any specific version control system. But if any version control system is mentioned, it's going to be Git. And that brings us to section 1.4, where we're challenged to explain the importance of using appropriate cryptographic solutions. We're going to cover public key infrastructure, or PKI; a variety of encryption mechanisms, both types of encryption and scope of encryption; encryption-related tools; obfuscation techniques; a number of encryption concepts including hashing, salting, digital signatures, key stretching, blockchain, and the open public ledger; and finally, a number of certificate types. Now, PKI and certificates are directly related because we produce certificates from a PKI system. So I'm going to cover certificates right after PKI to keep these two together to make your job in preparing for exam day a bit easier. But beyond that, I'm going to cover everything else in the order presented, as I always do.

Let's dive right into public key infrastructure concepts, beginning with key management, which is the management of cryptographic keys in a cryptosystem. So operational considerations include dealing with generation, exchange, storage, use, crypto-shredding or destruction, and replacement of keys if a key is lost or expires. From a design perspective, we have to look at cryptographic protocol design, key servers, user procedures, and any related protocols for management, updates, and revocation. The certificate authorities create digital certificates and own the policies related to certificate creation, functionality, and issuance.

Now, a PKI hierarchy can include a single certificate authority that serves as the root and the issuing CA and manages all the policies, but this is not recommended because if that server is compromised, your entire PKI hierarchy itself is compromised. There's really no way back from that sort of breach. You'll have to start from scratch.

You may also hear a certificate authority called a certification authority by some vendors. Microsoft is one of those. Just know those are two ways of saying the same thing.

For best security, you'll see a three-tier PKI system with an issuing CA as the first layer, a subordinate or intermediate CA, sometimes called a policy CA as the second layer, and then a root certificate authority at the top. So the root CA is usually maintained in an offline state. This will typically only be brought online for specific operations, like issuing certificates to new subordinate CAs. You'll see that subordinate CA sometimes called a policy CA or an intermediate CA, multiple names for the same thing, and its role is to issue certificates to new issuing certificate authorities. And the issuing CA focuses on exactly that, issuing certificates for clients, servers, devices, websites, etc.

That represents your chain of trust. And this can be consolidated into fewer servers, fewer layers, creating a one or two level hierarchy. Generally speaking, in production you want a two layer hierarchy at minimum. So if you have a breach, for example, of your issuing CA, you can redeploy that without having to start from scratch. And in a three-tier system, you could have a breach at the issuing or subordinate levels and still recover by revoking and reissuing new certificates for subordinate and issuing CAs.

The Certificate Revocation List contains information about any certificates that have been revoked due to compromises to the certificate itself or to the PKI hierarchy. The CRL of the issuing CA contains information on revocation of certificates it has issued to clients, devices, websites, etc. And CAs are required to publish CRLs, but it's up to certificate consumers whether they check these lists and how they respond if a certificate has been revoked. For example, if you have a web application to which clients authenticate with a certificate, it's up to that web application to go check the CRL of that PKI to see if the certificate is indeed still valid or has been revoked for some reason. Each certificate revocation list is published to a file, and the client must download that file to check it, and this file can grow quite large over time in busy environments. That fact led to the creation of the Online Certificate Status Protocol, or OCSP, which offers a faster way to check a certificate's status compared to downloading a CRL. With OCSP, the consumer of a certificate can submit a request to the issuing CA to obtain the status of a specific certificate, rather than downloading that entire list.
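Just for context, here's a minimal sketch of what building an OCSP status request looks like with the third-party Python cryptography package. The file names are placeholders, and this only builds the request; it isn't a complete revocation checker.

```python
# Minimal sketch of building an OCSP status request with the third-party
# "cryptography" package. File names are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

with open("server-cert.pem", "rb") as f:         # the certificate being checked
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuing-ca.pem", "rb") as f:          # the CA that issued it
    issuer = x509.load_pem_x509_certificate(f.read())

request = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA1())
    .build()
)

# This DER blob is what gets POSTed to the CA's OCSP responder URL, which is
# advertised in the certificate's Authority Information Access extension.
der_bytes = request.public_bytes(serialization.Encoding.DER)
print(len(der_bytes), "byte OCSP request")
```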

So some other terms related to PKI you should be familiar with include the Certificate Signing Request, or CSR. The CSR records identifying information for a person or a device that owns a private key, as well as information on the corresponding public key. It's the message that's sent to the CA in order to get a digital certificate created. The common name or CN that appears on a certificate is the fully qualified domain name of the entity represented, such as the web server.
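For context, here's a minimal sketch of generating a CSR with the third-party Python cryptography package, just to show where the key pair and the common name come into play. The domain and organization names are made up.

```python
# Minimal sketch of generating a key pair and a CSR. The subject values
# are hypothetical; a real CSR would use your own domain and organization.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(key, hashes.SHA256())  # signed with the private key; the CA gets the public key
)

print(csr.public_bytes(serialization.Encoding.PEM).decode())
```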

So I've mentioned online and offline certificate authority. So an online CA is always running. An offline CA is kept offline except for issuance and renewal operations. Offline is considered a best practice for your root certificate authority.

Then there's certificate stapling, a method used with OCSP that allows a web server to provide information on the validity of its own certificate. It's done by the web server essentially downloading the OCSP response from the certificate vendor in advance and providing it to browsers. And then there's pinning, which is a method designed to mitigate the use of fraudulent certificates. Once a public key or certificate has been seen for a specific host, that key or certificate is pinned to the host. And at that point, should a different key or certificate be seen for that host, that might indicate an issue with a fraudulent certificate.
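And here's a minimal sketch of the pinning idea: fetch the server's certificate, hash its public key, and compare it against a previously pinned value. The pinned hash below is just a placeholder; real applications typically rely on platform or library support rather than a script like this.

```python
# Minimal sketch of certificate pinning: hash the server's public key and
# compare it to a value pinned earlier. The pinned hash is a placeholder.
import hashlib
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import serialization

PINNED_SPKI_SHA256 = "0" * 64  # placeholder for the previously pinned hash

pem = ssl.get_server_certificate(("example.com", 443))   # fetch the live certificate
cert = x509.load_pem_x509_certificate(pem.encode())
spki = cert.public_key().public_bytes(                    # DER-encoded SubjectPublicKeyInfo
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
observed = hashlib.sha256(spki).hexdigest()

if observed == PINNED_SPKI_SHA256:
    print("public key matches the pinned value")
else:
    print("WARNING: public key does not match the pin; possible fraudulent certificate")
```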

Certificate chaining refers to the fact that certificates are handled through a chain of trust. You purchase a digital certificate from a certificate authority, so you trust that CA's certificate, and in turn that CA trusts the root certificate in its hierarchy.

The trust model in a PKI is a model of how different certificate authorities trust each other and how their clients will trust certificates from other certification authorities. The four main types of trust models that are used in PKI are bridge, hierarchical, hybrid, and mesh. What you're going to see in your own organization is that hierarchical structure that I showed you with an issuing CA, a policy CA, and a root CA.

In very large organizations or between organizations that are collaborating in some unique way, you may see hybrid or bridge-type trust models where they're creating trust between their disparate hierarchies. Hierarchical is going to be the norm, and that's what you really need to think about for the exam.

They're not going to get into the weeds on the different trust models, but know those four main types by name. Then we have key escrow, which addresses the possibility that a cryptographic key may be lost. The concern is usually with symmetric keys or with the private key in asymmetric cryptography, because remember, the public key in asymmetric cryptography is shared by design. So if a user loses that key, there's no way to get the key back, and the user can't then decrypt messages.

And organizations establish key escrows precisely to enable that recovery of lost keys. I don't know for certain that certificate formats, what we'd call X.509 certificate formats, will come up on the exam. That's technically the type of certificate we're dealing with here. You'll sometimes hear them called SSL or TLS certificates.

In reality, TLS has supplanted or replaced SSL. So in this table, column 2 has the file extension, which tips you off to the format of the certificate, and column 3 tells you if the private key is included in that file. Remember, if you're trying to install a certificate on a new device or transfer a certificate, it's not whole without the private key.

Many times what you'll find is a certificate is issued and the private key is marked as not exportable, meaning you can't export the whole certificate and transfer it somewhere else. There are a few certificate types you should know for the exam as well. So we have a user certificate which is used to represent a user's digital identity. In most cases, a user certificate is mapped back to a user account. We have a root certificate which is a trust anchor in a PKI environment.

It's the root certificate from which the whole chain of trust is derived. That's the root CA. We have certificates used for domain validation. So a domain validated certificate is an X.509 certificate that proves the ownership of a domain name.

An extended validation certificate provides a higher level of trust in identifying the entity that is using the certificate. This is common in the financial services sector. When money is on the line, it raises the bar. But when we look at that hierarchy, the root CA is the root of trust.

So to recap, in a PKI, the root certificate serves as the trust anchor, as it is the most trusted component of the system. And your organization's root certificate will be deployed to your organization's devices and added to the list of trusted certificate authorities.

But generally speaking, your CA's root certificate is only known and trusted within your organization. So for external, customer-facing, vendor-facing use cases, we need to take a different approach. For resources accessed externally, you'll buy a certificate from a trusted third party.

Some examples would include DigiCert, Entrust, GlobalSign, GoDaddy. They all offer certificates for purchase. So the root CAs in their organizations and their hierarchies will be widely trusted and generally pre-installed on most devices out there, computers and phones and the like.

In fact, let me just show you this. Think of the certificate of a root CA as your root of trust, and let me show you the trust hierarchy and that root of trust in the real world on a device.

So I've launched the Certificates snap-in on my computer here, and I'm looking at the certificate store for the local computer. So I'll drill down into certificates here, and, for example, I see a Microsoft Intune MDM device CA. So that's a client certificate for my device. If I double-click on that certificate, I can see when it's valid and when it was issued.

When I look at the details, I can scroll down and look at the enhanced key usage, which tells me what it's used for. It's used for client authentication. So for this client, this device to authenticate to Intune. But what about that chain of trust up to the root of trust?

If I go to the certification path, you'll see the certificate, the MDM device CA from which it was issued, and then the Microsoft Intune root certification authority. So there is your chain of trust up to the root of trust.

Now, what about those trusted third parties that we would leverage for external-facing use cases, when we're communicating with entities that need that trust outside of our organization? Well, if I were to go buy a certificate from a third party... if you look here under Trusted Root Certification Authorities, I can see the certificates of trusted root CAs. And you will see all those companies I mentioned: DigiCert, which has multiple root CAs, as you can see.

Entrust, GlobalSign, GoDaddy, and others. These were all pre-installed on this device. I didn't have to do anything beyond installing Windows. But if I work for, let's say, Contoso, the only reason my root CA certificate will be here is because either, one, it's integrated with Active Directory Domain Services, what we'd call an enterprise PKI, at which point it more or less gets installed automatically.

Or, two, the IT team has, through some other means, installed that root CA on my device, so it is then a trusted source. PKI is a pretty complicated subject. If you get good at PKI early in your career, it's going to serve you well.

But I hope that clears up some of the basics. Let's talk through a few more certificate types. We have the wildcard certificate that can be used for a domain and a subdomain. So for example, in the contoso.com domain we have two servers called web and mail.

The wildcard certificate is *.contoso.com, and when installed it would work for the fully qualified domain names of both of these. In short, a wildcard certificate can be used for multiple servers in the same domain, which will save us on costs, particularly if we're buying certificates for external-facing functions. It supports multiple FQDNs in the same domain. Next, we have a code signing certificate.

So when code is distributed over the internet, it's essential that users can trust that it was actually produced by the claimed sender. For example, an attacker would like to produce a fake device driver or web component that's actually malware but is claimed to be from some legitimate software vendor. Using a code signing certificate to digitally sign the code mitigates this danger, because that bad actor won't have access to the legitimate vendor's PKI to produce such a code signing certificate for that software vendor's domain.

A code signing certificate provides proof of content integrity. Next, we have a self-signed certificate, which is a certificate issued by the same entity that's using it. However, it does not have a certificate revocation list and cannot be validated or trusted. It's the cheapest form of internal certificates and can be placed on multiple servers.
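
Just to make that concrete, here's a minimal sketch of generating a self-signed certificate with Python's cryptography package; dev.contoso.local is a made-up name, and because subject and issuer are the same entity, nothing outside your own machine will trust it.

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "dev.contoso.local")])

    # Subject and issuer are identical: the entity is vouching for itself.
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )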

You should only use self-signed certificates in test and development scenarios; they should never be used for production, generally speaking. If you need actual trust, a self-signed certificate is not going to do the job. If you need to simulate that trust with a certificate for test and development, a self-signed certificate is just fine. We have a machine or computer certificate, which is used to identify a computer within a domain.

Email certificates allow users to digitally sign their emails to verify their identity through the attestation of a trusted third party known as a certificate authority. And this can allow the users to encrypt the entire contents, messages, attachments, etc. Then we have a third party certificate, a certificate issued by a widely trusted external provider such as GoDaddy or DigiCert.

This is strongly preferred for TLS on public facing services like a company website. Because as you saw in the demo, the root of trust for that widely trusted third party is already present on most devices, already trusted by virtually all organizations out there. Next, we have the Subject Alternative Name or SAN certificate, which is an extension to the X.509 specification that allows users to specify additional host names for a single SSL or TLS certificate.

It's standard practice for SSL certificates, and it's on its way to replacing the use of the common name. You can also insert other information into a SAN certificate, like an IP address.

So we don't even have to use just names. We could use IP addresses as well. So this enables support for FQDNs from multiple domains in a single certificate.

So remember with the wildcard certificate we could support multiple host names for the same domain. With a SAN certificate we can support FQDNs for multiple domains and we can add IP addresses in there so we can navigate to an IP address in a browser. We don't even need a name.
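
As a rough illustration, assuming the same Python cryptography package from the earlier sketches and with fabrikam.com as a made-up second domain, this is what requesting a SAN extension can look like; names from different domains and even a bare IP address sit side by side.

    import ipaddress
    from cryptography import x509

    san = x509.SubjectAlternativeName([
        x509.DNSName("web.contoso.com"),        # FQDN from one domain
        x509.DNSName("mail.fabrikam.com"),      # FQDN from a different domain
        x509.IPAddress(ipaddress.ip_address("203.0.113.10")),  # even a bare IP address
    ])
    # builder = builder.add_extension(san, critical=False)  # added to a CSR or certificate builder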

And be aware of certificate expiration. Certificates are valid for a limited period from the date of issuance as specified on the certificate. The industry standard moves over time. Current industry guidance last I checked was a maximum certificate lifetime from widely trusted authorities like DigiCert at 398 days, a little over one year. Will organizations cheat on that and issue for a little longer so they have to buy certificates less often?

Yes, they will. Particularly if you're buying a subject alternative name certificate from an external source that supports many names, those can get quite expensive into the hundreds of dollars. So I do see folks cheat on the lifetime.

Not a crisis if you're maintaining securely, but you need to balance cost and security to be sure. Next, we'll take a look at encryption by level or scope. So we'll start with file encryption, which operates at the individual file level, meaning files could have unique encryption keys. This would be useful for files containing sensitive info, of course.

This could be financial data, protected health information, or personally identifiable information, PHI and PII, respectively. Then there's volume encryption, which is encryption that targets a specific partition or volume within the physical drive. It's useful when different volumes need varying levels of protection. So in the Windows or Linux world, think about the data volume versus the system volume.

You know, one where data lives, the other where the operating system lives. Then we have disk encryption, which automatically encrypts data when it is written to or read from the entire disk. This would be BitLocker on Windows or dm-crypt on Linux. But you can see, when we look at this from the perspective of scope, that the scope of encryption at the file level is very low, very granular, and the scope of encryption at the disk level is very high. We're encrypting everything.

So basically the scope is inversely proportional to the granularity. And I called out partition or volume there. So those are actually two separate concepts and I want to touch on those to make sure you're clear in case something comes up on the exam. The partition represents a distinct section of storage on a disk.

On Windows, the C drive is typically a primary partition. It's a distinct physical section of the storage, typically. A volume, on the other hand, represents a logical division of a storage device. It represents a single accessible storage area, and a volume can span multiple partitions or even multiple disks.

But a volume logically assembles one or more partitions into a unified storage area. I don't expect the exam is going to get too wound up around that detail, but I wanted to call it out just in case. So let's talk about drive encryption.

We have full disk encryption, FDE for short, which is built into the Windows operating system as BitLocker, and BitLocker protects disks, volumes, and partitions.

Then there's a self-encrypting drive, which is encryption on a drive that's built into the hardware of the drive itself. Anything that's written to that drive is automatically stored in encrypted form, and a good self-encrypting drive should follow the Opal storage specification. So if we just go under the hood, we're really talking about protecting data at rest here. So full disk encryption under the hood uses a system's trusted platform module.

The TPM is on the motherboard. It's used to store encryption keys so that when a system boots, it can compare keys and ensure that the system has not been tampered with. We call this a hardware root of trust.

When certificates are used for full disk encryption, they use a hardware root of trust that verifies the keys match before the secure boot process takes place. A TPM is a hardware root of trust.

Now I mentioned self-encrypting drives should use the Opal storage specification, which is the industry standard for self-encrypting drives. It's a hardware solution that outperforms software-based alternatives. And they don't have the same vulnerabilities as software and therefore are generally considered to be more secure.

They're solid state drives. They're purchased already set up to encrypt data at rest, and the encryption keys are stored on the hard drive controller. They are immune to a cold boot attack and are compatible with all operating systems. So the self-encrypting drive is effective in protecting data on lost or stolen devices, such as a laptop, because only the user and the vendor can decrypt the data.

There are a couple of other data at rest scenarios we should touch on. One is cloud storage encryption. Your cloud service providers, your CSPs like Microsoft Azure, Google Cloud, and Amazon Web Services, usually protect data at rest automatically, encrypting it before persisting it to managed disks, blob storage, file storage, or queue storage. Amazon went through years of grief because in the early going they didn't automatically encrypt data, didn't automatically protect it at rest, which led to some breaches of aging cloud storage out there that customers hadn't gotten rid of in a timely fashion.

Then we have transparent data encryption, which helps protect SQL databases and data warehouses against the threat of malicious activity, with real-time encryption and decryption of the database, backups, and transaction log files at rest, without requiring app changes. And notice I mentioned it's real-time encryption with nearly zero performance impact.

And you'll find this is available for multiple flavors of relational database management systems out there; from Microsoft SQL Server to MySQL to PostgreSQL, most have some form of transparent encryption. I may use that CSP acronym more than once in our sessions through this series. CSP equals cloud service provider, and there I'm talking about Microsoft Azure, Google Cloud Platform, and Amazon Web Services, or any other public cloud provider in that vein.

Where the syllabus mentions transport or communication, we're talking about data in transit. Data in transit is most often encrypted with TLS or HTTPS. This is typically how a session would be encrypted before a user enters credit card details in a web transaction, for example. And while similar in function, TLS has largely replaced SSL.

So when you see TLS and SSL used interchangeably, TLS is really what's typically being used there. TLS is common for encrypting a widespread variety of network communications like VPN as well. You'll also hear data in transit called data in motion. Two ways of saying the same thing.

You may see mention of protecting data in use or data in processing, two ways of saying the same thing. Data in use occurs when we launch an application like Microsoft Word or Adobe Acrobat. The app isn't running the data from the disk drive; it's running in RAM, in random access memory. This is volatile memory, meaning that should you power down the computer, the contents are erased. But nonetheless, in some cases, data in memory will be encrypted.

One place that comes to mind is the Credential Guard feature in Windows, which encrypts your password hashes in memory, so if they're dumped, they're not accessible. I want to revisit data protection in relational databases, because we can go beyond encrypting at the database level. Many of your relational databases support row- or column-level encryption. Row level encrypts an entire record.

Column level encrypts specific fields within the record. This is commonly implemented within the database tier. I would say it's also possible in code of your front-end applications if you wanted to do it that way.

We see masking done that way. And to restate it here briefly alongside the other relational database encryption options: transparent data encryption is full database-level encryption, covering database files, logs, and backups. It requires no application changes, comes with virtually no performance impact, and is offered on most relational database management platforms: MySQL, Microsoft SQL Server, PostgreSQL, MariaDB.

And it's usually available in PaaS versions of these services in the cloud as well. So let's move on to symmetric and asymmetric encryption. So symmetric relies heavily on the use of a shared secret key. It lacks support for scalability, easy key distribution, and non-repudiation.

So when I say lacks support, I mean it does not support scalability to many users because distributing that key is challenging, that single shared key. Asymmetric, which relies on public-private key pairs for communications between parties, supports scalability, easy key distribution, and non-repudiation. It doesn't mean one is better than the other.

It just means their most useful purpose differs. So asymmetric keys: the public keys are shared amongst communicating parties, and private keys are kept secret. So when we're dealing with data, to encrypt a message, we use the recipient's public key. To decrypt a message, you use your own private key. With digital signatures, to sign a message, you use your own private key. To validate a signature, other users would use the sender's public key. So if you're the sender, they'll use your public key.

But each party in asymmetric encryption has both a private key and a public key. So how are asymmetric and symmetric encryption commonly used? Well, symmetric is typically used for bulk encryption, encrypting large amounts of data, because it can do so very fast with that single shared key.

Asymmetric encryption is used for distribution of symmetric bulk encryption keys, that shared key we talked about. It's commonly used in identity authentication via digital signatures and certificates, and for non-repudiation services and key agreement.

So in that respect, the two can be used together. Symmetric algorithms can encrypt large amounts of data much faster than asymmetric, but an asymmetric algorithm can allow us to distribute that shared key securely to large numbers of parties. So I want to show you how those private and public key pairs are used in an example scenario.

So here we have Franco and Maria. So Franco sends a message to Maria requesting her public key. Maria sends her public key to Franco.

Franco uses Maria's public key to encrypt the message and he sends it to her. Maria then uses her private key to decrypt the message. And this could represent any number of transactions, any number of client application scenarios, but that's how the keys are used. Everyone else can use your public key to encrypt a message, and you can use your own private key to decrypt, which ensures anyone can send you an encrypted message, but only you can decrypt.
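
Here's what that exchange might look like in code, a minimal sketch with Python's cryptography package using RSA with OAEP padding: Maria generates the key pair, Franco encrypts with her public key, and only her private key can decrypt.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Maria's key pair; she shares only the public half.
    maria_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    maria_public = maria_private.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Franco encrypts with Maria's public key...
    ciphertext = maria_public.encrypt(b"Meet at noon", oaep)

    # ...and only Maria's private key can decrypt it.
    assert maria_private.decrypt(ciphertext, oaep) == b"Meet at noon"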

So let's take a look at some examples of symmetric and asymmetric encryption algorithms. A few common symmetric encryption algorithms first.

We have the Advanced Encryption Standard, or AES, as it's commonly called. It's the current industry gold standard, highly efficient, widely implemented, and it offers various key lengths from 128 to 256 bits, providing some flexibility in security levels. We have Triple DES, which is a variation of the Data Encryption Standard applying the encryption three times. Triple DES is being phased out and replaced by AES where it hasn't been already.

Two other examples: there's Twofish, a finalist in the competition where ultimately AES was selected, known for its flexibility and security, and then Blowfish, which was the predecessor to Twofish, also known for its strength and speed. A bit of trivia: Twofish and Blowfish were both written by Bruce Schneier of Schneier on Security, who's written some of the most popular books on security and encryption ever published.

Symmetric algorithms are used for bulk data encryption. And if I had to guess which of these algorithms would be most likely to be mentioned on the exam, I would say it's AES. It's widely used in the Microsoft ecosystem and a go-to in the US military for some very high security operations with a 256-bit key. I guess it'd be that one.
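
And since AES is the one to bet on, here's a minimal sketch of symmetric encryption with AES-256 in GCM mode, using Python's cryptography package purely for illustration; notice that one shared key both encrypts and decrypts.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # the single shared secret key
    nonce = os.urandom(12)                      # must be unique for each encryption with the same key

    aesgcm = AESGCM(key)
    ciphertext = aesgcm.encrypt(nonce, b"bulk data goes here", None)
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)   # the same key decrypts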

So how about some asymmetric encryption algorithms? We have RSA, one of the oldest and most widely used asymmetric algorithms, named after its creators.

Often used for key exchange and digital signatures, its security relies on the difficulty of factoring the product of two large prime numbers. Then we have Elliptic Curve Cryptography, or ECC, which is a more modern approach using elliptic curves.

It offers similar security levels to RSA, but with smaller key sizes, and that is the key element to remember with ECC. It makes ECC suitable for resource-constrained environments. Think IoT devices with limited memory and processing resources. We have Diffie-Hellman, primarily a key exchange protocol, allowing two parties to establish a shared secret key over an insecure channel. And then ElGamal,

an algorithm based on the difficulty of the discrete logarithm problem, used for encryption and digital signatures. As for mention on the exam, I'd say any of these could come up; ElGamal would be the least likely. So to revisit our common uses: AES-256 would be a common symmetric encryption scenario, and then on the asymmetric side, we've got RSA, Diffie-Hellman, and Elliptic Curve, or ECC.
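
To tie a couple of these together, here's a rough sketch, again with Python's cryptography package, of Diffie-Hellman-style key agreement over an elliptic curve (ECDH): two parties exchange public keys, independently derive the same shared secret, and then derive a symmetric key from it that could feed an algorithm like AES.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates its own key pair and shares only the public key.
    alice_key = ec.generate_private_key(ec.SECP256R1())
    bob_key = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its private key with the other's public key...
    alice_shared = alice_key.exchange(ec.ECDH(), bob_key.public_key())
    bob_shared = bob_key.exchange(ec.ECDH(), alice_key.public_key())
    assert alice_shared == bob_shared

    # ...then derives a symmetric key from the shared secret.
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"session key").derive(alice_shared)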

A few cipher types you should be familiar with. First is the stream cipher, which is a symmetric key cipher, where plain text digits are combined with a pseudo-random cipher-digit stream, also known as a key stream. It's basically a sequence of pseudo-random bits or digits, depending on the system, that's generated by a cryptographic algorithm using a secret key and some initialization vector. What you really need to remember is that each plain text digit is encrypted one at a time with the corresponding digit of the key stream to create a digit of the ciphertext, the encrypted data stream.
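
Just to illustrate the mechanic, and this is a toy, not a secure cipher, here's a sketch of the XOR-with-a-keystream idea that stream ciphers are built on; a real stream cipher derives the keystream from the secret key and an initialization vector so the receiver can regenerate it.

    import secrets

    plaintext = b"ATTACK AT DAWN"
    keystream = secrets.token_bytes(len(plaintext))   # stand-in for a cryptographically generated keystream

    # Each plaintext byte is combined with the corresponding keystream byte.
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

    # XOR with the same keystream reverses the operation.
    assert bytes(c ^ k for c, k in zip(ciphertext, keystream)) == plaintext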

So the plain text is unencrypted; the ciphertext is encrypted data. Then we have a block cipher, which is a method of encrypting text in which a cryptographic key and algorithm are applied to a block of data, for example 64 contiguous bits, all at once as a group rather than one bit at a time. A block cipher is generally considered to be more secure than a stream cipher.

Next we have the substitution cipher which uses the encryption algorithm to replace each character or bit of the plain text message with a different character. You don't see these in active use. They're really historical at this point. You have the Caesar cipher and the Vigenere cipher.

And the transposition cipher, which rearranges the order of plain text letters according to a specific rule. The message itself is left unchanged; just the order is scrambled. Examples here would include the rail fence cipher and columnar transposition. Now let's shift gears and talk cryptographic key length.

An effective way to increase the strength of an algorithm is to increase its key length. In fact, the relationship between key length and the work factor, the amount of work required to break the encryption, is exponential. A small increase in key length leads to a significant increase in that work factor.

Some examples. On the asymmetric side, RSA, the primary public key cryptography algorithm used on the internet, supports key sizes of 1024, 2048, and 4096 bits, and NIST recommends a minimum key length of 2048. On the symmetric side, we have the gold standard, the Advanced Encryption Standard, the go-to for the federal government. It supports key sizes of 128, 192, and 256 bits.

A 256-bit key is recommended for best quantum resistance. And I believe the US military still requires AES-256 for top secret data. But remember, doubling the key length from 128 bits to 256 bits doesn't make the key twice as strong.

It makes it 2 to the 128th power times as strong. It's all about the number of possible combinations.
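
You can see the scale of that with a couple of lines of Python; the numbers are just the sizes of the key spaces, the counts of possible keys an attacker would have to search.

    print(2**128)   # possible 128-bit keys: roughly 3.4 x 10^38
    print(2**256)   # possible 256-bit keys: roughly 1.2 x 10^77
    print(2**256 // 2**128 == 2**128)   # True: 256-bit has 2^128 times the key space of 128-bit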

Then we have static versus ephemeral keys. These are the two primary categories of asymmetric keys. Static keys are semi-permanent, and they stay the same over a long period of time. A certificate includes an embedded public key matched to a private key, and the key pair is valid for the lifetime of the certificate. Certificates have expiration dates, and systems continue to use those keys until the certificate expires; one to two years is a common certificate lifetime.

RSA is an example of an algorithm that uses static keys. And then remember, your certificate authority can validate a certificate's static key using a certificate revocation list or the Online Certificate Status Protocol. Then we have ephemeral keys, which are keys that have very short lifetimes and are recreated for each session.

So an ephemeral key pair includes a private ephemeral key and a public ephemeral key. And the system uses these key pairs for a single session and then discards them. So some versions of Diffie-Hellman use ephemeral keys. Now we're going to step into the tool section of 1.4 from the syllabus. And here we have the Trusted Platform Module or TPM.

This is a chip that resides on the motherboard of the device. It's multi-purpose. For example, for storage and management of keys used for full disk encryption solutions.

It provides the operating system with access to keys, but prevents drive removal and data access. In addition to full disk encryption, TPM is also leveraged by the secure OS boot process. Next we have the hardware security module or HSM, which is a physical computing device that safeguards and manages digital keys, performs encryption and decryption functions for digital signatures, strong authentication, and other cryptographic functions.

It's like a TPM, but often a removable or external device, whereas the TPM is a chip on the motherboard and it's going nowhere. Next we have the hardware root of trust, which is a line of defense against executing unauthorized firmware on a system. And when certificates are used in full disk encryption, they use a hardware root of trust for key storage.

It verifies that the keys match before the secure boot process takes place. So as you might already guess at this point, the trusted platform module and hardware security module are both implementations of a hardware root of trust. And next we have the key management system, or KMS. Your cloud service providers offer a cloud service for centralized, secure storage of and access to your application secrets, called a vault. The name varies by cloud platform.

So Azure has Key Vault, AWS has its KMS offering, and Google Cloud Platform has its own Cloud KMS. In this case, a secret is anything that you want to control access to. It could be API keys, passwords, certificates, tokens, or cryptographic keys.

The service will typically offer programmatic access via API to support DevOps and the CI/CD process. Access control at the vault instance level and to the secrets stored within is generally assumed. Secrets and keys can generally be protected either by software or by FIPS 140-2 Level 2 validated HSMs (or, as time passes, FIPS 140-3).
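
As a rough sketch of that programmatic access, here's what reading a secret from Azure Key Vault can look like with Microsoft's Python SDK; the vault URL and secret name are placeholders, and the other clouds' offerings have analogous client libraries.

    # pip install azure-identity azure-keyvault-secrets
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    client = SecretClient(vault_url="https://contoso-vault.vault.azure.net",
                          credential=DefaultAzureCredential())

    # The app never stores the connection string; it fetches it at runtime under access control.
    secret = client.get_secret("sql-connection-string")
    print(secret.value)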

Next, we have the secure enclave, which provides a secure and isolated area within a system or application for processing sensitive data. A secure enclave uses hardware-based security mechanisms to create an isolated, trusted execution environment. It allows sensitive data to be processed and stored securely, even in a potentially insecure computing environment.

It's also called a trusted execution environment. So in the category of obfuscation, we have steganography, where a computer file, message, image, or video is concealed within another file, message, image, or video. Attackers may hide info in this way to exfiltrate sensitive company data. Obfuscation technologies are sometimes called privacy-enhancing technologies, but they're not always used for benign ends.

Also in this category we have tokenization, where meaningful data is replaced with a token that is generated randomly and the original data is held in a vault. It's stateless, it's stronger than encryption, and the keys are not local. Then there's pseudonymization, which is a de-identification procedure in which personally identifiable information fields within a data record are replaced by one or more artificial identifiers, or pseudonyms; reversal requires access to another data source. And then we have anonymization, which is the process of removing all relevant data so it is impossible to identify the original subject or person. This is only effective if you do not need the identity data.

If you want the information about the person so you can establish trends over time and so forth, but you don't need to know the name of the person or any identifiers related to that person, you should be good. Next, we have data minimization, where only necessary data fields required to fulfill the specific purpose should be collected. In other words, we collect the minimum amount of data to meet the stated purpose and manage retention to meet regulations.

This is a good practice. Less sensitive data means less cyber risk. And then we have data masking.

which is when only partial data is left in the data field. For example, a credit card may be shown as asterisks where we only see the last four digits. This is commonly implemented in the database tier, but it's also possible in code of your front-end applications.
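
In application code, that kind of masking is just a few lines; here's a minimal sketch, with a made-up card number.

    def mask_card(number: str) -> str:
        """Replace all but the last four digits with asterisks."""
        return "*" * (len(number) - 4) + number[-4:]

    print(mask_card("4111111111111111"))   # ************1111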

But data masking is very common in the database tier, and when you get into the cloud with platform-as-a-service database offerings, oftentimes they'll have a data masking feature where they will recommend a masking strategy for you proactively based on what the service sees in the database. Next, we have hashing.

I think it helps to compare hashing to encryption to really appreciate the difference. Encryption is a two-way function: what is encrypted can be decrypted with the proper key. Hashing, on the other hand, is a one-way function that scrambles plain text to produce a unique message digest, a hash. And there's no way to reverse a hash if it's properly designed. A few common uses of hashing:

Verification of digital signatures. Generation of pseudo-random numbers. Integrity services. Data integrity and authenticity. We can use a hash for file integrity monitoring and validation of data transfer.

A file will have a known hash. If a file has been changed, that hash will be different, at which point, in a file integrity monitoring scenario, we know that something has changed.

And when we're transferring data, when we're transferring a file, we can generate a hash of the file before we transfer and another hash after and compare the two. And if they match, then the data is intact. Its integrity remains.
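
That check is easy to demonstrate with Python's standard hashlib module; the file name here is a placeholder, and you'd compare the digest computed before the transfer with the one computed after.

    import hashlib

    def file_sha256(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # file_sha256("backup.zip") before the transfer should equal file_sha256("backup.zip") after.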

So let's just add those common uses of hashing to our list we used previously for symmetric and asymmetric encryption. So we can see from the list here how these work together. So we talked about asymmetric encryption being used as a way to securely transmit that shared key from our symmetric algorithm. So asymmetric algorithms are used for digital signatures and a hash function can verify a digital signature.

So again, these technologies working together. Just to wrap up hashing, a good hash function has five requirements. So hash functions must allow input of any length. They must provide fixed length output.

So no matter the length of the input, the output will be the same size. They need to make it relatively easy to compute the hash function for any input. They provide one-way functionality, meaning when the hash is generated it cannot be reversed. And a hash function should be collision-free, and what that means is that no two inputs should ever generate the same output.

And it's number five that is precisely the reason why MD5 is limited in the scenarios where it's used, because it is at some level prone to collisions. For ease of reference as you prepare for the exam, I've put the differences between algorithm types into a table for you. So starting with the number of keys.

A hash algorithm has no keys. It's a one-way function. Symmetric cryptography is a shared key, a single shared key used by any number of parties.

And in asymmetric cryptography, every party in a discussion has their own public-private key pair, so it's two keys per party, or twice the number of parties. Next, the recommended key length. For hashing, 256 bits.

For symmetric, 128 at the lower end if we think about AES. And for asymmetric, for our public-private key pair, 2048 is the NIST recommendation, 2048 bits. Common examples, we've got SHA for hashing, symmetric algorithm AES is the gold standard, and RSA is one of the oldest asymmetric algorithms out there today.

Speed. In terms of encrypting data, symmetric is very fast for bulk encryption. Hashing needs to generate that hash value quickly. Asymmetric for bulk encryption is going to be relatively slow, but it plays that role in other areas, like helping us with secure transfer of a shared key, digital signatures, non-repudiation.

And we have complexity. So asymmetric is going to be the most complex of the lot. The effect of key compromise.

So with hashing there is no key; it's a one-way function. With symmetric encryption, if we lose our key, if our key is compromised, everybody in the equation is compromised, sender and receiver. The only key you can lose in asymmetric is your private key, and the one who loses in that case is the owner of that private key. So if you have 10 parties in a conversation and party number 10 loses their private key or it's compromised, everybody else is perfectly safe.

Except, of course, that whoever holds that private key for user number 10 can then decrypt any messages sent to that user. Key management. So key management in the symmetric scenario is challenging, because secure transfer of that key to multiple parties is our challenge.

But that's easy with asymmetric. And then I have examples of each of these algorithms. We talked through many of these.

For the hash family, you're looking at the secure hash algorithm family there. So you'll see SHA, and you'll see MD5 or MD6 called out pretty frequently. And remember, on symmetric it's really 128 bits or more for some sensitive data types. I believe I mentioned that in the military, for top secret data, it's a 256-bit key, but this is always evolving and it's eventually going to be affected by quantum computing, so this is all going to change eventually.

So hopefully this makes for a convenient reference page as you're preparing for the exam. Next we have the process of salting, which involves the use of cryptographic salts. Attackers may use rainbow tables which contain pre-computed values of cryptographic hash functions to identify commonly used passwords.

It's a table of password hashes. And a salt is random data that is used as an additional input to that one-way hash function for the password or the passphrase. So adding salt to the passwords before hashing reduces the effectiveness of rainbow table attacks because the expected output of the hash function for a common password is going to be different because a random value was added.
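
Here's a toy sketch with Python's standard library showing why that works: the same password hashed with two different random salts produces two completely different digests, so a precomputed table of unsalted hashes is useless.

    import hashlib, secrets

    password = b"Summer2024!"   # a weak, common password

    salt1 = secrets.token_bytes(16)
    salt2 = secrets.token_bytes(16)

    hash1 = hashlib.sha256(salt1 + password).hexdigest()
    hash2 = hashlib.sha256(salt2 + password).hexdigest()

    print(hash1 == hash2)   # False: same password, different salts, different hashes
    # Real password storage also adds key stretching (PBKDF2, bcrypt, etc.), covered shortly.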

So even if every user in our environment used a common password, a rainbow table wouldn't help, because the salt changes the output of that hashing process. Next we have digital signatures. So digital signatures are similar in concept to handwritten signatures on printed documents that identify individuals, but they provide more security benefits.

A digital signature is a hash of a message, encrypted with the sender's private key. So in a signed email scenario, a digital signature provides three benefits. Authentication.

It positively identifies the sender of the email. Ownership of a digital signature secret key is bound to a specific user, so that means we also get non-repudiation. The sender cannot later deny sending the message, which is sometimes required with online transactions, and the fact that the digital signature secret key is bound to that user is what gives us non-repudiation.

And integrity: this provides assurance that the message has not been modified or corrupted, so recipients know that the message was not altered in transit.
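
Here's a minimal sketch of that sign-and-verify flow with RSA in Python's cryptography package: the message is hashed and signed with the sender's private key, and anyone holding the sender's public key can verify it, which is where the authentication, non-repudiation, and integrity benefits come from.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"Please ship 100 units."

    # Sign with the sender's private key; the message is hashed with SHA-256 as part of signing.
    signature = sender_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Anyone with the sender's public key can verify; any tampering raises InvalidSignature.
    try:
        sender_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid: sender confirmed, message intact")
    except InvalidSignature:
        print("Signature invalid: message altered or sender not authentic")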

These are the basics important for the Security Plus exam. For good measure, I want to touch on the digital signature standard or DSS. So the digital signature standard uses SHA-2 and SHA-3 message digest functions, hashing algorithms, to hash the message.

It creates a fingerprint of sorts, which is good for the integrity, and it also makes it less work for the asymmetric algorithms that actually create the digital signature to do their work. And DSS works in conjunction with one of three asymmetric encryption algorithms, the digital signature algorithm or DSA, the RSA algorithm or elliptic curve DSA. So the DSS is documented in FIPS 186-4 at the URL here.

DSS may not come up on this exam, but you've got your introduction now just in case. So let's talk key stretching. So I want to start with key length.

So some cipher suites are easier to crack than others. Larger keys tend to be more secure because there are more possible key combinations. Key stretching refers to processes used to take a key that may be weak and make it stronger by making it longer and more random. A longer key has more combinations a brute-force attack has to go through to crack.
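
PBKDF2 is one common key-stretching function, and Python's standard library has it built in; here's a minimal sketch where a short password is stretched into a 256-bit key, with the iteration count (an arbitrary but realistic number here) doing the work of slowing an attacker down.

    import hashlib, secrets

    password = b"correct horse"          # weak input material
    salt = secrets.token_bytes(16)

    # Hundreds of thousands of HMAC-SHA-256 iterations turn a quick guess into real work for an attacker.
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
    print(key.hex())   # a 256-bit derived key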

Since 2015, NIST recommends a minimum of a 2048-bit key for RSA, for example. That will change over time as computing power advances. Quantum computing will eventually impact this recommendation.

Next we have blockchain, which was originally the technology that powered Bitcoin, but it has broader uses today. So it's a distributed public ledger that can be used to store financial, medical, or other transactions. Anyone is free to join and participate. It does not use intermediaries such as banks and financial institutions. Data is chained together with a block of data holding both the hash for that block and the hash of the preceding block.
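
Here's a toy sketch of that chaining idea in Python: each block stores the previous block's hash, so altering any earlier block changes its hash and breaks every link after it. Real blockchains add the consensus and proof-of-work pieces described next.

    import hashlib, json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    genesis = {"data": "first transaction", "prev_hash": "0" * 64}
    block2 = {"data": "second transaction", "prev_hash": block_hash(genesis)}
    block3 = {"data": "third transaction", "prev_hash": block_hash(block2)}

    # Tamper with the first block and the chain no longer lines up.
    genesis["data"] = "forged transaction"
    print(block2["prev_hash"] == block_hash(genesis))   # False: the tampering is evident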

To create a new block on the chain, the computer that wishes to add the block solves a cryptographic puzzle and sends that solution to the other computers participating in that blockchain as proof. This is known as proof of work. Next we have the open public ledger, and I think the easiest way to understand an open public ledger is to compare its characteristics to the blockchain.

So the first is decentralization. Blockchain is decentralized.

It is distributed across a peer-to-peer network with no central authority. An open public ledger, on the other hand, can be centralized and maintained by a single entity. Immutability. So blockchain data is immutable and cryptographically secured. Once data is added to the blockchain, it is extremely difficult to alter.

Whereas data on a public ledger can be changed more easily. And there's the matter of validation. Blockchain uses consensus mechanisms like proof of work or proof of stake to validate new data added to the chain.

Public ledgers rely on the integrity of the central authority. And finally, transparency. Blockchain transactions can be pseudonymous for privacy.

That is, written under a false name, or pseudonym. Public ledger transactions, on the other hand, are typically fully transparent. Since section 1.4 is focused on appropriate cryptographic solutions, I have a bit of a bonus section at the end here for you, and I'm going to take you through some common use cases and limitations that will give you context for applying these technologies on the exam.

So let's talk through a few common scenarios for specific cryptographic choices. Low power devices, for example. These devices often use ECC, elliptic curve cryptography, for encryption as it uses a small key.

IoT devices and the little Pi-type devices often don't have the processing power for conventional encryption, so ECC fits the bill for that use case. Low latency. This means encryption and decryption should not take a long time. Specialized encryption hardware is a common answer in this scenario. A VPN concentrator or encryption accelerator cards can improve efficiency.

High resiliency. Using the most secure encryption algorithm practical to prevent the encryption key from being cracked by attackers. Device, application, or service compatibility may influence your decisions here.

One scenario that comes to mind was the key length on our certificates, on our public-private key pairs, with certain legacy network devices. These legacy devices would only support a 1024-bit key length. We couldn't go to 2048, which is the bit length recommended by NIST, because we had to accommodate those legacy devices. And supporting confidentiality: encryption should be implemented for exchange of any sensitive data, in a way that ensures only authorized parties can view it.

For example, connecting remote offices via an IPSec VPN. So traffic between the offices is always encrypted. Supporting integrity.

So two scenarios that come to mind. Ensuring file data has not been tampered with and communications are not altered in transit. We can use a file hash to check file integrity, a digital signature for an email. Then we have obfuscation.

So obfuscation is commonly used in source code or with data to ensure it can't be read by anyone who steals it. There, steganography, tokenization, and data masking can all be used to obscure data. Supporting authentication. We know a single-factor username and password are not considered secure, as theft of the password leads to compromise.

That's where MFA, multi-factor authentication for user authentication, certificate-based authentication for devices, gives us a stronger solution. Supporting non-repudiation. So when you digitally sign an email with your private key, you cannot deny it was you as there is only one private key. It's tied to you. Non-repudiation is important in any legally binding transaction.

We need to ensure neither party can deny having consented to the transaction. And to touch briefly on limitations, so speed, for example. Application and hardware have to be able to keep pace with the selected encryption, which is why we talked about ECC being such a great fit for IoT scenarios due to its smaller key size.

Size. If we're encrypting 16 bytes of data with a block cipher, the encrypted information is also 16 bytes. That overhead has to be considered in resource planning. We need enough memory, storage, and network to support the result.

Weak keys. We know larger keys are generally stronger and thus more difficult to break. We need to find the balance between security, compatibility with our devices, and capacity.

So in that network hardware scenario I mentioned, NIST recommends the 2048-bit key in our certificate scenario, but we had legacy network devices that only supported 1024. So we might have to make a decision there between using a weaker key and replacing that legacy hardware. Time. Encryption and hashing take time.

Larger amounts of data and asymmetric encryption take more time than small data and symmetric encryption. So your selections need to match the time constraints of your transactions. And longevity.

So consider how long the encryption algorithms you select can be used. Older algorithms will generally be retired sooner, and if you select a smaller key size, you'll be impacted sooner by growing compute power and, eventually, quantum computing. Predictability. So cryptography relies on randomization. Random number generation that can't be easily predicted is crucial for any type of cryptography.

Reuse. We know using the same key is commonly seen in a number of encryption mechanisms, and if an attacker gains access to the key, they can decrypt data encrypted with it. And while changing those keys out frequently is great, some IoT devices may not allow for a key change, which might lead us to use a stronger key than we would otherwise. Entropy is a measure of randomness or diversity of a data-generating function.

So data with full entropy is completely random with no meaningful patterns. Cryptography relies on that randomness. And always consider your resource versus security constraints. The more secure the encryption used, the higher the key length, the more processing power and memory your server or other device will need.

It just requires a balance between algorithms and hardware selections. And that's a wrap on Domain 1 of the Security Plus Exam Cram Series 2024 Edition. I hope you're getting value from the series.

If you have any questions, be sure to ping me in the comments below the video or directly on LinkedIn, and I'll join you back here soon for Domain 2. And until next time, take care and stay safe.