Transcript for:
CISSP Training Outline

As we begin this CISSP training, one of the first things we have to realize, from both a practical standpoint and an exam standpoint, is that (ISC)² will expect us to know and adhere to a Code of Professional Ethics. All information security professionals who are certified by (ISC)² recognize that such certification is a privilege that must be both earned and maintained. In support of this principle, all (ISC)² members are required to commit to fully support the Code of Ethics, otherwise known as the Code. (ISC)² members who intentionally or knowingly violate any provision of the Code will be subject to action by a peer review panel, which may result in the revocation of certification. (ISC)² members are obligated to follow the ethics complaint procedure upon observing any action by an (ISC)² member that breaches the Code; failure to do so may itself be considered a breach of the Code pursuant to Canon IV.

It would be helpful on the exam to memorize the Code of Ethics preamble: the safety and welfare of society and the common good, and duty to our principals and to each other, require that we adhere, and be seen to adhere, to the highest ethical standards of behavior; therefore, strict adherence to this Code is a condition of certification. Also be able to answer exam questions regarding the Code of Ethics canons. There are four of them. Canon I: protect society, the common good, necessary public trust and confidence, and the infrastructure. Canon II: act honorably, honestly, justly, responsibly, and legally. Canon III: provide diligent and competent service to principals. Canon IV: advance and protect the profession.

In this lesson we'll explore the differences between the two CISSP exam formats. First we have the CAT exam. CAT is computerized adaptive testing, and CISSP uses CAT for all English exams. The length of the English (CAT) exam is three hours. The computerized adaptive testing exam is made up of multiple-choice and advanced innovative item question types, for example drag-and-drop. The passing grade is 700 out of 1,000 points. The CAT exam is available in English, and you can take it at authorized Pearson Professional Centers (PPCs) and select Pearson VUE Authorized Test Centers (PVTC Selects).

The CISSP linear exam is administered as a linear, fixed-form exam. It is available in French, German, Brazilian Portuguese, Spanish (Modern), Japanese, Simplified Chinese, and Korean. It is also made up of multiple-choice and advanced innovative item question types such as drag-and-drop. The length of the linear exam is six hours, as opposed to three hours for the computerized adaptive exam. The passing grade is the same, 700 out of 1,000 points, and you can take the CISSP linear exam at authorized PPC and PVTC Select Pearson VUE testing centers.

Let's look at the domains and the examination weightings of the CISSP 2021 exam. They're actually very similar to the CISSP 2018 exam: the domains are the same, and the weightings have only been slightly changed. Domain 1 is Security and Risk Management, making up the largest portion of the exam at 15 percent. Domain 2 is Asset Security, making up 10 percent of the exam; Domain 2 and Domain 8, Software Development Security, are the two smallest domains of the exam. Domain 3 is Security Architecture and Engineering, 13 percent. Domain 4 is Communication and Network Security, also 13 percent. Domain 5 is Identity and Access Management (IAM), 13 percent. Domain 6, Security Assessment and Testing, is 12 percent, and Domain 7, Security Operations, is 13 percent.
On CISSP 2018, Software Development Security was 10 percent; in the new 2021 version it has been raised to 11 percent.

In Course 1, (ISC)² and the CISSP Exam, you learned about the (ISC)² Code of Professional Ethics, the four Code of Ethics canons, the computerized and linear exam information, and the examination weightings of the eight domains of the CISSP Common Body of Knowledge. In the next course you'll look at key core security goals and principles, the OSI reference model, and the TCP/IP reference model.

We have a number of goals in security, and to meet those goals we're going to introduce controls to raise the level of difficulty, or create resistance, for threat actors and threat agents. One of the primary goals is confidentiality. Confidentiality measures the attacker's ability to get unauthorized data or access to information from an application or a system. Notice we're talking about unauthorized data or access: somebody could be authentic and authorized and still be unable to see the data or use the application or system because of confidentiality mechanisms; in other words, they don't possess the right credentials or key. Confidentiality involves using techniques, most often cryptographic mechanisms, to allow only approved users the ability to view sensitive information. Of course it goes beyond just viewing the information; it could be writing the data or using the application or the system. And notice that we often use cryptography, but not always: we can also use tools like compartmentalization, encapsulation, and other methods to provide confidentiality.

The confidential information could include the passwords or passphrases themselves; cryptographic keys, like session keys (for instance, we might use an asymmetric-key cryptosystem to protect symmetric session keys); PII, personally identifiable information, such as a national ID number, Social Security number, passport information, and other background information; personal health information; data that should remain private for compliance with regulations and governance; and the intellectual property of a company or organization, such as formulas, marketing campaigns, corporate secrets, and other information. If the information carries classification and sensitivity levels, it could be Secret or Top Secret information as well.

As a candidate for the CISSP, we need to go beyond basic confidentiality mechanisms and consider some of the high-level confidentiality controls: for example, using hybrid encryption involving combinations of symmetric and asymmetric cryptosystems, as in S/MIME, where we use algorithms for confidentiality along with digital signatures for origin authentication and non-repudiation. We can employ advanced post-quantum encryption as an ongoing countermeasure against quantum cryptanalysis, and homomorphic cryptosystems to protect data in use, for example information in a Redis in-memory storage cluster. High-level confidentiality can also combine secure compartmentalization with the most recent modes of encryption available, for example on your upgraded iPhone or mobile device.
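To make the hybrid-encryption idea concrete, here's a minimal sketch in Python, assuming the third-party cryptography package is installed; the key size, payload, and names are illustrative, not a prescription for S/MIME itself.

```python
# A minimal sketch of hybrid encryption: a random symmetric session key
# protects the bulk data, and an asymmetric (RSA) key protects the session key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Recipient's asymmetric key pair (normally generated once, distributed via PKI).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Symmetric session key encrypts the actual message (fast, suited to bulk data).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"sensitive payload")

# Asymmetric encryption protects only the small session key (slow, key-sized).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver unwraps the session key with the private key, then decrypts the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"sensitive payload"
```

Notice the division of labor: the symmetric cipher does the heavy lifting on the bulk data, while the asymmetric operation touches only the small session key.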
Confidentiality represents the C of the CIA triad. Coming up next, we'll look at the I in the CIA triad, and that's integrity.

Integrity represents the I of the CIA triad. At this point in our security careers, we've moved well beyond the basic checksum as an integrity mechanism, for example the frame check sequence in the trailer of an Ethernet frame, or the checksum that's performed when you move or copy a file from one volume to another. Integrity security measures an attacker's ability to manipulate, modify, change, or remove data at rest and/or data in transit, realizing that the data at rest could be the configuration of applications or systems. Integrity involves implementing the controls that make certain only authorized subjects can change sensitive information. Realize that integrity might also include affirming the identity of a communication peer, in other words origin authentication. In fact, the A of the CIA triad doesn't stand for authenticity or authentication; therefore, origin authentication is often combined with the integrity mechanism in the form of a hashed message authentication code, for example an SHA-1 or SHA-2 HMAC. Other examples of a breach of integrity would be injection or hijacking attacks on data in transit, modifying files, changing access control lists, corrupting route tables on routers, and DNS or ARP cache poisoning.

Let's talk about high-level integrity, in other words going beyond just protection from unauthorized modification or changes. Later on in this training we'll talk about mandatory access control, and within that MAC architecture there's the Clark-Wilson model. Clark-Wilson has advanced goals for integrity. Obviously we want to prevent unauthorized users from making modifications; however, Clark-Wilson goes further, to ensure separation of duties, preventing even an authorized user from making improper modifications. Clark-Wilson ensures well-formed transactions and maintains internal and external consistency. As an experienced security practitioner, it's important to understand mechanisms and techniques for delivering high levels of integrity at the enterprise level, not just for the small to medium-sized business but for large enterprises as well, both on premises and in the cloud.
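Going back to the HMAC mentioned above, here's a minimal sketch using only Python's standard library; the shared key and message are illustrative. The shared secret provides origin authentication, and the digest detects any modification in transit.

```python
# A minimal sketch of a hashed message authentication code (HMAC):
# integrity plus origin authentication from a shared secret key.
import hashlib
import hmac

shared_key = b"pre-shared-secret"   # known only to the two communicating peers
message = b"route-table-update: 10.0.0.0/8 via 192.0.2.1"

# Sender computes the tag and transmits message + tag.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag over what arrived; compare_digest avoids
# timing side channels. Any tampering with the message changes the tag.
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```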
Availability is the A of the CIA triad, and it measures an attacker's ability to disrupt or prevent access to services or data. Availability controls protect systems and services from spoofing attacks, floods, denial of service or distributed denial of service, poisoning, and other attacks that negatively affect the ability to deliver data, content, or services. Vulnerabilities that impact availability can affect hardware, software, firmware, and network resources. Examples would be flooding network bandwidth, consuming large amounts of memory on a server (for example a front-end web server or a relational database management system), stealing CPU cycles, or causing unnecessary power consumption, to name a few.

As a CISSP candidate, we want to think about high-level availability, for example understanding how availability zones work at cloud service providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure, the big three according to Gartner. Each availability zone will have at least one data center, and the availability zones are tens of miles apart. Lessons we've learned from the past are to make sure that our high-availability or disaster recovery solution involves sites that are not across the street from each other, but rather many miles apart, often on a fiber network. For example, in the Amazon Web Services Northern Virginia region there are six availability zones, all of them tens of miles apart, connected with high-speed fiber, in other words metropolitan area networks. Cloud service providers deliver high availability and durability by placing data centers in regions all over the world, and an organization can replicate its data or services in a multi-regional model. This is also done to distribute content, through content delivery or content distribution networking services, into metropolitan areas all over the world.

You can also describe the goals of the CIA security triad by looking at the opposite: DAD. D, disclosure, is the unauthorized revealing of data and information, the opposite of confidentiality. A, alteration, is the unauthorized change or modification of data or systems, the opposite of integrity. And D, destruction, involves rendering an entity inaccessible; it can also add the element of lack of durability in some scenarios. Destruction, or disposition, is the opposite of availability.

In this lesson I want to expand on the traditional CIA triad and take a look at Donn Parker's Parkerian hexad. Parker expanded on the CIA by adding three additional elements; let's take a look at those. First, Parker added authenticity. It's not uncommon for security practitioners to assume the A of CIA is for authenticity; however, A is for availability, as we learned earlier in this course. Parker added authenticity and made it part of his hexad. Authenticity refers to the accuracy and identity of the origin of the entity, or of the information or data. Authenticity is often tied to identity, and in modern information systems we typically have more than one factor of authenticity, in other words dual-factor or multi-factor authentication.

The next addition was utility, or usefulness. While an asset such as data may be confidential, controlled, integral, authentic, and available, it is not always useful or valuable in its form. Assets can lose value over time, and as their value or utility diminishes, the need to introduce controls for confidentiality, authenticity, and availability becomes less valid.

The third addition was possession. Possession is also referred to as control. An attacker may take possession or control of a physical or logical asset, but that may not be a breach of confidentiality. For example, if an attacker steals a drive but the data on the drive has been encrypted, the attacker may have possession, but they haven't breached confidentiality. If an attacker steals a stand-alone safe, they may have the safe, but they may not have possession of the contents if they're unable to crack that safe. We often deliver authenticity with technical controls, utility with administrative controls, and possession with physical controls.

Let's take authenticity, an aspect of the Parkerian hexad, to another level. Remember, origin authentication is a basic form of authentication; it only provides a degree of confidence that the correct password, passphrase, or private or secret key was used. Additional levels of authentication rely on trusted third parties and certificates, digital signatures, and multiple factors like biometrics. A newer trend in high-level authenticity is KBA, knowledge-based authentication. This is often accomplished by asking an entity a series of questions, often multiple-choice questions with five possibilities, with the information drawn from the taxpayer's personal or financial history. For example: what color is your 2020 Honda Accord? What county or province is a particular address in? Has the party ever been associated with a particular business or a particular individual? If the subject or entity answers the knowledge-based questions correctly, they're authenticated and further identified. It's important on the CISSP exam to be aware of the next waves of high-level authenticity and identification.
In this brief lesson we're going to look at the concept of non-repudiation. There are actually five pillars of information assurance; we've already talked about the other four: availability, integrity, confidentiality, and authentication. Non-repudiation is the fifth, and it's defined as the inability to refuse participation in a digital transaction, contract, or communication, for example email when S/MIME is being used. Here, "refuse participation" means to deny participation. When we want a guarantee that a message transmission, a transaction, or a contract between parties on the internet took place, we often accomplish this with cryptosystems where a public and private key pair is used. In other words, non-repudiation is a level of assurance that the owner of a signature key pair, whose key was actually used to generate an existing signature corresponding to some data, cannot convincingly deny that the data was signed. Therefore, it is the responsibility of the owner or creator of the private key to protect that key. You may attempt to repudiate, reject, or deny that some transaction was valid or true; however, if your private key was used, you are responsible. Therefore, the owner or creator of a private key must notify the trusted third party, or certificate authority, when the key is lost, stolen, or compromised. The key could be on a smart card, a laptop, a personal device, a USB key or fob, or some other appliance. For the sake of the exam, remember that non-repudiation in the context of information assurance (IA) is typically accomplished with digital signatures and digitally signed certificates.
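Here's a minimal sketch of that signature-based assurance, again assuming the third-party cryptography package; in practice the public key would arrive wrapped in a certificate issued by a trusted CA rather than being taken straight from the key pair.

```python
# A minimal sketch of non-repudiation via a digital signature: only the holder
# of the private key can produce the signature, while anyone holding the public
# key can verify it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
contract = b"I agree to the terms of this transaction."

# Signing uses the private key, which the owner is responsible for protecting.
signature = private_key.sign(contract, padding.PKCS1v15(), hashes.SHA256())

# Verification uses only the public key (typically delivered in a certificate).
try:
    private_key.public_key().verify(signature, contract,
                                    padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: the signer cannot plausibly deny signing")
except InvalidSignature:
    print("signature invalid")
```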
If you look at the CISSP Common Body of Knowledge and its objectives, you'll notice that they want you to know about the ISO OSI reference model and the TCP/IP model. In this lesson we're going to look at the seven-layer OSI model, something you're probably already familiar with. Obviously, memorizing these layers from the physical layer (Layer 1) up to Layer 7 would be valuable, perhaps using a mnemonic, but from an exam standpoint the important thing to remember is how security relates to these different layers. If we go back in time to 1974, to the original protocol, all we had was the Transmission Control Program: there were no layers, just one. Thanks to Jon Postel, we soon divided TCP into TCP/IP, and of course we have TCP and UDP at the transport layer (Layer 4) and IP at the network layer (Layer 3). The big takeaway here is that if we look at these different layers and their descriptions, one thing we realize is that security is not natively built in, especially in the functionality of Layers 1 through 4. We have to use additional protocols or other extensible mechanisms to provide security, specifically at Layers 2 through 4. As time passed, developers and programmers did give us security protocols and mechanisms.

A quick review. Layer 1 is the physical layer, which specifies our connectors, our data rates, the way we encode bits (how we represent zeros and ones), and our physical media: Ethernet, wireless, fiber. Layer 2 is the link layer: communication across a single link, including media access control. The link layer has two sublayers; the top sublayer interfaces with the network layer (Layer 3), and the bottom sublayer interfaces with the physical layer (Layer 1). Layer 3 is the network or internetwork layer, facilitating multi-hop communications across potentially different link networks, in other words forwarding IP packets, IPv4 and/or IPv6. Layer 4 is the transport layer, connecting multiple programs on the same system, of course using port numbers. Layer 5 is the session layer, and this is where developers began to introduce security protocols, for example protocols that could be used with application layer protocols like HTTP; the session layer accommodates multiple session connections. Layer 6, the presentation layer, is there to help us express and translate data formats; we can also see some encryption at the presentation layer. And Layer 7 is the application layer, there to accomplish a networked user task. Remember, this is not all applications, but only applications and services that use the TCP/IP stack.

In this table we can find more information about the protocols and services used at the different layers. At Layer 2, the link layer, we have protocols like PPP and its predecessor SLIP. Like the link layer itself, PPP has two sub-protocols: one to interface with Layer 3 and one to interface with Layer 1. We also have Ethernet, Frame Relay, and ATM at the link layer, as well as various device drivers. At Layer 3, the network layer, we have protocols like IP version 4 and version 6 and Novell's IPX. We also have other protocols that assist IP, like ICMP with its event and informational messages, and the Address Resolution Protocol (ARP), which resolves logical IP addresses to the MAC address of the link layer. We also have routing protocols at Layer 3, like RIP version 2, OSPF, EIGRP, and BGP, the Border Gateway Protocol. The transport layer has protocols like TCP and UDP, with their over 65,000 port numbers, Novell's SPX, and Apple's AppleTalk. At the session layer we see SSL/TLS (today we almost always use TLS), SQL with its structured query language, RPC, and NFS. At the presentation layer you may have encryption, but for the most part it's there for translation and other representations, using mechanisms like ASCII, PNG for graphics, MPEG and AVI for movies, and MIDI, commonly used in music and film. And at the application layer we have HTTP, FTP, SMTP, DNS, and Telnet. Notice that none of those protocols has any native security, so typically we combine Secure Shell or SSL/TLS with them. For example, if the protocol is SFTP, that's Secure Shell version 1 or 2 with FTP; if it's FTPS, it's FTP over SSL, or today FTP with Transport Layer Security; and of course HTTPS is HTTP over SSL/TLS. Later in this training, when we talk about network security, we'll revisit these protocols and services. In the next lesson we'll look at the four-layer TCP/IP model, which is actually the model we use most often today.
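As a quick illustration of bolting TLS onto an otherwise plaintext application protocol, here's a minimal sketch using Python's standard library; example.org is simply a stand-in for any HTTPS-capable host.

```python
# A minimal sketch of layering TLS under a plaintext application protocol:
# the same HTTP request, but carried inside a TLS session (i.e., HTTPS).
import socket
import ssl

context = ssl.create_default_context()   # validates the server certificate
with socket.create_connection(("example.org", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls:
        print("negotiated:", tls.version())   # e.g. 'TLSv1.3'
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```

The application protocol (HTTP) is unchanged; the security is supplied by the layer wrapped underneath it.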
If we think about reference models, it's actually quite rare to use the ISO OSI model today, even though we still use its terminology, like "Layer 2" for Layer 2 devices or access switches, or "a Layer 3 process," terms like that. Thanks largely to Microsoft, however, we now commonly use a TCP/IP reference model that has four layers. Today we combine the session, presentation, and application layers of OSI (Layers 5, 6, and 7) into one application layer. In other words, most of the activities in modern applications that would traditionally be Layers 5, 6, and 7 are combined and performed by the application itself, whether traditional programming, microservices, or containers. In the four-layer TCP/IP model, the physical layer (Layer 1) and the link layer are combined into a single network access layer: the physical medium, the network interface cards, and the device drivers. Layer 3 is the internet or internetwork layer; that still involves routed protocols like IP and AppleTalk, as well as routing protocols like OSPF and BGP. In IPv4 we still use ARP, but we don't use ARP in IP version 6; as a matter of fact, IPv6 relies heavily on ICMP version 6 and its additional messages to perform the activities that ARP performed for version 4. The transport layer is pretty much the same; however, you'll see it referred to as the host-to-host layer, because one host, using port numbers, connects logically to another host using port numbers, specifically with TCP and UDP. And of course, in the four-layer model we have Layers 5, 6, and 7 of the OSI model combined into the top application layer. Although this is the model most commonly used today, on the exam be aware of both the four-layer TCP/IP model and the seven-layer ISO OSI model.

Now, before we leave this discussion of reference models, the ISO OSI and the four-layer TCP/IP model, you may have noticed the absence of a very important extensible protocol, one that operates at Layer 3 of the OSI model and at the internet or internetwork layer of the TCP/IP model: IP Security, IPsec. It goes without saying that it's one of the most important protocols we use in networking today, with SSL/TLS perhaps a close second or, in some cases, equally important. We'll have dedicated discussions of IPsec for IPv4 and IPv6 in later courses.

In this course, Fundamental Concepts and Principles, you learned about the CIA triad and other fundamental concepts like the Parkerian hexad and non-repudiation. You explored the OSI reference model and how it relates to security, as well as the TCP/IP reference model. In the next course you'll explore secure design principles.

The first concept we're going to review in this course is least privilege, probably one of the most important concepts to implement as a control in your environment, especially as a countermeasure to privileged insiders performing unauthorized actions. It's an aspect of authentication, authorization, and accounting (AAA) and identity and access management (IAM) where the subject has just the proper level or amount of permissions and rights to perform the job role or responsibility, and nothing more. These permissions can be granted directly to the subject or as part of membership in a group or a container. Least privilege should be built into all access control architectures. As a matter of fact, some architectures, like mandatory access control, take the concept of least privilege quite seriously, whereas in a discretionary access control or role-based environment the implementation may be a little looser, with more responsibility given to the creator or owner of the object. Regardless, the principle should be at play at all times. Any deviation, escalation, or elevation from least privilege, if allowed, should go through an established change control process, an IT service or possibly service desk implementation, with a change control model for standard, normal, and emergency changes. Least privilege is also referred to as need-to-know: staying within one's pay grade or classification level, especially in architectures that have sensitivity levels, lattices, and different classifications like Top Secret and Confidential. One of the documents you'll want to add to your knowledge base for the CISSP exam is NIST Special Publication 800-53. NIST gives some examples of implementing least privilege in this document.
They encourage us to authorize access to all security functions, in other words auditing the auditors in many regards, or getting visibility into administrative security actions, as a top priority, and to use non-privileged accounts or roles when accessing non-security or low-level functions. In an Active Directory environment, many administrators have normal user accounts which they use on a regular basis, and then, when necessary, they will Run as Administrator; in a Linux environment they use sudo to change context to a higher privilege level or the root user. We also want to prevent non-privileged users from executing any privileged functions; that should be built into the architecture, and it should also be audited on a regular basis. For example, there are tools at Amazon Web Services that can perform machine-learning analysis of the way identity and access management is deployed using JSON documents; visibility is key. Finally, audit the execution of any security functions.

There's also ISO/IEC 27001, which has its own examples of least privilege implementation. 27001 encourages us to get visibility into access to all networks and network services; this can involve a combination of firewall services, intrusion detection and intrusion prevention services, as well as endpoint protection and other logging and monitoring tools. They emphasize the management of privileged access rights; enterprise solutions like Cisco Identity Services Engine can help in this regard. Monitor and control the use of privileged utility programs: only select, high-level users in your organization should be using vulnerability assessment tools, vulnerability scanners, and other penetration testing techniques. You want to control the installation of Type 2 hypervisors, where unauthorized programs and kits can be run. You also want to audit the usage of elevated or escalated privileges, like local administrator or root-level privileges. And it's vital that you manage access control to any and all program source code, whether traditional applications, containers, or microservices; securing sites like GitHub and other code repositories is paramount.
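To make least privilege concrete, here's a minimal sketch of an AWS IAM-style policy document expressed in Python; the bucket name and prefix are hypothetical. The subject can read one prefix of one bucket, and nothing else.

```python
# A minimal sketch of least privilege as an IAM-style policy document:
# read-only access to a single prefix of a single bucket, nothing more.
import json

read_only_reports_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],   # read only; no write, delete, or list
        "Resource": "arn:aws:s3:::example-bucket/reports/*",
    }],
}
print(json.dumps(read_only_reports_policy, indent=2))
```

Anything not explicitly allowed is implicitly denied, which is exactly the posture least privilege asks for.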
In this lesson we're going to look at the very important concept of defense in depth, or DiD. Defense in depth is also referred to as layered defense. In an upcoming course we'll talk about due diligence and due care, but at this point, realize that using the least privilege principle and defense in depth is a function of due care and ongoing continual improvement. DiD should be systematically planned and designed, with an outward-in approach or an inward-out approach. Defense in depth can be applied physically, for example starting at the enterprise facility property line or the entryway to your building (it could be your floor in a high-rise building), starting at the edge and working your way back to the keep, where the most valuable resources are: perhaps the data center, or a safe, or a locked closet in the CEO's office. This can also be done logically, with defense in depth through secure routers, firewalls, intrusion prevention system sensors, and Layer 2 and multilayer switch security. You can take an inward-out approach as well; the most important thing is to be systematic about it. Defense in depth is also a common element of supply chain risk management (SCRM).

Layered security is accomplished by end-to-end security using several components, both physical and logical. It could be a single appliance with multiple integrated engines operating on packets in an order of processing; components can be physical, cloud-based, or virtual and logical. Layered security applies to networks, applications, and the physical facility as well. The trend of de-perimeterization has increased the need for organizations to protect data and systems with defense in depth. By implementing a combination of cryptographic schemes, using more secure protocols, using hardened systems, and deploying next-generation access control and endpoint protection services, defense in depth no longer totally depends on the DMZ or public access zone, the network boundary to the internet and the various ISPs and ITSPs, basically removing the perimeter or outer security boundary. We also see defense in depth being accomplished using managed security service providers, such as Fortinet with FortiGate, Palo Alto Networks, or Cisco, as well as cloud access security brokers with software-as-a-service providers.

Let's get a visualization of defense in depth. In this example we start at the edge with perimeter security, then we look at network security, endpoint security, application security, and data security; at the center are our mission-critical assets. We also want monitoring and response operations, and prevention using policy management.

Here we see defense in depth at the perimeter: perimeter firewalls, intrusion detection and intrusion prevention systems, secured DMZs and public access zones, and secure messaging, for example using email security appliances or cloud-based email security for our message transfer agents. We'll deploy honeypots and honeynets, and we'll also implement data loss prevention engines such as the RSA DLP, and machine learning with DHS Einstein.

Network security can be done with network admission control (NAC), inline patching, IDS and IPS for the enterprise, protection of Voice over IP packets, firewalls at the enclave and data center, web proxy content filtering and web application firewalls, enterprise message security (for example Cisco's ESA), enterprise wireless security, enterprise remote access, and, again, data loss prevention, or prevention of data leakage.

Endpoint security goes beyond just host-based IDS and IPS. We have desktop firewalls and security suites, and endpoint security enforcement, for example Palo Alto Networks Traps or Cisco Advanced Malware Protection for Endpoints. There's also content security, with security suites from vendors like Malwarebytes and Sophos offering antivirus and antimalware solutions. We also have compliance on endpoints, for example FDCC or HIPAA compliance. Several IT services come into play with endpoint security as well, for example patch management and, of course, data loss prevention.

Defense in depth also applies to application security: static application testing, code review, database security gateways, database monitoring and scanning, database activity monitoring (DAM), dynamic application testing and fuzz testing, and web application firewalls to countermeasure cross-site scripting, SQL injection, cross-site request forgery, and other attacks.

There are many solutions for data security. Of course we have data loss prevention and data classification, but we can also introduce identity and access management, the next generation of AAA services. There's enterprise rights management, for example through Cisco Identity Services Engine, data integrity monitoring, encryption, public key infrastructure, and data cleansing.

Our next aspect of defense in depth is prevention through policy management: continuous improvement, risk management, vulnerability assessment and pen testing, threat modeling, cyber threat intelligence through the cloud, security awareness training, secure architecture and design, and the subset of governance that is IT security governance. For continual operations we'll perform monitoring and response: monitoring our SOC (security operations center) and NOC (network operations center) using SIEM (security information and event management) systems, incident reporting, detection, and response through the cyber incident response team, digital forensics, escalation management, security service level agreements, SLA and SLO reporting using security dashboards for visibility, and continuous monitoring, assessment, and situational awareness for mission-critical assets. That, in a nutshell, is defense in depth.
Our next key security principle is separation of duties, SoD, also referred to as segregation of duties. This is an important principle because it's often tied directly to compliance and regulations, so SoD violations could be a serious problem for your organization if you're in violation of industry regulations, or if you have government contracts or deal with other types of entities. This is a principle where more than one entity is necessary or required to complete a particular task, such as having a separate backup operators group and a data restoration group. You also see separation of duties in Agile, Spiral, and CI/CD development, where you're dealing with containers and microservices and certain individuals work on certain components or modules; it may be separating people into developers, then testers, then the production team. It can be applied in a lot of different ways.

For the CISSP exam, however, it's more than just being able to define separation of duties; it's having a management approach. All of these principles demand an IT management approach and an understanding of their application. We'll talk in upcoming courses about the controls we'll use to implement least privilege, for example, and defense in depth, but here you want to think about SoD and recognize that it may involve other principles, like the dual-operator principle, where two or more subjects are needed to modify something, make a change, or provide approval: for example, two signatures, two different cryptographic keys, or two tokens are necessary for certain activities and actions. Rotation of duties is a related principle, where different individuals rotate into a job role or responsibility after a certain period of time, sometimes a random amount of time. The goal here is to reduce any single point of failure, illegal activities, data theft and exfiltration, and misuse of systems or applications, and it can involve, for example, mandatory time off or forced vacations for certain employees. The goal is to reduce, if not eliminate, your SoD violations.

From a management standpoint, we're going to have automated tools for the management of SoD, or segregation of duties. In this particular example of automating the process, our information systems (IS) security team, working in the security operations center, is involved. I want to break this diagram into three parts as we talk about it. Notice in the first part, where we're doing our application development and our application security model, that the output goes to the IS security team and the application control owners; then the outputs of that team, and the corrective actions they take, lead to making the corrections: application administration, correcting user access, correcting role access, and so on, and then output to a dashboard where we'll do after-action reporting and lessons learned and improve the process.
Regardless of the model you're using for application development, whether Spiral, Agile, or CI/CD, you're going to build security into that model, so it's DevSecOps. On the IS security team are the auditors and the compliance managers, and you'll have application access rules as part of your IAM; the rules manager may manage the application test environment. There we'll use tools for extract, transform, and load (ETL), basically a data integration process that combines data from different sources, in this case application developers working on containers or microservices, into a single, consistent data store, and we'll take an application security snapshot of that. From the application test environment, our team enforces separation of duties in an automated way: we have access analysis, where we detect policy violations and SoD violations, and a violations manager who analyzes those violations, often doing a gap analysis against regulations or compliance requirements. So we've got our IS security team, but we may also have what we call application control owners involved in this process. Yes, there are going to be exceptions; any acceptance of these will go through a change control process, maybe through a change control board, because there may be situations where you want to deviate from the SoD policy: it's not going to cause a compliance or regulatory issue, and it's actually going to improve the process. And of course we're going to take corrective actions, and those corrective actions will be part of a workflow with approvals and iterative processes. The output of the corrective actions goes to the application administrator, who corrects the user access and the role access. The roles can be part of your role-based access control methodology; it may be a role or permissions assignment at a cloud provider, for example, where you're assigning permissions to different entities. The application administrator then outputs the correction to a dashboard, where that individual, or other individuals on the IS security team, will get visualization and ongoing monitoring and improvement. The dashboard should produce reports that can be presented to other team members or escalated to the C-suite or C-team, and again the goal is continual improvement and a more mature security posture for your organization. So as we go through these different principles, remember: for the CISSP exam it's not simply defining or understanding the principle, it's understanding how we manage the implementation, deployment, integration, and improvement of these security principles.
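Here's a minimal sketch of the dual-operator principle described above; the account names and the sensitive action are illustrative.

```python
# A minimal sketch of dual control: a sensitive action requires approval from
# two *distinct* subjects before it executes.
def execute_restore(approvals: set[str], required: int = 2) -> None:
    # Using a set guarantees the approvers are distinct identities.
    if len(approvals) < required:
        raise PermissionError(
            f"need {required} distinct approvers, have {len(approvals)}")
    print("restore job released to the data restoration group")

execute_restore({"backup.operator", "restore.manager"})   # OK: two parties
try:
    execute_restore({"backup.operator"})                  # blocked: one party
except PermissionError as err:
    print("SoD violation blocked:", err)
```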
In this very short video we're going to cover a concept that really needs to permeate all of the other concepts and technologies we're talking about in this course. As security practitioners, we must always find the delicate balance between security, protecting data, applications, and systems, and still maintaining user productivity: the synergy between security, compliance, and our organization's value proposition, charter, or mission. Overcomplexity can often lead to configuration errors, and configuration errors are among the most common vulnerabilities in systems, applications, and code development. Keeping it simple can also mean using automated tools and orchestration techniques, both on-prem and in the cloud. It can involve infrastructure as code, using templates or predefined sources of truth in JSON and YAML documents. Keeping it simple often means using the obvious administrative, technical, and physical controls within your budget.

Here's an example of keeping it simple. If you have too complex a password policy, that in itself can create vulnerability: you may think you're creating a more secure environment when you're actually delivering the opposite. For example, here's a passphrase made of three seven-letter words, Bizarre, Spandex, and Dolphin, each with the first letter capitalized and separated by a dot or a dash. This would take a computer about 700 sextillion years to brute-force crack. However, organizations often force complex password policies on their employees, for example 13 characters including uppercase, lowercase, numbers, and symbols. That looks complex, but it would only take about 2 million years to crack; in other words, it's a much less secure password than three seven-letter words that can be remembered. An end user is quite likely to document a password like "W$g8" on a sticky note or in a text document, which in itself creates a vulnerability, whereas they could easily memorize three seven-letter words separated by a dot or a dash. So as we apply the principles we've talked about so far, like least privilege and defense in depth, as well as the other principles we'll look at in the rest of this course, it's always important to remember, at the heart of it all, to keep it simple and not forget the basics.
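Here's the back-of-the-envelope arithmetic behind that comparison, as a rough sketch; the assumed guess rate is illustrative, so the absolute years will differ from the figures quoted above, but the gap between the two search spaces is the point.

```python
# A rough sketch of brute-force search-space arithmetic for the two policies.
GUESSES_PER_SECOND = 100e9        # assumption: 10^11 guesses per second
SECONDS_PER_YEAR = 31_557_600

# "Bizarre.Spandex.Dolphin" is 23 characters; treat each position as drawn
# from 52 letters plus two separator symbols (a brute-force attacker's view).
passphrase_space = 54 ** 23

# A 13-character password over roughly 94 printable ASCII symbols.
complex_space = 94 ** 13

for label, space in (("23-char passphrase", passphrase_space),
                     ("13-char complex password", complex_space)):
    years = space / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{label}: ~{years:.1e} years to exhaust the space")
```

Length beats symbol-set complexity because the exponent, not the base, dominates the size of the search space.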
In this lesson we're going to explore the zero trust principle, and again, on the CISSP exam we want to go beyond just definition and understanding and also look at it from a management standpoint, a security management and information security administration standpoint. To define it: zero trust is an evolving paradigm that moves the focus to individual users, individual assets, and resources; in other words, moving security down the OSI model, close to the endpoint and Layer 2. It uses zero trust principles, which means trust no one and trust nothing, to design industrial and enterprise infrastructure and technological and communication workflows. It assumes that no implicit trust is granted to subjects based merely on their physical or network location: just because they're located in the R&D department as opposed to the call center doesn't mean we're going to implicitly trust them, nor do we implicitly trust the endpoint devices used by the C-suite or C-team. Typically we use zero trust network access solutions, for example Cisco's Identity Services Engine, Microsoft Zero Trust, and others, to perform authentication and authorization as a distinct task before the actual session is established. This can also be done through technologies like IEEE 802.1X port-based network access control. Zero trust focuses on protecting resources, not network segments or locations, subnets, and VLANs.

From a management standpoint, you want to implement a zero trust architecture, and there are some variables here. For example, on the left-hand side we have identities: individual users, applications, and services. We'll have an identity provider, which would be something like Active Directory, OpenLDAP, or Kerberos; it could even be a cloud service provider like Amazon Web Services, Google Cloud Platform, or Microsoft Azure. We'll be using permissions and roles, identity and access management, and multi-factor authentication, the types of tools that tie into user and session risk management. Below that, to the left, are the devices, corporate devices and unmanaged devices; these can have agents installed, or they could be agentless. If it's an unmanaged device, we could use something like security policy enforcement or 802.1X to place that device into a restricted zone or VLAN, go through device inventory, and then perform remediation. Regardless, this is all managed through a centralized security policy enforcement tool that does real-time policy evaluation, but also brings in threat intelligence and threat modeling, often with a cloud or software-as-a-service provider, giving us visibility and ongoing analytics and, as much as possible, automating the process and orchestrating all the tasks.

If we look to the right, we can see that a full implementation solution involves data, apps, infrastructure, and network; full implementation applies the zero trust architecture to all of those elements. Or we could do a point solution that focuses, for example, just on software-as-a-service apps and on-premises apps, possibly using a cloud access security broker, or just on infrastructure. We'll classify, we'll label, we'll encrypt, and we'll use our threat intelligence to give us adaptive access, often using attribute-based access control, an advanced access control solution that can be applied to our MAC model, our DAC model, or a role-based model to be adaptive based on different variables, characteristics, and attributes of the identity, the subject, or the user. As mentioned, zero trust is an ongoing, evolving architecture, and there are quite a few vendors that offer solutions to integrate it into a wide variety of enterprises and organizations across many different business sectors.
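Here's a minimal sketch of the kind of per-request policy decision a zero trust architecture makes; the field names, risk threshold, and quarantine behavior are illustrative assumptions, not any vendor's API.

```python
# A minimal sketch of a zero trust policy decision point: every request is
# evaluated on identity, device posture, and session risk, never on network
# location.
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated_mfa: bool   # verified identity with multiple factors
    device_managed: bool           # corporate device with a healthy agent
    session_risk: float            # 0.0 (low) to 1.0 (high), from analytics
    resource: str

def authorize(req: Request) -> bool:
    if not req.user_authenticated_mfa:
        return False                        # no implicit trust in any subject
    if not req.device_managed:
        # unmanaged devices land in a restricted zone pending remediation
        return req.resource == "restricted-zone"
    return req.session_risk < 0.7           # adaptive, risk-based access

print(authorize(Request(True, True, 0.2, "hr-database")))    # True
print(authorize(Request(True, False, 0.2, "hr-database")))   # False
```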
In this lesson we're going to look at secure defaults, or security by default, and we'll describe it by comparing it to some other software development life cycle (SDLC) approaches. One is secure by design: the program or application is developed with security integrated into the entire SDLC; we often call this DevSecOps. Another is secure by deployment: here the application or system is deployed into an environment where security is highly considered in the network and system design. Then we have secure by default. This design consideration assumes that the application is natively secure without any modifications or extra controls; it's gone through a large amount of testing, and even somebody with white-box or gray-box knowledge shouldn't be able to penetrate the security. Of these three, secure by deployment would be the least secure. Security by obscurity, which basically means hiding things or making the code complex, is less secure than all three of these methods.

With secure defaults, the existing default configuration settings should be the most secure possible, whether we're talking about systems or software. Secure defaults can often be delivered using infrastructure as code, in other words pre-tested stacks, often in YAML or JSON format, a single source of truth built on secure defaults; you can use tools like Terraform or AWS CloudFormation to accomplish this. Keep in mind, however, that there is some downside to secure defaults: you can lose productivity or create a less user-friendly environment, so you have to find that delicate balance. Secure defaults can be native to the platform or policy-based, and they can involve other principles such as compartmentalization and mediated access through a proxy. A real-world example would be Microsoft Azure's security defaults: for instance, they require all Azure users to register for Azure Active Directory multi-factor authentication, require administrators to perform multi-factor authentication, block legacy authentication protocols, require users to perform multi-factor authentication when necessary, and protect privileged activities such as access to the Azure portal.

Let's talk about the concept of failing securely. This involves implementing a mode of system termination functions that prevents the loss of the secure state when a failure occurs or is detected in the system or application. We see this in systems that have a trusted state machine, for example SELinux or TPM, the Trusted Platform Module. The failure might still cause damage to some system resource or entity, but it doesn't create a vulnerability. To fail securely means to implement secure defaults, deny access on failure, and undo changes or roll back to a secure state; we also call this secure fallback. Failing securely involves checking return values and using conditional code or filters for failure defaults, making sure that even with a loss of availability, confidentiality and integrity still remain.

A real-world example would be fail-open versus fail-closed firewalls; keep in mind this can also apply to other security appliances, such as intrusion prevention sensors or secure logging. Fail open means that if there's a component failure or system crash of a firewall or an IPS sensor, traffic is still allowed to flow from the ingress interface to the egress interface, in order to prevent inconvenience to users or disruption of data flows. This is the less secure option, and again, it can apply to a wide variety of devices, for example NetFlow collectors, syslog servers, or SIEM systems. Fail closed means that if there's a component failure or system crash of a firewall or sensor, traffic is not allowed to flow from the ingress interface to the egress interface, in order to prevent an attacker from launching an exploit by forcing a failure. To enforce this, we typically apply administrative or managerial controls, meaning it's part of a written security policy, as well as technical controls, in other words a setting on the security device: the appliance, the SNMP agent, the firewall, or the sensor.
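Here's a minimal sketch of fail open versus fail closed in code; the inspection engine and its failure are simulated.

```python
# A minimal sketch of fail-open vs. fail-closed behavior: when the inspection
# engine itself fails, do we forward traffic or drop it?
def inspect_packet(packet: bytes) -> bool:
    """Returns True if the packet passes policy; may raise on engine failure."""
    raise RuntimeError("inspection engine crashed")   # simulated failure

def forward(packet: bytes, fail_open: bool = False) -> bool:
    try:
        return inspect_packet(packet)
    except Exception:
        # fail open:  keep traffic flowing (availability over security)
        # fail closed: drop traffic        (security over availability)
        return fail_open

print(forward(b"payload", fail_open=True))    # True:  traffic still flows
print(forward(b"payload", fail_open=False))   # False: traffic is blocked
```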
In this lesson we're going to explore the concept of privacy by design, and we're going to use the NIST Privacy Framework as our prototype. Individuals, particularly end users who are untrained, do not always understand the possible consequences for their privacy when they interact with applications, systems, products, and services. This is especially true when using endpoint devices and mobile devices. Failure to design for privacy can have a direct negative effect at both the individual and societal levels, affecting an organization's brand (in other words, the value proposition of a product or service), the financial bottom line of the organization, and future prospects for growth. The primary and secondary losses due to privacy breaches can be overwhelming, and often the secondary loss, for example diminished prospects for growth or an increased cost of borrowing money from lenders, can have a cascading effect, making secondary losses greater than primary losses.

For the CISSP exam, I want you to be aware of the NIST Privacy Framework. The NIST Privacy Framework is a tool for improving privacy through enterprise risk management, and it's at version 1.0 as of this recording. It's a voluntary toolkit developed in partnership with various industry stakeholders. The vision of the Privacy Framework is to assist organizations in identifying and managing their privacy risk, and the goal is to protect individual privacy. It also helps organizations build innovative products and services with integrated privacy features. To accomplish privacy by design using the NIST Privacy Framework, you take privacy into account as you design and deploy systems, products, and services that affect individuals; you communicate about your organizational privacy practices with all of your stakeholders; you recognize that privacy issues can differ when dealing with other countries and jurisdictions, for example the General Data Protection Regulation (GDPR) when operating within or dealing with the EU; and you encourage cross-organizational workforce collaboration by developing profiles, choosing tiers, and achieving outcomes.

There are three elements to the NIST Privacy Framework. The Framework Core provides an increasingly granular set of activities and outcomes that enable an organizational dialogue about managing privacy risk. Profiles are a selection of specific functions, categories, and subcategories from the Core that an organization has prioritized to help it manage privacy risk; for example, database privacy could be a category, and one of the activities could be to use abstraction or tokenization to represent the underlying data. Implementation Tiers support communication about whether an organization has sufficient processes and resources in place to manage privacy risk and achieve its target Profile.

In this final short video of this course, we'll look at the concept of trust but verify. Trust but verify is not a zero trust approach to security; however, it does introduce stronger identification mechanisms to fulfill a multi-factor authentication (MFA) "something you have" prerequisite. We continue to deploy physical tokens that generate different one-time passwords (OTPs) every 60 seconds, or mobile phones with OTPs sent as text messages. However, mobile malware can intercept these messages and forward them to fraudsters, and this rising threat has driven NIST and others to recommend moving away from SMS-based OTPs. The advanced verification comes in the form of more stringent factors, such as biometric authentication; vendors often offer identity and access management solutions with identity analysis for attribute-based access control (ABAC). Trust but verify may eventually lead to expanded usage of user behavioral analytics (UBA) and artificial intelligence to enhance and improve the trust-but-verify model in certain environments.
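Those rotating one-time passwords are typically TOTP codes (RFC 6238); here's a minimal standard-library sketch, with an illustrative Base32 secret.

```python
# A minimal sketch of a time-based one-time password (TOTP, RFC 6238), the
# mechanism behind rotating hardware tokens and authenticator apps.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of intervals since the Unix epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # changes every 30 seconds
```

Because the code is derived from a shared secret and the clock, it never travels over SMS, which is exactly why it's preferred over text-message OTPs.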
In this course on secure design principles, you learned about least privilege and defense in depth, separation of duties and zero trust, failing securely and secure defaults, and the concepts of privacy by design and trust but verify. In the next course you'll learn about organizational roles, responsibilities, and processes; due care and due diligence; and compliance, legal, and regulatory issues.

Let's begin this course by looking at a very important principle that all security practitioners, engineers, architects, and developers must understand: you must align security with the needs and the goals of your business. If it's a non-profit or some other organization, like an agency or a government, we'll just call it an organization; regardless, there is a value proposition, and a security practitioner must align all of the security functions to the business's strategy or the organization's value proposition, whether they offer a product or a service, profit or non-profit, public or private, and to the organization's charters, goals, missions, and objectives. Every security initiative must be coupled and aligned, and have synergy, with the strategic and tactical goals of the enterprise. This alignment must permeate all of the organizational processes, including top-level governance, steering committee charters or publications, and any corporate initiatives, to name a few. Security must also permeate all IT service management initiatives, like change and configuration management, availability management, and so on.

Security strategists must account for any major changes to organizational operations or activities. These can be due to a wide variety of factors, for example an upcoming merger or acquisition, or a sudden provisioning or de-provisioning due to some event; security must react to changes. A merger is a deal to join two existing companies into one new ongoing enterprise, and that will greatly affect your security initiatives. It could be a divestiture, or de-merger; this will involve cleaning up access rights, normalizing identity information, and of course data loss prevention as you engage in the divestiture process. There may be legal ramifications: for example, during an acquisition or a merger there may be a dark period that has to be enforced, and it may be the job of the security practitioners or administrators to make sure that security and privacy are maintained and that data is not lost or leaked. Other reactions would be dealing with privacy issues, data sharing throughout the change of the business model, and interconnection agreements that may be set up or terminated; these are all often involved when there's a change to your business operations.

Another aspect of aligning security to your business or organizational enterprise is to understand all of the internal influences. You have to consider: is your organization functional, in other words a top-down, traditional organization where your departments and business units are based on functions, for example the finance department, the marketing department, the sales department? Or is it a flatter, projectized organization, where individuals who hold a particular job title actually work horizontally across different projects and programs? That will affect, for example, the way you implement security, such as single sign-on and federation. Who are the members of executive management, your C-suite or C-team? Which of those team members do you answer to directly, or which does your direct supervisor answer to? You need to be aware of the needs of stakeholders, both positive and negative stakeholders, and of the internal consumers of the resources and services you offer; for your service desk, for instance, internal customers would be your own departments, business units, and organizational units. You have to understand the management structure: do you have a good idea of the organizational chart in your organization, not just your department? Do you have internal auditors? Do they audit on a regular basis, on an ad hoc basis, or both? Are you an auditor? And do you understand the internal key value propositions: what value do your users and employees get from the company, and vice versa? Do the employees of your company understand their role and responsibility in creating a more secure and private environment?
And of course there are external influences. As a security practitioner you're more likely to deal with internal influences on a day-to-day basis, but there may be situations where you deal with external influences, though typically not stockholders, bondholders, and partners, unless you're setting up secure channels for strategic partners or providing security at a stockholder or bondholder meeting. Regulators, supply chains, and vendors are important external influences, and your role as a security engineer in securing the supply chain, and your relationship with vendors and those channels, is critical. Your customers and clients are external influences, as are your lenders, and any socio-political or economic factor can be an important external influence that affects how you implement security, for example a global pandemic.

We might assume that understanding organizational roles, responsibilities, and processes belongs to the human resources department or some other management function; however, security initiatives require a broad awareness of all organizational roles and responsibilities, and this ties directly into identity and access management: placing the right people in the right groups or the right containers, with the correct rights and permissions, just for starters. Companies are organized in different ways. Some are traditional top-down, functional organizations; some are flatter, horizontal organizations that are projectized; and many organizations do a lot of outsourcing. They may be on premises, they may be in the cloud, and they may have a large number of remote workers and teleworkers due to unforeseen circumstances, as I mentioned. Directory services like Active Directory, eDirectory, and OpenLDAP, as well as cloud service provider directories, are often closely aligned and mapped to organizational duties and job titles. Roles and responsibilities will often directly affect access control methodologies, and sensitivity levels for mandatory access control (MAC) architectures; responsibilities can also drive our role-based and attribute-based access control models, RBAC and ABAC. Access decisions typically rely on organizational charts, roles, responsibilities, or locations in a user base. The role is often set by evaluating the essential objectives and architecture of the enterprise, aligned with the subject's job title and responsibilities. The security practitioner must be aware of these details.

We must also be aware of who owns data and who owns assets in the organization. Let's start with the owners. Owners are often the creators of the object or the data, especially in a discretionary access control model. Data or asset owners often determine the classification level, and they'll decide on handling and on tagging or labeling of the asset or the data. While I'm here, let me also mention data processors: on the exam, the processor may be a separate category of data ownership. Typically a processor has zero ownership; they do data input in a command-line interface, a console, or some type of graphical interface. They're simply there to input data into a spreadsheet or a database; we call them data processors. Next we have data stewards. Data stewards manage assets from a business perspective. Stewards often deal directly with customers, both internal and external, and stewards often ensure compliance with standards and administrative or managerial controls, as well as data or asset quality.
with standards and administrative or managerial controls as well as data or asset quality imagine you were a guest on a luxury yacht the stewards are often the ones dealing directly with the guests or the customers then we have the custodians custodians maintain the assets from a technical perspective so going back to our luxury yacht analogy this would be the first mate the second mate and the third mate the ones working on top of the deck dealing with the yacht from a technical perspective custodians may also deal directly with stakeholders and management custodians ensure confidentiality integrity authenticity and availability of data and assets finally we have officers this is where the buck stops officers are often a higher level than owners these would be chief information officers cio chief privacy officers cpo and chief information security officers ciso this is executive management and they're the ones ultimately responsible for how data and assets are handled in the organization more often than not data and asset owners will answer to one of these officers directly or to a supervisor who answers to an officer directly regardless it's important for the security practitioner to be aware of who owns data and assets who's a steward who's a custodian who processes data and who the officers are in the organization and their roles and responsibilities
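before we move on here's that small sketch of a role-based access decision mentioned earlier this is just an illustration in python the role names and permission sets are hypothetical loosely mapped to the data roles we just covered and not taken from any real product or standard

```python
# minimal sketch of a role-based access control (rbac) decision;
# role names and permissions are illustrative only
ROLE_PERMISSIONS = {
    "data_owner":     {"read", "write", "classify", "delegate"},
    "data_steward":   {"read", "write", "report_quality"},
    "data_custodian": {"read", "backup", "restore", "patch"},
    "data_processor": {"read", "write"},
}

def is_allowed(role: str, action: str) -> bool:
    """return True when the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("data_custodian", "backup")        # custodians maintain assets
assert not is_allowed("data_processor", "classify")  # processors only input data
```

in a real deployment these mappings would come from a directory service like active directory rather than a hard-coded table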
[Music] in this lesson we're going to talk about governance but before we do that let's define a couple of terms first due diligence due diligence relates to the act of performing thorough research and information gathering and planning before committing to a particular plan of action it involves proper information gathering planning testing and strategizing before development production and deployment so for example from a human resources and security standpoint it would be due diligence to perform comprehensive background checks before hiring or additional background checks before promoting somebody to a higher position due diligence could involve investigating a cloud service provider thoroughly before signing a memorandum of understanding mou or moa memorandum of agreement it could involve testing and evaluating non-repudiation techniques such as digital signatures and digital certificates before signing contracts or developing code due diligence often involves understanding which framework is required by law or is applicable for example under vendor due diligence how federal agencies adhere to security mandates when controlled unclassified information cui must reside in non-federal systems and organizations due diligence can also relate to supply chain security as well due care is something different due care refers to the degree of attention that a reasonable person takes for a particular entity for example an enterprise mobility management initiative it's the level of judgment attention and activity that a person would engage in under similar circumstances due care refers to ongoing activities after the due diligence has been performed some due care activities would be performing the necessary maintenance and patch management to keep a system or application available and secure it could be taking all the necessary precautions to ensure that an ip packet arrives with confidentiality integrity and availability cia properly applied using various technical controls due care can involve using security principles like least privilege defense in depth separation of duties zero trust and more for continual improvement and maturity due care and due diligence are all a part of governance the need for governance exists anytime a group of people comes together to accomplish an end or a goal governance typically focuses on three attributes or characteristics one authority there is typically a chain of command that delivers governance often a board of directors executive management c-suites and c teams or other types of officers two decision-making governance involves top-level macro decision-making and three accountability governance is focused on the structure and processes for sound decision making accountability management and conduct at the top of an organization these decisions will flow down into other subsets of governance for example security governance supply chain management technical and strategic governance it directs how an organization's objectives are determined and achieved how risk is controlled and addressed and how the delivery of value is improved on an ongoing basis a subset of global governance or corporate governance is security governance this is broadly defined as the rules that protect the assets and continuity of an organization security governance can include mission statements charters declarations of value propositions policies standards best practices and procedures security governance guides the course and control of organizational security operations initiatives and activities often the result of a security steering committee or security team or group in the security operations center or soc the security practitioner's strategy will be derived from this effective security governance some security governance activities would be creating a risk register or a risk ledger or log or populating the risk register on an ongoing basis publishing in a written format or to an intranet website all compliance and regulatory requirements tracking and recording all compliance and remediation initiatives aligning security strategy with the organizational goals most often through a written security policy performing vital roles in risk assessment and risk management and documenting stakeholder interactions and reporting related workflows [Music] in the previous lesson we looked at governance and we looked at security governance due diligence and due care now we're going to look at a component of governance known as compliance along with other requirements for example privacy protecting data intellectual property and those types of issues now compliance is defined as observing a rule such as a policy standard specification or law regulatory compliance which is different than just organizational compliance actually outlines the goals organizations want to accomplish to certify or be accredited that they understand and take actions to comply with certain policies government regulations laws and other relevant issues for example companies that provide products and services to the united states federal government must meet certain security directives set by the national institute of standards and technology or nist specifically nist special publication 800-53 and sp 800-171 are two common mandates with which companies working within the federal supply chain may need to comply security governance and security compliance teams are often responsible for publishing all compliance and regulatory requirements for the organization all personnel compliance and remediation initiatives should be tracked and recorded in a compliance database this could be the risk register or risk log in a small to medium-sized organization
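here's a minimal python sketch of what one row of that register might look like the field names and the likelihood times impact scoring are just one common convention not a mandated schema

```python
# minimal sketch of a risk register entry used to track compliance and
# remediation items; field names and scoring are illustrative only
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    owner: str                       # accountable party
    likelihood: int                  # e.g. 1 (rare) to 5 (almost certain)
    impact: int                      # e.g. 1 (negligible) to 5 (severe)
    regulation: str = ""             # related mandate, e.g. "nist sp 800-171"
    remediation_status: str = "open"
    last_reviewed: date = field(default_factory=date.today)

    @property
    def risk_score(self) -> int:
        # simple likelihood x impact scoring, one common convention
        return self.likelihood * self.impact

entry = RiskRegisterEntry("R-001", "cui stored on an unmanaged file share",
                          owner="cpo office", likelihood=4, impact=5,
                          regulation="nist sp 800-171")
print(entry.risk_score)  # 20 -- a high score that should drive remediation
```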
in a larger organization it'll be a larger database often under the umbrella of the chief privacy officer the cpo there should be guidelines for using special compliance scanners for finding user vulnerabilities and again only certain authorized individuals should use those compliance scanning tools the risk register or ledger can also be used to help fulfill compliance policy requirements as mentioned some requirements of privacy policy are to describe controls to protect intellectual property ip personally identifiable information pii personal health information phi and other sensitive data from data leakage data loss and data breach privacy policy is often needed to assure adherence to regulations such as the computer fraud and abuse act the electronic communications privacy act and the identity theft and assumption deterrence act specifically in the united states in the eu for example for the avoidance of penalties from the gdpr you should realize the first violation is up to 10 million euros or 2 percent of the company's global annual turnover of the previous financial year whichever is higher the second violation has a penalty of up to 20 million euros or 4 percent of the company's global annual turnover of the previous financial year whichever is higher we'll see a quick sketch of that penalty math at the end of this lesson the bottom line violating privacy policies especially when it's governance and regulations from the government can be quite costly privacy protection is often mandated in regulations or industry compliance beyond gdpr with standards in the us like hipaa or pci dss hipaa for the health care and medical field pci dss for credit cards and bank cards you need to identify all data owners and processors you need to discover incidents of data remanence in other words physical attributes or artifacts of data that can remain on a storage device you must implement limitations on data collection and have a policy that allows collected pii and phi to be scrubbed before sharing with a research institute or for example a healthcare community cloud data privacy also involves including data loss prevention dlp engines for example the rsa dlp engine and a cisco email security appliance let's talk intellectual property the global shift towards service-oriented enterprises has enlarged the role of intangible assets and intellectual property the need for protection and control of data loss and leakage has increased drastically intellectual property can be elements such as copyrights and trademarks patents and formulas corporate trade secrets and upcoming marketing campaigns for new products or emerging products or services digital rights and licenses and cryptographic keys and passwords there are consequences to privacy breaches and data breaches we typically break these up into primary and secondary loss primary loss would be the productivity lost in incident handling or incident response to a data breach the response of your swarm team or your incident response team as they drop their normal duties and deal with the event or the incident or potential disaster then there's replacement replacement of parts restoration of data from backup mechanisms or replacement of employees who have been terminated due to data theft or data breach those are common primary losses secondary losses however can be larger because they can be cascading losses in terms of fines and judgments losing competitive advantage and a hit to your reputation those things can also lead to higher lending costs or the loss of partners or vendors or even customers
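as promised here's that quick sketch of the gdpr penalty math in python just to make the whichever-is-higher rule concrete the turnover figure is made up

```python
# minimal sketch of the gdpr penalty caps described above: first-tier
# violations cap at 10 million euros or 2% of global annual turnover,
# second-tier at 20 million euros or 4%, whichever is higher
def max_gdpr_fine(annual_turnover_eur: float, second_tier: bool) -> float:
    flat_cap, pct = (20_000_000, 0.04) if second_tier else (10_000_000, 0.02)
    return max(flat_cap, annual_turnover_eur * pct)

# a hypothetical company with 2 billion euros of turnover: the percentage
# term dominates the flat cap in both tiers
print(max_gdpr_fine(2_000_000_000, second_tier=False))  # 40000000.0
print(max_gdpr_fine(2_000_000_000, second_tier=True))   # 80000000.0
```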
some organizations due to the type of content that they're involved with will have digital rights management or drm this is access control technology that protects licensed digital intellectual property drm is used by publishers manufacturers and ip owners for digital content and device monitoring digital media licenses attempt to balance the rights of ip owners and internet users by protecting rights and profits for digital product manufacturers and retailers think of netflix hulu and spotify drm protects copyrighted digital music files apps software programs films tv shows games and other media we'll also see blockchain and nft non-fungible tokens introduced to help protect digital rights moving forward here's an example of drm for adobe pdf files in other words applications and tools that deny unauthorized sharing tools that watermark the pdf files track document usage enforce expiration and integrate with e-commerce solutions you can even restrict these pdfs to specific ip cidr ranges or revoke access based on a least privilege principle another initiative to achieve privacy is data minimization this is a directive that states that collected and processed data should not be used or kept unless it's critical to operations the details should be determined early in the life cycle to support data privacy standards such as gdpr for the eu and organizations doing business with the eu you can also employ tokenization to enhance privacy data tokenization is a technique used to remove directly identifying elements from the underlying data the process replaces the raw data with randomly generated tokens or pseudonyms it is most often deployed with structured data like card numbers credit card numbers and national identifying numbers like the social security number ssn used in the united states in order to comply the original data does not leave the enterprise even to a cloud service provider also tokenization can be combined with encryption to achieve further defense in depth
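here's a minimal python sketch of that tokenization idea a toy in-memory dictionary stands in for the real token vault which in practice would be a hardened system that never leaves the enterprise

```python
# minimal sketch of data tokenization: raw values are swapped for random
# tokens and only the vault can map a token back to the original value;
# the dict below is a stand-in for a real hardened token vault
import secrets

_vault: dict = {}   # token -> original value (never leaves the organization)

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)   # random pseudonym, no relation to the value
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]           # only authorized systems may call this

card = "4111111111111111"          # a well-known test card number
tok = tokenize(card)
print(tok)                         # safe to hand to a cloud provider
print(detokenize(tok))             # original recoverable only via the vault
```

notice the token is random rather than derived from the card number so it can't be reversed without the vault which is what distinguishes tokenization from plain encryption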
in order to implement data privacy data loss prevention and other compliance initiatives we will rely heavily on security control frameworks such as the iso iec 27000 series a very broad flexible and mature framework focused on information security it's the security equivalent of the more widely known iso 9000 quality standards for manufacturers you may use nist sp 800-53 revision 4 this has evolved over 20 years and can be seen as the father figure for other security control frameworks it's mature and comprehensive and can be aligned to other iso standards such as iso 9000 quality management it's very good for large businesses as well as those with a u.s. connection you may use cobit 5 from isaca this is control objectives for information and related technologies and it was created by the information systems audit and control association it's a framework and a supporting tool set that allows managers to bridge the gap between control requirements technical issues and business risks internationally you may use agate atelier de gestion de l'architecture des systèmes d'information et de communication a french framework for modeling computer or communication systems architecture there's idabc interoperable delivery of european e-government services to public administrations businesses and citizens an eu program launched in 2004 that promoted the correct use of information and communications technologies ict for cross-border services in europe there's obashi obashi provides a method for capturing illustrating and modeling the relationships dependencies and data flows between businesses and information technology assets and resources in a business context [Music] generally speaking organizations will face cyber threats in three main areas first is disruption this is where cyber criminals will use new ransomware for example to seize the iot internet of things the new types of devices and components and appliances that are recently on the internet then distortion the spread of misinformation using bots distributed denial of service attacks and other automated sources to cause a compromise of trust and then deterioration where for example advances in smart technology will negatively impact an enterprise's ability to control its own information other emerging cyber crime issues include artificial intelligence ai enhanced adaptive malicious software ai fuzzing to automate and accelerate zero-day attacks machine learning poisoning trojans and backdoor malware hacking of smart contracts based on buggy or unsecured deployment and vulnerabilities of cloud computing this data is from a report from 2019 and it's still valid today and it will still be for the next few years it's from oracle and kpmg their cloud threat report the question was what are the biggest cyber security challenges currently experienced by your organization today the largest percentage at 33 percent was detecting and reacting to security incidents in the cloud coming in second at 29 percent lack of skills and qualified staff and that's why you're here as a cissp candidate because we need more qualified security professionals from the same report this question asks which of the following represents the biggest cloud security challenges for your organization coming in at almost 40 percent was maintaining secure configurations for our cloud resident workloads the next three responses are equally pertinent satisfying our security team that our public cloud infrastructure is secure maintaining strong and consistent security across our own data center and public cloud environments in use and cloud-related security event management challenges the most common cyber attacks used to externally perform data breaches are ransomware malware variants phishing and denial of service ddos botnets the most common internal threat is the personally compromised privileged insider data breaches have become more widespread primarily due to cloud computing and the increased
usage of digital storage for example social media breaches accounted for 56 percent of data breaches in the first half of 2018 according to itweb over the last 10 or more years there have been 300 data breaches involving the theft of 100,000 or more records according to forbes the u.s. had 1244 breaches in 2018 and 446.5 million exposed records according to statista data breaches exposed 4.1 billion records in the first six months of 2019 according to forbes and as of 2019 cyber attacks are considered among the top five risks to global stability according to the world economic forum bottom line regardless of what year it is when you're taking this training these numbers are only going up we also have to contend with licensing issues security professionals must be familiar with the issues involving software licensing and agreements for example there's contractual license agreements these are traditional written contracts and contracts that are digitally signed there's the shrink wrap license agreement these are the ones that are written on the packaging for example the software packages that you buy at a brick and mortar retailer or an online retailer there's click-through license agreements these are very common these are the ones we click through during install either off of a digital drive or off the internet and there's cloud service provider license agreements as well this of course depends on the managed service model infrastructure as a service software as a service or platform as a service if your organization deals with import and export of hardware and software as a security practitioner you may have to contend with import and export issues in the u.s. these mandates began during the cold war to control trans-border flows for example the international traffic in arms regulations itar control the export of items that are specifically designated as military and defense items the export administration regulations ear cover a broader set of items and your organization may be subject to these regulations in addition supply chain security is critical when you're engaged in import export activities cybersecurity-related trade conflict is an emerging global phenomenon countries can do nothing or they can develop import trade barriers or they can restrict procurement or they can develop norms or they can escalate the conflict those are their five main options companies within those countries can make recommendations they can comply they can avoid they can collaborate or they can compromise based on the situation when it comes to import and export issues encryption export controls are a key issue that the security practitioner should be aware of from a management standpoint and again on the cissp exam a lot of these issues and topics are from an it management standpoint for example the department of commerce's bureau of industry and security sets forth regulations on the export of encryption products outside the united states there's also the issue of trans-border data and information flow considerations should always include the flow of data and information and goods across international borders and all legal and regulatory implications these issues can change rapidly based on various geopolitical factors security initiatives must also consider variances in cultural norms customs sensitivities and behaviors for example customs that differ between the european union and apac policies controls and procedures can differ based on region countries are typically under different regulations and mandates
you have to be aware of those when you're dealing with other countries we can rely upon security architectures like agate idabc obashi itil iso or togaf as mentioned the department of commerce's bureau of industry and security bis controls non-military cryptographic exports and you may have to deal with them cloud computing is transcending traditional boundaries and jurisdictional barriers as well and introducing a wide variety of new challenges [Music] well here's a topic that probably belongs in the next course when we look at dealing with personnel issues and employment issues however dealing with investigations from a security standpoint can often go beyond just working with hr or the hiring practices so let's look at some reasons why security practitioners would engage in investigations first as i mentioned you have employment candidates they go through screening processes and hiring processes and often as we'll see here in a moment they can go through elaborate investigations and background checks depending upon the sensitivity of their job role and responsibility if somebody works for the government or for a particular agency and they get promoted to a higher sensitivity level let's say for example from secret to top secret or they get a promotion they may have to go through additional background checks possibly more stringent background checks also different organizations will do periodic investigations or periodic reviews as part of their employment policy so for example certain investigations done in the military or in government agencies may be done every one to three to five years regardless of promotion or job role or responsibility the security practitioner may have to conduct investigations as part of compliance or auditing or privacy policy issues and the investigation may be part of incident response if you're a member of the incident response team or an irt swarm team who joins others for a particular event or incident you may be involved in investigations or if you have forensic skills you'll be involved in cyber forensic investigations in the united states we have the national background investigations bureau now the nbib this function used to be handled by the opm as you can see there at the bottom but now it's the nbib and here you can see an example of some investigations you would go through for a government agency job or a government agency promotion we start out reviewing and scheduling the investigation and then typically we'll do inquiries the inquiries involve phone calls email maybe even sending out letters or surveys to employers schools law enforcement and all of the references they put on their resume then there'll be automated national agency checks with the opm the office of personnel management the fbi the federal bureau of investigation the cia the central intelligence agency the state department the ins i think we call that the uscis now the u.s. citizenship and immigration services credit bureaus for example and maybe even the irs and then you have field work and that involves actually going out and doing personal interviews in person meeting with previous employers previous schools administrators law enforcement and personally following up on references then the investigation is reviewed and then possibly it's an iterative process you go back and do additional investigation once you're satisfied the investigation is closed and documented and the results are sent in a report to the parties involved the employer the agency whoever in this course on security governance
principles you learned how to align security with business roles and responsibilities you learned about governance due care and due diligence we discussed a wide variety of compliance legal and regulatory as well as import export issues and investigations for the purposes of security in the next course you'll learn about policy development and implementation working with vendors consultants and contractors and security awareness education and training in this first lesson we're going to explore policy development and implementation policies specifically security policies establish a general framework within which to work and a guiding direction to take in the future the function of a policy is to classify guiding principles direct behavior and offer stakeholder guidance and a security control implementation roadmap administrative technical and physical controls an information security policy is a directive that outlines how an enterprise plans on protecting its data applications and systems in other words how are we going to achieve confidentiality integrity availability authenticity possession control and utility it helps ensure compliance with legal and regulatory requirements as well as preserve an environment that sustains security principles policy documents are high-level overview publications either in written format or published to an intranet that guide the way in which various controls and initiatives are implemented when you develop your information security policy it should have six elements first it needs to be sanctioned the policy must have the support of executive management or the c-suite or c team this means visible participation and ongoing action communication and even campaigning to achieve the level of investment and prioritization that you need next it needs to be applicable to the organization so from a strategic standpoint the information security policy has to support the guiding principles the mission the charter the goals of the organization tactically speaking it has to be relevant to those who must comply with the policy it should be realistic can it be successfully implemented the policies need to reflect the reality of the environment in which they're going to be deployed information security policies and procedures should only ask for what is attainable assuming the policy objective is to advance the guiding principles of the enterprise we want to assume a positive outcome we're not going to set up our constituents for failure it should be flexible the policy should be able to accommodate change and be adapted as necessary an adaptable information security policy will recognize that security is not static it's not a one-time point-in-time endeavor instead it's an ongoing continual improvement process to support the organizational mission an information security policy needs to be comprehensive the policy scope must include all of the relevant parties all the business units all the stakeholders all of the entities it must be inclusive it needs to take into account the objectives of your organization regulations international laws cultural norms of your employees your vendors your suppliers your business partners obviously your customers but also things like environmental impacts and socio-political phenomena and of course don't forget the shareholders if you have them finally it needs to be enforced it needs to be statutory so enforceable means that the administrative physical or technical controls can be put into place to support the policy and yet you have
the appropriate sanctions in place if someone has not adhered to the policy and typically there are several levels from a verbal warning to termination hopefully with several stages in between but the bottom line the information security policy should be sanctioned applicable realistic flexible comprehensive and enforced policies can change and they will change based on new technologies for example you may have an internet of things hardware authentication policy this is new technology authentication at the hardware and firmware level for iot new policies for user behavioral analytics that we apply to our end users often analyzed with ai and machine learning so you're going to have policies for deep learning and machine learning and artificial intelligence that could include virtual reality and augmented reality through your endpoint devices new policies for interacting with cloud providers relating specifically to that shared responsibility model between you as a consumer and your provider infrastructure platform or software as a service and with rapid changes to mobility such as 5g and wpa3 you'll have organic and dynamic enterprise mobility management policies in the next lesson we'll talk more about the subsets of policies things like standards guidelines processes and procedures [Music] let's begin this lesson talking about standards standards allow an information technology staff to be consistent and systematic standards specify the use of specific technologies in a uniform way because no one individual practitioner can know everything standards also help to provide consistency in the enterprise because it's unreasonable to support multiple versions of hardware and software unless necessary standards are usually mandatory and the most successful it organizations have mandatory standards in order to improve efficiency and to help keep things as simple as possible remember that security principle keep it simple guidelines however provide a list of suggestions on how one can do things more effectively guidelines are similar to standards however they're more flexible and they're not typically mandatory they're used to define how standards should be developed or to guarantee some level of adherence to general security policies some of the best guidelines available are in repositories known as best practices for example the nist computer security resource center the nsa security configuration guides and the center for internet security cis top 20
below that we have procedures also referred to as processes and practices procedures are usually required although they're the lowest level of the policy chain procedure documents are longer and more detailed than standards and guidelines documents procedure documents include implementation details usually with step-by-step instructions and graphics topological maps flow charts the types of media you would see at a site such as lucidchart procedure documents are extremely important for helping large organizations achieve the consistency of deployment necessary for a secure environment procedures processes and practices are often deployed using infrastructure as code in yaml or json format using tools such as terraform or amazon web services cloudformation we'll see a small sketch of that idea at the end of this lesson remember procedures are also known as practices we also hear procedures referred to as sops standard operating procedures these are step-by-step instructions that define how workers carry out routine tasks sops can greatly improve efficiency quality performance communication and compliance with regulations often sops can best be delivered using automation and orchestration tools some considerations for standard operating procedures would be describe the purpose and limits of the procedures offer all of the steps needed to complete the process leaving nothing out clarify concepts and terminology often you'll have a glossary consider health and safety issues in hardware and physical implementation and list the location of all supplemental resources necessary to carry out the standard operating procedure the acceptable use policy is often considered one of the most important sections of a written security policy the aup identifies how employees are expected to use resources in the organization it defines rules of behavior and codes of conduct for example using proper and acceptable language avoiding illegal activities avoiding disturbing or disrupting other systems not revealing personal information avoiding data leakage or data loss and not revealing confidential information in other words protect intellectual property personal health information and personally identifiable information an aup can have several categories for example one for mobility and wireless that's very common a category for handling operating systems and software for example policies that forbid the installation of type 2 hypervisors like vmware player or oracle virtualbox the allowing and denying of personal cloud storage such as dropbox the usage of removable media and whether that media can be used just inside the facility or can leave and enter the premises aups for email usage anti-phishing and web browsing acceptable use policies that are also controlled with url filtering and data loss prevention engines and policies for file sharing or peer-to-peer file sharing sites some managerial controls for aup would be the change management process having a least privilege policy enforcing mandatory vacations enforcing separation of duties rotation of duties having a clean desk policy and social media usage policies earlier i referenced enforcement as being a characteristic of a good is policy several stages of enforcement could involve first initial verbal reprimands or warnings then an official written warning followed up by temporary suspension with or without pay then termination and possibly criminal or civil legal action for example leading to incarceration reimbursement and or restitution if crimes were committed
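going back to the infrastructure as code idea from earlier in this lesson here's a minimal python sketch that emits a cloudformation-style json fragment the resource and its values are illustrative only the point is that a procedure becomes a machine-readable reviewable artifact instead of a written document

```python
# minimal sketch of "procedures as infrastructure as code": a hardening
# step captured as a machine-readable json template; values illustrative
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "sop: baseline security group, deny all inbound by default",
    "Resources": {
        "BaselineSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "sop-controlled baseline, no ad hoc edits",
                "SecurityGroupIngress": [],  # explicitly no inbound rules
            },
        }
    },
}

print(json.dumps(template, indent=2))  # ready for version control and review
```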
[Music] in this short lesson we'll look at some things we talked about briefly in the previous course the security practitioner's relationship to employment candidate screening and hiring hr and legal departments must work closely with security policy steering committees and other groups to determine the best practices for screening and hiring at the start of an interview for example it's not uncommon to sign an nda a non-disclosure agreement otherwise known as a confidentiality agreement also many organizations have employees sign an additional employment contract once they accept the offer new employees should sign off on all security policies as well as the acceptable use policy aup activities include working with headhunter organizations and online hiring sites such as indeed.com security practitioners may be involved in confirming some or all of a candidate's references they may assist in approving education certifications and especially experience when it comes to security they may do additional fact checking of resumes particularly for those who are going to have a security role in the organization background checks and credit checks may be performed and all activities must adhere to compliance and privacy requirements it's not uncommon for a security practitioner an officer or an engineer or an architect to conduct technical interviews especially when hiring somebody in the security department technical or phone interviews before the actual on-site interview or meeting in this lesson we're going to talk about onboarding remember onboarding is new hire procedures it's not done until you've actually hired that candidate it's the process of providing assets guidance knowledge skills and the behaviors needed or expected for a role or responsibility on a team or in a group you may provide them videos printed materials computer-based training lectures there may be formal and informal meetings and assigned mentors at least for the first 45 to 90 days there'll be introductions to other employees explanation of standards and practices otherwise known as standard operating procedures sops this documentation should clearly define all the roles and responsibilities of the onboarded employee you want to provision all devices all endpoint devices and equipment mobile devices or lay out the bring your own device security policies for tablets laptops and other equipment you should deliver security awareness through training and acceptable use policy expectations this should be a written document that the employee who's onboarded will sign and of course additional human resources activities and in the process be sure to remove any ambiguity and uncertainty the onboarding process should be concise clear and comprehensive it's very likely that the employee who's onboarded signed a non-disclosure agreement during the interview process however there may be additional confidentiality agreements that must be signed by the new hire an nda is a legal contract between two or more parties it's a confidential relationship that is often strictly enforced especially on the employee side there can be severe consequences for the employee or ex-employee who violates the non-disclosure agreement it can be business to business or it could be business and employee the nda identifies confidential information that the parties wish to share with each other but not any external parties and possibly other employees or departments within the company itself depending upon the sensitivity level of the new hire for example if they're hired at a top secret level the nda may lay out consequences for
exposure to lower sensitivity levels the information can include intellectual property trade secrets technologies marketing campaigns ideas new processes new products and services and if it's military or government agencies certain types of files that are secret or top secret the nda has the goal of restricting the sharing of information with other people and entities as mentioned ndas are commonly used during the interview process but then introduced again in the onboarding process a new trend to be aware of on the cissp exam is automating the onboarding process enterprises often deploy systems that involve self-service onboarding of personal devices the employee registers a new device and the native supplicant is automatically provisioned for that user and device and installed using a supplicant profile that is pre-configured to connect the device to the corporate network this is often in an 802.1x pnac environment using a tls-protected authentication protocol like peap or cisco's eap-fast the onboarding process can also be automated using a software as a service provider often assisted by a cloud access security broker casb remember offboarding is the reverse process of onboarding and can actually involve more security vulnerability than the onboarding process we'll discuss some of those issues in the upcoming lesson [Music] let's continue our discussion from the previous lesson and talk about off-boarding or employment termination and transfer best practices realize that several of the processes that you'll go through when an employee is terminated or that employee decides to leave on their own will still apply to an employee who is transferred to a different division or business unit or promoted to a higher position or sensitivity level or even demoted so termination and transfer depends upon the circumstances and there can be a wide array of circumstances regardless we must document all procedures for revoking an outgoing or transferred employee's access we must monitor and audit the employee closely in the last hours or days of service specifically an employee who's leaving the organization if possible with human resources departments and possibly legal departments terminate the employee face to face as opposed to using let's say video conferencing or a phone call or text and there should be a witness an in-person witness whenever possible in today's environment this may not be feasible however it's a best case scenario or best practice you must meet all regulatory requirements for example in the united states there's the warn act the worker adjustment and retraining notification act that comes into play to protect workers from the impact of the unexpected and sudden loss of employment basically a layoff that is longer than six months or a reduction in working hours of more than 50 percent over six months there's also sarbanes-oxley that can come into play as well or sox there should be a process and policy in place to disable or delete accounts and revoke digital certificates and digital signatures that revocation of the certificate should also be communicated to the trusted third party for example the certificate authority either the enterprise ca or a third-party ca so that the certificate revocation list crl or the online certificate status protocol ocsp can be used to alert all other parties that are using the public key of that certificate accounts may be disabled for a certain period of time for example 12 months and then finally deleted
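here's a minimal python sketch of that revocation check using the pyca cryptography package which is an assumption on my part the file paths are hypothetical and in practice you'd fetch the crl from the ca's distribution point

```python
# minimal sketch of an offboarding revocation check: confirm a former
# employee's certificate serial number now appears on the issuing ca's
# certificate revocation list; file paths are hypothetical placeholders
from cryptography import x509

with open("employee_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

with open("enterprise_ca.crl.pem", "rb") as f:
    crl = x509.load_pem_x509_crl(f.read())

revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
print("revoked" if revoked is not None else "not revoked -- follow up!")
```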
the employee whether it be termination or transfer should return all property all physical and intellectual property it may be necessary to modify or update any corporate controlled social media sites for example some organizations have a corporate facebook or other presence that needs to be modified for the terminated or transferred employee and remember ex-employees or former employees are near the top of the list of potential threat agents moving forward so this should be an area of the risk register or ledger or risk log on an ongoing basis another important principle of termination and transfer is having the ability to do follow-up interviews if at all possible in a follow-up interview you can often gather a lot of information that's helpful to expose risk and security vulnerabilities as the employee who's being transferred or terminated may be more willing to expose weaknesses or vulnerabilities that weren't discovered through assessment or gap analysis in the exit interview or the release interview try to identify factors that led to the employee leaving if it was voluntary how can the organization improve to keep employees if applicable discover any potential unknown security vulnerabilities or issues with other employees that haven't been exposed remind the exiting or transferred employee of their agreements their ndas and other agreements and responsibilities review the nda that they signed when they started remind the employee of what they're forbidden to discuss with other people and other entities even as an ex-employee or transferred employee always adhere to well-defined off-boarding security policies and procedures that should be well documented in the written security policy or published security policy and collect all corporate assets and property both physical and logical often if the property is not delivered in a timely manner there's some process in place to delay final compensation or the final salary payment to that employee until they deliver the equipment or other assets if necessary you may need to involve law enforcement to collect the valuable assets on the exam it's important to remember different types of agreements and documentation when dealing with third parties and remember as a security practitioner the third party can be internal to the organization for example through a service desk or it could be an external third party entity now from an external standpoint we often use what we call service level agreements or slas slas define the precise responsibilities of the service provider and set customer expectations the sla also clarifies how the support system the help desk the technical support the service desk responds to problems or outages for an agreed level of service an sla can be internal between business units or departments as well as external but usually if it's internal we're going to call that an ola which we'll look at in the next topic these agreements should be used with new third-party vendors or cloud providers providing software as a service infrastructure as a service and platform as a service for 24-hour support from a security standpoint often we're concerned with the cryptographic methods that are used to protect data in transit and data at rest identity and access management and privacy issues such as leakage and loss of intellectual property phi pii and other sensitive information in other words in this agreement or our relationship how do we accomplish confidentiality integrity availability as well as non-repudiation an ola documents the pertinent information for regulating the relationship between
internal service recipients and an internal it area or department that service provider could be a service desk the difference between an sla and an ola is what the service provider is promising the customer versus what the functional it groups promise to each other in the ola an ola often corresponds to the structure of an sla from a document standpoint for example a lot of similar information but there'll be some specific differences based on the enterprise and the functional it groups another document is the memorandum of understanding mou this is also referred to sometimes as a memorandum of agreement or moa another term for mou actually more common is the letter of intent a formal mou or moa usually precedes a more formal agreement or contract it defines common courses of action and high level roles and responsibilities in management of a cross-domain connection often it's a commitment to move forward with the contract process with that provider delivering a high degree of confidence that you're not going to continue the search or seek other providers during that time it will usually terminate the customer provider search process so that subsequent time and resources can be dedicated to the next steps or the next phases of a more formal contract process another common agreement is the ra the reciprocal agreement this is a contract between two organizations that have similar infrastructures and technologies these agreements are difficult to legally enforce a common goal is that one can be a recovery site for the other in case of a disaster or a lengthy outage a reciprocal agreement is a quid pro quo arrangement in which two or more parties agree to share their resources in an emergency or to achieve a common objective for example safe media storage of backup tapes and other documentation so for example data backup where you have two departments or organizations that agree to store each other's backup data on their computers or disaster planning whereby each party agrees to allow another to use its site facilities resources etc after an event or a disaster an interoperability agreement ia is an agreement between two or more entities for collaboration and data exchange it's often used by sister companies that are both under a holding group or a higher level corporation it is a binding agreement for sharing information systems telecommunications software and data an ia interoperability agreement is not the same as a reciprocal agreement ra an example would be the interconnection security agreement isa that a customer signs with a service provider for aws direct connect or azure expressroute as mentioned earlier these agreements bring into play third parties let's look at some other risk factors of dealing with third parties that these agreements are often put in place to protect us from we have to look at the reliability of our vendors and our suppliers the safety quality and security of the supply chain vulnerabilities with business partners in keeping certain data and information private for the purposes of regulation and governance the risks of end-of-life eol products and services where the vendor or the supplier no longer has the particular component or no longer provides support for that eol product or service or the posture of the vendor or the supplier for end of service which had been previously documented in a service level agreement or interconnection agreement or some other type of contract if you continue to use that product or service how do you manage the relationship and all
of the needs if the product or service is end of life or end of service those are prevalent third-party risk factors that security practitioners have to contend with on a weekly basis in the final lesson of this course we'll explore security awareness education and training there is a difference between awareness and training and education awareness is commonly underutilized however if there is an awareness policy or program in place it can often be overdone in other words you can overwhelm the employee with information and saturate them with security awareness best practices we can increase awareness by using self-paced computer-based training modules the employee can watch videos or go through classroom training we can also raise awareness by exposing them to posters newsletter articles email and bulletins remember we want to combine the carrot and the stick in other words you want to have enforcement when security aups and policies are not adhered to but also some occasional rewards for maintaining a secure environment and buying into security policy programs and initiatives we can remind users with system banners drink cups mouse pads notepads and other media training and education are more formal they can include awareness training however there'll be more advanced training for sensitive areas or users with elevated roles or higher privilege levels you should have security training for new hires also technical security training for your it staff and accelerated advanced training for your security staff your security practitioners engineers administrators and architects and specialized training and educational instruction for the c-suite or other executive management or key stakeholders they should be aware of the organization's mission charter and vision from a security standpoint they need to know all applicable security policies and procedures some example security topics in the education and training would be password and badge policy for example using their multi-factor authentication anti-tailgating and piggybacking in other words not going into the building or onto a floor onto an elevator and into an area without using their badge or token and not piggybacking on somebody else's credentials a clean desk policy at the end of the day moving everything from the desk putting it into a locked drawer locked cabinet or safe raising awareness for email and webmail phishing attacks raising awareness and educating employees on different types of social engineering attacks and hoaxing training to prevent data loss and data leakage and an overall education on all relevant governance regulations and mandates the security training should be role-based in other words you have your general endpoint users who should go through training and be aware of their aups but you'll also have training for data owners and system owners data and system custodians and stewards will go through specialized training custodians more technical training while stewards get more business-related training train your administrators and privileged users provide special training for executive users who may not have deep security knowledge and have a more layman's point of view toward technology and security you have executive management otherwise known as the c-suite or c team and even in some situations the board of directors can go through role-based security training here's an example awareness or education program first identify the program's scope what are your goals and who is the target audience for example is it executive management
or receptionists motivate and get buy-in from employees and management this is critical identify the optimal and affordable training modalities webinars classroom training self-paced computer-based training and other modalities and then administer and maintain the security awareness and training initiative finally evaluate and improve continuously [Music] in this course on security policy you learned about security policy development and implementation employment and personnel policies third-party policies and agreements and security awareness and training in the next course you'll explore data and asset classification information and asset handling and data roles and life cycles [Music] let's begin this course looking at three different states of data we need to be aware of because they're going to affect a wide variety of things like risk management risk assessment data loss prevention the usage of various cryptosystems and a wide variety of administrative technical and physical controls so first we have data at rest this is what we commonly think of as the information that's stored on hard disks on workstations and servers the hard disk drives the solid-state drives raid arrays it's also data on memory cards sd cards for example the data in your data center that you either access over the storage area network or network attached storage it could be object storage storage in the cloud for example at amazon web services s3 or azure blob it could be data at rest on archives and backup drives external and removable drives then we have data in transit data in transit is also referred to as data in motion and this is typically data sent either over a wired network like ethernet or an fddi ring for example or a fiber metropolitan area network it could also be data sent across the rf spectrum for example with wireless 802.11 something it could be cellular it could be satellite data that's in transit and then we have data in use and this is volatile data this would be data in the registers on the central processing unit of your workstation or your devices this could be volatile data in ram memory volatile storage maybe data that's cached in in-memory only storage like redis or memcached clusters data in use how do we protect data at rest just a quick overview of course we're going to spend the rest of our time in this training for cissp looking specifically at technical controls but logically we're going to rely on conventional perimeter-based defenses things like firewalls remember a firewall is a metaphor it can be software it can be hardware it can be a combination you know preventing a fire from spreading from one zone or domain to another it could be intrusion detection systems intrusion prevention systems and things like anti-virus anti-malware conventional systems we can deploy defense in depth access controls or identity and access management to determine who's authentic and who's authorized to access data and what they can do possibly introducing additional factors like multi-factor authentication before they can access that data at rest we want to use secure principles often with dual operator control where it takes more than one person to access some data or separation of duties where one particular subject can access parts of data and some other person can access other parts of data or other areas of storage on disk drives or cloud storage cryptographic systems are extremely valuable in protecting data at rest sometimes it's done automatically so for example if you store an object in a cloud provider it will be encrypted with advanced encryption standard aes 128 or 256 typically in either cbc mode or gcm mode
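here's a minimal python sketch of that kind of encryption at rest using aes-256 in gcm mode via the pyca cryptography package key handling is deliberately simplified in practice the key would live in an hsm or a cloud key management service not in process memory

```python
# minimal sketch of protecting data at rest with aes-256 in gcm mode;
# key management is simplified -- real keys belong in an hsm or kms
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 32-byte aes-256 key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # unique 96-bit nonce per object

ciphertext = aesgcm.encrypt(nonce, b"sensitive record at rest", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive record at rest"
```

gcm also authenticates the data so tampering with the ciphertext makes the decrypt call fail which gives you integrity along with confidentiality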
so we can protect volumes disks and individual files at rest using software encryption we can also use full disk encryption fde or move or copy those files over to self-encrypting drives we can also get protection by partitioning our storage for example certain data stored in a secure enclave on a mobile device or we can use hardware security modules hsm to store cryptographic keys at rest and these are technologies we'll explore even further throughout the cissp training as a security practitioner you're probably very familiar with protecting data in motion or data in transit we can do this by using encapsulation methods for example a protocol like ip-in-ip can be used we can have dedicated channels that don't even involve cryptographic mechanisms you just dedicate leased lines between one site and another maybe a dedicated fiber connection more often than not though we're going to use cryptosystems for example ssl tls and specifically when you're going to websites today you're most likely using tls 1.1 1.2 or 1.3 and then higher going forward ipsec or ip security is a very popular way to create virtual private networks for ipv4 and ipv6 at layer 3 of the osi model protecting wireless data in motion more often than not we're going to be using wpa2 and more recently wi-fi protected access 3 wpa3 which includes some new security mechanisms for example management frame protection you can protect data in motion with ieee 802.1x pnac port based network access control that's going to be helping us get security at layer two also at layer two of the osi model there's 802.1ae macsec where we can pretty much get the same type of services that we get at layer 3 for packets and datagrams with ipsec but we can apply those to frames at layer 2 confidentiality origin authentication integrity so these are a lot of the methods we use today to protect data in motion when it comes to protecting data in use for example information that's stored in ram memory or in caches let's say redis caches this is the least mature protection system there's a lot of overhead in protecting data in use due to encryption and decryption and it's costly and difficult to implement there are some newer methods for protecting volatile data in memory such as homomorphic encryption we'll talk about that in an upcoming course but in a nutshell you conduct calculations on encrypted data without actually decrypting it we can also use trusted computing systems to protect data in use for example selinux and more recently we're starting to rely on machine learning and artificial intelligence algorithms and engines which give us cutting-edge visibility and memory protection [Music] there's a very good reason why we're going to look at data and asset classification here so early on in this course in the next course we'll be looking at risk management and before you can do risk assessment risk analysis and risk management you have to have an understanding of your assets and your data you need to know what you have to manage your risk and that involves data and asset classification and it's quite likely that your organization is using a model an architecture that has sensitivity levels and classification possibly in lattices like a mandatory access control model such as bell-lapadula or biba or clark-wilson so you must have a well-established tagging and labeling schema that maps hopefully to a configuration management database a cmdb
[Music] there's a very good reason why we're going to look at data and asset classification here so early on in this course in the next course we'll be looking at risk management and before you can do risk assessment risk analysis and risk management you have to have an understanding of your assets and your data you need to know what you have to manage your risk and that involves data and asset classification and it's quite likely that your organization is using a model an architecture that has sensitivity levels and classification possibly in lattices like a mandatory access control model such as bell-lapadula or biba or clark-wilson so you must have a well-established tagging and labeling schema that maps hopefully to a configuration management database a cmdb such as servicenow or it could be a cmdb in the cloud let's say amazon web services dynamodb we're discovering that nosql document style databases are actually superior to traditional relational database management systems for this purpose we want to assess classify and label all of our facilities different buildings areas and rooms our equipment and all of our physical assets our data and information assets human resources or people assets that's often done in conjunction with a directory service and don't forget there are intangible assets or logical assets and intellectual property and these assets can be on-premises or on-prem they can be in the cloud or at some other service provider they may also be at disaster recovery sites let's dig deeper into the cmdb a configuration management database is not a typical data warehouse it plays a critical role in several it management initiatives such as it service management itsm for example maybe you're using itil4 or cobit5 and it asset management or itam cmdbs assist various it services to help us better align with our business needs by providing current and accurate data for change and patch management for incident and problem management for availability management and for release and deployment management configuration management practices offer the necessary data we need concerning assets and their configurations including their interactions interoperabilities and interdependencies with other assets and other asset classes this assists administrators and it managers with problem resolution with incident response with network component deployment and formulating strategy as well as budgetary forecasting and overall decision making [Music] let's continue our discussion of information and asset handling requirements labeling concerns the classification and prioritization of data systems applications really all assets to determine the level of protection and how the asset should be handled handling controls who has access to the assets and what actions they can take handling is based on labeling and how the asset has been classified when choosing a classification level for an asset and this can go beyond data there are several criteria or attributes that we can use now the most common and the most obvious is value if it's valuable it should be protected and realize it's not just the financial value but also the relative value to delivering the value proposition the service or the product it's also the value or revenue that can be generated going forward not just the present value of the asset the classification level can be based on architecture so for example the subjects and objects are restricted by a mandatory access control model or mac or possibly a risk-based attribute based access control model the age of the object or the data can be a key factor for example over time the value of assets can diminish if it's a physical asset we call that depreciation however information and data and files can also depreciate in value over time over a period of months or years you may have automatic declassification based on the age of the object age is also closely related to useful life if the information is made obsolete it can then be declassified or devalued useful life can also reflect the fact that a new version or a new generation of the object has been created and therefore the previous iteration or version is no longer valuable or it's lost all of its value
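as a small illustration of the automatic declassification based on age that we just described here's a hypothetical sketch the levels and thresholds are invented examples not a prescribed scheme

```python
# illustrative sketch of automatic declassification based on object age;
# the labels and thresholds are hypothetical examples only
from datetime import date, timedelta

RETENTION_RULES = [
    (timedelta(days=365 * 5), "public"),        # after 5 years, declassify
    (timedelta(days=365 * 2), "internal"),      # after 2 years, downgrade
    (timedelta(days=0),       "confidential"),  # default level at creation
]

def current_label(created: date, today: date) -> str:
    age = today - created
    for threshold, label in RETENTION_RULES:  # checked longest age first
        if age >= threshold:
            return label
    return RETENTION_RULES[-1][1]

print(current_label(date(2018, 1, 1), date.today()))  # e.g. "public"
```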
closely related to architecture is personal association so if data or a particular subject is personally identifiable or if it's health information or if it's at a particular sensitivity level based on its association with personnel who are also at that same level of secret or top secret it can affect the classification labeling and handling in the previous lesson i mentioned the importance of having a configuration management database made up of configuration items typically we use a key value nosql document style database so for example we see a schema here that's just one example of many schemas you could have on the exam what's important is that you do develop a key value pair schema and that it's consistent and comprehensive so here we can see the person who collected the asset the name of that person or of the data the source of that asset or data the category which is a tag determined by your entry in the category field this could also be information coming from let's say logging or netflow collectors or sensors you also have the source host the host name of the particular device or configuration item so the bottom line is make sure you have a well established comprehensive and consistent schema
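to make that concrete here's a hypothetical configuration item record following the key value idea just described the field names are illustrative your cmdb's keys will differ what matters is that the schema is consistent and comprehensive

```python
# a hypothetical configuration item (ci) record for a key value cmdb schema;
# all field names and values are illustrative examples
configuration_item = {
    "collected_by": "j.smith",                 # person who collected the asset/data
    "source": "netflow-collector-01",          # source of the asset or the data
    "category": "infrastructure",              # tag from your category field
    "source_host": "core-sw-03.example.com",   # host name of the device / ci
    "classification": "confidential",          # label from your tagging schema
    "location": "hq-datacenter-rack-12",
}
```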
[Music] in this lesson let's talk specifically about asset management and the first takeaway for the exam is that this should be automated bottom line asset management involves the tracking of all physical and if possible logical assets tracking them for various characteristics their location changes and their disposition all of this to improve risk management risk assessment and possible asset recovery for business continuity activity now whether an asset is real estate or software or a set of secret keys the asset manager's main task is to supervise all the activities related to asset management and this would involve the asset management software and the automated tools that are used and of course we're assuming that you have a dedicated asset manager in your organization although this may be somebody else with the role of server administrator or database administrator instead of somebody with the title of asset manager digital asset manager however is a growing enterprise role for example managing images type 2 hypervisors cryptographic keys and other logical and digital assets and data again automation and orchestration systems are vital especially for medium to large enterprises at the heart of asset management is asset inventory control for a lot of companies whose value proposition is a product just in time is prevalent but even if your organization doesn't necessarily have a product to sell you still deal with vendors and wholesalers in a supply chain and they themselves are also using just in time in other words they have just enough to deliver to their customers at a given point in time so we often see backlogs and back orders if there's disruption to the supply chain for example a global pandemic managing the inventory and doing it in an automated fashion will help you keep your budget in line it allows for better security as well because you are tracking all of your physical and logical assets and of course efficient management of your operating capital this may not be your job role but as a security practitioner you want to be aware of the fact that your organization is assessing the type of inventory that's being kept determine the quantity of goods you must keep on hand these could be the goods that you're selling to your customers but they could also be the ones that you're getting from your own vendors track the market trends of your competitors you want to make sure that you have access to the same products and services that your competitors have from various vendors identify your minimum stock levels this ties into your jit and again as i mentioned this is an inventory strategy used to increase efficiency and decrease waste by acquiring goods only as needed in the production process and again this applies whether you're selling physical products to your customers or managing your relationship to vendors and understanding that most of your vendors will be using just in time inventory that can also tie into your disaster response and your disaster recovery as well there are some best practices for fixed asset inventory software realize the scope of your project in other words you want inventory to be not necessarily over the entire enterprise but broken down in a modular fashion for example an inventory of your hot spares for your infrastructure devices your switches your routers your firewalls your multi-layer switches your sensors your servers assign responsibility for your asset management if you don't have a dedicated asset manager or digital asset manager then somebody with a different job title or role will need to take this on and again often these are server operators server administrators or other types of domain administrators make sure that all of your qualified personnel understand basic fixed asset procedures and this would involve having some knowledge of accounting systems and inventory systems not always relying upon the automation but having some ability to work manually if necessary due to an event or a disaster obviously moving forward we're going to rely on automated software to do this and look for emerging technological trends for example software as a service providers and other service providers that could offer this asset control where everything is being stored in the cloud and we're actually running tools machine learning processes and engines from the cloud provider as well and one of our main goals is to identify and clear out ghost assets we call this ghost i.t you'll also hear it referred to as shadow i.t these could be things like virtualization creep where you have type 2 hypervisors and ghost software and ghost applications running on the workstations and laptops of your end users employees using peer-to-peer file sharing to download programs and files that are unauthorized freeware shareware and pirated copyrighted content not only are we creating a baseline for our inventory but we're also looking for those ghost assets to remove them from the enterprise [Music] in this short video we want to identify the different roles of the types of people in our organization that will deal specifically with data assets now these roles can also apply to other assets as well but for the sake of the exam we're going to look at these from the standpoint of data and of course the first one we have is the data owner the data owner owns the information especially in a discretionary access control model so the owner is typically the person who creates the object somebody who creates a file or a folder in active directory or somebody who creates a document library in microsoft sharepoint the owner often has the discretion to share or even assign permissions and rights to the objects they create the owner often determines the tagging and the classification level they may actually even provide the label itself they'll fill in the schema
so based on the keys which are already provided they'll fill in the values next we have stewards a steward manages the data and often additional metadata metadata of course is data about data so it could be files for example say these are files that are stored centrally at a cloud provider maybe amazon web services elastic file system where a bunch of linux systems are accessing these files in a file system that can grow up to petabyte levels the steward is going to manage this from a business perspective they'll often be the ones ensuring compliance over where these files are stored either locally or in the cloud based on standards and based on established controls maybe they're responsible for data quality think of the steward from that standpoint the data custodian is different than the steward a custodian is the keeper of the information or the data from a more technical perspective not so much from a compliance perspective the steward is more about administrative controls where the custodian is more about technical controls and so the custodian would be ensuring for example that the confidentiality integrity and availability of that data is maintained you also have data processors a data processor is not an owner they have no stewardship they have no custodial abilities with the data they're simply there to input into a system taking raw data and putting it into some type of system so it can be converted to information basically data entry they may be involved in doing batch jobs or other activities but they have no other relationship with the data other than simply input output and processing now ultimately the responsibility for data will typically rest with an executive manager an officer so the way that data is handled and managed even beyond ownership is going to be with a chief information officer or a chief privacy officer or maybe a chief technology officer that's basically where the buck stops but on the exam make sure that you're aware from a high level management standpoint of these roles of owner steward custodian processor and officer [Music] these next two lessons really represent the core heart and soul of this course the main information to be retained for the cissp exam what we're going to look at here is a particular data life cycle and we're going to look at six different steps in this life cycle i'm going to go through and describe these one at a time these are the six phases of the data life cycle according to the cissp objectives you may see various data life cycles in different training books different online articles and websites but these are the six that are on the exam objectives so make sure that you focus on these even though you may see them described in other ways in other materials or other formats we have phase one collection phase two location phase three maintenance phase four remnants and then phases five and six are often kind of a binary either you retain it or you destroy it but we have two separate phases if we decide to retain it that's retention and then phase six is destruction often referred to as disposition so let's break down this life cycle for the exam
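as a quick exam-prep aid here are those six phases expressed as a simple ordered structure the phase names come straight from the objectives just listed the code form is only a memory device

```python
# the six data life cycle phases from the cissp objectives, in order;
# the retain-or-destroy fork at the end is modeled as two distinct phases
from enum import Enum

class DataLifeCycle(Enum):
    COLLECTION = 1   # also called data capture
    LOCATION = 2     # where the data will be stored
    MAINTENANCE = 3  # movement, integration, cleansing, etl
    REMNANTS = 4     # residual data left after deletion or moves
    RETENTION = 5    # keep, per the retention policy
    DESTRUCTION = 6  # also referred to as disposition

for phase in DataLifeCycle:
    print(phase.value, phase.name.lower())
```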
first we have data collection this is also called data capture by the way and there are three key methods through which data is typically captured or collected one is data acquisition this is the consumption of readily available data that has been produced by an entity typically outside of the organization we're obtaining the data from some external entity there's data entry this is the generation of new data values for the organization by human operators or devices that produce data for and within the enterprise we also call that data input and then data reception this is the receiving and capture of data generated by devices these can be devices within our organization generating logs or netflow records or other output this could also be cloud-based through a managed security service provider or a cloud access security broker or some type of system let's say cloudwatch from amazon web services or google cloud platform services that are operating in the cloud and on-prem so examples here are siem systems netflow collection logs also industrial control systems scada systems basically any information system and even those that are linked to the iot the internet of things continuing with data collection unless data is collected it cannot be analyzed it can't be matched for patterns it cannot be used for data-driven business decisions in other words without the raw data and the collection of raw data we cannot derive information and we need information to give us knowledge and wisdom as part of the maturity life cycle now a key aspect of data collection is that only data necessary for organizational or business needs should be collected this is a general data principle along with things like data abstraction and making sure that there's no direct access to the underlying raw data article 25 of the gdpr the general data protection regulation by the way mandates that many companies protect data by design and by default enterprises should integrate data protection principles into business activities at the beginning and throughout the entire data life cycle next we have data location where is the data going to be stored we have object storage which performs best for big content and high stream throughput think of storing pdf files graphics files audio and video files things like that as far as geography goes objects are typically stored across multiple regions often through a cloud provider like ibm cloud or microsoft azure object storage is typically highly scalable seemingly infinitely to petabytes and beyond object storage at cloud providers like google cloud storage offers at least four nines of high availability and often 11 to 15 nines of durability object storage also carries additional extensible metadata this customizable data about data allows it to be easily organized and retrieved it's better for versioning and it feeds into analytics block storage is traditional it gives us strong performance with database and transactional data often stored on hard disk drives or solid-state drives in raid arrays accessed by storage area networking to name one topology however with block storage the greater the distance between the storage and the application the higher the latency that's a consideration because block storage is part of a file system and it's being accessed and addressed that way we don't really have the extensibility we're typically part of some type of tree structure and that does limit scalability whereas objects are more flat more of a horizontal structure in other words objects are typically accessed through apis and https urls whereas data stored in blocks is accessed through a file system and it's not really extensible so there's no metadata
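here's a tiny sketch contrasting those two access models the url and mount path are hypothetical examples objects are reached through apis and https urls while block storage is addressed through the file system

```python
# contrast of the two access models described above; url and path are
# hypothetical placeholders, not real endpoints
import urllib.request

def read_object(url: str) -> bytes:
    # object storage: reached through an api / https url (e.g. a presigned url)
    return urllib.request.urlopen(url).read()

def read_block(path: str) -> bytes:
    # block storage: reached as a file inside a mounted file system
    with open(path, "rb") as f:
        return f.read()

# read_object("https://storage.example.com/bucket/reports/q3.pdf?sig=...")
# read_block("/mnt/san-volume/reports/q3.pdf")
```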
this is a great table to refer to for the exam we have different types of databases relational databases which are traditional for e-commerce and transactional types of things there's the key value or nosql database which we use for high volume web apps and gaming applications there's in-memory storage for example using memcached or redis clusters of in-memory only storage for caching and similar uses we have document databases and document and key value are highly related there's wide column there are relatively new graph type databases that are excellent for creating relationships between nodes so we use these for fraud detection for creating profiles of social networking and social media usage for recommendation engines and for social scores the kind of thing some countries apply to their citizens there are time sensitive and time series databases and of course ledger databases built on blockchain technology immutable transparent verifiable systems of record for supply chains banking transactions and of course cryptocurrencies so once we have the collection or capture of the data we then determine the location of the data either object storage or various options of database storage in blocks next we have data maintenance maintenance is initiated once the data has been collected and located maintenance involves offering data to points where usage and synthesis happen in other words some models have this as the transition from raw data to usable meaningful information data maintenance is also about processing the data without yet deriving any value from it for the enterprise that's where information becomes knowledge and in the final phase ultimately wisdom maintenance involves processes such as movement integration cleansing augmentation deduplication and the familiar etl extract transform load functions maintenance is the goal of a far-reaching array of data management activities and because of this data governance faces many challenges in the area of data maintenance next we have data remnants data remnants are the data metadata and artifacts left over after a software deletion process or a software move or copy process this is a known residual risk when handling data during the life cycle to counter the risk of malicious data recovery physical destruction is always the best choice however there are other methods there are fundamentally three categories of ways to handle data remnants one we can clear it clearing or cleaning involves wiping or overwriting the data with zeros or ones the data may still be recoverable under the clearing method then there's purging purging is stronger than clearing it's a more enduring form that can include methods like sanitizing or degaussing which is basically a magnetic process data is not considered recoverable by any known methods after purging the strongest technique is destruction this includes shredding pulverizing or burning the disk or the media and in some organizations you could also put encryption into this category
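to illustrate the clearing method just mentioned here's a simplified sketch of overwriting a file with zeros before deletion note this is an illustration only on ssds and journaled file systems remnant data may still survive which is exactly why purging or physical destruction is preferred for sensitive media

```python
# illustrative sketch of "clearing": overwriting a file with zeros before
# deletion; a simplification, not a substitute for purging or destruction
import os

def clear_file(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # overwrite every byte with zeros
            f.flush()
            os.fsync(f.fileno())      # push the overwrite down to the device
    os.remove(path)
```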
in some organizations how long a particular document or record is stored can be just as important as what is being stored a data retention policy helps to define what is stored how it's stored how long it is stored and how it's disposed of when the time arrives realize that data retention can loop back to data location fresher more pertinent data can be stored transactionally and then archived or long-term data can go to object storage so data retention often involves different types of location at different points in time during a workflow or life cycle periodic audits help to ensure that data records or documents are removed when they are no longer needed and you should implement an automated disk or object storage life cycle either on-prem or using tools in the cloud in some scenarios for example in amazon web services you can deploy a storage gateway either a rack mount device or installed into a hypervisor that storage gateway can connect with a high-speed connection to your service provider partner of amazon web services and you can use that for a wide variety of different data lifecycle management processes including database migration backup and restore and disaster recovery a wide variety of different services [Music] in this final lesson let's dig deeper into that final phase or stage of the data life cycle asset disposal or disposition in the asset disposal process or phase plans are developed for discarding system information physical hardware and software and making transitions often to a new system the information hardware and or software may be moved to another system it may be archived discarded or destroyed if performed improperly the disposal phase can result in the unauthorized disclosure of sensitive data in fact it is at this phase where the most vulnerability is introduced when it comes to data storage or data at rest when archiving information organizations should consider the need and methods for future retrieval if any the disposal activities ensure the orderly termination of the system data or software and preserve vital information about the system so that some or all of it can be reactivated in the future if necessary emphasis is given to proper preservation of the data processed by the system so that the data is effectively migrated to another system or archived in accordance with applicable records management regulations and policies for potential future access so really the bottom line here from a cissp standpoint is that asset disposal and disposition must consider the need for potential future access or retrieval of that data before the decision is made to destroy it the removal of information from a storage medium such as a hard disk or tape should be done in accordance with the organization's security requirements there are several ways to destroy or sanitize there's burning shredding pulping and pulverizing for paper records we use pulverizing for microfilm or microfiche we can also pulverize laser discs or optical discs and document imaging applications for computerized data we can do magnetic degaussing and we can do shredding or cutting for dvds as well degaussing can also involve demagnetizing magnetic tapes in other words removing the magnetic field purging means clearing everything off the media wiping involves overwriting every sector of a drive with zeros and ones for example the dod 5220.22-m sanitization method is one of the most common sanitization methods used in data destruction software and in general is still perceived as an industry standard in the united states and then there's encryption encrypting all files before deleting or disposing of the media here's an example medical offices should maintain documentation of the destruction of health records including the following the date of destruction the method of destruction a description of the disposed records of course stripping out any pii or phi the inclusive dates and a statement that the records were destroyed in the normal course of business as well as the signatures of the individuals supervising and witnessing the destruction digital signatures in this situation would be acceptable
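a destruction record capturing those documentation fields might look like the following hypothetical example the field names are illustrative not a mandated format

```python
# a hypothetical destruction record covering the documentation fields listed
# above; field names and values are illustrative only
destruction_record = {
    "date_of_destruction": "2021-06-30",
    "method": "cross-cut shredding",
    "description": "archived patient intake forms (pii/phi redacted)",
    "inclusive_dates": "2009-01-01 to 2013-12-31",
    "statement": "records destroyed in the normal course of business",
    "supervised_by": "records manager (digital signature)",
    "witnessed_by": "compliance officer (digital signature)",
}
```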
[Music] in this course on asset classification and life cycle we explored data and asset states and classification information and asset handling requirements data roles the data life cycle and asset destruction and sanitization in the next course we'll look at risk assessment risk analysis and management countermeasure selection and implementation and monitoring measuring and reporting risk [Music] well since this entire course is on risk management it makes sense to start out defining risk first we have what's called inherent risk inherent risk is an assessed level of raw or untreated risk it can be defined as the natural level of risk inherent in a process activity system or application without doing anything to reduce the likelihood of attack or mitigate the severity of a mishap another definition is the current risk level given the existing set of controls which may be incomplete or less than ideal rather than an absence of any controls residual risk is the amount of risk or danger associated with an action or event remaining after natural or inherent risks have been reduced using risk controls the general formula to calculate residual risk is residual risk equals inherent risk minus the impact of risk controls some methodologies such as open fair define the impact of risk controls as increasing the resistance or difficulty against threat actors and threat agents as a review let's look at the four types of risk treatment or handling also referred to as risk response these will affect the type of controls and the resources used to implement those controls based on different asset classes and vulnerabilities first we have risk avoidance this is stopping or rejecting the activity that introduces the risk for example deciding not to process credit cards on the web servers that you host in your headquarters dmz risk transference or sharing the risk is transferred for example to an insurance company with a cyber security rider or to a cloud provider with their shared responsibility model risk reduction or risk mitigation here the risk is reduced to an acceptable level by implementing controls the important principle here is not to expend more resources in people time and money than the value of the asset you're trying to protect and that includes the long-term revenue generating characteristics of that asset for example a relational database management system and then risk acceptance tolerating the potential loss by introducing no countermeasures or controls in other words not purchasing flood insurance because your facility is not in a flood zone
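here's a worked example of the general residual risk formula given above the scale and numbers are hypothetical illustrations

```python
# worked example of: residual risk = inherent risk - impact of risk controls
# the 0-10 scale and the values are hypothetical illustrations
inherent_risk = 8.0        # raw, untreated risk for this asset/scenario
impact_of_controls = 5.5   # reduction achieved by the implemented controls

residual_risk = inherent_risk - impact_of_controls
print(residual_risk)       # 2.5 -> compare against what you will accept
```

whatever residual value remains is what you then treat with one of the four responses avoid transfer mitigate or accept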
just like we have to define risk and define the way we're going to treat risk we have to define vulnerability vulnerabilities should be quantified as much as possible for example as a percentage of probability most executive managers c-suites and steering committees would like to see some real numbers not just a list of potential disasters you want to deliver meaningful metrics that display the likelihood that a threat agent's actions will result in a loss including the loss frequency and loss magnitude or impact vulnerability can be a derived value based on the threat capability of actors and agents combined with the resistance of your existing security controls going hand in hand with assessing vulnerability is making sure that you've assessed your assets you've labeled them and you've categorized and prioritized them that's why you'd like to begin assessing vulnerability with the most valuable mission critical assets for example your client and server operating systems understanding the posture of patches updates and security fixes creating a baseline for the browsers that you're using the client software the versions and the types of endpoints workstations laptops mobile devices pads and iot components as well as embedded software you must understand the methods of access for data in transit wired wireless vpn cellular understand the posture of your remote workers have a broad understanding of the available control types and categories that are used at your organization and whether you're using two-factor or multi-factor authentication this is just a starting point when we're assessing vulnerability something we're looking for on an ongoing basis is an indicator of compromise or ioc these are network or host-based cyber observables forensic artifacts of an incursion or disturbance in other words they're part of an attack kill chain an ioc is a measurable event or a stateful property in your controlled cyber domain they could be things like registry entries compressed files encrypted files on disk ip addresses code running in memory and other unrecognized files there are a wide variety of tools that we can use to gather information for vulnerability assessment obviously we have various logs that are output from systems applications firewalls sensors and more we have snmp version 2c and version 3 traps or informs information sent to netflow collectors we have security information and event management or siem systems next generation ips alerts and logs cloud-based visibility tools tools that we run in the cloud at the cloud service providers iaas or paas infrastructure as a service or platform as a service or using those cloud-based tools on-premises or on-prem and lately machine learning engines and artificial intelligence ai for data analysis another valuable tool is online vulnerability databases these are collections and distributions of information about exposed computer security vulnerabilities such a database typically categorizes and defines an identified vulnerability and its variants along with a timeline and coding the database usually assesses the potential impact on affected systems based on a qualitative scale for example a scale of 1 to 5 or a scale of 1 to 10 often mapping that to colors like green yellow and red vulnerability databases may also provide mitigations countermeasures workarounds and hyperlinks to various updates and security fixes two of the most trusted and often used resources are the common vulnerabilities and exposures cve which is a list of entries from mitre.org that represents publicly known cybersecurity vulnerabilities each entry consists of an id number a description and public references the cve is used by the national vulnerability database the nvd along with that we have the common vulnerability scoring system cvss this is an open standard for weighing the severity of computer system vulnerabilities it uses a uniform and consistent scoring method ranging from 0 to 10 with 10 being the highest severity besides the cve from mitre we have the national vulnerability database from nist the iss x-force database symantec's securityfocus bid database and @risk from sans.org when you have time do a web safari to these five websites and explore these vulnerability databases to broaden your knowledge for the exam
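as a small exercise here's how a cvss base score maps to the cvss v3.x qualitative severity ratings the kind of color-coded scale just described the color comments are common dashboard conventions not part of the standard

```python
# mapping a cvss base score (0.0-10.0) to the cvss v3.x qualitative
# severity rating scale, useful when translating nvd data into a heat map
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("cvss scores range from 0.0 to 10.0")
    if score == 0.0:
        return "none"
    if score <= 3.9:
        return "low"       # often shown as green
    if score <= 6.9:
        return "medium"    # often shown as yellow
    if score <= 8.9:
        return "high"      # often shown as orange
    return "critical"      # often shown as red

print(cvss_severity(9.8))  # "critical"
```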
working in concert with vulnerability databases is vulnerability scanning obviously http and https is the most common traffic so web application vulnerability scanners are the most common due to the heavy usage of http and https for example burp suite and owasp zap these are automated tools that can scan web applications and look for common security vulnerabilities such as cross-site scripting cross-site request forgery sql and command injection path traversal and insecure server configuration another emerging threat that must be assessed is access to the dark web the dark web is also called an overlay network or a darknet you need special software configurations or authorization through special tools to access the dark web which by the way your employees may be doing or your vendors' employees as part of the supply chain the deep web or the dark web is not indexed by search engines it's an elaborate peer-to-peer network that can be accessed by tor browsers freenet i2p and riffle to name a few and on the dark web your employees and your partners' and vendors' employees can find pretty much anything and everything they want you can also assess vulnerabilities by using open source intelligence tools or osint this is any data or information that can be collected legally from free public sources concerning an individual for example an employee or an entire organization for example a potential strategic partner it's usually information found on the internet but it can be sourced from books or reports in a public library articles in a newspaper or magazine statements in a press release and freedom of information act reporting and it's not necessary to do this manually you can gather this information using tools like maltego sharing centers and code repositories like github among others other threat intelligence sources would be automated indicator sharing ais this comes from the cybersecurity and infrastructure security agency cisa this capability enables the real time exchange of machine-readable cyber threat indicators there's also stix the structured threat information expression this is a standardized language developed by mitre in a collaborative way to represent structured information about cyber threats taxii is the trusted automated exchange of indicator information this is a transport vehicle for services and message exchanges to allow the sharing of information about cyber threats other threat intelligence sources would be predictive analysis tools and threat maps using cutting-edge machine learning and artificial intelligence tools to better predict future threats
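to give a feel for the standardized language just mentioned here's a minimal stix 2.1 style indicator sketched as a python dict stix objects are exchanged as json often over taxii the id timestamps and pattern here are hypothetical examples

```python
# a minimal stix 2.1 style indicator as a python dict; the id, timestamps,
# name and pattern are hypothetical illustrations
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
    "created": "2021-04-06T20:03:00.000Z",
    "modified": "2021-04-06T20:03:00.000Z",
    "name": "malicious ip used in phishing campaign",
    "pattern": "[ipv4-addr:value = '198.51.100.23']",
    "pattern_type": "stix",
    "valid_from": "2021-04-06T20:03:00.000Z",
}
```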
[Music] in this lesson we're going to explore risk assessment and analysis this all begins with documentation we want to have a risk assessment document to record the processes used to identify the probable threats and propose subsequent action plans if the hazard occurs we want to document assets at risk such as people buildings information technology utility systems machinery raw materials and finished goods not to mention data applications and systems and there are many templates and prototypes available online the inputs to our document would be identifying the hazards for example fire natural hazards terrorism pandemic disease mechanical breakdown or cyber attacks we want to characterize the properties of those hazards and their potential magnitude which leads to identifying and assessing the assets at risk from these hazards people the supply chain information technology our reputation or the confidence in our organization and the environment as well as regulatory and contractual obligations the output from this should be measurable vulnerability preferably in a quantitative format analyzing the impact and magnitude casualties business interruption financial loss fines penalties and lawsuits this information comes from ready.gov part of our documentation may be a risk and threat matrix in this matrix on the horizontal axis we see different event types accidental data leaks espionage financial fraud misuse opportunistic data theft physical theft product or service alteration sabotage and violence on the vertical axis we see non-hostile hostile and unknown actors or agents as you examine this diagram notice that the one hostile actor who has the potential to unleash every event type is the disgruntled insider that should be considered your highest vulnerability when it comes to internal structured threat actors remember the disgruntled insider could be a present employee or someone who just left the company risk assessment leads to risk analysis here's a classic qualitative analysis where we use scales of 1 to 5 or 1 to 10 so for example for a certain asset or asset class and a particular scenario or threat we have the likelihood one being improbable five being frequent for impact or magnitude we have one being negligible and five being disastrous from this information we compare this to our asset and a particular threat and we come up with a heat map so our vulnerable assets that have a likelihood or probability of likely or frequent with moderate critical or disastrous impact will go into the high area and that's where we should begin introducing our technical administrative physical and operational controls the challenge with qualitative analysis however is that it's relative and more subject to bias in addition it's more difficult to determine exactly what is occasional versus likely or minor versus moderate notice there's no 2.5 or 3.7 in other words there's no nuance involved what many organizations will do is perform a semi-quantitative analysis based on their qualitative analysis which may be mature well established and heavily invested in this case they'll attach some real numbers to the labels so for example negligible a value of 1 means no impact moderate a value of 3 for this organization means greater than or equal to one million dollars whereas disastrous is complete impact or catastrophic for the likelihood improbable equals 1 which is rare or almost never occasional a value of 3 is once in the last five years but not in the last year and frequent equals 5 which is several times a year so the risk of a particular event on a particular asset may be 4 times 3 equals 12 and you can compare those numbers based on assets and scenarios to better determine where to expend your resources for countermeasures raising difficulty and resistance
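here's a sketch of that semi-quantitative scoring likelihood and impact on 1 to 5 scales multiply into a risk score which is then bucketed into heat map zones the zone thresholds are hypothetical each organization calibrates its own

```python
# sketch of semi-quantitative risk scoring: likelihood x impact on 1-5
# scales, bucketed into heat map zones; the thresholds are hypothetical
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    score = likelihood * impact          # e.g. 4 x 3 = 12
    if score >= 15:
        zone = "high"                    # introduce controls here first
    elif score >= 8:
        zone = "medium"
    else:
        zone = "low"
    return score, zone

print(risk_score(4, 3))  # (12, 'medium') under these example thresholds
```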
there's also classic quantitative analysis here we see the classic dr michael whitman equation we have ale annualized loss expectancy this is the monetary loss that you would expect for an asset due to a risk over a one year period the asset could be a mobile device valuables in a safe a security appliance data at rest credit card transactions etc or you could apply this to an entire asset class or an entire rack in the data center an important aspect of the ale is that it can be used directly in your cost benefit analysis if a threat or risk has an ale of 5,000 then it's probably not worth spending 10,000 a year to mitigate the threat we start with sle equals av times ef av is the value of the asset a monetary value and ef is the exposure factor ef is expressed as a percentage of financial value lost from a single incident it's the percentage of the asset's total value so av times ef is our sle our single loss expectancy if sle equals av times ef then the annualized loss expectancy ale equals sle times the aro the annualized rate of occurrence this is described as an estimated frequency of the threat occurring in a one year period if we expect this incident to occur every two years for example an aro of 0.5 then the ale would be calculated as sle times aro for example 30,000 times 0.5 which gives us an ale of 15,000 if it was once every three years we'd have an ale of 10,000 these values should help us determine how we'll treat or handle risk on an asset or asset class basis under various probable scenarios and will populate the risk register as a reminder ale is annualized loss expectancy av is the asset value ef is the exposure factor sle is single loss expectancy and aro is the annualized rate of occurrence in this formula we have the three components of risk on a time frame in this case a fiscal or annual year we have magnitude or impact and we have probability or likelihood another emerging method is factor analysis of information risk or fair here risk is determined over a particular time frame based on loss event frequency and loss magnitude the loss event frequency is based on the threat event frequency and the level of vulnerability all of these values should be a percentage loss magnitude is defined by primary loss or losses and secondary loss or losses the goal here is to get calibrated estimates using monte carlo simulations pert charts and other formulas if necessary you can decompose this even further for example the loss event frequency to determine loss event frequency we begin with the threat event frequency and the vulnerability level however we may need to further decompose that to get more accurate calibrated estimates so for example to determine threat event frequency we'll look at the frequency of contact by the threat actor or threat agent and the probability of action by that threat agent or threat actor we can also break down or decompose vulnerability into the threat capability of the actor and our level of difficulty or resistance through our technical administrative operational and physical controls
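here's a worked example of the whitman formulas above plus a tiny monte carlo simulation in the spirit of fair all input values are hypothetical illustrations

```python
# worked example of sle = av x ef and ale = sle x aro, plus a tiny
# monte carlo simulation in the spirit of fair; all inputs are hypothetical
import random

# whitman-style point estimate
av = 60_000          # asset value in dollars
ef = 0.5             # exposure factor: 50% of value lost per incident
sle = av * ef        # single loss expectancy = 30,000
aro = 0.5            # one incident expected every two years
ale = sle * aro      # annualized loss expectancy = 15,000
print(sle, ale)

# fair-style: sample loss event frequency and loss magnitude to get a
# calibrated range of annualized loss rather than a single point value
random.seed(1)
trials = 10_000
losses = []
for _ in range(trials):
    lef = random.uniform(0.2, 0.8)               # loss events per year
    magnitude = random.uniform(20_000, 40_000)   # dollars lost per event
    losses.append(lef * magnitude)
print(sum(losses) / trials)  # simulated mean annualized loss, ~15,000
```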
[Music] now several times so far throughout this training we've already alluded to security controls let's review the different security control categories and types to make sure that we have these down for the cissp exam administrative controls are also referred to as managerial controls these would be policies procedures best practices and guidelines things like password policies hiring and screening policies mandatory vacations and security awareness training then we have the category of technical controls these control access to a resource using hardware and software components they can be physical or virtual we're talking about firewall systems encryption mechanisms enforcing passwords with identity and access management and multi-factor authentication for example using biometrics or smart cards it could be ids or ips sensors both host based and network based then we have physical controls this is access to the campus the facilities or specific areas or rooms using things like locks fences guards video cameras gates bollards and more here are some other examples of administrative technical and physical controls under physical we see motion detectors cable conduits badges dogs and alarms under administrative or managerial we see rotation of duties separation of duties supervision of employees and effective termination practices on the cissp exam they may also have a category known as operational operational is a combination of technical and physical controls we also have security control types we have preventative which stops an attacker from performing an attack for example locks fences bollards and security guards detective identifying that an attack is happening for example cameras infrared sensors or acoustic sensors corrective restores a system to a particular state before an attack this is often a difficult concept on the cissp exam examples of technical corrective controls would include patching a system quarantining a virus terminating a process either manually or automatically or rebooting a system an administrative corrective control would be putting into place an incident response plan a deterrent discourages the attacker from performing the attack this could be signage a software banner or the presence of one of the other security control types seeing a security guard or a camera can in itself be a deterrent so there can be overlap in these control types then there's compensating or recovery this aids a control like a preventative or detective control that's already in place this is also a difficult concept examples of compensating controls would be things like implementing segregation of duties or separation of duties sod to prevent error and fraud making sure that at least two individuals are responsible for separate parts of any task or function another compensating control is adding supervisory control in certain areas and over certain people keep in mind that sometimes an organization is unable to fulfill a particular control requirement maybe due to a legitimate technical or business constraint so if you're unable to meet a requirement for example for pci dss you want to make sure that your compensating controls meet some criteria do all you can to meet the original intent and rigor of the requirement also try to find some other similar level of defense something to offset the control that you cannot put in place for some reason and if you can go above and beyond in the other control areas as a compensation mechanism
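for exam prep here are a few of the control examples above arranged by type and category as a quick lookup structure this is only an illustrative subset not an exhaustive taxonomy

```python
# exam-prep sketch: a subset of the control examples above, arranged by
# (type, category); illustrative only, controls can overlap types
controls = {
    ("preventative", "physical"): ["locks", "fences", "bollards", "guards"],
    ("preventative", "administrative"): ["screening policies", "mandatory vacations"],
    ("detective", "technical"): ["ids sensors", "siem alerts"],
    ("detective", "physical"): ["cameras", "motion detectors"],
    ("corrective", "technical"): ["patching a system", "quarantining a virus"],
    ("corrective", "administrative"): ["incident response plan"],
    ("deterrent", "physical"): ["signage", "visible guards and cameras"],
    ("compensating", "administrative"): ["separation of duties", "supervision"],
}
for (ctype, category), examples in controls.items():
    print(f"{ctype}/{category}: {', '.join(examples)}")
```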
[Music] from the standpoint of cissp it's important for the security practitioner the security manager or the ciso to have some type of framework for their security guidance and security governance there are several popular frameworks from organizations that are very influential the national institute of standards and technology nist is very influential especially when it comes to the questions you might see on the cissp exam this is a big source of their content and nist has a cyber security framework with basically five phases identify protect detect respond and recover in the identify phase you'll do your asset management your asset valuation and your assessments you'll understand your business your business model your value proposition your service your product and your environment and you identify risks by doing risk assessment and analysis and managing those risks specifically through the four different risk treatments in the protect phase you're going to have awareness of the controls you're going to use we looked at those previously that's where you'll have your security awareness training and education understanding how to secure your data at rest and data in transit block based data storage object based data storage on prem or in the cloud protection and procedures for information ongoing maintenance and the protective technologies in the detect area you're going to look for indicators of compromise anomalies and events negative events or negative occurrences are incidents and an incident that rises to a critical level is a disaster you'll perform continuous security monitoring use a wide variety of detection processes and have solid communications in play in respond you'll do response planning building your incident response team or your swarm team also involving good communication analysis mitigation techniques and continual improvements and in the recover phase that's your business continuity or continuity of operations and recovery planning that also involves backup and restore business impact analysis solid communication and ongoing continual improvements another important organization is the cis the center for internet security this is a repository for cyber security best practices tools threat assessment and threat awareness cis leverages the power of a global i.t community to help defend public and private organizations against various cyber threats using the cis benchmarks that's a consensus created secure configuration guideline for hardening systems and applications the cis controls the cis top 20 is a strict ordered and simplified set of cyber security best practices and guidelines it's very popular it's used by a large number of multinational corporations and cloud service providers there's also cis securesuite this is a membership program that offers an automated combination of the cis benchmarks cis controls and cis-cat pro into a commanding and efficient cyber security resource cis-cat pro allows users to evaluate conformance to best practices and guidelines and produce compliance scores on an ongoing basis organizations like amazon web services google cloud platform and microsoft azure the three leaders of the gartner quadrant will partner with the cloud security alliance the csa this is an organization committed to defining and raising awareness of guidance for organizations of all types and sizes to guarantee a secure cloud computing experience csa offers the ccm the cloud controls matrix version 4 to ensure the handling of requirements that stem from new cloud technologies new controls and new security responsibilities the necessary auditability of controls for the organization and interoperability and compatibility with other standards as a homework assignment for the cissp exam you may want to research some other risk frameworks for example cobit 5 is an overarching comprehensive business and management framework that supports governance and i.t management for enterprises it's from isaca iso 31000 risk management is a set of guidelines and principles a framework and a process for managing risk it's used by all types of organizations regardless of the sector the size or their business activities this framework can help organizations achieve their security objectives improve the way they locate opportunities and threats and help them efficiently allocate and use resources for risk handling a parallel option would be to use iec 31010:2019 if you are in the credit card or bank card industry you want to use the payment card industry data security standard pci dss this is a cyber security framework that is actually supported by all of the major credit card companies and payment processing companies with the goal of keeping credit and debit card numbers safe and also from isaca is risk i.t this
framework fills in the gap between generic risk management concepts and a more detailed comprehensive i.t risk management approach it offers an end-to-end global view of risks related to the use of i.t and also a thorough treatment of risk management all the way from the top flowing down to all the operational modules regardless of the framework it's important for your enterprise to understand it to be able to manage significant types of risk especially in information systems and you have to leverage existing successful frameworks [Music] in this lesson i want to return to a diagram that we saw earlier in course number three our topic here is countermeasure selection and implementation remember we want to approach this from more of an i.t management standpoint throughout the rest of this training course we will be looking at specific solutions that you would use to provide for security of the perimeter the network and endpoints but the goal here is to understand as a manager which individuals with security roles and responsibilities will be implementing perimeter and network security is it a different team that will be doing the endpoint security maybe it's your enterprise mobility management team that handles that most likely application security will be done by developers and software engineers and data security maybe by your database administrators if it's a small organization you might be responsible for implementing the perimeter security as well as the network security however as the organization gets larger often you'll have more of a modular approach where different teams or different elements of the security operations center have different zones or different domains so the same people who implement network security in the sense of network admission control ips sensors and protecting the voice over ip may not be the same people putting in place your endpoint security your anti-virus and your anti-malware as a matter of fact you may have a patch management group or initiative that is separate from the team that actually deploys the laptops and mobile devices and does the imaging of the systems you may have different people who are there for fdcc compliance and privacy compliance your data loss prevention for example may not be an on-premises solution it may be a software as a service or cloud-based solution often application security has a clear demarcation because typically the people in charge of application security are themselves software designers software engineers developers and architects the same team or people that provide perimeter security are probably not the same people who do static app testing and code review fuzz testing and dynamic app testing even though they may be involved with the database secure gateway and the web application firewall whether you're using a relational database management system or a document nosql style database or other types of solutions for example a graph database or a quantum ledger database there are quite a few elements to data security obviously classification of the data encryption of the data for example aes 256 gcm but the identity and access management access to the data will probably be in line with your network security so again as an i.t manager we have to understand who has the authority and who has the responsibility to choose the countermeasures and the controls and to implement them the selection of those countermeasures may involve a steering committee or a security
team or even executive management and those who do the implementation have to go and get the funding and the approval before they can deploy this in the security operations center the network operations center the corporate lan the call center or wherever and of course there's policy management and all the aspects that go into that we're talking about risk management right now and we've talked about i.t security governance and security awareness training so several of these issues have been covered so far in this course you will look at things like penetration testing application security architecture and design and continual visibility through monitoring and response making sure we have the right products the right services and the right people in place to deliver that continual improvement [Music] in this lesson we want to explore measuring the success of implementing your security controls one approach is to use an sca a security control assessment this is a formal evaluation an official assessment and overall look at a system a development project or an application that you compare against a predefined set of metrics and controls the sca is performed in concert with or independently of a full security test and evaluation which we call an st&e this is performed as part of an official security authorization for example for an audit or as part of a penetration test or perhaps compliance testing the sca and the st&e will appraise the operational plan or the planned implementation of your controls the results are a risk assessment report that represents a gap analysis documenting the residual system application or data risk assessment tests conducted should include audits security reviews vulnerability scanning and penetration testing now whether you do a formal sca and or st&e the end result will be a determination of the maturity level of your security initiative or your organization's security governance once you complete your security control assessment you should be able to determine where you are in the cmm the capability maturity model hopefully you're not at level 1 which the cmm officially calls initial factor analysis of information risk fair calls this chaotic and that's what it is your decision making is a free for all it's based on ad hoc intuition with limited experience if any poorly defined decision making inconsistent results not aligned with any type of leadership mission or charter there are no defined key risk indicators key performance indicators or csfs critical success factors in fact there are very few meaningful metrics and any success from a security standpoint is based on individual heroics level 2 is what we call repeatable fair calls this the implicit mode the process is not codified there is no established schema you're still vulnerable to inconsistency from a security governance standpoint poorly aligned standards and practices any data or reports offered to decision makers are usually superficial they won't hold up to scrutiny or audit you have unclear roles and responsibilities people will make decisions outside their role responsibility or level of authority here risk is defined purely qualitatively for example on a scale of one to five or one to ten with very little expert judgment you may have some established metrics but they're questionable as to how meaningful they are level 3 should be a baseline for almost all organizations we call this defined or early explicit as far
as fair is concerned we have some standardized terminology and assessments are up to date you have better support for your security team you have improved visibility you have robust and defensible analysis of your applications systems and data you have established security teams for example incident response teams or swarm teams you have a steering committee you have a service desk you've populated your risk register or risk ledger you've defined key performance and key risk indicators and you have well-calibrated meaningful metrics at this point you might have a more precise semi-quantitative risk analysis the best that most organizations can hope for is level 4 mature explicit or managed the processes are controlled they can be adjusted or adapted to particular projects without measurable loss of quality you have continual improvement in play you have visible quality you have up to date risk registers patch management vulnerability assessment and penetration testing the data that you use for risk treatment or risk handling is accurate it's based more on quantitative analysis whitman or open fair and you've got well-defined and well-tested indicators and metrics and you're well on your way to being certified or getting the assurance that you need from your governing body your regulator or whoever is providing your governance of course level 5 is rarely achieved but it's possible if you have mastery and extreme maturity in one of the frameworks we mentioned earlier like cobit 5 iso iec or itil4 [Music] in this lesson we're going to look at monitoring and reporting and let's start out with siem systems a very popular solution for getting visibility into the enterprise the term siem is a combination of security information management and security event management siem systems which can be physical virtual or cloud-based centralize the storage and analysis of logs and other security related documentation to perform near real-time analysis you can optionally send filtered and processed data to data mining bigquery and data warehousing servers for machine learning algorithms in your data center or at a cloud service provider siem systems allow security professionals to take countermeasures perform rapid defensive actions and handle incidents more successfully an example of a cloud-based siem is microsoft azure sentinel as you can see in this diagram siem systems can perform a wide variety of different activities not all systems will perform all of these activities however the more broad and comprehensive the system the better the results from a security standpoint siem systems can collect and analyze logs they can perform event correlation deduplication and normalization you can use them for forensics and for it compliance you can monitor all different types of logs system security application you can audit object access you can get real-time alerts dashboards and reporting you can perform file integrity monitoring system and device log monitoring and of course log storage and retention often however the siem system will use a different type of storage either data warehousing or object storage in the cloud to conduct cloud-based bigquery and machine learning some common sources of data sent to siem systems would be firewall appliances and firewall systems ids and ips sensors assorted server logs web servers email servers network time protocol servers directory servers infrastructure devices switches multi-layer switches and routers specialty email and web appliances or services and of course system logging including snmp traps and informs
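to make the normalization and deduplication just mentioned concrete here's a sketch of what a siem might do to a raw event on ingest the syslog-like format and field names are hypothetical

```python
# sketch of siem-style normalization and deduplication on ingest;
# the raw format "<timestamp> <host> <severity> <message>" is hypothetical
import hashlib

def normalize(raw: str) -> dict:
    ts, host, severity, message = raw.split(" ", 3)
    event = {"timestamp": ts, "host": host,
             "severity": severity.lower(), "message": message}
    # a stable hash of the invariant fields lets repeated events be deduplicated
    event["dedup_key"] = hashlib.sha256(
        f"{host}|{severity}|{message}".encode()).hexdigest()
    return event

print(normalize("2021-06-01T12:00:00Z fw-01 WARNING blocked tcp/445 from 203.0.113.9"))
```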
Obviously, the more you can automate and orchestrate the security processes of visibility and reporting, the better your results. Automation is different from orchestration. IT automation, from a security standpoint, involves generating a single task to run automatically without any human intervention. Automation could involve sending alerts to a SIEM system, dynamically triggering a serverless function at a cloud provider (for example Azure Functions or AWS Lambda), or adding a record to a database when a batch job is run. Orchestration involves managing several or many automated tasks or processes; it's the more global activity, as opposed to focusing on one task. Orchestration combines all of the individual tasks, and it occurs with various technologies: applications, containers, data sets, middleware, systems, and more. For example, historically we would use Docker to produce a containerized application and something like Kubernetes as an orchestration service for all of our Docker containers. Another solution is SOAR: security orchestration, automation, and response. SOAR is an assortment of software, services, and tools. It allows organizations to simplify and aggregate security operations in three core areas: threat and vulnerability management, incident response, and security operations automation. Security automation involves performing security-related tasks without the need for human intervention. SOAR can be defensive (detection, response, and remediation) or offensive (vulnerability assessment and penetration testing used in active defense). You should automate if the process is routine, monotonous, and time-consuming.

Now that we've automated our visibility and monitoring using SIEM and SOAR systems, let's generate some meaningful reports. Reports should involve meaningful metrics. Reports should have as much information as necessary or as needed, but should not be a data overload. You may need to express things in simpler terms or have different reports for different target audiences. The recipients of your reports typically don't have the same technical experience or security knowledge that you do, so the report needs to be understood by your target audience. Dashboards are very effective. Often security engineers will use Python or the R programming language to produce modern, visually engaging reports, so it helps to understand the components of visual communication and best practices. You want to avoid three-dimensional representations, use a palette of sequential colors, and avoid simple pie charts; opt for scatter plots, bar and bubble charts, Venn diagrams, density plots, and box plots. Utilize tools that deliver meaningful and easily digestible results; cloud service providers have tools such as CloudWatch, CloudTrail, Operations, and Insights. Create powerful dashboards with maximum visibility, use R programming to generate automated system reports, provide written PDF summaries, include engaging charts and graphs, and make sure that you have after-action reports, which include sections on lessons learned: things you did right, things you did wrong, and how you can improve next time.
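Since we just mentioned using Python or R for visually engaging reports, here's a small illustrative example with Python's matplotlib that follows those best practices: a flat bar chart with a sequential palette instead of a 3-D pie chart. The findings data is made up for the demo.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly metrics for a security report.
severities = ["Critical", "High", "Medium", "Low"]
open_findings = [3, 11, 27, 42]

fig, ax = plt.subplots(figsize=(6, 3))
# A simple horizontal bar chart with a sequential palette -- no 3-D, no pie chart.
ax.barh(severities, open_findings, color=plt.cm.Blues([0.9, 0.7, 0.5, 0.3]))
ax.set_xlabel("Open findings")
ax.set_title("Vulnerability findings by severity (example data)")
ax.invert_yaxis()  # most severe category at the top
fig.tight_layout()
fig.savefig("findings_report.png")  # embed in a PDF summary or dashboard
```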
In the next lesson, we'll look specifically at continual improvement life cycles.

[Music] Let's talk about continual improvement in this lesson, and in doing that we're going to take this overlay of three different models. We'll start with the simple model of plan, do, check, and act: PDCA. If you're going to have an initiative of continual improvement, obviously you want to do a lot of planning, as much as possible, with as much information gathering as possible. Then, once it's time, you "do": you have an action plan. All along the way you keep checking, in an iterative way, for ways that you can improve. Once you find ways to improve based on the original "do" baseline, then you act, and those actions will involve continual improvement and lead to additional planning. So it's a circle of continual improvement that we go through. Let's look next at the seven steps for continual improvement. First, in the upper right-hand corner, we identify the strategy for improvement. We have a vision: what is the business need or the organizational need? We have strategic goals and we have tactical goals. Strategic goals are more based on governance, your mission, and your value proposition; tactical goals are the way that we actually implement continual improvement, along with operational goals, which cover ongoing maintenance. Then in step 2 we define what we will measure, and in step 3 we gather the data. Notice that we're now in the data quadrant, where we basically just collect raw data: the who, the what, the when, and the criteria, maintaining data integrity, making sure that we are secure by default and secure by design, gathering data for our operational goals, and choosing the tools that we're going to use for measurement. Next we have step 4: how often do we process this data for continual improvement, how do we format it, what type of automation tools and systems do we use, and how do we test for accuracy? Notice that in this quadrant the data now becomes information; raw data becomes usable information in some type of format, feeding into tools, systems, and engines. In step 5 we analyze the continual improvement information and the data: we look for trends, we look for moving targets, and we discover what improvements are necessary. This is knowledge. Notice how information becomes knowledge once we start building in analysis, modeling, trend analysis, and overall discovery of the necessary improvements. In step 6 we present and use the information; this is where we have our assessment summary and our action plans. And then finally, in step 7, we implement the improvement. Notice that the ultimate implementation of improvement is the wise thing to do: we start with raw data, it becomes information, it gets processed, and then we get analysis, which gives us knowledge and, ultimately, continual improvement and the wisdom that's derived from this very important process.

[Music] In this lesson we will define threat modeling and look at several threat modeling methodologies. The process of threat modeling involves creating an abstraction of a system or a prototype to identify risk and probable threats. This is often done in a private cloud environment or a sandbox, also referred to as a detonation chamber. For example, in the diagram we're looking at the different stages and phases of malware, so a new malware variant could be tested, launched, and analyzed in a sandbox environment or detonation chamber in your own data center, or perhaps at a cloud provider. When cyber threat modeling is applied to systems that are being developed, it can help us lower vulnerabilities and lower risk. With the widespread adoption of threat intelligence technologies by most enterprises, and this includes government agencies, for example the U.S. Department of Defense, cyber threat modeling is extremely important to these federal programs. The Department of Homeland Security (DHS) and NASA, for example, are constantly adopting threat-focused approaches to risk management: a structured approach and ongoing processes to analyze the security of systems, applications, and data.
Threat modeling provides visibility, increased security awareness, and prioritization and understanding of the security posture, but the ultimate goal of threat modeling is to mitigate all threats and prevent future attacks, as well as to respond quickly to new variants and zero-day code. STRIDE and PASTA are common threat modeling methods. STRIDE stands for spoofing of user identity, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. In this methodology there are six different categories of threats: spoofing, tampering, repudiation, information disclosure, DoS, and elevation (or escalation) of privileges. It's a threat model initially developed by Microsoft back in 1999, and its six classes represent attackers' prevalent goals. PASTA stands for process for attack simulation and threat analysis. It's a risk-oriented method that endeavors to link business objectives to technical requirements. PASTA has seven stages, with the goal of delivering a dynamic process ranging from identification in stage 1 through enumeration to scoring in stage 7. Trike and VAST are two other common modeling solutions, along with DREAD. Trike is a technique frequently used as a risk management tool during security audits. It is a unique open-source threat modeling method focused on enhancing the security auditing process from a cyber risk management perspective. VAST is visual, agile, and simple threat modeling. VAST attempts to address the limitations of other threat methodologies, such as DREAD, STRIDE, and PASTA, by using a more practical approach. The founding principle of VAST is that, in order to be effective, threat modeling must scale across the infrastructure and the entire DevOps portfolio. VAST has separate operational models and application models.
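As a quick illustration of STRIDE in practice, here's a small Python sketch that maps each of the six categories to the security property it violates; the example threats are our own, not taken from Microsoft's documentation.

```python
# Each STRIDE category paired with the security property it violates
# and an illustrative threat (the examples are invented for this demo).
STRIDE = {
    "Spoofing":               ("Authentication",  "forged sender identity on a login request"),
    "Tampering":              ("Integrity",       "modifying a record or packet in transit"),
    "Repudiation":            ("Non-repudiation", "denying that a transaction was performed"),
    "Information disclosure": ("Confidentiality", "leaking data to an unauthorized party"),
    "Denial of service":      ("Availability",    "flooding a service so users are locked out"),
    "Elevation of privilege": ("Authorization",   "gaining admin rights from a normal account"),
}

def triage(threat_description: str, category: str) -> str:
    """Label a discovered threat with the property it puts at risk."""
    violated_property, example = STRIDE[category]
    return f"{threat_description}: violates {violated_property} (compare: {example})"

print(triage("Unsigned firmware update accepted", "Tampering"))
```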
[Music] In this final lesson we're going to explore supply chain risk management: SCRM. A huge challenge to modern supply chains is that several, even thousands, of suppliers can contribute to a single product. Many risks exist because vendors' employees, for example, may introduce cybersecurity vulnerabilities from the hardware, software, and services used in the chain. Some tiers of the supply chain may be considered proprietary as well, so that a lack of visibility impedes the security life cycle. This can make third-party assessment and monitoring more difficult; in other words, it can be almost impossible to do security control awareness at certain points in the chain. At the heart of SCRM is delivering meaningful metrics and analysis related to exposure in specific areas of the supply chain, for example cargo disruption trends across transit modalities, and the threats posed by terrorist groups, activist groups, and other criminal syndicates. There are country risk variables, such as the rule of law and the effectiveness or inability to provide law enforcement at different phases of the chain. When dealing internationally, governments continually work with the trade community to manage and mitigate supply chain security risk. Weaknesses in the supply chain were especially exposed during the year 2020, during the global pandemic. From a U.S. point of view, the U.S. Customs and Border Protection (CBP) and Customs Trade Partnership Against Terrorism (CTPAT) programs are heavily involved in supply chain security and risk. Initiatives involve third-party assessment and monitoring, as well as setting minimum security requirements and service level requirements. Here's an example SCRM process, starting with the NPI, the new product introduction. Notice that we have a life cycle here: plan, source, make internally, then make externally, and then deliver. It's a simple model for a new product, but we have to identify and document risks early on, in step 1. In step 2 we create a supply chain risk management framework. In step 3 we monitor risk using customized and automated tools. In step 4 we implement our governance and conduct regular audits. And in step 5 we manage unknown risks by building strong defense in depth all along the supply chain, whenever possible, in a security-aware culture, as a partnership between all the members of the supply chain.

[Music] In this course, risk management, you learned about vulnerability and risk assessment; countermeasure selection and implementation; risk frameworks; monitoring, measuring, and reporting; threat modeling; and supply chain risk management. In the next lesson you'll learn about practical cryptography, cryptographic life cycles and key management, and PKI: public key infrastructure.

[Music] In this course on practical cryptography, we're going to begin with the type of algorithm that's been around since the beginning of recorded history; we have examples of this from Julius Caesar, and we have examples from early in the world wars. Basically, these are symmetric key algorithms, where the same secret key is used for encryption and decryption. The secret key must be shared between the sender and the receiver in a secure fashion. There are a lot of ways to do this: you can do it out of band, for example with a courier or delivery service, or in band over some other secured communication channel, or, as we'll see later, we can actually use an asymmetric key cryptosystem to derive a shared secret key between two parties over an untrusted network. The strength of the symmetric key cryptosystem is related directly to the management of those keys, the size of those keys, and how they're shared. Key lengths are typically from 40 bits to 512 bits; in most circumstances the key length should not be less than 64 bits, and longer keys are less susceptible to a successful brute force attack. Symmetric key algorithms provide wire-speed encryption. They're commonly used to encrypt bulk data, for example data sent between two peers on a site-to-site VPN, or with the customer-managed keys used at cloud service providers to encrypt data at rest. Symmetric key algorithms commonly deploy confusion and diffusion techniques, added to transposition, to make them impervious to attack, and they can be accelerated by hardware, for example hardware security modules from Gemalto. These algorithms use stream ciphers, where the bits are XORed (exclusive ORed) in a stream, or they can be block ciphers, where an entire block, let's say of 128 bits, is encrypted; if the final block doesn't have 128 bits, historically it would be padded up to 128 bits. Block ciphers are often used.
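Here's a minimal sketch of the shared-key idea using the Fernet recipe from the third-party Python cryptography package; whoever holds the one shared key can both encrypt and decrypt, which is exactly why distributing that key securely is the hard part.

```python
from cryptography.fernet import Fernet

# Both parties must hold this same secret key -- sharing it securely
# (out of band, or via a key agreement like Diffie-Hellman) is the challenge.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"wire transfer: $100,000 to account 12345")
print(f.decrypt(token))  # same key decrypts: b'wire transfer: ...'
```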
There are several modes for symmetric key algorithms. An older, traditional mode is ECB: electronic codebook. This is the simplest mode, where each block of data is simply encrypted with the same key, and it leverages a very large codebook. A replacement for ECB was CBC: cipher block chaining. We also saw this mode in upgrades to 802.11 wireless networking. CBC improves on electronic codebook by making the encryption of each block dependent on the ciphertext of the previous block; we also see this concept today in the technology we refer to as blockchain. CTR, or counter mode, generates the next keystream block by encrypting successive values of a counter; by adding counters to block chaining we make the algorithm less deterministic. And then we have GCM: Galois/Counter Mode. This is an authenticated encryption mode that concurrently combines confidentiality, authenticity, and integrity, and it's based on Galois fields. In Galois/Counter Mode we use a large initialization vector, or pseudo-random nonce, along with several layers of confusion and diffusion; in addition, we have a counter to make it less deterministic. GCM is also an AEAD mode (authenticated encryption with associated data), which means we don't need an extra HMAC, or hashed message authentication code, if we deploy it with its own GMAC. GCM is one of the most popular modes used with the Advanced Encryption Standard, AES. AES became effective as a U.S. federal government standard on May 26, 2002, after approval from the Secretary of Commerce. AES is included in the ISO/IEC 18033-3 standard. It's available in many different encryption packages, and it's the first and only publicly accessible cipher approved by the NSA for top secret information when used in an NSA-approved cryptographic module.
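To show what authenticated encryption with AES-GCM looks like in code, here's a short sketch using the Python cryptography package; note that the nonce plays the role of the initialization vector and must never be reused with the same key.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with a key
aad = b"header-v1"                          # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, b"secret payload", aad)
# decrypt() raises InvalidTag if the ciphertext or the AAD was altered,
# which is the AEAD property: no separate HMAC is needed.
plaintext = aesgcm.decrypt(nonce, ciphertext, aad)
assert plaintext == b"secret payload"
```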
The next category is asymmetric key algorithms. With asymmetric keys, different keys are used for encryption and decryption. Specifically, they're a generated key pair: they're generated together and they're mathematically related. The public key will be shared with many, but the private key, hopefully, is kept secret by the owner or the system that derived the key pair. Asymmetric key algorithm keys range from 512 bits to 4096 bits in length. Asymmetric keys are not used to protect bulk data, but rather smaller amounts of information, like the amount of information in your national ID card, passport, or driver's license, the information in a digital certificate, or a 128-bit symmetric session key. These algorithms are slower, so they're not suitable for bulk data encryption. The design of many asymmetric key algorithms is based on the problem of factoring the product of large prime numbers; as a matter of fact, when using asymmetric algorithms, one of the first actions is to generate a large prime number. Key management of asymmetric algorithms is simpler and more secure, primarily because of public key infrastructure, and these algorithms are best suited for digital signatures and for session key exchange or agreement protection services. RSA is the most popular commercial asymmetric algorithm, but there are also DSA, elliptic curve DSA, PGP (pretty good privacy), GPG, and others, plus the Diffie-Hellman protocol. When using an asymmetric key cryptosystem for privacy or confidentiality, let's say when Alice wants to send a confidential message to Bob, Alice will first get Bob's public key, which is readily available, for example in a digital certificate. She will encrypt the message with Bob's public key, and then Bob will decrypt it with his secretly held private key. On the other hand, if Alice wants to send Bob a message and give Bob a high degree of confidence that the message came from Alice (origin authentication), she will encrypt it with her private key; Bob will acquire her public key and decrypt the message with Alice's public key. The Diffie-Hellman key exchange is the first key agreement asymmetric algorithm used for generating shared secret keys over an untrusted channel. Once the two parties securely develop shared secrets, they can then use those keys to derive subsequent keys, and these keys can then be used with symmetric key algorithms to transmit information in a protected way. Diffie-Hellman can also be used to establish public and private keys; however, RSA tends to be used instead. The RSA algorithm is also capable of signing public key certificates, whereas the Diffie-Hellman key exchange is not. Diffie-Hellman is used by TLS, IPsec, Secure Shell, PGP, and many other protocols. In the Diffie-Hellman key exchange, modulo math is used, and there are different Diffie-Hellman groups. Diffie-Hellman group 14 is a 2048-bit modulus; this is the minimum acceptable. Diffie-Hellman group 19 uses a 256-bit elliptic curve, as opposed to modulo math; this one's acceptable. Diffie-Hellman group 20 is a 384-bit elliptic curve; this is considered next-generation encryption. Diffie-Hellman groups 21 and 24 are also considered next generation: group 21 uses a 521-bit elliptic curve, and group 24 uses a 2048-bit modulus with a 256-bit prime-order subgroup. The original Diffie-Hellman mode, DH, used the same shared secret all the time between parties; the original DH by itself does not provide authentication of the communicating parties. Diffie-Hellman ephemeral, or ephemeral DH, uses different shared secrets each time between the parties; a cryptographic key is called ephemeral if it's generated for each execution of a key establishment process. Elliptic curve Diffie-Hellman uses EC public and private key pairs, and the same shared secret is used all the time between the parties. The very popular ECDHE, elliptic curve ephemeral Diffie-Hellman, uses EC public and private key pairs, and a different shared secret is used each time between the parties; this is emerging as the most popular mode of Diffie-Hellman.
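Here's a brief sketch of an ephemeral elliptic curve Diffie-Hellman exchange using the Python cryptography package (the X25519 curve rather than one of the numbered IKE groups, purely for illustration); both parties arrive at the same shared secret over an untrusted channel and then derive a symmetric session key from it.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates an ephemeral elliptic curve key pair (ECDHE-style).
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key...
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared  # ...and arrives at the same shared secret.

# Derive a symmetric session key from the shared secret, as the lesson describes.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"handshake demo").derive(alice_shared)
```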
[Music] In this lesson we're going to look at cryptographic hashing. A cryptographic hash, also referred to as a hash value, message digest, fingerprint, or checksum, maps data of any arbitrary length to a fixed-length string, for example 256 bits using SHA-2; depending on the algorithm, the digest produced is from 128 to 512 bits in length. It's based on an irreversible, one-way mathematical function. As you can see in the diagram, we input data of arbitrary length into the hash function, take the fixed-length hash result, append or attach it to the original data, and send that along to the recipient to meet the goal of integrity in the CIA triad. For a cryptographic hash function to be considered reliable or trustworthy, it should be infeasible, if not impossible, to generate the original data from the hash value. It should be deterministic and quick to compute, and any small change, for example one bit flipping from 0 to 1 or vice versa, should result in the avalanche effect: in other words, the resulting fixed-length hash should be completely different. It should also be collision resistant: no two different data or message inputs should generate the same hash value. That's one of the reasons why you should never use MD5 in any new security implementation; MD5 was based on the MD4 algorithm, and it should no longer be used; it's been deprecated. SHA-1, which produces a 160-bit digest, should also be set aside in favor of SHA-2 (for example 256-bit SHA-256) or SHA-3. SHA-3 was not meant to replace SHA-2, but it's highly recommended. You can also use RACE Integrity Primitives Evaluation Message Digest, or RIPEMD, with its 128-, 160-, 256-, and preferably 320-bit versions. Now, hashing alone gives us integrity; however, if we combine a shared secret key with the algorithm, we get origin authentication as well. We call these HMACs: hashed message authentication codes. The hash is the hash function, for example SHA-256, and the keyed result is the MAC, the message authentication code. As you can see here, we want integrity and origin authentication for information sent between routers. This can be inside the organization, or it could be over the internet using a protocol, let's say Border Gateway Protocol. The routing update itself is the data; it goes to the hash function combined with a shared secret key, and the result gets appended to the routing update. Notice, however, that we don't get confidentiality or privacy; to get that, we would use other cryptographic algorithms. Once the other router receives the advertisement, hello, or update, it uses the same cryptographic hash algorithm and the shared secret key to make sure that the information has not been changed in transit and that it came from an authenticated source: another party with the same secret key. Here's an example of the same thing with a transaction. The transaction is the data; it's combined with the shared secret key (and by the way, this shared secret key could be derived using Diffie-Hellman in one of its modes; it could even be a key derived from the original Diffie-Hellman key), it goes through the hashing algorithm, and the result gets appended to the original transaction and sent to the recipient. HMACs are often combined with symmetric key cryptosystems to protect data in bulk, giving confidentiality, integrity, and origin authentication.
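Here's what that looks like with Python's standard hashlib and hmac modules; the routing update and the shared secret are stand-ins for illustration.

```python
import hashlib
import hmac

update = b"route 10.1.1.0/24 via 192.0.2.1"        # the routing update (the data)
shared_secret = b"pre-shared-key-between-routers"  # could be Diffie-Hellman derived

digest = hashlib.sha256(update).hexdigest()  # hash alone: integrity only
mac = hmac.new(shared_secret, update, hashlib.sha256).hexdigest()  # integrity + origin

# The receiver recomputes the HMAC with the same key and compares in constant
# time; a match proves the update is unchanged and came from a key holder.
expected = hmac.new(shared_secret, update, hashlib.sha256).hexdigest()
assert hmac.compare_digest(mac, expected)
```

Note that the update itself still travels in the clear, matching the point above: an HMAC gives integrity and origin authentication, not confidentiality.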
[Music] Digital signatures are a form of electronic signature designed to replace and/or augment physical handwritten signatures. They're a mathematical algorithm commonly used to validate the authenticity and integrity of a message, such as an email, a credit card or online transaction, or some form of digital document. Realize that digital signatures do not provide confidentiality or privacy. They generate a virtual fingerprint that is unique to an entity, and they're used to identify users and protect information in digital messages or documents, such as a digital certificate. Digital signatures are more secure than other forms of electronic signatures. Let's say a user wants to send a digitally signed purchase order for one hundred thousand dollars to a vendor. The original purchase order, the original data of arbitrary length, goes through a SHA-1 or SHA-2 hash; the result is a fingerprint, or digest. The fingerprint or digest is then encrypted, or signed, using the private key of the sender, in this example with the RSA algorithm. That result is then attached or appended to the original purchase order and sent over the untrusted network. The vendor, the recipient, will use the public key of the sender, which it got through an authenticated channel, a handshake, or possibly a digital certificate, and, using the same hash function, unpacks the result, matches the original hash, and concludes that the purchase order has maintained its integrity. There's also a high degree of confidence that it came from the authenticated sender, because the sender's private key was used. A digital certificate is a type of file used to tie cryptographic key pairs to entities such as individuals, websites, devices, or entire organizations. If public trust is needed, then a trusted certificate authority (CA) will assume the role of a third party to validate identities and associate them with cryptographic key pairs using digital certificates. In fact, the certificate authority could also generate the key pairs on behalf of the sender or the receiver. The key pair consists of a public key and a private key. The public key is included in the certificate, while the private key is kept secure, or at least that's the goal. The owner of the private key can then use it to sign documents, and the public key can be used to verify the validity of those signatures. A common format for digital certificates is based on the X.509 v3 standard, which consists of a public key, a digital signature, and other metadata, such as a unique serial number, about the entity linked to the certificate. Also included is information about the issuing certificate authority. We'll revisit digital certificates when we talk about public key infrastructure, PKI, later in this course.
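Here's a condensed sketch of that purchase order flow using RSA signatures from the Python cryptography package; in real life the verifier would pull the public key from the sender's certificate rather than generate the key pair locally.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

purchase_order = b"PO-1001: pay vendor $100,000"

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Hash the document and sign the digest with the sender's private key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(purchase_order, pss, hashes.SHA256())

# The vendor verifies with the sender's public key (in practice obtained from
# a certificate); verify() raises InvalidSignature if either the document or
# the signature was altered in transit.
private_key.public_key().verify(signature, purchase_order, pss, hashes.SHA256())
print("signature verified: integrity and origin authentication")
```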
[Music] In this lesson, let's explore advanced forms of cryptography, starting with the aforementioned elliptic curve cryptography. Elliptic curve cryptography uses rich mathematical functions based on points on an elliptic curve. The algorithm computes discrete logarithms of elliptic curves, which is different from calculating discrete logarithms in a finite field. The smaller, more efficient keys offer exceptional speed and strength; for example, a 256-bit elliptic curve key equals a 3072-bit conventional key, for example a key used with traditional Diffie-Hellman. Elliptic curve cryptography is used in digital signatures, key distribution, and encryption. It's excellent for mobile devices and IoT: it uses less processing power, it gets better security with a smaller key space, and it's also faster. There are two common public applications of elliptic curve cryptography. First, there's the elliptic curve digital signature algorithm, ECDSA. This introduces a variant of the digital signature algorithm (DSA) by utilizing elliptic curve cryptography; however, there exist some political and technical concerns with the usage of ECDSA. More popular are elliptic curve Diffie-Hellman and elliptic curve Diffie-Hellman ephemeral: ECDH and ECDHE. This is a key agreement protocol that's extremely popular. It allows two parties, each having an elliptic curve public/private key pair, to generate a shared secret key or keying material over an insecure channel. This shared secret may be used directly as a key, or it may derive another key; the key or the derived key can then be used to encrypt successive communications using a symmetric key cipher. Next, quantum computing. Personal computers use bits, ones and zeros, whereas quantum computers use qubits. Qubits are typically subatomic particles such as electrons or photons. Quantum computing derives its power from the fact that qubits can represent numerous possible combinations of 1 and 0 at the same time; this ability to simultaneously be in multiple states is called superposition. Post-quantum cryptography involves developing new cryptosystems that can be implemented using today's existing computers but will be resistant to attacks from tomorrow's quantum computers, which are much more powerful, much faster, and stronger. Post-quantum computing involves increasing the size of digital keys, developing more complex trapdoor functions, lattice-based cryptography, and supersingular isogeny key exchange. Quantum communications is the combination of quantum physics and computing. Quantum communication leverages the laws of quantum physics and quantum computing to protect data. Some organizations are transmitting highly sensitive data using QKD: quantum key distribution. QKD sends encrypted data as normal bits over the network, while the decryption key information is encoded and transmitted in a quantum state using the aforementioned qubits. These networks are theoretically ultra-secure. Homomorphic encryption helps us protect data in use, or data resident in volatile memory: data remains encrypted while still being processed. Cloud service providers like AWS, for example, can apply homomorphic encryption functions on encrypted data, for example in a Redis ElastiCache. This encryption commonly uses public/private key pairs, and it gets its results by using algebraic operations on the encrypted ciphertext.

[Music] One of the general truths about cryptography is that the main weakness of cryptosystems is in the key life cycle; obviously that introduces the human factor. So in this lesson we're going to look at a key management life cycle. Keys can be generated through the key manager, which could be software or hardware security modules managed by a key owner or custodian, and it can be done through a trusted third party; regardless, you're using a cryptographically secure random or pseudo-random bit generator. The keys, along with their attributes, will then be stored in the key storage database, which again could be an HSM, and must be encrypted by a master key. Other attributes or metadata include name, activation date, size, and instance, and those keys may be cryptographically hashed. A key can then be activated upon its creation, or set to be activated automatically or manually at a later date and time. After key generation we have key distribution and loading. The objective of distribution and loading is to install the new key into a secure cryptographic device, either manually or electronically; this could be into, let's say, a secure enclave on a mobile device. For manual distribution, keys must be distributed and loaded in key shares to avoid the full key being viewed in the clear. When symmetric keys are installed, it is recommended that they be encrypted by a public key or a key encryption key prior to being delivered. For deployment, the key should be deployed and tested for a certain time period to ensure that operations are successful, in order to avoid any potential data loss or theft. After key distribution and loading we have key backup and key storage. In order to recover a key that's been lost during its use, a secure backup copy should be made available. Backup keys can be stored in a protected form on external media: a CD or optical disc, a USB drive, a cryptocurrency hardware wallet (one that uses a USB cable or a camera), a hardware security module (HSM), or an existing traditional backup solution, either local or over the network. When a symmetric key or an asymmetric private key is backed up, it must also be encrypted in storage. It's also helpful to use mirroring or RAID arrays for high availability and redundancy. Another part of the key life cycle is the normal use and replacement of keys. The key management system should allow an activated key to be retrieved by authorized systems and authorized users, and the system should also effortlessly manage current and past instances of the encryption key. The key manager will replace a key automatically through a previously established schedule, or if it's suspected of compromise, for example an access key at a cloud service provider that's being used by programmatic users or service accounts. When replacing keys, the goal is to bring an extra key into active use by the system and to convert all stored secure data to the new key. Archival refers to offline, long-term storage for keys that are no longer in operation. Long-term storage can be accomplished with Gemalto hardware security modules or cloud-based HSMs. These keys usually have data associated with them that may be needed for future reference, such as long-term storage of emails; there may also be associated data or metadata in other external systems, which we call remnants. When archiving a key, it must be encrypted to add security, and before a key is archived, it should be proven that no data is still being secured with the old key.
The final phase of the life cycle is key disposal, or key disposition. All instances, or certain instances, should be completely removed. The end of life for a key should only occur after an adequately long archival phase, and after adequate analysis to ensure that loss of the key will not correspond to loss of any data or loss of any other useful keys. There are three ways to remove a key from operation: key destruction, key deletion, and key termination. The most effective is key destruction.

[Music] Key stretching is the process of lengthening symmetric keys to at least 128 bits. The initial key, the password or passphrase, is fed into an algorithm, and an enhanced key is produced after many iterations; this increases the time it takes to perform brute force attacks on a key. Common algorithms are bcrypt and PBKDF2. Key stretching can be done by password managers, software tools, and HSMs, or by using other protocols like WPA3 for wireless.
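Here's a one-liner view of key stretching with PBKDF2 using Python's standard hashlib; the iteration count shown is an illustrative choice, not a mandated value.

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # a random salt prevents precomputed (rainbow table) attacks

# PBKDF2 runs HMAC-SHA256 many times (600,000 iterations here) to "stretch"
# a short password into a 256-bit key and slow brute-force attempts down.
stretched_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(len(stretched_key) * 8, "bit key")  # 256 bit key
```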
Key escrow is where a third party has a copy of, or has access to, private keys; they're only allowed access under strict conditions, for example a court order. This could be cloud service providers, cryptocurrency exchanges, or other third-party entities that deal with symmetric keys and private and public keys, for example certificate authorities. Issues can arise with key escrow: the request-for-access process can be expensive or complicated, some entity has to authorize the legitimacy of a request, there's the risk of granting the access, and what's the vulnerability of the systems involved in the key escrow process? I've mentioned the term HSM, or hardware security module, several times. This is a tamper-proof, hardened device that provides crypto processing and protection of cryptographic keys and functions. Now realize the HSM can be a physical one-rack-unit or two-rack-unit device, but HSMs can also be installed in hypervisor situations; this allows for multi-tenancy, or abstracted HSMs (HSM as a service). HSMs involve partitioned administration and strict security domains. They can be used to apply corporate key use policies, for example to be compliant with GDPR, PCI DSS, Sarbanes-Oxley, and others. HSMs can be used in place of software crypto libraries and SSL/TLS accelerators. Key administrators can access HSMs either over a network or, more often, in an out-of-band method with direct connectivity from a management device, and HSMs can store both symmetric and asymmetric keys and be involved in every phase of the key management life cycle.

[Music] On the CISSP exam it's important to be familiar with public key infrastructure, or PKI, especially from a management standpoint. PKI is based on the concept of the trusted introducer that came from PGP; in other words, the trusted third party in a web of trust is a certificate authority. In its most basic form, PKI is a scalable method for securely distributing and revoking public keys on a global basis. There are many CAs, or certificate authorities. These are trusted introducers: companies like Comodo, GeoTrust, DigiCert, and Thawte. These organizations securely store, issue, and sign certificates, and in some situations they may actually generate public and private key pairs on behalf of clients. In the diagram, user A generates a public and private key pair, for example using RSA, so they have a private and public key. They will also get the public key of the certificate authority, which will be stored in some root storage on an operating system or in a browser store. The user's public key is stored in a digital certificate issued by the CA, and the CA will sign, or encrypt, that certificate with their private key. Certificates can now be exchanged over untrusted networks, as the certificates and public keys of entities can now be verified with the public key of a certificate authority. User A and user C can now conduct transactions. It's important, however, that early on, the exchange of digital certificates between user A and user C be done over a trusted or secure channel, for example transport layer security or IPsec. In addition, user A and user C don't necessarily have to have the root certificate of the same certificate authority, because there's a web of trust between CAs globally on the internet. Two public key algorithms will be involved: the one within the certificate, the subject's public key algorithm, for example some 160-bit elliptic curve mechanism, and the one that was used by the certificate authority to sign or encrypt the certificate, such as RSA 2048. Those two algorithms will be in the digital certificate, along with a serial number, a subject alternative name (for example an IP address or a fully qualified domain name), and other extensions. To offer better security, CAs have a hierarchical trust model. Realize that the certificate authority can be an enterprise CA within an organization, it could be a public CA like Comodo or GoDaddy, or you could install a CA on an edge device for a VPN gateway; there are several applications. Regardless, there is a hierarchy. The root CA provides certificates to intermediate CAs, if you have them; intermediate CAs provide certificates to users and other down-level intermediate CAs. The root could be online, or for security reasons it could be offline, or air-gapped. If it's online, it's connected to the network and it issues certificates over the network; if it's offline, it's disconnected and it issues certificates on removable media. Certificate chaining is very important. It's referred to as trust delegation, where each CA signs the public key of the CA level below. Alternatively, it's very common for CAs to cross-certify each other without a strict hierarchical relationship being in place; that goes back to earlier, when I said user A and user C don't necessarily have to have the root certificate of the same CA on their systems or in their browser store. The CA must be in the trusted store, but it's not possible to include all CAs. A chain of trust is established by an issued-to field and an issued-by field. In the diagram, the root CA issues a certificate to intermediate 1, then intermediate 1 issues one to intermediate 2. This will be in the certificate issued to the user, which can be a person, an organization, a system, or a device.
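Here's a toy Python sketch of walking that issued-to/issued-by chain of trust; it checks only the chain structure against a trust store and skips the signature verification a real client performs at every link.

```python
# Toy certificates: just issued-by links, no real cryptography.
certs = {
    "www.example.com": {"issued_by": "Intermediate 2"},
    "Intermediate 2":  {"issued_by": "Intermediate 1"},
    "Intermediate 1":  {"issued_by": "Root CA"},
    "Root CA":         {"issued_by": "Root CA"},  # self-signed root
}
trusted_roots = {"Root CA"}  # the operating system or browser trust store

def chain_is_trusted(subject: str) -> bool:
    """Follow the issued-by links until we reach a self-signed certificate,
    then check whether that root is present in the trust store."""
    while certs[subject]["issued_by"] != subject:
        subject = certs[subject]["issued_by"]
    return subject in trusted_roots

print(chain_is_trusted("www.example.com"))  # True
```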
If the primary goal of PKI is to securely and globally distribute public keys and certificates, the secondary goal is to revoke a certificate when it's no longer valid. The CRL, the certificate revocation list, is the original method: a list of certificates that are invalid, based on serial number. The CRL is issued by the certificate authority that granted the certificate, and the CA also determines how often it is updated, generated, and published. This can be done at defined intervals, for example every four hours, or pushed immediately, and the CRL can also be downloaded by clients regularly, but not in real time. Realize that suspending a certificate is not the same as revoking a certificate. Suspension does not place the serial number on the certificate revocation list, so if someone loses a device or goes on a leave of absence, you can suspend the certificate as opposed to revoking it. A newer method, built on the CRL concept, is Online Certificate Status Protocol, or OCSP. This is a method for a web browser, for example, to determine the validity of an SSL/TLS certificate by verifying with the vendor of the certificate. It's an online transactional database built on the concept of the CRL, using the serial numbers, generated and published immediately. Clients can query the database anytime, although many, such as web browsers, will bypass this process. OCSP improves security, but it can cause websites to load more slowly; also, not all vendors and websites support OCSP, because of the impact on interoperability or for financial or competitive reasons. OCSP stapling is a method for quickly and safely determining whether a TLS server certificate is valid. Stapling involves the web server downloading a copy of the vendor's response, which it can then deliver directly to the browser or some other web client; the web server provides the browser information on the validity of its own certificates instead of the browser requesting the information from the certificate's vendor. To provide OCSP stapling, a status_request extension in TLS 1.2 or higher is used by the web client to indicate support for this feature, and the TLS server will send fresh certificate information in the TLS handshake protocol. Stapling supports only one OCSP response and is used to check the status of the server certificate only.
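Here's a toy revocation check in Python that captures the CRL idea, including why the publication interval matters; the serial numbers and the four-hour window are invented for the example.

```python
from datetime import datetime, timedelta

# A toy CRL: revoked serial numbers plus when this list was published.
crl = {"published": datetime(2021, 6, 1, 8, 0),
       "revoked_serials": {"4F:2A:91", "88:01:C3"}}
MAX_AGE = timedelta(hours=4)  # the CA's chosen update interval

def check_certificate(serial: str, now: datetime) -> str:
    """Accept a certificate only if the CRL is fresh and the serial is clean."""
    if now - crl["published"] > MAX_AGE:
        return "crl stale: fetch a fresh copy (or query OCSP in real time)"
    return "revoked" if serial in crl["revoked_serials"] else "good"

print(check_certificate("4F:2A:91", datetime(2021, 6, 1, 10, 0)))  # revoked
print(check_certificate("AA:BB:CC", datetime(2021, 6, 1, 10, 0)))  # good
```

The staleness branch is exactly the gap OCSP and OCSP stapling were designed to close: a CRL is only as current as its last publication.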
And finally, we have certificate pinning. This is a security method for associating a service with certificates and public keys, and it offers three key improvements. It reduces the attack surface by letting owners pin the CAs that are allowed to issue certificates for their domain names; realize that any CA can issue a certificate for any domain, and for example, Google pre-loaded public key pinning for most of their sites starting in Chrome 13. Pinning provides key continuity without relying upon public certificate authorities. And pinning can be used for authentication, using a secure channel, for example transport layer security.

[Music] Cryptology is made up of two disciplines: cryptography, which is the creation and generation of cryptosystems, and cryptanalysis. Cryptanalysis is the study and practice of exploiting weaknesses in communication protocols and cryptosystems. Most known methods of cryptanalysis, like brute force, are ineffective in most situations and on most modern cryptographic algorithms, based on the facts that there's not enough time, there's a lack of computing power (with the exception of quantum computing), and there's good key management and good life cycles, thanks to automation, orchestration, and cloud-based key management services and HSMs. Most weaknesses are found in the implementation and key management, as opposed to the algorithm itself. Classical cryptanalysis typically involves two disciplines: mathematical analysis, which involves analytical attacks that exploit the internal structure of the encryption methods, and brute force analysis, which treats the algorithm as a black box and tries all possible keys in the key space. We also have implementation attacks. The most common implementation attack is a side-channel attack, where one would measure the electrical power consumption of a processor operating on a secret key; the power trace is used to determine zeros and ones, and ultimately gives information about the plaintext or keys. To be successful with an implementation attack, you typically need physical access, let's say to a smart card or an HSM, or you need to be in proximity to a wireless access point. Another form of cryptanalysis is to perform social engineering. This involves the exploitation of human weaknesses by leveraging the ability to trick, coerce, or extort a subject for information. It can involve spoofing, hoaxing, shoulder surfing, dumpster diving, phishing attacks, and more. Most attackers will attempt to find vulnerabilities through social engineering before using other cryptanalytic methods, which are more time-consuming and costly from a resource standpoint. For example, let's look at attacking the RSA protocol, the most common commercial asymmetric key cryptosystem. You can do a protocol attack to exploit weaknesses in the way that RSA is used; however, padding and proper implementation can counter protocol attacks. Then you have mathematical attacks; the best example would be to use quantum computing to factor the modulus. Although 1024-bit RSA is adequate, a modulus of 2048 to 4096 bits is highly recommended in all new security implementations today, or consider using elliptic curve. Then there are side-channel attacks, where information about the private key is leaked via a physical channel, such as power consumption (simple power analysis, or SPA) and timing behavior.
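That "not enough time" argument is easy to quantify. Here's a back-of-the-envelope Python calculation, assuming an optimistic guess rate, that shows why keys under 64 bits are unacceptable while 128-bit and larger keys are far out of brute-force range.

```python
# Back-of-the-envelope: why brute force fails against modern key sizes.
guesses_per_second = 1e12          # an optimistic trillion keys per second
seconds_per_year = 60 * 60 * 24 * 365

for bits in (64, 128, 256):
    keyspace = 2 ** bits
    years = keyspace / guesses_per_second / seconds_per_year / 2  # average case
    print(f"{bits}-bit key: ~{years:.2e} years on average")

# 64-bit: about 0.3 years, which is why keys under 64 bits are unacceptable;
# 128-bit and up: on the order of 1e18 years, many times the age of the universe.
```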
[Music] In this course on practical cryptography, you learned about cryptographic methods; digital signatures and digital certificates; key life cycles, key management, and public key infrastructure (PKI); and advanced cryptography and cryptanalytic attacks. Coming up in the next course, you'll learn about controlling physical and logical access, access control architectures, and access models.

[Music] Now, as we dive into this course, I want to emphasize that course 12, coming up later, is dedicated to site and facility security. What we're going to mention here in our first lesson is the importance of remembering that in identity and access management, it's not just logical access that needs identity and access control, but physical access as well. So we're going to touch on that real quickly here. Remember that we're going to be dealing with lighting and cameras, and often those work together, because if our cameras are functioning at night, whether they're closed-circuit television or webcams, we want to make sure we have lighting along with them, the right kind of lighting, and we want to make sure we have no dead spots in our cameras or our lighting. We also have barricades and bollards: the barricades may be out at the perimeter, for example a gate, a wooden arm that comes down, maybe a metal gate, tire shredders, and bollards in front of the entry of our facility. We have different types of fencing and different types of gates and cages. We may have security guards within our facility, or maybe at a guard gate at the edge; they may be contractors or employees, and they may be armed or unarmed. Along with security guards we'll have signage, signs being a key deterrent control. Our most valuable assets, if they fit, will be in safes and secure enclosures, secure closets for example. We need to protect all of our cable runs and all of our distribution frames throughout our building or facility. We want to air-gap some of our most critical servers, for example. You may have mantraps; we'll talk more about those in course 12. There are locking mechanisms and biometrics, and the tokens, cards, and badges that we use to control physical access. Those same badges can be used to control physical access to different areas, elevators, or different buildings or floors, and we can also use those same cards or tokens to get access to systems like our personal computers, workstations, and servers. Various physical controls will generate different types of alarms; sensors will send alarms of various types to email, to SMS, and to management stations. We may have cable locks, locks on the docking stations for our laptops, and screen filters to block over-the-shoulder reconnaissance, and of course fire prevention, detection, and suppression. This is just a quick rundown of techniques and controls that we can use to get physical security. And then, of course, there's logical access. We'll have IPsec and TLS VPN gateways running on routers, firewalls, servers, or specialty appliances; they may also be virtual VPN gateways at a cloud service provider or a SaaS provider. We can use IEEE 802.1X port-based network access control at layer 2. We can have identity and access management with multi-factor authentication: something you know, something you have, and something you are, perhaps a biometric. We can use identity providers such as Active Directory with Kerberos. In addition, our identity providers may also have federated access to other service providers, for example using SAML 2.0, or OAuth 2.0 and OpenID Connect. And we can control logical access with access keys and secure logical tokens, for example JSON web tokens. In the next lesson we'll begin our journey through identity and access management.
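Here's a minimal sketch of issuing and verifying one of those JSON web tokens with the third-party PyJWT package; the claims and the shared HMAC secret are invented for the demo.

```python
import jwt  # third-party PyJWT package

secret = "shared-hmac-secret"

# The identity provider issues a signed token with claims about the subject.
token = jwt.encode({"sub": "alice", "role": "specialist"}, secret, algorithm="HS256")

# The service provider verifies the signature before trusting any claim;
# decode() raises an exception if the token was tampered with.
claims = jwt.decode(token, secret, algorithms=["HS256"])
print(claims["sub"], claims["role"])  # alice specialist
```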
[Music] In this lesson we're going to explore cloud and third-party identity models. When we're dealing with cloud providers and third parties, more often than not today we're using hypervisor technology, or virtualization, so before we dive into cloud models, let's quickly define the different types of hypervisors. First we have type 1 hypervisors; we also call these native or bare metal. Here you install the hypervisor software directly onto the hardware: for example, you would format your solid-state drive and install your open-source hypervisor, like KVM or Xen, or a proprietary hypervisor. Then on that hypervisor you'll install one or more guest operating systems onto the host, and possibly applications. Now, the goal of the hypervisor is not just to allow multiple operating systems and applications to run simultaneously, but also to act as a kind of supervisor as all the different operating systems and applications share the same finite amount of resources on the underlying hardware: the disk space, the CPU cycles, and the RAM, for example. Type 1 hypervisors are the type that run at your cloud service providers. A type 2 hypervisor is one you might run on your personal computer: you have the underlying hardware of a workstation or a laptop, then a host operating system like Linux or Windows, and then you install your hypervisor software, for example Oracle VirtualBox or VMware Workstation. Notice that here the host operating system runs other applications besides the hypervisor, and the hypervisor, of course, can have multiple guest operating systems and multiple applications. You won't get the same kind of performance that you get with type 1 hypervisors, but they can be very convenient for doing penetration testing, creating sandbox environments, or testing out operating systems and applications before you put them into production. Virtualization has some vulnerabilities, for example VM sprawl. This is when the number of VMs overtakes the administrator's ability to manage them and the available resources. It can introduce problems like using unlicensed software or downloaded apps; it could be a digital rights management issue or a data leakage issue, and it may be against the AUP to even install these type 2 hypervisors on corporate systems. To avoid VM sprawl, we want to enforce a strict process for deploying VMs, keep a library of standard VM images that represent the only authorized images that can be used (if we're using hypervisors at all), archive or recycle underutilized VMs, and use a virtual machine life cycle management tool or a cloud service provider managed service. There's also VM escape. This is a huge issue for identity and access management: a serious threat where a running process in the guest interacts directly with the host operating system. To protect against VM escape, we want to patch our VMs and VM software on a regular basis, install only what we need on the host and the VMs, install verified and trusted applications only (for example, only digitally signed applications), and have strong IAM access controls and passwords, preferably with multi-factor authentication. In this section of the exam, they also want you to be aware of the different cloud computing service types; this relates directly to identity and access management. According to NIST, IaaS is the capability provided to the consumer to provision processing, storage, networks, and other fundamental computing resources, where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer with IaaS does not manage or control the underlying cloud infrastructure, but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components, for example firewalls. When using a cloud provider, you'll be leveraging regions all over the world, and within those regions you have zones, or availability zones. Those zones are miles or tens of miles apart, and within those zones you have two or more data centers. Cloud providers also have edge locations in metropolitan areas all over the world for content delivery networking. At Amazon Web Services, if you're using infrastructure as a service, they will provide the components at the bottom: the regions, the availability zones, and the edge locations. They'll provide their own identity and access management, they'll provide endpoints to connect to services over their cloud, and of course their foundation services: compute, storage, databases, and networking. Everything above that is the responsibility of the customer, including the customer identity and access management, either using their IAM components or your own single sign-on solution with your own identity provider.
Platform as a service, according to NIST, is the capability provided to the consumer to deploy onto the cloud infrastructure consumer-created or acquired applications, created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly the application hosting environment configurations. Common platform as a service offerings would be development and software development kit platforms for Java, PHP, Python, and more; container services for things like Docker and Kubernetes orchestration; managed and fully managed relational and document databases; managed security and threat modeling services; and single sign-on, machine learning, artificial intelligence, internet of things (IoT), blockchain, media services, and more. According to NIST, here's SaaS, software as a service: the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. Think Office 365. The applications are accessible from various client devices through a thin client interface, such as the web browser. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Common SaaS services would be customer relationship management (CRM); human resources and workplace tools, such as a virtual help or service desk; finance, sales, billing, and marketing; email, collaboration, and cloud storage; business intelligence; and security services. We also have cloud provider models. The private model is basically deployed in a sandbox within an organization; a private cloud could be at the provider or it could be on-premises. In the public model, the cloud is deployed by a provider for customer access and consumption, for example using Amazon Web Services, Google Cloud Platform, or Microsoft Azure. A community cloud is deployed by a consortium in a certain sector, for example insurance, medical, or government. And a hybrid cloud is simply a combination of public and/or private and/or community.

[Music] In this lesson we're going to look at fundamental concepts of security models. Security models are used to decide which subjects can access particular objects; they are the specification of a security policy. Models are typically implemented by enforcing integrity, confidentiality, origin authentication, and non-repudiation controls. The designer will determine how the models will be used and integrated into specific designs, and this may be done by an individual, a committee, or a security team. Keep in mind that security is best enforced with a multi-level or hierarchical security system. Let's look at the Bell-LaPadula model. This is the first mathematical model with a multi-level security policy used to define the concept of a secure state machine and models of access, and it outlined rules of access. A secure state machine will know every state and all the transitions between those states as related to subjects and objects. The Bell-LaPadula model focuses on ensuring that subjects with different clearances are properly authenticated, by having the necessary security clearance, need to know, and formal access approval, before accessing objects under different classification levels. All mandatory access control, or MAC, systems are based on the Bell-LaPadula model because of its multi-level security, and it's been adopted by most government agencies.
Bell-LaPadula is a state machine model and is used to control access in complex systems. The state of a machine is captured to verify the security of a system, as it defines the behavior of a set number of states, the transitions between those states, and the subsequent actions that can take place. A particular state typically consists of all current permissions and instances of subjects accessing objects. If a subject can access objects only in adherence with the security policy, then the system is considered secure. Here's a simple example of a state machine: state one is opened, state two is closed. In state one our action is to open the door; in state two our action is to close the door. Notice the transition conditions: the transitions between the opened and closed states. The same thing applies to subjects and objects and the transitions between their states in a MAC model. Bell-LaPadula has a rule set. First is the simple security rule: a subject at a given security level, for example an agent at a government agency, cannot read data that resides at a higher security level. That's the "no read up" rule. The star property, or asterisk rule, states that a subject at a given security level cannot write information to a lower security level; that's the "no write down" rule. The strong star property rule states that a subject who has read and write capabilities can only perform those functions at the same security level, nothing higher and nothing lower. And then there's the tranquility principle: subjects and objects cannot change their security levels once they've been instantiated. Keep in mind that Bell-LaPadula is a confidentiality model, and there can be exceptions to these rules, as long as they're decided upon by the security team or the committee before the model is implemented. The Biba model is an integrity model. It was developed after Bell-LaPadula, and it uses a lattice of integrity levels. Unlike Bell-LaPadula, it has the simple integrity rule, or "no read down": this rule states that a subject cannot read data from a lower integrity level. The star integrity rule, "no write up," maintains integrity by stating that a subject cannot write data to an object at a higher integrity level. The invocation property states that a subject cannot invoke, or call upon, a subject at a higher integrity level. Biba, like Bell-LaPadula, is also an information flow model, because both are most concerned about data flowing from one sensitivity level to another, for example from secret to the level top secret above it, or from secret to the level SBU (sensitive but unclassified) below it. An information flow model observes information flows in a state machine. Data is considered in individual discrete compartments based on classification and need-to-know principles. Subject clearance overrules the object classification, and the subject security profile must contain one of the categories listed in the object label; that enforces need to know. For example, the Bell-LaPadula model prevents information from flowing from higher security levels to lower security levels, while the Biba model prevents information from flowing from lower integrity levels to higher integrity levels. Often an information flow model, and other models, will be expressed in lattices or in matrices. For example, here we can see that there's a bi-directional information flow between A and C, but only a unidirectional flow from B to C, from C to D, and from A to D; there is no flow whatsoever between A and B. Any information flow outside of this model is referred to as a covert channel.
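Here's a tiny Python sketch of the first two Bell-LaPadula rules; it models only clearance levels, ignoring need-to-know categories and the strong star and tranquility rules.

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security rule: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Star property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "top secret"))    # False -- no read up
print(can_write("secret", "confidential")) # False -- no write down
print(can_read("secret", "confidential"))  # True  -- reading down is allowed
```

Flipping both comparison operators turns this confidentiality model into Biba's integrity rules (no read down, no write up), which is a handy way to remember the two models on the exam.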
There's also the Clark-Wilson integrity model. This was developed after Biba and deals with information integrity. In this model, objects, the constrained data items (CDIs), can be modified by subjects using read-write operations, and integrity verification procedures (IVPs) are programs that run periodically to check the consistency of the CDIs against the integrity rules, which are usually defined by vendors. The integrity goals of the Clark-Wilson model are to prevent unauthorized users from making modifications, to ensure separation of duties so that authorized users are prevented from making improper modifications, and to ensure well-formed transactions that maintain internal and external consistency.

Let's look at access control methodologies now, starting with the very popular role-based access control, otherwise known as RBAC. With RBAC, access decisions typically rely on organizational charts or the established roles and responsibilities of individuals in your organization, or perhaps on a location or user base in a directory service, for example Active Directory. The role is typically set based on evaluating the essential objectives and architecture of the enterprise. An RBAC framework is determined by security administrators and officers, or possibly server admins, but it's not at the discretion of the user. It's often built into the framework or the platform; for example, in a relational database management system you may have pre-established roles. Another example is a medical center, where the different roles may be doctor, RN, PA (physician assistant), specialist, technician, attendant, receptionist, and others. Each one of these roles can have rights and permissions assigned to it; then, when a new specialist is hired at the medical center, they are immediately placed into that group or container and inherit those rights and permissions. We're seeing companies with a lot of turnover, or lots of transient, temporary, and contractor users, move from a discretionary access control model, which we'll see in a moment, to a role-based model due to its flexibility and easier management.

The advantages of role-based access control are that it's easy to implement and control, and often the roles can be assigned using written security policy. Also, RBAC is built into many security frameworks, such as database management systems, and into IAM (identity and access management) at cloud providers like AWS and Google Cloud Platform, and it often easily aligns with accepted security principles. Some disadvantages, however: scope creep or privilege creep can take place over time, so the roles and access must be rigorously audited; also, if there are multi-tenancy requirements, you need capabilities like Active Directory organizational units or cloud provider organizational units to group up multiple accounts.
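Here's a minimal Python sketch of the RBAC pattern using the medical center example from this lesson. The role names and permissions are hypothetical assumptions; the point is that users get rights only by being placed into a role, never directly.

# hypothetical role-to-permission mapping for the medical center example
ROLE_PERMISSIONS = {
    "doctor":       {"read_chart", "write_chart", "prescribe"},
    "rn":           {"read_chart", "write_chart"},
    "receptionist": {"schedule_appointment"},
}

# users inherit permissions solely through role membership
USER_ROLES = {
    "dr_kim":         {"doctor"},
    "pat_front_desk": {"receptionist"},
}

def has_permission(user, permission):
    """Grant access only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("dr_kim", "prescribe"))            # True
print(has_permission("pat_front_desk", "write_chart"))  # False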
Our next access control methodology is rule-based access control. Rule-based access control uses acronyms like RBAC, or RB-RBAC to distinguish it from role-based access control. Rule-based access control often dynamically assigns roles to users based on criteria defined by the custodian or the system administrator. A rule could also be a time-based access control list; for example, if Network Time Protocol is in use, you can control access to a particular drive object during business hours only, say 6 a.m. to 6 p.m., Monday through Friday. It's common for infrastructure devices like routers, switches, firewalls, and appliances to use rule-based access controls. So, for example, you could have a list of numbered entries where you permit TCP-based HTTP, HTTPS, and FTP control and data traffic from any network to servers 10.10, 10.11, and 10.12. As a packet comes inbound to the router, the router examines the metadata, the header information of IP and TCP, and compares that information to the numbered list. Once there's a match, it takes the action, for example permit, and then moves on to the next datagram or packet. If none of the rules match, at the end of the list there is an explicit "deny any" with a log entry, so the deny will be sent to a syslog server. (There's a sketch of this first-match processing in code at the end of this lesson.)

You can also use rule-based access controls at cloud providers. Here's an example of a network ACL (network access control list) at Amazon Web Services. You can see the types of protocols and services that you can match on. Some of these are standard ports, like POP3 (110), IMAP (143), and LDAP (389); some of these, however, are specific to the cloud provider, for example Redshift using port 5439 for data warehousing. You also determine port ranges, the source IP address prefix in CIDR format, and your allow or deny action. Again, rule-based access controls are a static, or stateless, access control mechanism.
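As promised, here's a minimal Python sketch of that first-match, top-down processing. The rule fields and rule set are simplified assumptions of my own; a real router ACL matches on much richer IP and TCP header data.

# hypothetical numbered rule list: evaluated top-down, first match wins,
# with an explicit "deny any" (plus a log entry) when nothing matches
ACL = [
    {"number": 10, "protocol": "tcp", "dst_port": 80,  "action": "permit"},
    {"number": 20, "protocol": "tcp", "dst_port": 443, "action": "permit"},
    {"number": 30, "protocol": "tcp", "dst_port": 21,  "action": "permit"},
]

def evaluate(packet):
    """Compare the packet's header fields to each numbered entry in order."""
    for rule in ACL:
        if (packet["protocol"] == rule["protocol"]
                and packet["dst_port"] == rule["dst_port"]):
            return rule["action"]  # first match wins; move on to the next packet
    print(f"syslog: denied {packet}")  # stand-in for exporting the deny to syslog
    return "deny"

print(evaluate({"protocol": "tcp", "dst_port": 443}))  # permit
print(evaluate({"protocol": "udp", "dst_port": 53}))   # deny (explicit deny any)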
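And if you're curious what creating such an AWS network ACL entry looks like programmatically, here's a sketch using boto3, the AWS SDK for Python. The ACL ID, region, and CIDR prefix are placeholders, and the rule itself, allowing Redshift on 5439, is just an example.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region chosen arbitrarily

# hypothetical inbound rule: allow redshift (tcp 5439) from one source prefix
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder network acl id
    RuleNumber=100,                        # lower rule numbers are evaluated first
    Protocol="6",                          # ip protocol 6 = tcp
    RuleAction="allow",                    # allow or deny
    Egress=False,                          # False means an inbound (ingress) rule
    CidrBlock="203.0.113.0/24",            # source prefix in cidr format
    PortRange={"From": 5439, "To": 5439},  # redshift data warehousing port
)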
[Music]

Next we have mandatory access control, or MAC. Now, we actually covered MAC models earlier in the course when we talked about Bell-LaPadula, Biba, and Clark-Wilson; those are all MAC models. MAC is a strictly non-discretionary methodology: it secures data by assigning sensitivity labels and then comparing those labels to the user's level of sensitivity. It is appropriate for extremely secure systems, such as multilevel secure military applications or government agencies. These organizations use classification levels like top secret, secret, and confidential. Its main advantage is that the MAC model is based on need to know, and it's strictly adhered to, so scope creep or privilege creep is minimized. All MAC systems are based on the Bell-LaPadula model for confidentiality; again, this was the first mathematical model with a multilevel security policy, used to define the concept of a secure state machine, modes of access, and outlined rules of access. A MAC model is non-discretionary: there is no owner of an object who can determine permissions or file sharing on that object. The state machine is determined by a committee or a security team before any subjects begin to access any objects.

[Music]

One of the most common models used, especially in commercial environments, is discretionary access control, or DAC. In this model, DAC restricts access to data and systems based on the identity of users and/or their group membership, often in some type of directory service, for example Active Directory. Access results are usually based on authorization granted to a user through the various forms of credentials presented at the time of authentication; in Active Directory, for example, that would often be a Kerberos ticket. In most DAC implementations, the owner of the resource can actually change its permissions at their discretion, or grant access or shares to that object. In addition to share permissions, a DAC framework can deliver the capability for granular access control. Advantages of DAC are that it's easy to implement and operate, and often very intuitive; it aligns with the least privilege security principle when properly implemented; and the object owner can have a lot of control over granting access, which can encourage interoperability and productivity, especially in a flatter or projectized organization. Some disadvantages, however: documentation of the access granted must be strictly maintained, in other words an actively changing configuration management database, and the discretionary access model has the highest propensity for privilege or scope creep compared to all of the other models.

[Music]

In this short lesson we'll talk about a very important and popular form of access control known as ABAC, attribute-based access control. ABAC goes further by controlling access to entities by weighing rules against the attributes or characteristics of the subject, its actions, and the actual request environment; for example, is it in a VPN or not, is it local access or remote access? ABAC relies upon the evaluation of people's characteristics and the attributes of IT components, for example what you are accessing and when. It can use heuristic engines, and it can take in environmental factors and other situational variables. Often we use advanced IAM systems, such as Cisco's Identity Services Engine and other tools, to create authorization profiles based on a wide variety of different variables. And the beauty of ABAC systems is that they can enforce both discretionary access control (DAC) and strict mandatory access control models; in fact, we're starting to see more ABAC being used in government and military MAC implementations.

[Music]

Next, let's talk about another emerging, modern method for access control known as risk-based access control. This is also referred to as risk-adaptable access control, or RAdAC. Risk-based access control considers the obstacles that traditional access control approaches present to the sharing of information. It's a model that seeks to imitate real-world decision making by considering operational needs and security risks together with every access control decision. RAdAC recognizes that situational conditions will drive the relative weight of these two factors when authorizing access. RAdAC can support extremely restrictive policies as well as those that offer broader sharing, with added risk, under specific conditions.

Here we see a possible flowchart (there's a small code sketch of this flow at the very end, after the course recap). Let's assume there's an access request. First, in step 1, we determine the security risk, typically using some type of quantitative mechanism like Open FAIR (Factor Analysis of Information Risk), perhaps with PERT and Monte Carlo simulations. Then, in step 2, we determine if the security risk is acceptable based on our policy. If it is acceptable, we go to step 3, where the policy requires verification of operational need, and then on to step 5, where we assess the operational need. If the security risk in step 2 is not considered acceptable, we go to step 4, where the policy may allow operational need to override the security risk; if it can, we go to step 5. If, in step 4, the policy doesn't allow us to override the security risk, then we deny access and go to post-decision processing. From step 5, having assessed the operational need, we move to step 6 and determine if the operational need is sufficient per policy. If it's not, we deny access; if it is, we grant access. Notice that regardless of whether we grant or deny access, there is post-decision processing, which leads to continual improvement, or lessons learned.

[Music]

In this course, Identity and Access Management Principles, you learned about controlling physical and logical access, virtualization, cloud service types and deployments, security models, and access control models. In the next course we'll explore authentication and authorization protocols, provisioning and deprovisioning entities, and implementing identity management (IdM) and multifactor authentication.
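And as promised, before we close out, here's a minimal Python sketch of the RAdAC decision flow from the flowchart above. The risk scoring, thresholds, and policy flags are hypothetical stand-ins for a real quantitative model like Open FAIR, not part of any official RAdAC specification.

# hypothetical policy knobs
RISK_THRESHOLD = 0.3        # stand-in for a policy-defined acceptable risk level
ALLOW_NEED_OVERRIDE = True  # step 4: can operational need override the risk?
REQUIRE_NEED_CHECK = True   # step 3: does policy require verifying operational need?
NEED_THRESHOLD = 0.5        # step 6: hypothetical sufficiency bar for need

def post_decision(result):
    """Post-decision processing happens on both the grant and deny paths."""
    print(f"post-decision processing: logged '{result}' for lessons learned")
    return result

def radac_decision(security_risk, operational_need):
    """Walk the radac flow: assess risk first, then operational need, then decide."""
    # step 2: is the security risk acceptable per policy?
    if security_risk > RISK_THRESHOLD and not ALLOW_NEED_OVERRIDE:
        # step 4 failed: policy does not let operational need override the risk
        return post_decision("deny")
    # steps 3 and 5: verify and assess operational need if policy requires it
    if REQUIRE_NEED_CHECK and operational_need < NEED_THRESHOLD:
        # step 6 failed: operational need is not sufficient per policy
        return post_decision("deny")
    return post_decision("grant")

radac_decision(security_risk=0.6, operational_need=0.9)  # risky, but need overrides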